1. 09 Apr 2021 (2 commits)
  2. 07 Apr 2021 (1 commit)
  3. 06 Apr 2021 (1 commit)
  4. 02 Apr 2021 (1 commit)
    • support save/load single tensor (#31756) · 43367e4b
      Committed by WeiXin
      * support save/load single tensor
      
      * compatibility modification according to unittest
      
      * Some Python 2.7 environments don't have the 'copyreg' module
      
      * Handle a syntax error.
      
      * Dealing with compatibility problems on Mac.
      
      * edit unittest to improve coverage.
      
      * Modify the code according to the review comments
      
      * Reduce redundant code.
      
      * support for static graph loading dygraph state_dict
      
      * edit code according to CI
      
      * edit unittest
      
      * delete redundant file
      
      * edit code according to comments
      
      * edit English doc
      
      * get/set_tensor->get/set_value; return_numpy=False
      
      * edit unittest
      
      * polish code.
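      
      A minimal usage sketch of the single-tensor save/load path added here, assuming the post-PR paddle.save/paddle.load behavior; the file name is illustrative:
      ```python
      import paddle
      
      t = paddle.to_tensor([1.0, 2.0, 3.0])
      
      # Save a single tensor directly instead of a whole state_dict.
      paddle.save(t, "single_tensor.pdtensor")
      
      # Load it back; return_numpy=False keeps the result a paddle.Tensor.
      loaded = paddle.load("single_tensor.pdtensor", return_numpy=False)
      print(loaded.numpy())
      ```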
      43367e4b
  5. 01 Apr 2021 (3 commits)
    • Support control flow in DataParallel (#31625) · 8460698b
      Committed by ShenLiang
      * support control flow
      
      * support sync_parameters_buffers
      
      * fix the bug of sparse embedding
      8460698b
    • add custom init grad for backward function (#31540) · 83b953f5
      Committed by chentianyu03
      * add custom init grad for backward function
      
      * handle the case when the grad_tensor is None
      
      * fix the args type error on windows platform
      
      * modify the args order and doc
      
      * format code
      
      * add grad_tensor to xpu
      
      * modify the grad_tensor type check
      
      * add paddle.backward api to support multi tensors gradient compute
      
      * add paddle.autograd module and backward api
      
      * change tensor.backward func args
      
      * modify tensor backward api
      
      * remove create_graph input args
      
      * add doc and example code for backward api
      
      * throw an error when the same tensor is passed more than once
      
      * modify test Init func args
      
      * modify the execute.Init func args in test files
      
      * add paddle.autograd package in setup.py.in
      
      * modify error msg, remove _run_backward method in class Tensor
      
      * add test cases for backward api
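      
      A brief sketch of the custom initial gradient usage this PR enables, assuming the Tensor.backward(grad_tensor=...) signature described in the commit messages above:
      ```python
      import paddle
      
      x = paddle.to_tensor([1.0, 2.0, 3.0], stop_gradient=False)
      y = x * x
      
      # Seed the backward pass with a custom initial gradient instead of all ones.
      init_grad = paddle.to_tensor([1.0, 0.5, 0.1])
      y.backward(grad_tensor=init_grad)
      print(x.grad)  # expected: 2 * x * init_grad
      
      # Multi-tensor variant via the new autograd backward API (assumed form):
      # paddle.autograd.backward([y1, y2], grad_tensors=[g1, g2])
      ```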
      83b953f5
    • Refactor and simplify hook design & add Tensor.register_hook API (#31775) · dbeb3ea4
      Committed by Chen Weihang
      * refactor and simplify hook design
      
      * fix reducer add hook error
      
      * add Tensor.register_hook basic impl
      
      * refine prepare data impl
      
      * revert prepare data change
      
      * support register_hook for Tensor
      
      * add hook test in model
      
      * polish tests and doc example
      
      * fix double grad test failed
      
      * remove reduce hook func
      
      * fix set empty error
      
      * polish code by comments
      
      * change reduce_hook to mutable_hook
      
      * remove useless tmp_ins
      
      * fix shape code format error
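      
      A short usage sketch of the new Tensor.register_hook API; the hook receives the gradient and may return a modified one (the removable-handle detail is an assumption):
      ```python
      import paddle
      
      x = paddle.to_tensor([1.0, 2.0, 3.0], stop_gradient=False)
      
      # The hook runs when x's gradient is computed and can replace it.
      helper = x.register_hook(lambda grad: grad * 2)
      
      y = (x * x).sum()
      y.backward()
      print(x.grad)  # gradient of sum(x*x) is 2*x, doubled by the hook
      
      helper.remove()  # assumed: the returned helper can unregister the hook
      ```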
      dbeb3ea4
  6. 31 Mar 2021 (1 commit)
  7. 24 Mar 2021 (1 commit)
  8. 11 Mar 2021 (2 commits)
  9. 04 Mar 2021 (3 commits)
    • 522c91ec
    • Fix comment (#31424) · c40b98e0
      Committed by Huihuang Zheng
      Fix wrong code comment
      c40b98e0
    • [Dy2stat] Fix Read-Only Attribute as while_loop Output (#31415) · 6bf02a12
      Committed by Huihuang Zheng
      Fix Read-Only Attribute as while_loop Output:
      
      Usually, our convert_while_loop call looks like:
      ```
          [a, b, c] = paddle.jit.dy2static.convert_while_loop(
                  condition_name, body_name, [a, b, c])
      ```
      where a, b, c are in loop_var_names.
      
      However, if loop_var_names contains a property such as foo.x, we cannot
      assign to that attribute as an output of convert_while_loop, because a Python
      property can be a read-only attribute. To handle this case, we replace the
      attributes that are outputs of convert_while_loop with generated variables;
      then, if we find at runtime that the attribute is not read-only, we assign
      to it. The created statements look like:
      ```
          [a, b, __attribute_variable_1] = paddle.jit.dy2static.convert_while_loop(
                  condition_name, body_name, [a, b, foo.x])
          if not isinstance(getattr(type(foo), 'x', None), property): foo.x = __attribute_variable_1
      ```
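      
      A standalone sketch of the generated guard, with illustrative names in place of the real generated identifiers:
      ```python
      class Foo:
          @property
          def x(self):            # read-only: no setter defined
              return 0
      
      foo = Foo()
      new_value = 42              # stands in for __attribute_variable_1
      
      # Assign back to foo.x only when it is not a (read-only) property.
      if not isinstance(getattr(type(foo), 'x', None), property):
          foo.x = new_value       # skipped here, avoiding an AttributeError
      ```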
      6bf02a12
  10. 26 Feb 2021 (1 commit)
  11. 24 Feb 2021 (1 commit)
  12. 22 Feb 2021 (1 commit)
    • [Dy2stat] Refactoring tensor_shape_transformer.py to Fix Change after Assign Bug (#31082) · cf43a321
      Committed by Huihuang Zheng
      **Problem**
      In our old shape transformer logic, if a user writes:
      ```
      s = tensor.shape
      ...
      y = paddle.some_api(s)
      ```
      Dy2stat will change it to
      ```
      ...
      y = paddle.some_api(convert_var_shape(tensor))
      ```
      However, it causes a fatal bug if the user changes the shape of `tensor` after the assignment. For example:
      ```
      s = tensor.shape
      ...
      tensor = paddle.some_change_shape_api(tensor)
      ...
      y = paddle.some_api(s)
      ```
      Then Dy2stat gets a wrong result because the code is translated into:
      ```
      tensor = paddle.some_change_shape_api(tensor)
      ...
      y = paddle.some_api(convert_var_shape(tensor)) # tensor shape has been changed, not origin `s` value
      ```
      
      **Solution Logic**
      
      This cannot be solved in the old logic, so I refactored the tensor_shape_transformer logic. Now we use `s` to store the shape attribute and generate a variable `s__STATIC_CONVERT_VAR_SHAPE_SUFFIX` to store the static shape API result `shape(tensor)`:
      ```
      s = tensor.shape
      ...
      y = paddle.some_api(s)
      ```
      Dy2stat will change it to
      ```
      s = tensor.shape
      s__STATIC_CONVERT_VAR_SHAPE_SUFFIX = shape(tensor)
      ...
      y = paddle.some_api(choose_shape_attr_or_api(s, s__STATIC_CONVERT_VAR_SHAPE_SUFFIX ))
      ```
      In this case, the code is consistent with the original dygraph meaning, which fixes the change-after-assign bug.
      
      **Code Key Note**
      
      To help reviewers: the key change of this PR is that `self.name_to_var_shape` now maps a name to its STATIC_CONVERT_VAR_SHAPE_SUFFIX name instead of to a shape node. If a variable name has the SUFFIX, we can choose between the attribute shape and the shape API. The other changes follow from this key change.
      
      **Consideration**
      The concern with this PR is that we store an extra static `shape` API result; will it harm the speed of Dy2stat? In some cases it will, but we argue that the benefit is greater than the cost.
      
      1. Extra calls to the static `shape` API happen when the coder assigns among shape variables. Take the following dygraph code as an instance:
      ```
      s1 = tensor.shape
      s2 = s1
      s3 = s2
      ...
      ```
      Then we call the extra static `shape` API again and again; however, users seldom write code like this.
      
      2. If the shape variable is used a lot, for example:
      ```
      s = tensor.shape
      y1 = paddle.some_api1(s)
      y2 = paddle.some_api2(s)
      y3 = paddle.some_api3(s)
      ```
      Our old logic creates 3 shape API calls, but the new logic creates just 1, and this is the more common user code pattern. In fact, if reviewers look at the current unit tests in this PR, you can see the op count decrease after this PR. So we argue that this PR can also improve speed in this code pattern.
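      
      A simplified, illustrative sketch of the selection idea behind choose_shape_attr_or_api (the real helper works on Dy2stat variables; the fallback rule here is an assumption for illustration):
      ```python
      def choose_shape_attr_or_api(attr_shape, api_shape):
          # Prefer the recorded shape attribute when every dim is known;
          # otherwise fall back to the statically computed shape API result.
          if all(isinstance(d, int) and d >= 0 for d in attr_shape):
              return attr_shape
          return api_shape
      
      # The attribute shape has an unknown batch dim, so the API result wins.
      print(choose_shape_attr_or_api([-1, 224, 224], [8, 224, 224]))  # [8, 224, 224]
      ```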
      cf43a321
  13. 20 Feb 2021 (1 commit)
  14. 19 Feb 2021 (1 commit)
  15. 18 Feb 2021 (1 commit)
    • Add Support for Tuple in for Loop (#30998) · c1375783
      Committed by Huihuang Zheng
      Dy2stat didn't support a tuple as the iteration variable in the past. This PR added three main cases:
      
             1). Non-enumerate case: for var1, var2 in var|var.numpy() will be re-written as:
                for FOR_ITER_TUPLE_PREFIX_x in var | var.numpy():
                  var1 = FOR_ITER_TUPLE_PREFIX_x[0]
                  var2 = FOR_ITER_TUPLE_PREFIX_x[1]
              2). Enumerate out tuple case: for t in enumerate(var|var.numpy) will be rewritten as:
                for FOR_ITER_TUPLE_INDEX_PREFIX_x, FOR_ITER_TUPLE_PREFIX_x in enumerate(var|var.numpy):
                  t = (FOR_ITER_TUPLE_INDEX_PREFIX_x, FOR_ITER_TUPLE_PREFIX_x)
        3). Enumerate inner tuple case: for i, (var1, (var2, var3)) in enumerate(var|var.numpy()) will
        be re-written as:
          for i, FOR_ITER_TUPLE_PREFIX_x in enumerate(var | var.numpy()):
                  var1 = FOR_ITER_TUPLE_PREFIX_x[0]
                  var2 = FOR_ITER_TUPLE_PREFIX_x[1][0]
                  var3 = FOR_ITER_TUPLE_PREFIX_x[1][1]
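      
      A plain-Python sketch of the case 1) rewrite, with an illustrative name in place of the generated FOR_ITER_TUPLE_PREFIX_* identifier:
      ```python
      pairs = [(1, "a"), (2, "b")]
      
      # Original dygraph-style loop:  for var1, var2 in pairs: ...
      # Rewritten form with a single generated loop variable:
      for _tuple_x in pairs:          # stands in for FOR_ITER_TUPLE_PREFIX_x
          var1 = _tuple_x[0]
          var2 = _tuple_x[1]
          print(var1, var2)
      ```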
      c1375783
  16. 07 Feb 2021 (1 commit)
  17. 05 Feb 2021 (1 commit)
  18. 03 Feb 2021 (2 commits)
  19. 27 Jan 2021 (1 commit)
  20. 26 Jan 2021 (1 commit)
  21. 20 Jan 2021 (3 commits)
  22. 14 Jan 2021 (1 commit)
  23. 13 Jan 2021 (2 commits)
  24. 11 Jan 2021 (3 commits)
  25. 08 Jan 2021 (3 commits)
  26. 07 Jan 2021 (1 commit)