1. 20 Mar 2020, 1 commit
    • Add dygraph double grad implementation (#22939) · a31d7328
      Committed by Zeng Jinle
      * add double grad implementation for dygraph, test=develop
      
      * polish code, add uts, test=develop
      
      * fix place bug, test=develop
      
      * polish codes, add more uts for coverages, test=develop
      
      * add no_grad_set, test=develop
      
      * add star gan ut, test=develop
      
      * follow comments, test=develop
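A minimal sketch of the double-grad support this commit adds, assuming the `fluid.dygraph.grad` API of this Paddle generation: for y = x*x, dy/dx = 2x and the second derivative is 2.

```python
import numpy as np
import paddle.fluid as fluid

with fluid.dygraph.guard():
    x = fluid.dygraph.to_variable(np.array([3.0], dtype='float32'))
    x.stop_gradient = False
    y = fluid.layers.elementwise_mul(x, x)  # y = x * x

    # create_graph=True records the backward ops so the first-order
    # gradient can itself be differentiated again
    dy_dx, = fluid.dygraph.grad(y, x, create_graph=True)
    d2y_dx2, = fluid.dygraph.grad(dy_dx, x)
    # dy_dx == 6.0, d2y_dx2 == 2.0
```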
  2. 19 Mar 2020, 1 commit
  3. 17 Mar 2020, 1 commit
  4. 03 Mar 2020, 1 commit
  5. 23 Feb 2020, 1 commit
  6. 10 Feb 2020, 1 commit
  7. 07 Feb 2020, 1 commit
    • polish no_grad_set of gradient and append_backward (#22440) · 50af6b5d
      Committed by Aurelius84
      * polish backward api doc test=develop, test=document_preview, test=document_fix
      
      * polish backward api doc test=develop, test=document_preview, test=document_fix
      
      * no_grad supports set of Variable test=develop, test=document_preview
      
      * polish sample code of append_backward test=develop, test=document_preview
      
      * modify assert into raising TypeError test=develop, test=document_preview
      
      * fix unittest failed test=develop
      
      * rm useless file test=develop
      
      * polish en doc test=develop
      
      * polish code of no_grad_set test=develop
      
      * polish code of no_grad_set test=develop
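A sketch of the polished behavior, assuming the static-graph fluid API of this version: `no_grad_set` can now be a set of Variables rather than variable names, and a wrong element type raises TypeError instead of failing an assert.

```python
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[None, 4], dtype='float32')
hidden = fluid.layers.fc(input=x, size=4)
loss = fluid.layers.mean(fluid.layers.fc(input=hidden, size=1))

# no_grad_set accepts Variables directly; no gradient is computed
# for `hidden` or for anything reachable only through it
param_grads = fluid.backward.append_backward(loss,
                                             no_grad_set=set([hidden]))
```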
  8. 20 Jan 2020, 1 commit
    • Polish backward.py to prune more ops (#22246) · 039bb505
      Committed by Zeng Jinle
      * polish backward prune, test=develop
      
      * fix control flow op bug, test=develop
      
      * add some unittests, test=develop
      
      * fix unittest args, test=develop
      
      * follow huihuang's comments, test=develop
  9. 16 Jan 2020, 1 commit
  10. 04 Jan 2020, 1 commit
    • control flow: support optimizer called (#21851) · 7d8d4599
      Committed by liym27
      * append optimize op in the grad block of current block if current block is in control flow. test=develop
      
      * add conditional grad op when optimizer used in control flow. test=develop
      
      * add comment and modify typo. test=develop
      
      * fix append_backward to support control flow. test=develop
      
      * add test. test=develop
      
      * fix copy_var_to_parent_block and conditional_block_grad. test=develop
      
      * fix bug: revert to append conditional_block_grad vars to sub grad block. test=develop
      
      * fix bug: revert to assign var to parent block even if var already is in parent block
      
      * fix bug: consider outputs is empty. test=develop
      
      * move _rename_grad_ out. test=develop
      
      * modify code according to reviews from Huihuang. test=develop
      
      * modify code according to reviews from Jinle. test=develop
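A sketch of the case this commit enables, assuming the `fluid.layers.cond` API: when the loss flows through a conditional_block, `minimize()` calls append_backward, which now places the conditional_block_grad and optimize ops into the correct (sub-)blocks.

```python
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[None, 1], dtype='float32')
hidden = fluid.layers.fc(input=x, size=4)  # trainable parameters

two = fluid.layers.fill_constant(shape=[1], dtype='float32', value=2.0)
pred = fluid.layers.less_than(fluid.layers.reduce_mean(hidden), two)

# the loss depends on a conditional_block, so its backward needs a
# conditional_block_grad op in the sub grad block
out = fluid.layers.cond(pred,
                        lambda: fluid.layers.scale(hidden, scale=2.0),
                        lambda: fluid.layers.scale(hidden, scale=0.5))
loss = fluid.layers.reduce_mean(out)

fluid.optimizer.SGD(learning_rate=0.01).minimize(loss)
```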
  11. 01 Jan 2020, 1 commit
  12. 18 Dec 2019, 1 commit
  13. 10 Dec 2019, 1 commit
  14. 06 Dec 2019, 1 commit
    • Add Much Complex Test and Fix Bugs for Control Flow cond API (#21532) · 1dcf6a72
      Committed by Huihuang Zheng
      Add tests that use dy/dx to make sure the gradient values calculated by the control flow backward are correct. Also fixed bugs detected by those tests.
      
      Fix bugs:
      
      1. Unlike sum_op, optimizer ops don't allow uninitialized input tensors. But in conditional_block_grad_op, since the conditional_block may not run, the output gradient tensor may be uninitialized, which causes the optimizer op to error. To fix it, we should either let optimizer ops support uninitialized inputs like sum_op does, or assign the uninitialized gradient to 0 when the conditional_block_grad_op doesn't run. I found there are about 10+ optimizer ops. **To be simpler, I just assign the output gradient of the conditional_block_grad_op to 0 in this PR**. But it can be further explored whether we can make optimizer ops support uninitialized input tensors like sum_op does, because theoretically we could speed things up by skipping the assignment in conditional_block_grad_op.
      
      2. Infer parameter shapes during append_backward. I didn't know that all our parameters are in the global block. When an op_desc is inferring shapes in a sub-block, it may not know the shape of gradients of parameters whose shape information lives in the global block. I fixed it by inferring the shapes of those gradients from the forward vars.
      
      This PR also did some code clean up:
      1. Print the var name when sgd_op catches shape error so that it is easier to debug
      2. Fix a typo: dicta -> dict
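A sketch of the kind of dy/dx check described above, assuming `fluid.gradients` and the cond API: the analytic derivative of the taken branch is compared against the fetched gradient.

```python
import numpy as np
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[1], dtype='float32')
x.stop_gradient = False
two = fluid.layers.fill_constant(shape=[1], dtype='float32', value=2.0)

# y = x * x if x < 2 else x + 2, so dy/dx is 2x or 1 by branch
y = fluid.layers.cond(fluid.layers.less_than(x, two),
                      lambda: fluid.layers.elementwise_mul(x, x),
                      lambda: fluid.layers.elementwise_add(x, two))
dy_dx = fluid.gradients(y, x)

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
grad_val, = exe.run(feed={'x': np.array([1.5], dtype='float32')},
                    fetch_list=dy_dx)
# true branch taken: expect grad_val == 2 * 1.5 == 3.0
```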
  15. 29 Nov 2019, 1 commit
    • Fix Cond Bug for Nested Control Flow (#21340) · 630be319
      Committed by Huihuang Zheng
      * Commit before merging develop
      
      test=develop
      
      * Backup after working with Huihuang logs
      
      * Commit before deleting Huihuang debug loggings
      
      * Commit before debug
      
      test=develop
      
      * Fix bug commit
      
      test=develop
      
      * Backup of fixing bugs
      
      test=develop
      
      * Clean up code
      
      test=develop
      
      * Fix a bug in sum_op
      
      test=develop
  16. 30 Oct 2019, 1 commit
  17. 19 Oct 2019, 1 commit
  18. 13 Oct 2019, 1 commit
    • fill_constant support Tensor (#20521) · fc6ec3b9
      Committed by liym27
      1. fill_constant supports Tensor.
      2. Fix bug in backward.py: use fill_constant instead of fill_constant_batch_size_like.
      3. Fix bug in ExpandGradOp.
      
      test=develop
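A sketch of the new capability, assuming it is the shape argument that gains Tensor support:

```python
import paddle.fluid as fluid

# shape is itself a 1-D integer Tensor rather than a Python list
shape = fluid.layers.fill_constant(shape=[2], dtype='int64', value=3)
out = fluid.layers.fill_constant(shape=shape, dtype='float32', value=1.0)
# out is a 3 x 3 tensor filled with 1.0
```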
  19. 09 Oct 2019, 2 commits
  20. 26 Sep 2019, 1 commit
    • fix doc of apply_optimize (#19965) · d62360fe
      Committed by mapingshuo
      * fix doc of apply_optimize
      test=document_fix
      test=document_preview
      
      * modify doc of backward
      test=develop
      test=document_fix
      
      * modify document hash
      test=develop
      test=document_preview
  21. 23 Sep 2019, 1 commit
    • Forward recompute3 (#19913) · 9901f696
      Committed by mapingshuo
      * add recompute based checkpoints methods for large batch training
      test=develop
      
      * add append_backward_with_forward_recomputation
      test=develop
      
      * refine optimizer
      test=develop
      
      * update backward and optimizer
      test=develop
      
      * make Variable usable
      test=develop
      
      * add recompute code
      
      * refine optimizer
      test=develop
      
      * refine addup _append_backward_ops_with_checkpoints_
      1) for recompute part, just cache the grad_op_desc without appending to block
      2) before appending grad_op_desc to backward part, addup_repetitive_vars, remove unused branch
      test=develop
      
      * make method private
      
      * add recompute strategy into DistributedStrategy
      test=develop
      
      * checkpoint version3
      test=develop
      
      * remove some print information
      test=develop
      
      * remove unused sumop
      test=develop
      
      * try to fix recompute with graph building modules
      
      * add input names to vars that should be held
      
      * add memory debug tool
      
      * backup backward
      
      * Fix bugs
      
      * add backward desc for op not in any segments
      
      * add exception info for sub_block
      
      test=develop
      
      * modify code style
      
      test=develop
      
      * modify code style
      
      test=develop
      
      * remove print functions
      
      test=develop
      
      * add API spec
      
      test=develop
      test=document_preview
      
      * make Recompute a child class of Optimizer
      
      test=develop
      test=document_preview
      
      * add API spec
      
      test=develop
      test=document_preview
      
      * modify API spec
      
      test=develop
      test=document_preview
      
      * add document for Recompute
      
      test=develop
      test=document_preview
      
      * change API doc of Recompute
      
      test=develop
      test=document_preview
      
      * code cleaning
      
      test=develop
      test=document_preview
      
      * modify API spec
      
      * fix bugs when segments hold no element
      
      * add testcase for Recompute Optimizer
      
      test=develop
      test=document_preview
      
      * add test for apply_gradient, and code cleaning
      
      test=develop
      test=document_preview
      
      * add test case for load function
      
      * enable CI
      
      test=develop
      test=document
      
      * add test case
      
      test=develop
      test=document_preview
      
      * add sample code for 4 function of recompute optimizer
      
      test=develop
      test=document_preview
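A minimal sketch of the recompute workflow introduced here, assuming the RecomputeOptimizer wrapper and its `_set_checkpoints` method; the network and checkpoint choice are illustrative.

```python
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[None, 32], dtype='float32')
label = fluid.data(name='label', shape=[None, 1], dtype='int64')

h1 = fluid.layers.fc(input=x, size=64, act='relu')
h2 = fluid.layers.fc(input=h1, size=64, act='relu')
prediction = fluid.layers.fc(input=h2, size=10, act='softmax')
loss = fluid.layers.mean(
    fluid.layers.cross_entropy(input=prediction, label=label))

# wrap a normal optimizer; activations between checkpoints are
# freed in the forward pass and recomputed during backward,
# trading compute for memory on large batches
sgd = fluid.optimizer.SGD(learning_rate=0.01)
recompute = fluid.optimizer.RecomputeOptimizer(sgd)
recompute._set_checkpoints([h1, h2])
recompute.minimize(loss)
```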
  22. 11 Sep 2019, 1 commit
    • fix api-doc error for dygraph and backward (#19721) · 3e5fb636
      Committed by Youwei Song
      * update dygraph api-doc and backward api-doc, test=develop
      
      * update dygraph api-doc and backward api-doc, update api.spec, test=develop
      
      * update dygraph api-doc and backward api-doc, update api.spec, test=develop
      
      * update API.spec, test=develop
  23. 26 Aug 2019, 1 commit
  24. 24 Jul 2019, 1 commit
  25. 02 Jul 2019, 1 commit
  26. 01 Jul 2019, 1 commit
  27. 16 Jun 2019, 1 commit
  28. 16 May 2019, 1 commit
  29. 08 May 2019, 1 commit
    • Repair api example (#17221) · e388a1fb
      Committed by lujun
      Fix the following API examples:
      
      paddle.fluid.scope_guard
      paddle.fluid.backward.append_backward
      paddle.fluid.cpu_places
      paddle.fluid.cuda_pinned_places
      paddle.fluid.cuda_places
      paddle.fluid.in_dygraph_mode
      paddle.fluid.CUDAPlace
      paddle.fluid.CPUPlace
      paddle.fluid.CUDAPinnedPlace
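In the spirit of the repaired append_backward example, a small sketch (the network and a later-era `fluid.data` are assumptions):

```python
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[None, 13], dtype='float32')
y = fluid.data(name='y', shape=[None, 1], dtype='float32')
y_pred = fluid.layers.fc(input=x, size=1)
loss = fluid.layers.mean(fluid.layers.square_error_cost(y_pred, y))

# appends the backward ops and returns (parameter, gradient) pairs
param_grads = fluid.backward.append_backward(loss)
```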
  30. 23 Apr 2019, 1 commit
    • Support backward of backward for Relu and add a new gradient checker by comparing theoretical and numerical Jacobian (#16862) · c1c2633a
      Committed by qingqing01
      
      * Support backward of backward and a new gradient checker
      * Rename decorators.py to decorator_helper.py, since Python on Windows CI has decorators package.
      
      1. Add ReluDoubleGradMaker when register relu_grad.
      2. Add a new gradient checker by comparing theoretical and numerical Jacobian.  Check double gradients by double_grad_check.
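A sketch of the new checker, assuming the `gradient_checker` utility that lives in Paddle's unittest tree (the import path assumes you run from that directory); the x_init nudge keeps samples away from relu's kink at 0, where the derivative is undefined.

```python
import numpy as np
import paddle.fluid as fluid
from gradient_checker import double_grad_check  # Paddle unittest utility

prog = fluid.Program()
with fluid.program_guard(prog):
    x = fluid.layers.data('x', shape=[2, 3],
                          append_batch_size=False, dtype='float64')
    x.persistable = True
    y = fluid.layers.relu(x)

    # keep inputs bounded away from zero so numerical differencing
    # never straddles relu's non-differentiable point
    x_init = np.random.uniform(-1, 1, [2, 3]).astype('float64')
    x_init[np.abs(x_init) < 0.005] = 0.02

    # compares the theoretical and numerical Jacobian of the double grad
    double_grad_check([x], y, x_init=x_init, place=fluid.CPUPlace())
```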
  31. 03 Apr 2019, 1 commit
  32. 19 Dec 2018, 1 commit
  33. 18 Dec 2018, 1 commit
  34. 13 Dec 2018, 1 commit
  35. 26 Sep 2018, 1 commit
  36. 18 Sep 2018, 1 commit
  37. 15 Aug 2018, 2 commits
  38. 14 Aug 2018, 1 commit