1. 19 Oct 2021 · 1 commit
  2. 13 Oct 2021 · 1 commit
    • [New Feature] Support triple grad in Paddle (#36187) · 2c44ee7e
      Jiabin Yang authored
      * native commit for triple grad of sigmoid
      
      * Updated unittests files
      
      * init functional jacobian api
      
      * Updated triple_test func
      
      * Updated gradient_checker & test_script
      
      * finish test with dtype float32
      
      * add float64 test case
      
      * polish code
      
      * use atol=1e-5 with dtype float64
      
      * fix for ci
      
      * set timeout for test_jacobian
      
      * fix dygraph grad to support high differential
      
      * polish API docstring
      
      * Updated gradient checker and some related files
      
      * fix double grad strip error for high differential
      
      * Add Sigmoid triple grad tests
      
      * fix dygraph double grad dtype error when calling for high differential scenario
      
      * Updated triple grad tests func
      
      * Use np.random to initialize ddx
      
      * Updated triple_grad_check func
      
      * add todo for gradient checker and refine some comments
      
      * remove additional code
      
      * add test for warning in backward.py
      
      * format python code
      Co-authored-by: veyron95 <veyron_wu@163.com>
      Co-authored-by: levi131 <limaolin01@baidu.com>
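
      A minimal dygraph sketch of the third-order differentiation this PR enables, assuming the Paddle 2.x `paddle.grad` API; this is an illustration, not the PR's own test code:

      ```python
      import paddle

      x = paddle.to_tensor([1.0, 2.0], stop_gradient=False)
      y = paddle.nn.functional.sigmoid(x)

      # First-order gradient; create_graph=True keeps the backward graph
      # differentiable so it can be differentiated again.
      (dx,) = paddle.grad(y, x, create_graph=True)
      # Second-order gradient of sigmoid w.r.t. x.
      (ddx,) = paddle.grad(dx, x, create_graph=True)
      # Third-order gradient: the "triple grad" this PR adds support for.
      (dddx,) = paddle.grad(ddx, x)
      print(dddx.numpy())
      ```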
  3. 28 Sep 2021 · 1 commit
    • [hybrid] seed and dropout op support force-cpu (#35820) · 58c8f6b3
      xiayanming authored
      * [HIP] fix ops that did not support AMD GPU: the PADDLE_WITH_ROCM flag was invalid
      
      * [hybrid] seed and dropout op support force-cpu
      
      * [hybrid] fix seed CI failure
      
      * add AsExtra for force_cpu of seed op
  4. 05 Aug 2021 · 1 commit
  5. 04 Aug 2021 · 1 commit
    • Add gradient with optimizer API (#34395) · d9e63a81
      chentianyu03 authored
      * add gradients_with_optimizer api
      
      * modify gradients_with_optimizer
      
      * add gradients_with_optimizer api into paddle.autograd.backward_mode
      
      * add gradients_with_optimizer test case
      
      * add doc for gradients_with_optimizer
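
      A hypothetical usage sketch pieced together from the commit messages above; the module path (`paddle.autograd.backward_mode`) and the exact call signature of `gradients_with_optimizer` are assumptions, not confirmed API:

      ```python
      import paddle

      paddle.enable_static()

      main = paddle.static.Program()
      startup = paddle.static.Program()
      with paddle.static.program_guard(main, startup):
          x = paddle.static.data(name="x", shape=[None, 4], dtype="float32")
          w = paddle.create_parameter(shape=[4, 1], dtype="float32")
          loss = paddle.mean(paddle.matmul(x, w))
          opt = paddle.optimizer.SGD(learning_rate=0.01)
          # Assumed call: append the backward pass and the optimizer's update
          # ops to `main` in one step, instead of calling opt.minimize(loss).
          paddle.autograd.backward_mode.gradients_with_optimizer(main, opt)
      ```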
  6. 14 Jul 2021 · 1 commit
  7. 05 Jul 2021 · 1 commit
  8. 02 Jul 2021 · 1 commit
  9. 09 Jun 2021 · 1 commit
  10. 26 Apr 2021 · 1 commit
  11. 07 Apr 2021 · 1 commit
  12. 02 Apr 2021 · 1 commit
  13. 12 Jan 2021 · 1 commit
  14. 24 Dec 2020 · 1 commit
  15. 26 Nov 2020 · 1 commit
    • Add static_only decorator for static apis (#29015) · d0129fcd
      Chen Weihang authored
      * add static_only for static api
      
      * add static_only for class init
      
      * remove static_only for default_main_program
      
      * remove create_parameter & startup_program
      
      * remove failed apis
      
      * revert py_func import
      
      * remove global scope
      
      * remove some api
      
      * remove cuda pinned place
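
      A rough sketch of what such a decorator can look like; Paddle's actual implementation and error message may differ:

      ```python
      import functools

      import paddle

      def static_only(func):
          # Illustration only: reject calls to static-graph-only APIs
          # while the framework is running in dygraph mode.
          @functools.wraps(func)
          def wrapper(*args, **kwargs):
              if paddle.in_dynamic_mode():
                  raise AssertionError(
                      f"'{func.__name__}' only supports static graph mode; "
                      "call paddle.enable_static() first.")
              return func(*args, **kwargs)
          return wrapper
      ```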
  16. 14 Oct 2020 · 1 commit
  17. 28 Sep 2020 · 1 commit
  18. 21 Sep 2020 · 1 commit
  19. 11 Sep 2020 · 1 commit
  20. 13 Jul 2020 · 1 commit
  21. 14 May 2020 · 1 commit
  22. 30 Apr 2020 · 1 commit
    • Fix double_grad bug in static-graph (#24190) · 84cf5db8
      qingqing01 authored
      Rename internal gradient variables in multiple backward passes so that
      they get different names from the previous backward pass. For example,
      with y = x * x and grad = fluid.gradients(fluid.gradients(y, x) + y * y, x),
      the gradient variable names of the partial forward network (y * y)
      created in the second backward pass may collide with names from the
      first-time fluid.gradients(y, x).
      
      test=develop
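
      A sketch of the failing pattern from the commit message, written against the `paddle.static` API (the original used `fluid.gradients`, which it mirrors):

      ```python
      import paddle

      paddle.enable_static()

      main = paddle.static.Program()
      with paddle.static.program_guard(main):
          x = paddle.static.data(name="x", shape=[2], dtype="float32")
          x.stop_gradient = False
          y = x * x
          # First backward pass: dy/dx.
          (dx,) = paddle.static.gradients([y], [x])
          # Second backward pass re-differentiates part of the forward net
          # (y * y); before this fix, its internal gradient variable names
          # could collide with those created by the first pass.
          (ddx,) = paddle.static.gradients([dx + y * y], [x])
      ```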
  23. 15 Apr 2020 · 1 commit
  24. 10 Apr 2020 · 1 commit
  25. 09 Apr 2020 · 1 commit
  26. 20 Mar 2020 · 1 commit
    • Add dygraph double grad implementation (#22939) · a31d7328
      Zeng Jinle authored
      * add double grad implementation for dygraph, test=develop
      
      * polish code, add uts, test=develop
      
      * fix place bug, test=develop
      
      * polish code, add more uts for coverage, test=develop
      
      * add no_grad_set, test=develop
      
      * add star gan ut, test=develop
      
      * follow comments, test=develop
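
      A minimal dygraph double-grad sketch in the spirit of this PR, assuming the current `paddle.grad` API:

      ```python
      import paddle

      x = paddle.to_tensor([3.0], stop_gradient=False)
      y = x ** 3

      # create_graph=True keeps the backward graph so the first-order
      # gradient itself stays differentiable.
      (dx,) = paddle.grad(y, x, create_graph=True)   # dy/dx   = 3x^2 -> 27
      (ddx,) = paddle.grad(dx, x)                    # d2y/dx2 = 6x   -> 18
      print(dx.numpy(), ddx.numpy())
      ```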
  27. 19 Mar 2020 · 1 commit
  28. 17 Mar 2020 · 1 commit
  29. 03 Mar 2020 · 1 commit
  30. 23 Feb 2020 · 1 commit
  31. 10 Feb 2020 · 1 commit
  32. 07 Feb 2020 · 1 commit
    • polish no_grad_set of gradient and append_backward (#22440) · 50af6b5d
      Aurelius84 authored
      * polish backward api doc test=develop, test=document_preview, test=document_fix
      
      * no_grad supports set of Variable test=develop, test=document_preview
      
      * polish sample code of append_backward test=develop, test=document_preview
      
      * change assert to raising TypeError test=develop, test=document_preview
      
      * fix unittest failed test=develop
      
      * rm useless file test=develop
      
      * polish en doc test=develop
      
      * polish code of no_grad_set test=develop
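
      A sketch of the "no_grad supports set of Variable" change, assuming the documented `paddle.static.append_backward` signature:

      ```python
      import paddle

      paddle.enable_static()

      main = paddle.static.Program()
      startup = paddle.static.Program()
      with paddle.static.program_guard(main, startup):
          x = paddle.static.data(name="x", shape=[None, 4], dtype="float32")
          w1 = paddle.create_parameter(shape=[4, 4], dtype="float32")
          w2 = paddle.create_parameter(shape=[4, 1], dtype="float32")
          loss = paddle.mean(paddle.matmul(paddle.matmul(x, w1), w2))
          # After this PR, no_grad_set may contain Variables directly,
          # not just variable names; w1 is excluded from the backward pass.
          params_grads = paddle.static.append_backward(loss, no_grad_set={w1})
      ```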
  33. 20 Jan 2020 · 1 commit
    • Polish backward.py to prune more ops (#22246) · 039bb505
      Zeng Jinle authored
      * polish backward prune, test=develop
      
      * fix control flow op bug, test=develop
      
      * add some unittests, test=develop
      
      * fix unittest args, test=develop
      
      * follow huihuang's comments, test=develop
  34. 16 Jan 2020 · 1 commit
  35. 04 Jan 2020 · 1 commit
    • control flow: support optimizer called (#21851) · 7d8d4599
      liym27 authored
      * append optimize op in the grad block of current block if current block is in control flow. test=develop
      
      * add conditional grad op when optimizer used in control flow. test=develop
      
      * add comment and modify typo. test=develop
      
      * fix append_backward to support control flow. test=develop
      
      * add test. test=develop
      
      * fix copy_var_to_parent_block and conditional_block_grad. test=develop
      
      * fix bug: revert to append conditional_block_grad vars to sub grad block. test=develop
      
      * fix bug: revert to assigning var to parent block even if var is already in parent block
      
      * fix bug: handle the case where outputs is empty. test=develop
      
      * move _rename_grad_ out. test=develop
      
      * modify code according to reviews from Huihuang. test=develop
      
      * modify code according to reviews from Jinle. test=develop
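
      A sketch of the pattern this PR enables, i.e. calling an optimizer on a loss produced inside a control-flow block (assuming the `paddle.static.nn.cond` API; the PR itself targeted the older fluid layers):

      ```python
      import paddle

      paddle.enable_static()

      main = paddle.static.Program()
      startup = paddle.static.Program()
      with paddle.static.program_guard(main, startup):
          x = paddle.static.data(name="x", shape=[1], dtype="float32")
          w = paddle.create_parameter(shape=[1], dtype="float32")
          # The loss comes out of a conditional_block sub-block.
          loss = paddle.static.nn.cond(paddle.mean(x) < 0.0,
                                       lambda: paddle.mean(w * x),
                                       lambda: paddle.mean(w * x * x))
          # minimize() now appends the backward and optimize ops correctly
          # even though the loss originates in a control-flow sub-block.
          paddle.optimizer.SGD(learning_rate=0.1).minimize(loss)
      ```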
  36. 01 Jan 2020 · 1 commit
  37. 18 Dec 2019 · 1 commit
  38. 10 Dec 2019 · 1 commit
  39. 06 Dec 2019 · 1 commit
    • Add More Complex Tests and Fix Bugs for Control Flow cond API (#21532) · 1dcf6a72
      Huihuang Zheng authored
      Add tests that use dy/dx to make sure the gradient values calculated by the control-flow backward pass are correct. Also fixed bugs detected by those tests.
      
      Fix bugs:
      
      1. Unlike sum_op, optimizer ops don't allow uninitialized input tensors. But in conditional_block_grad_op, since the conditional_block may not run, the output gradient tensor may be uninitialized, which causes an error in the optimizer op. To fix it, we should either let optimizer ops support uninitialized inputs like sum_op does, or assign the uninitialized gradient to 0 when the conditional_block_grad_op doesn't run. There are about 10+ optimizer ops. **To keep it simple, this PR just assigns the output gradient of conditional_block_grad_op to 0.** It could be further explored whether optimizer ops can support uninitialized input tensors like sum_op, because in theory we could speed things up by skipping the assignment in conditional_block_grad_op.
      
      2. Infer parameter shapes during append_backward. I hadn't realized that all our parameters are in the global block: when an op_desc infers shapes in a sub-block, it may not know the shapes of gradients of parameters whose shape information lives in the global block. I fixed it by inferring gradient shapes from the corresponding forward variables.
      
      This PR also did some code cleanup:
      1. Print the variable name when sgd_op catches a shape error, so that it is easier to debug
      2. Fix a typo: dicta -> dict
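
      A sketch of the kind of dy/dx check the new tests perform, written with the `paddle.static` API (the PR predates it and used fluid):

      ```python
      import numpy as np
      import paddle

      paddle.enable_static()

      main = paddle.static.Program()
      with paddle.static.program_guard(main):
          x = paddle.static.data(name="x", shape=[1], dtype="float32")
          x.stop_gradient = False
          # Branch-dependent forward: y = x*x if mean(x) > 0 else 2x.
          y = paddle.static.nn.cond(paddle.mean(x) > 0.0,
                                    lambda: x * x,
                                    lambda: 2.0 * x)
          (dx,) = paddle.static.gradients([y], [x])

      exe = paddle.static.Executor(paddle.CPUPlace())
      (out,) = exe.run(main,
                       feed={"x": np.array([3.0], dtype="float32")},
                       fetch_list=[dx])
      print(out)  # expect [6.] on the taken branch: d(x*x)/dx = 2x
      ```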
  40. 29 Nov 2019 · 1 commit
    • Fix Cond Bug for Nested Control Flow (#21340) · 630be319
      Huihuang Zheng authored
      * Commit before merging develop test=develop
      
      * Backup after working with Huihuang logs
      
      * Commit before deleting Huihuang debug logging
      
      * Commit before debug test=develop
      
      * Fix bug commit test=develop
      
      * Backup of fixing bugs test=develop
      
      * Clean up code test=develop
      
      * Fix a bug in sum_op test=develop