1. 03 Jan, 2020 · 1 commit
  2. 17 Sep, 2019 · 1 commit
    • fix pow op, support tensor for argument factor. (#19313) · 677e7144
      Authored by liym27
      Improve pow op according to reviews:
      1. Delete unnecessary judgement statements in PowGradOpDescMaker;
      2. Improve the tests in test_api;
      
      overload GetKernelTypeForVar
      
      add stop_gradient=True when attr(factor) is a tensor Variable; update the examples in the pow API
      test=develop,test=document_preview
      677e7144
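      A minimal sketch of what the change enables, written against the fluid API of that era (the data setup here is an assumption, not code from this commit): the exponent can now be fed as a 1-element tensor Variable instead of only a Python float attribute.

        import paddle.fluid as fluid

        x = fluid.layers.data(name='x', shape=[2, 3], append_batch_size=False, dtype='float32')
        # After this change, factor may be a 1-element tensor Variable;
        # the op then sets stop_gradient=True on that input.
        factor = fluid.layers.fill_constant(shape=[1], dtype='float32', value=3.0)
        y = fluid.layers.pow(x, factor=factor)  # elementwise x ** 3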
  3. 15 May, 2019 · 1 commit
    • Double backward sqrt (#17387) · 4ef63101
      Authored by lvmengsi
      * double backward sqrt
      
      * refine unittest. test=develop
      
      * refine test. test=develop
      
      * remove alpha in unittest. test=develop
      4ef63101
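      For reference, the quantity this PR makes differentiable: for y = sqrt(x), the second derivative is -(1/4) * x^(-3/2). A standalone NumPy sketch (illustrative, not the PR's test code) checking the analytic form against central finite differences, the same kind of comparison such unit tests perform:

        import numpy as np

        def d2_sqrt(x):
            # Analytic second derivative of sqrt(x): -(1/4) * x ** (-3/2)
            return -0.25 * x ** -1.5

        x = np.random.uniform(0.5, 2.0, size=8)
        eps = 1e-4
        # Central finite difference approximation of the second derivative.
        numeric = (np.sqrt(x + eps) - 2.0 * np.sqrt(x) + np.sqrt(x - eps)) / eps ** 2
        assert np.allclose(numeric, d2_sqrt(x), rtol=1e-3)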
  4. 13 May, 2019 · 1 commit
    • add double grad for square op (#17173) · 11d3a38f
      Authored by Kaipeng Deng
      * add double grad for square. test=develop
      
      * format code. test=develop
      
      * fix for grad sum. test=develop
      
      * refine shape. test=develop
      
      * refine extract. test=develop
      11d3a38f
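      A sketch of how a double-grad kernel like this is typically exercised in Paddle's unit tests, using the gradient_checker introduced in #16862 further down this page; the import path and data setup here are assumptions:

        import numpy as np
        import paddle.fluid as fluid
        import paddle.fluid.layers as layers
        from paddle.fluid.tests.unittests import gradient_checker  # test-only helper

        shape = [2, 3]
        x = layers.data('x', shape, False, dtype='float64')
        x.persistable = True  # keep the var alive for the checker
        y = layers.square(x)
        x_arr = np.random.uniform(-1.0, 1.0, shape).astype('float64')
        # Compares the theoretical double gradient with a numerical one.
        gradient_checker.double_grad_check(
            [x], y, x_init=x_arr, place=fluid.CPUPlace(), eps=0.005)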
  5. 26 Apr, 2019 · 1 commit
  6. 23 Apr, 2019 · 1 commit
    • Support backward of backward for Relu and add a new gradient checker by... · c1c2633a
      Authored by qingqing01
      Support backward of backward for Relu and add a new gradient checker by comparing theoretical and numerical Jacobian. (#16862)
      
      * Support backward of backward and a new gradient checker
      * Rename decorators.py to decorator_helper.py, since the Python environment on the Windows CI already has a decorators package.
      
      1. Add ReluDoubleGradMaker when register relu_grad.
      2. Add a new gradient checker by comparing theoretical and numerical Jacobian.  Check double gradients by double_grad_check.
      c1c2633a
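      The checker's core idea, in a standalone NumPy sketch (illustrative, not this PR's code): build the theoretical Jacobian from the analytic derivative and a numerical Jacobian from finite differences, then compare them elementwise.

        import numpy as np

        def relu(x):
            return np.maximum(x, 0.0)

        n, delta = 4, 1e-5
        # Keep inputs away from 0 so the finite difference is well defined.
        x = np.random.uniform(0.1, 1.0, n) * np.random.choice([-1.0, 1.0], n)

        # Theoretical Jacobian of elementwise relu: diag(1[x > 0]).
        jac_theory = np.diag((x > 0.0).astype(np.float64))

        # Numerical Jacobian: perturb one input coordinate at a time.
        jac_numeric = np.zeros((n, n))
        for i in range(n):
            e = np.zeros(n)
            e[i] = delta
            jac_numeric[:, i] = (relu(x + e) - relu(x - e)) / (2.0 * delta)

        assert np.allclose(jac_theory, jac_numeric, atol=1e-5)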
  7. 10 Apr, 2019 · 1 commit
  8. 07 Nov, 2018 · 1 commit
    • Add fp16 backward support (#14202) · a9b5d42d
      Authored by chengduo
      * add fp16 backward support
      test=develop
      
      * add sum_op fp16 test
      
      * disable test_dist_save_load
      test=develop
      
      * add check_grad for sum
      
      * add unit test for softmax_grad fp16
      test=develop
      
      * add scale_op unit test
      
      * add mul_grad_op unit test for fp16
      
      * add cross_entropy_grad and mean_grad unit test for fp16
      test=develop
      
      * fix cross_entropy unit test
      
      * add pool2d fp16 unit test
      
      * refine conv2d fp16 unit test
      test=develop
      
      * refine activation unit test
      test=develop
      
      * fix ci
      test=develop
      
      * follow zhihong's comment, copy from https://github.com/PaddlePaddle/Paddle/pull/12796
      test=develop
      a9b5d42d
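      A sketch of the pattern these fp16 grad tests follow, in the OpTest style (the class name and tolerance are illustrative assumptions): run the op in float16 on a CUDA place and check the gradient with a loose error bound.

        import numpy as np
        import paddle.fluid.core as core
        from op_test import OpTest  # lives in python/paddle/fluid/tests/unittests

        class TestScaleFP16Grad(OpTest):
            def setUp(self):
                self.op_type = "scale"
                x = np.random.random((10, 10)).astype(np.float16)
                self.inputs = {'X': x}
                self.attrs = {'scale': 3.0}
                self.outputs = {'Out': x * np.float16(3.0)}

            def test_check_grad(self):
                place = core.CUDAPlace(0)
                if core.is_float16_supported(place):
                    # fp16 needs a much looser tolerance than fp32.
                    self.check_grad_with_place(
                        place, ['X'], 'Out', max_relative_error=0.05)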
  9. 17 Aug, 2018 · 1 commit
  10. 16 Aug, 2018 · 1 commit
  11. 17 Apr, 2018 · 1 commit
  12. 10 Apr, 2018 · 1 commit
  13. 21 Mar, 2018 · 1 commit
  14. 12 Feb, 2018 · 1 commit
  15. 10 Feb, 2018 · 2 commits
  16. 26 Dec, 2017 · 1 commit
  17. 12 Dec, 2017 · 1 commit
    • Refine device context (#6433) · 61ec0b95
      Authored by QI JUN
      The main fixes are:
      
      - take `DeviceContext` as the template parameter of math functors and OpKernel instead of `Place`
      - remove `eigen_device` interface in base class `DeviceContext`
      - remove `GetEigenDevice` interface in `ExecutionContext` and base class `DeviceContext`
      - remove unused `platform::EigenDeviceConverter`
      - rename `REGISTER_OP_GPU_KERNEL` to `REGISTER_OP_CUDA_KERNEL`
      - rename `USE_GPU_ONLY_OP` to `USE_CUDA_ONLY_OP`
      61ec0b95
  18. 27 Oct, 2017 · 1 commit
    • Gradient check use graph (#5027) · be00b0c4
      Authored by Yu Yang
      * Simplify Gradient Check
      
      * Stash
      
      * Extract apply_backward_pass to backward.py
      
      Rename apply_backward_pass to append_backward_ops
      
      * Use graph API to check gradient
      
      * Fix ci
      
      * Fix CI
      
      * Fix backward for double precision
      
      * Stash
      
      * Fix CI
      
      * Fix ci
      
      * Ignore GRU test
      
      * Ignore xe op
      
      * Fix CI
      
      * Fix softmax with xe (cross entropy) gradient
      
      The correct equation should be IG = OG * (d_softmax_with_xe())
      
      * Fix typo
      
      * Fix merge error
      
      * Disable LRN
      be00b0c4
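      What the graph-based check boils down to, sketched with the later fluid spelling of the helper (append_backward_ops was subsequently renamed append_backward); names and shapes are illustrative.

        import numpy as np
        import paddle.fluid as fluid
        from paddle.fluid.backward import append_backward

        main = fluid.Program()
        with fluid.program_guard(main):
            x = fluid.layers.data(name='x', shape=[1, 3], append_batch_size=False, dtype='float32')
            x.stop_gradient = False
            loss = fluid.layers.reduce_sum(fluid.layers.square(fluid.layers.softmax(x)))
            append_backward(loss)  # walks the graph, appending one grad op per forward op

        exe = fluid.Executor(fluid.CPUPlace())
        x_np = np.random.rand(1, 3).astype('float32')
        # "@GRAD" is fluid's suffix for gradient variables.
        loss_v, x_grad = exe.run(main, feed={'x': x_np}, fetch_list=[loss, x.name + '@GRAD'])
        # x_grad holds the analytic gradient; a numerical check then compares it
        # against finite differences of the loss under perturbations of x_np.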
  19. 09 Oct, 2017 · 1 commit
  20. 06 Oct, 2017 · 2 commits
  21. 29 Sep, 2017 · 3 commits
  22. 14 Sep, 2017 · 3 commits
  23. 13 Sep, 2017 · 1 commit
  24. 08 Aug, 2017 · 1 commit
  25. 07 Aug, 2017 · 1 commit
  26. 04 Aug, 2017 · 1 commit
  27. 02 Aug, 2017 · 1 commit
  28. 31 Jul, 2017 · 1 commit
  29. 25 Jul, 2017 · 1 commit
  30. 19 Jul, 2017 · 1 commit
  31. 18 Jul, 2017 · 1 commit
  32. 17 Jul, 2017 · 1 commit