  1. 27 Feb 2020, 1 commit
    • Refine adam op to improve performance, test=develop (#22346) · 72dde4ab
      Committed by zhaoyuchen2018
      * Refine adam op, test=develop
      
      * Fuse kernels together to reduce cpu time.
      
      * Refine paddle enforce, test=develop
      
      * Remove some comments, test=develop
      
      * Refine code, test=develop
      
      * Refine cuda kernel, test=develop
      
      * Refine code according to comments, test=develop
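      The fusion described in these messages is the core of the speedup:
      instead of one kernel launch per output tensor (moment1, moment2,
      param), a single kernel computes all three per element. Below is a
      minimal CUDA sketch of that idea; FusedAdamKernel and its
      parameters are hypothetical names, not Paddle's actual kernel.

        // Illustrative fused Adam update: one launch writes both moments
        // and the parameter, keeping intermediates in registers instead
        // of round-tripping through global memory between kernels.
        __global__ void FusedAdamKernel(float* param, float* moment1,
                                        float* moment2, const float* grad,
                                        float lr, float beta1, float beta2,
                                        float beta1_pow, float beta2_pow,
                                        float epsilon, int n) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i >= n) return;
          float g = grad[i];
          float m1 = beta1 * moment1[i] + (1.0f - beta1) * g;
          float m2 = beta2 * moment2[i] + (1.0f - beta2) * g * g;
          moment1[i] = m1;
          moment2[i] = m2;
          float m1_hat = m1 / (1.0f - beta1_pow);  // beta1_pow = beta1^t
          float m2_hat = m2 / (1.0f - beta2_pow);  // beta2_pow = beta2^t
          param[i] -= lr * m1_hat / (sqrtf(m2_hat) + epsilon);
        }

        // Launch example:
        // FusedAdamKernel<<<(n + 255) / 256, 256>>>(
        //     param, m1, m2, grad, lr, b1, b2, b1_pow, b2_pow, eps, n);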
  2. 11 Dec 2018, 1 commit
  3. 16 Nov 2018, 1 commit
    • Refine operator cmake (#14413) · a2d9b344
      Committed by Wu Yi
      * wip simplify operator framework
      
      * wip
      
      * wip
      
      * done test=develop
      
      * clean test=develop
      
      * fix test=develop
      
      * fix deps test=develop
      
      * fix cpu build test=develop
      
      * fix tensorrt build test=develop
      
      * fix tests test=develop
      
      * fix test=develop
      
      * fix cpu build test=develop
  4. 12 Feb 2018, 1 commit
  5. 10 Feb 2018, 2 commits
  6. 26 Dec 2017, 1 commit
  7. 12 Dec 2017, 1 commit
    • Refine device context (#6433) · 61ec0b95
      Committed by QI JUN
      The main fixes are:
      
      - take `DeviceContext` as the template parameter of math functors and OpKernel instead of `Place`
      - remove `eigen_device` interface in base class `DeviceContext`
      - remove `GetEigenDevice` interface in `ExecutionContext` and base class `DeviceContext`
      - remove unused `platform::EigenDeviceConverter`
      - rename `REGISTER_OP_GPU_KERNEL` to `REGISTER_OP_CUDA_KERNEL`
      - rename `USE_GPU_ONLY_OP` to `USE_CUDA_ONLY_OP`
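      To illustrate the first point, here is a simplified sketch of
      taking DeviceContext as the template parameter instead of Place;
      ScaleFunctor and CPUDeviceContext are hypothetical stand-ins, not
      the real Paddle classes.

        // Hypothetical stand-in for a CPU context type.
        struct CPUDeviceContext {};

        // Before the refactor, functors were templated on a Place type;
        // after, the context type itself is the template parameter.
        template <typename DeviceContext, typename T>
        struct ScaleFunctor;

        // CPU specialization: device resources (streams and allocators
        // on the CUDA side) come in through the context argument, so
        // the body never branches on a Place value.
        template <typename T>
        struct ScaleFunctor<CPUDeviceContext, T> {
          void operator()(const CPUDeviceContext& /*ctx*/, const T* x,
                          T* out, T scale, int n) const {
            for (int i = 0; i < n; ++i) out[i] = scale * x[i];
          }
        };

      Under this shape, the renamed REGISTER_OP_CUDA_KERNEL macro can
      instantiate the same kernel template with the CUDA context type.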
  8. 21 Nov 2017, 1 commit
  9. 13 Oct 2017, 1 commit
    • Adding the Adam Optimizer operator (#4733) · 11680037
      Committed by Abhinav Arora
      * add adam op
      
      moment1_out = beta1 * moment1 + (1 - beta1) * grad
      moment2_out = beta2 * moment2 + (1 - beta2) * grad * grad
      moment1_hat = moment1_out / (1 - beta1^t)
      moment2_hat = moment2_out / (1 - beta2^t)
      param_out = param - learning_rate * moment1_hat / (sqrt(moment2_hat) + epsilon)
      
      * fix moment 2
      
      * Adding the Adam optimization operator
      
      * Adding more tests for Adam op
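      For reference, a minimal host-side sketch of the update rule above,
      assuming a 1-based timestep t (names are illustrative, not the
      operator's actual code):

        #include <cmath>
        #include <cstddef>
        #include <vector>

        void AdamUpdate(std::vector<float>& param, std::vector<float>& m1,
                        std::vector<float>& m2,
                        const std::vector<float>& grad, float lr,
                        float beta1, float beta2, float eps, int t) {
          const float bc1 = 1.0f - std::pow(beta1, t);  // 1 - beta1^t
          const float bc2 = 1.0f - std::pow(beta2, t);  // 1 - beta2^t
          for (std::size_t i = 0; i < param.size(); ++i) {
            m1[i] = beta1 * m1[i] + (1.0f - beta1) * grad[i];
            m2[i] = beta2 * m2[i] + (1.0f - beta2) * grad[i] * grad[i];
            const float m1_hat = m1[i] / bc1;
            const float m2_hat = m2[i] / bc2;
            param[i] -= lr * m1_hat / (std::sqrt(m2_hat) + eps);
          }
        }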
  10. 07 Aug 2017, 1 commit
  11. 04 Aug 2017, 1 commit
  12. 31 Jul 2017, 1 commit
  13. 25 Jul 2017, 1 commit
  14. 19 Jul 2017, 1 commit