1. 14 Sep 2021, 1 commit
  2. 17 Aug 2021, 1 commit
  3. 22 Jul 2021, 1 commit
  4. 14 Jul 2021, 1 commit
  5. 13 May 2021, 1 commit
    • [NPU] support global accumulator for adam (#32780) · dace3fd5
      Committed by Leo Chen. (A hedged sketch of what the flag changes follows this entry.)
      * add use_global_beta_pow
      
      * update npu kernel
      
      * update python api
      
      * refine code
      
      * add ut for use_global_beta_pow
      
      * fix npu kernel
      
      * add ut for api
      
      * add ut for exception
      
      * add ut for save/load
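What the new flag changes, in effect: by default Adam keeps one (beta1^t, beta2^t) accumulator pair per parameter, and every step updates all of them; with use_global_beta_pow the whole model shares a single pair. A minimal NumPy sketch of that bookkeeping (function and variable names are mine, not Paddle's):

```python
import numpy as np

def step_beta_pows(beta_pows, beta1=0.9, beta2=0.999):
    """Advance every (beta1^t, beta2^t) accumulator pair by one step."""
    for acc in beta_pows:
        acc[0] *= beta1
        acc[1] *= beta2

num_params = 100
per_param = [np.ones(2) for _ in range(num_params)]  # default: a pair per parameter
global_pow = [np.ones(2)]                            # use_global_beta_pow=True

step_beta_pows(per_param)   # 2 * num_params tiny scalar updates per step
step_beta_pows(global_pow)  # 2 scalar updates per step, total
```

On NPUs, where launching many tiny kernels is disproportionately expensive, collapsing those per-parameter updates into one shared pair is the point of the PR.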
  6. 28 Apr 2021, 1 commit
  7. 08 Jan 2021, 1 commit
    • Support pure fp16 training for AMP API. (#29544) · 7f7dfccf
      Committed by Zhen Wang. (A hedged usage sketch follows this entry.)
      * add cast ops before and after unsupported fp16 ops.
      
      * Keep partial net in FP32 pattern.
      
      * Support check_finite_and_unscale and update_loss_scaling for FP16 calculation mode.
      
      * Add fp16 support for adam op.
      
      * add multi precision attr for adam.
      
      * Fix the bug of test_multi_precision_fp16_train UT.
      
      * Code format for CI.
      
      * Fix the redefine error about MPTypeTrait on windows.
      
      * fix bugs of the _create_accumulators func in Momentum.
      
      * fix bug when inserting post cast op.
      
      * Add the update_loss_scaling op in allow_set of UnusedVarCheck.
      
      * Update for ci coverage.
      
      * Add some doc for OptimizerWithMixedPrecision.
      
      * Fix the code style.
      
      * Improve the doc of `amp_init`.
      
      * Adjust fp16 testing for users who define the inference program separately.
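A hedged sketch of how the pure-fp16 path described here is typically driven from the static-graph AMP API. Module paths and argument defaults shifted across releases, so treat the exact names (paddle.static.amp.decorate, amp_init, multi_precision) as illustrative rather than authoritative:

```python
import paddle

paddle.enable_static()
main_prog, startup_prog = paddle.static.Program(), paddle.static.Program()

with paddle.static.program_guard(main_prog, startup_prog):
    x = paddle.static.data('x', [None, 784], dtype='float32')
    y = paddle.static.data('y', [None, 1], dtype='int64')
    loss = paddle.nn.functional.cross_entropy(
        paddle.static.nn.fc(x, size=10), y)
    # multi_precision keeps FP32 master weights behind the FP16 ones
    opt = paddle.optimizer.Adam(learning_rate=1e-3, multi_precision=True)
    opt = paddle.static.amp.decorate(opt, use_pure_fp16=True)
    opt.minimize(loss)

exe = paddle.static.Executor(paddle.CUDAPlace(0))
exe.run(startup_prog)
opt.amp_init(paddle.CUDAPlace(0))  # cast initialized FP32 weights to FP16
```

The amp_init call is what the cast-insertion and "keep partial net in FP32" work above supports: parameters are created and initialized in FP32, then converted once the startup program has run.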
  8. 07 Apr 2020, 1 commit
  9. 27 Feb 2020, 1 commit
    • Refine adam op to improve performance, test=develop (#22346) · 72dde4ab
      Committed by zhaoyuchen2018. (A hedged sketch of the fusion idea follows this entry.)
      * Refine adam op, test=develop
      
      * Fuse kernels together to reduce cpu time.
      
      * Refine paddle enforce, test=develop
      
      * Remove some comments, test=develop
      
      * Refine code, test=develop
      
      * Refine cuda kernel, test=develop
      
      * Refine code according to comments, test=develop
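The fusion being described, sketched in NumPy terms (the real change is in the CUDA kernels): the unfused form traverses the tensors once per sub-update and pays one launch each; the fused form does all three updates in a single pass. Bias correction is omitted for brevity.

```python
import numpy as np

def adam_unfused(p, g, m1, m2, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m1[:] = b1 * m1 + (1 - b1) * g          # "kernel" 1
    m2[:] = b2 * m2 + (1 - b2) * g * g      # "kernel" 2
    p[:] -= lr * m1 / (np.sqrt(m2) + eps)   # "kernel" 3

def adam_fused(p, g, m1, m2, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    for i in range(p.size):                 # one pass, one "launch"
        m1.flat[i] = b1 * m1.flat[i] + (1 - b1) * g.flat[i]
        m2.flat[i] = b2 * m2.flat[i] + (1 - b2) * g.flat[i] ** 2
        p.flat[i] -= lr * m1.flat[i] / (m2.flat[i] ** 0.5 + eps)
```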
  10. 24 Dec 2019, 1 commit
    • Optimize adam speed (#21777) · 51a86d2b
      Committed by Aurelius84. (A sketch of the graph-level effect follows this entry.)
      * optimize adam speed by removing _finish_update test=develop
      
      * fix SparseAdamFunctor param list test=develop
      
      * Remove scale_op in expect_list of adam_op test=develop
      
      * fix test optimizer loss assert error test=develop
      
      * modify PADDLE_ENFORCE usage test=develop
      
      * fix op_type in lamb_op.cc test=develop
      
      * fix errors ostream format bug test=develop
      
      * add betaPowOut in ngraph op test=develop
      
      * fix ngraph::op api for gcc8 test=develop
      
      * clean code test=develop
      
      * modify struct into class test=develop
      
      * remove code of beta1Tensor in lamb_op test=develop
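The graph-level effect of removing _finish_update, as a hedged sketch: previously the optimizer appended scale ops after each step to advance beta1^t and beta2^t; afterwards the adam op writes Beta1PowOut/Beta2PowOut itself, which is why scale_op leaves the op's expect_list in the tests above.

```python
# Per parameter, per step:
ops_before = ["adam", "scale", "scale"]  # adam + two scale ops for the beta pows
ops_after = ["adam"]                     # beta pows advanced inside the op

# Fewer ops means fewer kernel launches and less executor overhead.
assert len(ops_after) < len(ops_before)
```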
  11. 28 Nov 2019, 1 commit
  12. 28 Oct 2019, 1 commit
  13. 04 Sep 2019, 1 commit
  14. 21 May 2019, 1 commit
    • Add LAMB Optimizer support (#17489) · f9796b12
      Committed by Yibing Liu. (A hedged sketch of the LAMB rule follows this entry.)
      * Add LAMB optimizer
      
      * Expose LAMB Optimizer's APIs
      
      test=develop, test=document_preview
      
      * Cleanup code & doc
      
      test=develop, test=document_preview
      
      * Update lamb optimizer's formula
      
      test=develop
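A hedged NumPy sketch of the LAMB rule this PR adds, following the LAMB paper's formulation; Paddle's kernel may differ in details such as trust-ratio clipping:

```python
import numpy as np

def lamb_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
              eps=1e-6, weight_decay=0.01):
    m[:] = beta1 * m + (1 - beta1) * g
    v[:] = beta2 * v + (1 - beta2) * g * g
    m_hat = m / (1 - beta1 ** t)                    # bias correction
    v_hat = v / (1 - beta2 ** t)
    r = m_hat / (np.sqrt(v_hat) + eps) + weight_decay * w
    # Layer-wise trust ratio: scale the step by ||w|| / ||r||.
    r_norm = np.linalg.norm(r)
    trust = np.linalg.norm(w) / r_norm if r_norm > 0 else 1.0
    w -= lr * trust * r
```

The trust ratio is what distinguishes LAMB from Adam with weight decay: each layer's step is rescaled by its own norms, which is what lets LAMB hold up at very large batch sizes.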
  15. 15 Jan 2019, 1 commit
  16. 07 Jan 2019, 1 commit
  17. 14 Dec 2018, 1 commit
  18. 13 Dec 2018, 1 commit
  19. 12 Dec 2018, 2 commits
  20. 11 Dec 2018, 1 commit
  21. 16 Nov 2018, 1 commit
    • Refine operator cmake (#14413) · a2d9b344
      Committed by Wu Yi.
      * wip simplify operator framework
      
      * wip
      
      * done test=develop
      
      * clean test=develop
      
      * fix test=develop
      
      * fix deps test=develop
      
      * fix cpu build test=develop
      
      * fix tensorrt build test=develop
      
      * fix tests test=develop
      
      * fix test=develop
      
      * fix cpu build test=develop
  22. 22 Oct 2018, 1 commit
  23. 27 Jun 2018, 1 commit
  24. 11 Jun 2018, 1 commit
  25. 08 May 2018, 1 commit
    • Clean OpProtoAndCheckerMaker · 0e78cb69
      Committed by Yu Yang.
      Do not use the ctor.
      
      * Reduces the lines of code.
      * We can use virtual functions for Maker now.
      * The implementation does not care what the Maker holds, so it is easier to refactor later.
  26. 06 May 2018, 1 commit
  27. 12 Feb 2018, 1 commit
  28. 10 Feb 2018, 2 commits
  29. 20 Dec 2017, 1 commit
  30. 12 Dec 2017, 2 commits
    • Refine device context (#6433) · 61ec0b95
      Committed by QI JUN.
      The main fixes are the following:
      
      - take `DeviceContext` as the template parameter of math functors and OpKernel instead of `Place`
      - remove `eigen_device` interface in base class `DeviceContext`
      - remove `GetEigenDevice` interface in `ExecutionContext` and base class `DeviceContext`
      - remove unused `platform::EigenDeviceConverter`
      - rename `REGISTER_OP_GPU_KERNEL` to `REGISTER_OP_CUDA_KERNEL`
      - rename `USE_GPU_ONLY_OP` to `USE_CUDA_ONLY_OP`
    • Updating the LaTeX equation for Adagrad (#6009) · 35420cdf
      Committed by kavyasrinet. (The Adagrad update, for reference, follows this entry.)
      * Updating the LaTeX equation for Adagrad
      
      * Fixing LaTeX equations for adadelta, adam and adamax
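For reference, the Adagrad update those docs typeset, transcribed into LaTeX from Paddle's op comments (my transcription, not the PR's exact markup):

```latex
\begin{aligned}
moment\_out &= moment + grad \odot grad \\
param\_out  &= param - \frac{learning\_rate \cdot grad}{\sqrt{moment\_out} + \epsilon}
\end{aligned}
```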
  31. 21 Nov 2017, 1 commit
  32. 05 Nov 2017, 1 commit
  33. 20 Oct 2017, 1 commit
  34. 17 Oct 2017, 1 commit
  35. 13 Oct 2017, 1 commit
    • Adding the Adam Optimizer operator (#4733) · 11680037
      Committed by Abhinav Arora. (A NumPy transcription of the update follows this entry.)
      * add adam op
      
      moment1_out = beta1 * moment1 + (1 - beta1) * grad
      moment2_out = beta2 * moment2 + (1 - beta2) * grad * grad
      moment1_hat = moment1_out / (1 - beta1^t)
      moment2_hat = moment2_out / (1 - beta2^t)
      param_out = param - learning_rate * moment1_hat / (sqrt(moment2_hat) + epsilon)
      
      * fix moment 2
      
      * Adding the Adam optimization operator
      
      * Adding more tests for Adam op
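A minimal NumPy transcription of the update equations quoted above, to make the algebra checkable (the function and names are mine; the op itself works on tensors in place):

```python
import numpy as np

def adam_update(param, grad, moment1, moment2, t,
                learning_rate=1e-3, beta1=0.9, beta2=0.999, epsilon=1e-8):
    moment1_out = beta1 * moment1 + (1 - beta1) * grad
    moment2_out = beta2 * moment2 + (1 - beta2) * grad * grad
    moment1_hat = moment1_out / (1 - beta1 ** t)   # bias-corrected first moment
    moment2_hat = moment2_out / (1 - beta2 ** t)   # bias-corrected second moment
    param_out = param - learning_rate * moment1_hat / (
        np.sqrt(moment2_hat) + epsilon)
    return param_out, moment1_out, moment2_out
```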