1. 09 Dec, 2020 · 1 commit
  2. 02 Dec, 2020 · 2 commits
    • Add pure fp16 training with master weights. (#27712) · be3777a5
      Authored by Zhen Wang
      * Add the weight decay func for the momentum op.

      * Add the multi_precision function in the Momentum optimizer (see the sketch after this commit).

      * Make sure that the initial values of the master weights are the same as the fp16 weights.

      * Add static loss scaling.

      * Add the rescale_grad function in pure fp16 training.

      * Use the original momentum updating method.

      * Polish some code, such as variable names.

      * Add docstrings for APIs.

      * Update the var creation details of _create_master_weight.

      * Do not modify code for imperative momentum updating.

      * Fix the error in the test_dist_sparse_tensor_load_momentum UT.

      * Add a unit test for multi-precision fp16 training.

      * Add more unit tests for CI.

      * Use lower threshold values for allclose comparison in the test_multi_precision_fp16_train UT.

      * For CI coverage checking.
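      The master-weight mechanism these items describe fits in a few lines. Below is a minimal NumPy sketch, not the Paddle implementation; the class name and hyperparameter defaults are assumptions made for illustration.

          import numpy as np

          class MultiPrecisionMomentum:
              # Sketch of momentum with fp32 master weights: the model
              # computes in fp16, but the optimizer updates an fp32 copy
              # so small updates are not lost to fp16 rounding.

              def __init__(self, fp16_weights, lr=0.01, mu=0.9, rescale_grad=1.0):
                  # Master weights start equal to the fp16 weights,
                  # matching the initialization item above.
                  self.master = fp16_weights.astype(np.float32)
                  self.velocity = np.zeros_like(self.master)
                  self.lr, self.mu, self.rescale_grad = lr, mu, rescale_grad

              def step(self, fp16_grad):
                  # Bring the fp16 gradient back to true magnitude and
                  # apply the ordinary momentum update in fp32.
                  grad = fp16_grad.astype(np.float32) * self.rescale_grad
                  self.velocity = self.mu * self.velocity + grad
                  self.master -= self.lr * self.velocity
                  # Cast back to fp16 for the next forward pass.
                  return self.master.astype(np.float16)

      With static loss scaling, rescale_grad would be 1 / loss_scaling, so gradients of the scaled loss return to their true magnitude inside the update.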
    • Layer norm fp16 (#29169) · 7584bb50
      Authored by furnace
      * Add fp16 support for the layer_norm op.

      * Revert the layernorm API change.

      * Fix forward.

      * Fix forward.

      * Fix backward for layernorm with fp16.

      * Fix the unit test for layernorm with fp16.

      * Fix the with_mkldnn compile error for layernorm with fp16.

      * Revert to PADDLE_ENFORCE_NOT_NULL, and change static_cast<float> to static_cast<U> (see the sketch after this commit).

      * Fix the with_mkldnn compile error for layernorm with fp16.

      * Fix the with_mkldnn compile error for layernorm with fp16.
      Co-authored-by: zhiqiu <chenqiuliang@baidu.com>
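      The static_cast<U> item above is the core numerical fix: keep fp16 inputs and outputs, but accumulate the statistics in a wider type (fp32). A hedged NumPy sketch of that pattern, with a hypothetical function name and epsilon:

          import numpy as np

          def layer_norm_fp16(x_fp16, gamma, beta, eps=1e-5):
              # Upcast before accumulating: computing mean/variance
              # directly in fp16 loses precision or overflows.
              x = x_fp16.astype(np.float32)
              mean = x.mean(axis=-1, keepdims=True)
              var = x.var(axis=-1, keepdims=True)
              y = (x - mean) / np.sqrt(var + eps)
              y = y * gamma.astype(np.float32) + beta.astype(np.float32)
              # Cast back so the output dtype matches the input.
              return y.astype(np.float16)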
  3. 30 Nov, 2020 · 1 commit
  4. 18 Nov, 2020 · 1 commit
  5. 04 Nov, 2020 · 1 commit
  6. 12 Oct, 2020 · 1 commit
  7. 23 Sep, 2020 · 1 commit
  8. 14 Sep, 2020 · 1 commit
    • Update amp_check_finite_and_scale_op and add an update_loss_scaling op for static graph amp training. (#26240) · d708b210
      Authored by Zhen Wang
      
      * Update amp_check_finite_and_scale_op for static amp.

      * Use amp_check_finite_and_scale in static graph amp.

      * Update grads to zero when grads contain infinite values (as for the amp_check_finite_and_scale op).

      * Add the update_loss_scaling op in C++.

      * Add a unit test for update_loss_scaling_op.

      * Update the doc of the check_finite_and_unscale op.

      * Skip the gradient update step when the gradients contain infinite values (see the sketch after this commit).

      * Update the way grads are zeroed.

      * Update test_update_loss_scaling_op.py.

      * Log info when infinite grads are found.

      * Add a unit test for the UpdateLossScaling layer.
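      Together, check_finite_and_unscale and update_loss_scaling implement dynamic loss scaling. The sketch below illustrates the scheme in NumPy; it is not the op implementation, and the growth/backoff constants are conventional defaults assumed here.

          import numpy as np

          def update_loss_scaling(grads, scale, good_steps,
                                  growth_interval=1000, growth=2.0, backoff=0.5):
              # One dynamic loss-scaling step. Returns
              # (grads, new_scale, new_good_steps, skip_update).
              found_inf = any(not np.isfinite(g).all() for g in grads)
              if found_inf:
                  # Infinite grads: zero them, shrink the scale, and
                  # report that the parameter update must be skipped.
                  zeroed = [np.zeros_like(g) for g in grads]
                  return zeroed, scale * backoff, 0, True
              # All grads finite: unscale them, and grow the scale after
              # enough consecutive good steps.
              grads = [g / scale for g in grads]
              good_steps += 1
              if good_steps >= growth_interval:
                  scale, good_steps = scale * growth, 0
              return grads, scale, good_steps, False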
  9. 03 Sep, 2020 · 1 commit
  10. 15 Apr, 2020 · 1 commit
  11. 08 Jan, 2020 · 1 commit
  12. 26 Nov, 2019 · 1 commit
  13. 30 Oct, 2019 · 1 commit
  14. 15 Oct, 2019 · 1 commit
  15. 10 Oct, 2019 · 1 commit
  16. 19 Sep, 2019 · 1 commit
  17. 10 Sep, 2019 · 1 commit
  18. 06 Sep, 2019 · 1 commit
  19. 03 Sep, 2019 · 1 commit
  20. 31 Aug, 2019 · 1 commit
  21. 28 Jun, 2019 · 1 commit
  22. 25 Jun, 2019 · 1 commit
  23. 16 May, 2019 · 1 commit
    • init auto loss scaling (#17194) · 30e178fa
      Authored by Jie Fang
      * Init auto loss scaling.

      test=develop

      * Change API.spec.

      * Change ifelse to switch and use reduce_sum to optimize the isfinite check (see the sketch after this commit).

      test=develop

      * Remove redundant code.
      
      test=develop
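      A hedged sketch of the reduce_sum optimization from the third item: Inf and NaN propagate through a sum, so one reduction to a scalar replaces an elementwise isfinite test over every gradient. The function name is hypothetical.

          import numpy as np

          def has_nonfinite(grads):
              # Reduce each tensor to one scalar; Inf or NaN anywhere
              # makes the total non-finite (Inf + -Inf gives NaN, which
              # is still caught).
              total = sum(float(np.sum(g, dtype=np.float32)) for g in grads)
              return not np.isfinite(total)

      The trade-off is a possible false positive when large finite values overflow the sum, which only costs one conservatively skipped step.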
  24. 25 Apr, 2019 · 1 commit
    • Init mixed precision training interface (#16856) · beda7825
      Authored by Yibing Liu
      * Init mixed precision training interface (see the sketch after this commit).

      * Add fp16 test script.

      test=develop

      * All initializers support float16.

      test=develop

      * Clean up code and add more code annotations.

      test=develop

      * Update API spec.

      test=develop

      * Add a usage example in the doc.

      test=develop
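      The interface initialized here centers on one pair of operations, sketched below in NumPy. This illustrates static loss scaling, not the Paddle API; the constant and function names are assumptions.

          import numpy as np

          INIT_LOSS_SCALING = 2.0 ** 15  # assumed; any large power of two

          def scale_loss(loss):
              # Multiply the loss before backward; by the chain rule every
              # gradient is scaled by the same factor, lifting small fp16
              # gradients out of the underflow range.
              return loss * INIT_LOSS_SCALING

          def unscale_grads(grads):
              # Divide by the same factor before the optimizer step so
              # parameter updates keep their true magnitude.
              return [g.astype(np.float32) / INIT_LOSS_SCALING for g in grads]

      In use: scale the loss, run backward on the scaled loss, unscale the resulting gradients, then apply the wrapped optimizer.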