1. 20 Dec 2021, 1 commit
    • Add multi_tensor for momentum optimizer and clear_grads (#37564) · 0cc5e22c
      Committed by zhangbo9674
      * add multi_tensor for momentum and clear_grads for optimizer
      * fix bug for dygraph
      * add unittest
      * refine comment
      * add param_group
      * refine regularization logic
      * del clear_grads
      * add clear_grads
      * add dispensable check of None
      * refine clear_grad
      * fix build bug
      * refine code by comment
      * refine code
      * add multi tensor check
      * refine param_group update
      * add multi tensor for static mode
      * refine comments
      * delete useless comma for momentum
      * refine comment for momentum
      * refine code by comment
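The multi_tensor path in this commit fuses the per-parameter momentum updates for a whole parameter group into a single pass, instead of launching one update per parameter. A minimal pure-Python sketch of that idea (the function name and list-of-lists representation are illustrative, not Paddle's actual kernel or API):

```python
def multi_tensor_momentum(params, grads, velocities, lr=0.1, mu=0.9):
    """Apply one momentum step to an entire group of parameters at once.

    One call covers the whole group, standing in for the fused kernel
    that replaces N separate per-parameter launches.  Each "tensor" is
    just a list of floats here for illustration.
    """
    for p, g, v in zip(params, grads, velocities):
        for i in range(len(p)):
            v[i] = mu * v[i] + g[i]   # velocity (accumulator) update
            p[i] -= lr * v[i]         # in-place parameter update
    return params, velocities
```

With `mu=0.9` and `lr=0.1`, a parameter of 1.0 with gradient 0.5 and zero initial velocity steps to 1.0 - 0.1 * 0.5 = 0.95 after one call.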
  2. 17 Dec 2021, 1 commit
    • Refine some AMP operators for BERT (#37923) · d80fe268
      Committed by sneaxiy
      * support multi precision update for LAMB
      * hide some api
      * fix ci uts
      * fix lamb output of dygraph
      * remove some changes to some PR
      * try to fix Py3 CI compile error
      * fix test_imperative_optimizer, add lars ut, add layer_norm ut
      * fix ut, fix format
      * fix ut
      * fix windows ci
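The "multi precision update" in this commit keeps an fp32 master copy of each weight so that tiny optimizer steps are not lost to fp16 rounding. A self-contained sketch of the principle, simulating half precision with the standard `struct` module (the helper names are illustrative, not Paddle's API):

```python
import struct

def fp16(x):
    """Round a Python float to IEEE half precision ('e' struct format)."""
    return struct.unpack('e', struct.pack('e', x))[0]

def master_weight_step(master_w, grad, lr=0.01):
    """Multi-precision SGD step: the fp32 master copy accumulates the
    full-precision update; the model only sees its fp16 cast."""
    master_w = master_w - lr * grad       # update happens in fp32
    return master_w, fp16(master_w)       # fp16 copy used by the model
```

A step of size 4e-5 against a weight of 1.0 is below fp16 resolution near 1.0 (about 4.9e-4), so the fp16 copy stays at 1.0 while the fp32 master copy correctly records the change; without the master copy, such updates would silently vanish.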
  3. 27 Oct 2021, 1 commit
  4. 18 Oct 2021, 1 commit
  5. 29 Sep 2021, 1 commit
  6. 26 Sep 2021, 1 commit
  7. 22 Sep 2021, 1 commit
    • [AMP] Support pure fp16 training mode for dygraph (#35521) · adaeee4d
      Committed by zhangbo9674
      * add pure fp16 major function in auto_cast & tracer
      * support master weight in dygraph for pure fp16
      * check mix dtype of fp16 & fp32 for check_finite_and_unscale op
      * change pure fp16 function name
      * fix some bugs in auto_cast
      * refine auto_cast interface logic
      * add param _casted_by_pure_fp16 for class Layer
      * support state_dict hook for saving the model with a user-appointed dtype in pure_fp16_decorator
      * refine pure_fp16_decorator as decorator
      * add unittest
      * add comment
      * add comment
      * support recompute
      * add comment for auto_cast and decorator
      * support to_static_state_dict for paddle.jit.save
      * remove the limit on the number of models and optimizers
      * add lookup_table to black_list
      * fix momentum and layer state_dict
      * fix bug in layer state_dict
      * fix bug in layer state_dict_helper
      * refine unittest
      * refine test_momentum_op
      * refine interface and some code
      * refine amp_decorator interface
      * refine pure fp16 interface
      * refine master weight interface
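Pure fp16 mode runs (almost) every op in half precision, but the commit keeps a black list of ops that must stay fp32 (it explicitly adds lookup_table). A toy dispatcher illustrating that routing, again simulating half precision with `struct`; `run_op` and `BLACK_LIST` are hypothetical names, not Paddle's auto_cast internals:

```python
import struct

def fp16(x):
    """Round a Python float to IEEE half precision ('e' struct format)."""
    return struct.unpack('e', struct.pack('e', x))[0]

# lookup_table is named in the commit; a real list would contain more ops.
BLACK_LIST = {"lookup_table"}

def run_op(name, fn, *inputs):
    """Pure-fp16 dispatch sketch: black-listed ops keep fp32 inputs,
    everything else has its inputs cast down to fp16 first."""
    if name in BLACK_LIST:
        return fn(*inputs)                  # precision-sensitive: stay fp32
    return fn(*(fp16(x) for x in inputs))   # pure-fp16 fast path
```

Running 0.1 * 0.1 through a black-listed op reproduces the exact fp32 product, while the fp16 path picks up a small rounding error from casting 0.1 down to half precision.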
  9. 14 Sep 2021, 1 commit
  10. 10 Sep 2021, 1 commit
    • Fix warning (#34875) · 966f042d
      Committed by sunzhongkai588
      * fix warning error, test=document_fix
      * fix warning error, test=document_fix
      * fix warning error, test=document_fix
      * fix warning error, test=document_fix
      * fix warning error, test=document_fix
      * fix warning error, test=document_fix
      * fix warning error, test=document_fix
  11. 01 Sep 2021, 1 commit
  12. 23 Aug 2021, 1 commit
  13. 17 Aug 2021, 1 commit
  14. 02 Aug 2021, 1 commit
  15. 30 Jul 2021, 1 commit
  16. 27 Jul 2021, 2 commits
  17. 15 Jul 2021, 1 commit
  18. 14 Jul 2021, 1 commit
  19. 08 Jul 2021, 1 commit
  20. 24 Jun 2021, 1 commit
  21. 21 Jun 2021, 1 commit
  22. 15 Jun 2021, 1 commit
  23. 11 Jun 2021, 1 commit
  24. 10 Jun 2021, 1 commit
  25. 31 May 2021, 1 commit
  26. 29 Apr 2021, 1 commit
  27. 28 Apr 2021, 1 commit
  28. 27 Apr 2021, 1 commit
  29. 23 Apr 2021, 1 commit
  30. 22 Apr 2021, 1 commit
  31. 21 Apr 2021, 1 commit
  32. 25 Mar 2021, 1 commit
  33. 24 Mar 2021, 1 commit
  34. 05 Feb 2021, 1 commit
  35. 04 Feb 2021, 1 commit
  36. 01 Feb 2021, 1 commit
  37. 19 Jan 2021, 1 commit
  38. 17 Jan 2021, 1 commit
  39. 08 Jan 2021, 1 commit
    • Support pure fp16 training for AMP API. (#29544) · 7f7dfccf
      Committed by Zhen Wang
      * add cast ops before and after unsupported fp16 ops.
      * Keep partial net in FP32 pattern.
      * Support check_finite_and_unscale and update_loss_scaling for FP16 calculation mode.
      * Add fp16 support for adam op.
      * add multi precision attr for adam.
      * Fix the bug of test_multi_precision_fp16_train UT.
      * Code format for CI.
      * Fix the redefine error about MPTypeTrait on windows.
      * fix bugs of the _create_accumulators func in Momentum.
      * fix bug when inserting post cast op.
      * Add the update_loss_scaling op to the allow_set of UnusedVarCheck.
      * Update for ci coverage.
      * Add some doc for OptimizerWithMixedPrecision.
      * Fix the code style.
      * Improve the doc of `amp_init`.
      * Change for fp16 testing if users have the infer program defined in a separate way.
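The check_finite_and_unscale and update_loss_scaling ops named in this commit implement dynamic loss scaling: gradients from the scaled backward pass are unscaled, overflow is detected, and the scale factor is adapted over time. A minimal sketch of that control flow (the window and growth/shrink factors are the common AMP defaults, not necessarily Paddle's, and these plain-Python functions only mirror the ops' logic):

```python
import math

def check_finite_and_unscale(grads, loss_scale):
    """Divide gradients by the loss scale and report whether any value
    overflowed to inf/nan during the scaled backward pass."""
    found_inf = any(not math.isfinite(g) for g in grads)
    if found_inf:
        return grads, True                       # caller skips this update
    return [g / loss_scale for g in grads], False

def update_loss_scaling(scale, found_inf, good_steps,
                        incr_every=2000, incr=2.0, decr=0.5):
    """Dynamic loss scaling: shrink on overflow, grow after a long run
    of overflow-free steps; good_steps counts the current run."""
    if found_inf:
        return max(scale * decr, 1.0), 0         # halve, reset the counter
    good_steps += 1
    if good_steps >= incr_every:
        return scale * incr, 0                   # double after a clean window
    return scale, good_steps
```

On overflow the scale halves and the counter resets; after `incr_every` consecutive finite steps it doubles, keeping the scale as large as fp16 gradients can tolerate.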