1. 24 Feb 2023, 1 commit
    • Revert grad scale optimization pr (#50839) · 8a503522
      Committed by Weilong Wu
      * Revert "fixoptminizer _set_auxiliary_var bug (#50335)"
      
      This reverts commit c44005f0.
      
      * Revert "refine optimizer create accumulators (#50188)"
      
      This reverts commit 244e7546.
      
      * Revert "fix found_inf bug for custom optimizer (#50158)"
      
      This reverts commit 64573f9f.
      
      * Revert "refine amp scaler found_inf (#49864)"
      
      This reverts commit 382e9a06.
      
      * fix code format
      
      * fix conflict
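      The reverted PRs touch how found_inf is propagated between the AMP grad scaler and optimizers. For orientation, below is a minimal sketch of the standard dygraph loss-scaling loop that this code path belongs to; the network, optimizer, and hyper-parameters are placeholders, not code from these PRs.

          # Minimal sketch of dygraph AMP loss scaling with paddle.amp.GradScaler.
          # Model, optimizer, and data below are placeholders.
          import paddle

          model = paddle.nn.Linear(10, 10)
          optimizer = paddle.optimizer.SGD(learning_rate=0.01, parameters=model.parameters())
          scaler = paddle.amp.GradScaler(init_loss_scaling=1024)

          data = paddle.rand([4, 10])
          with paddle.amp.auto_cast():        # forward pass runs in mixed precision
              loss = model(data).mean()

          scaled = scaler.scale(loss)         # scale the loss to avoid fp16 gradient underflow
          scaled.backward()
          scaler.minimize(optimizer, scaled)  # skips the update when non-finite (found_inf) gradients are detected
          optimizer.clear_grad()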
  2. 06 Feb 2023, 1 commit
  3. 30 Jan 2023, 1 commit
  4. 03 Jan 2023, 1 commit
  5. 30 Dec 2022, 1 commit
  6. 25 Dec 2022, 1 commit
  7. 09 Dec 2022, 1 commit
  8. 29 Nov 2022, 1 commit
  9. 17 Nov 2022, 1 commit
  10. 23 Oct 2022, 1 commit
  11. 23 Sep 2022, 1 commit
  12. 14 Sep 2022, 1 commit
  13. 26 Aug 2022, 1 commit
  14. 15 Aug 2022, 1 commit
  15. 05 Jun 2022, 1 commit
    • 【code format check upgrade】 step2:yapf (#42944) · a072fca8
      Committed by Sing_chan
      * use yapf to format all Python files
      
      * exclude two unittest files from yapf because they rely on writing and reading files, and formatting would break them
      
      * disable diff_py_file because too many diff files cause the following command to fail
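      As an aside, yapf can also be driven programmatically; a small hedged example is sketched below. The style_config value here is an assumption, since the repository may carry its own .style.yapf configuration.

          # Reformat a Python snippet with yapf's programmatic API.
          from yapf.yapflib.yapf_api import FormatCode

          source = "def add(a,b):\n    return a+b\n"
          formatted, changed = FormatCode(source, style_config="pep8")
          print(formatted)  # the reformatted source, e.g. with "a, b" in the signature
          print(changed)    # True if yapf modified the input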
  16. 01 Jun 2022, 1 commit
  17. 15 Apr 2022, 1 commit
  18. 28 Mar 2022, 1 commit
  19. 25 Mar 2022, 1 commit
    • Refactor Dygraph Flags (#40786) · 3085d5e4
      Committed by Jiabin Yang
      * refactor eager flags
      
      * fix flags error when we switch from eager to dygraph
      
      * fix ci problem
      
      * fix ci
      
      * fix ci
      
      * merge develop and fix code style
      
      * merge develop and fix code style
      
      * fix op test error
      
      * fix op test error
      
      * fix op test error
      
      * fix op test error
      
      * fix op test error
      
      * merge develop
  20. 27 Oct 2021, 1 commit
  21. 18 Oct 2021, 1 commit
  22. 29 Sep 2021, 1 commit
  23. 26 Sep 2021, 1 commit
  24. 22 Sep 2021, 1 commit
  25. 17 Sep 2021, 1 commit
    • [AMP] Support pure fp16 training mode for dygraph (#35521) · adaeee4d
      Committed by zhangbo9674
      * add the main pure fp16 functionality in auto_cast & tracer
      
      * support master weight in dygraph for pure fp16
      
      * check mixed dtypes of fp16 & fp32 for the check_finite_and_unscale op
      
      * change pure fp16 function name
      
      * fix some bugs in auto_cast
      
      * refine auto_cast interface logic
      
      * add param _casted_by_pure_fp16 for class Layer
      
      * support a state_dict hook so models can be saved with a user-specified dtype in pure_fp16_decorator
      
      * refine pure_fp16_decorator as decorator
      
      * add unittest
      
      * add comment
      
      * add comment
      
      * support recompute
      
      * add comment for auto_cast and decorator
      
      * support to_static_state_dict for paddle.jit.save
      
      * remove the limit on the number of models and optimizers
      
      * add lookup_table to the black_list
      
      * fix momentum and layer state_dict
      
      * fix bug in layer state_dict
      
      * fix bug in layer state_dict_helper
      
      * refine unittest
      
      * refine test_momentum_op
      
      * refine interface and some code
      
      * refine amp_decorator interface
      
      * refine pure fp16 interface
      
      * refine master weight interface
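      For context, a minimal sketch of how the pure fp16 (level='O2') dygraph mode described above is typically driven through the public paddle.amp API; the network, optimizer, and data are placeholders, not code from the PR.

          # Pure fp16 (O2) dygraph training sketch; assumes a GPU that supports fp16.
          import paddle

          model = paddle.nn.Linear(10, 10)
          optimizer = paddle.optimizer.Momentum(learning_rate=0.01, parameters=model.parameters())

          # decorate() casts the layers to fp16 and can keep fp32 master weights for the optimizer.
          model, optimizer = paddle.amp.decorate(models=model, optimizers=optimizer, level='O2')
          scaler = paddle.amp.GradScaler(init_loss_scaling=1024)

          data = paddle.rand([4, 10])
          with paddle.amp.auto_cast(level='O2'):  # ops run in fp16 except black-listed ones (e.g. lookup_table)
              loss = model(data).mean()

          scaled = scaler.scale(loss)
          scaled.backward()
          scaler.minimize(optimizer, scaled)
          optimizer.clear_grad()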
  26. 14 Sep 2021, 1 commit
  27. 01 Sep 2021, 1 commit
  28. 23 Aug 2021, 1 commit
  29. 17 Aug 2021, 1 commit
  30. 02 Aug 2021, 1 commit
  31. 30 Jul 2021, 1 commit
  32. 27 Jul 2021, 1 commit
  33. 31 May 2021, 1 commit
  34. 29 Apr 2021, 1 commit
  35. 27 Apr 2021, 1 commit
  36. 23 Apr 2021, 1 commit
  37. 22 Apr 2021, 1 commit
  38. 05 Feb 2021, 1 commit
  39. 19 Jan 2021, 1 commit
  40. 08 Jan 2021, 1 commit
    • Support pure fp16 training for AMP API. (#29544) · 7f7dfccf
      Committed by Zhen Wang
      * add cast ops before and after unsupported fp16 ops.
      
      * Keep part of the net in the FP32 pattern.
      
      * Support check_finite_and_unscale and update_loss_scaling for FP16 calculation mode.
      
      * Add fp16 support for adam op.
      
      * add multi precision attr for adam.
      
      * Fix the bug of test_multi_precision_fp16_train UT.
      
      * Code format for CI.
      
      * Fix the redefine error about MPTypeTrait on windows.
      
      * fix bugs of the _create_accumulators func in Momentum.
      
      * fix bug when inserting post cast op.
      
      * Add the update_loss_scaling op in allow_set of UnusedVarCheck.
      
      * Update for ci coverage.
      
      * Add some doc for OptimizerWithMixedPrecision.
      
      * Fix the code style.
      
      * Improve the doc of `amp_init`.
      
      * Change fp16 testing for the case where users define the inference program separately.
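      For context, a hedged sketch of how the static-graph pure fp16 mode and `amp_init` mentioned above are typically wired up; the network and hyper-parameters are placeholders, and keyword names may differ slightly between Paddle releases.

          # Static-graph pure fp16 training sketch with OptimizerWithMixedPrecision and amp_init.
          import paddle
          paddle.enable_static()

          main_prog, startup_prog = paddle.static.Program(), paddle.static.Program()
          with paddle.static.program_guard(main_prog, startup_prog):
              x = paddle.static.data(name='x', shape=[None, 10], dtype='float32')
              loss = paddle.mean(paddle.static.nn.fc(x, size=10))

              optimizer = paddle.optimizer.Momentum(learning_rate=0.01, multi_precision=True)
              # decorate() wraps the optimizer in OptimizerWithMixedPrecision and inserts the
              # cast / check_finite_and_unscale / update_loss_scaling ops into the program.
              optimizer = paddle.static.amp.decorate(optimizer, use_pure_fp16=True, use_fp16_guard=False)
              optimizer.minimize(loss)

          place = paddle.CUDAPlace(0)  # pure fp16 assumes a CUDA device
          exe = paddle.static.Executor(place)
          exe.run(startup_prog)
          # amp_init casts the fp32 parameters to fp16 (keeping master weights) before training starts.
          optimizer.amp_init(place, scope=paddle.static.global_scope())
          # ... training iterations with exe.run(main_prog, feed=..., fetch_list=[loss]) would follow.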