- 24 Feb 2023, 1 commit
Committed by Weilong Wu
* Revert "fixoptminizer _set_auxiliary_var bug (#50335)". This reverts commit c44005f0.
* Revert "refine optimizer create accumulators (#50188)". This reverts commit 244e7546.
* Revert "fix found_inf bug for custom optimizer (#50158)". This reverts commit 64573f9f.
* Revert "refine amp scaler found_inf (#49864)". This reverts commit 382e9a06.
* fix code format
* fix conflict

- 06 Feb 2023, 1 commit
Committed by wanghuancoder
* refine optimizer create accumulators
* refine

- 30 Jan 2023, 1 commit
Committed by wanghuancoder
* refine _found_inf

- 03 Jan 2023, 1 commit
Committed by 骑马小猫

- 30 Dec 2022, 1 commit
Committed by Sanbu
* 1219
* temporarily change the num_diff_files limit, test=document_fix
* Revert "temporarily change the num_diff_files limit, test=document_fix". This reverts commit 8e70f00ef468d2dad0e38b3da06295ed62990d20.
* for codestyle
* remove duplicate license
* `static mode` -> `static graph mode`
* Update hybrid_parallel_inference.py
* Update layer_function_generator.py
* Update manipulation.py
* reset

Co-authored-by: Ligoml <39876205+Ligoml@users.noreply.github.com>
Co-authored-by: SigureMo <sigure.qaq@gmail.com>

- 25 Dec 2022, 1 commit
Committed by wanghuancoder

- 09 Dec 2022, 1 commit
Committed by cyber-pioneer

- 29 Nov 2022, 1 commit
Committed by Nyakku Shigure
* isort all files
* revert conflicting files
* revert conflicting files
* revert conflicting files

- 17 Nov 2022, 1 commit
Committed by Yuang Liu
Support bfloat16 for the adamw and adam optimizers. Fit the lr for pure bf16 training with tensor fusion. (#48041)
* add bfloat16 for adamw
* set lr not to bfloat16 for pure bf16 training
* update the logic
* update the adamw optimizer
* support bfloat16 for adam
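
For reference, a minimal sketch of what pure-bf16 dygraph training with AdamW looks like on a recent Paddle release; the toy model and data are assumptions, as is the availability of dtype='bfloat16' in paddle.amp.auto_cast/decorate on your build, and a bf16-capable device is required:

```python
import paddle

# Assumed toy model and data, not taken from this commit.
model = paddle.nn.Linear(16, 16)
opt = paddle.optimizer.AdamW(learning_rate=1e-3, parameters=model.parameters())

# O2 decoration for pure bf16; the learning rate itself stays float32,
# which matches the "set lr not to bfloat16" bullet above.
model, opt = paddle.amp.decorate(models=model, optimizers=opt,
                                 level='O2', dtype='bfloat16')

x = paddle.rand([4, 16])
with paddle.amp.auto_cast(level='O2', dtype='bfloat16'):
    loss = model(x).mean()

loss.backward()   # bf16 usually needs no loss scaling, unlike fp16
opt.step()
opt.clear_grad()
```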

- 23 Oct 2022, 1 commit
Committed by Nyakku Shigure
* update config
* re-blacken python code
* temporarily disable date and diff_py_file
* skip a format

- 23 Sep 2022, 1 commit
Committed by Nyakku Shigure

- 14 Sep 2022, 1 commit
Committed by Nyakku Shigure
* trim trailing whitespace
* fix `.cmake-format.py`
* revert npu ut changes, avoid npu ci error

- 26 Aug 2022, 1 commit
Committed by wanghuancoder

- 15 Aug 2022, 1 commit
Committed by Charles-hit

- 05 Jun 2022, 1 commit
Committed by Sing_chan
* use yapf to format all python files
* yapf excludes two unittest files because they rely on writing and reading files, and formatting would break them
* disable diff_py_file because too many diff files cause the following command to fail

- 01 Jun 2022, 1 commit
Committed by Guoxia Wang
* fix the bug that attributes set in an adamw param group did not take effect
* fix undefined variable
* fix api example typo
* add unittest
* fix unittest typo

- 15 Apr 2022, 1 commit
Committed by chentianyu03
* add adamw yaml
* fix test case error
* make the name of weight and bias in linear1 and linear2 to be constant

- 28 Mar 2022, 1 commit
Committed by Jiabin Yang

- 25 Mar 2022, 1 commit
Committed by Jiabin Yang
* refactor eager flags
* fix flags error when we switch from eager to dygraph
* fix ci problem
* fix ci
* fix ci
* merge develop and fix code style
* merge develop and fix code style
* fix op test error
* fix op test error
* fix op test error
* fix op test error
* fix op test error
* merge develop

- 27 Oct 2021, 1 commit
Committed by zhaoyingli

- 18 Oct 2021, 1 commit
Committed by taixiurong
[XPU AMP] (#36439)
1. xpu supports gradient accumulation
2. xpu supports creating tensors in dygraph
3. xpu supports updating weight params in amp

- 29 Sep 2021, 1 commit
Committed by zhaoyingli
* update func name
* skip cpu
* update unittest
* update unittest

- 26 Sep 2021, 1 commit
Committed by zhangbo9674
* adam to adamw in AdamW
* add lr_ratio in adamw
* refine logic bug in cpu adamw
* delete fix bug for cpu adamw
* delete fix bug for cpu adamw

- 22 Sep 2021, 1 commit
Committed by zhaoyingli

- 17 Sep 2021, 1 commit
Committed by zhangbo9674
* add pure fp16 major function in auto_cast & tracer
* support master weight in dygraph for pure fp16
* check mix dtype of fp16&fp32 for check_finite_and_unscale op
* change pure fp16 function name
* refine some bug in auto_cast
* refine auto_cast interface logic
* add param _casted_by_pure_fp16 for class Layer
* support state_dict hook for saving the model in a user-appointed dtype in pure_fp16_decorator
* refine pure_fp16_decorator as decorator
* add unittest
* add comment
* add comment
* support recompute
* add comment for auto_cast and decorator
* support to_static_state_dict for paddle.jit.save
* remove the limit on the number of models and optimizers
* add lookup_table in black_list
* fix momentum and layer state_dict
* fix bug in layer state_dict
* fix bug in layer state_dict_helper
* refine unittest
* refine test_momentum_op
* refine interface and some code
* refine amp_decorator interface
* refine pure fp16 interface
* refine master weight interface
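
As a rough illustration of the workflow these changes enable, here is a minimal pure-fp16 (O2) dygraph training sketch; the toy model and data are assumptions, a CUDA device is assumed, and the exact API surface may differ between Paddle versions:

```python
import paddle

model = paddle.nn.Linear(16, 16)
opt = paddle.optimizer.AdamW(learning_rate=1e-3, parameters=model.parameters())

# O2 decoration casts the weights to fp16 and lets the optimizer keep fp32 master weights.
model, opt = paddle.amp.decorate(models=model, optimizers=opt, level='O2')

scaler = paddle.amp.GradScaler(init_loss_scaling=2.**15)
x = paddle.rand([4, 16])

with paddle.amp.auto_cast(level='O2'):
    loss = model(x).mean()

scaled = scaler.scale(loss)    # scale the loss to avoid fp16 gradient underflow
scaled.backward()
scaler.minimize(opt, scaled)   # unscale, check for inf/nan, then run the optimizer step
opt.clear_grad()
```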

- 14 Sep 2021, 1 commit
Committed by zhaoyingli
* add layerwise learning rate for adamw
* fix format
* add unittest
* add NotImplementedError
* add gpu unittest
* update gpuplace
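
A hedged sketch of the layerwise learning rate feature this commit describes, using AdamW's lr_ratio callback; the parameter-name matching below is an assumption for illustration, and per the commit the feature is only implemented for GPU places:

```python
import paddle

model = paddle.nn.Sequential(
    paddle.nn.Linear(16, 16),   # its parameters get names like "linear_0.w_0"
    paddle.nn.Linear(16, 4),
)

def layerwise_lr(param):
    # Return a per-parameter multiplier on the base learning rate.
    return 0.1 if 'linear_0' in param.name else 1.0

opt = paddle.optimizer.AdamW(
    learning_rate=1e-3,
    parameters=model.parameters(),
    weight_decay=0.01,
    lr_ratio=layerwise_lr,      # layerwise learning rate hook
)
```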

- 01 Sep 2021, 1 commit
Committed by zhaoyingli

- 23 Aug 2021, 1 commit
Committed by zhaoyingli
* adamw support cuda
* adamw support cuda

- 17 Aug 2021, 1 commit
Committed by Roc

- 02 Aug 2021, 1 commit
Committed by sunzhongkai588
* fix paddle.optimizer test=document_fix
* fix paddle.optimizer test=document_fix

- 30 Jul 2021, 1 commit
Committed by wangguanzhong
* fix lr in param group
* add unittest for adamw

- 27 Jul 2021, 1 commit
Committed by WangXi

- 31 May 2021, 1 commit
Committed by wangguanzhong
* support params groups, test=develop
* simplify updating opt attr
* update according to review
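
For context, a minimal sketch of the parameter-groups usage this commit enables; the model is an assumption, and the group keys shown (per-group learning_rate and weight_decay overrides) follow the public paddle.optimizer.AdamW documentation, whose docstring describes their exact semantics:

```python
import paddle

backbone = paddle.nn.Linear(16, 16)
head = paddle.nn.Linear(16, 4)

# Each dict is one parameter group; keys not given in a group fall back to the
# optimizer-level defaults (learning_rate=1e-3, weight_decay=0.01 here).
opt = paddle.optimizer.AdamW(
    learning_rate=1e-3,
    parameters=[
        {'params': backbone.parameters()},
        {'params': head.parameters(), 'learning_rate': 0.1, 'weight_decay': 0.001},
    ],
    weight_decay=0.01,
)
```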

- 29 Apr 2021, 1 commit
Committed by zhiboniu
add `__all__ = []` to python files not in the public API list; `import *` is only supported in public API list files (#32643)

- 27 Apr 2021, 1 commit
Committed by xiemoyuan
* fixed docs.
* Fixed docs. test=document_fix; code bak; fixed docs. test=document_fix
* Revert to previous version of python/paddle/fluid/backward.py
* fixed bugs.
* test=document_fix. Fixed examples.

- 23 Apr 2021, 1 commit
Committed by zhiboniu

- 22 Apr 2021, 1 commit
Committed by hutuxian

- 05 Feb 2021, 1 commit
Committed by Zhen Wang
* Use correct master weights in AdamW.
* Just modify the master weight.
* Update for CI Coverage.
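
For readers unfamiliar with master weights: a brief, hedged sketch of how they are typically enabled for AdamW via the multi_precision flag, which keeps an fp32 copy of fp16 parameters inside the optimizer; the model is an assumption and a GPU is assumed for the O2 decoration:

```python
import paddle

# Assumed fp16 parameters, e.g. produced by O2 AMP decoration.
model = paddle.nn.Linear(16, 16)
model = paddle.amp.decorate(models=model, level='O2')

opt = paddle.optimizer.AdamW(
    learning_rate=1e-3,
    parameters=model.parameters(),
    multi_precision=True,   # maintain fp32 master weights for the fp16 params
)
```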

- 19 Jan 2021, 1 commit
Committed by WangXi

- 08 Jan 2021, 1 commit
Committed by Zhen Wang
* add cast ops before and after unsupported fp16 ops
* Keep partial net in FP32 pattern
* Support check_finite_and_unscale and update_loss_scaling for FP16 calculation mode
* Add fp16 support for adam op
* add multi precision attr for adam
* Fix the bug of test_multi_precision_fp16_train UT
* Code format for CI
* Fix the redefine error about MPTypeTrait on windows
* fix bugs of the _create_accumulators func in Momentum
* fix bug when inserting post cast op
* Add the update_loss_scaling op in allow_set of UnusedVarCheck
* Update for ci coverage
* Add some doc for OptimizerWithMixedPrecision
* Fix the code style
* Improve the doc of `amp_init`
* Change for fp16 testing if users have the infer program defined in a separate way
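
A heavily hedged sketch of the static-graph mixed-precision flow these changes build up (decorate inserts the cast, check_finite_and_unscale and update_loss_scaling ops; amp_init handles the fp32-to-fp16 parameter cast and master weights). The network, shapes, and loss-scaling values are assumptions, and a CUDA device is assumed:

```python
import paddle

paddle.enable_static()
main_prog, startup_prog = paddle.static.Program(), paddle.static.Program()

with paddle.static.program_guard(main_prog, startup_prog):
    x = paddle.static.data(name='x', shape=[None, 16], dtype='float32')
    loss = paddle.mean(paddle.static.nn.fc(x, size=1))

    opt = paddle.optimizer.Adam(learning_rate=1e-3, multi_precision=True)
    # Wrap the optimizer with the static AMP decorator (pure-fp16 mode).
    opt = paddle.static.amp.decorate(
        opt,
        init_loss_scaling=128.0,
        use_dynamic_loss_scaling=True,
        use_pure_fp16=True,
        use_fp16_guard=False,
    )
    opt.minimize(loss)

place = paddle.CUDAPlace(0)
exe = paddle.static.Executor(place)
exe.run(startup_prog)
# Cast fp32 parameters to fp16 and set up master weights before training.
opt.amp_init(place, scope=paddle.static.global_scope())
```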