- 22 May 2023, 1 commit

Committed by Meteor Liu

* [dygraph] Unify _non_static_mode(), in_dygraph_mode() and in_dynamic_mode()
* Fix a cyclic reference that caused a partial import
* Fix a bad change
* Fix bad imports
* Fix unit-test failures caused by the in_dynamic_mode change
* Fix usages of in_dynamic_mode() / in_dygraph_mode()
* Revert python3 to python in .pre-commit-config.yaml
* Fix merge conflicts

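For context, the public-facing check after this unification is paddle.in_dynamic_mode(); the _non_static_mode()/in_dygraph_mode() helpers are framework internals. A minimal sketch of how the public API reports the current execution mode:

```python
import paddle

# Dynamic (eager) mode is the default in Paddle 2.x.
print(paddle.in_dynamic_mode())   # True

paddle.enable_static()            # switch to static graph mode
print(paddle.in_dynamic_mode())   # False

paddle.disable_static()           # back to dynamic mode
print(paddle.in_dynamic_mode())   # True
```
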
- 18 May 2023, 1 commit

Committed by shaojie_wang

* Add master gradients on the static graph
* Add a unit test for bf16 master grad on the static graph
* Use float16 as the V100 test dtype
* Only skip GPUs that do not support bf16
* Use a linear layer to test master grad
* 1. Push master grad creation before all optimizer ops; 2. remove a useless unittest; 3. use a function to create master grad states

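This commit targets the static-graph AMP path. For reference, the dygraph counterpart is, to our understanding, the master_grad flag of paddle.amp.decorate; treat the flag and the bf16 setup below as assumptions rather than part of this change. A rough sketch:

```python
import paddle

model = paddle.nn.Linear(16, 16)
opt = paddle.optimizer.AdamW(learning_rate=1e-3, parameters=model.parameters())

# Assumption: master_grad=True keeps fp32 master gradients alongside the
# low-precision ones; running in bf16 requires a bf16-capable GPU.
model, opt = paddle.amp.decorate(
    models=model, optimizers=opt, level='O2', dtype='bfloat16', master_grad=True
)
```
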
- 11 May 2023, 1 commit

Committed by jjyaoao

- 27 April 2023, 1 commit

Committed by hua-zi

* Update adamw.py: out.backward() -> loss.backward()
* Update adamw.py

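The corrected usage pattern from the docstring looks roughly like this (a minimal sketch, not the exact example in adamw.py):

```python
import paddle

linear = paddle.nn.Linear(10, 1)
opt = paddle.optimizer.AdamW(
    learning_rate=0.001,
    parameters=linear.parameters(),
    weight_decay=0.01,
)

x = paddle.rand([4, 10])
out = linear(x)
loss = paddle.mean(out)
loss.backward()   # the fix: backpropagate from the loss, not from the raw output
opt.step()
opt.clear_grad()
```
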
- 04 April 2023, 1 commit

Committed by Nyakku Shigure

Co-authored-by: xiongkun <xiongkun03@baidu.com>

- 23 March 2023, 1 commit

Committed by PuQing

[CodeStyle][C408][C409][C410] Fix unnecessary <dict/list/tuple> calls and unnecessary <list/tuple> passed to <list/tuple>() (#51928)

* autofix
* add select config
* autofix C410
* add C410 select

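These codes come from the flake8-comprehensions ruleset; the autofix rewrites patterns like the following (illustrative snippets, not actual Paddle code):

```python
# C408: unnecessary dict()/list()/tuple() call, use a literal instead
attrs = dict(axis=1, keepdim=True)     # before
attrs = {"axis": 1, "keepdim": True}   # after

# C409: unnecessary list passed to tuple(), use a tuple literal
shape = tuple([3, 224, 224])           # before
shape = (3, 224, 224)                  # after

# C410: unnecessary list passed to list(), drop the outer call
dims = list([0, 2, 3])                 # before
dims = [0, 2, 3]                       # after
```
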
- 20 March 2023, 1 commit

Committed by zhouweiwei2014

- 15 March 2023, 1 commit

Committed by wanghuancoder

* Refine _found_inf

- 10 March 2023, 1 commit

Committed by niuliling123

- 28 February 2023, 1 commit

Committed by taixiurong

- 24 February 2023, 1 commit

Committed by Weilong Wu

* Revert "fix optimizer _set_auxiliary_var bug (#50335)" (reverts commit c44005f0)
* Revert "refine optimizer create accumulators (#50188)" (reverts commit 244e7546)
* Revert "fix found_inf bug for custom optimizer (#50158)" (reverts commit 64573f9f)
* Revert "refine amp scaler found_inf (#49864)" (reverts commit 382e9a06)
* Fix code format
* Fix conflicts

- 06 February 2023, 1 commit

Committed by wanghuancoder

* Refine optimizer create accumulators
* Refine

- 30 January 2023, 1 commit

Committed by wanghuancoder

* Refine _found_inf

- 03 January 2023, 1 commit

Committed by 骑马小猫

- 30 December 2022, 1 commit

Committed by Sanbu

* 1219
* Temporarily change the num_diff_files limit, test=document_fix
* Revert "temporarily change the num_diff_files limit, test=document_fix" (reverts commit 8e70f00ef468d2dad0e38b3da06295ed62990d20)
* For codestyle
* Remove duplicate license
* `static mode` -> `static graph mode`
* Update hybrid_parallel_inference.py
* Update layer_function_generator.py
* Update manipulation.py
* Reset

Co-authored-by: Ligoml <39876205+Ligoml@users.noreply.github.com>
Co-authored-by: SigureMo <sigure.qaq@gmail.com>

- 25 December 2022, 1 commit

Committed by wanghuancoder

- 09 December 2022, 1 commit

Committed by cyber-pioneer

- 29 November 2022, 1 commit

Committed by Nyakku Shigure

* isort all files
* Revert conflicting files

- 17 November 2022, 1 commit

Committed by Yuang Liu

Support bfloat16 for the adamw and adam optimizers. Fit the lr for pure bf16 training with tensor fusion. (#48041)

* Add bfloat16 for adamw
* Keep the lr out of bfloat16 for pure bf16 training
* Update the logic
* Update the adamw optimizer
* Support bfloat16 for adam

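A minimal dygraph sketch of what this enables, assuming bf16-capable hardware (the lr handling and tensor fusion mentioned above happen internally):

```python
import paddle

model = paddle.nn.Linear(10, 10)
opt = paddle.optimizer.AdamW(learning_rate=1e-3, parameters=model.parameters())

x = paddle.rand([4, 10])
# dtype='bfloat16' selects bf16 autocast; requires a GPU/XPU with bf16 support
with paddle.amp.auto_cast(dtype='bfloat16'):
    loss = paddle.mean(model(x))

loss.backward()
opt.step()        # AdamW can now consume the resulting bf16 gradients
opt.clear_grad()
```
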
- 23 October 2022, 1 commit

Committed by Nyakku Shigure

* Update config
* Re-blacken Python code
* Temporarily disable date and diff_py_file
* Skip a format

- 23 September 2022, 1 commit

Committed by Nyakku Shigure

- 14 September 2022, 1 commit

Committed by Nyakku Shigure

* Trim trailing whitespace
* Fix `.cmake-format.py`
* Revert NPU unit-test changes to avoid an NPU CI error

- 26 August 2022, 1 commit

Committed by wanghuancoder

- 15 August 2022, 1 commit

Committed by Charles-hit

- 05 June 2022, 1 commit

Committed by Sing_chan

* Use yapf to format all Python files
* Exclude two unit-test files from yapf because they rely on writing and reading files, and formatting would break them
* Disable diff_py_file because too many diff files caused the subsequent command to fail

- 01 June 2022, 1 commit

Committed by Guoxia Wang

* Fix the AdamW bug where attributes set in a param group did not take effect
* Fix an undefined variable
* Fix an API example typo
* Add a unittest
* Fix a unittest typo

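For context, per-group attributes on paddle.optimizer.AdamW look roughly like this; the fix makes the overrides in the second group actually take effect (the values are illustrative):

```python
import paddle

linear1 = paddle.nn.Linear(10, 10)
linear2 = paddle.nn.Linear(10, 1)

opt = paddle.optimizer.AdamW(
    learning_rate=0.1,
    weight_decay=0.01,
    parameters=[
        {"params": linear1.parameters()},
        {
            # this group overrides the global learning_rate and weight_decay
            "params": linear2.parameters(),
            "learning_rate": 0.01,
            "weight_decay": 0.001,
        },
    ],
)
```
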
- 15 April 2022, 1 commit

Committed by chentianyu03

* Add the adamw yaml
* Fix a test case error
* Make the names of the weight and bias in linear1 and linear2 constant

- 28 March 2022, 1 commit

Committed by Jiabin Yang

- 25 March 2022, 1 commit

Committed by Jiabin Yang

* Refactor eager flags
* Fix a flags error when switching from eager to dygraph
* Fix CI problems
* Merge develop and fix code style
* Fix op test errors
* Merge develop

- 27 October 2021, 1 commit

Committed by zhaoyingli

- 18 October 2021, 1 commit

Committed by taixiurong

[XPU AMP] 1. XPU supports gradient accumulation 2. XPU supports creating tensors in dygraph 3. XPU supports updating weight params in AMP (#36439)

- 29 September 2021, 1 commit

Committed by zhaoyingli

* Update the function name
* Skip CPU
* Update the unittest

- 26 September 2021, 1 commit

Committed by zhangbo9674

* Use adamw instead of adam in AdamW
* Add lr_ratio to adamw
* Fix a logic bug in CPU adamw
* Delete the fix for the CPU adamw bug

- 22 September 2021, 1 commit

Committed by zhaoyingli

- 17 September 2021, 1 commit

Committed by zhangbo9674

* Add the main pure fp16 functionality in auto_cast & tracer
* Support master weight in dygraph for pure fp16
* Check mixed fp16/fp32 dtypes for the check_finite_and_unscale op
* Change the pure fp16 function name
* Fix some bugs in auto_cast
* Refine the auto_cast interface logic
* Add the _casted_by_pure_fp16 param for class Layer
* Support a state_dict hook for saving the model in a user-appointed dtype in pure_fp16_decorator
* Refine pure_fp16_decorator as a decorator
* Add a unittest
* Add comments
* Support recompute
* Add comments for auto_cast and the decorator
* Support to_static_state_dict for paddle.jit.save
* Remove the limit on the number of models and optimizers
* Add lookup_table to the black_list
* Fix momentum and layer state_dict
* Fix a bug in layer state_dict
* Fix a bug in layer state_dict_helper
* Refine unittests
* Refine test_momentum_op
* Refine the interface and some code
* Refine the amp_decorator interface
* Refine the pure fp16 interface
* Refine the master weight interface

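The resulting user-facing flow looks roughly like this (a sketch assuming a CUDA device; option names may have evolved since this commit):

```python
import paddle

model = paddle.nn.Linear(10, 10)
opt = paddle.optimizer.AdamW(learning_rate=1e-3, parameters=model.parameters())

# O2 ("pure fp16"): the model is cast to fp16 while the optimizer keeps
# fp32 master weights.
model, opt = paddle.amp.decorate(models=model, optimizers=opt, level='O2')
scaler = paddle.amp.GradScaler(init_loss_scaling=1024)

x = paddle.rand([4, 10])
with paddle.amp.auto_cast(level='O2'):
    loss = paddle.mean(model(x))

scaler.scale(loss).backward()
scaler.step(opt)
scaler.update()
opt.clear_grad()
```
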
- 14 September 2021, 1 commit

Committed by zhaoyingli

* Add a layerwise learning rate for adamw
* Fix format
* Add a unittest
* Add NotImplementedError
* Add a GPU unittest
* Update gpu place

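The layerwise learning rate landed as the lr_ratio argument of paddle.optimizer.AdamW; a minimal sketch (the bias-vs-weight rule below is made up for illustration):

```python
import paddle

model = paddle.nn.Linear(10, 10)

# lr_ratio maps each parameter to a multiplier on the base learning rate.
def layerwise_lr(param):
    return 0.1 if "bias" in param.name else 1.0

opt = paddle.optimizer.AdamW(
    learning_rate=0.001,
    parameters=model.parameters(),
    weight_decay=0.01,
    lr_ratio=layerwise_lr,  # per this commit, only implemented for GPU at the time
)
```
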
- 01 September 2021, 1 commit

Committed by zhaoyingli

- 23 August 2021, 1 commit

Committed by zhaoyingli

* Adamw supports CUDA

- 17 August 2021, 1 commit

Committed by Roc

- 02 August 2021, 1 commit

Committed by sunzhongkai588

* Fix paddle.optimizer docs, test=document_fix