- 06 Sep 2023, 1 commit
Submitted by JYChen
* fix setvalue dtype error when using dy2st and AMP O2
* add one test
* remove the share_buffer test since Windows and Linux report different numbers
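A minimal sketch of the failure mode this commit addresses (function and variable names are illustrative, not from the patch): under dynamic-to-static (dy2st), tensor `__setitem__` lowers to the `set_value` op, and under AMP O2 the target tensor may be float16 while an assigned Python scalar defaults to float32, so the value has to be cast to the tensor's dtype.

```python
import paddle

# Hypothetical reproduction sketch; x[0] = 1.0 lowers to the set_value op
# under dy2st, and the scalar's default float32 dtype can mismatch a
# float16 tensor produced by AMP O2.
@paddle.jit.to_static
def assign_first_row(x):
    x[0] = 1.0
    return x

with paddle.amp.auto_cast(level='O2'):
    out = assign_first_row(paddle.rand([4, 4]))
```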
- 02 Aug 2023, 1 commit
Submitted by xuxinyi389
- 11 Jul 2023, 1 commit
Submitted by LoneRanger
replace the AdagradOptimizer, AdamaxOptimizer, AdadeltaOptimizer, RMSPropOptimizer, LambOptimizer and Momentum (#54152)
* replace the AdadeltaOptimizer with Adadelta
* replace the RMSPropOptimizer with RMSProp
* replace the LambOptimizer with Lamb
* replace the Momentum in contrib/optimizer.py with Momentum in python/paddle/optimizer/momentum.py
* fix bugs of Lamb and of the imports
* replace the AdamaxOptimizer with Adamax and change the optimizer base for AdagradOptimizer
* update optimizer.py and fix remaining bugs
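These legacy `fluid`-style classes map onto the current `paddle.optimizer` API. A hedged usage sketch of the replacement classes named above (hyperparameter values are illustrative, not taken from the PR):

```python
import paddle

linear = paddle.nn.Linear(10, 10)
params = linear.parameters()

# New-style counterparts of the removed *Optimizer classes.
adadelta = paddle.optimizer.Adadelta(learning_rate=0.001, rho=0.95, parameters=params)
rmsprop = paddle.optimizer.RMSProp(learning_rate=0.001, parameters=params)
lamb = paddle.optimizer.Lamb(learning_rate=0.001, parameters=params)
momentum = paddle.optimizer.Momentum(learning_rate=0.001, momentum=0.9, parameters=params)
adamax = paddle.optimizer.Adamax(learning_rate=0.001, parameters=params)
adagrad = paddle.optimizer.Adagrad(learning_rate=0.001, parameters=params)

# The usual step/clear_grad training loop is unchanged.
loss = paddle.mean(linear(paddle.rand([4, 10])))
loss.backward()
adadelta.step()
adadelta.clear_grad()
```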
- 23 May 2023, 1 commit
Submitted by niuliling123
- 22 May 2023, 1 commit
Submitted by niuliling123
- 18 May 2023, 1 commit
Submitted by shaojie_wang
* add master gradients on static graph
* add unit test for bf16 master grad on static graph
* use float16 as the V100 test dtype
* only skip GPUs which do not support bf16
* use a linear layer to test master grad
* 1. push master grad creation before all optimizer ops; 2. remove useless unittest; 3. use a function to create master grad states
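Master gradients keep a float32 copy of the low-precision gradients so the optimizer update does not lose precision in bf16/fp16 training. A hedged dygraph-style sketch of the user-facing knob (assuming the `master_grad` flag of `paddle.amp.decorate`; the commit itself wires the equivalent state into the static graph):

```python
import paddle

model = paddle.nn.Linear(10, 10)
opt = paddle.optimizer.AdamW(parameters=model.parameters())

# Assumption: master_grad=True requests float32 master gradients.
model, opt = paddle.amp.decorate(
    models=model, optimizers=opt, level='O2',
    dtype='bfloat16', master_grad=True)

with paddle.amp.auto_cast(level='O2', dtype='bfloat16'):
    loss = paddle.mean(model(paddle.rand([4, 10])))
loss.backward()  # gradients are accumulated into float32 master copies
opt.step()
opt.clear_grad()
```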
- 16 May 2023, 2 commits
Submitted by niuliling123
Submitted by Yiqun Liu
* Allow switching whether the promote strategy is used to choose kernels for O2 training.
* Fix comparison error and add unittest.
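Under O2, the promote strategy selects the float32 kernel when any input of an op is float32, instead of forcing low precision everywhere. A hedged sketch of the switch (assuming it is exposed as the `use_promote` argument of `paddle.amp.auto_cast`):

```python
import paddle

model = paddle.nn.Linear(10, 10)

# use_promote=True: ops with mixed float16/float32 inputs run in float32;
# use_promote=False: kernel choice follows the O2 lists strictly.
with paddle.amp.auto_cast(level='O2', use_promote=True):
    out = model(paddle.rand([4, 10]))
```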
- 27 Apr 2023, 1 commit
Submitted by Zhang Ting
* support OD level and skip dynamic loss scaling for bf16
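The OD level casts only a default subset of ops to low precision, and bfloat16 shares float32's exponent range, so overflow-driven dynamic loss scaling adds nothing there. A hedged sketch (assuming `paddle.amp.auto_cast` accepts `level='OD'`):

```python
import paddle

model = paddle.nn.Linear(10, 10)
opt = paddle.optimizer.SGD(parameters=model.parameters())

# 'OD' per this commit: only a default op subset runs in low precision.
with paddle.amp.auto_cast(level='OD', dtype='bfloat16'):
    loss = paddle.mean(model(paddle.rand([4, 10])))

loss.backward()
# bf16 keeps float32's dynamic range, so the loss-scaling path is skipped.
opt.step()
opt.clear_grad()
```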