- 31 March 2023, 1 commit

Submitted by 张春乔
* autofix (Co-authored-by: Liyulingyue <83450930+Liyulingyue@users.noreply.github.com>)
* revert changes in python/paddle/distributed/fleet/utils/hybrid_parallel_util.py
* empty commit, trigger ci
* fix test_slice
Co-authored-by: SigureMo <sigure.qaq@gmail.com>
- 30 March 2023, 1 commit

Submitted by wanghuancoder
* delete old dygraph op test
- 29 March 2023, 1 commit

Submitted by zhouweiwei2014

- 28 March 2023, 2 commits

Submitted by Nyakku Shigure

Submitted by Kim

- 25 March 2023, 1 commit

Submitted by 张春乔
- 23 March 2023, 2 commits

Submitted by Infinity_lee

Submitted by PuQing
[CodeStyle][C408][C409][C410] Fix unnecessary <dict/list/tuple> call and unnecessary <list/tuple> passed to <list/tuple>() (#51928)
* autofix
* add select config
* autofix C410
* add C410 select
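For context, C408/C409/C410 are flake8-comprehensions rules that flag literal constructors written as function calls. The snippet below is an illustrative sketch of the kind of rewrite the autofix applies; the examples are mine, not taken from the patch.

```python
# C408: unnecessary dict() call -> use a literal
config = dict(lr=0.01, beta1=0.9)    # before
config = {"lr": 0.01, "beta1": 0.9}  # after

# C409: unnecessary list/tuple passed to tuple()
shape = tuple([1, 2, 3])  # before
shape = (1, 2, 3)         # after

# C410: unnecessary list/tuple passed to list()
dims = list([0, 1])  # before
dims = [0, 1]        # after
```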
- 22 March 2023, 1 commit

Submitted by niuliling123

- 21 March 2023, 1 commit

Submitted by zhouweiwei2014
[Zero-Dim] Support 0D for numel/rank/size/optimizer/create_parameter/create_global_var, fix some usage to adapt 0D (#51566)
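A minimal sketch of what 0-D support means for these APIs, assuming a PaddlePaddle build that includes this change; the expected shapes are my reading of the commit title, not output reproduced from the patch.

```python
import paddle

x = paddle.rand([3, 4])

# With 0-D support, scalar-valued queries return 0-D tensors (shape []),
# not shape-[1] tensors.
n = paddle.numel(x)  # total element count
r = paddle.rank(x)   # number of dimensions
print(n.shape, r.shape)  # expected: [] []
print(int(n), int(r))    # 12 2

# create_parameter should likewise accept an empty shape to build a genuine
# 0-D parameter (hypothetical usage based on the commit title).
w = paddle.create_parameter(shape=[], dtype="float32")
print(w.shape)  # expected: []
```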
- 20 March 2023, 1 commit

Submitted by zhouweiwei2014

- 15 March 2023, 1 commit

Submitted by wanghuancoder
* refine _found_inf
- 14 March 2023, 1 commit

Submitted by zhouweiwei2014

- 10 March 2023, 1 commit

Submitted by niuliling123

- 08 March 2023, 1 commit

Submitted by niuliling123

- 06 March 2023, 1 commit

Submitted by niuliling123

- 03 March 2023, 1 commit

Submitted by niuliling123

- 01 March 2023, 1 commit

Submitted by niuliling123

- 28 February 2023, 1 commit

Submitted by taixiurong
- 24 February 2023, 1 commit

Submitted by Weilong Wu
* Revert "fixoptminizer _set_auxiliary_var bug (#50335)". This reverts commit c44005f0.
* Revert "refine optimizer create accumulators (#50188)". This reverts commit 244e7546.
* Revert "fix found_inf bug for custom optimizer (#50158)". This reverts commit 64573f9f.
* Revert "refine amp scaler found_inf (#49864)". This reverts commit 382e9a06.
* fix code format
* fix conflict

- 22 February 2023, 1 commit

Submitted by Shuangchi He
* Fix some typos.
* pre-commit
Signed-off-by: Yulv-git <yulvchi@qq.com>
- 13 February 2023, 1 commit

Submitted by RedContritio

- 06 February 2023, 1 commit

Submitted by wanghuancoder
* refine optimizer create accumulators
* refine

- 02 February 2023, 1 commit

Submitted by RedContritio
- 01 February 2023, 1 commit

Submitted by zqw_1997
Remove fluid.initializer.UniformInitializer, ConstantInitializer, NormalInitializer, TruncatedNormalInitializer, XavierInitializer, BilinearInitializer, MSRAInitializer, NumpyArrayInitializer and calculate_gain (#49498)
* move UniformInitializer and ConstantInitializer
* more modify
* circular import resolved
* another circular import resolved?
* more circular import 2
* circular import 3
* change import paddle in metric.py
* BuildStrategy import from fluid
* modify the framework import path in common.py
* change rnn.py import, from static to original framework
* change import static in the nn folder
* default_main_program should import from common_ops_import
* add import paddle in param_attr.py
* use core not paddle module for using VarDesc
* another old uniform
* mistake that use Uniform instead of UniformInitializer
* modify UniformInitializer doc
* move fluid.NormalInitializer to nn.initializer.NormalInitializer
* remove import of Normal in fluid.layers.nn.py
* remove more import of old Normal
* remove more import of old Normal
* sample code modify and tests modify import
* is_listen_failed passing arg should be log file
* problem solved
* a mistake solved
* comments resolved and remove paddle.fluid.initializer.TruncatedNormalInitializer
* remove paddle.fluid.initializer.XavierInitializer and paddle.fluid.initializer.MSRAInitializer
* remove paddle.fluid.initializer.BilinearInitializer, NumpyArrayInitializer and set_global_initializer
* change fluid to static
* change static to fluid to avoid circular import in distributed_strategy.py
* fix example code and test_initializer
* ValueType
* sample code fix
* change set_global_initializer back to fluid
* put paddle.static.BuildStrategy.ReduceStrategy into the function to avoid circular import
* remove calculate_gain, delete BilinearInitializer and revert set_global_initializer
* use UniformInitializer, ConstantInitializer, NormalInitializer, TruncatedNormalInitializer, XavierInitializer, MSRAInitializer and NumpyArrayInitializer as little as possible
* fix argument incompatibility
* fix more argument incompatibility
* fix test_prelu_op_xpu.py Constant
* fix inaccurate doc
* more doc fix: default value
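A hedged migration sketch for code that used the removed fluid initializers; the mapping to the public paddle.nn.initializer classes is my reading of the change, not code taken from the patch.

```python
import paddle
from paddle import nn

# Old (removed): paddle.fluid.initializer.XavierInitializer / ConstantInitializer / ...
# New: the public classes under paddle.nn.initializer.
weight_attr = paddle.ParamAttr(
    initializer=nn.initializer.XavierUniform()      # roughly replaces XavierInitializer
)
bias_attr = paddle.ParamAttr(
    initializer=nn.initializer.Constant(value=0.0)  # roughly replaces ConstantInitializer
)

linear = nn.Linear(8, 4, weight_attr=weight_attr, bias_attr=bias_attr)
print(linear.weight.shape, linear.bias.shape)  # [8, 4] [4]
```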
- 31 January 2023, 2 commits

Submitted by 张春乔
* fix div 0 error of NoamDecay
* add unittest
* Update lr.py
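For reference, NoamDecay follows lr = d_model^-0.5 * min(step^-0.5, step * warmup_steps^-1.5), so step 0 and warmup_steps = 0 are the natural divide-by-zero corners. The usage sketch below is my own and does not show the actual guard added in lr.py.

```python
import paddle

model = paddle.nn.Linear(4, 4)
sched = paddle.optimizer.lr.NoamDecay(d_model=512, warmup_steps=4000, learning_rate=1.0)
opt = paddle.optimizer.Adam(learning_rate=sched, parameters=model.parameters())

for step in range(1, 4):
    # ... forward / backward / opt.step() would go here ...
    sched.step()                 # advance the schedule once per step
    print(step, sched.get_lr())  # warmup phase: lr grows roughly linearly with the step
```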
Submitted by LoneRanger

- 30 January 2023, 1 commit

Submitted by wanghuancoder
* refine _found_inf

- 03 January 2023, 1 commit

Submitted by 骑马小猫
- 30 December 2022, 1 commit

Submitted by Sanbu
* 1219
* temporarily change the num_diff_files limit, test=document_fix
* Revert "temporarily change the num_diff_files limit, test=document_fix". This reverts commit 8e70f00ef468d2dad0e38b3da06295ed62990d20.
* for codestyle
* remove duplicate license
* `static mode` -> `static graph mode`
* Update hybrid_parallel_inference.py
* Update layer_function_generator.py
* Update manipulation.py
* reset
Co-authored-by: Ligoml <39876205+Ligoml@users.noreply.github.com>
Co-authored-by: SigureMo <sigure.qaq@gmail.com>

- 25 December 2022, 1 commit

Submitted by wanghuancoder
- 13 December 2022, 1 commit

Submitted by wanghuancoder
* fix rmsprop_ yaml bug

- 09 December 2022, 1 commit

Submitted by cyber-pioneer

- 02 December 2022, 1 commit

Submitted by wanghuancoder

- 29 November 2022, 1 commit

Submitted by Nyakku Shigure
* isort all files
* revert conflicting files
* revert conflicting files
* revert conflicting files
- 22 November 2022, 2 commits

Submitted by ustiniankw
* fix_docx_stanh
* fix einsum api en docs issue
* fix model api en docs issue
* for codestyle
* fix_einsum.py_einsum, test=document_fix
* fix_model.py_Model, test=document_fix
* fix_creation.py_meshgrid, test=document_fix
* fix_linalg.py_slogdet, test=document_fix
* fix_loss.py_SoftMarginLoss_CrossEntropyLoss_NLLLoss_BCELoss, test=document_fix
* norm.py_SyncBatchNorm, test=document_fix
* norm.py_SyncBatchNorm, test=document_fix
* norm.py_SyncBatchNorm, test=document_fix
* list18-30, test=document_fix
* refix_list1-15, test=document_fix
* deletefiles, test=document_fix
* fixedapi_pre-commit, test=document_fix
* fix_list31-45, test=document_fix
* list111, test=document_fix
* some_fix, test=document_fix
* some_fix, test=document_fix
* somefix, test=document_fix
* somefix, test=document_fix
* refix, test=document_fix
* refix, test=document_fix
* refix, test=document_fix
* refix, test=document_fix
* rerfix, test=document_fix
Co-authored-by: Ligoml <limengliu@tiaozhan.com>

Submitted by wanghuancoder
- 17 November 2022, 1 commit

Submitted by Yuang Liu
Support bfloat16 for the adamw and adam optimizers. Fit the lr for pure-bf16 training with tensor fusion. (#48041)
* add bfloat16 for adamw
* set lr not to bfloat16 for pure bf16 training
* update the logic
* update the adamw optimizer
* support bfloat for adam
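A hedged sketch of bf16 compute feeding an AdamW update; this is my own minimal example using the public AMP API, not the tensor-fusion code path the commit refers to.

```python
import paddle

model = paddle.nn.Linear(16, 16)
opt = paddle.optimizer.AdamW(
    learning_rate=1e-3,              # per the commit, the lr itself is kept out of bf16
    parameters=model.parameters(),
    weight_decay=0.01,
)

x = paddle.rand([4, 16])
with paddle.amp.auto_cast(dtype="bfloat16"):  # run the forward pass in bfloat16
    loss = model(x).mean()
loss.backward()
opt.step()        # adam/adamw can now consume bf16 tensors (per the commit title)
opt.clear_grad()
```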
- 10 November 2022, 1 commit

Submitted by WangZhen
Get grad types from C++ for adam to speed it up

- 09 November 2022, 1 commit

Submitted by WangZhen
* Get params and grads in C++ to avoid GPU idle time
* Use the Python param instead of the C++-returned param to fix test_asp_optimize_dynamic.py
* Get grads from C++ and construct params_grads in Python
* Check meta and remove comments