- 14 Jul 2023, 1 commit
-
Committed by caozhou
* distribute best cfg
* adapt to multi args transmission
* update metric extracting
* fix bugs of prune and reading log
* fix time default value
* remove time record
* adjust the order of searching dim
* fix prune bugs
* fix adding cfg bug
* fix multi nodes bug
* reset status
* remove alarm and set logdir
* deepcopy ctx
* change alarm
* fix restart bug
* add exit
* best no need alarm
* add warmup time
-
- 13 Jul 2023, 7 commits
-
Committed by niuliling123
-
Committed by Ruibiao Chen
* Support nvprof for auto parallel
* Fix CI errors
* Fix CI errors
-
Committed by Charles-hit
* [prim] support fp16 for instance_norm and instance_norm_grad
* support fp16 and bf16 dtype for instance_norm prim rules
* fix new ir test
---------
Co-authored-by: Ncxxly <chenxx_id@163.com>
-
Committed by lil-Xing
* add phi operator c_concat and ut
* update create_var use
* update copyright
-
Committed by Leo Chen
* Support AMP program for onnx QAT API
* Integrate QAT into distributed optimizer
* Reduce the size of test data and increase time limit
* Use logger and reduce time limit of unittests
* Rename and move unittest into fleet test
* Test qat_init API
-
Committed by risemeup1
* fix protobuf problem
* fix protobuf problem
-
Committed by Yuang Liu
-
- 11 Jul 2023, 7 commits
-
Committed by pangengzheng
* support sharding parallel
* fix name
* fix
* update
* test amp for sharding
---------
Co-authored-by: pangengzheng <pangengzheng.baidu.com>
-
* DOCS: Adding information about datatype in math.py
* replaced uint16 with bfloat16.
-
Committed by Wennie396
* format correction
* variable names adjustment
* variable names adjustment, name-->type, value-->sub_program
-
Committed by LoneRanger
replace the AdagradOptimizer, AdamaxOptimizer, AdadeltaOptimizer, RMSPropOptimizer, LambOptimizer and Momentum (#54152)
* replace the AdadeltaOptimizer with Adadelta
* replace the RMSPropOptimizer with RMSProp
* replace the LambOptimizer with Lamb
* replace the momentum in contrib/optimizer.py with Momentum in python/paddle/optimizer/momentum.py
* fix bug
* fix bug
* fix bug
* fix bug of Lamb
* fix bug of Lamb
* fix bug of import
* replace the AdamaxOptimizer with Adamax and change the optimizer base for AdagradOptimizer
* fix bug
* fix bug
* Update optimizer.py
* fix bug
* fix bug
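As a migration aid, a minimal dygraph sketch of the 2.0-style replacement is shown below; the toy layer and hyperparameters are illustrative assumptions, and `paddle.optimizer.Adadelta` stands in for any of the replaced fluid optimizer classes.

```python
import paddle

# Toy layer used only to illustrate the migration (assumption, not from the PR).
linear = paddle.nn.Linear(4, 4)

# Legacy classes such as fluid's AdadeltaOptimizer / RMSPropOptimizer are replaced
# by their paddle.optimizer counterparts, constructed with `parameters=`.
opt = paddle.optimizer.Adadelta(
    learning_rate=0.01,
    rho=0.95,
    epsilon=1e-6,
    parameters=linear.parameters(),
)

loss = linear(paddle.rand([2, 4])).mean()
loss.backward()
opt.step()        # apply the Adadelta update
opt.clear_grad()  # reset gradients for the next iteration
```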
-
Committed by MarDino
* add rmsnorm kernel
* add static graph test
* fix round type
* use alignas to avoid msvc compile error
* remove redundant header file to avoid rocm compile error
* fix 'cub not found' error in rocm compile
* Add document
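For context, RMSNorm rescales each row by the reciprocal root-mean-square of its elements. The eager-mode reference below is only a sketch of the math the fused kernel computes; the function name and epsilon value are illustrative, not the operator added by this commit.

```python
import paddle

def rms_norm_reference(x, weight, epsilon=1e-6):
    # Mean of squares over the last axis, then scale by its reciprocal square root.
    variance = (x * x).mean(axis=-1, keepdim=True)
    return x * paddle.rsqrt(variance + epsilon) * weight

x = paddle.rand([2, 8], dtype="float32")
weight = paddle.ones([8], dtype="float32")
print(rms_norm_reference(x, weight).shape)  # [2, 8]
```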
-
Committed by FormlessUnit
* rename weight_only/llm.int8
-
Committed by qiuwenbo
* [experiment] add an attribute to tensor whose value is the constant 1
* expose gradnode and build new gradnode methods (for testing), exposed to Python so the Python side can access them
* develop the grad_fn and next_functions APIs, expose them to the Python side, and do some standardization
* add a unit test
* polish code style
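The commit names `grad_fn` and `next_functions` as the newly exposed APIs; a heavily hedged sketch of how such autograd-graph inspection is typically used follows. The exact attribute spelling and return types are assumptions and may differ from what shipped.

```python
import paddle

x = paddle.to_tensor([1.0, 2.0], stop_gradient=False)
y = (x * x).sum()

# Assumed usage: inspect the backward node that produced `y` and walk upstream.
node = y.grad_fn             # backward node (attribute name taken from the commit message)
print(node)
print(node.next_functions)   # upstream backward nodes (assumed to be a sequence)
```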
-
- 10 Jul 2023, 3 commits
- 07 Jul 2023, 2 commits
-
Committed by gouzil
* [jit] add copy-from; test=document_fix
* [jit] add copy-from; test=document_fix
* fix TracedLayer
-
Committed by LoneRanger
* remove the extend_optimizer_with_weight_decay function
* Update __init__.py
* fix bug
* fix bug
-
- 06 Jul 2023, 7 commits
-
Committed by gouzil
* [autograd] add copy-from; test=document_fix
* [autograd] add copy-from; test=document_fix
* fix
-
Committed by wangxiaoning
-
Committed by zqw_1997
* add clip_grad_value_ api
* add test for ClipGradByValue
* typo fix
* refine and modify clip_grad_norm_
* no_grad
* clip_
* remove g=p.grad
* bug: AssertionError: When Variable is used as the condition of if/while, Variable can only contain one element.
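A hedged usage sketch of the new in-place value clipping follows; the import path `paddle.nn.utils.clip_grad_value_` mirrors the PyTorch-style naming in the commit message and is an assumption, as is the toy model.

```python
import paddle
from paddle.nn.utils import clip_grad_value_  # assumed import path for the new API

linear = paddle.nn.Linear(4, 4)
loss = linear(paddle.rand([2, 4])).mean()
loss.backward()

# Clamp every gradient element into [-0.5, 0.5] in place before the optimizer step.
clip_grad_value_(linear.parameters(), clip_value=0.5)

opt = paddle.optimizer.SGD(learning_rate=0.1, parameters=linear.parameters())
opt.step()
opt.clear_grad()
```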
-
Committed by cyber-pioneer
* fix prim add fill_any_like bug
* polish code
-
Committed by Zhang Ting
-
Committed by zhaoyingli
* remove allreduce before c_allgather
* update reshard insert_fill_constant_op func
* insert_fill_constant_op add shape arg
-
Committed by XiaociZhang
This reverts commit 15c87528.
-
- 05 Jul 2023, 5 commits
-
Committed by Wang Xin
-
Committed by zhangjingwei
-
Committed by cyberslack_lee
-
Committed by GGBond8488
-
Committed by LUZY0726
-
- 03 Jul 2023, 6 commits
-
Committed by zhenhailiu
-
Committed by LoneRanger
-
Committed by megemini
* [Fix] fix cleandoc with a first blank line
* [Fix] fix metrics.py code-block
* [Fix] fix metrics.py code-block indent
-
Committed by LoneRanger
* add lerp bf16 support
* fix bug
* Update test_lerp_op.py: modify the input dtype
* modify the test_lerp_op.py
* Update test_lerp_op.py
* fix bug of import
* add user_defined_grads
* Update test_lerp_op.py
* fix bug of grad
* fix bug of grad
* fix bug of grad
* add the check for bfloat16 dtype
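For reference, `paddle.lerp` computes the elementwise linear interpolation `x + weight * (y - x)`; the float32 example below is a sketch, and the commented bfloat16 call only indicates the dtype this change targets (typically GPU-only).

```python
import paddle

x = paddle.to_tensor([1.0, 2.0, 3.0])
y = paddle.to_tensor([10.0, 10.0, 10.0])

# Elementwise linear interpolation: x + weight * (y - x).
out = paddle.lerp(x, y, 0.5)
print(out.numpy())  # [5.5 6.  6.5]

# With this change the same call should also accept bfloat16 inputs, e.g.:
# out_bf16 = paddle.lerp(x.astype("bfloat16"), y.astype("bfloat16"), 0.5)
```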
-
Committed by FormlessUnit
* add linear_compress API
-
Committed by niuliling123
-
- 30 Jun 2023, 2 commits
-
Committed by LoneRanger
replace the PolynomialDecay, NoamDecay, LinearLrWarmup, ReduceLROnPlateau in fluid with the 2.0 versions (#54806)
* remove the ReduceLROnPlateau in fluid
* fix bug
* remove the PolynomialDecay in fluid
* remove the LinearLrWarmup in fluid
* fix bug
* remove the NoamDecay in fluid
* fix bug
* fix bug
* fix bug
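A minimal migration sketch follows, mapping the removed fluid schedulers onto their `paddle.optimizer.lr` counterparts (`PolynomialDecay`, `NoamDecay`, `LinearWarmup`, `ReduceOnPlateau`); all hyperparameters are illustrative assumptions.

```python
import paddle

# 2.0-style equivalents of the removed fluid schedulers.
poly = paddle.optimizer.lr.PolynomialDecay(learning_rate=0.1, decay_steps=100)
noam = paddle.optimizer.lr.NoamDecay(d_model=512, warmup_steps=4000)
warmup = paddle.optimizer.lr.LinearWarmup(
    learning_rate=poly, warmup_steps=20, start_lr=0.0, end_lr=0.1
)
plateau = paddle.optimizer.lr.ReduceOnPlateau(learning_rate=0.1, factor=0.5, patience=3)

# Schedulers plug into any 2.0 optimizer via the learning_rate argument.
linear = paddle.nn.Linear(4, 4)
opt = paddle.optimizer.SGD(learning_rate=warmup, parameters=linear.parameters())
```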
-
Committed by sneaxiy
-