- 18 Nov 2022, 5 commits
- Committed by zhaoyingli:
  * [AutoParallel] selective recompute
  * add cmakelist
- Committed by Dandelight
- Committed by GGBond8488
- Committed by parap1uie-s:
  * Fix hAPI bug of not being compatible with LayerHook (https://github.com/PaddlePaddle/Paddle/issues/47000)
  * Allow to specify train_bs and eval_bs separately in hapi.fit()
  * Update model.py
  * Update Model.py
  * Update test_model.py
- Committed by zhangyikun02

- 17 Nov 2022, 21 commits
- Committed by zyfncg:
  * clip extra and intermediate output of op
  * fix bug
  * polish code
  * polish log
- Committed by 傅剑寒
- Committed by 傅剑寒:
  * remove unstack in nn.py under fluid
  * remove unstack under fluid
- Committed by HongyuJia:
  * clean fluid elementwise_pow, remove API
  * clean elem_pow doc
  * clean elementwise_mod
  * clean elementwise min, floordiv, mod
- Committed by Qi Li:
  * [NPU] add _npu_identity op and api, test=develop
  * fix doc
  * address comments
- Committed by 傅剑寒:
  * remove swish in nn.py under fluid
  * fix swish test case
- Committed by 傅剑寒
- Committed by Wen Sun
- Committed by wenbin:
  * int scale
  * round
  * revert commit
- Committed by YuanRisheng:
  * standard api
  * fix xpu bugs
- Committed by taixiurong
- Committed by 傅剑寒
- Committed by xiaoxiaohehe001:
  * add_cast_bool
  * cast
- Committed by ShenLiang
- Committed by wuhuachaocoding
- Committed by Kevin吴嘉文
- Committed by Yuang Liu:
  Support bfloat16 for the adamw and adam optimizers; fit the lr for pure bf16 training with tensor fusion. (#48041)
  * add bfloat16 for adamw
  * set lr not to bfloat16 for pure bf16 training
  * update the logic
  * update the adamw optimizer
  * support bfloat for adam
- Committed by zhouweiwei2014
- Committed by sneaxiy:
  * add vectorized bfloat16 atomicAdd
  * fix compile error
  * fix compile error again
  * fix V100 compile error
  * fix V100 compile again
- Committed by Nyakku Shigure
- Committed by zyfncg

- 16 Nov 2022, 13 commits
- Committed by 傅剑寒
- Committed by HongyuJia:
  * clean fluid elementwise_min
  * fix elementwise_min op testcase
- Committed by HongyuJia
- Committed by xiaoxiaohehe001:
  * add_fill_any_like
- Committed by wenbin:
  * elementwise_op
  * add teller
  * modify ut
  * comments
  * modify ut
  * return
  * modify
- Committed by wangzhen38:
  * [remove fluid] under fleet meta_optimizers
- Committed by ccrrong
- Committed by ccrrong:
  * remove chunk_eval
- Committed by HongyuJia:
  * simplify depthwise_conv2d phi kernel selection
  * fix depthwise_conv2d
- Committed by Piotr Paturej:
  * Enable bf16 in oneDNN bilinear_interp kernel
  * Fix bilinear_interp_v2 not enabled in models
  * Remove unnecessary checks
- Committed by ykkk2333:
  * add stat tool
  * add roll and roll_grad kernels and strided_slice and strided_slice_grad kernels, test=kunlun
  * embedding and embedding_grad add int32 input, test=kunlun
- Committed by shentanyue
- Committed by Wen Sun:
  * refactor: update pg custom
  * fix: use new api in ut
  * fix: typo
  * revert: recover legacy apis
  * fix: add GetDeviceContext

- 15 Nov 2022, 1 commit
- Committed by 1want2sleep:
  * fix some docs bugs; test=document_fix
  * Update batch_sampler.py
  * Update dataset.py
  * Update sampler.py
  * for codestyle; test=document_fix
  * fix copy-from issue; test=document_fix
  Co-authored-by: Ligoml <39876205+Ligoml@users.noreply.github.com>
  Co-authored-by: Ligoml <limengliu@tiaozhan.com>