- 24 Nov 2022 (1 commit)
Committed by ustiniankw
  * fixdocs, test=document_fix
- 11 Nov 2022 (1 commit)
Committed by Haohongxiang
- 10 Nov 2022 (1 commit)
Committed by wuhuachaocoding
  * cherry-pick recompute doc update.
  * update.
- 09 Nov 2022 (1 commit)
Committed by JYChen
  * remove functions that do not belong to the public API from __all__ (see the sketch below)
  * fix code style
  * fix error in distributed
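A quick illustration of the mechanism behind this change: only names listed in `__all__` are exported by a star-import, so dropping a name hides it from the public surface. A minimal, hypothetical module (not Paddle code):

```python
# hypothetical_module.py -- names absent from __all__ stay importable
# explicitly, but `from hypothetical_module import *` will not export them.
__all__ = ['public_fn']

def public_fn():
    """Part of the public API."""
    return _helper() + 1

def _helper():
    # Internal detail, deliberately kept out of __all__.
    return 41
```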
- 07 Nov 2022 (3 commits)
Committed by Ligoml
  * #46165
  * #45752
  * fix some doc bugs test=document_fix (#45488)
  * fix some docs issues, test=document_fix
  * beta -> \beta in softplus (see the formula below)
  * threshold -> \varepsilon in softplus
  * parameter name
  * delta -> \delta in smooth_l1_loss
  * fix docs && add blank lines, test=document_fix
  * Update python/paddle/nn/functional/activation.py, test=document_fix
  * Update python/paddle/nn/layer/activation.py, test=document_fix
  * [docs] add ipustrategy hyperlink (#46422)
  * fix ipu_shard_guard docs; test=document_fix
  * [docs] add set_ipu_shard note
  * [docs] fix hyperlink
  * update framework.py
  * fix mlu_places docs; test=document_fix
  * fix put_along_axis docs; test=document_fix
  * fix flake8 W293 error, test=document_fix
  * fix typo in typing, test=document_fix
  * #46659
  * Update README_cn.md (#46927): fixed typos
  * #46738
  * fix paddle.get_default_dtype (#47040): Chinese and English return values were inconsistent
  * fix bug
  Co-authored-by: SigureMo <sigure.qaq@gmail.com>
  Co-authored-by: Ligoml <39876205+Ligoml@users.noreply.github.com>
  Co-authored-by: Nyakku Shigure <sigure.qaq@gmail.com>
  Co-authored-by: 张春乔 <83450930+Liyulingyue@users.noreply.github.com>
  Co-authored-by: Infinity_lee <luhputu0815@gmail.com>
  Co-authored-by: mrcangye <chenloong@88.com>
  Co-authored-by: gouzil <66515297+gouzil@users.noreply.github.com>
  Co-authored-by: Hamid Zare <12127420+hamidzr@users.noreply.github.com>
  Co-authored-by: Sqhttwl <61459740+Sqhttwl@users.noreply.github.com>
  Co-authored-by: OccupyMars2025 <31559413+OccupyMars2025@users.noreply.github.com>
  Co-authored-by: 超级码牛 <54444805+SuperCodebull@users.noreply.github.com>
  Co-authored-by: jzhang533 <jzhang533@gmail.com>
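For reference, the softplus notation being fixed above follows the formula in Paddle's activation docs, with `\beta` (default 1) and the numerical-stability threshold written as `\varepsilon` (default 20):

```latex
\operatorname{softplus}(x) =
\begin{cases}
  \frac{1}{\beta}\,\log\left(1 + e^{\beta x}\right), & \beta x \le \varepsilon \\
  x, & \beta x > \varepsilon \quad \text{(linear branch for numerical stability)}
\end{cases}
```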
Committed by Yuang Liu
  * code format change
  * update the split logic for uniform (#47670)
Committed by Ligoml
  * #46765
  * #47042
  * Remove redundant numpy import (#47483)
  * #47555
  * resolve conflict
  * for_codestyle
  * fix sample code for paddle.linalg.multi_dot
  Co-authored-by: Kevin吴嘉文 <417333277@qq.com>
- 04 Nov 2022 (1 commit)
Committed by Ligoml
  * only run pre-commit
- 03 Nov 2022 (1 commit)
Committed by ShenLiang
  * add unbalanced data
  * fix utest
- 01 Nov 2022 (1 commit)
Committed by sneaxiy
- 31 Oct 2022 (1 commit)
Committed by zhaoyingli
  * update codestyle
  * [AutoParallel] fix fp16 for subblock (#47189): fix engine; fix comment
  * [AutoParallel] fix engine _build and cost method (#47263): fix engine build method; fix import; update engine cost; update raise error; update cmakelist; revert optimizer; fix unittest
  Co-authored-by: caozhou <caozhou@radi.ac.cn>
- 29 Oct 2022 (1 commit)
Committed by sneaxiy
  * reformat hybrid_parallel_util.py with black
  * add fused_allreduce_gradients_with_group (see the sketch below)
  * add scale
  * fix ci
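A minimal sketch of how the helper touched here is typically called; the model and setup are illustrative, and passing `None` as the hybrid communicate group is assumed to fall back to the default group:

```python
# Sketch: manually all-reduce fused gradients under hybrid parallelism.
import paddle
import paddle.distributed.fleet as fleet
from paddle.distributed.fleet.utils.hybrid_parallel_util import (
    fused_allreduce_gradients,
)

fleet.init(is_collective=True)
model = paddle.nn.Linear(16, 16)

loss = model(paddle.randn([4, 16])).mean()
loss.backward()

# Fuses the parameters' gradients into buckets and all-reduces them;
# the new *_with_group variant takes an explicit communication group.
fused_allreduce_gradients(list(model.parameters()), None)
```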
- 26 Oct 2022 (1 commit)
Committed by Roc
- 24 Oct 2022 (3 commits)
Committed by Yuang Liu
Committed by Ghost Screaming
  * Fix bug of reduce_sum op: when input.numel() > INT32_MAX, its result is wrong.
  * support pure bfloat16 (see the sketch below)
  * support bf16 linear
  * update PR to pass CI
  * tiny fix in where_grad_kernel.cu
  * Support bfloat16 type for reducer and sharding.
  * Fix some bugs.
  * Polish code.
  Co-authored-by: sneaxiy <sneaxiy@126.com>
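A hedged sketch of what "pure bfloat16" enables at the user level, assuming the Paddle 2.4-era amp API with `dtype='bfloat16'` (shapes illustrative; requires hardware with bf16 support):

```python
import paddle

linear = paddle.nn.Linear(64, 32)
x = paddle.randn([8, 64])

# Inside the bf16 autocast region, matmul/linear run in bfloat16, and
# bf16 gradients flow through the reducer/sharding paths fixed above.
with paddle.amp.auto_cast(dtype='bfloat16'):
    loss = linear(x).mean()
loss.backward()
```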
Committed by Roc
- 21 Oct 2022 (1 commit)
Committed by Haohongxiang
- 20 Oct 2022 (1 commit)
Committed by Wen Sun
  * fix: fix incorrect import
  * fix: fix incorrect usage
- 19 Oct 2022 (2 commits)
Committed by zhaoyingli
  * [Auto Parallel] Make Engine class callable (#46416) (see the usage sketch below)
  * [Auto Parallel] Improve the user-defined fetches and logging
  * [Auto Parallel] Update the data loading of tuner
  * Print IPS in auto parallel Engine (#46554)
  * [AutoParallel] fix dist_split (#46505); add unittest; update cmakelist
  * [AutoParallel] fix sharding (#46572)
  * [AutoParallel] fix process_mesh (#46583)
  * [AutoParallel] fix reshard when train with eval (#46605); fix mppp
  * [AutoParallel] fix amp when predict (#46637)
  * [Auto Parallel] Update comp cost and completion for gpt auto search (#46387); add unittest
  * [Auto Parallel] Fix bugs caused by the inconsistent outputs of Engine API (#46633)
  * [Auto Parallel] Unify the logger and outputs of Engine API
  * [Auto Parallel] Fix the bugs of to_static
  * [Auto Parallel] Adjust test_to_static.py
  * [Auto Parallel] Improve the fine-grained APIs (#46552)
  * [Auto Parallel] Support different dataloaders
  * [Auto Parallel] Add num_shards config for dataset
  * [Auto Parallel] Add the prepare API and replace __call__ with run
  * [Auto Parallel] Improve the private implementations of Engine
  * [Auto Parallel] Set capacity of dataloader for opt tuning
  * [Auto Parallel] [WIP] Change the fine-grained API
  * [Auto Parallel] Improve APIs to support different user cases
  * [Auto Parallel] Add removed config; add imports
  * [Auto Parallel] Fix bugs for to_static; remove unnecessary imports
  * bugfix (#46921)
  * [Auto Parallel] Fix the bug for None labels (#46987)
  * [AutoParallel] adapt for gpt-gen (#46771): fix reshard; adapt assign and shape op; add dist_assign & unittest; add conditional block unittest; rename unittest
  * [Auto Parallel] Fix the bug of completion (#47056); fix the completion bug
  * [AutoParallel] add callbacks (#47014): fix unittest, dist_context, engine, cmakelist; fix unittest's returns
  * [Auto Parallel] Add cost interface (#47043): update interface and add unittest
  * [Auto Parallel] Add parallel tuner (#46189): add unittests; set unittest timeout; fix auto_mode setting; sync from develop and update unittest; remove unused import; update cmakelist
  Co-authored-by: Yulong Ao <aoyulong@baidu.com>
  Co-authored-by: Ruibiao Chen <chenruibiao@baidu.com>
  Co-authored-by: caozhou <48191911+Caozhou1995@users.noreply.github.com>
  Co-authored-by: JZ-LIANG <jianzhongliang10@gmail.com>
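Taken together, these PRs shape the auto-parallel Engine workflow; a minimal usage sketch, assuming the Paddle 2.4-era `paddle.distributed.fleet.auto` API (model and dataset are illustrative):

```python
import numpy as np
import paddle
from paddle.distributed.fleet import auto

class RandomDataset(paddle.io.Dataset):
    def __getitem__(self, idx):
        return (np.random.rand(32).astype('float32'),
                np.array([1], dtype='int64'))

    def __len__(self):
        return 64

model = paddle.nn.Sequential(paddle.nn.Linear(32, 2))
loss = paddle.nn.CrossEntropyLoss()
optimizer = paddle.optimizer.Adam(parameters=model.parameters())

# Engine drives auto-parallel training; fit/evaluate/predict plus the
# finer-grained prepare/run entry points come from the PRs above.
engine = auto.Engine(model, loss, optimizer)
engine.fit(RandomDataset(), batch_size=8, epochs=1)
```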
Committed by Ghost Screaming
  * Fix bug of reduce_sum op: when input.numel() > INT32_MAX, its result is wrong.
  * Support an allow_partial switch, configurable via pipeline_configs. If the tensors sent from different hosts are not identical, they should not be sent partially and then concatenated into a whole tensor.
  * Rename allow_partial to enable_partial_send_recv (see the sketch below).
  * Add global variable _enable_partial_send_recv.
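A sketch of where the renamed switch lives, assuming the usual fleet strategy plumbing (the surrounding config values are illustrative):

```python
import paddle.distributed.fleet as fleet

strategy = fleet.DistributedStrategy()
strategy.hybrid_configs = {"dp_degree": 1, "mp_degree": 1, "pp_degree": 2}
strategy.pipeline_configs = {
    "accumulate_steps": 4,
    "micro_batch_size": 2,
    # Send whole tensors between pipeline stages instead of splitting
    # them and concatenating on the receiving side.
    "enable_partial_send_recv": False,
}
fleet.init(is_collective=True, strategy=strategy)
```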
- 18 Oct 2022 (2 commits)
Committed by Yuang Liu
  * [dygraph sharding] Overlap the reduce and the calculation for sharding stage 2. (#46495)
  * [dygraph sharding stage 2] sharding broadcast overlap (#46656)
  * Multi groups for broadcast of sharding stage 2 (#46894) (see the sketch below)
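For context, a minimal sketch of dygraph sharding stage 2, the mode these overlap optimizations target; assumes the `group_sharded_parallel` API and an initialized collective environment (model/optimizer illustrative):

```python
import paddle
import paddle.distributed.fleet as fleet
from paddle.distributed.sharding import group_sharded_parallel

fleet.init(is_collective=True)
model = paddle.nn.Linear(64, 64)
optimizer = paddle.optimizer.AdamW(parameters=model.parameters())

# level="os_g" shards optimizer state and gradients (stage 2); the PRs
# above overlap its gradient reduce and parameter broadcast with compute.
model, optimizer, scaler = group_sharded_parallel(
    model, optimizer, level="os_g"
)
```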
Committed by Haohongxiang
  * [Dygraph] Fix performance of pp+mp by using send/recv_calc_stream instead of send/recv (#46116)
  * [Dygraph] Fix perf of FusedFeedForward and FusedAttention with AllReduce (#46780)
  * update
- 17 Oct 2022 (1 commit)
Committed by Wen Sun
  * Support both use_calc_stream and sync_op in send/recv APIs (#46023) (see the sketch below)
  * Support both use_calc_stream and sync_op in the allgather API (#46295)
  * Support both use_calc_stream and sync_op in collective communication APIs (#46761)
  * Move group and all reduce from collective to communication (#45848)
  * Complete the bfloat16 dtype for collective APIs in eager mode (#45844)
  * Fix collective APIs not being recognized when building docs (#46962)
  Co-authored-by: LiYuRio <63526175+LiYuRio@users.noreply.github.com>
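A hedged sketch of the two knobs, assuming the `paddle.distributed.stream` namespace introduced in this series (requires a multi-process launch, e.g. `python -m paddle.distributed.launch`):

```python
import paddle
import paddle.distributed as dist

dist.init_parallel_env()
x = paddle.ones([4]) * dist.get_rank()

# sync_op=False: asynchronous op on the communication stream;
# the returned task must be waited on before reading the result.
task = dist.stream.all_reduce(x, sync_op=False)
task.wait()

# use_calc_stream=True: run on the calculation stream, so no extra
# synchronization with subsequent kernels is needed.
dist.stream.all_reduce(x, sync_op=True, use_calc_stream=True)
```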
- 11 Oct 2022 (1 commit)
Committed by Yuang Liu
  * bug fix for virtual pipeline parallel (#45922)
  * don't wait for send op under dygraph pp (#46209)
  * [interleave pp] sync recv for 1f1b (#46399)
  * [dygraph pp] all sync for allgather partial (#46483)
- 27 Sep 2022 (2 commits)
Committed by zhaoyingli
Committed by LiYuRio
- 26 Sep 2022 (1 commit)
Committed by ziyoujiyi
  * back fl
  * delete ssl cert
  * make warning
  * unittest paral degree
  * solve unittest
  * heter & multi-cloud comm ready
  * fix gloo compile warning
  * adapt for nn fl-ps
  * fl-ps: del fake-init op
  * add learning_rate_0 initializer op
  * bug fix
- 22 Sep 2022 (3 commits)
Committed by Roc
  Uniform logger manager in FleetAPI; hide the APIs under distributed/utils that users don't need.
Committed by Haohongxiang
  * fix bugs of mp
  * update
  * fix bug
Committed by zhaoyingli
- 20 Sep 2022 (3 commits)
Committed by ziyoujiyi
  * back fl
  * delete ssl cert
  * make warning
  * unittest paral degree
  * solve unittest
  * heter & multi-cloud comm ready
  * fix gloo compile warning
  * adapt for nn fl-ps
  * fl-ps: del fake-init op
  * add learning_rate_0 initializer op
Committed by HongyuJia
  * polish code comments
  * polish data_device_transform.cc
Committed by zhaoyingli
  * [Auto Parallel] Change the import way of Auto Parallel (#46115)
  * fix strategy (#46256)
  * [Auto Parallel] performance improvement for Sharding-DP hybrid parallelism (#46180): remove unneeded grad-allreduce communication under sharding-dp; bugfix
  Co-authored-by: Yulong Ao <aoyulong@baidu.com>
  Co-authored-by: JZ-LIANG <jianzhongliang10@gmail.com>
- 19 Sep 2022 (6 commits)
Committed by wuhuachaocoding
Committed by Xiaoxu Chen
  * [cherry-pick] extend reduce_sum, reduce_mean, eq, ne, ge, abs, pow, etc. higher-order operators (see the sketch below)
  * add reduce_mean, reduce_sum primitive ops
  * add ne_p, gt_p primitive operators
  * add ge_p, abs_p primitive operators
  * add cast primitive operators
  * add pow, square prim2orig rules
  * add elementwise_div orig2prim rule
  * [cherry-pick] add mean, sum, ge, gt, ne, abs, etc. higher-order differentiation operators (#45888)
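These primitives back Paddle's experimental higher-order autodiff; a hedged sketch, assuming the `paddle.incubate.autograd` static-graph API of that era:

```python
import paddle
from paddle.incubate.autograd import enable_prim, grad

paddle.enable_static()
enable_prim()  # lower ops to primitives (reduce_sum_p, abs_p, ...)

main = paddle.static.Program()
with paddle.static.program_guard(main):
    x = paddle.static.data('x', shape=[3], dtype='float32')
    x.stop_gradient = False
    y = paddle.sum(paddle.abs(x) ** 2)
    dx = grad(y, x)    # first order
    ddx = grad(dx, x)  # higher order, via the primitive transform rules
```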
Committed by wuhuachaocoding
  * refactor mp.
  * update setup.py.
  * update mp_layers.py for compatibility (see the sketch below)
  * add documents for mp_layers.py
  * update init.py
  * update collective.py.
  * update mp_ops.py
  * update code style.
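A sketch of the tensor-parallel layers that live in mp_layers.py, assuming the names under `paddle.distributed.fleet.meta_parallel` (sizes and degrees illustrative; run with 2 mp ranks):

```python
import paddle.distributed.fleet as fleet
from paddle.distributed.fleet.meta_parallel import (
    ColumnParallelLinear,
    RowParallelLinear,
)

strategy = fleet.DistributedStrategy()
strategy.hybrid_configs = {"dp_degree": 1, "mp_degree": 2, "pp_degree": 1}
fleet.init(is_collective=True, strategy=strategy)

# Megatron-style pairing: column-parallel then row-parallel, keeping the
# intermediate activation sharded across the mp ranks.
fc1 = ColumnParallelLinear(64, 256, has_bias=True, gather_output=False)
fc2 = RowParallelLinear(256, 64, has_bias=True, input_is_parallel=True)
```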
Committed by Yulong Ao
  * [AutoParallel] adapt gradient merge pass (#45915): adapt gradient merge; fix op_role; fix strategy
  * [Auto Parallel] Gradient Fuse Allreduce (#45643)
  * bugfix (#45332)
  * dist embedding supports lookup table v1
  * add unittest
  * customize wait_comm
  * group gradients
  * bugfix
  * update program
  * [Auto Parallel] Improve the APIs (#45776)
  * [Auto Parallel] Use C++ dist attr in the completion process
  * [Auto Parallel] Add minor changes
  * [Auto Parallel] Add the serialization process for dist attrs
  * [Auto Parallel] Remove unnecessary comments
  * [Auto Parallel] Fix some bugs
  * [Auto Parallel] Fix the code style
  * [Auto Parallel] Remove unnecessary impls
  * [Auto Parallel] Fix the importing error
  * [Auto Parallel] Fix the copy-from bugs of op dist attr
  * [Auto Parallel] Replace the use of constexpr if
  * [Auto Parallel] Redesign shard_tensor, shard_op and ProcessMesh
  * [Auto Parallel] Change the API of the completion unittest
  * [Auto Parallel] Fix the bug when set_attr an int
  * [Auto Parallel] Add the unittest for the serialization
  * [Auto Parallel] Add some unit tests
  * [Auto Parallel] Unify the strategy
  * [Auto Parallel] Improve the Engine API
  * [Auto Parallel] Reset the changes made to the framework
  * [Auto Parallel] Change the engine unittest
  * [Auto Parallel] Update the APIs of the completion and partitioner
  * [Auto Parallel] Update unit tests to use the Engine API
  * update shard annotation
  * [Auto Parallel] Remove the modifications of other modules
  * [Auto Parallel] Add docs for APIs
  * add new strategy
  * [Auto Parallel] Replace the logger
  * [Auto Parallel] Restore test_program.py
  * [Auto Parallel] Change the import rules
  * [Auto Parallel] Add the examples for Engine
  * [Auto Parallel] Do some minor changes
  * [Auto Parallel] Remove the yaml dependency
  * [Auto Parallel] Fix the unittests
  * add valid after train
  * bug fix
  * [Auto Parallel] Bugfix allreduce fuse for MP (#46086): bugfix; typos fixed
  * update strategy (#46138)
  Co-authored-by: zhaoyingli <zhaoyingli@baidu.com>
  Co-authored-by: zhaoyingli <86812880+zhaoyinglia@users.noreply.github.com>
  Co-authored-by: caozhou <caozhou@radi.ac.cn>
  Co-authored-by: caozhou <48191911+Caozhou1995@users.noreply.github.com>
  Co-authored-by: JZ-LIANG <jianzhongliang10@gmail.com>
Committed by Chen Weihang
  * This reverts commit c252b1de.
Committed by ShenLiang
- 17 Sep 2022 (1 commit)
Committed by ziyoujiyi
  * back fl
  * delete ssl cert
  * make warning
  * unittest paral degree
  * solve unittest
  * heter & multi-cloud comm ready
  * fix gloo compile warning
  * adapt for nn fl-ps