- 20 Jul 2022 (1 commit)
  Committed by danleifeng
  * add adam/sharedadam optimizer for gpups; edit optimizer struct; test=develop
- 05 Jun 2022 (1 commit)
  Committed by Sing_chan
  * use yapf to format all python files
  * yapf excludes two unittest files because they rely on writing and reading files, and formatting would break them
  * disable diff_py_file because too many diff files caused the following command to fail
- 02 Jun 2022 (1 commit)
  Committed by ziyoujiyi
  * back fl
  * delete ssl cert
  * make warning
  * unittest parallel degree
  * solve unittest
  * heter & multi-cloud comm ready
  * fl-ps v1.0
  * support N + N mode
  * delete print
- 19 May 2022 (1 commit)
  Committed by danleifeng
- 12 May 2022 (1 commit)
  Committed by Shuangchi He
- 19 Apr 2022 (1 commit)
  Committed by wangguanqun
  * double accessor and show_scale
  * rename
  * fix bug in pslib config
  * add unittest
- 13 Apr 2022 (1 commit)
  Committed by wangguanqun
  * the one ps proto
  * fix
  * fix windows ci
  * add dependency
- 31 Mar 2022 (1 commit)
  Committed by wangguanqun
  * fix load bug and add distributed strategy from pslib
  * add unittest
  * use cvm config
  * trainer and worker config
  * add test
  * code style
- 17 Jan 2022 (1 commit)
  Committed by sneaxiy
  * add no-reduce mode for pe
  * add NoReduce ut
- 09 Dec 2021 (1 commit)
  Committed by wangguanqun
  * default accessor and multi table config
  * add unittest
  * delete print
- 06 Dec 2021 (1 commit)
  Committed by kuizhiqing
- 30 Nov 2021 (1 commit)
  Committed by zhaocaibei123
- 26 Nov 2021 (1 commit)
  Committed by zhaocaibei123
  * test
  * rm test
  * update
  * add unittest
  * update save
- 24 Nov 2021 (1 commit)
  Committed by zhaoyingli
  * adapt auto search
  * fix matmulv2 compatibility
  * delete debug code
- 08 Sep 2021 (1 commit)
  Committed by Yulong Ao
  * add auto_parallel dir and move it to paddle.distributed
  * add shard_xx api and distributed attrs for var, with unit tests
  * add auto-completion module for auto-parallel (based on PR#33804); make the completion support data-parallel, mp, and dp+mp
  * refactor the implementation of DistributedOperatorImpl; improve the dims_mapping update rule and fix a bug
  * support auto completion for one transformer decoder layer
  * shard XShape tensor, add embedding completion, and refactor code
  * add the distributed_operators dir to setup.py.in
  * improve the completion process and add the unittest for gpt
  * add support for automatically completing distributed attrs of special ops
  * model the cluster for cost model and physical mapping; add set_placement
  * add the check to make sure the candidate tensors' size is greater than zero
  * auto mark dist attrs annotated by user
  * fix bugs caused by shallow copy in attributes.py; improve amend_distributed_attr_for_program in context.py
  * support shard reader and parallel mode; update process mesh; add method to compute comm_group
  * implement dist_embedding, dist_matmul, and dist_reshape forward funcs
  * add transpiler framework, implement transpiler forward and backward, then rename transpiler --> partitioner
  * integrate all parts by AutoParallelizer and add a unit test for it
  * improve auto completion module for pipeline parallel
  * add support for matmul_v2 in dist_matmul
  * correct the typo "stratergy" to "strategy"
  * modify distributed_strategy.proto to conform to the mainstream; restore parts of distributed_strategy to conform to the develop branch
  * plus numerous incremental updates, doc fixes, and cleanups (test=develop)
  Co-authored-by: sandyhouse <lilong12@baidu.com>
  Co-authored-by: JZ-LIANG <jianzhongliang10@gmail.com>
- 20 Aug 2021 (1 commit)
  Committed by Yuang Liu
- 18 Aug 2021 (1 commit)
  Committed by WangXi
  [Hybrid Performance] Move the AMP cast op that casts the fp32 param to an fp16 param into the optimizer (#34965)
- 30 Jul 2021 (1 commit)
  Committed by wangguanqun
  * add trainer desc config to distributed strategy
  * fix code style
- 08 Jul 2021 (1 commit)
  Committed by Ming-Xu Huang
- 01 Jul 2021 (1 commit)
  Committed by Yuang Liu
- 21 Jun 2021 (1 commit)
  Committed by Yuang Liu
- 10 Jun 2021 (1 commit)
  Committed by Baibaifan
- 09 Jun 2021 (1 commit)
  Committed by wanghuancoder
  * modify API nn.Bilinear's doc, test=develop
- 07 Jun 2021 (1 commit)
  Committed by zhangchunle
- 26 May 2021 (1 commit)
  Committed by JZ-LIANG
- 17 May 2021 (1 commit)
  Committed by ShenLiang
  * fix precision of mp
  * fix bug of seed
  * fix dp
  * print group
- 11 May 2021 (1 commit)
  Committed by ShenLiang
  * fix find_unused_parameters default value
- 08 May 2021 (1 commit)
  Committed by lilong12
  * add raw program, test=develop
- 06 May 2021 (1 commit)
  Committed by zhiboniu
- 25 Apr 2021 (1 commit)
  Committed by lilong12
  * update
- 20 Apr 2021 (1 commit)
  Committed by JZ-LIANG
  * sharding: update config doc
  * update pipeline config
  * sharding: update doc
- 17 Apr 2021 (1 commit)
  Committed by ShenLiang
  * add model parallel support in dygraph
- 01 Apr 2021 (1 commit)
  Committed by ShenLiang
  * support control flow
  * support sync_parameters_buffers
  * fix the bug of sparse embedding
- 24 Feb 2021 (1 commit)
  Committed by lilong12
  * update, test=develop
- 01 Feb 2021 (1 commit)
  Committed by WangXi
- 12 Jan 2021 (1 commit)
  Committed by JZ-LIANG
- 09 Dec 2020 (1 commit)
  Committed by ShenLiang
  * add tensor_indices in AssignGroupBySize
  * add rebuild group in reducer
- 01 Dec 2020 (2 commits)
- 26 Nov 2020 (1 commit)
  Committed by JZ-LIANG
  * add lars to fleet meta optimizer
  * add lamb to proto and to fleet meta optimizer
  * fix syntax errors; add config setter for lamb in distributed_strategy
  * trigger unittest to rerun
  * add new unittest func for lamb; revise unittests for lars and lamb
  * revise dgc meta unittest
  * revise lars and lamb documentation in distributed_strategy.py
  * add weight decay exclude logic to lars; add lars epsilon
  * restore optimizer.py as develop except lars
  * add epsilon and exclude fn to distributed_strategy
  * revise lars and lamb unittests for CI coverage
  * revise lars argument api and its doc
  * fix op role
  * add sharding save and add_sync_comm_for_test function
  * add comm_analyse to utils; revise sharding_utils for unittest
  * add sharding saving unittest
  * revise sharding en doc; add doc for sharding; update sharding utils api
  * fix bug in sharding var size count; update varsize count in sharding
  * fix sharding num_nccl_comm
  * Revert "fix sharding num_nccl_comm" (reverts commit d51587c15e9323acf226ddd36154275f0d1daf76)