- 09 Nov 2022, 1 commit
  Committed by zhaoyingli
- 23 Oct 2022, 1 commit
  Committed by Nyakku Shigure
  * update config
  * re-blacken Python code
  * temporarily disable date and diff_py_file
  * skip a format
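The "skip a format" item refers to exempting a snippet from automatic re-formatting. A minimal, hypothetical sketch of how a block can be excluded when black is re-run over the tree; the snippet below is invented, since the commit does not show which lines were skipped:

```python
# Hypothetical example: black leaves everything between these markers untouched,
# which is useful when a hand-aligned literal reads better than the
# auto-formatted version.
# fmt: off
identity_2x2 = [
    1, 0,
    0, 1,
]
# fmt: on

print(identity_2x2)  # [1, 0, 0, 1]
```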
- 12 Oct 2022, 1 commit
  Committed by Nyakku Shigure
  * [CodeStyle][F401] remove unused imports in python/paddle/distributed
  * remove pass
  * empty commit
  * fix ValueError: list.remove(x): x not in list for meta_optimizer_names
  * fix split import
  * add noqa after meta_optimizers in factory
  * restore collective ops
  * expand `import *`
  * add noqa after required imports
  * try to fix APIs without core.ops
  * Revert "try to fix APIs without core.ops" (reverts commit 6172beaf601e84bf61f2490c12c4739f0edaa5eb)
  * fix an increment
  * empty commit
  * add noqa after required imports
  * expand `import *`, fix ci error

  Co-authored-by: Shuangchi He <34329208+Yulv-git@users.noreply.github.com>
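The ValueError fix above guards a `list.remove` call that previously assumed the element was always present. A minimal sketch of that guard pattern, with an invented optimizer list standing in for Paddle's actual factory code:

```python
# Illustrative only: the list contents are made up; only the guard pattern
# reflects the fix described in the commit message.
meta_optimizer_names = ["AMPOptimizer", "RecomputeOptimizer"]

def remove_optimizer(name):
    # list.remove(x) raises "ValueError: list.remove(x): x not in list"
    # when x is absent, so check membership first (or catch the ValueError).
    if name in meta_optimizer_names:
        meta_optimizer_names.remove(name)

remove_optimizer("DGCOptimizer")  # absent: skipped instead of raising
remove_optimizer("AMPOptimizer")  # present: removed as before
print(meta_optimizer_names)       # ['RecomputeOptimizer']
```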
- 14 Sep 2022, 1 commit
  Committed by Nyakku Shigure
  * trim trailing whitespace
  * fix `.cmake-format.py`
  * revert npu ut changes, avoid npu ci error
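Trailing-whitespace cleanup like this is normally automated (for example via a pre-commit hook). Purely as an illustration, the same operation on a single file could look like the hypothetical helper below, which is not the repository's actual tooling:

```python
# Illustrative only: strips trailing spaces and tabs from every line of one file.
from pathlib import Path

def trim_trailing_whitespace(path):
    text = Path(path).read_text()
    cleaned = "\n".join(line.rstrip() for line in text.splitlines())
    Path(path).write_text(cleaned + "\n")

trim_trailing_whitespace(".cmake-format.py")  # the file named in the commit above
```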
- 12 Aug 2022, 1 commit
  Committed by caozhou
  * update reshard for auto search
  * fix unittest bug
  * update dist tensor
  * update reshard output
  * fix unittests bug
  * merge develop
- 05 Jun 2022, 1 commit
  Committed by Sing_chan
  * use yapf to format all Python files
  * exclude two unittest files from yapf because they rely on writing and reading files, and formatting would break them
  * disable diff_py_file because too many diff files cause the following command to fail
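A minimal sketch of the bulk yapf run the commit describes; the target directory and invocation here are assumptions rather than the repository's actual formatting script:

```python
# Illustrative only: formats a source tree in place with yapf.
# --in-place and --recursive are yapf CLI flags; the target path is an assumption.
import subprocess

subprocess.run(
    ["yapf", "--in-place", "--recursive", "python/paddle/distributed/"],
    check=True,
)
```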
- 01 Jun 2022, 1 commit
  Committed by Yulong Ao
  * [Auto Parallel] Add the parallel tuner
  * [Auto Parallel] Improve the parallel tuner and fix some bugs
  * update cost model
  * update import Resharder by dist op
  * update cost model
  * fix comp cost bug
  * update cost model
  * [Auto Parallel] Amend the dist attr for #processes=1
  * update cost model and tuner
  * update cost model and tuner
  * update cost model and tuner
  * update cluster
  * update reshard
  * [Auto Parallel] Add the estimation from the cost model
  * [Auto Parallel] Reimplement the backup and restore functions
  * [Auto Parallel] Fix the bugs of the parallel tuner
  * [Auto Parallel] Update the engine api and dist context
  * [Auto Parallel] Work around the high order grad problem
  * [Auto Parallel] Add some miscellaneous improvements
  * [Auto Parallel] Add a unittest for DistributedContext

  Co-authored-by: caozhou <caozhou@radi.ac.cn>
- 16 Mar 2022, 1 commit
  Committed by Yulong Ao
  * [Auto Parallel] Support the auto completion of while_op
  * [Auto Parallel] Improve the completion algorithms
  * [Auto Parallel] Fix bugs for ernie inference
  * [Auto Parallel] Remove attrs which cannot be pickled
  * [Auto Parallel] Make the dims_mappings of LodTensorArray vars empty
  * [Auto Parallel] Fix bugs for the ernie inference in pipeline parallel
  * [Auto Parallel] Remove unnecessary comments
  * [Auto Parallel] Fix a bug of the CMakeLists
  * [Auto Parallel] Use the newest APIs to write the unit test
  * [Auto Parallel] Remove unnecessary statements
- 11 Jan 2022, 1 commit
  Committed by caozhou
  * update dist tensor
  * add unit test
  * update unit test
  * refactor dist tensor
  * update dist tensor and unit test
- 24 Nov 2021, 1 commit
  Committed by zhaoyingli
  * adapt auto search
  * adapt auto search
  * fix matmulv2 compatibility
  * del debug
- 29 Oct 2021, 1 commit
  Committed by Yulong Ao
  * default dist op
  * add dist_attr for dist op
  * add unit test
  * update inputname
  * update function name
  * add unit test
  * update CMakeLists.txt for CI
  * fix dist_matmul
  * fix compile error
  * update matmul to matmul_v2
  * unify api
  * unify api
  * todo
  * update distop forward func
  * update distop forward func
  * auto parallel backward
  * update dist op
  * autoparallel backward
  * add backward for embedding
  * temp1
  * temp2
  * temp3
  * temp4
  * backward done1
  * backward done2
  * backward done3
  * dist embedding remove mp mode
  * dist matmul remove mp mode
  * update dist embedding
  * dist op init1
  * dist op init 2
  * update unit test
  * context remove parallel mode
  * partitioner remove parallel mode
  * update unit test
  * a more general method to support varying mesh in pipeline parallel
  * support varying mesh in pipeline parallel
  * embedding support varying mesh in pipeline parallel
  * matmul support varying mesh in pipeline parallel
  * default dist op support varying mesh in pipeline parallel
  * dist attribute for startup program
  * default dist op support varying mesh in pipeline parallel 2
  * partitioner support varying mesh in pipeline parallel
  * revise logic for auto completion
  * revise framework.py
  * revise reshard unit test
  * revise unit test for parallelize
  * chmod
  * fix bug for dist embedding name mapping
  * Improve the interface and the underlying mechanisms of auto parallel
  * revise completion for backward
  * revise completion for update
  * revise completion for update
  * update unit test
  * chmod
  * bugfix for grad_op output var's mesh
  * Modify codes for pr 36744
  * Remove unnecessary comments in framework.py
  * Remove unnecessary comments in completion.py

  Co-authored-by: JZ-LIANG <jianzhongliang10@gmail.com>
  Co-authored-by: zhaoyingli <zhaoyingli@baidu.com>
  Co-authored-by: JZ-LIANG <38102074+JZ-LIANG@users.noreply.github.com>