- 12 Apr 2023, 1 commit
Committed by Yulong Ao
* [Auto Parallel] Speedup the completion process
* [Auto Parallel] Skip the property of dist_context when deepcopying
* [Auto Parallel] Remove the unnecessary print
* [Auto Parallel] Move some changes from 2.4 branch to develop
* Update engine.py
* [Auto Parallel] Fix a bug
- 31 Mar 2023, 1 commit
Committed by 张春乔
* autofix
  Co-authored-by: Liyulingyue <83450930+Liyulingyue@users.noreply.github.com>
* revert changes in python/paddle/distributed/fleet/utils/hybrid_parallel_util.py
* empty commit, trigger ci
* fix test_slice
Co-authored-by: SigureMo <sigure.qaq@gmail.com>
- 27 Mar 2023, 1 commit
Committed by Infinity_lee
[CodeStyle][C413][C414] Unnecessary <list/reversed> call around sorted(), <list/reversed/set/sorted/tuple> call within <list/set/sorted/tuple>() (#52065)
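The C413/C414 rules referenced above flag redundant call wrappers. The snippet below is an illustrative sketch of the patterns involved; the data and variable names are invented, not taken from PR #52065.

```python
# Illustrative only -- not code from the PR; names and data are made up.
items = [3, 1, 2]

# C413: list() around sorted() is redundant, since sorted() already returns a list.
before_c413 = list(sorted(items))
after_c413 = sorted(items)

# C413: reversed(sorted(...)) can be written as sorted(..., reverse=True).
before_c413_rev = list(reversed(sorted(items)))
after_c413_rev = sorted(items, reverse=True)

# C414: an inner list() call inside sorted() adds nothing.
before_c414 = sorted(list(items))
after_c414 = sorted(items)

assert after_c413 == after_c414 == [1, 2, 3]
assert before_c413_rev == after_c413_rev == [3, 2, 1]
```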
- 22 Mar 2023, 1 commit
Committed by Ainavo
* replace assert false with AssertionError
* modify the redundant parts of the configuration file
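As a hedged illustration of the first change (the dispatcher below is hypothetical, not code from the PR): replacing a bare `assert False` with an explicit `raise AssertionError` keeps the guard active even when Python runs with the `-O` flag, which strips `assert` statements.

```python
# Hypothetical dispatcher showing the pattern being replaced.
def run(mode):
    if mode == "train":
        return "running trainer"
    elif mode == "eval":
        return "running evaluator"
    else:
        # Before: assert False, f"unsupported mode: {mode}"
        # `assert` disappears under `python -O`; raising directly always fires.
        raise AssertionError(f"unsupported mode: {mode}")


print(run("train"))
```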
- 20 Mar 2023, 1 commit
Committed by GGBond8488
* migrate fill_constant to paddle.tensor
* move fill_constant to paddle.tensor and replace the references
* add missing fill_constant replacement
* fix typo
* remove unused import of fill_constant
* fix zeros import error
* fix circular import
* fix layers.zeros
* fix unit tests
* fix unit tests
* fix unit tests
* use paddle.full to replace fill_constant in sample code
* fix sample code
* recover xpu test
* recover xpu test
* fix circular import
* fix utils import error
* fix utils error
* fix circular import
* redo
* fix circular import
* fix prim fill_constant import
* fix type error
* fix increase error
* fix test error
* fix fill_constant
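As a rough illustration of the sample-code migration described above (the shape and fill value are invented, not taken from the PR), a legacy `fill_constant` call maps onto the public `paddle.full` API like this:

```python
import paddle

# Old style being migrated away from in these commits:
#   x = paddle.fluid.layers.fill_constant(shape=[2, 3], dtype='float32', value=1.0)
# New style using the public tensor-creation API:
x = paddle.full(shape=[2, 3], fill_value=1.0, dtype='float32')
print(x)  # a 2x3 float32 tensor filled with 1.0
```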
- 27 Feb 2023, 1 commit
Committed by chenxujun
- 16 Feb 2023, 1 commit
Committed by shentanyue
* support xpu multi-card infer
* add ut
* clean code
* clean code
* fix
* fix
* fix
* fix
- 10 Jan 2023, 1 commit
Committed by Yulong Ao
* [Auto Parallel] Remove some fluid APIs
* [Auto Parallel] Fix the wrong import
* [Auto Parallel] Remove unnecessary comments
* [Auto Parallel] Fix the importing bug
- 04 Jan 2023, 1 commit
Committed by JZ-LIANG
* remove deps and prior comm
* grad comm fuse
* add deps for amp & global norm
* stage2 broadcast prior deps
* stage2 grad overlap
* stream_analyzer bugfix
* overlap enable
* dep op namescope
* depend support multiple inputs
* check finite deps
* stage2 param comm overlap
* Set kD2HStream
* grad comm hierarchical
* grad comm hierarchical
* new unittest
Co-authored-by: chenruibiao <chenruibiao@baidu.com>
- 25 Dec 2022, 1 commit
Committed by wanghuancoder
* delete legacy dygraph code in python/paddle/distributed
* refine
- 29 Nov 2022, 1 commit
Committed by Nyakku Shigure
* isort all files
* revert conflicting files
* revert conflicting files
* revert conflicting files
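For context, isort groups and alphabetizes imports. The before/after sketch below is illustrative only; the import set is invented and not taken from the repository.

```python
# Before isort, imports could be interleaved:
#   import sys
#   from paddle.distributed import fleet
#   import os
# After isort, standard-library imports come first, then third-party imports,
# each group sorted alphabetically:
import os
import sys

from paddle.distributed import fleet
```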
- 01 Nov 2022, 2 commits
Committed by Nyakku Shigure
* [CodeStyle][E711] use `is`/`is not` for comparison with `None`
* `self.assertTrue($A is None)` -> `self.assertIsNone($A)`
* `self.assertTrue($A is not None)` -> `self.assertIsNotNone($A)`
* `self.assertFalse($A is None)` -> `self.assertIsNotNone($A)`
* `self.assertEqual($A, None)` -> `self.assertIsNone($A)`
* `self.assertNotEqual($A, None)` -> `self.assertIsNotNone($A)`
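A hedged illustration of the E711 rewrites listed above; the test case is invented, not code from the repository.

```python
import unittest


class NoneComparisonExample(unittest.TestCase):
    def test_none_checks(self):
        result = None
        value = "ready"

        # Before: if result == None: ...   (E711)
        # After: identity comparison with None.
        if result is None:
            result = value

        # Before: self.assertTrue(result is not None) / self.assertNotEqual(result, None)
        # After: the dedicated unittest helpers.
        self.assertIsNotNone(result)
        self.assertIsNone(None)


if __name__ == "__main__":
    unittest.main()
```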
Committed by Nyakku Shigure
* [CodeStyle][E712] use `if cond`/`if cond is True` for comparison with `True`
* revert changes in fluid
* revert unrelated file
* revert changes in norm
* revert changes in auto_parallel_amp
* fix norm and auto_parallel_amp
* revert a typo fix that was already fixed in #47477
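A hedged sketch of the E712 pattern above; the flag names are invented.

```python
use_amp = True
strict_mode = False

# Before: if use_amp == True: ...        (E712)
if use_amp:
    print("amp enabled")

# Before: if strict_mode == False: ...
if not strict_mode:
    print("strict mode disabled")
```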
- 23 Oct 2022, 1 commit
Committed by Nyakku Shigure
* update config
* re-blacken python code
* temporarily disable date and diff_py_file
* skip a format
- 11 Oct 2022, 1 commit
Committed by caozhou
- 21 Sep 2022, 1 commit
Committed by LiYuRio
- 12 Aug 2022, 1 commit
Committed by JZ-LIANG
* bugfix
* remove scaling
* support rescale_grad opt
- 29 Jul 2022, 1 commit
Committed by JZ-LIANG
* fixed bug for pass & engine
* fixed bug for benchmark GPT-3
* add tuner & profiler
* add algorithms & config
- 21 Jul 2022, 1 commit
Committed by zhaoyingli
* fix unittest
* fix log_dir
* _enable_legacy_dygraph
- 13 Jul 2022, 1 commit
Committed by caozhou
* add comm init control by socket
* avoid single card instance failure
- 05 Jun 2022, 1 commit
Committed by Sing_chan
* use yapf to format all python files
* exclude two unittest files from yapf because they rely on writing and reading files, and formatting would break them
* disable diff_py_file because too many changed files made the subsequent command fail
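As a hedged sketch of the formatting pass described above (the function and the exact invocation are illustrative, not taken from the PR), running yapf in place normalizes spacing and wrapping:

```python
# Running something like `yapf -i -r python/` rewrites files in place.
# A cramped one-liner such as
#   def add(a,b):return a+b
# becomes the normalized form:
def add(a, b):
    return a + b


print(add(1, 2))  # 3
```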
- 12 May 2022, 1 commit
Committed by Shuangchi He
- 25 Mar 2022, 1 commit
Committed by Jiabin Yang
* refactor eager flags
* fix flags error when we switch from eager to dygraph
* fix ci problem
* fix ci
* fix ci
* merge develop and fix code style
* merge develop and fix code style
* fix op test error
* fix op test error
* fix op test error
* fix op test error
* fix op test error
* merge develop
- 12 Jan 2022, 1 commit
Committed by JZ-LIANG
* auto parallel sharding base
* chmod
* add unittest
* set unittest cmake dist label
* revise code according to review
* chmod
* bugfix for grad_clip and param broadcast
* chmod
* update unittest
* chmod
* add clip
* chmod
* add amp pass
* chmod
* add unittest
* remove grad update
* fixed bug
* fixed bug
* fixed typos
* fixed typos
- 17 Dec 2021, 1 commit
Committed by caozhou
* add planner
* add planner
* add cost model update
* add relaunch update
* update process_group
* fix error
* add unittest
* update unittest
* update cost model
* avoid api problem
- 07 Dec 2021, 1 commit
Committed by Yulong Ao
* [Auto Parallel] Add the unified cluster representation
* [Auto Parallel] Add the graph class for physical mapping
* [Auto Parallel] Add the simple physical mapper
* Set the timeout of the mapper
* Merge the upstream develop unittests cmake files
* Fix a bug of the process group
* Remove mapper unittest from platforms which is not GPU
* Move the instantiation of process group after resharding
* Add the local id for devices
* Update the rank mapping format
* [Auto Parallel] Relaunch with the rank mapping file
* Remove the unnecessary json file
* Avoid entering get_device_proc_info for auto mapping
* Correct the mapper unit test
* Add some comments
* Remove the related files about mapping
* Update the unittest for auto mapping
* Remove unused rank_mapping unittest
* Improve the unittest coverage
* Improve the unittest coverage
* Improve the unittest of relaunch
* Fix the unittest problem in CI
* Improve the unittest of relaunch
* Remove unnecessary statements
* Update the unittest cmakefile
* Correct the cmakefile of auto parallel unittests
* Modify codes based on the new elastic change
* Use the GPUs exclusively in the unittest
* Correct the cmakefile
* Set the timeout of the unittest
- 27 Nov 2021, 1 commit
Committed by Yulong Ao
* [Auto Parallel] Add the unified cluster representation
* [Auto Parallel] Add the graph class for physical mapping
* [Auto Parallel] Add the simple physical mapper
* Set the timeout of the mapper
* Merge the upstream develop unittests cmake files
* Fix a bug of the process group
* Remove mapper unittest from platforms which is not GPU
* Move the instantiation of process group after resharding
* Add the local id for devices
* Update the rank mapping format
* Add some comments
* Remove the related files about mapping
* Remove unused rank_mapping unittest
* Improve the unittest coverage
- 29 Oct 2021, 1 commit
Committed by Yulong Ao
* default dist op
* add dist_attr for dist op
* add unittest
* update inputname
* update function name
* add unittest
* update CMakeLists.txt for CI
* fix dis_matmul
* fix compile error
* update matmul to matmul_v2
* unify api
* unify api
* todo
* update distop forward func
* update distop forward func
* auto parallel backward
* update dist op
* autoparallel backward
* add backward for embedding
* temp1
* temp2
* temp3
* temp4
* backward done1
* backward done2
* backward done3
* dist embedding remove mp mode
* dist matmul remove mp mode
* update dist embedding
* dist op init1
* dist op init 2
* update unittest
* context remove parallel mode
* partitioner remove parallel mode
* update unittest
* a more general method to support varying mesh in pipeline parallel
* support varying mesh in pipeline parallel
* embedding support varying mesh in pipeline parallel
* matmul support varying mesh in pipeline parallel
* default dist op support varying mesh in pipeline parallel
* dist attribute for startup program
* default dist op support varying mesh in pipeline parallel 2
* partitioner support varying mesh in pipeline parallel
* revise logic for auto completion
* revise framework.py
* revise reshard unittest
* revise unittest for parallelize
* chmod
* fixed bug for dist embedding name mapping
* Improve the interface and the underlying mechanisms of auto parallel
* revise completion for backward
* revise completion for update
* revise completion for update
* update unittest
* chmod
* bugfix for grad_op output var's mesh
* Modify codes for pr 36744
* Remove unnecessary comments in framework.py
* Remove unnecessary comments in completion.py
Co-authored-by: JZ-LIANG <jianzhongliang10@gmail.com>
Co-authored-by: zhaoyingli <zhaoyingli@baidu.com>
Co-authored-by: JZ-LIANG <38102074+JZ-LIANG@users.noreply.github.com>
- 02 Sep 2021, 1 commit
Committed by JZ-LIANG
* support shard reader
* support shard reader
* add parallel mode
* update process mesh
* add method to compute comm_group
* implement dist_embedding forward func
* implement dist matmul forward func
* implement dist reshape forward func
* add transpiler framework
* add transpiler forward
* implement transpiler forward
* implement transpiler backward & update
* add process
* add unittest
* chmod
* chmod
* chmod
* update unittest
* add unittest for gpt
* remove unused print
* rename transpiler --> partitioner
* rename transpiler --> partitioner
* chmod
* chmod
* bug fixed
* remove amp function
* update case for dp mode
* update case for dp mode