- 14 Apr, 2023 (1 commit)

Committed by JZ-LIANG

* pr1
* pr2
* pr3
* fixed unittest
* adapt for scale

- 19 Oct, 2022 (1 commit)

Committed by zhaoyingli

* [Auto Parallel] Make Engine class callable (#46416)
* [Auto Parallel] Improve the user-defined fetches and logging
* [Auto Parallel] Make Engine class callable
* [Auto Parallel] Update the data loading of tuner
* Print IPS in auto parallel Engine (#46554)
* [AutoParallel] fix dist_split (#46505)
* [AutoParallel] fix dist_split
* add unittest
* update cmakelist
* [AutoParallel] fix sharding (#46572)
* [AutoParallel] fix process_mesh (#46583)
* [AutoParallel] fix reshard when train with eval (#46605)
* [AutoParallel] fix reshard when train with eval
* fix mppp
* [AutoParallel] fix amp when predict (#46637)
* [Auto Parallel] Update comp cost and completion for gpt auto search (#46387)
* update comp cost and completion for gpt auto search
* add unittest
* [Auto Parallel] Fix bugs caused by the inconsistent outputs of Engine API (#46633)
* [Auto Parallel] Unify the logger and outputs of Engine API
* [Auto Parallel] Fix the bugs of to_static
* [Auto Parallel] Adjust the test_to_static.py
* [Auto Parallel] Improve the fine-grained APIs (#46552)
* [Auto Parallel] Support different dataloaders
* [Auto Parallel] Add num_shards config for dataset
* [Auto Parallel] Unify the logger and outputs of Engine API
* [Auto Parallel] Fix the bugs of to_static
* [Auto Parallel] Adjust the test_to_static.py
* [Auto Parallel] Add the prepare API and replace __call__ with run
* [Auto Parallel] Improve the private implementations of Engine
* [Auto Parallel] Set capacity of dataloader for opt tuning
* [Auto Parallel] [WIP] Change the fine-grained API
* [Auto Parallel] Improve APIs to support different user cases
* [Auto Parallel] Add removed config
* [Auto Parallel] Add imports
* [Auto Parallel] Fix bugs for to_static
* [Auto Parallel] Remove unnecessary imports
* bugfix (#46921)
* [Auto Parallel] Fix the bug for None labels (#46987)
* [AutoParallel] adapt for gpt-gen (#46771)
* for gpt-gen
* fix reshard
* adapt assign and shape op
* add dist_assign & unittest
* add conditional block unittest
* rename unittest
* [Auto Parallel] Fix the bug of completion (#47056)
* [Auto Parallel] Fix the bug for None labels
* [Auto Parallel] Fix the completion bug
* [AutoParallel] add callbacks (#47014)
* [AutoParallel] add callbacks
* fix unittest
* fix dist_context
* fix engine
* fix cmakelist
* fix unittest's returns
* fix cmakelist
* [Auto Parallel] Add cost interface (#47043)
* add cost interface
* update interface and add unittest
* update unittest
* update interface
* [Auto Parallel] Add parallel tuner (#46189)
* add parallel tuner
* add unittest
* fix unittest
* set timeout of unittest
* set unittest timeout
* fix auto_mode setting
* update unittest
* sync from develop and update unittest
* remove unused import
* update unittest
* update cmakelist
* add unittests

Co-authored-by: Yulong Ao <aoyulong@baidu.com>
Co-authored-by: Ruibiao Chen <chenruibiao@baidu.com>
Co-authored-by: caozhou <48191911+Caozhou1995@users.noreply.github.com>
Co-authored-by: JZ-LIANG <jianzhongliang10@gmail.com>
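Many of the entries above revolve around the high-level auto-parallel Engine (the callable Engine, fit/evaluate/predict, and the fine-grained prepare/run pair). For orientation, here is a minimal sketch of that workflow, assuming the paddle.distributed.fleet.auto module of roughly the Paddle 2.4 era; the MLP model, dataset, and every hyperparameter below are hypothetical placeholders, not taken from these commits:

```python
# Hedged sketch of the auto-parallel Engine workflow referenced above.
# Assumes Paddle ~2.4 with paddle.distributed.fleet.auto; the model,
# data, and hyperparameters are hypothetical placeholders.
import paddle
from paddle.distributed.fleet import auto


class MLP(paddle.nn.Layer):
    def __init__(self):
        super().__init__()
        self.fc1 = paddle.nn.Linear(64, 64)
        self.fc2 = paddle.nn.Linear(64, 10)

    def forward(self, x):
        return self.fc2(paddle.nn.functional.relu(self.fc1(x)))


model = MLP()
loss = paddle.nn.CrossEntropyLoss()
optimizer = paddle.optimizer.Adam(
    learning_rate=1e-3, parameters=model.parameters())

# The commits above converge on Engine as the single entry point:
# fit/evaluate/predict for end-to-end runs, plus the fine-grained
# prepare/run pair that replaced the earlier __call__ style.
engine = auto.Engine(model, loss, optimizer, strategy=auto.Strategy())
# train_dataset would be a paddle.io.Dataset yielding (x, label) pairs:
# engine.fit(train_dataset, epochs=2, batch_size=8)
```
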
- 05 Sep, 2022 (1 commit)

Committed by zhaoyingli

* dist_matmul trans
* update unittest
* update cmakelist

- 16 Aug, 2022 (1 commit)

Committed by caozhou

* update reshard cost and cost estimator
* add unittest
* add dropout cost
* fix import error
* fix reshard code style error
* improve unittest coverage

- 09 Aug, 2022 (1 commit)

Committed by caozhou

* add mul dist op cost
* add mul unittest

- 03 Aug, 2022 (1 commit)

Committed by JZ-LIANG

- 29 Jul, 2022 (1 commit)

Committed by caozhou

- 13 Jul, 2022 (1 commit)

Committed by JZ-LIANG

* avoid sync with cpp in partition op
* delay eval & predict mode
* bugfix for gradient merge pass

- 07 Jul, 2022 (1 commit)

Committed by zhaoyingli

* fix op_role
* fix engine
* update op_role

- 05 Jun, 2022 (1 commit)

Committed by Sing_chan

* use yapf to format all Python files
* exclude two unittest files from yapf, because they rely on writing and reading files and formatting would break them
* disable diff_py_file, because too many diff files cause the following command to fail

- 07 May, 2022 (1 commit)

Committed by Yulong Ao

* [Auto Parallel] Replace the old planner with the new partition tuner
* [Auto Parallel] Improve the completion and distributed context
* [Auto Parallel] Fix some bugs in the compatibility check of some dist ops
* [Auto Parallel] Fix some bugs

- 25 Mar, 2022 (2 commits)

Committed by JZ-LIANG

* [Auto Parallel] Support the auto completion of while_op
* align infer accuracy

Committed by Jiabin Yang

* refactor eager flags
* fix flags error when we switch from eager to dygraph
* fix ci problem
* fix ci
* fix ci
* merge develop and fix code style
* merge develop and fix code style
* fix op test error
* fix op test error
* fix op test error
* fix op test error
* fix op test error
* merge develop

- 23 Mar, 2022 (1 commit)

Committed by Yulong Ao

* [Auto Parallel] Add distributed mul for the old version

- 16 Mar, 2022 (1 commit)

Committed by Yulong Ao

* [Auto Parallel] Support the auto completion of while_op
* [Auto Parallel] Improve the completion algorithms
* [Auto Parallel] Fix bugs for ernie inference
* [Auto Parallel] Remove attrs which cannot be pickled
* [Auto Parallel] make the dims_mappings of LodTensorArray vars empty
* [Auto Parallel] Fix bugs for the ernie inference in the pipeline parallel
* [Auto Parallel] Remove unnecessary comments
* [Auto Parallel] Fix a bug of the CMakeLists
* [Auto Parallel] Use the newest APIs to write the unit test
* [Auto Parallel] Remove unnecessary statements

- 02 Mar, 2022 (1 commit)

Committed by JZ-LIANG

* adapt dist op
* add dist_fill_constant_batch_size_like
* remove print
* update compatible
* add unittest

- 22 Feb, 2022 (1 commit)

Committed by JZ-LIANG

* add subblock logic for context and partitioner
* partitioner support sub blocks
* revise typos
* fixed param init bug for while
* chmod 644
* add unittest
* mv forward parser
* update unittest
* update dist op ctx
* update dist op ctx
* fixed bug in dist op ctx
* fixed bug for recompute subblock

- 27 Jan, 2022 (1 commit)

Committed by caozhou

* update planner
* update unittest
* update dist matmul
* update auto converter

- 20 Jan, 2022 (1 commit)

Committed by Yulong Ao

* Add the backward support for QR
* Remove unnecessary comments
* [Auto Parallel] Improve the dist op interface and compatible computation
* Remove unnecessary modification
* Recover some modifications
* Add lost files
* Fix a minor bug
* Fix the bug of the planner
* Fix the format problem

- 18 Jan, 2022 (1 commit)

Committed by zhaoyingli

* [AutoParallel] Recompute Pass
* update unittest
* reshard for amp
* add comment

- 30 Dec, 2021 (1 commit)

Committed by Yulong Ao

* [Auto parallel] Make the id of var and op unique
* [Auto Parallel] Rename back dist_context to distop_context

- 29 Dec, 2021 (1 commit)

Committed by JZ-LIANG

* auto parallel sharding base
* chmod
* add unittest
* set unittest cmake dist label
* revise code according to review
* chmod

- 24 Dec, 2021 (1 commit)

Committed by JZ-LIANG

- 17 Dec, 2021 (1 commit)

Committed by caozhou

* add planner
* add planner
* add cost model update
* add relaunch update
* update process_group
* fix error
* add unittest
* update unittest
* update cost model
* avoid api problem

- 10 Dec, 2021 (1 commit)

Committed by 沉潜的鱼儿

* dist matmul op compatible
* modify common dist op
* modify common
* add a space

- 24 Nov, 2021 (1 commit)

Committed by zhaoyingli

* adapt auto search
* adapt auto search
* fix matmulv2 compatible
* del debug

- 29 Oct, 2021 (1 commit)

Committed by Yulong Ao

* default dist op
* add dist_attr for dist op
* add unittest
* update inputname
* update function name
* add unittest
* update CMakeLists.txt for CI
* fix dist_matmul
* fix compile error
* update matmul to matmul_v2
* unify api
* unify api
* todo
* update distop forward func
* update distop forward func
* auto parallel backward
* update dist op
* autoparallel backward
* add backward for embedding
* temp1
* temp2
* temp3
* temp4
* backward done1
* backward done2
* backward done3
* dist embedding remove mp mode
* dist matmul remove mp mode
* update dist embedding
* dist op init1
* dist op init 2
* update unittest
* context remove parallel mode
* partitioner remove parallel mode
* update unittest
* a more general method to support varying mesh in pipeline parallel
* support varying mesh in pipeline parallel
* embedding support varying mesh in pipeline parallel
* matmul support varying mesh in pipeline parallel
* default dist op support varying mesh in pipeline parallel
* dist attribute for startup program
* default dist op support varying mesh in pipeline parallel 2
* partitioner support varying mesh in pipeline parallel
* revise logic for auto completion
* revise framework.py
* revise reshard unittest
* revise unittest for parallelize
* chmod
* fixed bug for dist embedding name mapping
* Improve the interface and the underlying mechanisms of auto parallel
* revise completion for backward
* revise completion for update
* revise completion for update
* update unittest
* chmod
* bugfix for grad_op output var's mesh
* Modify codes for pr 36744
* Remove unnecessary comments in framework.py
* Remove unnecessary comments in completion.py

Co-authored-by: JZ-LIANG <jianzhongliang10@gmail.com>
Co-authored-by: zhaoyingli <zhaoyingli@baidu.com>
Co-authored-by: JZ-LIANG <38102074+JZ-LIANG@users.noreply.github.com>

- 20 Oct, 2021 (1 commit)

Committed by JZ-LIANG

* default dist op
* add dist_attr for dist op
* add unittest
* update inputname
* update function name
* add unittest
* update CMakeLists.txt for CI
* fix dist_matmul
* fix compile error
* update matmul to matmul_v2
* unify api
* unify api
* todo
* update distop forward func
* update distop forward func
* auto parallel backward
* update dist op
* autoparallel backward
* add backward for embedding
* temp1
* temp2
* temp3
* temp4
* backward done1
* backward done2
* backward done3
* dist embedding remove mp mode
* dist matmul remove mp mode
* update dist embedding
* dist op init1
* dist op init 2
* update unittest
* context remove parallel mode
* partitioner remove parallel mode
* update unittest
* a more general method to support varying mesh in pipeline parallel
* support varying mesh in pipeline parallel
* embedding support varying mesh in pipeline parallel
* matmul support varying mesh in pipeline parallel
* default dist op support varying mesh in pipeline parallel
* dist attribute for startup program
* default dist op support varying mesh in pipeline parallel 2
* partitioner support varying mesh in pipeline parallel
* revise logic for auto completion
* revise framework.py
* revise reshard unittest
* revise unittest for parallelize
* chmod
* fixed bug for dist embedding name mapping

Co-authored-by: zhaoyingli <zhaoyingli@baidu.com>

- 15 Sep, 2021 (1 commit)

Committed by zhaoyingli

* add dist_attr for dist op
* add unittest
* update inputname
* update function name
* add unittest
* update CMakeLists.txt for CI
* fix dist_matmul
* fix compile error
* update matmul to matmul_v2

- 08 Sep, 2021 (1 commit)

Committed by Yulong Ao

* add auto_parallel dir
* mv to paddle.distributed
* add shard_xx api
* add distributed attrs for var
* add ut, test=develop
* add dist
* update
* update
* update
* update
* update
* update, test=develop
* update, test=develop
* update, test=develop
* update, test=develop
* update, test=develop
* update, test=develop
* update, test=develop
* update
* update
* update
* update
* update
* update, test=develop
* update, test=develop
* update
* update
* delete unused proto
* restore op_desc
* restore type_defs
* update var_desc
* remove dims_mapping for proto_pybind
* update interface.py
* update framework.py
* update
* update
* add auto_parallel dir
* mv to paddle.distributed
* add shard_xx api
* add distributed attrs for var
* add ut, test=develop
* [WIP] Add the auto completion feature and related codes
* [WIP] Improve the auto completion and related codes
* [WIP] Make the auto completion support data-parallel
* [WIP] Make the completion support mp and dp+mp
* [WIP] Refactor auto completion unit test for MLP
* [WIP] Refactor the implementation of DistributedOperatorImpl
* [WIP] Improve dims_mapping update rule and fix a bug
* [WIP] Support auto completion for one transformer decoder layer
* [WIP] Add a minor change
* [WIP] Fix a bug within the unit test
* Shard XShape tensor, add embedding completion and refactor code
* Add the distributed_operators dir to setup.py.in
* Improve the completion process and add the unittest for gpt
* fix process_mesh ut
* fix process_mesh ut
* update
* update, test=develop
* Add support for automatically completing distributed attrs of special ops
* update
* update
* update
* fix doc sample codes, test=develop
* improve coverage, test=develop
* add static_mode check, test=develop
* Model the cluster for cost model and physical mapping
* update, test=develop
* add set_placement, test=develop
* Add the check to make sure the candidate tensors' size is greater than zero
* update doc, test=develop
* update doc, test=develop
* update doc, test=develop
* update doc, test=develop
* update, test=develop
* Auto mark dist attrs annotated by user
* update ndarray to nested list, test=develop
* update, test=develop
* Add auto-completion module for auto-parallel (based on PR#33804)
* Remove unnecessary files
* Remove unrelated files for the auto completion pr
* Update the unit test to improve the coverage
* Modify codes based on reviews
* Minor changes for CI
* Improve some codes based on new comments
* Fix bugs caused by shallow copy in attributes.py
* Improve amend_distributed_attr_for_program in context.py
* Other changes for weihang's comments
* support shard reader
* support shard reader
* add parallel mode
* update process mesh
* add method to compute comm_group
* implement dist_embedding forward func
* implement dist matmul forward func
* implement dist reshape forward func
* add transpiler framework
* add transpiler forward
* implement transpiler forward
* implement transpiler backward & update
* add process
* add unittest
* chmod
* chmod
* chmod
* update unittest
* add unittest for gpt
* remove unused print
* rename transpiler --> partitioner
* rename transpiler --> partitioner
* chmod
* chmod
* bug fixed
* remove amp function
* update case for dp mode
* update case for dp mode
* [Auto Parallel] Integrate all parts with the newest code
* Integrate all parts of auto parallel and improve codes
* Integrate all parts by AutoParallelizer
* Add unit test for AutoParallelizer
* Improve auto completion module for pipeline parallel
* Add support for matmul_v2 in dist_matmul
* Correct the typo "stratergy" to "strategy"
* Modify distributed_strategy.proto to conform the main stream
* Restore parts of distributed_strategy to conform the develop branch

Co-authored-by: sandyhouse <lilong12@baidu.com>
Co-authored-by: JZ-LIANG <jianzhongliang10@gmail.com>
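This commit introduced the shard_xx annotation APIs (process meshes plus per-tensor sharding attributes). As a rough illustration of that annotation style, here is a minimal sketch written against the later paddle.distributed.fleet.auto spelling; the exact signatures changed several times across the releases covered by this log, and the mesh shape and tensor below are hypothetical:

```python
# Hedged sketch of the tensor-sharding annotation this work introduced.
# Written against the later paddle.distributed.fleet.auto API; earlier
# commits used different signatures. Mesh and tensor are hypothetical.
import paddle
from paddle.distributed.fleet import auto

# A 2x2 process mesh over ranks 0-3: "x" could serve as the
# data-parallel axis and "y" as the model-parallel axis.
mesh = auto.ProcessMesh([[0, 1], [2, 3]], dim_names=["x", "y"])

w = paddle.ones([64, 128])
# Shard the second dimension of w along mesh axis "y"; the first
# dimension (None) stays replicated across the mesh.
auto.shard_tensor(w, mesh, [None, "y"])
```
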
- 02 Sep, 2021 (1 commit)

Committed by JZ-LIANG

* support shard reader
* support shard reader
* add parallel mode
* update process mesh
* add method to compute comm_group
* implement dist_embedding forward func
* implement dist matmul forward func
* implement dist reshape forward func
* add transpiler framework
* add transpiler forward
* implement transpiler forward
* implement transpiler backward & update
* add process
* add unittest
* chmod
* chmod
* chmod
* update unittest
* add unittest for gpt
* remove unused print
* rename transpiler --> partitioner
* rename transpiler --> partitioner
* chmod
* chmod
* bug fixed
* remove amp function
* update case for dp mode
* update case for dp mode

- 24 Aug, 2021 (1 commit)

Committed by Yulong Ao

* add auto_parallel dir
* mv to paddle.distributed
* add shard_xx api
* add distributed attrs for var
* add ut, test=develop
* add dist
* update
* update
* update
* update
* update
* update, test=develop
* update, test=develop
* update, test=develop
* update, test=develop
* update, test=develop
* update, test=develop
* update, test=develop
* update
* update
* update
* update
* update
* update, test=develop
* update, test=develop
* update
* update
* delete unused proto
* restore op_desc
* restore type_defs
* update var_desc
* remove dims_mapping for proto_pybind
* update interface.py
* update framework.py
* update
* update
* add auto_parallel dir
* mv to paddle.distributed
* add shard_xx api
* add distributed attrs for var
* add ut, test=develop
* [WIP] Add the auto completion feature and related codes
* [WIP] Improve the auto completion and related codes
* [WIP] Make the auto completion support data-parallel
* [WIP] Make the completion support mp and dp+mp
* [WIP] Refactor auto completion unit test for MLP
* [WIP] Refactor the implementation of DistributedOperatorImpl
* [WIP] Improve dims_mapping update rule and fix a bug
* [WIP] Support auto completion for one transformer decoder layer
* [WIP] Add a minor change
* [WIP] Fix a bug within the unit test
* Shard XShape tensor, add embedding completion and refactor code
* Add the distributed_operators dir to setup.py.in
* Improve the completion process and add the unittest for gpt
* fix process_mesh ut
* fix process_mesh ut
* update
* update, test=develop
* Add support for automatically completing distributed attrs of special ops
* update
* update
* update
* fix doc sample codes, test=develop
* improve coverage, test=develop
* add static_mode check, test=develop
* Model the cluster for cost model and physical mapping
* update, test=develop
* add set_placement, test=develop
* Add the check to make sure the candidate tensors' size is greater than zero
* update doc, test=develop
* update doc, test=develop
* update doc, test=develop
* update doc, test=develop
* update, test=develop
* Auto mark dist attrs annotated by user
* update ndarray to nested list, test=develop
* update, test=develop
* Add auto-completion module for auto-parallel (based on PR#33804)
* Remove unnecessary files
* Remove unrelated files for the auto completion pr
* Update the unit test to improve the coverage
* Modify codes based on reviews
* Minor changes for CI
* Improve some codes based on new comments
* Fix bugs caused by shallow copy in attributes.py
* Improve amend_distributed_attr_for_program in context.py
* Other changes for weihang's comments

Co-authored-by: sandyhouse <lilong12@baidu.com>