- 09 Apr 2021, 1 commit

Committed by Leo Chen

* [feature] support npu allocator (#30840)
* [feature] support npu operator (#30951)
* [feature] support npu allocator, part 2 (#30972)
    * support npu allocator
    * add npu device context
    * fix some compile problems
    * add npu info
    * compile ok
    * fix include dir
    * support naive_best_fit_allocator
    * run ut ok, but failed to exit
    * call aclrtResetDevice before exit
    * fix aclFinalize
    * add system allocator test
    * add selected_gpus in gtest
    * add tensor_test for npu
    * support npu op, initial commit
    * add npu stream
    * add elementwise_add_op
    * compile ok
    * fix typo
    * fix elementwise_add_op_npu_test
    * support op run
    * test can run but failed
    * change aclopExecuteV2 to aclopCompileAndExecute
* support parsing ascend rank table file (#31000)
* Fix reshape on GE graph (#31084)
* add npu kernel for elementwise_sub and elementwise_sub_grad (#30973)
    * add npu sub op
    * fix typo
    * rename test
    * fix bug
    * add fp16 kernel
    * fix typo
    * support sub grad op
    * support elementwise_sub_grad op
    Co-authored-by: frankwhzhang <frankwhzhang@126.com>
* Fix compilation problem (#31100)
    * fix compile
    * fix code style
    * remove const_cast
* support adding correct npu op in pybind.h (#31143)
    * refine code
* [NPU] Support executor with NPU (#31057)
    * Fix code according to reviews
    * Fix code
    * Add unittest for sub op npu
* refactor npu device manager (#31154)
    * fix selected npus
    * fix compile
    * fix reading flags from env
    * format

Co-authored-by: xiayanming <41795079@qq.com>
Co-authored-by: gongweibao <weibao.gong@gmail.com>
Co-authored-by: frankwhzhang <frankwhzhang@126.com>
Co-authored-by: liym27 <33742067+liym27@users.noreply.github.com>

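For orientation, a minimal sketch of what the NPU support listed above enables at the Python level. This is illustrative only: it assumes a PaddlePaddle build compiled with Ascend NPU support, and the `paddle.is_compiled_with_npu()` check and the `npu:0` device string are taken from the later public 2.x API rather than from these early commits.

```python
import paddle

# Select the Ascend device when the build has NPU support; otherwise fall back to CPU.
paddle.set_device("npu:0" if paddle.is_compiled_with_npu() else "cpu")

x = paddle.to_tensor([1.0, 2.0, 3.0])
y = paddle.to_tensor([4.0, 5.0, 6.0])

# With an NPU device selected, this dispatches to the elementwise_add NPU kernel
# added in the commits above.
print(paddle.add(x, y).numpy())  # [5. 7. 9.]
```
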
- 07 Apr 2021, 1 commit

Committed by zhang wenhui

* Ascend rc (#30483)
* Fix compilation on CANN20.1 and older (#30494)
* Add distribution supported (#30578)
* Build parser for Hcom* operators (#30627)
* Pass device_ids info from launch to trainer (#30632)
* Add Hccl program group (#30642)
* Add startup bash files of test_ascend_group (#30645)
* cleanup test_ascend_group.py (#30646)
* [Feature] Build parser to support distributed training (#30658)
* fix compilation on ascend-20.1 (#30722)
* Dev/fix ascend string (#30749)
* code style (#30781)
* Merge ascend_optimizer and ascend_parser (#30776)
* Ascendrc add converted op: [range/equal/range/uniform_random/expand/squeeze], fix cast op bug (#30797)
* Add paddle ascend distribution training supported (#30796)
* pass cxx_flags to gloo cmake (#30857)
* Destroy session first (#30954)
* merge
* fix, test=develop
* fix style, test=develop
* fix
* fix log fatal, test=develop
* fix enforce style, test=develop
* fix rccl, test=develop
* fix test, test=develop
* fix node_num, test=develop
* fix ids str, test=develop
* fix style code, test=develop

Co-authored-by: hutuxian <hutuxian2011@sina.cn>
Co-authored-by: gongweibao <weibao.gong@gmail.com>
Co-authored-by: Void Main <voidmain1313113@gmail.com>
Co-authored-by: Leo Chen <chenqiuliang@baidu.com>
Co-authored-by: dingsiyu <18369187719@163.com>
Co-authored-by: OleNet <olenet@126.com>

- 03 Feb 2021, 1 commit

Committed by WangXi

- 05 Jan 2021, 1 commit

Committed by gongweibao

- 31 Dec 2020, 1 commit

Committed by lilong12

* update, test=develop

- 27 Nov 2020, 1 commit

Committed by lilong12

- 26 Nov 2020, 1 commit

Committed by gongweibao

- 24 Nov 2020, 1 commit

Committed by Leo Chen

* upgrade comment string to raw string
* fix string in
* fix string with ' '
* revert update on comments
* upgrade only necessary
* fix sample code checker
* fix comments with '''

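As context for the raw-string upgrade above, here is a small illustrative snippet (not taken from the Paddle sources) of the kind of docstring this change targets: comments containing LaTeX-style backslash sequences are written as raw strings so the sample-code checker sees the text verbatim.

```python
def scale(x, factor):
    r"""
    Returns :math:`out = factor \times x`.

    The raw-string prefix (the leading r) keeps sequences such as \times or
    \alpha from being interpreted as escape characters (tab, bell) when the
    docstring is parsed.
    """
    return factor * x


print(scale(3, 2))  # 6
```
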
- 14 Oct 2020, 1 commit

Committed by 123malin

* test=develop, bug fix for parameter_recv
* test=develop, for unittest, test_fleet_rolemaker_new

- 13 Oct 2020, 1 commit

Committed by Chengmo

* refine fleetrun.ps_launch
* update fleet run for multi device support
* ps_graph support ps-gpu
* fix heter save
* add heter save unittest
* fix unittest & simple code
* update fleetrun
* fix fleetrun
* fix launch barrier
* fix role maker
* add paddlecloud rolemaker unittest
* rename heter_worker_device_guard

- 29 Sep 2020, 1 commit

Committed by lilong12

* add gloo initializer, test=develop

- 24 Sep 2020, 1 commit

Committed by 123malin

* test=develop, bug fix

- 22 Sep 2020, 1 commit

Committed by danleifeng

- 21 Sep 2020, 1 commit

Committed by danleifeng

- 18 Sep 2020, 1 commit

Committed by tangwei12

* fix worker endpoints
* fix gloo wrapper for hdfs
* GPU fleetrun support gloo
* parameterserver fleetrun support gloo
* fix get server endpoint

- 17 Sep 2020, 1 commit

Committed by 123malin

* test=develop, util documents

- 16 Sep 2020, 1 commit

Committed by danleifeng

* fix port conflicts when launching multi-nodes in paddlecloud; test=develop
* add DISTRIBUTED_TRAINER_ENDPOINTS env for cloud; test=develop

- 03 Sep 2020, 1 commit

Committed by danleifeng

* print detailed and clear log infos; test=develop

- 13 Aug 2020, 1 commit

Committed by Dong Daxiang

* move paddle.fleet to paddle.distributed.fleet

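After this move, downstream code imports fleet from its new package path. A minimal sketch, assuming Paddle 2.0+ and its documented `fleet.init` entry point (the surrounding launch environment is omitted):

```python
# Old import path (before the move):
#     import paddle.fleet as fleet
# New import path:
import paddle.distributed.fleet as fleet

# Initialize the fleet runtime; is_collective=True selects collective (GPU)
# training rather than parameter-server mode.
fleet.init(is_collective=True)
```
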
- 11 Aug 2020, 1 commit

Committed by danleifeng

- 10 Aug 2020, 1 commit

Committed by danleifeng

* support multi-ps training mode for fleetrun; test=develop

- 05 Aug 2020, 1 commit

Committed by danleifeng

* add fleetrun command for distributed running; test=develop