- 30 November 2020, 1 commit
Committed by 123malin
* test=develop, rm pathlib
- 27 November 2020, 3 commits
Committed by ShenLiang
* add reducer
* refine event for memory copy
* add concat & split for allreduce
* apply concat & split for fused tensors
* fix NCCL dependency
* fix the unit tests, compile problem and DDP initialization problem
* fix unit tests for Mac & add some comments & resolve repeated parameters in sublayers
* fix unit tests for Windows & fix documentation
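The reducer introduced here backs PaddlePaddle's dynamic-graph data-parallel training. Below is a minimal, hedged sketch of the user-facing path it sits behind; it assumes a multi-process launch (e.g. `python -m paddle.distributed.launch train.py`), and the model and data are placeholders.

```python
import paddle
import paddle.distributed as dist

def train():
    # Set up the parallel environment for this process.
    dist.init_parallel_env()

    # Wrapping the model makes the reducer fuse parameter gradients into
    # buckets and allreduce them during backward().
    model = paddle.nn.Linear(16, 4)
    dp_model = paddle.DataParallel(model)

    opt = paddle.optimizer.SGD(learning_rate=0.01,
                               parameters=dp_model.parameters())

    x = paddle.randn([8, 16])
    loss = dp_model(x).mean()
    loss.backward()    # fused allreduce of gradients happens here
    opt.step()
    opt.clear_grad()

if __name__ == "__main__":
    train()
```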
Committed by Chen Long
Committed by lilong12
- 26 November 2020, 5 commits
Committed by ShenLiang
* add InMemoryDataset
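A hedged sketch of driving static-graph training from the in-memory dataset added here; the file name, slot layout, and the exact `init()` keyword arguments below are illustrative assumptions rather than a definitive API reference.

```python
import paddle

paddle.enable_static()

# Two example feature slots; lod_level=1 marks variable-length sparse input.
x = paddle.static.data(name="x", shape=[None, 1], dtype="int64", lod_level=1)
y = paddle.static.data(name="y", shape=[None, 1], dtype="int64", lod_level=1)

dataset = paddle.distributed.InMemoryDataset()
dataset.init(batch_size=32, thread_num=2, pipe_command="cat", use_var=[x, y])
dataset.set_filelist(["data.txt"])   # placeholder file in the expected slot format
dataset.load_into_memory()           # read every file into memory
dataset.local_shuffle()              # shuffle samples within this node

exe = paddle.static.Executor(paddle.CPUPlace())
exe.run(paddle.static.default_startup_program())
exe.train_from_dataset(paddle.static.default_main_program(), dataset)
```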
Committed by JZ-LIANG
* add lars to fleet meta optimizer
* add lamb to proto
* add lamb to fleet meta optimizer
* fixed syntax bugs
* fixed syntax error in lamb, add config setter of lamb in distributed_strategy
* trigger unit tests to rerun
* add new unit test func for lamb
* revise unit tests for lars and lamb
* revise dgc meta unit test
* revise lars document in distributed_strategy
* revise lars and lamb documents in distributed_strategy.py
* add weight decay exclude logic to lars
* restore optimizer.py as develop except lars
* add epsilon and exclude fn to distributed_strategy
* add lars epsilon
* revise unit tests for fleet lars and lamb
* revise lars and lamb unit tests for CI coverage
* revise lars argument api
* revise api doc of lars
* fix op role
* add sharding save and add_sync_comm_for_test function
* add comm_analyse to utils
* revise sharding_utils
* add sharding saving unittest
* revise sharding utils for unittest
* revise sharding en doc
* update sharding utils api
* add doc for sharding
* fixed bug in sharding var size count
* update var size count in sharding
* fix sharding num_nccl_comm
* Revert "fix sharding num_nccl_comm" (reverts commit d51587c15e9323acf226ddd36154275f0d1daf76)
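The LARS/LAMB meta optimizers listed above are switched on through `DistributedStrategy`. A minimal sketch follows; the `epsilon` and `exclude_from_weight_decay` keys are assumptions based on the options this change describes, and no loss/program construction is shown.

```python
import paddle
import paddle.distributed.fleet as fleet

paddle.enable_static()
fleet.init(is_collective=True)

strategy = fleet.DistributedStrategy()
strategy.lars = True
strategy.lars_configs = {
    "lars_coeff": 0.001,
    "lars_weight_decay": 0.0005,
    "epsilon": 0,                                         # assumed key for the new epsilon option
    "exclude_from_weight_decay": ["batch_norm", ".b_0"],  # assumed exclusion-list format
}
# LAMB can be enabled the same way:
# strategy.lamb = True
# strategy.lamb_configs = {"lamb_weight_decay": 0.01,
#                          "exclude_from_weight_decay": ["layer_norm"]}

optimizer = paddle.optimizer.Momentum(learning_rate=0.1, momentum=0.9)
optimizer = fleet.distributed_optimizer(optimizer, strategy=strategy)
# optimizer.minimize(loss) would then build the LARS-rewritten training program.
```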
Committed by lilong12
* update, test=develop
Committed by WangXi
Committed by gongweibao
- 24 November 2020, 2 commits
- 23 November 2020, 1 commit
Committed by lilong12
* update, test=develop
- 18 November 2020, 1 commit
Committed by JZ-LIANG
* add lars to fleet meta optimizer
* add lamb to proto
* add lamb to fleet meta optimizer
* fixed syntax bugs
* fixed syntax error in lamb, add config setter of lamb in distributed_strategy
* trigger unit tests to rerun
* add new unit test func for lamb
* revise unit tests for lars and lamb
* revise dgc meta unit test
* revise lars document in distributed_strategy
* revise lars and lamb documents in distributed_strategy.py
* add weight decay exclude logic to lars
* restore optimizer.py as develop except lars
* add epsilon and exclude fn to distributed_strategy
* add lars epsilon
* revise unit tests for fleet lars and lamb
* revise lars and lamb unit tests for CI coverage
* revise lars argument api
* revise api doc of lars
* fix op role
* add sharding save and add_sync_comm_for_test function
* add comm_analyse to utils
* revise sharding_utils
* add sharding saving unittest
* revise sharding utils for unittest
- 28 October 2020, 1 commit
Committed by Chengmo
* fix fleetrun heter ps on paddlecloud
- 26 October 2020, 1 commit
Committed by mapingshuo
* add sharding
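A compact, hedged sketch of enabling the new sharding meta optimizer; `fuse_broadcast_MB` is an assumed config key and the loss/program construction is omitted.

```python
import paddle
import paddle.distributed.fleet as fleet

paddle.enable_static()
fleet.init(is_collective=True)

strategy = fleet.DistributedStrategy()
strategy.sharding = True
strategy.sharding_configs = {"fuse_broadcast_MB": 32}   # assumed config key

optimizer = paddle.optimizer.Adam(learning_rate=0.001)
optimizer = fleet.distributed_optimizer(optimizer, strategy=strategy)
# optimizer.minimize(loss) shards optimizer states and gradients across ranks.
```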
- 22 October 2020, 1 commit
Committed by WangXi
- 19 October 2020, 1 commit
Committed by MRXLT
* fleet support paddle.optimizer
* bug fix
* fix fleet_base
* fix coverage
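A minimal sketch of what this change allows: handing a `paddle.optimizer` optimizer directly to fleet in dynamic-graph mode. It assumes a distributed launch; the toy model and data are placeholders.

```python
import paddle
import paddle.distributed.fleet as fleet

fleet.init(is_collective=True)

model = paddle.nn.Linear(10, 1)
adam = paddle.optimizer.Adam(learning_rate=0.001,
                             parameters=model.parameters())

# fleet wraps the 2.0-style optimizer and the model for distributed training.
adam = fleet.distributed_optimizer(adam)
dp_model = fleet.distributed_model(model)

x = paddle.randn([4, 10])
loss = dp_model(x).mean()
loss.backward()
adam.step()
adam.clear_grad()
```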
- 16 October 2020, 2 commits
- 15 October 2020, 3 commits
Committed by tangwei12
* add size method for large scale
* add large scale UT
* add UT for checkpoint
Committed by 123malin
* test=develop, fix geo sgd communicator
* test=develop, gloo_init_method
* test=develop, bug fix for gloo http_init
Committed by danleifeng
* raise error if multiple cards are used in fleet non_distributed mode; test=develop
- 14 October 2020, 3 commits
Committed by Chengmo
* add sparse tensor load method
Committed by 123malin
* test=develop, bug fix for parameter_recv
* test=develop, for unittest, test_fleet_rolemaker_new
Committed by Chen Weihang
- 13 October 2020, 3 commits
Committed by WangXi
Committed by mapingshuo
* support gradient merge with recompute, test=develop
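A hedged sketch of combining the gradient-merge and recompute meta optimizers; the checkpoint names are placeholders and the config keys are assumptions based on the strategy options involved.

```python
import paddle
import paddle.distributed.fleet as fleet

paddle.enable_static()
fleet.init(is_collective=True)

strategy = fleet.DistributedStrategy()
strategy.recompute = True
strategy.recompute_configs = {"checkpoints": ["fc_0.tmp_1", "fc_1.tmp_1"]}  # placeholder names
strategy.gradient_merge = True
strategy.gradient_merge_configs = {"k_steps": 4, "avg": True}  # accumulate 4 micro-batches

optimizer = paddle.optimizer.SGD(learning_rate=0.01)
optimizer = fleet.distributed_optimizer(optimizer, strategy=strategy)
# optimizer.minimize(loss) applies recompute first, then merges gradients over
# k_steps micro-batches before each parameter update.
```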
Committed by Chengmo
* refine fleetrun.ps_launch
* update fleetrun for multi-device support
* ps_graph support ps-gpu
* fix heter save
* add heter save unittest
* fix unittest & simplify code
* update fleetrun
* fix fleetrun
* fix launch barrier
* fix role maker
* add paddlecloud rolemaker unittest
* rename heter_worker_device_guard
- 12 October 2020, 1 commit
Committed by WangXi
- 30 September 2020, 2 commits
Committed by danleifeng
* fleet support non_distributed training in dygraph mode; test=develop
Committed by lilong12
* add double grad for expand, test=develop
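A small sketch exercising second-order gradients through `paddle.expand`, which this change enables; shapes and values are arbitrary.

```python
import paddle

x = paddle.randn([2, 1])
x.stop_gradient = False

y = paddle.expand(x, shape=[2, 3])   # broadcast x from [2, 1] to [2, 3]
loss = (y * y).sum()

# First-order gradient, kept on the graph so it can be differentiated again.
(dx,) = paddle.grad(loss, x, create_graph=True)

# Second-order gradient flowing back through the expand op.
(ddx,) = paddle.grad(dx.sum(), x)
print(ddx)   # a [2, 1] tensor of constants
```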
- 29 September 2020, 2 commits
- 28 September 2020, 6 commits
Committed by Qinghe JING
* set default value to strategy in distributed_optimizer test=develop
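A brief sketch of what the default value means for callers: omitting the `strategy` argument now behaves like passing a freshly constructed `DistributedStrategy`. The toy model is a placeholder.

```python
import paddle
import paddle.distributed.fleet as fleet

fleet.init(is_collective=True)

model = paddle.nn.Linear(4, 1)
opt = paddle.optimizer.SGD(learning_rate=0.01, parameters=model.parameters())

# No strategy argument: fleet falls back to default DistributedStrategy values.
opt = fleet.distributed_optimizer(opt)

# Equivalent explicit form:
# opt = fleet.distributed_optimizer(opt, strategy=fleet.DistributedStrategy())
```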
Committed by yaoxuefeng
Committed by lilong12
Committed by 123malin
* test=develop, rm netifaces
Committed by lilong12
* add gloo initializer, test=develop
Committed by Dong Daxiang
* add get final strategy so users can print the final strategy
- 27 September 2020, 1 commit
Committed by Chengmo
* fix test_dist_fleet_heter_ctr & performance update