- 05 Jan 2021, 2 commits
-
-
Committed by gongweibao
-
Committed by Chen Weihang
Set FLAGS_selected_gpus for spawn. When a child process starts, it inherits the main process's configuration and sets the FLAGS once, but at that point the environment variable has not yet been set, so FLAGS_selected_gpus stays the same as in the main process (usually empty); the flags therefore have to be updated manually here. Note: a unit test was added and then removed again, because its output showed that nvidia-smi on the CI machine reports only two GPUs, while more than two are needed to reproduce this problem.
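For readers unfamiliar with the issue above, the following is a simplified, hypothetical sketch of the general pattern (plain Python multiprocessing, not PaddlePaddle's actual spawn implementation): each child re-applies its own device selection before any GPU context is created, instead of relying on flags inherited from the parent process.

```python
# Illustrative sketch only: a hypothetical, simplified version of the pattern
# described above, not PaddlePaddle's actual spawn implementation.
import os
import multiprocessing as mp


def _child(rank, selected_gpus):
    # The child inherits the parent's flags, so the device selection must be
    # (re)applied explicitly before any GPU context is created.
    os.environ["FLAGS_selected_gpus"] = selected_gpus[rank]
    os.environ["CUDA_VISIBLE_DEVICES"] = selected_gpus[rank]
    # ... initialize the framework / parallel environment here ...


def spawn(nprocs, selected_gpus):
    ctx = mp.get_context("spawn")
    procs = [ctx.Process(target=_child, args=(i, selected_gpus))
             for i in range(nprocs)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()


if __name__ == "__main__":
    spawn(nprocs=2, selected_gpus=["0", "1"])
```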
-
- 31 Dec 2020, 3 commits
- 25 Dec 2020, 1 commit
-
-
Committed by tangwei12
* add ps table (#29463)
* add ps table
  Change-Id: I468a04bd071d21ff52654926fcf4d5f3da19e178
* add service (#29560)
* add service, remove ut on mac
* fix heter_profiler & add heter stop method
* fix code style
* merge pscore
  Change-Id: Ie7f60d1cdde6755a0c29db26863c6283e9843d57
* fix cmake
  Change-Id: I6773509a7b4ca79139ecc40b7bf3eb318ceff8bb
* fix conflict
  Change-Id: I35575be0c96a8520f9d756ea7f1ff0b904a165ba
* fix conflict
  Change-Id: Ic926ea0b0d67803226d51241397ba3b510226bfa
-
- 22 Dec 2020, 2 commits
- 17 Dec 2020, 1 commit
-
-
Committed by ShenLiang
* Fix the download bug in the case of multiple machines (#29551)
  * fix the download bug
  * add sort for ips
* Fix bug of matmul_v2 for broadcast case (#29599)
  * fix bug of matmul_v2 for broadcast
* Rebuild group automatically in dynamic graph distributed (#29255)
  * add tensor_indices in AssignGroupBySize
  * add rebuild group in reducer
* fix error message of gather nd (#29521)
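As background for the matmul_v2 broadcast case mentioned above, here is a minimal, illustrative example of batched matmul with broadcasting in Paddle (not code from the patch itself):

```python
# Minimal illustration of matmul broadcasting (not code from the patch above).
import paddle

# x has a batch dimension, y does not; matmul broadcasts y across the batch.
x = paddle.rand([2, 3, 4])   # shape (batch=2, 3, 4)
y = paddle.rand([4, 5])      # shape (4, 5), broadcast over the batch dimension
out = paddle.matmul(x, y)    # result shape (2, 3, 5)
print(out.shape)             # [2, 3, 5]
```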
-
- 16 Dec 2020, 1 commit
-
-
Committed by JZ-LIANG
* Sharding add hybrid-dp feature
* update sharding in distributed_strategy
* update sharding unittest
* revise code format for sharding
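For context, a rough sketch of how sharding is typically switched on through fleet's DistributedStrategy follows. The configuration key shown is illustrative and may differ between Paddle versions, so treat it as an assumption rather than the exact API introduced by this commit.

```python
# A rough sketch of enabling sharding through fleet's DistributedStrategy.
# The sharding_configs key below is illustrative; check the
# distributed_strategy documentation of your Paddle version for exact names.
import paddle.distributed.fleet as fleet

strategy = fleet.DistributedStrategy()
strategy.sharding = True
strategy.sharding_configs = {
    "fuse_broadcast_MB": 32,   # illustrative value, not a recommendation
}

fleet.init(is_collective=True, strategy=strategy)
# optimizer = fleet.distributed_optimizer(optimizer, strategy=strategy)
```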
-
- 08 Dec 2020, 1 commit
-
-
Committed by lilong12
* update, test=develop (#29331)
-
- 04 Dec 2020, 1 commit
-
-
Committed by ShenLiang
-
- 03 Dec 2020, 2 commits
- 01 Dec 2020, 1 commit
-
-
Committed by 123malin
* fix fleet api doc
-
- 30 Nov 2020, 2 commits
- 27 Nov 2020, 4 commits
-
-
Committed by ShenLiang
* add reducer
* refine event for memory copy
* add concat & split for allreduce
* apply concat & split for fuse tensor
* fix nccl dep
* fix the unittest, compile problem and ddp initialize problem
* fix unittest for mac & add some comments & solve the repeated param in sublayers
* fix unittest for windows & fix document
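The reducer work listed above backs dynamic-graph data parallelism. A minimal usage sketch of paddle.DataParallel is shown below for orientation; it only illustrates the public API and does not reproduce the fused-allreduce reducer internals. It assumes a machine with at least two GPUs and NCCL available.

```python
# Minimal dynamic-graph data-parallel sketch; the fused-allreduce reducer
# described above works underneath paddle.DataParallel and is not shown here.
import paddle
import paddle.distributed as dist


def train():
    dist.init_parallel_env()                      # set up communicators per rank
    layer = paddle.nn.Linear(10, 10)
    dp_layer = paddle.DataParallel(layer)         # gradients are reduced across ranks
    opt = paddle.optimizer.SGD(learning_rate=0.01,
                               parameters=dp_layer.parameters())

    x = paddle.rand([4, 10])
    loss = dp_layer(x).mean()
    loss.backward()                               # reducer allreduces the gradients
    opt.step()
    opt.clear_grad()


if __name__ == "__main__":
    dist.spawn(train, nprocs=2)                   # assumes 2 visible GPUs
```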
-
Committed by Chen Long
-
Committed by lilong12
-
Committed by lilong12
-
- 26 Nov 2020, 5 commits
-
-
Committed by ShenLiang
* add Inmemorydataset
-
Committed by JZ-LIANG
* add lars to fleet meta optimizer
* add lamb to proto
* add lamb to fleet meta optimizer
* fixed syntax bug
* fixed syntax bug
* fixed syntax error in lamb, add config setter of lamb in distributed_strategy
* trigger unittest to rerun
* add new unittest func for lamb
* revise unittest for lars and lamb
* revise dgc meta unittest
* revise lars document in distribute_strategy
* revise lars lamb document in distributed_strategy.py
* revise lars lamb document in distributed_strategy.py
* add weight decay exclude logic to lars
* restore optimizer.py
* restore optimizer.py as develop except lars
* add epsilon and exclude fn to distributed_strategy
* add lars epsilon
* revise unittest for fleet lars and lamb
* revise lars lamb unittest for CI coverage
* revise lars argument api
* revise lars argument api
* revise lars argument api
* revise api doc of lars
* fix op role
* add sharding save and add_sync_comm_for_test function
* add comm_analyse to utils
* revise sharding_utils
* add sharding saving unittest
* revise sharding utils for unittest
* revise sharding en doc
* update sharding utils api
* add doc for sharding
* fixed bug in sharding var size count
* update varsize count in sharding
* fix sharding num_nccl_comm
* Revert "fix sharding num_nccl_comm"
  This reverts commit d51587c15e9323acf226ddd36154275f0d1daf76.
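For orientation, a rough sketch of enabling the LARS meta optimizer through DistributedStrategy follows. The configuration keys are chosen to match the options named in the commit messages above (coefficient, weight decay, epsilon, weight-decay exclusion), but treat them as assumptions and verify against the distributed_strategy documentation of your Paddle version.

```python
# A rough sketch of turning on the LARS meta optimizer via DistributedStrategy.
# Key names and values are illustrative assumptions, not verified defaults.
import paddle.distributed.fleet as fleet

strategy = fleet.DistributedStrategy()
strategy.lars = True
strategy.lars_configs = {
    "lars_coeff": 0.001,
    "lars_weight_decay": 0.0005,
    "epsilon": 0,
    "exclude_from_weight_decay": ["batch_norm", ".b_0"],  # illustrative patterns
}

# The strategy is then passed to fleet when wrapping the base optimizer:
# optimizer = fleet.distributed_optimizer(optimizer, strategy=strategy)
```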
-
Committed by lilong12
* update, test=develop
-
Committed by WangXi
-
Committed by gongweibao
-
- 24 Nov 2020, 3 commits
-
-
Committed by Chen Weihang
* polish parallel api impl & doc details
* add unittest for coverage
* remove spawn test in py2.7
* add parallel api into white list
-
Committed by Leo Chen
* upgrade comment string to raw string
* fix string in
* fix string with ' '
* revert update on comments
* upgrade only necessary
* fix sample code checker
* fix comments with '''
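A short, illustrative example of why the raw-string upgrade above matters for docstrings (not code from the patch): without the r prefix, backslash sequences in LaTeX-style math are interpreted as escape characters.

```python
def scale(x):
    r"""Compute :math:`y = \alpha x`.

    The raw docstring (r"...") keeps the backslash in ``\alpha`` literal;
    without the r prefix, sequences such as ``\a`` would be treated as
    escape characters and corrupt the rendered documentation.
    """
    alpha = 2.0
    return alpha * x
```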
-
Committed by 123malin
* test=develop, optimize global_step
-
- 23 Nov 2020, 2 commits
-
-
Committed by lilong12
* update, test=develop
-
Committed by Chen Weihang
-
- 18 Nov 2020, 1 commit
-
-
Committed by JZ-LIANG
* add lars to fleet meta optimizer
* add lamb to proto
* add lamb to fleet meta optimizer
* fixed syntax bug
* fixed syntax bug
* fixed syntax error in lamb, add config setter of lamb in distributed_strategy
* trigger unittest to rerun
* add new unittest func for lamb
* revise unittest for lars and lamb
* revise dgc meta unittest
* revise lars document in distribute_strategy
* revise lars lamb document in distributed_strategy.py
* revise lars lamb document in distributed_strategy.py
* add weight decay exclude logic to lars
* restore optimizer.py
* restore optimizer.py as develop except lars
* add epsilon and exclude fn to distributed_strategy
* add lars epsilon
* revise unittest for fleet lars and lamb
* revise lars lamb unittest for CI coverage
* revise lars argument api
* revise lars argument api
* revise lars argument api
* revise api doc of lars
* fix op role
* add sharding save and add_sync_comm_for_test function
* add comm_analyse to utils
* revise sharding_utils
* add sharding saving unittest
* revise sharding utils for unittest
-
- 17 Nov 2020, 1 commit
-
-
Committed by lilong12
-
- 16 Nov 2020, 1 commit
-
-
Committed by danleifeng
-
- 28 Oct 2020, 1 commit
-
-
Committed by Chengmo
* fix fleetrun heter ps on paddlecloud
-
- 26 Oct 2020, 1 commit
-
-
Committed by mapingshuo
* add sharding
-
- 22 Oct 2020, 1 commit
-
-
Committed by WangXi
-
- 19 Oct 2020, 2 commits
- 16 Oct 2020, 1 commit
-
-
Committed by WangXi
-