- 24 Feb 2021, 1 commit
  Committed by lilong12:
  * update, test=develop

- 01 Feb 2021, 1 commit
  Committed by WangXi

- 12 Jan 2021, 1 commit
  Committed by JZ-LIANG

- 09 Dec 2020, 1 commit
  Committed by ShenLiang:
  * add tensor_indices in AssignGroupBySize
  * add rebuild group in reducer

- 01 Dec 2020, 2 commits
- 26 Nov 2020, 1 commit
  Committed by JZ-LIANG:
  * add lars to fleet meta optimizer
  * add lamb to proto
  * add lamb to fleet meta optimizer
  * fix syntax bugs
  * fix syntax error in lamb; add config setter of lamb in distributed_strategy
  * trigger unittest to rerun
  * add new unittest func for lamb
  * revise unittest for lars and lamb
  * revise dgc meta unittest
  * revise lars document in distribute_strategy
  * revise lars and lamb documents in distributed_strategy.py
  * add weight decay exclude logic to lars
  * restore optimizer.py as develop except lars
  * add epsilon and exclude fn to distributed_strategy
  * add lars epsilon
  * revise unittest for fleet lars and lamb
  * revise lars and lamb unittests for CI coverage
  * revise lars argument api
  * revise api doc of lars
  * fix op role
  * add sharding save and add_sync_comm_for_test function
  * add comm_analyse to utils
  * revise sharding_utils
  * add sharding saving unittest
  * revise sharding utils for unittest
  * revise sharding en doc
  * update sharding utils api
  * add doc for sharding
  * fix bug in sharding var size count
  * update varsize count in sharding
  * fix sharding num_nccl_comm
  * Revert "fix sharding num_nccl_comm" (reverts commit d51587c15e9323acf226ddd36154275f0d1daf76)
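The commits above follow one pattern: each distributed algorithm (lars, lamb, sharding, ...) is a boolean switch on the strategy object plus a dict of algorithm-specific configs (coefficients, weight-decay exclusions, epsilon). A minimal sketch of that pattern, with assumed class and field names that mirror the commit messages rather than Paddle's actual API:

```python
class StrategySketch:
    """Hypothetical stand-in for a distributed strategy object: one boolean
    switch per meta optimizer, plus a configs dict per algorithm."""

    def __init__(self):
        self.lars = False
        # Illustrative defaults only; not Paddle's real values.
        self.lars_configs = {"lars_coeff": 0.001, "lars_weight_decay": 0.0005,
                             "epsilon": 0.0, "exclude_from_weight_decay": []}
        self.lamb = False
        self.lamb_configs = {"lamb_weight_decay": 0.01,
                             "exclude_from_weight_decay": []}
        self.sharding = False

    def enabled_meta_optimizers(self):
        # Mimics how a strategy compiler would pick which meta optimizers
        # to apply: collect the names of all switched-on algorithms.
        return [name for name in ("lars", "lamb", "sharding")
                if getattr(self, name)]

strategy = StrategySketch()
strategy.lars = True
# The "weight decay exclude logic" from the commits: skip listed parameters.
strategy.lars_configs["exclude_from_weight_decay"] = ["batch_norm", ".b_0"]
print(strategy.enabled_meta_optimizers())  # -> ['lars']
```

The switch-plus-configs layout lets new algorithms be added without changing the optimizer-selection code, which is presumably why lamb and sharding could be bolted on in separate commits.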
- 24 Nov 2020, 1 commit
  Committed by Leo Chen:
  * upgrade comment string to raw string
  * fix string in
  * fix string with ' '
  * revert update on comments; upgrade only where necessary
  * fix sample code checker
  * fix comments with '''

- 26 Oct 2020, 1 commit
  Committed by mapingshuo:
  * add sharding

- 22 Oct 2020, 1 commit
  Committed by WangXi
- 12 Oct 2020, 1 commit
  Committed by WangXi

- 28 Sep 2020, 1 commit
  Committed by Dong Daxiang:
  * add get final strategy for users to print the final strategy

- 25 Sep 2020, 1 commit
  Committed by WangXi

- 16 Sep 2020, 1 commit
  Committed by ShenLiang:
  * add adaptivelsgd
  * TODO: fix the code to avoid the conflict

- 14 Sep 2020, 1 commit
  Committed by ShenLiang:
  * remove auto from localsgd
- 09 Sep 2020, 1 commit
  Committed by Dong Daxiang:
  * refine launch and distributed repr string for print

- 07 Sep 2020, 1 commit
  Committed by Dong Daxiang:
  * add auto parallel L1 implementation, test=develop

- 04 Sep 2020, 1 commit
  Committed by mapingshuo:
  * fix doc, test=develop
  * update localsgd doc, test=develop
  * fix fleet dgc amp doc, test=develop
  * fix async configs
  Co-authored-by: liuyi05 <gavin1332@gmail.com>
  Co-authored-by: WangXi <wangxi16@baidu.com>
  Co-authored-by: seiriosPlus <tangwei12@baidu.com>
- 29 Aug 2020, 1 commit
  Committed by Dong Daxiang:
  * fix api document

- 27 Aug 2020, 1 commit
  Committed by Yi Liu:
  * modified timeout value on windows and mac (#26690)
  * add Local SGD algorithm, referenced paper, test=develop
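The Local SGD algorithm added here reduces communication by letting each worker run several local optimizer steps before synchronizing, then averaging parameters across workers. A minimal single-process sketch of one such round (plain lists standing in for parameter tensors; `local_update` and `k_steps` are names chosen for illustration, not Paddle's API):

```python
def local_sgd_round(params_per_worker, local_update, k_steps):
    """One Local SGD round: every worker takes k_steps local update steps,
    then all workers average their parameters and continue from the average."""
    updated = []
    for p in params_per_worker:
        for _ in range(k_steps):
            p = local_update(p)          # one local SGD step on this worker
        updated.append(p)
    n = len(updated)
    # Element-wise average across workers (the synchronization step).
    avg = [sum(vals) / n for vals in zip(*updated)]
    return [list(avg) for _ in range(n)]  # every worker resumes from the average

# Two workers, a toy update that subtracts 0.5 from each parameter, 2 local steps.
workers = [[1.0, 2.0], [3.0, 4.0]]
step = lambda p: [x - 0.5 for x in p]
print(local_sgd_round(workers, step, k_steps=2))  # -> [[1.0, 2.0], [1.0, 2.0]]
```

Compared with synchronizing after every step, this trades some gradient freshness for a k-fold reduction in allreduce traffic.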
- 26 Aug 2020, 1 commit
  Committed by JZ-LIANG

- 25 Aug 2020, 1 commit
  Committed by Dong Daxiang:
  * add cudnn related strategies to DistributedStrategy

- 24 Aug 2020, 1 commit
  Committed by WangXi
- 21 Aug 2020, 1 commit
  Committed by Dong Daxiang:
  * add documentation for DistributedStrategy

- 18 Aug 2020, 1 commit
  Committed by mapingshuo:
  * add features to fleet 2.0 role_maker and distribute_strategy, test=develop

- 13 Aug 2020, 1 commit
  Committed by Dong Daxiang:
  * move paddle.fleet to paddle.distributed.fleet

- 12 Aug 2020, 1 commit
  Committed by JZ-LIANG:
  * add lamb to fleet meta optimizer

- 10 Aug 2020, 1 commit
  Committed by tangwei12:
  * add paddle.fleet.AsyncOptimizer
  Co-authored-by: dongdaxiang <dongdaxiang@baidu.com>
- 05 Aug 2020, 1 commit
  Committed by WangXi:
  * add dgc to fleet meta optimizer; rm dgc from optimizer all

- 03 Aug 2020, 1 commit
  Committed by Dong Daxiang:
  * split meta optimizer files
  * add graph execution in executor; update two properties in DistributedStrategy; unit tests for these features

- 30 Jul 2020, 1 commit
  Committed by mapingshuo:
  * add gradient merge optimizer to meta optimizers, test=develop

- 29 Jul 2020, 1 commit
  Committed by Dong Daxiang:
  * refine strategy compiler and meta optimizers; rename async to a_sync
- 28 Jul 2020, 1 commit
  Committed by Dong Daxiang:
  * add more settings for distributed strategy. Basically, DistributedStrategy has several parts of configuration:
    - BuildStrategy: the same as paddle.fluid.BuildStrategy, but with the distributed arguments moved out of BuildStrategy
    - ExecutionStrategy: the same as paddle.fluid.ExecutionStrategy
    - collective communication configs: nccl_comm_num, hierarchical allreduce, and so on
    - distributed algorithms: async update (mainly used in parameter server), lars, lamb, and so on
- 20 Jul 2020, 1 commit
  Committed by Dong Daxiang:
  * refactor fleet api under paddle.fleet; update DistributedStrategy

- 08 Jul 2020, 1 commit
  Committed by Dong Daxiang:
  * test=develop

- 06 Jul 2020, 1 commit
  Committed by Dong Daxiang:
  * add paddle.fleet.DistributedStrategy for 2.0