- 10 Aug 2020, 4 commits

Committed by lilong12
* add support for multi-node training

Committed by gongweibao
* fix merge3 (test=develop)

Committed by tangwei12
* add paddle.fleet.AsyncOptimizer (Co-authored-by: dongdaxiang <dongdaxiang@baidu.com>)

Committed by danleifeng
* support multi-PS training mode for fleetrun (test=develop)
- 07 Aug 2020, 1 commit

Committed by 123malin
* remove the out args; move fleet_util to paddle.fleet (test=develop, test=document_fix; Co-authored-by: WuHaobo <wuhaobo1994@gmail.com>, tangwei12 <tangwei12@baidu.com>)
- 06 Aug 2020, 1 commit

Committed by xujiaqi01
* move dataset to fleet, plus follow-up fixes (test=develop)
- 05 Aug 2020, 3 commits

Committed by WangXi
* add DGC to the fleet meta optimizer; remove DGC from the plain optimizer list (a configuration sketch follows)
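As an illustration of the switch this commit adds, a minimal sketch of enabling DGC through DistributedStrategy, assuming the attribute and config-key names of the released paddle.distributed.fleet API (the commit itself targeted the transitional paddle.fleet package):

```python
# Minimal sketch: enable Deep Gradient Compression via the fleet meta optimizer.
# Attribute/key names follow the released API and are assumptions for this commit.
import paddle.distributed.fleet as fleet

strategy = fleet.DistributedStrategy()
strategy.dgc = True
strategy.dgc_configs = {"rampup_begin_step": 0}  # start compressing from step 0

# DGC wraps a momentum optimizer; inside a fleet-launched job one would do:
# fleet.init(is_collective=True)
# opt = paddle.optimizer.Momentum(learning_rate=0.001, momentum=0.9)
# opt = fleet.distributed_optimizer(opt, strategy=strategy)
```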
Committed by Dong Daxiang
* generate context during compilation
Committed by danleifeng
* add the fleetrun command for distributed launching (test=develop); a usage sketch follows
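A hedged usage sketch for the new launcher; the flag names below follow the released fleetrun CLI and may have differed in this first version:

```python
# train.py -- started by the fleetrun launcher, e.g.:
#   fleetrun --server_num=2 --worker_num=2 train.py   # parameter-server mode
#   fleetrun --gpus=0,1 train.py                      # collective mode
# fleetrun sets the role/endpoint environment variables that fleet.init() reads.
import paddle.distributed.fleet as fleet

fleet.init()
if fleet.is_worker():
    print("worker", fleet.worker_index(), "of", fleet.worker_num())
```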
- 04 Aug 2020, 1 commit

Committed by Dong Daxiang
- 03 Aug 2020, 3 commits

Committed by WangXi

Committed by JZ-LIANG

Committed by Dong Daxiang
* split the meta optimizer files
* add graph execution, update two properties in DistributedStrategy, and add unit tests for these features
- 01 Aug 2020, 1 commit

Committed by Yi Liu
* add the LocalSGD meta optimizer (a configuration sketch follows)
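A minimal sketch of turning the new optimizer on; the config key follows the released API and is an assumption for this commit:

```python
# Minimal sketch: enable the LocalSGD meta optimizer added in this commit.
import paddle.distributed.fleet as fleet

strategy = fleet.DistributedStrategy()
strategy.localsgd = True
strategy.localsgd_configs = {"k_steps": 4}  # average parameters every 4 local steps
```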
- 31 Jul 2020, 1 commit

Committed by lilong12
* add the pipeline optimizer (a configuration sketch follows)
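A hedged sketch of enabling pipeline parallelism through DistributedStrategy; the config key name is taken from later releases and the original commit may have used a different one:

```python
# Minimal sketch: enable the pipeline meta optimizer added in this commit.
# "micro_batch_size" is the key name in later releases; treat it as an assumption.
import paddle.distributed.fleet as fleet

strategy = fleet.DistributedStrategy()
strategy.pipeline = True
strategy.pipeline_configs = {"micro_batch_size": 2}  # split each batch into micro-batches
```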
- 30 Jul 2020, 2 commits

Committed by mapingshuo
* add the GradientMerge optimizer to the meta optimizers (test=develop); a sketch follows
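A minimal sketch of the technique: accumulate gradients over k_steps micro-batches before applying one update. Key names follow the released API and are assumptions for this commit:

```python
# Minimal sketch: enable the GradientMerge meta optimizer.
import paddle.distributed.fleet as fleet

strategy = fleet.DistributedStrategy()
strategy.gradient_merge = True
strategy.gradient_merge_configs = {"k_steps": 4, "avg": True}  # average the merged grads
```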
Committed by tangwei12
* Integrated Trainer of Parameter Server (#22957); the only API addition is `fluid.contrib.layers.sparse_embedding` (a usage sketch follows)
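A hedged sketch of the one API this PR adds; the keyword set is inferred from later documentation:

```python
# Minimal sketch of fluid.contrib.layers.sparse_embedding: a distributed
# embedding lookup for parameter-server training (static graph only).
# The exact signature here is an assumption based on later releases.
import paddle
import paddle.fluid as fluid

paddle.enable_static()  # needed on 2.x; static graph was the default in the 1.8 era
ids = fluid.data(name="ids", shape=[None, 1], dtype="int64")
emb = fluid.contrib.layers.sparse_embedding(
    input=ids,
    size=[1000000, 64],  # [sparse feature space size, embedding width]
)
```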
- 29 Jul 2020, 1 commit

Committed by Dong Daxiang
* refine the strategy compiler and meta optimizers; rename the async option to a_sync (a sketch follows)
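A minimal sketch of the renamed switch; the key semantics follow the released API and should be treated as assumptions here:

```python
# Minimal sketch: a_sync selects the parameter-server update mode.
import paddle.distributed.fleet as fleet

strategy = fleet.DistributedStrategy()
strategy.a_sync = True                     # asynchronous updates
strategy.a_sync_configs = {"k_steps": 0}   # 0: pure async; >0: GEO-style local steps
```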
- 28 Jul 2020, 1 commit

Committed by Dong Daxiang
* add more settings for DistributedStrategy. Its configuration has several parts (a sketch follows this list):
  - BuildStrategy: the same as paddle.fluid.BuildStrategy, but with the distributed arguments moved out of it
  - ExecutionStrategy: the same as paddle.fluid.ExecutionStrategy
  - collective communication configs: nccl_comm_num, hierarchical allreduce, and so on
  - distributed algorithms: async update (mainly used in parameter-server training), LARS, LAMB, and so on
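A minimal sketch covering the four configuration parts listed above, assuming the attribute names of the released paddle.distributed.fleet API (the commit itself lived in the transitional paddle.fleet package):

```python
# Minimal sketch: one DistributedStrategy object carries all four parts.
import paddle
import paddle.distributed.fleet as fleet

strategy = fleet.DistributedStrategy()

# 1) BuildStrategy, minus the distributed arguments that moved out of it
build = paddle.static.BuildStrategy()
build.fuse_elewise_add_act_ops = True
strategy.build_strategy = build

# 2) ExecutionStrategy, unchanged from paddle.fluid.ExecutionStrategy
exe = paddle.static.ExecutionStrategy()
exe.num_threads = 4
strategy.execution_strategy = exe

# 3) collective communication configs
strategy.nccl_comm_num = 2

# 4) distributed algorithms
strategy.a_sync = True   # async update, mainly for parameter-server training
# strategy.lars = True   # or LARS/LAMB for large-batch collective training
```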
- 24 Jul 2020, 1 commit

Committed by xujiaqi01
* add fleet distributed metrics (test=develop)
- 23 Jul 2020, 1 commit

Committed by Dong Daxiang
* fix a bug in NCCL id generation
- 20 Jul 2020, 1 commit

Committed by Dong Daxiang
* refactor the fleet API under paddle.fleet; update DistributedStrategy
- 08 Jul 2020, 1 commit

Committed by Dong Daxiang
* test=develop
- 06 Jul 2020, 1 commit

Committed by Dong Daxiang
* add paddle.fleet.DistributedStrategy for 2.0
- 23 Mar 2020, 1 commit

Committed by XiaoguangHu