- 11 May 2021, 1 commit
  Committed by ShenLiang
  * fix error log for reducer
  * fix doc
  * fix bug of unittest
  * fix spawn
  * fix coverage
- 25 Apr 2021, 1 commit
  Committed by lilong12
  * update
- 17 Apr 2021, 1 commit
  Committed by ShenLiang
  * add model parallel support in dygraph
- 15 Apr 2021, 1 commit
  Committed by Thunderbrook
  * pscore support heterps
  * fleet cmake
  * fleet wrapper
  * macro
  * solve conflicts
  * paddle enforce
  * add and fix unittests
- 07 Apr 2021, 1 commit
  Committed by JZ-LIANG
- 02 Apr 2021, 1 commit
  Committed by JZ-LIANG
- 01 Apr 2021, 1 commit
  Committed by ShenLiang
  * support control flow
  * support sync_parameters_buffers
  * fix the bug of sparse embedding
- 22 Mar 2021, 1 commit
  Committed by lilong12
  * add 1F1B scheduler for pipeline parallelism (pp), test=develop
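The 1F1B (one-forward-one-backward) schedule mentioned above can be illustrated with a small, self-contained sketch. This is a hypothetical illustration of the scheduling order only, not Paddle's actual implementation; the function name and tuple encoding are invented here. Each pipeline stage runs a warmup of forward passes, a steady phase alternating one forward with one backward, and then drains the remaining backwards.

```python
def one_f_one_b_schedule(num_stages, stage_id, num_microbatches):
    """Return the op sequence for one pipeline stage under 1F1B.

    Each entry is ('F', mb) or ('B', mb) for a forward/backward pass
    over micro-batch `mb`. Earlier stages need a longer warmup so that
    later stages have work in flight.
    """
    warmup = min(num_stages - stage_id - 1, num_microbatches)
    steady = num_microbatches - warmup
    ops, fwd, bwd = [], 0, 0
    for _ in range(warmup):        # warmup: forwards only
        ops.append(('F', fwd)); fwd += 1
    for _ in range(steady):        # steady state: one forward, one backward
        ops.append(('F', fwd)); fwd += 1
        ops.append(('B', bwd)); bwd += 1
    for _ in range(warmup):        # cooldown: drain remaining backwards
        ops.append(('B', bwd)); bwd += 1
    return ops
```

For the last stage the warmup is empty, so it alternates F/B from the start, which is what bounds peak activation memory compared with an all-forwards-then-all-backwards schedule.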
- 10 Mar 2021, 1 commit
  Committed by lilong12
  * remove the send/recv of the tensor size; users now have to specify the shape of the received variable explicitly
- 24 Feb 2021, 1 commit
  Committed by lilong12
  * update, test=develop
- 01 Feb 2021, 1 commit
  Committed by WangXi
- 26 Jan 2021, 1 commit
  Committed by lilong12
- 12 Jan 2021, 1 commit
  Committed by JZ-LIANG
- 08 Jan 2021, 1 commit
  Committed by Chengmo
  * add tensor table
- 17 Dec 2020, 1 commit
  Committed by WangXi
- 11 Dec 2020, 1 commit
  Committed by JZ-LIANG
  * sharding: add hybrid-dp feature
  * update sharding in distributed_strategy
  * update sharding unittest
  * revise code format for sharding
- 01 Dec 2020, 1 commit
  Committed by ShenLiang
- 26 Oct 2020, 1 commit
  Committed by mapingshuo
  * add sharding
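Sharding in this context partitions parameters (and their optimizer state) across ranks so each rank only holds a slice. A hypothetical greedy-balancing sketch of the partitioning step, not the actual Paddle implementation (function and variable names are invented here):

```python
def shard_parameters(param_sizes, num_ranks):
    """Assign each parameter index to the currently least-loaded rank,
    keeping per-rank memory for parameters/optimizer state balanced.

    Returns (shards, loads): shards[r] is the list of parameter indices
    owned by rank r, loads[r] the total size assigned to rank r.
    """
    shards = [[] for _ in range(num_ranks)]
    loads = [0] * num_ranks
    # place the largest tensors first for a tighter balance
    for idx, size in sorted(enumerate(param_sizes), key=lambda kv: -kv[1]):
        rank = loads.index(min(loads))
        shards[rank].append(idx)
        loads[rank] += size
    return shards, loads
```

During training, each rank then updates only its shard and broadcasts the refreshed values to the other ranks.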
- 13 Oct 2020, 1 commit
  Committed by Chengmo
  * refine fleetrun.ps_launch
  * update fleet run for multi device support
  * ps_graph support ps-gpu
  * fix heter save
  * add heter save unittest
  * fix unittest & simplify code
  * update fleetrun
  * fix fleetrun
  * fix launch barrier
  * fix role maker
  * add paddlecloud rolemaker unittest
  * rename heter_worker_device_guard
- 27 Sep 2020, 1 commit
  Committed by Chengmo
  * fix test_dist_fleet_heter_ctr & performance update
- 25 Sep 2020, 1 commit
  Committed by WangXi
- 16 Sep 2020, 1 commit
  Committed by ShenLiang
  * add adaptivelsgd
  * TODO: fix the code to avoid the conflict
- 14 Sep 2020, 1 commit
  Committed by ShenLiang
  * remove auto from localsgd
- 09 Sep 2020, 1 commit
  Committed by JZ-LIANG
  * add lars to fleet meta optimizer
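LARS (Layer-wise Adaptive Rate Scaling) scales the global learning rate per layer by a trust ratio computed from the weight and gradient norms. A minimal sketch of the rule, with hypothetical names and default coefficients chosen for illustration (not the meta optimizer's code):

```python
def lars_local_lr(lr, weight_norm, grad_norm,
                  trust_coeff=0.001, weight_decay=0.0005):
    """Per-layer learning rate under LARS:

        local_lr = lr * trust_coeff * ||w|| / (||g|| + weight_decay * ||w||)

    Layers with small gradients relative to their weights get a larger
    effective step, which stabilizes large-batch training.
    """
    if weight_norm == 0.0 or grad_norm == 0.0:
        return lr  # fall back to the global rate for degenerate layers
    trust_ratio = trust_coeff * weight_norm / (grad_norm + weight_decay * weight_norm)
    return lr * trust_ratio
```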
- 25 Aug 2020, 1 commit
  Committed by Dong Daxiang
  * add cudnn related strategies to DistributedStrategy
- 12 Aug 2020, 1 commit
  Committed by JZ-LIANG
  * add lamb to fleet meta optimizer
- 10 Aug 2020, 1 commit
  Committed by tangwei12
  * add paddle.fleet.AsyncOptimizer
  Co-authored-by: dongdaxiang <dongdaxiang@baidu.com>
- 05 Aug 2020, 1 commit
  Committed by WangXi
  * add dgc to fleet meta optimizer; remove dgc from optimizer all
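DGC (Deep Gradient Compression) communicates only the largest-magnitude gradient entries each step and accumulates the rest locally as a residual. A toy dense-list sketch of the top-k selection step, with invented names and no momentum correction (the real algorithm also applies momentum/gradient clipping):

```python
def dgc_select_topk(grad, ratio):
    """Pick the top `ratio` fraction of gradient entries by magnitude.

    Returns (sent, residual): `sent` maps index -> value for the entries
    that would be communicated; `residual` keeps the suppressed entries
    to be accumulated into the next step's gradient.
    """
    k = max(1, int(len(grad) * ratio))
    top = sorted(range(len(grad)), key=lambda i: -abs(grad[i]))[:k]
    sent = {i: grad[i] for i in top}
    residual = [0.0 if i in sent else g for i, g in enumerate(grad)]
    return sent, residual
```

With a ratio around 0.001 in practice, only sparse (index, value) pairs cross the network, which is what makes DGC attractive on bandwidth-limited clusters.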
- 03 Aug 2020, 3 commits
  Committed by WangXi
  Committed by JZ-LIANG
  Committed by Dong Daxiang
  * split meta optimizer files
  * add graph execution, update two properties in DistributedStrategy, unit tests for these features
- 29 Jul 2020, 1 commit
  Committed by Dong Daxiang
  * refine strategy compiler and meta optimizers; rename async to a_sync
- 28 Jul 2020, 1 commit
  Committed by Dong Daxiang
  * add more settings for distributed strategy. Basically, DistributedStrategy has several parts of configuration:
    - BuildStrategy: the same as paddle.fluid.BuildStrategy, but the distributed arguments are moved out of BuildStrategy
    - ExecutionStrategy: the same as paddle.fluid.ExecutionStrategy
    - collective communication configs: nccl_comm_num, hierarchical allreduce, and so on
    - distributed algorithms: async_update (mainly used in PS), lars, lamb, and so on
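The four groups of settings listed in that commit message can be pictured with a pure-Python sketch. Field names below follow the message, but this is a hypothetical model, not Paddle's actual DistributedStrategy class or its real attribute set:

```python
from dataclasses import dataclass, field

@dataclass
class DistributedStrategySketch:
    """Toy model of the four configuration groups described above."""
    # mirrors paddle.fluid.BuildStrategy / ExecutionStrategy, with the
    # distributed-specific arguments pulled out into this class
    build_strategy: dict = field(default_factory=dict)
    execution_strategy: dict = field(default_factory=dict)
    # collective communication configs
    nccl_comm_num: int = 1
    use_hierarchical_allreduce: bool = False
    # distributed algorithm switches
    a_sync: bool = False   # async update, mainly used in parameter-server mode
    lars: bool = False
    lamb: bool = False
```

Grouping everything behind one strategy object lets a user toggle an algorithm (say, LARS) without touching launch scripts, which is the design direction the later commits in this log follow.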
- 20 Jul 2020, 1 commit
  Committed by Dong Daxiang
  * refactor fleet api under paddle.fleet; update DistributedStrategy
- 06 Jul 2020, 1 commit
  Committed by Dong Daxiang
  * add paddle.fleet.DistributedStrategy for 2.0