# Design Doc: Asynchronous Update With Distributed Training

## Background

In typical synchronous distributed training, the main steps are as follows:

1. A Trainer computes the gradients and SENDs them to the Parameter Server (PServer) nodes.
1. After a PServer node has received the gradients from all the Trainers, it aggregates the
gradient variables for the same parameter into one gradient variable, then applies the aggregated
gradient to the respective parameter using an optimization algorithm (SGD, Momentum, ...)
to update the parameter.
1. The Trainers wait for the PServers to finish the optimization stage, then GET the parameters from the PServers,
so all the Trainers end up with the same parameters. A toy sketch of one such round is shown below.
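
The following is a minimal, single-process sketch of one synchronous round assuming plain SGD; the names (`compute_gradient`, `sync_round`) are illustrative and not part of Fluid:

```python
# Toy, single-process simulation of one synchronous update round.
NUM_TRAINERS = 3
LEARNING_RATE = 0.1

def compute_gradient(trainer_id, w):
    # Stand-in for forward/backward on one Trainer's mini-batch.
    return 2.0 * w + trainer_id

def sync_round(w):
    # 1. Every Trainer computes and SENDs its gradient.
    grads = [compute_gradient(i, w) for i in range(NUM_TRAINERS)]
    # 2. The PServer waits for all gradients, aggregates them, and applies
    #    an optimization algorithm (plain SGD here).
    w -= LEARNING_RATE * sum(grads) / NUM_TRAINERS
    # 3. Every Trainer GETs the same updated parameter (the barrier).
    return w

w1 = 1.0
w1 = sync_round(w1)
print(w1)  # all Trainers would see this same value
```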

In synchronous distributed training, there has to be a `Barrier` to synchronize the
parameters after the optimization stage. The performance of a distributed training job is
therefore bounded by its slowest node; if a job has hundreds or thousands of training nodes,
the performance of synchronous distributed training can be very poor because of slow nodes.
So this design doc introduces an approach to implement
*asynchronous* distributed training in PaddlePaddle Fluid.

## Design

<img src="./src/async_update.png" width="600"/>

As shown in the figure above, we describe a global view of the asynchronous update process,
using the parameter `w1` as an example to introduce the steps:

1. The gradient variables may be distributed across different GPU cards; aggregate each gradient
once all of its parts have been calculated.
1. Split each gradient variable into multiple blocks according to the number of PServer
instances, and then send them.
1. Each PServer runs an `Optimize Block`, using a specified optimization algorithm to update
the specified parameter.
1. The Trainer fetches a parameter before running any forward Op that depends on it.
1. Broadcast the received parameter to the multiple GPU cards and continue to run the next
mini-batch; a toy sketch of this flow follows the list.
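
Below is a toy, thread-based sketch of this flow for the single parameter `w1` (the queue, lock, and learning rate are assumptions for illustration, not the Fluid implementation); note that the PServer applies each gradient as soon as it arrives, with no global barrier:

```python
import queue
import threading

grad_queue = queue.Queue()   # gradients SENT by Trainers for w1
param = {"w1": 1.0}
lock = threading.Lock()

def pserver():
    # The "Optimize Block": apply each incoming gradient immediately (SGD).
    while True:
        g = grad_queue.get()
        if g is None:            # shutdown signal for the toy example
            break
        with lock:
            param["w1"] -= 0.1 * g

def trainer(trainer_id):
    for _ in range(5):
        with lock:
            w = param["w1"]      # fetch the latest parameter before the forward pass
        grad_queue.put(2.0 * w)  # compute a gradient on a mini-batch and send it

server = threading.Thread(target=pserver)
server.start()
workers = [threading.Thread(target=trainer, args=(i,)) for i in range(3)]
for t in workers:
    t.start()
for t in workers:
    t.join()
grad_queue.put(None)
server.join()
print(param["w1"])
```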

### Trainer

- For multi-device distributed training, we first need to aggregate the gradient
variables placed on different devices, and then schedule a `SendVars` Operator to
send the gradient variables to the multiple PServer instances.
- Schedule a `FetchVars` Operator to fetch the latest parameters from the PServers before running
the forward Ops.
- There could be a large number of gradient variables to be sent, so we need to use another
thread pool (IO Threadpool), whose number of schedulable threads is larger than that of the
computing thread pool, to avoid competing with computation for thread resources; see the sketch after this list.
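
A minimal sketch of this two-pool arrangement using Python's standard `concurrent.futures` (the pool sizes and `send_to_pserver` are assumptions for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

compute_pool = ThreadPoolExecutor(max_workers=4)   # runs the computing ops
io_pool = ThreadPoolExecutor(max_workers=16)       # larger pool, only for IO

def send_to_pserver(grad_name, grad_value):
    # Hypothetical RPC to the PServer instance that owns this gradient block.
    ...

def on_gradient_ready(grad_name, grad_value):
    # Called from a computing thread: the send is handed off to the IO pool,
    # so network IO never blocks or competes with computation.
    io_pool.submit(send_to_pserver, grad_name, grad_value)
```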

### Parameter Server

<img src="./src/async_pserver.png" width="750"/>

- Multiple Trainer instances may try to optimize the same parameter at
the same time; to avoid conflicting updates, we need one `BlockingQueue` for each gradient
variable so that its gradients are processed one by one.
- We need a `Map` structure that maps a gradient variable's name to the `OptimizeBlock` which
can optimize the respective parameter, as in the sketch below.
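
A sketch of these two structures, assuming a plain callable stands in for an `OptimizeBlock` (the helper names here are hypothetical):

```python
import queue
import threading

grad_queues = {}      # Map: gradient variable name -> BlockingQueue
optimize_blocks = {}  # Map: gradient variable name -> OptimizeBlock (a callable here)

def register(grad_name, optimize_block):
    grad_queues[grad_name] = queue.Queue()
    optimize_blocks[grad_name] = optimize_block
    threading.Thread(target=worker, args=(grad_name,), daemon=True).start()

def worker(grad_name):
    # Gradients for one variable are consumed one by one, so two Trainers
    # can never update the same parameter concurrently.
    q = grad_queues[grad_name]
    while True:
        grad = q.get()
        optimize_blocks[grad_name](grad)  # run the OptimizeBlock for this parameter

def on_recv(grad_name, grad):
    # Called by the RPC layer when a Trainer SENDs a gradient.
    grad_queues[grad_name].put(grad)
```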