# Design Doc: Asynchronous Update With Distributed Training

## Background

For typical synchronous distributed training, the significant steps are as follows:

1. A trainer process will compute the gradients and **send** them to the parameter server (PS) nodes.
1. After the PS node has received the gradients from all the trainers, it aggregates the
gradient variables for the same parameter into one gradient variable, and then applies the aggregated
gradient to the respective parameter using an optimization algorithm (SGD, Momentum, ...)
to update the parameter.
1. The trainer waits for the PS to finish the optimize stage and then GETs the updated parameters from the PS,
so all the trainers end up with the same parameters.

In Synchronous Distributed Training, there is a **barrier** on each PS that waits until all trainer processes
have completed running the current mini-batch. Only after that can all the trainers continue to run the next
mini-batch. As a result, the overall performance of Synchronous Distributed Training is bounded by the
slowest node.

In Asynchronous Distributed Training, we do not need to wait for a global mini-batch: the optimizer on
the PS runs immediately when a gradient is uploaded to the PS from any one trainer. This mode can achieve
better scaling and throughput. In this design doc, we will introduce how to
implement Asynchronous Distributed Training based on PaddlePaddle Fluid.
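
The difference between the two modes can be made concrete with a minimal sketch. This is plain Python with a hand-written SGD rule, not the Fluid API; the single parameter `w1`, the number of trainers, and the learning rate are illustrative assumptions.

```python
import threading

NUM_TRAINERS = 4
LEARNING_RATE = 0.01
params = {"w1": [0.0] * 8}

# Synchronous mode: a barrier collects one gradient from every trainer before updating.
sync_buffer = {"w1": []}
lock = threading.Lock()

def sync_push_gradient(name, grad):
    with lock:
        sync_buffer[name].append(grad)
        if len(sync_buffer[name]) < NUM_TRAINERS:
            return  # barrier: still waiting for the slowest trainer
        aggregated = [sum(vals) / NUM_TRAINERS for vals in zip(*sync_buffer[name])]
        params[name] = [p - LEARNING_RATE * g for p, g in zip(params[name], aggregated)]
        sync_buffer[name].clear()

# Asynchronous mode: each uploaded gradient is applied immediately, with no barrier.
def async_push_gradient(name, grad):
    with lock:
        params[name] = [p - LEARNING_RATE * g for p, g in zip(params[name], grad)]
```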

## Design

<img src="./src/async_update.png" width="600"/>

As shown in the figure above, we describe a global view of the asynchronous update process, using
the parameter `w1` as an example to introduce the steps (a sketch of the whole loop follows the list):
1. Each gradient variable may be distributed across different GPU cards; aggregate
them once they have all been calculated.
1. Split the gradient variable into multiple blocks according to the number of PS
instances and then send them.
1. The PS runs an `Optimize Block`, using a specified optimization algorithm to update
the specified parameter.
1. The trainer fetches the latest parameter from the PS before running the forward Op which depends
on the specified parameter.
1. Broadcast the received variable to the multiple GPU cards and continue to run the next
mini-batch.
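
The five steps can be summarized as the per-iteration loop sketched below. This is plain Python rather than the real Fluid operators; the number of PS instances, the block layout of `w1`, and the inline SGD standing in for the server-side `Optimize Block` are illustrative assumptions.

```python
NUM_PS = 2
LEARNING_RATE = 0.01
ps_params = [[0.0] * 4 for _ in range(NUM_PS)]   # each PS instance holds one block of `w1`

def split_blocks(grad, num_ps):
    # Step 2: split one gradient variable into `num_ps` blocks.
    size = len(grad) // num_ps
    return [grad[i * size:(i + 1) * size] for i in range(num_ps)]

def run_one_minibatch(grads_per_gpu):
    # Step 1: aggregate the gradients of `w1` calculated on the different GPU cards.
    w1_grad = [sum(vals) for vals in zip(*grads_per_gpu)]
    # Step 2: split by the number of PS instances and send each block (no global barrier).
    for ps_id, block in enumerate(split_blocks(w1_grad, NUM_PS)):
        # Step 3 (server side): the PS applies its Optimize Block to its slice immediately.
        ps_params[ps_id] = [p - LEARNING_RATE * g
                            for p, g in zip(ps_params[ps_id], block)]
    # Step 4: fetch the latest parameter blocks and concatenate them back into `w1`.
    w1 = [v for block in ps_params for v in block]
    # Step 5: broadcast `w1` to every GPU card and continue with the next mini-batch.
    return w1

# Example: the gradients of `w1` produced by two GPU cards.
print(run_one_minibatch([[0.1] * 8, [0.3] * 8]))
```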

### Trainer

- For multi-device distributed training, we first need to aggregate the gradient
variables placed on the different devices and then schedule a `SendVars` operator to
send the gradient variables to the multiple PS instances.
- Schedule a `FetchVars` operator to fetch the latest parameter from the PS before running
the forward ops.
- There could be a large number of gradient variables to be sent, so we need to use another
thread pool (the IO thread pool), whose number of schedulable threads is larger than that of the
computing thread pool, to avoid competing with the computation for thread resources (see the
sketch below).
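
One way to picture the two-pool split is the sketch below. It is plain Python with `concurrent.futures`, not the Fluid executor's thread pools; `rpc_send` and the pool sizes are hypothetical placeholders.

```python
from concurrent.futures import ThreadPoolExecutor

# The IO pool is sized larger than the computing pool so the many pending
# send/fetch RPCs do not compete with computation for threads.
compute_pool = ThreadPoolExecutor(max_workers=4)    # runs forward/backward computation
io_pool = ThreadPoolExecutor(max_workers=16)        # runs SendVars / FetchVars RPCs

def rpc_send(ps_endpoint, var_name, block):
    pass  # placeholder: a real implementation would serialize and send over the network

def schedule_send_vars(ps_endpoints, var_name, blocks):
    # Each gradient block is sent from the IO pool, so computing threads never block on IO.
    return [io_pool.submit(rpc_send, ps, var_name, block)
            for ps, block in zip(ps_endpoints, blocks)]
```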

### Parameter Server

<img src="./src/async_pserver.png" width="750"/>

- Multiple trainer instances may want to optimize the same parameter at
the same time; to avoid races, we need one `BlockingQueue` for each gradient
variable so that the incoming gradients are processed one by one.
- We need a `Map` structure to map a gradient variable name to the `OptimizeBlock` that
can optimize the respective parameter (a sketch of both structures follows).
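
A compact sketch of these two server-side structures, assuming a single parameter `w1` and plain SGD; this is ordinary Python, not the actual `BlockingQueue`/`OptimizeBlock` classes.

```python
import queue
import threading

params = {"w1": [0.0] * 4}
LEARNING_RATE = 0.01

def sgd_optimize_block(param_name, grad):
    params[param_name] = [p - LEARNING_RATE * g for p, g in zip(params[param_name], grad)]

# Map: gradient variable name -> the optimize routine that updates the matching parameter.
optimize_blocks = {"w1@GRAD": lambda grad: sgd_optimize_block("w1", grad)}

# One blocking queue per gradient variable; trainers enqueue, one worker per queue dequeues.
grad_queues = {name: queue.Queue() for name in optimize_blocks}

def optimize_worker(grad_name):
    while True:
        grad = grad_queues[grad_name].get()   # blocks until some trainer sends a gradient
        optimize_blocks[grad_name](grad)      # gradients are applied one by one, no races

for name in grad_queues:
    threading.Thread(target=optimize_worker, args=(name,), daemon=True).start()

# A trainer's send RPC simply enqueues, e.g. grad_queues["w1@GRAD"].put([0.1] * 4)
```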