# ParallelExecutor Design Doc

## Introduction

We introduce `ParallelExecutor` to run multi-GPU training in PaddlePaddle Fluid. It supports
1. keeping a copy of the parameters on each GPU
1. allreduce on a separate stream, allowing computation and communication to overlap

An example of switching from single-GPU training to multiple GPUs:
```python
cost = your_neural_network()
opt = fluid.optimizer.SGDOptimizer(learning_rate=0.001)
opt.minimize(cost)

# change Executor -> ParallelExecutor
exe = fluid.ParallelExecutor(gpu_list=[0, 1])

for i in xrange(iter_num):
    exe.run()
```

## Design

The constructor is given the list of parameters whose gradients need to be allreduced.

At runtime, `ParallelExecutor` starts one thread per GPU, each running its own `Executor`.
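
As a concrete illustration, here is a minimal sketch of this threading model. The class layout and the names `gpu_list_`, `params_`, and `RunOnPlace` are assumptions made for this sketch, not the actual implementation.

```c++
#include <string>
#include <thread>
#include <vector>

// Hypothetical sketch of the design above, not the real implementation.
class ParallelExecutor {
 public:
  // `gpu_list`: GPUs to run on; `params`: the parameters whose
  // gradients need to be allreduced (as passed to the constructor).
  ParallelExecutor(const std::vector<int>& gpu_list,
                   const std::vector<std::string>& params)
      : gpu_list_(gpu_list), params_(params) {}

  void Run() {
    std::vector<std::thread> threads;
    for (int gpu_id : gpu_list_) {
      // One thread per GPU, each running its own Executor over that
      // GPU's local scope.
      threads.emplace_back([this, gpu_id] { RunOnPlace(gpu_id); });
    }
    for (auto& t : threads) t.join();
  }

 private:
  // Runs every operator on `gpu_id`, syncing streams as shown below.
  void RunOnPlace(int gpu_id);

  std::vector<int> gpu_list_;
  std::vector<std::string> params_;
};
```

For every operator run on a GPU, the executor automatically syncs with the other streams when necessary: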

```c++
// If the op's input is a parameter's gradient:
//   sync with the allreduce stream,
//   e.g. sgd should wait for allreduce to finish.
SyncMultipleStreams(op);

op->Run(*local_scope, place_);

// If the op's output is a parameter's gradient:
//   sync with the computation stream,
//   e.g. allreduce should wait for fc_grad to finish.
SyncMultipleStreams(op);
```
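
A common way to implement a helper like `SyncMultipleStreams` is with CUDA events, so the wait is queued on the device rather than blocking the host thread. The sketch below is only one possible implementation under that assumption; `WaitOnStream` and its parameters are hypothetical names.

```c++
#include <cuda_runtime.h>

// Hypothetical helper: make future work on `waiting` wait until all
// work currently enqueued on `producing` has finished, without
// blocking the host thread.
void WaitOnStream(cudaStream_t producing, cudaStream_t waiting) {
  cudaEvent_t event;
  // No timing information is needed, which keeps the event cheap.
  cudaEventCreateWithFlags(&event, cudaEventDisableTiming);
  // Mark the current tail of the producing stream ...
  cudaEventRecord(event, producing);
  // ... and queue a device-side wait on it in the waiting stream.
  cudaStreamWaitEvent(waiting, event, 0);
  // Safe to destroy here: CUDA releases the event once it completes.
  cudaEventDestroy(event);
}
```

With such a helper, `SyncMultipleStreams(op)` would pick the allreduce stream or the computation stream as the producing side, depending on whether `op` consumes or produces a parameter's gradient.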


## API

`ParallelExecutor.run` has a similar interface to `Executor.run`, with the following differences:
1. Scope: we don't expose `scope` in `ParallelExecutor.run`, since `ParallelExecutor` maintains its own scopes internally (e.g., for the NCCL communicators).
1. Feed: we don't expose `feed` either, because the whole point of `ParallelExecutor` is speed; the network's input should come from a reader OP instead.
1. Fetch: we return the fetched values from all GPUs as a list (e.g. `exe.run(..., fetch=loss)` will return `[loss_on_gpu0, loss_on_gpu1]`).