diff --git a/doc/fluid/design/dist_train/async_update.md b/doc/fluid/design/dist_train/async_update.md
index 05175596f7a3261e788121240e32df78928c84b5..be8783a7e71f3cfccdd5741cf65c2c6ad5dc3e93 100644
--- a/doc/fluid/design/dist_train/async_update.md
+++ b/doc/fluid/design/dist_train/async_update.md
@@ -4,41 +4,46 @@
 For the typical synchronous distributed training, some significant steps are as follows:
 
-1. A Trainer will compute the gradients and SEND them to the Parameter
-Server(PServer) nodes.
-1. After the PServer node received gradients came from all the Trainers, it would apply the gradient to the respective variables, and using an optimize algorithms(SGD,
- Momentment...) to update the parameters.
-1. The Trainer would wait for the PServers finished the optimize stage, and GET the parameters from PServer, so all the Trainers would get the same parameters.
+1. A trainer computes the gradients and SENDs them to the Parameter Server (PServer) nodes.
+1. After a PServer node has received the gradients from all the trainers, it applies
+them to the respective variables and runs an optimization algorithm (SGD,
+Momentum, ...) to update the parameters.
+1. Each trainer waits until the PServers have finished the optimization stage, then GETs
+the parameters from the PServers, so all the trainers hold the same parameters.
 
 In the synchronously distributed training, there should be a `Barrier` to synchronise the
-parameters after the optimizing stage. The performance of a distributed training job
-depends on the lowest node, if there were hundreds or thousand training nodes in a Job,
-the performance of synchronously distributed training might be very slow because of
-the slow node. So this design doc would introduce an approach to implement
+parameters after the optimizing stage.
+The performance of a distributed training job is bounded by its slowest node:
+if there were hundreds or thousands of training nodes in a job, synchronous
+distributed training could become very slow because of a single straggler.
+So this design doc introduces an approach to implement
 *asynchronously* distributed training in PaddlePaddle Fluid.
 
 ## Design
 
-
+
-As the figure above, we describe a global view of asynchronously update process and use
-the parameter `w1` as an example to introduce the steps:
+As shown in the figure above, we describe a global view of the asynchronous update
+process, using the parameter `w1` as an example to introduce the steps:
-1. For each gradient variables, they may distribute on different GPU card and aggregate
-them while they are all calculated.
+1. The gradient variables may be placed on different GPU cards; aggregate them once
+they have all been calculated.
 1. Split the gradient variable into multiple blocks according to the number of PServer
-instances and sent them.
+instances and then send them.
-1. PServer would run an `Optimize Block` to use a specified optimize algorithm to update
-the specified parameter, such as `w1`.
-1. The trainer will fetch the latest parameter after PServer finished the optimize stage.
+1. The PServer runs an `Optimize Block`, using a specified optimization algorithm to
+update the specified parameter.
+1. The trainer fetches a parameter before running any forward Op that depends on that
+parameter.
 1. Broadcast the received variable into multiple GPU cards and continue to run the next
 mini-batch.
 
 ### Trainer
 
-- We need a new Operator named `RemoteOptimize` to send gradients to multiple PServer
-instances and fetch the latest parameter.
+- For multi-device distributed training, we first need to aggregate the gradient
+variables placed on the different devices, and then schedule a `SendVars` Operator to
+send the gradient variables to the multiple PServer instances.
+- Schedule a `FetchVars` Operator to fetch the latest parameters from the PServers before
+running the forward ops.
 - There could be a large number of gradient variables to be sent, so we need to use another
-thread pool(IO Threadpool) which number of the schedulable threads is larger than the
-computing thread pool to avoid competitive the thread resources with computing.
+thread pool (IO threadpool) whose number of schedulable threads is larger than that of the
+computing thread pool, to avoid competing with computation for thread resources.
 
 ### Parameter Server
diff --git a/doc/fluid/design/dist_train/src/async_pserver.graffle b/doc/fluid/design/dist_train/src/async_pserver.graffle
index 110eec7f9b3f26034cd90feb188b558c11f7ce02..ea6dcfa2a9e4daac1500d98743a7a5a4234470f9 100644
Binary files a/doc/fluid/design/dist_train/src/async_pserver.graffle and b/doc/fluid/design/dist_train/src/async_pserver.graffle differ
diff --git a/doc/fluid/design/dist_train/src/async_pserver.png b/doc/fluid/design/dist_train/src/async_pserver.png
index 1f49e31aeb255807013b9f0881ccfa004cb1e7b5..a3e18126c3ca40316d7a5c7f06a2e8dd48d5f263 100644
Binary files a/doc/fluid/design/dist_train/src/async_pserver.png and b/doc/fluid/design/dist_train/src/async_pserver.png differ
diff --git a/doc/fluid/design/dist_train/src/async_update.graffle b/doc/fluid/design/dist_train/src/async_update.graffle
index 040112477fcd7c2876678ef3b3ba6ec085619e00..3a631888688a0d564a873fcb16d943958c91223e 100644
Binary files a/doc/fluid/design/dist_train/src/async_update.graffle and b/doc/fluid/design/dist_train/src/async_update.graffle differ
diff --git a/doc/fluid/design/dist_train/src/async_update.png b/doc/fluid/design/dist_train/src/async_update.png
index 6e54d15e9951e5048cb75dfb50c6a5d9bdc89feb..3e8db973f45d6d9ac8dcce1dc7878067e79e6dcc 100644
Binary files a/doc/fluid/design/dist_train/src/async_update.png and b/doc/fluid/design/dist_train/src/async_update.png differ
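
The asynchronous update scheme the patch describes (each trainer SENDs its gradient and fetches parameters with no barrier, while the PServer applies every gradient as it arrives) can be sketched as a toy in-process simulation. This is only an illustration of the protocol under stated assumptions: `PServer`, `send_grad`, `fetch_param`, and `trainer` are hypothetical names, not PaddlePaddle Fluid APIs, and threads stand in for distributed nodes.

```python
import threading

# Toy sketch of asynchronous parameter updates: no barrier anywhere.
# A real deployment would use RPC (SendVars / FetchVars ops) instead
# of shared memory; a lock models the PServer serializing updates.

class PServer:
    def __init__(self, init_w, lr=0.1):
        self.w = init_w          # the parameter shard, e.g. "w1"
        self.lr = lr
        self.updates = 0         # count of gradients applied so far
        self._lock = threading.Lock()

    def send_grad(self, grad):
        # Apply the gradient immediately on arrival; do NOT wait for
        # gradients from the other trainers (this is the async part).
        with self._lock:
            self.w -= self.lr * grad
            self.updates += 1

    def fetch_param(self):
        # Trainers fetch the latest (possibly stale) parameter value.
        with self._lock:
            return self.w

def trainer(ps, steps):
    for _ in range(steps):
        w = ps.fetch_param()     # fetch before the forward pass
        grad = 2.0 * w           # toy gradient of loss = w**2
        ps.send_grad(grad)       # send without synchronizing

ps = PServer(init_w=1.0)
threads = [threading.Thread(target=trainer, args=(ps, 100)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(ps.updates)                # 400 gradients applied, one per step
print(ps.fetch_param())          # w has moved toward the optimum at 0
```

Note how the trainers may compute gradients from stale parameter values; the design trades this staleness for throughput, since no trainer ever blocks on the slowest node.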