# Design Doc: NCCL support in Paddle Fluid

## Abstract

This design doc covers NCCL support in Paddle. We propose an approach to support the NCCL library on both a single machine and multiple machines, wrapping the NCCL primitives `Broadcast`, `Allreduce`, and `Reduce` as operators so that one script can exploit multiple GPUs.
## Motivation
[NCCL](https://developer.nvidia.com/nccl) is an NVIDIA library for multi-GPU communication, optimized for NVIDIA GPUs. It provides routines such as all-gather, all-reduce, broadcast, reduce, and reduce-scatter that achieve high bandwidth over PCIe and the NVLink high-speed interconnect. With the NCCL library, we can easily accelerate training in parallel.
- Pros
1. Easy to plug in the [NCCL2](https://developer.nvidia.com/nccl) library.
1. High performance on NVIDIA GPUs.
1. MPI-like primitives, which have a low learning cost for users.
- Cons
1. Designed only for NVIDIA GPUs; not a general multi-device solution.
1. Although NCCL1 is open-sourced under the BSD license, NCCL2 is no longer open source.
At the beginning of training, the framework needs to distribute the same parameters to every GPU, and it needs to merge the gradients whenever the user requires.
As a result, during training we need peer-to-peer copies between GPUs, aggregation of gradients/parameters from GPUs, and broadcasting of parameters to GPUs. Every GPU only needs to run the operator with the correct place information.
Besides, we need interfaces to synchronize model updates across the different GPU cards.
## Implementation
As mentioned above, we wrap the NCCL routines as several kinds of operators. Note that NCCL needs to create a communicator between the GPUs at the beginning, so an `NCCLInit` operator is created.
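For reference, below is a minimal sketch of what the communicator-creation step amounts to in the single-machine case, written directly against the NCCL C API. How the resulting communicators are stored and which operator owns them are open design details here, not a final Paddle API; the function name is illustrative only.

```cpp
// Single-process, multi-GPU communicator setup using the NCCL C API.
// In this design it would run inside the NCCLInit operator; error handling
// is omitted for brevity.
#include <vector>
#include <cuda_runtime.h>
#include <nccl.h>

std::vector<ncclComm_t> CreateCommunicators(const std::vector<int>& gpu_ids) {
  std::vector<ncclComm_t> comms(gpu_ids.size());
  // One communicator per GPU; rank i corresponds to gpu_ids[i].
  ncclCommInitAll(comms.data(), static_cast<int>(gpu_ids.size()),
                  gpu_ids.data());
  return comms;
}
```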
### Transpiler
To be compatible with the [parameter server design doc](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/ops/dist_train.md), the transpiler compiles the user-defined operation graph into sub-graphs to be executed on different devices.
1. The user-defined model will be a single-device program.
2. Broadcast/Reduce operators between GPUs will be inserted into the program; in the multi-node case, `Send` and `Recv` operators may also be inserted.
*Broadcast and AllReduce on a single machine; Broadcast, AllReduce, and [Send, Recv](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/ops/dist_train.md#graph-converter) across multiple machines.*
Operators are added to the sub-graphs. Every GPU is assigned a rank, e.g. `rank0`, `rank1`, etc.
- **Broadcast**. The Broadcast operator distributes the initialized parameters to all GPUs from the GPU that owns them, e.g. the `rank0` GPU.
- **AllReduce**. The AllReduce operator synchronizes parameters/gradients between GPUs. AllReduce is implemented with the ring-based communication method, which avoids the bottleneck of a single GPU.
Note that the AllReduce operator forces the GPUs to synchronize at that point. Whether the whole training process runs in asynchronous or synchronous mode depends on where the AllReduce points sit in the graph.
As shown in the figure, each GPU computes the gradient of `W`, followed by an `AllReduce` operator that accumulates `dW` over the full batch of data; each GPU then runs the optimization step individually and applies the gradient to its own `W`.
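For illustration, here is what the per-GPU body of these two operators looks like when mapped straight onto NCCL's built-in collectives, i.e. the plain variant before the custom ring scheme described below. The NCCL calls are the real C API; the function and buffer names are assumptions for this sketch.

```cpp
// Per-GPU body of the Broadcast and AllReduce operators, sketched directly
// on the NCCL C API. Each rank calls these with its own communicator and
// CUDA stream; param/grad/count are illustrative names only.
#include <cuda_runtime.h>
#include <nccl.h>

void BroadcastParam(float* param, size_t count, int root,
                    ncclComm_t comm, cudaStream_t stream) {
  // Distribute the initialized parameter from the root rank to every GPU.
  ncclBcast(param, count, ncclFloat, root, comm, stream);
}

void AllReduceGrad(const float* grad_in, float* grad_out, size_t count,
                   ncclComm_t comm, cudaStream_t stream) {
  // Sum dW across all GPUs so every rank ends up with the full-batch gradient.
  ncclAllReduce(grad_in, grad_out, count, ncclFloat, ncclSum, comm, stream);
}
```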
- **AllReduce**
Note that our AllReduce operator is a ring-based AllReduce implementation. If we used the NCCL2 AllReduce primitive directly, every GPU would optimize over the full batch of data, wasting (n-1) GPUs' worth of compute. In addition, NCCL2's built-in AllReduce only uses the communication resources during synchronization, and updating the gradients then becomes a separate subsequent phase. In fact, we can amortize the gradient-update cost into the communication phase. The process is:
1. Every parameter has a root card. That card is responsible for aggregating the gradients of this parameter from the other GPUs.
2. The model's parameters are hashed to different root cards to ensure load balance between GPUs.
3. Logically neighboring cards send their partial gradients to the next card on the ring. After one round, the parameter's root card has aggregated the full gradient.
4. The root card then runs the optimizer for that parameter.
5. The root card sends the optimized result to its neighbor, and each neighbor in turn forwards the parameter to the next card.
6. The synchronization round finishes.
The total time cost is 2 * (n-1) * per-parameter-send-time, so we achieve the goal of amortizing the update time into the communication phase.
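To make the 2 * (n-1) step count concrete, below is a small sketch that only prints one possible schedule for a single parameter whose root rank is `root` on an n-GPU ring. The actual sends/receives and the optimizer call are left abstract, and the exact hop order is an assumption of this sketch rather than a fixed part of the design.

```cpp
// Illustrative per-parameter ring schedule. Phase 1: the running gradient
// sum hops around the ring and ends at the root (n - 1 sends). Phase 2: the
// root applies the optimizer, then the updated parameter hops around the
// ring so every other rank receives it (another n - 1 sends).
#include <cstdio>

void PrintRingSchedule(int n, int root) {
  // Phase 1: aggregate gradients towards the root.
  for (int step = 0; step < n - 1; ++step) {
    int src = (root + 1 + step) % n;
    int dst = (root + 2 + step) % n;  // dst == root on the last step
    std::printf("grad hop %d: rank %d -> rank %d\n", step, src, dst);
  }
  std::printf("rank %d runs the optimizer for this parameter\n", root);
  // Phase 2: circulate the optimized parameter back around the ring.
  for (int step = 0; step < n - 1; ++step) {
    int src = (root + step) % n;
    int dst = (root + step + 1) % n;
    std::printf("param hop %d: rank %d -> rank %d\n", step, src, dst);
  }
}

int main() {
  PrintRingSchedule(/*n=*/4, /*root=*/1);  // 2 * (4 - 1) = 6 sends in total
  return 0;
}
```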