From a2dfabb46a5dc2ac11d57b91c3fe8ed749c17c82 Mon Sep 17 00:00:00 2001
From: dzhwinter
Date: Thu, 14 Dec 2017 03:07:21 -0800
Subject: [PATCH] "fix based on comments"

---
 doc/design/paddle_nccl.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/design/paddle_nccl.md b/doc/design/paddle_nccl.md
index 5e28144b95..7f0d8e14e4 100644
--- a/doc/design/paddle_nccl.md
+++ b/doc/design/paddle_nccl.md
@@ -7,7 +7,7 @@ This Design Doc refers to the NCCL feature in paddle. We propose an approach t
 
 ## Motivation
 
-NCCL is a NVIDIA library support Multi-GPU communicating and optimized for NVIDIA GPUs, it provides routines such as all-gather, all-reduce, broadcast, reduce, reduce-scatter, that can achieve high bandwidth over PCIe and NVLink high-speed interconnect. [NCCL](https://developer.nvidia.com/nccl). With NCCL library, we can easily accelerate the training in parallel.
+[NCCL](https://developer.nvidia.com/nccl) is an NVIDIA library that supports multi-GPU communication and is optimized for NVIDIA GPUs. It provides routines such as all-gather, all-reduce, broadcast, reduce, and reduce-scatter, which achieve high bandwidth over PCIe and the NVLink high-speed interconnect. With the NCCL library, we can easily accelerate training in parallel.
 
 - Pros
 1. easily plug-in with [NCCL2](https://developer.nvidia.com/nccl) library.
--
GitLab