From a02a68dc6daecbd3d5b17b57db03e0b3f916646e Mon Sep 17 00:00:00 2001
From: dzhwinter
Date: Thu, 14 Dec 2017 06:09:54 -0800
Subject: [PATCH] "fixed based on comment"

---
 doc/design/paddle_nccl.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/doc/design/paddle_nccl.md b/doc/design/paddle_nccl.md
index 7f0d8e14e4f..c7dac70998a 100644
--- a/doc/design/paddle_nccl.md
+++ b/doc/design/paddle_nccl.md
@@ -28,9 +28,9 @@ Besides, it needs interfaces to synchronize model update with each different GPU
 
 As mentioned above, we wrap the NCCL routines as several kinds of operators. Need to note that NCCL need to create Communicator between gpu at the beginning, so there is a NCCLInit operator created.
 
-### Graph Converter
+### Transpiler
 
-To be compatible with [parameter server design doc](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/ops/dist_train.md), the graph converter converts the user defined operation graph into sub-graphs to be executed on different devices.
+To be compatible with the [parameter server design doc](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/ops/dist_train.md), the transpiler compiles the user-defined operation graph into sub-graphs to be executed on different devices.
 
 1. The user-defined model will be a single device program
 
-After convert, the graph as shows
+After compiling, the graph looks as follows
--
GitLab
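
The transpiler flow this patch describes — compiling a single-device program into one sub-graph per GPU, with an NCCLInit operator creating the communicator up front — can be sketched in Python. This is a minimal, hypothetical illustration only: the `Program`/`Op` classes and the `transpile`, `ncclInit`, and `ncclAllReduce` names are assumptions for this sketch, not PaddlePaddle's actual API.

```python
# Hypothetical sketch of the transpiler step described in the patch above.
# A user-defined single-device program is compiled into per-GPU sub-programs;
# an ncclInit op is prepended so the NCCL communicator exists before any
# collective runs. None of these names are PaddlePaddle's real API.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Op:
    type: str
    attrs: Dict = field(default_factory=dict)


@dataclass
class Program:
    ops: List[Op] = field(default_factory=list)


def transpile(single_device_program: Program, gpu_ids: List[int]) -> List[Program]:
    """Compile a single-device program into one sub-program per GPU."""
    sub_programs = []
    for gpu_id in gpu_ids:
        # Create the communicator first, as the design doc requires.
        ops = [Op("ncclInit", {"gpus": gpu_ids})]
        for op in single_device_program.ops:
            ops.append(Op(op.type, {**op.attrs, "device": gpu_id}))
            if op.type.endswith("_grad"):
                # Synchronize each gradient across GPUs right after it
                # is computed (assumed here to use an all-reduce op).
                ops.append(Op("ncclAllReduce", {"gpus": gpu_ids}))
        sub_programs.append(Program(ops))
    return sub_programs


if __name__ == "__main__":
    prog = Program([Op("mul"), Op("mul_grad")])
    for gpu_id, sub in enumerate(transpile(prog, gpu_ids=[0, 1])):
        print(f"GPU {gpu_id}: {[op.type for op in sub.ops]}")
```

Running the sketch prints each sub-program's op list, showing that every GPU gets the same graph prefixed with `ncclInit` and with a collective inserted after each gradient op — the per-device sub-graph structure the transpiler section refers to.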