Commit a02a68dc authored by dzhwinter

"fixed based on comment"

Parent a2dfabb4
@@ -28,9 +28,9 @@ Besides, it needs interfaces to synchronize model update with each different GPU
As mentioned above, we wrap the NCCL routines as several kinds of operators. Note that NCCL needs to create a Communicator between the GPUs at the beginning, so an NCCLInit operator is created.
-### Graph Converter
+### Transpiler
-To be compatible with [parameter server design doc](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/ops/dist_train.md), the graph converter converts the user defined operation graph into sub-graphs to be executed on different devices.
+To be compatible with the [parameter server design doc](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/ops/dist_train.md), the transpiler compiles the user-defined operation graph into sub-graphs to be executed on different devices.
1. The user-defined model will be a single-device program
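
The NCCLInit operator described above must create one communicator per GPU before any collective call can run. The sketch below shows that setup with the raw NCCL C API; it is an illustration of the idea, not Paddle's operator implementation, and the name `CreateCommunicators` is hypothetical.

```cpp
// Minimal sketch of the setup an NCCLInit-style operator performs:
// build one NCCL communicator per visible GPU so that later AllReduce
// operators can synchronize gradients across devices.
// NOTE: illustrative code, not Paddle's actual operator implementation.
#include <vector>

#include <cuda_runtime.h>
#include <nccl.h>

std::vector<ncclComm_t> CreateCommunicators(const std::vector<int>& gpu_ids) {
  std::vector<ncclComm_t> comms(gpu_ids.size());
  // ncclCommInitAll initializes a clique of communicators in one call,
  // which matches the single-process, multi-GPU case described here.
  ncclCommInitAll(comms.data(), static_cast<int>(gpu_ids.size()),
                  gpu_ids.data());
  return comms;
}
```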
@@ -40,7 +40,7 @@ To be compatible with [parameter server design doc](https://github.com/PaddlePad
<img src="images/multigpu_before_convert.png" width="300"/>
-After convert, the graph as shows
+After compiling, the graph is shown as follows:
<img src="images/multigpu_allreduce.png" width="1000"/>
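
In the compiled graph, each gradient produced on a device is followed by an AllReduce that sums it with the corresponding gradients on the other GPUs. The sketch below shows roughly what such an operator executes, again using the raw NCCL API; `AllReduceGradients` is a hypothetical name, and device ids are assumed to be `0..n-1`.

```cpp
// Rough sketch of the work an inserted AllReduce operator performs:
// an in-place sum-AllReduce of one gradient buffer across all GPUs.
// NOTE: illustrative code; assumes grads[i] is a device pointer on GPU i.
#include <cstddef>
#include <vector>

#include <cuda_runtime.h>
#include <nccl.h>

void AllReduceGradients(const std::vector<float*>& grads, size_t len,
                        const std::vector<ncclComm_t>& comms,
                        const std::vector<cudaStream_t>& streams) {
  // Group the per-device calls so NCCL can schedule them without deadlock.
  ncclGroupStart();
  for (size_t i = 0; i < comms.size(); ++i) {
    cudaSetDevice(static_cast<int>(i));
    // After this call returns (and the stream syncs), every GPU holds the
    // summed gradient in grads[i].
    ncclAllReduce(grads[i], grads[i], len, ncclFloat, ncclSum, comms[i],
                  streams[i]);
  }
  ncclGroupEnd();
}
```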