Commit d11cf89c authored by Travis CI

Deploy to GitHub Pages: a0c1190f

Parent 456788ba
@@ -53,7 +53,7 @@ The IR for PaddlePaddle after refactoring is called a `Block`, it specifies the
 The user can not directly specify the parameter update rule for the parameter server in the Python module, since the parameter server does not use the same computation definition as the trainer. Instead, the update rule is baked inside the parameter server. The user can not specify the update rule explicitly.
 
 This could be fixed by making the parameter server run the same computation definition as the trainer (the user's Python module). For a detailed explanation, refer to this document -
-[Design Doc: Operation Graph Based Parameter Server](./dist_train.md)
+[Design Doc: Operation Graph Based Parameter Server](./parameter_server.md)
 
 ## Distributed Training Architecture
 
......
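To make the fix described in the diff above concrete, here is a minimal sketch of the idea behind the linked design doc: the optimizer emits the update rule as ordinary operators in a `Block`, which the parameter server then executes like any other program, instead of relying on a rule baked into the parameter server. The `Block` class, operator names, and variable names below are hypothetical illustrations, not PaddlePaddle's actual API.

```python
class Block:
    """A list of operators; stands in for the IR unit this design doc calls a Block."""
    def __init__(self):
        self.ops = []

    def append_op(self, op_type, inputs, outputs, attrs=None):
        self.ops.append({"type": op_type, "inputs": inputs,
                         "outputs": outputs, "attrs": attrs or {}})

def make_sgd_update_block(param_name, grad_name, lr):
    # The update rule "param -= lr * grad" becomes two operators in a Block,
    # so the parameter server runs user-specified computation instead of a
    # hard-coded rule.
    block = Block()
    block.append_op("scale", inputs=[grad_name], outputs=["scaled_grad"],
                    attrs={"scale": lr})
    block.append_op("sub", inputs=[param_name, "scaled_grad"],
                    outputs=[param_name])
    return block

# The trainer would ship this block to the parameter server, which executes it
# whenever a gradient for the (hypothetical) parameter `fc_0.w` arrives.
update_block = make_sgd_update_block("fc_0.w", "fc_0.w@GRAD", lr=0.01)
```

Because the update rule is then just another program, a user could swap in a different optimizer by emitting different operators; the linked design doc develops this idea in detail.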
@@ -246,7 +246,7 @@ computation is only specified in Python code which sits outside of PaddlePaddle,
 <span id="limitation-3"></span><h3>Limitation 3<a class="headerlink" href="#limitation-3" title="Permalink to this headline"></a></h3>
 <p>The user can not directly specify the parameter update rule for the parameter server in the Python module, since the parameter server does not use the same computation definition as the trainer. Instead, the update rule is baked inside the parameter server. The user can not specify the update rule explicitly.</p>
 <p>This could be fixed by making the parameter server run the same computation definition as the trainer (the user&#8217;s Python module). For a detailed explanation, refer to this document -
-<a class="reference external" href="design/refactor/dist_train.md">Design Doc: Operation Graph Based Parameter Server</a></p>
+<a class="reference internal" href="parameter_server.html"><span class="doc">Design Doc: Operation Graph Based Parameter Server</span></a></p>
 </div>
 </div>
 <div class="section" id="distributed-training-architecture">
......
@@ -53,7 +53,7 @@ The IR for PaddlePaddle after refactoring is called a `Block`, it specifies the
 The user can not directly specify the parameter update rule for the parameter server in the Python module, since the parameter server does not use the same computation definition as the trainer. Instead, the update rule is baked inside the parameter server. The user can not specify the update rule explicitly.
 
 This could be fixed by making the parameter server run the same computation definition as the trainer (the user's Python module). For a detailed explanation, refer to this document -
-[Design Doc: Operation Graph Based Parameter Server](./dist_train.md)
+[Design Doc: Operation Graph Based Parameter Server](./parameter_server.md)
 
 ## Distributed Training Architecture
 
......
@@ -247,7 +247,7 @@ computation is only specified in Python code which sits outside of PaddlePaddle,
 <span id="limitation-3"></span><h3>Limitation 3<a class="headerlink" href="#limitation-3" title="永久链接至标题"></a></h3>
 <p>The user can not directly specify the parameter update rule for the parameter server in the Python module, since the parameter server does not use the same computation definition as the trainer. Instead, the update rule is baked inside the parameter server. The user can not specify the update rule explicitly.</p>
 <p>This could be fixed by making the parameter server run the same computation definition as the trainer (the user&#8217;s Python module). For a detailed explanation, refer to this document -
-<a class="reference external" href="design/refactor/dist_train.md">Design Doc: Operation Graph Based Parameter Server</a></p>
+<a class="reference internal" href="parameter_server.html"><span class="doc">Design Doc: Operation Graph Based Parameter Server</span></a></p>
 </div>
 </div>
 <div class="section" id="distributed-training-architecture">
......