From d11cf89c03d87b205da0a53015936c1537ad5a57 Mon Sep 17 00:00:00 2001
From: Travis CI
Date: Tue, 5 Dec 2017 09:56:38 +0000
Subject: [PATCH] Deploy to GitHub Pages:
a0c1190f76b10973d40966c97485de7397990138
---
.../_sources/design/refactor/distributed_architecture.md.txt | 2 +-
develop/doc/design/refactor/distributed_architecture.html | 2 +-
.../_sources/design/refactor/distributed_architecture.md.txt | 2 +-
develop/doc_cn/design/refactor/distributed_architecture.html | 2 +-
4 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/develop/doc/_sources/design/refactor/distributed_architecture.md.txt b/develop/doc/_sources/design/refactor/distributed_architecture.md.txt
index 2b4f921ae9..d9fe7d6bbb 100644
--- a/develop/doc/_sources/design/refactor/distributed_architecture.md.txt
+++ b/develop/doc/_sources/design/refactor/distributed_architecture.md.txt
@@ -53,7 +53,7 @@ The IR for PaddlePaddle after refactoring is called a `Block`, it specifies the
The user can not directly specify the parameter update rule for the parameter server in the Python module, since the parameter server does not use the same computation definition as the trainer. Instead, the update rule is baked inside the parameter server. The user can not specify the update rule explicitly.
This could be fixed by making the parameter server run the same computation definition as the trainer (the user's Python module). For a detailed explanation, refer to this document -
-[Design Doc: Operation Graph Based Parameter Server](./dist_train.md)
+[Design Doc: Operation Graph Based Parameter Server](./parameter_server.md)
## Distributed Training Architecture
diff --git a/develop/doc/design/refactor/distributed_architecture.html b/develop/doc/design/refactor/distributed_architecture.html
index 6533e3ca23..2522d87b98 100644
--- a/develop/doc/design/refactor/distributed_architecture.html
+++ b/develop/doc/design/refactor/distributed_architecture.html
@@ -246,7 +246,7 @@ computation is only specified in Python code which sits outside of PaddlePaddle,
Limitation 3
The user can not directly specify the parameter update rule for the parameter server in the Python module, since the parameter server does not use the same computation definition as the trainer. Instead, the update rule is baked inside the parameter server. The user can not specify the update rule explicitly.
This could be fixed by making the parameter server run the same computation definition as the trainer (the user’s Python module). For a detailed explanation, refer to this document -
-<a href="dist_train.html">Design Doc: Operation Graph Based Parameter Server</a>
+<a href="parameter_server.html">Design Doc: Operation Graph Based Parameter Server</a>
diff --git a/develop/doc_cn/_sources/design/refactor/distributed_architecture.md.txt b/develop/doc_cn/_sources/design/refactor/distributed_architecture.md.txt
index 2b4f921ae9..d9fe7d6bbb 100644
--- a/develop/doc_cn/_sources/design/refactor/distributed_architecture.md.txt
+++ b/develop/doc_cn/_sources/design/refactor/distributed_architecture.md.txt
@@ -53,7 +53,7 @@ The IR for PaddlePaddle after refactoring is called a `Block`, it specifies the
The user can not directly specify the parameter update rule for the parameter server in the Python module, since the parameter server does not use the same computation definition as the trainer. Instead, the update rule is baked inside the parameter server. The user can not specify the update rule explicitly.
This could be fixed by making the parameter server run the same computation definition as the trainer (the user's Python module). For a detailed explanation, refer to this document -
-[Design Doc: Operation Graph Based Parameter Server](./dist_train.md)
+[Design Doc: Operation Graph Based Parameter Server](./parameter_server.md)
## Distributed Training Architecture
diff --git a/develop/doc_cn/design/refactor/distributed_architecture.html b/develop/doc_cn/design/refactor/distributed_architecture.html
index 2e9b7939eb..1062cedd5a 100644
--- a/develop/doc_cn/design/refactor/distributed_architecture.html
+++ b/develop/doc_cn/design/refactor/distributed_architecture.html
@@ -247,7 +247,7 @@ computation is only specified in Python code which sits outside of PaddlePaddle,
Limitation 3
The user can not directly specify the parameter update rule for the parameter server in the Python module, since the parameter server does not use the same computation definition as the trainer. Instead, the update rule is baked inside the parameter server. The user can not specify the update rule explicitly.
This could be fixed by making the parameter server run the same computation definition as the trainer (the user’s Python module). For a detailed explanation, refer to this document -
-<a href="dist_train.html">Design Doc: Operation Graph Based Parameter Server</a>
+<a href="parameter_server.html">Design Doc: Operation Graph Based Parameter Server</a>
--
GitLab