Commit 26817729 authored by: Travis CI

Deploy to GitHub Pages: 47643038

Parent d83b861a
@@ -15,7 +15,7 @@ This poses technical challenges to PaddlePaddle:

A training job is created once the user asks Paddle Cloud to train a model. The training job is made up of different processes that collaboratively consume data and produce a trained model. There are three kinds of processes:

1. the *master server process*, which dispatches tasks to
1. one or more *trainer processes*, which run distributed training and synchronize gradients/models via
1. one or more *parameter server processes*, each of which holds a shard of the global model and receives the uploaded gradients from every *trainer process*, so it can run the optimization functions to update its parameters.

@@ -27,9 +27,9 @@ By coordinating these processes, PaddlePaddle supports use both Synchronize Stoc

When training with sync SGD, the parameter servers wait for all trainers to finish their gradient updates before sending the updated parameters back; training cannot proceed until every trainer has received the updated parameters. This creates a synchronization point between trainers. When training with async SGD, each trainer uploads gradients and downloads new parameters individually, without synchronizing with other trainers. Async SGD is faster in terms of time per pass, but the gradients are noisier since trainers are likely to hold a stale model.

### Master Server Process

The master server process will:

- Partition a dataset into [tasks](#task) and dispatch tasks to trainers.
- Keep track of training progress on the dataset with a [task queue](#task-queue). A training job will iterate over the dataset for a full pass before it goes into the next pass.

@@ -41,11 +41,11 @@ A task is a data shard to be trained. The total number of tasks will be much big

#### Task Queue

The master server has three task queues to track training progress. As illustrated in the graph below, Job A and Job B both have one master server. Each master server process has three task queues.

<img src="src/paddle-task-queues.png"/>

- The todo queue holds tasks to be dispatched. When a job starts, the master server fills the todo queue with all tasks.
- The pending queue holds tasks that are currently being trained by trainers.
- The done queue holds tasks that have already been trained.

@@ -54,10 +54,10 @@ The life cycle of a single task is illustrated below:

<img src="src/paddle-task-states.png"/>

1. When a new pass of training starts, all tasks will be placed in the todo queue.
1. The master server will dispatch a few tasks to each trainer at a time, put them in the pending queue, and wait for completion.
1. The trainer will work on its tasks and tell the master server once a task is completed. The master server will then dispatch a new task to that trainer.
1. If a task times out, the master server will move it back to the todo queue and increase its timeout count by one. If the timeout count goes above a threshold, the task is likely to cause a trainer to crash, so it will be discarded.
1. The master server will move completed tasks to the done queue. When the todo queue is empty, the master server will start a new pass by moving all tasks in the done queue back to the todo queue and resetting the timeout counter of every task to zero.
### Trainer Process

@@ -93,7 +93,7 @@ The communication pattern between the trainers and the parameter servers depends

## Fault Tolerant

The training job will pause if the master server process dies, or if any of the parameter server processes die. They will be restarted by [Kubernetes](https://kubernetes.io/) and recover in a few minutes. Please refer to [fault recovery](#fault-recovery).

The training job will continue to make progress as long as there is at least one trainer process running. The strategy depends on the type of optimization algorithm:

@@ -113,7 +113,7 @@ Now we will introduce how each process recovers from a failure, the graph below

<img src="src/paddle-etcd.png"/>

### Master Server Process

When the master is started by Kubernetes, it executes the following steps at startup:

@@ -122,7 +122,7 @@ When the master is started by the Kubernetes, it executes the following steps at

1. Watches the trainer prefix keys `/trainer/` on etcd to find the live trainers.
1. Starts dispatching tasks to the trainers, and updates the task queue using an etcd transaction to ensure the lock is held during the update.

When the master server process dies for any reason, Kubernetes will restart it. It will be online again with all states recovered from etcd within a few minutes.
### Trainer Process

@@ -132,7 +132,7 @@ When the trainer is started by the Kubernetes, it executes the following steps a

1. Generates a unique ID, and sets the key `/trainer/<unique ID>` with its contact address as the value. The key will be deleted when the lease expires, so the master will be aware of the trainer going online and offline.
1. Waits for tasks from the master to start training.

If the trainer's etcd lease expires, it will try to set the key `/trainer/<unique ID>` again so that the master server can discover the trainer again.
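A minimal sketch of this registration step with the etcd v3 Go client is shown below. The `/trainer/<unique ID>` key layout comes from the design above; the lease TTL, the use of a UUID for the unique ID, and the error handling are assumptions made for the sketch.

```go
// registerTrainer sketches trainer registration on etcd. Assumptions: a
// 10-second lease TTL and a UUID-based unique ID. clientv3 is
// "go.etcd.io/etcd/client/v3"; uuid is "github.com/google/uuid".
func registerTrainer(cli *clientv3.Client, trainerAddr string) error {
	ctx := context.Background()

	// A lease-backed key disappears automatically if the trainer stops
	// renewing the lease, which is how the master learns the trainer is gone.
	lease, err := cli.Grant(ctx, 10)
	if err != nil {
		return err
	}

	id := uuid.New().String() // hypothetical unique-ID scheme
	if _, err := cli.Put(ctx, "/trainer/"+id, trainerAddr, clientv3.WithLease(lease.ID)); err != nil {
		return err
	}

	// Keep the lease alive in the background; if it ever expires, the trainer
	// should call registerTrainer again so the master can rediscover it.
	_, err = cli.KeepAlive(ctx, lease.ID)
	return err
}
```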
When a trainer fails, Kubernetes will try to restart it. The recovered trainer will fetch tasks from the todo queue and continue training.
@@ -21,10 +21,10 @@

### File Preprocessing

Before a dataset can be used for training, its files need to be converted into the storage format used internally by the PaddlePaddle cluster (RecordIO). We offer two ways to do the conversion:

1. The user converts the data locally and then uploads it.
1. The user uploads the raw data and then runs the conversion program on the cluster.

The generated files are named in the following format:

@@ -92,11 +92,11 @@ random_images-00099-of-00099

#### Training

PaddlePaddle provides a dedicated [data reader creator](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/reader/README.md#python-data-reader-design-doc) that generates a data reader for the given RecordIO files. **The reader is used in exactly the same way locally and in the cloud**:

```python
# ...
reader = paddle.reader.creator.RecordIO("/home/random_images-*-of-*")
batch_reader = paddle.batch(paddle.dataset.mnist.train(), 128)
trainer.train(batch_reader, ...)
```
# Design Doc: Master Server
For an overview of the master server's role, please refer to the [distributed training design doc](./README.md). In this design doc we will discuss the master server in more detail. The master will be implemented in [Go](https://golang.org/).
## Dataset
<img src="src/dataset.png"/>
A dataset is a list of files in *RecordIO* format. A RecordIO file consists of chunks, and each chunk consists of some records.
## Task Queue
As mentioned in the [distributed training design doc](./README.md), a *task* is a data shard that the master server assigns to a trainer process to train on. A task consists of one or more *blocks* from one or more files. The master server maintains *task queues* to track the training progress.
### Task Queue Creation
1. Each trainer will make an RPC call (using Go's [rpc](https://golang.org/pkg/net/rpc/) package) to the master server, telling it the RecordIO files representing the dataset specified by the user. Since every trainer will tell the master server the same dataset, only the first RPC call will be honored.
The RPC interface is:
```go
func (m *RPCServer) ReportDataset(Paths []string, dummy *int) error {
}
```
1. The master server will scan through each RecordIO file to generate the *block index* and learn how many blocks each file has. A block can be referenced by the file path and the index of the block within the file. The block index is an in-memory data structure that enables fast access to each block, and the index of a block within its file is an integer starting from 0, representing the n-th block within the file.
The definition of the block is:
```go
type Block struct {
Idx int // index of the block within the file
Path string
Index recordio.Index // block index
}
```
1. Blocks are grouped into tasks, and the tasks are filled into the todo queue. The pending queue and the done queue are initialized empty. (A sketch of this grouping step appears after the definitions below.)
The definition of the task is:
```go
type Task struct {
Index int
Blocks []Block
}
```
The elements in the task queues are of type `TaskEntry`, which contains a timeout counter (described in [task retry logic](#task-retry-logic)) and a task:
```go
type TaskEntry struct {
NumTimeout int
Task Task
}
```
The definition of task queues is:
```go
type TaskQueues struct {
Todo []TaskEntry
Pending map[int]TaskEntry // map from task index to task entry
Done []TaskEntry
}
```
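A minimal sketch of steps 2 and 3 using the definitions above is shown below. `loadBlocks` is a hypothetical helper standing in for the real RecordIO index scan, and grouping a fixed number of blocks per task is an assumed policy rather than something the design mandates.

```go
// newTaskQueues sketches how the master might build the initial queues from the
// reported dataset. loadBlocks is a hypothetical helper that scans one RecordIO
// file and returns its blocks (step 2).
func newTaskQueues(paths []string, blocksPerTask int) TaskQueues {
	var blocks []Block
	for _, p := range paths {
		blocks = append(blocks, loadBlocks(p)...)
	}

	// Step 3: group blocks into tasks and fill the todo queue.
	var todo []TaskEntry
	for i := 0; i < len(blocks); i += blocksPerTask {
		end := i + blocksPerTask
		if end > len(blocks) {
			end = len(blocks)
		}
		todo = append(todo, TaskEntry{
			Task: Task{Index: len(todo), Blocks: blocks[i:end]},
		})
	}

	// The pending and done queues start empty.
	return TaskQueues{
		Todo:    todo,
		Pending: make(map[int]TaskEntry),
	}
}
```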
### Task Queue Persistence
The task queues need to be persisted on [etcd](https://github.com/coreos/etcd) for fault recovery. Since the task queues only change when a task is completed or timed out, which is not very frequent, we can afford to synchronize with etcd every time the task queues change.
We will serialize the task queues data structure with [gob encoding](https://golang.org/pkg/encoding/gob/), compress with gzip, and save into etcd synchronously under key `/task_queues`.
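A sketch of this persistence step is shown below, assuming the etcd v3 Go client (`go.etcd.io/etcd/client/v3`, imported as `clientv3`); the helper names and the simplified error handling are not part of the design.

```go
// encodeQueues serializes the task queues with gob and compresses them with gzip.
// Imports: "bytes", "compress/gzip", "encoding/gob", "context".
func encodeQueues(q TaskQueues) ([]byte, error) {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	if err := gob.NewEncoder(zw).Encode(q); err != nil {
		return nil, err
	}
	if err := zw.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

// saveQueues writes the compressed queues to etcd under /task_queues.
func saveQueues(cli *clientv3.Client, q TaskQueues) error {
	data, err := encodeQueues(q)
	if err != nil {
		return err
	}
	_, err = cli.Put(context.Background(), "/task_queues", string(data))
	return err
}
```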
### Task Dispatch
The trainer will make an RPC call to the master to get a new task when:
- the trainer first starts, or
- the trainer finishes a task.
The RPC interface is:
```go
func (m *RPCServer) GetTask(finished *Task, result *Task) error {
}
```
Argument `finished` will be `nil` when the trainer is just started.
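On the trainer side, the dispatch loop might look like the sketch below, using Go's `net/rpc` client. The HTTP transport, the hypothetical `train` helper, and substituting a zero-value `Task` for the nil first argument are assumptions made for this sketch.

```go
// trainerLoop sketches the trainer side of task dispatch. It assumes the master
// serves Go net/rpc over HTTP at masterAddr, and that train() is the trainer's
// own (hypothetical) routine for processing one task. Import "net/rpc".
func trainerLoop(masterAddr string, paths []string) error {
	client, err := rpc.DialHTTP("tcp", masterAddr)
	if err != nil {
		return err
	}
	defer client.Close()

	// Report the dataset once at startup; only the first trainer's call is honored.
	var dummy int
	if err := client.Call("RPCServer.ReportDataset", paths, &dummy); err != nil {
		return err
	}

	// The design passes nil as `finished` on the first call; a zero-value Task
	// stands in for it in this sketch.
	var finished Task
	for {
		var next Task
		if err := client.Call("RPCServer.GetTask", &finished, &next); err != nil {
			return err
		}
		if err := train(next); err != nil { // hypothetical per-task training step
			return err
		}
		finished = next // report it as finished on the next call
	}
}
```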
During the RPC call the master will do the following:
- Make a copy of the task queues, and update the copy reflecting the finished tasks and the new pending tasks.
- Synchronize the copy of the task queues with etcd using a transaction conditioned on holding the master lock (a sketch of this guarded update follows this list).
- Replace the task queues with the copy and report the new task to the trainer if the transaction succeeded, or discard the copy and report the error to the trainer if it failed.
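A sketch of the guarded etcd update, reusing the `encodeQueues` helper sketched earlier; the lock key name `/master_lock` and the lock value are assumptions about how the master lock is represented.

```go
// commitQueues writes the serialized copy to etcd only if this master still
// holds the lock. Import "errors" in addition to the packages used above.
func commitQueues(ctx context.Context, cli *clientv3.Client, lockValue string, updated TaskQueues) error {
	data, err := encodeQueues(updated)
	if err != nil {
		return err
	}
	resp, err := cli.Txn(ctx).
		If(clientv3.Compare(clientv3.Value("/master_lock"), "=", lockValue)).
		Then(clientv3.OpPut("/task_queues", string(data))).
		Commit()
	if err != nil {
		return err
	}
	if !resp.Succeeded {
		// Lost the lock: the caller should discard the copy and report the error.
		return errors.New("not holding the master lock; task queue update rejected")
	}
	return nil
}
```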
### Task Retry Logic
When a task is dispatched to the trainer, the master will schedule a function for execution after the timeout duration (based on the moving average of task completion time). If the task entry is still in the pending queue, its timeout counter will increase by one, and the task will be moved back to the todo queue. If the timeout counter is above the threshold, the master will log the error and discard the task.
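A sketch of that scheduled check using `time.AfterFunc` is shown below; the `Master` struct, its mutex, the timeout estimate, and the threshold are assumptions about the surrounding implementation.

```go
// scheduleTimeout arms a timer for a dispatched task. When it fires, the task is
// either already finished (no longer pending) or is retried, or discarded once
// its timeout counter exceeds the threshold. Imports: "log", "time".
func (m *Master) scheduleTimeout(taskIndex int) {
	time.AfterFunc(m.timeoutEstimate(), func() { // moving average of completion time
		m.mu.Lock()
		defer m.mu.Unlock()

		entry, ok := m.queues.Pending[taskIndex]
		if !ok {
			return // the task finished in time; nothing to do
		}
		delete(m.queues.Pending, taskIndex)

		entry.NumTimeout++
		if entry.NumTimeout > m.maxTimeouts {
			log.Printf("discarding task %d after %d timeouts", taskIndex, entry.NumTimeout)
			return
		}
		m.queues.Todo = append(m.queues.Todo, entry)
	})
}
```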
Please note that a timed-out task could still be completed after it has been dispatched for retry, so it is possible for a task to be processed multiple times. We do not try to prevent this, since it is fine to train on the same task multiple times due to the stochastic nature of the stochastic gradient descent algorithm.
...@@ -15,7 +15,7 @@ This poses technical challenges to PaddlePaddle: ...@@ -15,7 +15,7 @@ This poses technical challenges to PaddlePaddle:
A training job will be created once user asks Paddle cloud to train a model. The training job is made up of different processes that collaboratively consume data and produce a trained model. There are three kinds of processes: A training job will be created once user asks Paddle cloud to train a model. The training job is made up of different processes that collaboratively consume data and produce a trained model. There are three kinds of processes:
1. the *master process*, which dispatches tasks to 1. the *master server process*, which dispatches tasks to
1. one or more *trainer processes*, which run distributed training and synchronize gradients/models via 1. one or more *trainer processes*, which run distributed training and synchronize gradients/models via
1. one or more *parameter server processes*, where each holds a shard of the global model, and receive the uploaded gradients from every *trainer process*, so they can run the optimize functions to update their parameters. 1. one or more *parameter server processes*, where each holds a shard of the global model, and receive the uploaded gradients from every *trainer process*, so they can run the optimize functions to update their parameters.
...@@ -27,9 +27,9 @@ By coordinating these processes, PaddlePaddle supports use both Synchronize Stoc ...@@ -27,9 +27,9 @@ By coordinating these processes, PaddlePaddle supports use both Synchronize Stoc
When training with sync SGD, parameter servers wait for all trainers to finish gradients update and then send the updated parameters to trainers, training can not proceed until the trainer received the updated parameters. This creates a synchronization point between trainers. When training with async SGD, each trainer upload gradient and download new parameters individually, without the synchronization with other trainers. Using asyc SGD will be faster in terms of time per pass, but have more noise in gradient since trainers are likely to have a stale model. When training with sync SGD, parameter servers wait for all trainers to finish gradients update and then send the updated parameters to trainers, training can not proceed until the trainer received the updated parameters. This creates a synchronization point between trainers. When training with async SGD, each trainer upload gradient and download new parameters individually, without the synchronization with other trainers. Using asyc SGD will be faster in terms of time per pass, but have more noise in gradient since trainers are likely to have a stale model.
### Master Process ### Master Server Process
The master process will: The master server process will:
- Partition a dataset into [tasks](#task) and dispatch tasks to trainers. - Partition a dataset into [tasks](#task) and dispatch tasks to trainers.
- Keep track of training progress on the dataset with [task queue](#task-queue). A training job will iterate on the dataset for a full pass until it goes into next pass. - Keep track of training progress on the dataset with [task queue](#task-queue). A training job will iterate on the dataset for a full pass until it goes into next pass.
...@@ -41,11 +41,11 @@ A task is a data shard to be trained. The total number of tasks will be much big ...@@ -41,11 +41,11 @@ A task is a data shard to be trained. The total number of tasks will be much big
#### Task Queue
The master server has three task queues to track training progress. As illustrated in the graph below, Job A and Job B both have one master server. Each master server process has three task queues.
<img src="src/paddle-task-queues.png"/>
- The todo queue holds tasks to be dispatched. When a job starts, the master server fills in the todo queue with all tasks.
- The pending queue holds tasks that are currently being trained by trainers.
- The done queue holds tasks that are already trained.
...@@ -54,10 +54,10 @@ The life cycle of a single task is illustrated below:
<img src="src/paddle-task-states.png"/>
1. When a new pass of training starts, all tasks will be placed in the todo queue.
1. The master server will dispatch a few tasks to each trainer at a time, put them in the pending queue, and wait for completion.
1. The trainer will work on its tasks and tell the master server once a task is completed. The master server will then dispatch a new task to that trainer.
1. If a task times out, the master server will move it back to the todo queue and increase its timeout count by one. If the timeout count goes above a threshold, the task is likely to cause a trainer to crash, so it will be discarded.
1. The master server will move completed tasks to the done queue. When the todo queue is empty, the master server will start a new pass by moving all tasks in the done queue back to the todo queue and resetting the timeout counter of every task to zero (see the sketch below).
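The queue rotation in the last step is simple enough to sketch. The snippet below is a minimal illustration, not the actual implementation; it assumes the `TaskQueues` and `TaskEntry` types defined in the master server design doc later in this document, and the helper name `startNewPass` is made up for the example.

```go
// startNewPass illustrates step 5 above: once the todo queue is empty
// and nothing is pending, every finished task is moved back to the todo
// queue with its timeout counter reset, which begins the next pass.
func startNewPass(q *TaskQueues) {
	if len(q.Todo) > 0 || len(q.Pending) > 0 {
		return // the current pass has not finished yet
	}
	for _, entry := range q.Done {
		entry.NumTimeout = 0
		q.Todo = append(q.Todo, entry)
	}
	q.Done = nil
}
```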
### Trainer Process
...@@ -93,7 +93,7 @@ The communication pattern between the trainers and the parameter servers depends
## Fault Tolerant
The training job will pause if the master server process is dead, or any of the parameter server processes is dead. They will be restarted by [Kubernetes](https://kubernetes.io/) and recover in a few minutes. Please refer to [fault recovery](#fault-recovery).
The training job will continue to make progress if there is at least one training process running. The strategy depends on the type of optimization algorithm:
...@@ -113,7 +113,7 @@ Now we will introduce how each process recovers from a failure, the graph below
<img src="src/paddle-etcd.png"/>
### Master Server Process
When the master is started by Kubernetes, it executes the following steps at startup:
...@@ -122,7 +122,7 @@ When the master is started by the Kubernetes, it executes the following steps at
1. Watches the trainer prefix keys `/trainer/` on etcd to find the live trainers.
1. Starts dispatching the tasks to the trainers, and updates the task queue using an etcd transaction to ensure the lock is held during the update.
When the master server process is dead for any reason, Kubernetes will restart it. It will be online again with all states recovered from etcd in a few minutes, as sketched below.
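A rough sketch of the recovery-and-watch part of this startup sequence is given below. It assumes the etcd v3 Go client (`github.com/coreos/etcd/clientv3`); the endpoint address, timeouts, and function names are illustrative only, while the `/task_queues` and `/trainer/` keys follow this document.

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/coreos/etcd/clientv3"
)

// recoverAndWatch reloads the persisted task queues from etcd and then
// watches the /trainer/ prefix so the master notices trainers coming
// online or going offline.
func recoverAndWatch() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Recover the task queues saved under /task_queues, if present.
	resp, err := cli.Get(context.Background(), "/task_queues")
	if err != nil {
		log.Fatal(err)
	}
	if len(resp.Kvs) > 0 {
		// The value is the gzip-compressed gob encoding of the queues;
		// see the persistence section of the master server design doc.
		log.Printf("recovered %d bytes of task queue state", len(resp.Kvs[0].Value))
	}

	// Watch the trainer prefix to keep track of live trainers.
	for wresp := range cli.Watch(context.Background(), "/trainer/", clientv3.WithPrefix()) {
		for _, ev := range wresp.Events {
			log.Printf("trainer event: %s %s", ev.Type, ev.Kv.Key)
		}
	}
}

func main() { recoverAndWatch() }
```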
### Trainer Process
...@@ -132,7 +132,7 @@ When the trainer is started by the Kubernetes, it executes the following steps a
1. Generates a unique ID, and sets key `/trainer/<unique ID>` with its contact address as value. The key will be deleted when the lease expires, so the master will be aware of the trainer being online and offline.
1. Waits for tasks from the master to start training.
If the trainer's etcd lease expires, it will try to set the key `/trainer/<unique ID>` again so that the master server can discover the trainer again.
When a trainer fails, Kubernetes would try to restart it. The recovered trainer would fetch tasks from the TODO queue and go on training. A minimal registration sketch follows.
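As a concrete illustration of this registration flow, here is a minimal sketch using the etcd v3 Go client (`github.com/coreos/etcd/clientv3`). The `/trainer/<unique ID>` key follows this document; the lease TTL, the way the unique ID is generated, and the example contact address are assumptions.

```go
package main

import (
	"context"
	"crypto/rand"
	"encoding/hex"
	"log"
	"time"

	"github.com/coreos/etcd/clientv3"
)

// registerTrainer announces the trainer under /trainer/<unique ID> with
// an etcd lease, so the key disappears automatically if the trainer
// stops renewing the lease.
func registerTrainer(cli *clientv3.Client, addr string) error {
	// Generate a unique trainer ID (random hex here; any unique scheme works).
	buf := make([]byte, 8)
	if _, err := rand.Read(buf); err != nil {
		return err
	}
	id := hex.EncodeToString(buf)

	// Grant a lease; the 10-second TTL is an assumption.
	lease, err := cli.Grant(context.Background(), 10)
	if err != nil {
		return err
	}

	// Set /trainer/<unique ID> to the trainer's contact address, attached to the lease.
	if _, err := cli.Put(context.Background(), "/trainer/"+id, addr, clientv3.WithLease(lease.ID)); err != nil {
		return err
	}

	// Keep the lease alive; if it ever expires, the trainer should call
	// this function again so the master can rediscover it.
	_, err = cli.KeepAlive(context.Background(), lease.ID)
	return err
}

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()
	if err := registerTrainer(cli, "10.0.0.3:7164"); err != nil { // example address
		log.Fatal(err)
	}
}
```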
......
...@@ -21,10 +21,10 @@
### File Preprocessing
Before a dataset can be trained on, its files need to be converted into the storage format used inside the PaddlePaddle cluster (RecordIO). We provide two ways to do the conversion:
1. The user converts the data locally and then uploads it.
1. The user uploads the raw data, and a conversion program is run on the cluster.
The generated file names will follow this format:
...@@ -92,11 +92,11 @@ random_images-00099-of-00099
#### Training
PaddlePaddle provides a dedicated [data reader creator](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/reader/README.md#python-data-reader-design-doc) that generates a data reader for the given RecordIO files. **The reader is used in exactly the same way locally and in the cloud**:
```python
# ...
reader = paddle.reader.creator.RecordIO("/home/random_images-*-of-*")
batch_reader = paddle.batch(reader, 128)  # batch the RecordIO reader created above
trainer.train(batch_reader, ...)
```
......
# Design Doc: Master Server
For an overview of the master server's role, please refer to the [distributed training design doc](./README.md). In this design doc we will discuss the master server in more detail. The master will be implemented in [Go](https://golang.org/).
## Dataset
<img src="src/dataset.png"/>
A dataset is a list of files in *RecordIO* format. A RecordIO file consists of chunks, and each chunk consists of some records.
## Task Queue
As mentioned in [distributed training design doc](./README.md), a *task* is a data shard that the master server assigns to the trainer process to train on. A task consists of one or multiple *blocks* from one or multiple files. The master server maintains *task queues* to track the training progress.
### Task Queue Creation
1. Each trainer will make an RPC call (using Go's [rpc](https://golang.org/pkg/net/rpc/) package) to the master server, telling it the RecordIO files representing the dataset specified by the user. Since every trainer will tell the master server the same dataset, only the first RPC call will be honored.
The RPC interface is:
```go
// ReportDataset tells the master server which RecordIO files make up the
// dataset; dummy is a placeholder reply required by the rpc package.
func (m *RPCServer) ReportDataset(Paths []string, dummy *int) error {
}
```
1. The master server will scan through each RecordIO file to generate the *block index* and learn how many blocks each file has. A block can be referenced by the file path and the index of the block within the file. The block index is an in-memory data structure that enables fast access to each block, and the index of a block within its file is an integer starting from 0, representing the n-th block within the file.
The definition of the block is:
```go
type Block struct {
Idx int // index of the block within the file
Path string
Index recordio.Index // block index
}
```
1. Blocks are grouped into tasks, and tasks are filled into the todo queue. The pending queue and the done queue are initialized empty. (A sketch of this grouping step follows the type definitions below.)
The definition of the task is:
```go
type Task struct {
Index int
Blocks []Block
}
```
The elements in the task queues are of type `TaskEntry`, containing a timeout counter (described in [task retry logic](#task-retry-logic)) and a task:
```go
type TaskEntry struct {
NumTimeout int
Task Task
}
```
The definition of task queues is:
```go
type TaskQueues struct {
Todo []TaskEntry
Pending map[int]TaskEntry // map from task index to task entry
Done []TaskEntry
}
```
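To make the relationship between blocks, tasks, and the todo queue concrete, here is a minimal sketch of the grouping step (item 3 above). It reuses the `Block`, `Task`, `TaskEntry`, and `TaskQueues` definitions given above; the `blocksPerTask` parameter and the helper name are assumptions rather than part of the design.

```go
// partitionIntoTasks groups the scanned blocks into tasks of roughly
// blocksPerTask blocks each and fills the todo queue. The pending and
// done queues start out empty.
func partitionIntoTasks(blocks []Block, blocksPerTask int) TaskQueues {
	q := TaskQueues{Pending: make(map[int]TaskEntry)}
	for i := 0; i < len(blocks); i += blocksPerTask {
		end := i + blocksPerTask
		if end > len(blocks) {
			end = len(blocks)
		}
		t := Task{Index: len(q.Todo), Blocks: blocks[i:end]}
		q.Todo = append(q.Todo, TaskEntry{Task: t})
	}
	return q
}
```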
### Task Queue Persistence
The task queues need to be persisted on [etcd](https://github.com/coreos/etcd) for fault recovery. Since the task queues only change when a task is completed or timed out, which is not very frequent, we can afford to synchronize with etcd every time the task queues change.
We will serialize the task queues data structure with [gob encoding](https://golang.org/pkg/encoding/gob/), compress with gzip, and save into etcd synchronously under key `/task_queues`.
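A minimal sketch of that serialization path is shown below, reusing the `TaskQueues` type from above; the helper names are illustrative. The resulting bytes are what would be written under the `/task_queues` key, ideally inside a transaction conditioned on the master lock as described later.

```go
import (
	"bytes"
	"compress/gzip"
	"encoding/gob"
)

// encodeTaskQueues gob-encodes the task queues and gzip-compresses the
// result, producing the bytes stored under the /task_queues key.
func encodeTaskQueues(q TaskQueues) ([]byte, error) {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	if err := gob.NewEncoder(zw).Encode(q); err != nil {
		return nil, err
	}
	if err := zw.Close(); err != nil { // flush the gzip stream
		return nil, err
	}
	return buf.Bytes(), nil
}

// decodeTaskQueues is the inverse, used when the master recovers its
// state from etcd after a restart.
func decodeTaskQueues(data []byte) (TaskQueues, error) {
	var q TaskQueues
	zr, err := gzip.NewReader(bytes.NewReader(data))
	if err != nil {
		return q, err
	}
	defer zr.Close()
	err = gob.NewDecoder(zr).Decode(&q)
	return q, err
}
```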
### Task Dispatch
The trainer will make an RPC call to the master to get a new task when:
- the trainer has just started, or
- the trainer finishes a task.
The RPC interface is:
```go
// GetTask marks the `finished` task as done (when it is not nil) and
// returns the next task to train on in `result`.
func (m *RPCServer) GetTask(finished *Task, result *Task) error {
}
```
Argument `finished` will be `nil` when the trainer is just started.
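For illustration, a rough trainer-side sketch using Go's standard `net/rpc` package is given below; the master address, the `trainLoop` helper, and the use of a plain TCP connection are assumptions, and `Task` refers to the type defined above (a shared package between master and trainer in real code). On the server side, the call is then handled as described next.

```go
import (
	"net/rpc"
)

// trainLoop sketches the trainer side: report the dataset once, then
// repeatedly finish a task and ask for the next one.
func trainLoop(masterAddr string, paths []string) error {
	client, err := rpc.Dial("tcp", masterAddr)
	if err != nil {
		return err
	}
	defer client.Close()

	// Report the dataset; only the first trainer's call is honored.
	var dummy int
	if err := client.Call("RPCServer.ReportDataset", paths, &dummy); err != nil {
		return err
	}

	var finished *Task // nil on the very first request, per the design above
	for {
		// Note: the default gob codec cannot encode a nil pointer directly,
		// so a real implementation would need a sentinel (e.g. a zero-value
		// Task) or a custom codec for the first call; this is left open here.
		var task Task
		if err := client.Call("RPCServer.GetTask", finished, &task); err != nil {
			return err
		}
		// ... train on every block in task ...
		finished = &task // report it as done on the next iteration
	}
}
```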
During the RPC call the master will do the following:
- Make a copy of the task queues, and update the copy reflecting the finished tasks and the new pending tasks.
- Synchronize the copy of task queues with etcd using a transaction conditioned on holding the master lock (a sketch of this update follows the list below).
- Replace the task queues with the copy and report to the trainer with the new tasks if succeeded, or discard the copy and report the error to the trainer if failed.
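The etcd update in the second bullet could look roughly like the sketch below, again assuming the etcd v3 Go client. How the master lock is represented (`masterLockKey` and `lockValue`) is an assumption; the `/task_queues` key and the compare-then-put transaction follow the description above.

```go
import (
	"context"
	"errors"

	"github.com/coreos/etcd/clientv3"
)

// saveQueuesIfMaster writes the encoded task queues under /task_queues
// only if this process still holds the master lock, using one etcd
// transaction so the check and the write are atomic.
func saveQueuesIfMaster(cli *clientv3.Client, masterLockKey, lockValue string, data []byte) error {
	resp, err := cli.Txn(context.Background()).
		If(clientv3.Compare(clientv3.Value(masterLockKey), "=", lockValue)).
		Then(clientv3.OpPut("/task_queues", string(data))).
		Commit()
	if err != nil {
		return err
	}
	if !resp.Succeeded {
		return errors.New("master lock lost, discarding task queue update")
	}
	return nil
}
```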
### Task Retry Logic
When a task is dispatched to the trainer, the master will schedule a function for execution after the timeout duration (based on the moving average of task completion time). If the task entry is still in the pending queue, its timeout counter will increase by one, and the task will be moved back to the todo queue. If the timeout counter is above the threshold, the master will log the error and discard the task.
Please note that a timed-out task could still be completed after it has been dispatched for retry, so it is possible for a task to be processed multiple times. We do not try to prevent this from happening, since it is fine to train on the same task multiple times due to the stochastic nature of the stochastic gradient descent algorithm.
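A compact sketch of that timer logic is given below, reusing the queue types defined earlier; the mutex, the `maxTimeouts` threshold, and the way the timeout duration is computed are assumptions made for the example.

```go
import (
	"log"
	"sync"
	"time"
)

const maxTimeouts = 3 // assumed threshold, not specified by the design

var mu sync.Mutex // guards the task queues

// scheduleTimeoutCheck runs after `timeout` (e.g. derived from the
// moving average of task completion time): if the task is still
// pending it is either moved back to the todo queue or, after too many
// timeouts, logged and discarded.
func scheduleTimeoutCheck(queues *TaskQueues, taskIndex int, timeout time.Duration) {
	time.AfterFunc(timeout, func() {
		mu.Lock()
		defer mu.Unlock()

		entry, ok := queues.Pending[taskIndex]
		if !ok {
			return // the task finished before the timeout
		}
		delete(queues.Pending, taskIndex)

		entry.NumTimeout++
		if entry.NumTimeout > maxTimeouts {
			log.Printf("task %d timed out %d times, discarding it", taskIndex, entry.NumTimeout)
			return
		}
		queues.Todo = append(queues.Todo, entry)
	})
}
```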
<div class="section" id="design-doc-master-server">
<span id="design-doc-master-server"></span><h1>Design Doc: Master Server<a class="headerlink" href="#design-doc-master-server" title="永久链接至标题"></a></h1>
<p>For an overview of master server&#8217;s role, please refer to <a class="reference internal" href="README.html"><span class="doc">distributed training design doc</span></a>. In this design doc we will discuss the master server in more details. The master will be implemented in <a class="reference external" href="https://golang.org/">Go</a>.</p>
<div class="section" id="dataset">
<span id="dataset"></span><h2>Dataset<a class="headerlink" href="#dataset" title="永久链接至标题"></a></h2>
<p><img src="src/dataset.png"/></p>
<p>A dataset is a list of files in <em>RecordIO</em> format. A RecordIO file consists of chunks, whereas each chunk consists some records.</p>
</div>
<div class="section" id="task-queue">
<span id="task-queue"></span><h2>Task Queue<a class="headerlink" href="#task-queue" title="永久链接至标题"></a></h2>
<p>As mentioned in <a class="reference internal" href="README.html"><span class="doc">distributed training design doc</span></a>, a <em>task</em> is a data shard that the master server assigns to the trainer process to train on. A task consists of one or multiple <em>blocks</em> from one or multiple files. The master server maintains <em>task queues</em> to track the training progress.</p>
<div class="section" id="task-queue-creation">
<span id="task-queue-creation"></span><h3>Task Queue Creation<a class="headerlink" href="#task-queue-creation" title="永久链接至标题"></a></h3>
<ol>
<li><p class="first">Each trainer will make an RPC call (using Go&#8217;s <a class="reference external" href="https://golang.org/pkg/net/rpc/">rpc</a> package) to the master server, telling it the RecordIO files representing the dataset specified by the user. Since every trainer will tell the master server the same dataset, only the first RPC call will be honored.</p>
<p>The RPC interface is:</p>
<div class="highlight-go"><div class="highlight"><pre><span></span><span class="kd">func</span> <span class="p">(</span><span class="nx">m</span> <span class="o">*</span><span class="nx">RPCServer</span><span class="p">)</span> <span class="nx">ReportDataset</span><span class="p">(</span><span class="nx">Paths</span> <span class="p">[]</span><span class="kt">string</span><span class="p">,</span> <span class="nx">dummy</span> <span class="o">*</span><span class="kt">int</span><span class="p">)</span> <span class="kt">error</span> <span class="p">{</span>
<span class="p">}</span>
</pre></div>
</div>
</li>
<li><p class="first">The master server will scan through each RecordIO file to generate the <em>block index</em> and know how many blocks does each file have. A block can be referenced by the file path and the index of the block within the file. The block index is in memory data structure that enables fast access to each block, and the index of the block with the file is an integer start from 0, representing the n-th block within the file.</p>
<p>The definition of the block is:</p>
<div class="highlight-go"><div class="highlight"><pre><span></span><span class="kd">type</span> <span class="nx">Block</span> <span class="kd">struct</span> <span class="p">{</span>
<span class="nx">Idx</span> <span class="kt">int</span> <span class="c1">// index of the block within the file</span>
<span class="nx">Path</span> <span class="kt">string</span>
<span class="nx">Index</span> <span class="nx">recordio</span><span class="p">.</span><span class="nx">Index</span> <span class="c1">// block index</span>
<span class="p">}</span>
</pre></div>
</div>
</li>
<li><p class="first">Blocks are grouped into tasks, and tasks are filled into the todo queue. The pending queue and the done queue are initialized with no element.</p>
<p>The definition of the task is:</p>
<div class="highlight-go"><div class="highlight"><pre><span></span><span class="kd">type</span> <span class="nx">Task</span> <span class="kd">struct</span> <span class="p">{</span>
<span class="nx">Index</span> <span class="kt">int</span>
<span class="nx">Blocks</span> <span class="p">[]</span><span class="nx">Block</span>
<span class="p">}</span>
</pre></div>
</div>
<p>The elements in the tasks queues is of type <code class="docutils literal"><span class="pre">TaskEntry</span></code>, containing a timeout counter (described in <a class="reference external" href="#task-retry-logic">task retry logic</a>), and a task:</p>
<div class="highlight-go"><div class="highlight"><pre><span></span><span class="kd">type</span> <span class="nx">TaskEntry</span> <span class="kd">struct</span> <span class="p">{</span>
<span class="nx">NumTimeout</span> <span class="kt">int</span>
<span class="nx">Task</span> <span class="nx">Task</span>
<span class="p">}</span>
</pre></div>
</div>
<p>The definition of task queues is:</p>
<div class="highlight-go"><div class="highlight"><pre><span></span><span class="kd">type</span> <span class="nx">TaskQueues</span> <span class="kd">struct</span> <span class="p">{</span>
<span class="nx">Todo</span> <span class="p">[]</span><span class="nx">TaskEntry</span>
<span class="nx">Pending</span> <span class="kd">map</span><span class="p">[</span><span class="kt">int</span><span class="p">]</span><span class="nx">TaskEntry</span> <span class="c1">// map from task index to task entry</span>
<span class="nx">Done</span> <span class="p">[]</span><span class="nx">TaskEntry</span>
<span class="p">}</span>
</pre></div>
</div>
</li>
</ol>
</div>
<div class="section" id="task-queue-persistence">
<span id="task-queue-persistence"></span><h3>Task Queue Persistence<a class="headerlink" href="#task-queue-persistence" title="永久链接至标题"></a></h3>
<p>The task queues need to be persisted on <a class="reference external" href="https://github.com/coreos/etcd">etcd</a> for fault recovery. Since the task queues only change once a task is completed or timed out, which is not very frequent, we can afford to synchronize with etcd every time the task queues change.</p>
<p>We will serialize the task queues data structure with <a class="reference external" href="https://golang.org/pkg/encoding/gob/">gob encoding</a>, compress with gzip, and save into etcd synchronously under key <code class="docutils literal"><span class="pre">/task_queues</span></code>.</p>
</div>
<div class="section" id="task-dispatch">
<span id="task-dispatch"></span><h3>Task Dispatch<a class="headerlink" href="#task-dispatch" title="永久链接至标题"></a></h3>
<p>The trainer will make an RPC call to master to get a new task when:</p>
<ul class="simple">
<li>the trainer first started, or</li>
<li>the trainer finishes a task.</li>
</ul>
<p>The RPC interface is:</p>
<div class="highlight-go"><div class="highlight"><pre><span></span><span class="kd">func</span> <span class="p">(</span><span class="nx">m</span> <span class="o">*</span><span class="nx">RPCServer</span><span class="p">)</span> <span class="nx">GetTask</span><span class="p">(</span><span class="nx">finished</span> <span class="o">*</span><span class="nx">Task</span><span class="p">,</span> <span class="nx">result</span> <span class="o">*</span><span class="nx">Task</span><span class="p">)</span> <span class="kt">error</span> <span class="p">{</span>
<span class="p">}</span>
</pre></div>
</div>
<p>Argument <code class="docutils literal"><span class="pre">finished</span></code> will be <code class="docutils literal"><span class="pre">nil</span></code> when the trainer is just started.</p>
<p>During the RPC call the master will do the following:</p>
<ul class="simple">
<li>Make a copy of the task queues, and update the copy reflecting the finished tasks and the new pending tasks.</li>
<li>Synchronize the copy of task queues with etcd using a transaction conditioned on holding the master lock.</li>
<li>Replace the task queues with the copy and report to the trainer with the new tasks if succeeded, or discard the copy and report the error to the trainer if failed.</li>
</ul>
</div>
<div class="section" id="task-retry-logic">
<span id="task-retry-logic"></span><h3>Task Retry Logic<a class="headerlink" href="#task-retry-logic" title="永久链接至标题"></a></h3>
<p>When a task is dispatched to the trainer, the master will schedule a function for execution after the timeout duration (based on the moving average of task completion time). If the task entry in still in the pending queue, its timeout counter will increase by one, and the task will be moved to todo queue. If the timeout counter is above the threshold, the master will log the error and discard the task.</p>
<p>Please note that since a timed out task could be completed after it has been dispatched for retry, so it is possible for a task to be processed multiple times. We do not try to prevent it from happening since it&#8217;s fine to train on the same task multiple times due to the stochastic nature of the stochastic gradient decent algorithm.</p>
</div>
</div>
</div>
</div>
</div>