Commit 9572e11d authored by Helin Wang

Design doc: master process

Parent 896b9c55
@@ -21,7 +21,7 @@
### File Preprocessing
-Before a dataset can be used for training, its files need to be converted in advance into PaddlePaddle's internal cluster storage format (SSTable). We provide two conversion methods:
+Before a dataset can be used for training, its files need to be converted in advance into PaddlePaddle's internal cluster storage format (RecordIO). We provide two conversion methods:
- A library for local conversion, so users can write a program to perform the conversion themselves.
- Users can upload their own dataset and run a MapReduce job on the cluster to perform the conversion.
@@ -92,11 +92,11 @@ random_images-00099-of-00099
#### Training
-PaddlePaddle provides a dedicated [data reader creator](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/reader/README.md#python-data-reader-design-doc) that generates a data reader for the given SSTable files. **The reader is used in the same way whether locally or in the cloud.**
+PaddlePaddle provides a dedicated [data reader creator](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/reader/README.md#python-data-reader-design-doc) that generates a data reader for the given RecordIO files. **The reader is used in the same way whether locally or in the cloud.**
```python
# ...
-reader = paddle.reader.creator.SSTable("/home/random_images-*-of-*")
+reader = paddle.reader.creator.RecordIO("/home/random_images-*-of-*")
batch_reader = paddle.batch(paddle.dataset.mnist.train(), 128)
trainer.train(batch_reader, ...)
```
......
# Design Doc: Master Process
For an overview of the master process's role, please refer to the [distributed training design doc](./README.md). In this design doc we will discuss the master process in more detail. The master will be implemented in [golang](https://golang.org/).
## Dataset
<img src="src/dataset.png"/>
A dataset is represented by a list of files in *RecordIO* format on the distributed filesystem. Each RecordIO file consists of multiple *blocks*, and each block contains multiple data instances.
## Task Queue
As mentioned in [distributed training design doc](./README.md), a *task* is a data shard that the master process assigns to the trainer process to train on. A task consists of one or multiple *blocks* from one or multiple files. The master process maintains *task queues* to track the training progress.
### Task Queue Creation
1. Each trainer will make an RPC call (using [golang rpc](https://golang.org/pkg/net/rpc/)) to the master process, telling it the RecordIO files representing the dataset specified by the user. Since every trainer will tell the master process the same dataset, only the first RPC call will be honored.
The RPC interface is:
```go
func (m *RPCServer) ReportDataset(Paths []string, dummy *int) error {
}
```
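For illustration, below is a minimal sketch of how a trainer could make this call with Go's `net/rpc` package. The master address and the file paths are placeholders, not part of this design.
```go
package main

import (
	"log"
	"net/rpc"
)

func main() {
	// Placeholder master address; how trainers discover the master is out of
	// scope for this sketch.
	client, err := rpc.DialHTTP("tcp", "master-addr:8080")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Placeholder RecordIO paths representing the user-specified dataset.
	paths := []string{"/data/random_images-00000-of-00099"}
	var dummy int
	// The service/method name follows the RPCServer receiver shown above.
	if err := client.Call("RPCServer.ReportDataset", paths, &dummy); err != nil {
		log.Fatal(err)
	}
}
```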
1. The master process will scan through each RecordIO file to generate the *block index* and learn how many blocks each file has. A block can be referenced by the file path and the index of the block within the file. The block index is an in-memory data structure that enables fast access to each block, and the index of a block within its file is an integer starting from 0, representing the n-th block within the file.
The definition of the block is:
```go
type Block struct {
Idx int // index of the block within the file
Path string
Index recordio.Index // block index
}
```
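As a rough sketch of this scan, the snippet below builds the block list for a single file. The `recordio` package's `LoadIndex`, `NumChunks`, and `ChunkIndex` names, as well as the import path, are assumptions made for illustration, not a fixed API.
```go
package master

import (
	"os"

	"github.com/PaddlePaddle/recordio" // import path is an assumption
)

// scanFile builds the list of blocks for one RecordIO file.
// LoadIndex, NumChunks and ChunkIndex are assumed API names.
func scanFile(path string) ([]Block, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	idx, err := recordio.LoadIndex(f) // assumed: parses the file's index
	if err != nil {
		return nil, err
	}

	blocks := make([]Block, 0, idx.NumChunks())
	for i := 0; i < idx.NumChunks(); i++ {
		blocks = append(blocks, Block{
			Idx:   i, // the i-th block within the file
			Path:  path,
			Index: *idx.ChunkIndex(i), // assumed: index covering only block i
		})
	}
	return blocks, nil
}
```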
1. Blocks are grouped into tasks, and the tasks are filled into the todo queue. The pending queue and the done queue start out empty. A sketch of this initialization step follows the type definitions below.
The definition of the task is:
```go
type Task struct {
Index int
Blocks []Block
}
```
The elements in the task queues are of type `TaskEntry`, containing a timeout counter (described in [task retry logic](#task-retry-logic)) and a task:
```go
type TaskEntry struct {
NumTimeout int
Task Task
}
```
The definition of task queues is:
```go
type TaskQueues struct {
Todo []TaskEntry
Pending map[int]TaskEntry // map from task index to task entry
Done []TaskEntry
}
```
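To make the initialization step above concrete, here is a minimal sketch that groups a flat list of blocks into fixed-size tasks and fills the todo queue. The `blocksPerTask` parameter is an illustrative choice, not something specified by this design.
```go
// newTaskQueues groups blocks into tasks of blocksPerTask blocks each and
// fills the todo queue; the pending and done queues start out empty.
func newTaskQueues(blocks []Block, blocksPerTask int) TaskQueues {
	queues := TaskQueues{Pending: make(map[int]TaskEntry)}

	var cur Task
	for _, b := range blocks {
		cur.Blocks = append(cur.Blocks, b)
		if len(cur.Blocks) == blocksPerTask {
			cur.Index = len(queues.Todo)
			queues.Todo = append(queues.Todo, TaskEntry{Task: cur})
			cur = Task{}
		}
	}
	// The trailing task may hold fewer than blocksPerTask blocks.
	if len(cur.Blocks) > 0 {
		cur.Index = len(queues.Todo)
		queues.Todo = append(queues.Todo, TaskEntry{Task: cur})
	}
	return queues
}
```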
### Task Queue Persistence
The task queues need to be persisted on [etcd](https://github.com/coreos/etcd) for fault recovery. Since the task queues only change once a task is completed or timed out, which is not very frequent, we can afford to synchronize with etcd every time the task queues change.
We will serialize the task queues data structure with [gob encoding](https://golang.org/pkg/encoding/gob/), compress with gzip, and save into etcd synchronously under key `/task_queues`.
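A minimal sketch of this persistence step is shown below, assuming the etcd v3 Go client; the exact import path depends on the etcd version, and the master-lock transaction discussed in the next section is omitted here.
```go
package master

import (
	"bytes"
	"compress/gzip"
	"context"
	"encoding/gob"

	"github.com/coreos/etcd/clientv3" // import path depends on the etcd version
)

// saveTaskQueues serializes the task queues with gob, compresses the result
// with gzip, and stores it synchronously under /task_queues in etcd.
func saveTaskQueues(ctx context.Context, cli *clientv3.Client, q TaskQueues) error {
	var buf bytes.Buffer
	gz := gzip.NewWriter(&buf)
	if err := gob.NewEncoder(gz).Encode(q); err != nil {
		return err
	}
	if err := gz.Close(); err != nil { // flush the gzip stream
		return err
	}
	_, err := cli.Put(ctx, "/task_queues", buf.String())
	return err
}
```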
### Task Dispatch
The trainer will make an RPC call to the master to get a new task when:
- the trainer has just started, or
- the trainer finishes a task.
The RPC interface is:
```go
func (m *RPCServer) GetTask(finished *Task, result *Task) error {
}
```
Argument `finished` will be `nil` when the trainer has just started.
During the RPC call the master will do the following:
- Make a copy of the task queues, and update the copy reflecting the finished tasks and the new pending tasks.
- Synchronize the copy of task queues with etcd using a transaction conditioned on holding the master lock.
- Replace the task queues with the copy and report the new task to the trainer if the transaction succeeded, or discard the copy and report the error to the trainer if it failed.
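The sketch below outlines this flow on the master side. The `taskQueues` field, the `copy` helper, and the `persist` method (which would wrap the etcd transaction described above) are hypothetical, and locking around the shared state is omitted for brevity.
```go
// GetTask sketch: work on a copy of the task queues, persist it, and only
// then install it as the new state. taskQueues, copy and persist are
// hypothetical helpers; mutual exclusion is omitted for brevity.
func (m *RPCServer) GetTask(finished *Task, result *Task) error {
	q := m.taskQueues.copy() // hypothetical deep-copy helper

	// Move the finished task, if any, from the pending queue to the done queue.
	if finished != nil {
		if entry, ok := q.Pending[finished.Index]; ok {
			delete(q.Pending, finished.Index)
			q.Done = append(q.Done, entry)
		}
	}

	if len(q.Todo) == 0 {
		return errors.New("no task in the todo queue") // placeholder error
	}

	// Dispatch the next todo task and mark it pending.
	next := q.Todo[0]
	q.Todo = q.Todo[1:]
	q.Pending[next.Task.Index] = next

	// Only commit the new state if it was successfully persisted to etcd.
	if err := m.persist(q); err != nil {
		return err
	}
	m.taskQueues = q
	*result = next.Task
	return nil
}
```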
### Task Retry Logic
When a task is dispatched to the trainer, the master will schedule a function for execution after the timeout duration (based on the moving average of task completion time). If at that point the task entry is still in the pending queue, its timeout counter will be incremented by one and the task will be moved back to the todo queue. If the timeout counter is above the threshold, the master will log the error and discard the task.
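A minimal sketch of this timeout handling using `time.AfterFunc`; the `maxNumTimeout` threshold and the `taskQueues` field are hypothetical, as in the previous sketch, and locking is again omitted.
```go
// scheduleTimeout sketch: after the timeout duration, check whether the task
// is still pending; if so, either requeue it or discard it.
// maxNumTimeout and m.taskQueues are hypothetical, and locking is omitted.
func (m *RPCServer) scheduleTimeout(taskIdx int, timeout time.Duration) {
	time.AfterFunc(timeout, func() {
		entry, ok := m.taskQueues.Pending[taskIdx]
		if !ok {
			return // the task finished before the timeout fired
		}
		delete(m.taskQueues.Pending, taskIdx)

		entry.NumTimeout++
		if entry.NumTimeout > maxNumTimeout {
			log.Printf("task %d timed out %d times, discarding it", taskIdx, entry.NumTimeout)
			return
		}
		// Move the task back to the todo queue so it can be dispatched again.
		m.taskQueues.Todo = append(m.taskQueues.Todo, entry)
	})
}
```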