Commit 33709441 authored by Travis CI

Deploy to GitHub Pages: f22302ad

Parent 4acfa557
......@@ -17,12 +17,16 @@ A training job will be created once user asks Paddle cloud to train a model. The
1. the *master process*, which dispatches tasks to
1. one or more *trainer processes*, which run distributed training and synchronize gradients/models via
1. one or more *parameter server processes*, where each holds a shard of the global model.
1. one or more *parameter server processes*, where each holds a shard of the global model and receives gradients uploaded from every *trainer process*, so it can run the optimization functions to update its parameters.
Their relationship is illustrated in the following figure:
<img src="src/paddle-model-sharding.png"/>
By coordinating these processes, PaddlePaddle supports both Synchronous Stochastic Gradient Descent (sync SGD) and Asynchronous Stochastic Gradient Descent (async SGD) for training user-defined neural network topologies.

When training with sync SGD, the parameter servers wait for all trainers to finish their gradient updates, then send the updated parameters back; training cannot proceed until every trainer has received the updated parameters. This creates a synchronization point between trainers. When training with async SGD, each trainer uploads its gradients and downloads new parameters individually, without synchronizing with the other trainers. Async SGD is faster in terms of time per pass, but the gradients are noisier, since trainers are likely to be working from a stale model.
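The contrast can be made concrete with a small schematic sketch. This is not PaddlePaddle's implementation; `recv_gradient`, `apply_gradient`, `broadcast_params`, and `send_params` are hypothetical hooks standing in for the real communication layer:

```python
def sync_sgd_step(num_trainers, recv_gradient, apply_gradient, broadcast_params):
    # Barrier: block until every trainer has sent its gradient for this batch.
    grads = [recv_gradient() for _ in range(num_trainers)]
    for g in grads:
        apply_gradient(g)
    broadcast_params()  # trainers block on this before their next mini-batch

def async_sgd_loop(recv_gradient, apply_gradient, send_params):
    while True:
        grad, trainer_id = recv_gradient()  # from any trainer, at any time
        apply_gradient(grad)                # update immediately, no barrier
        send_params(trainer_id)             # only the sender gets fresh parameters
```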
### Master Process
The master process will:
......@@ -31,7 +35,7 @@ The master process will:
- Keep track of training progress on the dataset with the [task queue](#task-queue). A training job iterates over the dataset one full pass at a time before moving on to the next pass.
#### Task
A task is a data shard to be trained. The total number of tasks will be much bigger than the total number of trainers. The number of data instances inside a task will be much bigger than the mini-batch size.
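As a rough sketch of the bookkeeping this implies (the todo/pending/done split is an assumption for illustration; only a TODO queue is named explicitly in this document):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Task:
    task_id: int
    chunk_paths: List[str] = field(default_factory=list)  # the data shard

todo: List[Task] = []         # tasks not yet handed to any trainer
pending: Dict[int, str] = {}  # task_id -> trainer currently working on it
done: List[Task] = []         # a pass ends when todo and pending are both empty
```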
......@@ -78,7 +82,7 @@ The communication pattern between the trainers and the parameter servers depends
- Synchronous Stochastic Gradient Descent (sync-SGD)
The parameter server will wait for all trainers to finish the n-th mini-batch calculation and send their gradients before broadcasting the new parameters to every trainer. Every trainer will wait for the new parameters before starting the (n+1)-th mini-batch.
- Asynchronous Stochastic Gradient Descent (async-SGD)
There will be no synchronization between different trainers; the parameter server updates its parameters as soon as it receives a new gradient:
......@@ -118,8 +122,6 @@ When the master is started by the Kubernetes, it executes the following steps at
1. Watches the trainer prefix keys `/trainer/` on etcd to find the live trainers.
1. Starts dispatching the tasks to the trainers, and updates task queue using an etcd transaction to ensure lock is held during the update.
The master process will kill itself if its etcd lease expires.
When the master process is dead for any reason, Kubernetes will restart it. It will be online again with all states recovered from etcd in a few minutes.
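A minimal sketch of this lease behavior, assuming the python-etcd3 client and a hypothetical `/master` key (the design above only specifies etcd):

```python
import sys
import time

import etcd3  # assumed client library

def keep_master_lease(ttl_seconds=10):
    client = etcd3.client()
    lease = client.lease(ttl_seconds)
    client.put("/master", "master-address", lease=lease)  # hypothetical key/value
    while True:
        try:
            lease.refresh()            # heartbeat that keeps the lease alive
        except Exception:
            sys.exit(1)                # lease lost: the master kills itself
        time.sleep(ttl_seconds / 3.0)  # refresh well within the TTL
```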
### Trainer Process
......@@ -132,6 +134,8 @@ When the trainer is started by the Kubernetes, it executes the following steps a
If the trainer's etcd lease expires, it will try to set the key `/trainer/<unique ID>` again so that the master process can discover the trainer again.
When a trainer fails, Kubernetes will try to restart it. The recovered trainer will fetch tasks from the TODO queue and continue training.
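Unlike the master, a trainer that loses its lease re-registers rather than exiting. A hedged sketch under the same python-etcd3 assumption:

```python
import time
import uuid

import etcd3  # assumed client library

def register_trainer(ttl_seconds=10):
    client = etcd3.client()
    key = "/trainer/" + uuid.uuid4().hex  # /trainer/<unique ID>
    lease = client.lease(ttl_seconds)
    client.put(key, "trainer-address", lease=lease)
    while True:
        try:
            lease.refresh()
        except Exception:
            # Lease expired: set the key again under a fresh lease so the
            # master process can discover this trainer again.
            lease = client.lease(ttl_seconds)
            client.put(key, "trainer-address", lease=lease)
        time.sleep(ttl_seconds / 3.0)
```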
### Parameter Server Process
When the parameter server is started by Kubernetes, it executes the following steps at startup:
......@@ -140,11 +144,11 @@ When the parameter server is started by Kubernetes, it executes the following st
1. Search through the etcd keys `/ps/<index>` (`/ps/0`, `/ps/1`, ...) to find the first non-existent key whose index is smaller than the total number of parameter servers. Set the key using a transaction to avoid concurrent writes (see the sketch at the end of this section). The parameter server's index is inferred from the key name.
The desired number of parameter servers is 3:
<img src="src/paddle-ps-0.png"/>
The third parameter server joined:
<img src="src/paddle-ps-1.png"/>
1. The parameter server can load parameters if there are already saved parameters in the save path (inferred from its index).
......@@ -153,6 +157,13 @@ When the parameter server is started by Kubernetes, it executes the following st
If the parameter server's etcd lease expires, the parameter server will kill itself.
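The index-claiming transaction from the startup steps above can be sketched as follows. python-etcd3 is an assumption; the design only requires an atomic compare-and-set:

```python
import etcd3  # assumed client library

def claim_pserver_index(client, num_pservers, address):
    """Claim the first free /ps/<index> key atomically."""
    for index in range(num_pservers):
        key = "/ps/%d" % index
        # The put succeeds only if the key does not exist yet (version == 0),
        # so two parameter servers starting at once cannot claim the same slot.
        succeeded, _ = client.transaction(
            compare=[client.transactions.version(key) == 0],
            success=[client.transactions.put(key, address)],
            failure=[],
        )
        if succeeded:
            return index
    raise RuntimeError("all parameter server indices are taken")
```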
## Parameter Server Checkpointing
See [here](./checkpointing.md)
## Storing and dispatching training data
See [here](./data_dispatch.md)
## Dynamic Scaling
### Trainer Scaling
......
## Model Parameter Checkpointing
Checkpointing model parameters effectively guards against single-node failures of a parameter server, and even simultaneous multi-node failures. A checkpoint periodically saves to disk a complete image of the model data held in each parameter server's memory, so that training can restart from an intermediate state. For an uninterruptible training job without backups, fault tolerance can be achieved by periodically saving a snapshot of each parameter server's data to a ***distributed storage service***, for example keeping the latest 10-minute snapshot and deleting older ones. On a single-node failure, the training job recovers simply by restoring that node, or by migrating it to another node and starting it there.
<img src="src/checkpointing.png" width="500"/>
### Snapshot-saving design

Notes:

* After a parameter server starts in the cluster, it automatically mounts the distributed storage directory and saves its snapshots under that directory.
* ***Note: each parameter server saves its checkpoint independently. Having multiple parameter servers synchronize to save a global checkpoint at one specific point in time is not considered for now, since even that could not guarantee eliminating randomness.***

Checkpoint-saving procedure (a minimal code sketch follows the caveat below):

1. When the "every 10 minutes" condition is met, the parameter server acquires the `read_lock` on the parameters memory and starts a new thread to save a checkpoint. If a checkpoint-saving thread is already running, the trigger is ignored. Because parameter updates require the `write_lock` on the parameters memory, the parameter server pauses parameter updates while the snapshot is being written.
2. The parameter server generates a UUID and writes the snapshot data to a new file, named with this UUID, in the designated directory. Once the snapshot is fully written, it computes the file's MD5 sum, then writes a JSON document to `/checkpoints/[pserver_id]` in etcd: `{"uuid": [UUID], "md5": "MD5 sum", "timestamp": xxxx}`.
3. Delete any snapshot files in the on-disk directory whose names are not the current UUID.
4. Release the lock on the parameters memory and stop the checkpoint-saving thread.

An extra caveat for users: in a real environment, a training job may saturate the network bandwidth between the trainers and the parameter servers. If a parameter server must also reach distributed storage over the network to save snapshots, this can cause congestion and intermittent stalls in training.
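A minimal sketch of steps 1-4, under loudly stated assumptions: `params_rwlock` is a hypothetical reader-writer lock over the parameters memory, `serialize_params` a hypothetical serializer, the storage path is invented, and python-etcd3 stands in for the etcd client:

```python
import hashlib
import json
import os
import threading
import time
import uuid

SNAPSHOT_DIR = "/mnt/dist-storage/pserver-0"  # mounted distributed storage (assumed path)
_saving = threading.Lock()  # guards against a second concurrent save thread

def save_checkpoint(params_rwlock, serialize_params, etcd, pserver_id):
    if not _saving.acquire(blocking=False):       # already saving: ignore the trigger
        return
    try:
        with params_rwlock.read_lock():           # step 1: updates (write_lock) wait
            snap_uuid = uuid.uuid4().hex          # step 2: UUID-named snapshot file
            path = os.path.join(SNAPSHOT_DIR, snap_uuid)
            with open(path, "wb") as f:
                f.write(serialize_params())
        with open(path, "rb") as f:               # MD5 computed after the write
            md5 = hashlib.md5(f.read()).hexdigest()
        meta = {"uuid": snap_uuid, "md5": md5, "timestamp": int(time.time())}
        etcd.put("/checkpoints/%d" % pserver_id, json.dumps(meta))
        for name in os.listdir(SNAPSHOT_DIR):     # step 3: prune older snapshots
            if name != snap_uuid:
                os.remove(os.path.join(SNAPSHOT_DIR, name))
    finally:
        _saving.release()                         # step 4: lock released, thread done
```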
### Recovering from a snapshot

When a parameter server starts for the first time, or is restarted by Kubernetes after a failure at any point, it needs to roll back to the last checkpoint (a matching sketch follows the list):

1. Read the etcd node `/checkpoints/[pserver_id]` to obtain the UUID of the latest checkpoint file.
1. Load the checkpoint snapshot file named with that UUID from disk, and load the parameters it contains.
1. If either of the two steps above fails, initialize the parameters using the initialization method defined by the startup arguments.
1. Start serving.
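A matching sketch of the recovery path, with the same hedges as the save sketch (`load_params` and `init_params` are hypothetical hooks):

```python
import json
import os

def restore_from_checkpoint(etcd, pserver_id, snapshot_dir, load_params, init_params):
    try:
        value, _ = etcd.get("/checkpoints/%d" % pserver_id)  # step 1: latest UUID
        meta = json.loads(value)
        with open(os.path.join(snapshot_dir, meta["uuid"]), "rb") as f:
            load_params(f.read())                            # step 2: load parameters
    except Exception:
        init_params()   # step 3: fall back to the configured initialization method
    # step 4: start serving
```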
## TODO List
### Speculative execution / accelerated execution (TODO)
In a heterogeneous cluster, some trainers may run so slowly that they drag down the whole cluster (Trainer 1 in the figure). In that case the master is responsible for starting a new trainer (Accelerate Trainer 2) on the same block of training data. Whichever trainer finishes the block first wins, and the slower one is killed.
### Dynamic scale-out / scale-in
For now only dynamically scaling out the number of trainers is considered, which reduces system complexity.
## Terminology
* model: all the parameters obtained after deep learning training; with this neural network, predictions can be made on new data
* parameters: the parameters of a neural network, including the weights w and the biases b. A neural network model consists of a large number of parameters
* shard: one of the pieces produced by splitting a whole into several parts.
* model shard: the parameters of a neural network split into several pieces, each shard stored on one of the parameter servers
* parameter block: multiple parameter blocks make up one model shard
* single point of failure: at any moment at most one server can be down. Since the probability of two machines in a cluster failing at the same time is extremely low ((mean failure rate × mean time to repair)^2), tolerating two or more simultaneous failures is considered only for special online systems.
## Storing and dispatching training data
### Workflow overview
Training datasets in production are usually large and stored on distributed storage such as Hadoop HDFS, Ceph, or AWS S3. These distributed storage services typically split the data into shards spread across many nodes. This makes it possible to run many kinds of data-processing jobs in the cloud, including:
* data preprocessing jobs
* Paddle training jobs
* online model-serving (prediction) services
<img src="src/paddle-cloud-in-data-center.png" width="500"/>
The figure above shows the data flow of an application (face recognition) in a real production environment. Production log data is stored both as real-time streams (Kafka) and as offline data (HDFS), and multiple distributed data-processing jobs run in the cluster, such as streaming processing (online data process) and offline batch processing (offline data process), to preprocess the data and provide it to Paddle as training data. Users can also upload labeled data to distributed storage to supplement the training data. The models produced by deep learning training on Paddle are then served to the online face-recognition application.
### Storage of training data
GlusterFS is chosen as the storage service for training data (a later implementation will consider HDFS).
Different computing frameworks running on Kubernetes can mount storage into each container through a Volume or PersistentVolume.
A public directory in the GlusterFS storage system holds preloaded public datasets (such as MNIST, BOW, and ImageNet) that submitted jobs can use directly.
### Uploading training files
The following command uploads local training data to the storage cluster, tagging the upload with a `dataset-name`:
```
paddle upload train_data.list "dataset-name"
```
The `.list` file describes the training data files and their corresponding labels. For image data, a `.list` file looks like the sample below, with each line containing the path of an image file and its label (separated by a tab):
```
./data/image1.jpg 1
./data/image2.jpg 5
./data/image3.jpg 2
./data/image4.jpg 5
./data/image5.jpg 1
./data/image6.jpg 8
...
```
A sample of text training data (machine translation here) looks like the following, with each line containing the source-language text and the target-language text (the label):
```
L' inflation , en Europe , a dérapé sur l' alimentation Food : Where European inflation slipped up
L' inflation accélérée , mesurée dans la zone euro , est due principalement à l' augmentation rapide des prix de l' alimentation . The skyward zoom in food prices is the dominant force behind the speed up in eurozone inflation .
...
```
### Using a reader
When writing a training job with the v2 API, users can use Paddle's built-in reader to read training data from GlusterFS storage. The reader returns the columns of each file; it is then passed into the call to `trainer.train()` to complete the reading of training data:
```python
# read the named dataset from cluster storage
reader = paddle.dist.reader("dataset-name")
trainer.train(reader, ...)

# or wrap a local reader into mini-batches of 128 first
batch_reader = paddle.batch(paddle.dataset.mnist.train(), 128)
trainer.train(batch_reader, ...)
```
Internally, `trainer.train` consumes the reader's content (shown schematically as a plain function):
```python
def train(batch_reader):
    r = batch_reader()  # create an iterator over one pass of the data
    for batch in r:
        ...  # train on this mini-batch
```
Here `batch` is a mini-batch of 128 data instances. Each data instance is a tuple whose element order matches the column order of the `.list` file; for the image example, each data instance is (raw_image_file_binary_data, label). `raw_image_file_binary_data` is the undecoded raw binary content of the corresponding image file, which the user must decode. `label` arrives as text (e.g. "1", "2"); what the user usually needs is an integer, so the user must convert it (see the sketch below).
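For example, a user-side decoding step might look like this (PIL is an assumption, not prescribed by this design; any image decoder works):

```python
import io

from PIL import Image  # assumed decoder

def decode_instance(instance):
    raw_image_bytes, label_text = instance           # column order of the .list file
    image = Image.open(io.BytesIO(raw_image_bytes))  # decode the raw file bytes
    label = int(label_text)                          # "1" -> 1
    return image, label
```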
### Implementing the reader
The reader implementation must let a training program that was developed locally be submitted to the cluster for distributed training without code changes. To achieve this, the following is needed:
Paddle wraps a reader for cluster use: `paddle.dist.reader()`. Cluster training uses this reader to specify which dataset to train on. The user's training program initializes the reader as follows:
```python
import os

if os.getenv("PADDLE_TRAIN_LOCAL"):
    # running locally: use the user's own reader
    reader = my_local_reader("dataset-name")
else:
    # running in the cluster: use the distributed reader
    reader = paddle.dist.reader("dataset-name")
```
After the user's training program is submitted to the cluster, the cluster automatically configures the `PADDLE_TRAIN_LOCAL` environment variable so that the reader resolves to the cluster-training version. `paddle.dist.reader()` fetches the next training task to execute from the master's queue, locates the corresponding training data files, and starts the training task. If the training data comes from another service, such as a Kafka or ZeroMQ queue in the cluster, a custom reader that runs in the cluster can be implemented as appropriate; a sketch follows.
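A hedged sketch of such a custom cluster reader; `fetch_task` and `read_records` are hypothetical hooks standing in for the master RPC and the actual data source (files, Kafka, ZeroMQ, ...):

```python
def make_cluster_reader(fetch_task, read_records):
    # fetch_task() -> next task from the master's queue, or None when the pass ends
    # read_records(task) -> iterable of data instances for that task
    def reader():
        while True:
            task = fetch_task()
            if task is None:
                break
            for record in read_records(task):
                yield record
    return reader
```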
## TODO
### Support merging data into an internal (key-value) file format for convenient sharding and sequential reads
### Support user-defined data preprocessing jobs