diff --git a/doc/design/cluster_train/src/trainer.graffle b/doc/design/cluster_train/src/trainer.graffle
index 42384a3f059966e22e22f5fa4295cc9ead5cef83..43415ed8cf61a5acfa34f8e56b9577f338dbf254 100644
Binary files a/doc/design/cluster_train/src/trainer.graffle and b/doc/design/cluster_train/src/trainer.graffle differ
diff --git a/doc/howto/usage/cluster/cluster_train_cn.md b/doc/howto/usage/cluster/cluster_train_cn.md
index 274452fbf0c595ad7b4dbeffe85ad9038f12b458..93c5544bcfa911f8bdcdaea39a75b3ab7ef218f8 100644
--- a/doc/howto/usage/cluster/cluster_train_cn.md
+++ b/doc/howto/usage/cluster/cluster_train_cn.md
@@ -1,135 +1,215 @@
-```eval_rst
-.. _cluster_train:
+# PaddlePaddle分布式训练
+
+* [概述](#概述)
+* [环境准备](#环境准备)
+* [启动参数说明](#启动参数说明)
+ * [启动参数服务器](#启动参数服务器)
+ * [启动计算节点](#启动计算节点)
+ * [准备数据集](#准备数据集)
+ * [准备训练程序](#准备训练程序)
+* [使用分布式计算平台或工具](#使用分布式计算平台或工具)
+ * [使用Fabric启动集群作业](#使用fabric启动集群作业)
+ * [准备一个Linux集群](#准备一个linux集群)
+ * [启动集群作业](#启动集群作业)
+ * [终止集群作业](#终止集群作业)
+ * [检查集群训练结果](#检查集群训练结果)
+ * [检查模型输出](#检查模型输出)
+ * [在OpenMPI集群中提交训练作业](#在openmpi集群中提交训练作业)
+ * [准备OpenMPI集群](#准备openmpi集群)
+ * [启动集群作业](#启动集群作业-1)
+ * [在Kubernetes集群中提交训练作业](#在kubernetes集群中提交训练作业)
+
+# 概述
+本文将介绍如何使用PaddlePaddle在不同的集群框架下完成分布式训练。分布式训练架构如下图所示:
+
+
+
+- 数据分片(Data shard): 用于训练神经网络的数据,被切分成多个部分,每个部分分别给每个trainer使用。
+- 计算节点(Trainer): 每个trainer启动后读取切分好的一部分数据,开始神经网络的“前馈”和“后馈”计算,并和参数服务器通信。在完成一定量数据的训练后,上传计算得出的梯度(gradients),然后下载优化更新后的神经网络参数(parameters)。
+- 参数服务器(Parameter server):每个参数服务器只保存整个神经网络所有参数的一部分。参数服务器接收从计算节点上传的梯度,并完成参数优化更新,再将更新后的参数下发到每个计算节点。
+
+这样,通过计算节点和参数服务器的分布式协作,可以完成神经网络的SGD方法的训练。PaddlePaddle可以同时支持同步随机梯度下降(SGD)和异步随机梯度下降。
+
+在使用同步SGD训练神经网络时,PaddlePaddle使用同步屏障(barrier),使梯度的提交和参数的更新按照顺序方式执行。在异步SGD中,则并不会等待所有trainer提交梯度才更新参数,这样极大地提高了计算的并行性:参数服务器之间不相互依赖,并行地接收梯度和更新参数,参数服务器也不会等待计算节点全部都提交梯度之后才开始下一步,计算节点之间也不会相互依赖,并行地执行模型的训练。可以看出,虽然异步SGD方式会提高参数更新并行度, 但是并不能保证参数同步更新,在任意时间某一台参数服务器上保存的参数可能比另一台要更新,与同步SGD相比,梯度会有噪声。
+
+# 环境准备
+
+1. 准备您的计算集群。计算集群通常由一组(几台到几千台规模)的Linux服务器组成。服务器之间可以通过局域网(LAN)联通,每台服务器具有集群中唯一的IP地址(或者可被DNS解析的主机名)。集群中的每台计算机通常被称为一个“节点”。
+1. 我们需要在集群的所有节点上安装 PaddlePaddle。 如果要启用GPU,还需要在节点上安装对应的GPU驱动以及CUDA。PaddlePaddle的安装可以参考[build_and_install](https://github.com/PaddlePaddle/Paddle/tree/develop/doc/getstarted/build_and_install)的多种安装方式。我们推荐使用[Docker](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/getstarted/build_and_install/docker_install_cn.rst)安装方式来快速安装PaddlePaddle。
+
+安装完成之后,执行下面的命令可以查看已经安装的版本(docker安装方式可以进入docker容器执行:`docker run -it paddlepaddle/paddle:[tag] /bin/bash`):
+```bash
+$ paddle version
+PaddlePaddle 0.10.0, compiled with
+ with_avx: ON
+ with_gpu: OFF
+ with_double: OFF
+ with_python: ON
+ with_rdma: OFF
+ with_timer: OFF
```
-# 运行分布式训练
+下面以`doc/howto/usage/cluster/src/word2vec`中的代码作为实例,介绍使用PaddlePaddle v2 API完成分布式训练。
-在本文中,我们将阐释如何在集群上运行分布式 Paddle 训练作业。我们将以[推荐系统](https://github.com/baidu/Paddle/tree/develop/demo/recommendation)为例创建分布式的单进程训练。
+# 启动参数说明
+## 启动参数服务器
+执行以下命令启动一个参数服务器,并等待和计算节点进行数据交互:
+```bash
+$ paddle pserver --port=7164 --ports_num=1 --ports_num_for_sparse=1 --num_gradient_servers=1
+```
-在本文中使用的[脚本](https://github.com/baidu/Paddle/tree/develop/paddle/scripts/cluster_train)通过 SSH 运行分布式作业。 它们还可以供那些运行更复杂的集群管理系统(如 MPI 和 [Kubernetes](https://github.com/PaddlePaddle/Paddle/tree/develop/doc/howto/usage/k8s) )的用户参考。
+如果希望可以在后台运行pserver程序,并保存输出到一个日志文件,可以运行:
+```bash
+$ stdbuf -oL /usr/bin/nohup paddle pserver --port=7164 --ports_num=1 --ports_num_for_sparse=1 --num_gradient_servers=1 &> pserver.log
+```
-## 前提条件
+| 参数 | 是否必选 | 默认值 | 说明 |
+| ------------- | ------------- | ------------- | ------------- |
+| port | 必选 | 7164 | pserver监听的起始端口,根据ports_num决定总端口个数,从起始端口监听多个端口用于通信 |
+| ports_num | 必选 | 1 | 监听的端口个数 |
+| ports_num_for_sparse | 必选 | 1 | 用于稀疏类型参数通信的端口个数 |
+| num_gradient_servers | 必选 | 1 | 当前训练任务pserver总数 |
+
+## 启动计算节点
+执行以下命令启动使用python编写的trainer程序(文件名可以任意,如train.py):
+```bash
+$ python train.py
+```
-1. 上述脚本使用 Python 库 [fabric](http://www.fabfile.org/) 来运行 SSH 命令。 我们使用 `pip` 来安装 fabric:
+trainer需要和pserver保持网络联通以完成训练。trainer启动时需要传入端口、pserver地址等参数,使trainer可以正确连接到pserver。这些参数可以通过[环境变量](https://zh.wikipedia.org/wiki/环境变量)传入,也可以在编写程序时通过`paddle.init()`传入。如果同时使用`paddle.init()`参数和环境变量,将会优先使用`paddle.init()`中传入的参数。
- ```bash
- pip install fabric
- ```
+使用环境变量:
-2. 我们需要在集群的所有节点上安装 PaddlePaddle。 如果要启用GPU,需要在 `/usr/local/cuda` 中安装 CUDA; 否则 Paddle 将在运行时报错。
+```bash
+export PADDLE_INIT_USE_GPU=False
+export PADDLE_INIT_TRAINER_COUNT=1
+export PADDLE_INIT_PORT=7164
+export PADDLE_INIT_PORTS_NUM=1
+export PADDLE_INIT_PORTS_NUM_FOR_SPARSE=1
+export PADDLE_INIT_NUM_GRADIENT_SERVERS=1
+export PADDLE_INIT_TRAINER_ID=0
+export PADDLE_INIT_PSERVERS=127.0.0.1
+```
-3. 在 [`cluster_train/conf.py`] 中设置 `ROOT_DIR`, 该 ROOT_DIR 要在所有节点上存在。为了方便起见,我们通常在所有节点上创建一个 Unix 用户 `paddle`,并设置 `ROOT_DIR=/home/paddle`。这样,我们可以将 SSH 公钥写入 `/home/paddle/.ssh/authorized_keys`,以便用户 `paddle` 可以 SSH 到所有节点而不用密码。
+使用参数:
-## 准备工作空间
+```python
+paddle.init(
+ use_gpu=False,
+ trainer_count=1,
+ port=7164,
+ ports_num=1,
+ ports_num_for_sparse=1,
+ num_gradient_servers=1,
+ trainer_id=0,
+ pservers="127.0.0.1")
+```
-我们将放置依赖库、配置等文件的目录视为 *工作空间(workspace)*。
+| 参数 | 是否必选 | 默认 | 说明 |
+| ------------- | ------------- | ------------- | ------------- |
+| use_gpu | 可选 | False | 是否启用GPU训练 |
+| trainer_count | 必选 | 1 | 当前训练任务trainer总个数 |
+| port | 必选 | 7164 | 连接到pserver的端口 |
+| ports_num | 必选 | 1 | 连接到pserver的端口个数 |
+| ports_num_for_sparse | 必选 | 1 | 和pserver之间用于稀疏类型参数通信的端口个数 |
+| num_gradient_servers | 必选 | 1 | 当前训练任务pserver总数 |
+| trainer_id | 必选 | 0 | 每个trainer的唯一ID,从0开始的整数 |
+| pservers | 必选 | 127.0.0.1 | 当前训练任务启动的pserver的IP列表,多个IP使用“,”隔开 |
-这些 `train/test` 数据应该在启动集群作业之前准备好。 为了满足训练/测试数据放置在工作空间中不同目录的要求,PADDLE 根据在模型配置文件中使用的名为 `train.list/test.list` 的索引文件引用训练/测试数据,所以训练/测试数据也包含 train.list/test.list 两个列表文件。所有本地训练 demo 已经提供了脚本来帮助您创建这两个文件,并且集群作业中的所有节点将在正常情况下处理具有相同逻辑代码的文件。
-通常,你可以使用本地训练中的相同模型文件进行集群训练。请记住,在模型文件的 `setting`函数中设置的 `batch_size` 表示在集群作业**每个**节点中的 batch 大小,而不是使用同步 SGD 的总 batch 大小。
+## 准备数据集
-以下步骤基于 demo 目录中的 [demo/recommendation](https://github.com/PaddlePaddle/Paddle/tree/develop/demo/recommendation)。
+参考样例数据准备脚本[prepare.py](https://github.com/PaddlePaddle/Paddle/tree/develop/doc/howto/usage/cluster/src/word2vec/prepare.py),准备训练数据和验证数据集,我们使用paddle.dataset.imikolov数据集,并根据分布式训练并发数(trainer节点个数),在`prepare.py`开头部分指定`SPLIT_COUNT`将数据切分成多份。
-你只需完成 demo/recommendation 教程文档到 `Train` 的部分,之后你会得到训练/测试数据和模型配置文件。最后,只需使用 demo/recommendation 作为集群训练的工作空间。
+在线上系统中,通常会使用MapReduce任务的输出结果作为训练数据,这样训练文件的个数会比较多,而且个数并不确定。在trainer中可以使用下面取模的方法为每个trainer分配训练数据文件:
-最后,你的工作空间应如下所示:
-```
-.
-|-- common_utils.py
-|-- data
-| |-- config.json
-| |-- config_generator.py
-| |-- meta.bin
-| |-- meta_config.json
-| |-- meta_generator.py
-| |-- ml-1m
-| |-- ml_data.sh
-| |-- ratings.dat.test
-| |-- ratings.dat.train
-| |-- split.py
-| |-- test.list
-| `-- train.list
-|-- dataprovider.py
-|-- evaluate.sh
-|-- prediction.py
-|-- preprocess.sh
-|-- requirements.txt
-|-- run.sh
-`-- trainer_config.py
+```python
+import os
+train_list = []
+flist = os.listdir("/train_data/")
+for f in flist:
+ suffix = int(f.split("-")[1])
+ if suffix % TRAINER_COUNT == TRAINER_ID:
+ train_list.append(f)
```
-虽然这些文件并非都需要集群训练,但是也没有必要删除无用的文件。
-
-`trainer_config.py`
-表示模型配置文件。
-`train.list` 和 `test.list`
-文件索引。它存储当前节点所有训练/测试数据的所有相对或绝对文件路径。
+示例程序`prepare.py`会把训练集和测试集分别分割成多个文件(例子中为3个,后缀为`-00000`、`-00001`和`-00002`):
+```
+train.txt
+train.txt-00000
+train.txt-00001
+train.txt-00002
+test.txt
+test.txt-00000
+test.txt-00001
+test.txt-00002
+```
-`dataprovider.py`
-用于读取训练/测试样本。这与本地训练相同。
+在进行分布式训练时,每个trainer进程需要能够读取属于自己的一份数据。在一些分布式系统中,系统会提供一个分布式存储服务,这样保存在分布式存储中的数据可以被集群中的每个节点读取到。如果不使用分布式存储,则需要手动拷贝属于每个trainer节点的训练数据到对应的节点上。
-`data`
-数据目录中的所有文件被 train.list/test.list 引用。
+对于不同的训练任务,训练数据格式和训练程序的`reader()`会大不相同,所以开发者需要根据自己训练任务的实际场景完成训练数据的分割和`reader()`的编写。
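+
+下面给出一个示意性的`reader()`写法(仅为参考示例,与本例中`api_train_v2_cluster.py`的实现类似,假设训练数据是`prepare.py`生成的逗号分隔整数行,`node_id`为当前trainer的编号):
+
+```python
+def make_cluster_reader(filename, node_id):
+    # 每个节点只读取属于自己的分片,例如 node 0 读取 train.txt-00000
+    def cluster_reader():
+        with open("-".join([filename, "%05d" % int(node_id)]), "r") as f:
+            for line in f:
+                # 每行是一条样本,字段为逗号分隔的整数(词id)
+                yield tuple(int(cell) for cell in line.split(","))
+    return cluster_reader
+```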
+## 准备训练程序
-## 准备集群作业配置
+对于每个训练任务,都会在每个节点上创建一个工作空间(workspace),其中包含了用户的训练程序、程序依赖、挂载或下载的训练数据分片。
-以下选项必须在 cluster_train/conf.py 中认真设置
+最后,工作空间应如下所示:
+```
+.
+|-- my_lib.py
+|-- word_dict.pickle
+|-- train.py
+|-- train_data_dir/
+| |-- train.txt-00000
+| |-- train.txt-00001
+| |-- train.txt-00002
+`-- test_data_dir/
+ |-- test.txt-00000
+ |-- test.txt-00001
+ `-- test.txt-00002
+```
-`HOSTS` 所有节点运行集群作业的主机名或 IP 。你还可以将用户和 ssh 端口附加到主机名上,例如 root@192.168.100.17:9090。
+- `my_lib.py`:会被`train.py`调用的一些用户定义的库函数,比如PIL库等。
+- `word_dict.pickle`:在`train.py`中会使用到的字典数据文件。
+- `train.py`:训练程序,代码参考[api_train_v2_cluster.py](https://github.com/PaddlePaddle/Paddle/tree/develop/doc/howto/usage/cluster/src/word2vec/api_train_v2_cluster.py)。***注意:*** 对于本样例代码,在使用不同的分布式计算平台时,您可能需要修改`train.py`开头的部分(如下),以便获得训练数据的位置并读取环境变量配置:
-`ROOT_DIR` 用于放置 JOB 工作空间目录的工作空间 ROOT 目录
+ ```python
+ cluster_train_file = "./train_data_dir/train/train.txt"
+ cluster_test_file = "./test_data_dir/test/test.txt"
+ node_id = os.getenv("OMPI_COMM_WORLD_RANK")
+ if not node_id:
+    raise EnvironmentError("must provide OMPI_COMM_WORLD_RANK")
+ ```
-`PADDLE_NIC` 集群通信通道的 NIC(Network Interface Card, 网络接口卡) 接口名称,例如以太网的 eth0,infiniband 的 ib0。
+- `train_data_dir`:包含训练数据的目录,可以是从分布式存储挂载过来的,也可以是在任务启动前下载到本地的。
+- `test_data_dir`:包含测试数据集的目录。
-`PADDLE_PORT` 集群通信通道的端口号
+# 使用分布式计算平台或工具
-`PADDLE_PORTS_NUM` 用于集群通信通道的端口数。 如果集群节点数量少(少于5〜6个节点),建议将其设置为较大,如2〜8,以获得更好的网络性能。
+PaddlePaddle可以使用多种分布式计算平台构建分布式计算任务,包括:
+- [Kubernetes](http://kubernetes.io) Google开源的容器集群的调度框架,支持大规模集群生产环境的完整集群方案。
+- [OpenMPI](https://www.open-mpi.org) 成熟的高性能并行计算框架。
+- [Fabric](http://www.fabfile.org) 集群管理工具。可以使用`Fabric`编写集群任务提交和管理脚本。
-`PADDLE_PORTS_NUM_FOR_SPARSE` 用于 sparse remote updater 集群通信信道的端口数。如果使用 sparse remote update,则可以像 `PADDLE_PORTS_NUM` 一样设置。
+对于不同的集群平台,会分别介绍集群作业的启动和停止方法。这些例子都可以在[cluster_train_v2](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/scripts/cluster_train_v2)找到。
-`LD_LIBRARY_PATH` 为集群作业设置额外的 LD_LIBRARY_PATH。你可以使用它来设置 CUDA 库的路径。
+使用分布式计算平台进行训练时,任务会被调度到集群中执行,分布式计算平台通常会通过API或者环境变量向任务提供运行所需的参数,比如节点的ID、IP和任务节点总数等。
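+
+例如,在OpenMPI集群中,训练程序可以用类似下面的方式读取这些参数(仅为示意,使用OpenMPI以及本例`start_mpi_train.sh`中设置的环境变量):
+
+```python
+import os
+
+# OpenMPI 会为每个进程设置 rank(进程编号)和进程总数
+trainer_id = int(os.getenv("OMPI_COMM_WORLD_RANK", "0"))
+trainer_count = int(os.getenv("OMPI_COMM_WORLD_SIZE", "1"))
+# pserver 的IP列表由启动脚本通过环境变量传入
+pservers = os.getenv("PADDLE_INIT_PSERVERS", "127.0.0.1")
+```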
-默认配置如下:
+## 使用Fabric启动集群作业
-```python
-HOSTS = [
- "root@192.168.100.17",
- "root@192.168.100.18",
- ]
-
-'''
-工作空间配置
-'''
-
-#工作空间根目录
-ROOT_DIR = "/home/paddle"
-
-'''
-网络配置
-'''
-#pserver NIC
-PADDLE_NIC = "eth0"
-#pserver 端口
-PADDLE_PORT = 7164
-#pserver 端口数
-PADDLE_PORTS_NUM = 2
-#pserver sparse ports num
-PADDLE_PORTS_NUM_FOR_SPARSE = 2
-
-#集群作业中所有进程的环境设置
-LD_LIBRARY_PATH="/usr/local/cuda/lib64:/usr/lib64"
-```
+### 准备一个Linux集群
+可以在`paddle/scripts/cluster_train_v2/fabric/docker_cluster`目录下,执行`kubectl create -f ssh_servers.yaml`启动一个测试集群,并使用`kubectl get po -o wide`获得这些节点的IP地址。
### 启动集群作业
-`paddle.py` 提供了自动化脚本来启动不同节点中的所有 PaddlePaddle 集群进程。默认情况下,所有命令行选项可以设置为```paddle.py``` 命令选项并且 `paddle.py` 将透明、自动地将这些选项应用到 PaddlePaddle 底层进程。
+
+`paddle.py` 提供了自动化脚本来启动不同节点中的所有 PaddlePaddle 集群进程。默认情况下,所有命令行选项可以设置为 `paddle.py` 命令选项并且 `paddle.py` 将透明、自动地将这些选项应用到 PaddlePaddle 底层进程。
`paddle.py` 为方便作业启动提供了两个独特的命令选项。
-`job_dispatch_package` 设为本地 `workspace` 目录,它将被分发到 conf.py 中设置的所有节点。 它有助于帮助频繁修改和访问工作区文件的用户减少负担,否则频繁的多节点工作空间部署可能会很麻烦。
-`job_workspace` 设为已部署的工作空间目录,`paddle.py` 将跳过分发阶段直接启动所有节点的集群作业。它可以帮助减少分发延迟。
+- `job_dispatch_package` 设为本地 `workspace` 目录,它将被分发到 `conf.py` 中设置的所有节点。它有助于帮助频繁修改和访问工作区文件的用户减少负担,否则频繁的多节点工作空间部署可能会很麻烦。
+- `job_workspace` 设为已部署的工作空间目录,`paddle.py` 将跳过分发阶段直接启动所有节点的集群作业。它可以帮助减少分发延迟。
-`cluster_train/run.sh` 提供了命令样例来运行 `demo/recommendation` 集群工作,只需用你定义的目录修改 `job_dispatch_package` 和 `job_workspace`,然后:
+`cluster_train/run.sh` 提供了命令样例来运行 `doc/howto/usage/cluster/src/word2vec` 集群任务,只需用您定义的目录修改 `job_dispatch_package` 和 `job_workspace`,然后:
```
sh run.sh
```
@@ -149,7 +229,7 @@ sh run.sh
提供 pserver 运行日志,有助于诊断分布式错误。
`server.log`
-提供 pserver 进程的 stderr 和 stdout。训练失败时可以检查错误日志。
+提供 parameter server 进程的 stderr 和 stdout。训练失败时可以检查错误日志。
`train.log`
提供训练过程的 stderr 和 stdout。训练失败时可以检查错误日志。
@@ -157,3 +237,49 @@ sh run.sh
### 检查模型输出
运行完成后,模型文件将被写入节点 0 的 `output` 目录中。
工作空间中的 `nodefile` 表示当前集群作业的节点 ID。
+
+## 在OpenMPI集群中提交训练作业
+
+### 准备OpenMPI集群
+
+执行下面的命令以启动3个节点的OpenMPI集群和一个"head"节点:
+
+```bash
+cd paddle/scripts/cluster_train_v2/openmpi/docker_cluster
+kubectl create -f head.yaml
+kubectl create -f mpi-nodes.yaml
+```
+
+然后可以从head节点ssh无密码登录到OpenMPI的每个节点上。
+
+### 启动集群作业
+
+您可以按照下面的步骤在OpenMPI集群中提交paddle训练任务:
+
+```bash
+# 获得head和node节点的IP地址
+kubectl get po -o wide
+# 将node节点的IP地址保存到machines文件中
+kubectl get po -o wide | grep nodes | awk '{print $6}' > machines
+# 拷贝必要的文件到head节点
+scp -i ssh/id_rsa.mpi.pub machines prepare.py train.py start_mpi_train.sh tutorial@[headIP]:~
+# ssh 登录到head节点
+ssh -i ssh/id_rsa.mpi.pub tutorial@[headIP]
+# --------------- 以下操作均在head节点中执行 ---------------
+# 准备训练数据
+python prepare.py
+# 拷贝训练程序和字典文件到每台MPI节点
+cat machines | xargs -i scp word_dict.pickle train.py start_mpi_train.sh machines {}:/home/tutorial
+# 创建日志目录
+mpirun -hostfile machines -n 3 mkdir /home/tutorial/logs
+# 拷贝训练数据到各自的节点
+scp train.txt-00000 test.txt-00000 [node1IP]:/home/tutorial
+scp train.txt-00001 test.txt-00001 [node2IP]:/home/tutorial
+scp train.txt-00002 test.txt-00002 [node3IP]:/home/tutorial
+# 启动训练任务
+mpirun -hostfile machines -n 3 /home/tutorial/start_mpi_train.sh
+```
+
+## 在Kubernetes集群中提交训练作业
+
+此部分的使用方法可以参考[这里](../k8s/k8s_distributed_cn.md)。
diff --git a/doc/howto/usage/cluster/cluster_train_en.md b/doc/howto/usage/cluster/cluster_train_en.md
index c60876721cbf5565d6e48c8061811aacada748cd..1e8b4d54b9ffa99b3beef35ecaf95bbd0866535f 100644
--- a/doc/howto/usage/cluster/cluster_train_en.md
+++ b/doc/howto/usage/cluster/cluster_train_en.md
@@ -1,129 +1,220 @@
-# Run Distributed Training
+# PaddlePaddle Distributed Training
+
+* [Introduction](#introduction)
+* [Preparations](#preparations)
+* [Command-line arguments](#command-line-arguments)
+ * [Starting parameter server](#starting-parameter-server)
+ * [Starting trainer](#starting-trainer)
+ * [Prepare Training Dataset](#prepare-training-dataset)
+ * [Prepare Training program](#prepare-training-program)
+* [Use cluster platforms or cluster management tools](#use-cluster-platforms-or-cluster-management-tools)
+ * [Cluster Training Using Fabric](#cluster-training-using-fabric)
+ * [Prepare a Linux cluster](#prepare-a-linux-cluster)
+ * [Launching Cluster Job](#launching-cluster-job)
+ * [Kill Cluster Job](#kill-cluster-job)
+ * [Check Cluster Training Result](#check-cluster-training-result)
+ * [Check Model Output](#check-model-output)
+ * [Cluster Training Using OpenMPI](#cluster-training-using-openmpi)
+ * [Prepare an OpenMPI cluster](#prepare-an-openmpi-cluster)
+ * [Launching Cluster Job](#launching-cluster-job-1)
+ * [Cluster Training Using Kubernetes](#cluster-training-using-kubernetes)
+
+# Introduction
+
+In this article, we'll explain how to run distributed training jobs with PaddlePaddle on different types of clusters. The diagram below shows the main architecture of a distributed training job:
+
+
+
+- Data shard: the training data is split into multiple partitions; each trainer uses its own partition of the whole dataset for training.
+- Trainer: each trainer reads its data shard and trains the neural network. The trainer then uploads the calculated "gradients" to the parameter servers and waits for the parameters to be optimized on the parameter server side. When that finishes, the trainer downloads the optimized parameters and continues training.
+- Parameter server: every parameter server stores a part of the whole neural network model. Parameter servers run the optimization when gradients are uploaded from trainers, and then send the updated parameters back to the trainers.
+
+PaddlePaddle supports both synchronous stochastic gradient descent (SGD) and asynchronous SGD.
+
+When training with synchronous SGD, PaddlePaddle uses an internal "synchronize barrier" which makes gradient submission and parameter updates execute in strict order. With asynchronous SGD, on the other hand, parameter servers do not wait for all trainers to finish uploading gradients at each step, which increases the parallelism of distributed training: parameter servers do not depend on each other and optimize parameters concurrently; they do not wait for all trainers either, and trainers likewise do their work concurrently. The trade-off is that asynchronous SGD introduces more randomness and noise into the gradients.
+
+# Preparations
+1. Prepare your computer cluster. It's normally a bunch of Linux servers connected by LAN. Each server will be assigned a unique IP address. The computers in the cluster can be called "nodes".
+2. Install PaddlePaddle on every node. If you are going to take advantage of GPU cards, you'll also need to install proper driver and CUDA libraries. To install PaddlePaddle please read [this build and install](https://github.com/PaddlePaddle/Paddle/tree/develop/doc/getstarted/build_and_install) document. We strongly recommend using [Docker installation](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/getstarted/build_and_install/docker_install_en.rst).
+
+After installation, you can check the version by typing the below command (run a docker container if using docker: `docker run -it paddlepaddle/paddle:[tag] /bin/bash`):
+
+```bash
+$ paddle version
+PaddlePaddle 0.10.0rc, compiled with
+ with_avx: ON
+ with_gpu: OFF
+ with_double: OFF
+ with_python: ON
+ with_rdma: OFF
+ with_timer: OFF
+```
-In this article, we explain how to run distributed Paddle training jobs on clusters. We will create the distributed version of the single-process training example, [recommendation](https://github.com/baidu/Paddle/tree/develop/demo/recommendation).
+We'll take `doc/howto/usage/cluster/src/word2vec` as an example to introduce distributed training using PaddlePaddle v2 API.
-[Scripts](https://github.com/baidu/Paddle/tree/develop/paddle/scripts/cluster_train) used in this article launch distributed jobs via SSH. They also work as a reference for users running more sophisticated cluster management systems like MPI and [Kubernetes](https://github.com/PaddlePaddle/Paddle/tree/develop/doc/howto/usage/k8s).
+# Command-line arguments
-## Prerequisite
+## Starting parameter server
-1. Aforementioned scripts use a Python library [fabric](http://www.fabfile.org/) to run SSH commands. We can use `pip` to install fabric:
+Type the below command to start a parameter server which will wait for trainers to connect:
- ```bash
- pip install fabric
- ```
+```bash
+$ paddle pserver --port=7164 --ports_num=1 --ports_num_for_sparse=1 --num_gradient_servers=1
+```
-1. We need to install PaddlePaddle on all nodes in the cluster. To enable GPUs, we need to install CUDA in `/usr/local/cuda`; otherwise Paddle would report errors at runtime.
+If you wish to run parameter servers in the background and save the output to a log file, you can type:
+```bash
+$ stdbuf -oL /usr/bin/nohup paddle pserver --port=7164 --ports_num=1 --ports_num_for_sparse=1 --num_gradient_servers=1 &> pserver.log
+```
-1. Set the `ROOT_DIR` variable in [`cluster_train/conf.py`] on all nodes. For convenience, we often create a Unix user `paddle` on all nodes and set `ROOT_DIR=/home/paddle`. In this way, we can write public SSH keys into `/home/paddle/.ssh/authorized_keys` so that user `paddle` can SSH to all nodes without password.
+| param | required | default | description |
+| ------------- | ------------- | ------------- | ------------- |
+| port | required | 7164 | port which the parameter server will listen on. If ports_num is greater than 1, the parameter server will listen on multiple ports for more network throughput |
+| ports_num | required | 1 | total number of ports the parameter server will listen on |
+| ports_num_for_sparse | required | 1 | number of ports which serve sparse parameter updates |
+| num_gradient_servers | required | 1 | total number of gradient servers |
-## Prepare Job Workspace
+## Starting trainer
+Type the command below to start the trainer (you can name the file whatever you want, e.g. "train.py"):
-We refer to the directory where we put dependent libraries, config files, etc., as *workspace*.
+```bash
+$ python train.py
+```
-These `train/test` data should be prepared before launching cluster job. To satisfy the requirement that train/test data are placed in different directory from workspace, PADDLE refers train/test data according to index file named as `train.list/test.list` which are used in model config file. So the train/test data also contains train.list/test.list two list file. All local training demo already provides scripts to help you create these two files, and all nodes in cluster job will handle files with same logical code in normal condition.
+Trainers need network connectivity with the parameter servers to finish the job, and they need to know the port and IPs to locate the parameter servers. You can pass these arguments to trainers through [environment variables](https://en.wikipedia.org/wiki/Environment_variable) or through the `paddle.init()` function. Arguments passed to the `paddle.init()` function will overwrite environment variables.
-Generally, you can use same model file from local training for cluster training. What you should have in mind that, the `batch_size` set in `setting` function in model file means batch size in `each` node of cluster job instead of total batch size if synchronization SGD was used.
+Use environment variables:
-Following steps are based on [demo/recommendation](https://github.com/PaddlePaddle/Paddle/tree/develop/demo/recommendation) demo in demo directory.
+```bash
+export PADDLE_INIT_USE_GPU=False
+export PADDLE_INIT_TRAINER_COUNT=1
+export PADDLE_INIT_PORT=7164
+export PADDLE_INIT_PORTS_NUM=1
+export PADDLE_INIT_PORTS_NUM_FOR_SPARSE=1
+export PADDLE_INIT_NUM_GRADIENT_SERVERS=1
+export PADDLE_INIT_TRAINER_ID=0
+export PADDLE_INIT_PSERVERS=127.0.0.1
+python train.py
+```
-You just go through demo/recommendation tutorial doc until `Train` section, and at last you will get train/test data and model configuration file. Finaly, just use demo/recommendation as workspace for cluster training.
+Pass arguments:
-At last your workspace should look like as follow:
+```python
+paddle.init(
+ use_gpu=False,
+ trainer_count=1,
+ port=7164,
+ ports_num=1,
+ ports_num_for_sparse=1,
+ num_gradient_servers=1,
+ trainer_id=0,
+ pservers="127.0.0.1")
```
-.
-|-- common_utils.py
-|-- data
-| |-- config.json
-| |-- config_generator.py
-| |-- meta.bin
-| |-- meta_config.json
-| |-- meta_generator.py
-| |-- ml-1m
-| |-- ml_data.sh
-| |-- ratings.dat.test
-| |-- ratings.dat.train
-| |-- split.py
-| |-- test.list
-| `-- train.list
-|-- dataprovider.py
-|-- evaluate.sh
-|-- prediction.py
-|-- preprocess.sh
-|-- requirements.txt
-|-- run.sh
-`-- trainer_config.py
+
+| param | required | default | description |
+| ------------- | ------------- | ------------- | ------------- |
+| use_gpu | optional | False | set to "True" to enable GPU training |
+| trainer_count | required | 1 | total count of trainers in the training job |
+| port | required | 7164 | port to connect to parameter server |
+| ports_num | required | 1 | number of ports for communication |
+| ports_num_for_sparse | required | 1 | number of ports used for sparse parameter communication |
+| num_gradient_servers | required | 1 | total number of gradient servers |
+| trainer_id | required | 0 | unique ID of each trainer, starting from 0 |
+| pservers | required | 127.0.0.1 | list of IPs of parameter servers, separated by "," |
+
+## Prepare Training Dataset
+
+Here's some example code [prepare.py](https://github.com/PaddlePaddle/Paddle/tree/develop/doc/howto/usage/cluster/src/word2vec/prepare.py); it will download the public `imikolov` dataset and split it into multiple files according to the job parallelism (trainer count). Modify `SPLIT_COUNT` at the beginning of `prepare.py` to change the number of output files.
+
+In the real world, we often use a `MapReduce` job's output as training data, so there will be lots of files. You can use `mod` to assign training files to trainers:
+
+```python
+import os
+train_list = []
+flist = os.listdir("/train_data/")
+for f in flist:
+ suffix = int(f.split("-")[1])
+ if suffix % TRAINER_COUNT == TRAINER_ID:
+ train_list.append(f)
+```
+
+Example code `prepare.py` will split the training data and testing data into 3 files each, with numeric suffixes like `-00000`, `-00001` and `-00002`:
+
+```
+train.txt
+train.txt-00000
+train.txt-00001
+train.txt-00002
+test.txt
+test.txt-00000
+test.txt-00001
+test.txt-00002
```
-Not all of these files are needed for cluster training, but it's not necessary to remove useless files.
-`trainer_config.py`
-Indicates the model config file.
+When the job starts, every trainer needs to get its own part of the data. In some distributed systems a storage service is provided, so the data under its path can be accessed by all the trainer nodes. Without such a storage service, you must copy the training data to each trainer node.
-`train.list` and `test.list`
-File index. It stores all relative or absolute file paths of all train/test data at current node.
+Different training jobs may have different data formats and `reader()` functions; developers may need to write different data preparation scripts and `reader()` functions for their jobs.
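+
+As a sketch (modeled on `api_train_v2_cluster.py` in this example, and assuming the comma-separated integer lines produced by `prepare.py`), a per-trainer `reader()` may look like:
+
+```python
+def make_cluster_reader(filename, node_id):
+    # Each trainer reads only its own shard, e.g. node 0 reads train.txt-00000.
+    def cluster_reader():
+        with open("-".join([filename, "%05d" % int(node_id)]), "r") as f:
+            for line in f:
+                # One sample per line: comma-separated integer word ids.
+                yield tuple(int(cell) for cell in line.split(","))
+    return cluster_reader
+```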
-`dataprovider.py`
-used to read train/test samples. It's same as local training.
+## Prepare Training program
-`data`
-all files in data directory are refered by train.list/test.list which are refered by data provider.
+We'll create a *workspace* directory on each node, storing your training program, its dependencies, and the mounted or downloaded dataset directories.
-## Prepare Cluster Job Configuration
+Your workspace may look like:
+```
+.
+|-- my_lib.py
+|-- word_dict.pickle
+|-- train.py
+|-- train_data_dir/
+| |-- train.txt-00000
+| |-- train.txt-00001
+| |-- train.txt-00002
+`-- test_data_dir/
+ |-- test.txt-00000
+ |-- test.txt-00001
+ `-- test.txt-00002
+```
-The options below must be carefully set in cluster_train/conf.py
+- `my_lib.py`: user-defined libraries, like PIL libs. This is optional.
+- `word_dict.pickle`: dictionary file for training the word embedding.
+- `train.py`: the training program. Sample code: [api_train_v2_cluster.py](https://github.com/PaddlePaddle/Paddle/tree/develop/doc/howto/usage/cluster/src/word2vec/api_train_v2_cluster.py). ***NOTE:*** You may need to modify the head part of `train.py` when using a different cluster platform, to retrieve the training data location and configuration environment variables:
-`HOSTS` all nodes hostname or ip that will run cluster job. You can also append user and ssh port with hostname, such as root@192.168.100.17:9090.
+ ```python
+ cluster_train_file = "./train_data_dir/train/train.txt"
+ cluster_test_file = "./test_data_dir/test/test.txt"
+ node_id = os.getenv("OMPI_COMM_WORLD_RANK")
+ if not node_id:
+    raise EnvironmentError("must provide OMPI_COMM_WORLD_RANK")
+ ```
-`ROOT_DIR` workspace ROOT directory for placing JOB workspace directory
+- `train_data_dir`: contains the training data. Mount it from a storage service or copy the training data here before the job starts.
+- `test_data_dir`: contains the testing data.
-`PADDLE_NIC` the NIC(Network Interface Card) interface name for cluster communication channel, such as eth0 for ethternet, ib0 for infiniband.
+# Use cluster platforms or cluster management tools
-`PADDLE_PORT` port number for cluster commnunication channel
+PaddlePaddle supports running jobs on several platforms including:
+- [Kubernetes](http://kubernetes.io) Google's open-source system for automating deployment, scaling, and management of containerized applications.
+- [OpenMPI](https://www.open-mpi.org) Mature high performance parallel computing framework.
+- [Fabric](http://www.fabfile.org) A cluster management tool. Write scripts to submit jobs or manage the cluster.
-`PADDLE_PORTS_NUM` the number of port used for cluster communication channle. if the number of cluster nodes is small(less than 5~6nodes), recommend you set it to larger, such as 2 ~ 8, for better network performance.
+We'll introduce cluster job management on these platforms. The examples can be found under [cluster_train_v2](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/scripts/cluster_train_v2).
-`PADDLE_PORTS_NUM_FOR_SPARSE` the number of port used for sparse updater cluster commnunication channel. if sparse remote update is used, set it like `PADDLE_PORTS_NUM`
+When the job is dispatched to different nodes, these cluster platforms provide APIs or environment variables that pass the required parameters to the training processes, such as the node ID, the node IP, and the total number of nodes.
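+
+For example, on an OpenMPI cluster the training program could read these values as follows (a sketch only, using the environment variables set by OpenMPI and by `start_mpi_train.sh` in this example):
+
+```python
+import os
+
+# OpenMPI sets the rank and the total number of processes for every process.
+trainer_id = int(os.getenv("OMPI_COMM_WORLD_RANK", "0"))
+trainer_count = int(os.getenv("OMPI_COMM_WORLD_SIZE", "1"))
+# The pserver IP list is exported by the launch script.
+pservers = os.getenv("PADDLE_INIT_PSERVERS", "127.0.0.1")
+```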
-`LD_LIBRARY_PATH` set addtional LD_LIBRARY_PATH for cluster job. You can use it to set CUDA libraries path.
+## Cluster Training Using Fabric
-Default Configuration as follow:
+### Prepare a Linux cluster
-```python
-HOSTS = [
- "root@192.168.100.17",
- "root@192.168.100.18",
- ]
-
-'''
-workspace configuration
-'''
-
-#root dir for workspace
-ROOT_DIR = "/home/paddle"
-
-'''
-network configuration
-'''
-#pserver nics
-PADDLE_NIC = "eth0"
-#pserver port
-PADDLE_PORT = 7164
-#pserver ports num
-PADDLE_PORTS_NUM = 2
-#pserver sparse ports num
-PADDLE_PORTS_NUM_FOR_SPARSE = 2
-
-#environments setting for all processes in cluster job
-LD_LIBRARY_PATH="/usr/local/cuda/lib64:/usr/lib64"
-```
+Running `kubectl create -f ssh_servers.yaml` under the directory `paddle/scripts/cluster_train_v2/fabric/docker_cluster` will launch a demo cluster. Run `kubectl get po -o wide` to get the IP addresses of these nodes.
### Launching Cluster Job
-`paddle.py` provides automatical scripts to start all PaddlePaddle cluster processes in different nodes. By default, all command line options can set as `paddle.py` command options and `paddle.py` will transparently and automatically set these options to PaddlePaddle lower level processes.
+`paddle.py` provides automated scripts to start all PaddlePaddle cluster processes on different nodes. By default, all command line options can be set as `paddle.py` command options and `paddle.py` will transparently and automatically pass these options to the PaddlePaddle lower level processes.
`paddle.py`provides two distinguished command option for easy job launching.
-`job_dispatch_package` set it with local `workspace`directory, it will be dispatched to all nodes set in conf.py. It could be helpful for frequent hacking workspace files, otherwise frequent mulit-nodes workspace deployment could make your crazy.
-`job_workspace` set it with already deployed workspace directory, `paddle.py` will skip dispatch stage to directly launch cluster job with all nodes. It could help to reduce heavy
+- `job_dispatch_package` set it to a local `workspace` directory; it will be dispatched to all nodes that are set in `conf.py`. It could be helpful for frequently manipulating workspace files; otherwise, frequent multi-node workspace deployment is very annoying.
+- `job_workspace` set it with already deployed workspace directory, `paddle.py` will skip dispatch stage to directly launch cluster job with all nodes. It could help to reduce heavy
dispatch latency.
`cluster_train/run.sh` provides command line sample to run `demo/recommendation` cluster job, just modify `job_dispatch_package` and `job_workspace` with your defined directory, then:
@@ -134,23 +225,69 @@ sh run.sh
The cluster Job will start in several seconds.
### Kill Cluster Job
-`paddle.py` can capture `Ctrl + C` SIGINT signal to automatically kill all processes launched by it. So just stop `paddle.py` to kill cluster job. You should mannally kill job if program crashed.
+`paddle.py` can capture the `Ctrl + C` SIGINT signal to automatically kill all processes launched by it. So just stop `paddle.py` to kill the cluster job. You should manually kill the job if the program crashes.
### Check Cluster Training Result
Check log in $workspace/log for details, each node owns same log structure.
`paddle_trainer.INFO`
-It provides almost all interal output log for training, same as local training. Check runtime model convergence here.
+It provides almost all internal output log for training, same as local training. Check runtime model convergence here.
`paddle_pserver2.INFO`
-It provides pserver running log, which could help to diagnose distributed error.
+It provides parameter server running log, which could help to diagnose distributed error.
`server.log`
-It provides stderr and stdout of pserver process. Check error log if training crashs.
+It provides stderr and stdout of parameter server process. Check error log if training crashes.
`train.log`
-It provides stderr and stdout of trainer process. Check error log if training crashs.
+It provides stderr and stdout of trainer process. Check error log if training crashes.
### Check Model Output
-After one pass finished, model files will be writed in `output` directory in node 0.
+After one pass finished, model files will be written in `output` directory in node 0.
`nodefile` in workspace indicates the node id of current cluster job.
+
+## Cluster Training Using OpenMPI
+
+### Prepare an OpenMPI cluster
+
+Run the following command to start a 3-node MPI cluster and one "head" node.
+
+```bash
+cd paddle/scripts/cluster_train_v2/openmpi/docker_cluster
+kubectl create -f head.yaml
+kubectl create -f mpi-nodes.yaml
+```
+
+Then you can log in to every OpenMPI node using ssh without entering a password.
+
+### Launching Cluster Job
+
+Follow the steps below to launch a PaddlePaddle training job in the OpenMPI cluster:
+
+```bash
+# find out node IP addresses
+kubectl get po -o wide
+# generate a "machines" file containing node IP addresses
+kubectl get po -o wide | grep nodes | awk '{print $6}' > machines
+# copy necessary files onto "head" node
+scp -i ssh/id_rsa.mpi.pub machines prepare.py train.py start_mpi_train.sh tutorial@[headIP]:~
+# login to head node using ssh
+ssh -i ssh/id_rsa.mpi.pub tutorial@[headIP]
+# --------------- in head node ---------------
+# prepare training data
+python prepare.py
+# copy training data and dict file to MPI nodes
+cat machines | xargs -i scp word_dict.pickle train.py start_mpi_train.sh machines {}:/home/tutorial
+# create a directory for storing log files
+mpirun -hostfile machines -n 3 mkdir /home/tutorial/logs
+# copy training data to every node
+scp train.txt-00000 test.txt-00000 [node1IP]:/home/tutorial
+scp train.txt-00001 test.txt-00001 [node2IP]:/home/tutorial
+scp train.txt-00002 test.txt-00002 [node3IP]:/home/tutorial
+# start the job
+mpirun -hostfile machines -n 3 /home/tutorial/start_mpi_train.sh
+```
+
+## Cluster Training Using Kubernetes
+
+The details can be found [here](../k8s/k8s_cn.md).
diff --git a/doc/howto/usage/cluster/src/trainer.png b/doc/howto/usage/cluster/src/trainer.png
new file mode 100644
index 0000000000000000000000000000000000000000..6537d3d56589ca9f19a77a50a970e4b5275e6ce0
Binary files /dev/null and b/doc/howto/usage/cluster/src/trainer.png differ
diff --git a/doc/howto/usage/cluster/src/trainer_cn.png b/doc/howto/usage/cluster/src/trainer_cn.png
new file mode 100644
index 0000000000000000000000000000000000000000..f9525739cc8bc6506adde642aafa0a85ae3ebebc
Binary files /dev/null and b/doc/howto/usage/cluster/src/trainer_cn.png differ
diff --git a/doc/howto/usage/cluster/src/word2vec/api_train_v2.py b/doc/howto/usage/cluster/src/word2vec/api_train_v2.py
new file mode 100644
index 0000000000000000000000000000000000000000..c0940f0e56eafa22f8aeb7052c0ddc79d8862917
--- /dev/null
+++ b/doc/howto/usage/cluster/src/word2vec/api_train_v2.py
@@ -0,0 +1,100 @@
+import gzip
+import math
+
+import paddle.v2 as paddle
+
+embsize = 32
+hiddensize = 256
+N = 5
+
+
+def wordemb(inlayer):
+ wordemb = paddle.layer.embedding(
+ input=inlayer,
+ size=embsize,
+ param_attr=paddle.attr.Param(
+ name="_proj",
+ initial_std=0.001,
+ learning_rate=1,
+ l2_rate=0,
+ sparse_update=True))
+ return wordemb
+
+
+def main():
+ # for local training
+ cluster_train = False
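+    # Set cluster_train=True to initialize PaddlePaddle with the pserver-related arguments below.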
+
+ if not cluster_train:
+ paddle.init(use_gpu=False, trainer_count=1)
+ else:
+ paddle.init(
+ use_gpu=False,
+ trainer_count=2,
+ port=7164,
+ ports_num=1,
+ ports_num_for_sparse=1,
+ num_gradient_servers=1)
+ word_dict = paddle.dataset.imikolov.build_dict()
+ dict_size = len(word_dict)
+ firstword = paddle.layer.data(
+ name="firstw", type=paddle.data_type.integer_value(dict_size))
+ secondword = paddle.layer.data(
+ name="secondw", type=paddle.data_type.integer_value(dict_size))
+ thirdword = paddle.layer.data(
+ name="thirdw", type=paddle.data_type.integer_value(dict_size))
+ fourthword = paddle.layer.data(
+ name="fourthw", type=paddle.data_type.integer_value(dict_size))
+ nextword = paddle.layer.data(
+ name="fifthw", type=paddle.data_type.integer_value(dict_size))
+
+ Efirst = wordemb(firstword)
+ Esecond = wordemb(secondword)
+ Ethird = wordemb(thirdword)
+ Efourth = wordemb(fourthword)
+
+ contextemb = paddle.layer.concat(input=[Efirst, Esecond, Ethird, Efourth])
+ hidden1 = paddle.layer.fc(input=contextemb,
+ size=hiddensize,
+ act=paddle.activation.Sigmoid(),
+ layer_attr=paddle.attr.Extra(drop_rate=0.5),
+ bias_attr=paddle.attr.Param(learning_rate=2),
+ param_attr=paddle.attr.Param(
+ initial_std=1. / math.sqrt(embsize * 8),
+ learning_rate=1))
+ predictword = paddle.layer.fc(input=hidden1,
+ size=dict_size,
+ bias_attr=paddle.attr.Param(learning_rate=2),
+ act=paddle.activation.Softmax())
+
+ def event_handler(event):
+ if isinstance(event, paddle.event.EndIteration):
+ if event.batch_id % 100 == 0:
+ with gzip.open("batch-" + str(event.batch_id) + ".tar.gz",
+ 'w') as f:
+ trainer.save_parameter_to_tar(f)
+ result = trainer.test(
+ paddle.batch(
+ paddle.dataset.imikolov.test(word_dict, N), 32))
+ print "Pass %d, Batch %d, Cost %f, %s, Testing metrics %s" % (
+ event.pass_id, event.batch_id, event.cost, event.metrics,
+ result.metrics)
+
+ cost = paddle.layer.classification_cost(input=predictword, label=nextword)
+
+ parameters = paddle.parameters.create(cost)
+ adagrad = paddle.optimizer.AdaGrad(
+ learning_rate=3e-3,
+ regularization=paddle.optimizer.L2Regularization(8e-4))
+ trainer = paddle.trainer.SGD(cost,
+ parameters,
+ adagrad,
+ is_local=not cluster_train)
+ trainer.train(
+ paddle.batch(paddle.dataset.imikolov.train(word_dict, N), 32),
+ num_passes=30,
+ event_handler=event_handler)
+
+
+if __name__ == '__main__':
+ main()
diff --git a/doc/howto/usage/cluster/src/word2vec/api_train_v2_cluster.py b/doc/howto/usage/cluster/src/word2vec/api_train_v2_cluster.py
new file mode 100644
index 0000000000000000000000000000000000000000..2e6d8887124a5524505b097803a60a35478ca644
--- /dev/null
+++ b/doc/howto/usage/cluster/src/word2vec/api_train_v2_cluster.py
@@ -0,0 +1,123 @@
+import math
+import os
+import paddle.v2 as paddle
+import pickle
+
+embsize = 32
+hiddensize = 256
+N = 5
+cluster_train_file = "./train_data_dir/train/train.txt"
+cluster_test_file = "./test_data_dir/test/test.txt"
+node_id = os.getenv("OMPI_COMM_WORLD_RANK")
+if not node_id:
+    raise EnvironmentError("must provide OMPI_COMM_WORLD_RANK")
+
+
+def wordemb(inlayer):
+ wordemb = paddle.layer.embedding(
+ input=inlayer,
+ size=embsize,
+ param_attr=paddle.attr.Param(
+ name="_proj",
+ initial_std=0.001,
+ learning_rate=1,
+ l2_rate=0,
+ sparse_update=True))
+ return wordemb
+
+
+def cluster_reader_cluster(filename, node_id):
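+    # Each node reads only its own shard, e.g. "train.txt-00000" for node 0;
+    # every line holds one sample as comma-separated integer word ids.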
+ def cluster_reader():
+ with open("-".join([filename, "%05d" % int(node_id)]), "r") as f:
+ for l in f:
+ csv_data = [int(cell) for cell in l.split(",")]
+ yield tuple(csv_data)
+
+ return cluster_reader
+
+
+def main():
+ # get arguments from env
+
+ # for local training
+ TRUTH = ["true", "True", "TRUE", "1", "yes", "Yes", "YES"]
+ cluster_train = os.getenv('PADDLE_CLUSTER_TRAIN', "False") in TRUTH
+ use_gpu = os.getenv('PADDLE_INIT_USE_GPU', "False")
+
+ if not cluster_train:
+ paddle.init(
+ use_gpu=use_gpu,
+ trainer_count=int(os.getenv("PADDLE_INIT_TRAINER_COUNT", "1")))
+ else:
+ paddle.init(
+ use_gpu=use_gpu,
+ trainer_count=int(os.getenv("PADDLE_INIT_TRAINER_COUNT", "1")),
+ port=int(os.getenv("PADDLE_INIT_PORT", "7164")),
+ ports_num=int(os.getenv("PADDLE_INIT_PORTS_NUM", "1")),
+ ports_num_for_sparse=int(
+ os.getenv("PADDLE_INIT_PORTS_NUM_FOR_SPARSE", "1")),
+ num_gradient_servers=int(
+ os.getenv("PADDLE_INIT_NUM_GRADIENT_SERVERS", "1")),
+ trainer_id=int(os.getenv("PADDLE_INIT_TRAINER_ID", "0")),
+ pservers=os.getenv("PADDLE_INIT_PSERVERS", "127.0.0.1"))
+ fn = open("thirdparty/wuyi_train_thdpty/word_dict.pickle", "r")
+ word_dict = pickle.load(fn)
+ fn.close()
+ dict_size = len(word_dict)
+ firstword = paddle.layer.data(
+ name="firstw", type=paddle.data_type.integer_value(dict_size))
+ secondword = paddle.layer.data(
+ name="secondw", type=paddle.data_type.integer_value(dict_size))
+ thirdword = paddle.layer.data(
+ name="thirdw", type=paddle.data_type.integer_value(dict_size))
+ fourthword = paddle.layer.data(
+ name="fourthw", type=paddle.data_type.integer_value(dict_size))
+ nextword = paddle.layer.data(
+ name="fifthw", type=paddle.data_type.integer_value(dict_size))
+
+ Efirst = wordemb(firstword)
+ Esecond = wordemb(secondword)
+ Ethird = wordemb(thirdword)
+ Efourth = wordemb(fourthword)
+
+ contextemb = paddle.layer.concat(input=[Efirst, Esecond, Ethird, Efourth])
+ hidden1 = paddle.layer.fc(input=contextemb,
+ size=hiddensize,
+ act=paddle.activation.Sigmoid(),
+ layer_attr=paddle.attr.Extra(drop_rate=0.5),
+ bias_attr=paddle.attr.Param(learning_rate=2),
+ param_attr=paddle.attr.Param(
+ initial_std=1. / math.sqrt(embsize * 8),
+ learning_rate=1))
+ predictword = paddle.layer.fc(input=hidden1,
+ size=dict_size,
+ bias_attr=paddle.attr.Param(learning_rate=2),
+ act=paddle.activation.Softmax())
+
+ def event_handler(event):
+ if isinstance(event, paddle.event.EndIteration):
+ if event.batch_id % 100 == 0:
+ result = trainer.test(
+ paddle.batch(
+ cluster_reader_cluster(cluster_test_file, node_id), 32))
+ print "Pass %d, Batch %d, Cost %f, %s, Testing metrics %s" % (
+ event.pass_id, event.batch_id, event.cost, event.metrics,
+ result.metrics)
+
+ cost = paddle.layer.classification_cost(input=predictword, label=nextword)
+ parameters = paddle.parameters.create(cost)
+ adagrad = paddle.optimizer.AdaGrad(
+ learning_rate=3e-3,
+ regularization=paddle.optimizer.L2Regularization(8e-4))
+ trainer = paddle.trainer.SGD(cost,
+ parameters,
+ adagrad,
+ is_local=not cluster_train)
+ trainer.train(
+ paddle.batch(cluster_reader_cluster(cluster_train_file, node_id), 32),
+ num_passes=30,
+ event_handler=event_handler)
+
+
+if __name__ == '__main__':
+ main()
diff --git a/doc/howto/usage/cluster/src/word2vec/prepare.py b/doc/howto/usage/cluster/src/word2vec/prepare.py
new file mode 100644
index 0000000000000000000000000000000000000000..24f5c5b26d37ea03de3ab4dc2d967a4bd009eef0
--- /dev/null
+++ b/doc/howto/usage/cluster/src/word2vec/prepare.py
@@ -0,0 +1,41 @@
+import paddle.v2 as paddle
+import tarfile
+import os
+import pickle
+
+SPLIT_COUNT = 3
+N = 5
+
+
+def file_len(fd):
+ for i, l in enumerate(fd):
+ pass
+ return i + 1
+
+
+def split_from_reader_by_line(filename, reader, split_count):
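+    # Dump every sample from the reader as one comma-separated line, then
+    # shard the file into split_count pieces using the "split" command.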
+ fn = open(filename, "w")
+ for batch_id, batch_data in enumerate(reader()):
+ batch_data_str = [str(d) for d in batch_data]
+ fn.write(",".join(batch_data_str))
+ fn.write("\n")
+ fn.close()
+
+ fn = open(filename, "r")
+ total_line_count = file_len(fn)
+ fn.close()
+ per_file_lines = total_line_count / split_count + 1
+ cmd = "split -d -a 5 -l %d %s %s-" % (per_file_lines, filename, filename)
+ os.system(cmd)
+
+
+word_dict = paddle.dataset.imikolov.build_dict()
+with open("word_dict.pickle", "w") as dict_f:
+ pickle.dump(word_dict, dict_f)
+
+split_from_reader_by_line("train.txt",
+ paddle.dataset.imikolov.train(word_dict, N),
+ SPLIT_COUNT)
+split_from_reader_by_line("test.txt",
+ paddle.dataset.imikolov.test(word_dict, N),
+ SPLIT_COUNT)
diff --git a/paddle/parameter/FirstOrderOptimizer.h b/paddle/parameter/FirstOrderOptimizer.h
index 895e8d6a63d1fad0ee7a6f5647402435d418b2f1..f157188a4f736319ea187052b90a17f8be9e9edb 100644
--- a/paddle/parameter/FirstOrderOptimizer.h
+++ b/paddle/parameter/FirstOrderOptimizer.h
@@ -265,6 +265,10 @@ public:
addParameterType(PARAMETER_SECOND_MOMENTUM);
}
+ virtual void startBatch(int64_t numSamplesProcessed) {
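+    // Refresh the learning rate from the learning rate schedule at the start of every batch.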
+ learningRate_ = calcLearningRate(numSamplesProcessed, pass_);
+ }
+
virtual void finishBatch() { ++step_; }
virtual void update(const VectorPtr vecs[],
diff --git a/paddle/scripts/cluster_train_v2/fabric/conf.py b/paddle/scripts/cluster_train_v2/fabric/conf.py
new file mode 100644
index 0000000000000000000000000000000000000000..e96503d093a4317df7bb006043eb42098f51b6f5
--- /dev/null
+++ b/paddle/scripts/cluster_train_v2/fabric/conf.py
@@ -0,0 +1,39 @@
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+HOSTS = [
+ "root@10.1.9.7",
+ "root@10.1.18.7",
+ "root@10.1.32.9",
+]
+'''
+workspace configuration
+'''
+#root dir for workspace, can be set as any directory with a real user account
+ROOT_DIR = "/root"
+'''
+network configuration
+'''
+#pserver nics
+PADDLE_NIC = "eth0"
+#pserver port
+PADDLE_PORT = 7164
+#pserver ports num
+PADDLE_PORTS_NUM = 1
+#pserver sparse ports num
+PADDLE_PORTS_NUM_FOR_SPARSE = 1
+#trainer whether use gpu
+PADDLE_USE_GPU = "False"
+#environments setting for all processes in cluster job
+LD_LIBRARY_PATH = "/usr/local/cuda/lib64:/usr/lib64"
diff --git a/paddle/scripts/cluster_train_v2/fabric/docker_cluster/Dockerfile b/paddle/scripts/cluster_train_v2/fabric/docker_cluster/Dockerfile
new file mode 100644
index 0000000000000000000000000000000000000000..6606c01265af1fa8009e67906a3dbbe5c95ebc0d
--- /dev/null
+++ b/paddle/scripts/cluster_train_v2/fabric/docker_cluster/Dockerfile
@@ -0,0 +1,11 @@
+FROM docker.paddlepaddlehub.com/paddle:0.10.0rc2
+RUN apt-get update && apt-get install -y openssh-server
+RUN mkdir /var/run/sshd
+
+RUN echo 'root:root' |chpasswd
+
+RUN sed -ri 's/^PermitRootLogin\s+.*/PermitRootLogin yes/' /etc/ssh/sshd_config
+RUN sed -ri 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config
+
+EXPOSE 22
+CMD ["/usr/sbin/sshd", "-D"]
diff --git a/paddle/scripts/cluster_train_v2/fabric/docker_cluster/ssh_servers.yaml b/paddle/scripts/cluster_train_v2/fabric/docker_cluster/ssh_servers.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..0784b2d1b8785796f94fff1607643218564fc126
--- /dev/null
+++ b/paddle/scripts/cluster_train_v2/fabric/docker_cluster/ssh_servers.yaml
@@ -0,0 +1,23 @@
+apiVersion: extensions/v1beta1
+kind: Deployment
+metadata:
+ name: ssh-servers
+spec:
+ replicas: 3
+ template:
+ metadata:
+ labels:
+ app: ssh-servers
+ spec:
+ containers:
+ - name: ssh-servers
+ image: docker.paddlepaddlehub.com/paddlessh
+ resources:
+ limits:
+ cpu: 500m
+ memory: 1Gi
+ requests:
+ cpu: 500m
+ memory: 1Gi
+ ports:
+ - containerPort: 22
diff --git a/paddle/scripts/cluster_train_v2/fabric/run.sh b/paddle/scripts/cluster_train_v2/fabric/run.sh
new file mode 100644
index 0000000000000000000000000000000000000000..f6324bcb136803ebc30e69bcdaa2f8725cb0ccba
--- /dev/null
+++ b/paddle/scripts/cluster_train_v2/fabric/run.sh
@@ -0,0 +1,14 @@
+#!/bin/bash
+
+python paddle.py \
+ --job_dispatch_package="/root/wuyi/fabric_submit/workspace" \
+ --dot_period=10 \
+ --ports_num_for_sparse=1 \
+ --log_period=50 \
+ --num_passes=5 \
+ --trainer_count=2 \
+ --saving_period=1 \
+ --local=0 \
+ --config=./trainer_config.py \
+ --save_dir=./output \
+ --use_gpu=0
diff --git a/paddle/scripts/cluster_train_v2/openmpi/docker_cluster/Dockerfile b/paddle/scripts/cluster_train_v2/openmpi/docker_cluster/Dockerfile
new file mode 100644
index 0000000000000000000000000000000000000000..1a2d19e823541750830fcaa25f65b2f8e1ea2b49
--- /dev/null
+++ b/paddle/scripts/cluster_train_v2/openmpi/docker_cluster/Dockerfile
@@ -0,0 +1,43 @@
+# Build this image: docker build -t mpi .
+#
+
+FROM paddledev/paddle:0.10.0rc3
+
+ENV DEBIAN_FRONTEND noninteractive
+
+RUN apt-get update -y && \
+ apt-get upgrade -y && \
+ apt-get install -y openssh-server zip unzip vim sudo \
+gcc gfortran openmpi-checkpoint binutils wget curl git openmpi-bin openmpi-common libopenmpi-dev && \
+pip install mpi4py numpy virtualenv scipy matplotlib lxml sqlalchemy suds ipython obspy && \
+mkdir /var/run/sshd && \
+echo 'root:tutorial' | chpasswd && \
+sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config && \
+# SSH login fix. Otherwise user is kicked off after login
+sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd && \
+echo "export VISIBLE=now" >> /etc/profile && \
+adduser --disabled-password --gecos "" tutorial && \
+echo "tutorial ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers && \
+mkdir /home/tutorial/.ssh/
+
+ENV HOME /home/tutorial
+ENV NOTVISIBLE "in users profile"
+
+# ------------------------------------------------------------
+# Set-Up SSH with our Github deploy key
+# ------------------------------------------------------------
+
+ADD ssh/config /home/tutorial/.ssh/config
+ADD ssh/id_rsa.mpi /home/tutorial/.ssh/id_rsa
+ADD ssh/id_rsa.mpi.pub /home/tutorial/.ssh/id_rsa.pub
+ADD ssh/id_rsa.mpi.pub /home/tutorial/.ssh/authorized_keys
+
+#---------------------------------------------------------------
+#LD_LIBRARY_PATH
+#---------------------------------------------------------------
+
+RUN export LD_LIBRARY_PATH=/usr/lib/openmpi/lib/
+
+WORKDIR /home/tutorial
+EXPOSE 22
+CMD ["/usr/sbin/sshd", "-D"]
diff --git a/paddle/scripts/cluster_train_v2/openmpi/docker_cluster/head.yaml b/paddle/scripts/cluster_train_v2/openmpi/docker_cluster/head.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..34835e5eb8d7cb92ad3cf7758a47c9e565a7dcf6
--- /dev/null
+++ b/paddle/scripts/cluster_train_v2/openmpi/docker_cluster/head.yaml
@@ -0,0 +1,25 @@
+apiVersion: extensions/v1beta1
+kind: Deployment
+metadata:
+ name: mpi-header
+ labels:
+ app: mpi-header
+spec:
+ replicas: 1
+ template:
+ metadata:
+ labels:
+ app: mpi-header
+ spec:
+ containers:
+ - image: typhoon1986/paddle-openmpi
+ name : mpi-header
+ resources:
+ limits:
+ cpu: 500m
+ memory: 2Gi
+ requests:
+ cpu: 500m
+ memory: 2Gi
+ ports:
+ - containerPort: 22
diff --git a/paddle/scripts/cluster_train_v2/openmpi/docker_cluster/mpi-nodes.yaml b/paddle/scripts/cluster_train_v2/openmpi/docker_cluster/mpi-nodes.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..2fd5cb4d44a25efac68dd8c9195dea9fd8f84a26
--- /dev/null
+++ b/paddle/scripts/cluster_train_v2/openmpi/docker_cluster/mpi-nodes.yaml
@@ -0,0 +1,26 @@
+apiVersion: extensions/v1beta1
+kind: Deployment
+metadata:
+ name: mpi-nodes
+ labels:
+ app: mpi-nodes
+spec:
+ replicas: 3
+ template:
+ metadata:
+ labels:
+ app: mpi-nodes
+ spec:
+ containers:
+ - image: typhoon1986/paddle-openmpi
+ name : mpi-nodes
+ resources:
+ limits:
+ cpu: 500m
+ memory: 2Gi
+ requests:
+ cpu: 500m
+ memory: 2Gi
+ ports:
+ - containerPort: 22
+ imagePullPolicy: Always
diff --git a/paddle/scripts/cluster_train_v2/openmpi/docker_cluster/ssh/config b/paddle/scripts/cluster_train_v2/openmpi/docker_cluster/ssh/config
new file mode 100644
index 0000000000000000000000000000000000000000..a9ecad07c39e4a9d6f0572d6cbf77795d99681f2
--- /dev/null
+++ b/paddle/scripts/cluster_train_v2/openmpi/docker_cluster/ssh/config
@@ -0,0 +1 @@
+StrictHostKeyChecking no
diff --git a/paddle/scripts/cluster_train_v2/openmpi/docker_cluster/ssh/id_rsa.mpi b/paddle/scripts/cluster_train_v2/openmpi/docker_cluster/ssh/id_rsa.mpi
new file mode 100644
index 0000000000000000000000000000000000000000..23768343edf5258cf525523d471f67071a24f5de
--- /dev/null
+++ b/paddle/scripts/cluster_train_v2/openmpi/docker_cluster/ssh/id_rsa.mpi
@@ -0,0 +1,27 @@
+-----BEGIN RSA PRIVATE KEY-----
+MIIEogIBAAKCAQEA7PWLZmgdJ508dD15T6+xqGDvL9Ehzo9SgsnN6xJ+qpUvvOi4
+1axW0AqR4MnPTg/uuvk+x4tUpuufOW4w22UTGjsdvmIVWa9ujLtcRiN3YPY+SU+Y
+O5FfqKg7r/hBn+/GMcSoffwSs7vVgmhBBnp/mJh2O1cOAFZEe98/47mbg3/kHBAk
+36NOQktaU3l48B38EhBTnjWfcEGm1HcTRPFxXV5Wiko6ZhKFEuHcTVKng4ROtUqE
+mgHyI0aB7TAxg4na0ejItsYWEPWGeDOw6ms/4MwylxNosWzHFPW9p4zgLCLNr+b6
+bDDfYKjXZflAuTQtQhLmJUwD9uuYLAijpSE2fQIDAQABAoIBADgcgRET8Gt0CV/B
+OtvKz/f+VEVvcWD3gWNlJDTZIVOFllNWjIZUlA4ZoqenQkbK8Q4nfV1FOht4yjCQ
+TlN1oMtiWk297i5Zo4UBzPzy4w774I39oh/g8dT/WXr2/5s+7SDV38xNh6Q2A34o
+79T35wUcfUrZ93/O7dKjb/6d8hx2FMha0wVKqY4lmG1lQE3bbx3kakec0PdvU5kO
+YHKlpqj3pMR7CpMa+4yL/iXFwWYmnK+uu+zw7JR7PwvH1CzrnvW438wjQ1QmYbSx
+mHHOE89X67Lsl5hn81qYWBhpwAlBwi1qscsE0cV9GcFyKqWFqZsj5coM9u3CRfvy
+lrWe1OUCgYEA+LBUFEd3Hxs4sFiYElJ8R9SAs1udaqPvAl01hTEijJLfYlMMVs/y
+rgNN7j22zjDak2f8QdyMJZX7EZdRmdYcHO0csYOwbYvalzcnwk+U3mxmdD3r4xSo
+DSvkJ70fogAqUlcVIg2re6fCmZVJQTvMQYTVEM8zQomJRt/Lb2esSfsCgYEA8+zv
+44aToe8uqiDs4w8guRW7LCDkTw4z4IVo9JUibIaPjaAs5bZEBXSB43EEywXCR75H
+fML0rU1PVvKh1rqcvZdVzm+XMWVr3asPk0sapaiHaTcmyZvJRDxxqbLFp0zRP1T6
+cCtXNFdHWU4KiuKrUi6cDyOKchpfkSZa4seiT+cCgYB+n4FgBfdQPlMB70oW4irn
+g/q32CjxuGCk6oKqu5bkzo+xB6obtavSEFqouIGQwO056tNVUY+GP7Rjg5GH663K
+yKw4cl3tmS0Gm43B8TVSfw03mKO3rrfWZQe5eCFYIg9qd26KNT2gK435FzsCXQkm
+PxUhhu6JrW/ZR2/U3Iur6wKBgADrWLAb1ryagSuE+j+U1AO+kDkHWrTtkcZ72jxp
+v3p3O11GSEUJXdJDcSXhTCpTuDq6/dv7hB6PFwh126RKicKxKlKf2wsFndV1Cpb8
+hnovW2tLGOtTmfuW2rrQAKyzvmolsNfxYd/BoHQ2thV16z1hDZeFA8WQUeHjKh6G
+sBbrAoGATdtQlaUxx4izua6k02ihkxx/cRYwDl2N8UDvDBHokS7vJFMX8b8NpsGg
+zMElnqSpu/pe/0UG7N2MtPF6uyMcX8AZzzcsRkiMkDvWJzYt8Jpf+Eyd/uryF+Yv
+yrXaOEY83tm6x/fny5ZaZmk8lNth7bfWywuTMkZLX3fYpWtIeE4=
+-----END RSA PRIVATE KEY-----
diff --git a/paddle/scripts/cluster_train_v2/openmpi/docker_cluster/ssh/id_rsa.mpi.pub b/paddle/scripts/cluster_train_v2/openmpi/docker_cluster/ssh/id_rsa.mpi.pub
new file mode 100644
index 0000000000000000000000000000000000000000..015f2b42e71920e00de090cbb1108d9a12ed5f0c
--- /dev/null
+++ b/paddle/scripts/cluster_train_v2/openmpi/docker_cluster/ssh/id_rsa.mpi.pub
@@ -0,0 +1 @@
+ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDs9YtmaB0nnTx0PXlPr7GoYO8v0SHOj1KCyc3rEn6qlS+86LjVrFbQCpHgyc9OD+66+T7Hi1Sm6585bjDbZRMaOx2+YhVZr26Mu1xGI3dg9j5JT5g7kV+oqDuv+EGf78YxxKh9/BKzu9WCaEEGen+YmHY7Vw4AVkR73z/juZuDf+QcECTfo05CS1pTeXjwHfwSEFOeNZ9wQabUdxNE8XFdXlaKSjpmEoUS4dxNUqeDhE61SoSaAfIjRoHtMDGDidrR6Mi2xhYQ9YZ4M7Dqaz/gzDKXE2ixbMcU9b2njOAsIs2v5vpsMN9gqNdl+UC5NC1CEuYlTAP265gsCKOlITZ9 oweidner@peahi
diff --git a/paddle/scripts/cluster_train_v2/openmpi/start_mpi_train.sh b/paddle/scripts/cluster_train_v2/openmpi/start_mpi_train.sh
new file mode 100644
index 0000000000000000000000000000000000000000..c645495448f9844de5ae9024b6a0f41452522765
--- /dev/null
+++ b/paddle/scripts/cluster_train_v2/openmpi/start_mpi_train.sh
@@ -0,0 +1,28 @@
+#!/bin/bash
+# General training configurations
+
+NICS=eth0
+PADDLE_INIT_PORT=7164
+PADDLE_INIT_PORTS_NUM=1
+PADDLE_INIT_PORTS_NUM_FOR_SPARSE=1
+PADDLE_INIT_PSERVERS=$(cat machines | sed -e ':a' -e 'N' -e '$!ba' -e 's/\n/,/g')
+PADDLE_INIT_USE_GPU=False
+
+PADDLE_INIT_NUM_GRADIENT_SERVERS=${OMPI_COMM_WORLD_SIZE}
+PADDLE_INIT_TRAINER_ID=${OMPI_COMM_WORLD_RANK}
+PADDLE_CLUSTER_TRAIN=True
+
+env
+
+# start pserver
+stdbuf -oL nohup paddle pserver --port=$PADDLE_INIT_PORT --ports_num=$PADDLE_INIT_PORTS_NUM \
+ --ports_num_for_sparse=$PADDLE_INIT_PORTS_NUM_FOR_SPARSE --nics=$NICS \
+ --comment=paddle_cluster_pserver \
+ --num_gradient_servers=$PADDLE_INIT_NUM_GRADIENT_SERVERS &> logs/pserver.log &
+
+# start trainer
+# NOTE: train.py will use the above environment variables as configuration
+python train.py &> logs/train.log
+
+# kill background pservers when train finishes
+ps -ef | grep pserver | awk '{print $2}' | xargs kill