diff --git a/doc/getstarted/index_cn.rst b/doc/getstarted/index_cn.rst
index 9f6ee25987d51dcca3a37cf0f62a70a5a5a2d89a..1dc141396b95bda776aeff87ac30fad6baf37bd2 100644
--- a/doc/getstarted/index_cn.rst
+++ b/doc/getstarted/index_cn.rst
@@ -1,61 +1,8 @@
Getting Started
===============
-.. _quick_install:
-
-Quick Install
-+++++++++++++
-
-PaddlePaddle can be installed quickly with pip. It currently supports CentOS 6 or later, Ubuntu 14.04 or later, and macOS 10.12, with Python 2.7 installed.
-Run the following command to install the cpu_avx_openblas version:
-
- .. code-block:: bash
-
- pip install paddlepaddle
-
-To install the GPU version (cuda7.5_cudnn5_avx_openblas), run:
-
- .. code-block:: bash
-
- pip install paddlepaddle-gpu
-
-For more detailed installation and build instructions, see:
-
-.. toctree::
- :maxdepth: 1
-
- build_and_install/index_cn.rst
-
-.. _quick_start:
-
-Quick Start
-+++++++++++
-
-Create a file called housing.py and paste in the following Python code:
-
- .. code-block:: python
-
- import paddle.v2 as paddle
-
- # Initialize PaddlePaddle.
- paddle.init(use_gpu=False, trainer_count=1)
-
- # Configure the neural network.
- x = paddle.layer.data(name='x', type=paddle.data_type.dense_vector(13))
- y_predict = paddle.layer.fc(input=x, size=1, act=paddle.activation.Linear())
-
- # Infer using provided test data.
- probs = paddle.infer(
- output_layer=y_predict,
- parameters=paddle.dataset.uci_housing.model(),
- input=[item for item in paddle.dataset.uci_housing.test()()])
-
- for i in xrange(len(probs)):
- print 'Predicted price: ${:,.2f}'.format(probs[i][0] * 1000)
-
-Run :code:`python housing.py` and voila! It should print out a list of predictions for the test housing data.
-
.. toctree::
:maxdepth: 1
+ quickstart_cn.rst
concepts/use_concepts_cn.rst
diff --git a/doc/getstarted/index_en.rst b/doc/getstarted/index_en.rst
index 063d9d880c82550f7f5d47d3d0b1fff59865bca7..c680e1903750117073bee64cb4d4f4ccfff5ac3d 100644
--- a/doc/getstarted/index_en.rst
+++ b/doc/getstarted/index_en.rst
@@ -1,61 +1,7 @@
GET STARTED
============
-.. _quick_install:
-
-Quick Install
-----------------------
-
-You can use pip to install PaddlePaddle with a single command. It supports
-CentOS 6 or later, Ubuntu 14.04 or later, and macOS 10.12, with Python 2.7 installed.
-Simply run the following command to install the cpu_avx_openblas version:
-
- .. code-block:: bash
-
- pip install paddlepaddle
-
-If you need to install the GPU version (cuda7.5_cudnn5_avx_openblas), run:
-
- .. code-block:: bash
-
- pip install paddlepaddle-gpu
-
-For more details about installation and build:
-
.. toctree::
:maxdepth: 1
- build_and_install/index_en.rst
-
-
-.. _quick_start:
-
-Quick Start
-+++++++++++
-
-Create a new file called housing.py, and paste this Python
-code:
-
-
- .. code-block:: python
-
- import paddle.v2 as paddle
-
- # Initialize PaddlePaddle.
- paddle.init(use_gpu=False, trainer_count=1)
-
- # Configure the neural network.
- x = paddle.layer.data(name='x', type=paddle.data_type.dense_vector(13))
- y_predict = paddle.layer.fc(input=x, size=1, act=paddle.activation.Linear())
-
- # Infer using provided test data.
- probs = paddle.infer(
- output_layer=y_predict,
- parameters=paddle.dataset.uci_housing.model(),
- input=[item for item in paddle.dataset.uci_housing.test()()])
-
- for i in xrange(len(probs)):
- print 'Predicted price: ${:,.2f}'.format(probs[i][0] * 1000)
-
-Run :code:`python housing.py` and voila! It should print out a list of predictions
-for the test housing data.
+ quickstart_en.rst
diff --git a/doc/getstarted/quickstart_cn.rst b/doc/getstarted/quickstart_cn.rst
new file mode 100644
index 0000000000000000000000000000000000000000..51dd00f1e806e6423afe3ce53d80d53a187d2ca0
--- /dev/null
+++ b/doc/getstarted/quickstart_cn.rst
@@ -0,0 +1,41 @@
+Quick Start
+===========
+
+PaddlePaddle can be installed quickly with pip. It currently supports CentOS 6 or later, Ubuntu 14.04 or later, and macOS 10.12, with Python 2.7 installed.
+Run the following command to install the cpu_avx_openblas version:
+
+ .. code-block:: bash
+
+ pip install paddlepaddle
+
+To install the GPU version (cuda7.5_cudnn5_avx_openblas), run:
+
+ .. code-block:: bash
+
+ pip install paddlepaddle-gpu
+
+For more detailed installation and build instructions, see :ref:`install_steps`.
+
+Create a file called housing.py and paste in the following Python code:
+
+ .. code-block:: python
+
+ import paddle.v2 as paddle
+
+ # Initialize PaddlePaddle.
+ paddle.init(use_gpu=False, trainer_count=1)
+
+ # Configure the neural network.
+ x = paddle.layer.data(name='x', type=paddle.data_type.dense_vector(13))
+ y_predict = paddle.layer.fc(input=x, size=1, act=paddle.activation.Linear())
+
+ # Infer using provided test data.
+ probs = paddle.infer(
+ output_layer=y_predict,
+ parameters=paddle.dataset.uci_housing.model(),
+ input=[item for item in paddle.dataset.uci_housing.test()()])
+
+ for i in xrange(len(probs)):
+ print 'Predicted price: ${:,.2f}'.format(probs[i][0] * 1000)
+
+Run :code:`python housing.py` and voila! It should print out a list of predictions for the test housing data.
diff --git a/doc/getstarted/quickstart_en.rst b/doc/getstarted/quickstart_en.rst
new file mode 100644
index 0000000000000000000000000000000000000000..d1bcf82ea071e2c53760a5ccf6a5074a3ac0abd5
--- /dev/null
+++ b/doc/getstarted/quickstart_en.rst
@@ -0,0 +1,45 @@
+Quick Start
+============
+
+You can use pip to install PaddlePaddle with a single command. It supports
+CentOS 6 or later, Ubuntu 14.04 or later, and macOS 10.12, with Python 2.7 installed.
+Simply run the following command to install the cpu_avx_openblas version:
+
+ .. code-block:: bash
+
+ pip install paddlepaddle
+
+If you need to install the GPU version (cuda7.5_cudnn5_avx_openblas), run:
+
+ .. code-block:: bash
+
+ pip install paddlepaddle-gpu
+
+For more details about installation and build, see :ref:`install_steps`.
+
+Create a new file called housing.py, and paste in this Python code:
+
+ .. code-block:: python
+
+ import paddle.v2 as paddle
+
+ # Initialize PaddlePaddle.
+ paddle.init(use_gpu=False, trainer_count=1)
+
+ # Configure the neural network.
+ x = paddle.layer.data(name='x', type=paddle.data_type.dense_vector(13))
+ y_predict = paddle.layer.fc(input=x, size=1, act=paddle.activation.Linear())
+
+ # Infer using provided test data.
+ probs = paddle.infer(
+ output_layer=y_predict,
+ parameters=paddle.dataset.uci_housing.model(),
+ input=[item for item in paddle.dataset.uci_housing.test()()])
+
+ for i in xrange(len(probs)):
+ print 'Predicted price: ${:,.2f}'.format(probs[i][0] * 1000)
+
+Run :code:`python housing.py` and voila! It should print out a list of predictions
+for the test housing data.
diff --git a/doc/howto/cluster/cluster_train_cn.md b/doc/howto/cluster/cmd_argument_cn.md
similarity index 56%
rename from doc/howto/cluster/cluster_train_cn.md
rename to doc/howto/cluster/cmd_argument_cn.md
index 0f3db59607fb6b43da01f5fdb46949087517ed6c..5c575dd5b53f6e4ea025a8fbaebdb2d1a1f1c9ed 100644
--- a/doc/howto/cluster/cluster_train_cn.md
+++ b/doc/howto/cluster/cmd_argument_cn.md
@@ -1,41 +1,7 @@
-# Distributed Training
-
-
-## Overview
-
-This article explains how to run distributed training with PaddlePaddle on different cluster frameworks. The architecture of a distributed training job is shown in the figure below:
-
-
-
-- Data shard: the data used to train the neural network is split into multiple partitions, with one partition for each trainer.
-- Trainer: after starting, each trainer reads its partition of the data, runs the "forward" and "backward" computation of the neural network, and communicates with the parameter servers. After training on a certain amount of data, it uploads the computed gradients and downloads the optimized, updated parameters.
-- Parameter server: each parameter server stores only a portion of the network's parameters. It receives the gradients uploaded by the trainers, performs the parameter optimization update, and sends the updated parameters back to each trainer.
-
-In this way, the distributed cooperation of trainers and parameter servers carries out SGD training of the neural network. PaddlePaddle supports both synchronous stochastic gradient descent (SGD) and asynchronous SGD.
-
-When training a neural network with synchronous SGD, PaddlePaddle uses a synchronization barrier so that gradients are submitted and parameters are updated in an ordered fashion. Asynchronous SGD does not wait for all trainers to submit their gradients before updating the parameters, which greatly improves parallelism: parameter servers do not depend on each other and receive gradients and update parameters concurrently; a parameter server does not wait for every trainer to submit gradients before moving on; and trainers do not depend on each other and train the model in parallel. Although asynchronous SGD improves the parallelism of parameter updates, it cannot guarantee that parameters stay in sync: at any moment the parameters stored on one parameter server may be newer than those on another, so compared with synchronous SGD the gradients carry more noise.
-
-
-## Preparations
-
-1. Prepare your compute cluster. A cluster usually consists of a group of Linux servers (from a few machines up to thousands), connected over a LAN, where each server has a unique IP address (or a hostname resolvable through DNS). Each machine in the cluster is usually called a "node".
-1. Install PaddlePaddle on every node in the cluster. To use GPUs, you also need to install the corresponding GPU driver and CUDA on each node. See [build_and_install](http://www.paddlepaddle.org/docs/develop/documentation/zh/getstarted/build_and_install/index_cn.html) for the available installation methods. We recommend the [Docker](http://www.paddlepaddle.org/docs/develop/documentation/zh/getstarted/build_and_install/docker_install_cn.html) installation to get PaddlePaddle up and running quickly.
-
-After installation, run the command below to check the installed version (for Docker installs, enter the container first: `docker run -it paddlepaddle/paddle:[tag] /bin/bash`):
-```bash
-$ paddle version
-PaddlePaddle 0.10.0, compiled with
- with_avx: ON
- with_gpu: OFF
- with_double: OFF
- with_python: ON
- with_rdma: OFF
- with_timer: OFF
-```
+## Command-line arguments
-We'll take the code in `doc/howto/usage/cluster/src/word2vec` as an example to introduce distributed training with the PaddlePaddle v2 API.
+We'll take the code in `doc/howto/cluster/src/word2vec` as an example to introduce distributed training with the PaddlePaddle v2 API.
-## Command-line arguments
### Starting parameter server
Run the following command to start a parameter server, which will wait for trainers to connect:
```bash
@@ -167,22 +133,3 @@ test.txt-00002
- `train_data_dir`: the directory containing the training data. It can be mounted from distributed storage or downloaded locally before the job starts.
- `test_data_dir`: the directory containing the test data.
-
-## Using distributed computing platforms or tools
-
-PaddlePaddle can build distributed training jobs on a variety of distributed computing platforms, including:
- [Kubernetes](http://kubernetes.io): Google's open-source container-cluster orchestration framework, providing a complete solution for large-scale production clusters.
- [OpenMPI](https://www.open-mpi.org): a mature high-performance parallel computing framework.
- [Fabric](http://www.fabfile.org): a cluster management tool; you can use `Fabric` to write scripts that submit jobs and manage the cluster.
-
-For each cluster platform, we describe how to start and stop cluster jobs. All of the examples can be found in [cluster_train_v2](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/scripts/cluster_train_v2).
-
-When a training job is scheduled onto a cluster, the distributed computing platform usually provides the parameters the job needs, such as the node ID, node IP, and the total number of nodes, through an API or environment variables.
-
-## Running on different clusters
-
- - [Fabric clusters](fabric_cn.md)
- - [OpenMPI clusters](openmpi_cn.md)
- - [Kubernetes, single machine](k8s_cn.md)
- - [Kubernetes, distributed](k8s_distributed_cn.md)
- - [Kubernetes cluster training on AWS](k8s_aws_cn.md)
diff --git a/doc/howto/cluster/cluster_train_en.md b/doc/howto/cluster/cmd_argument_en.md
similarity index 58%
rename from doc/howto/cluster/cluster_train_en.md
rename to doc/howto/cluster/cmd_argument_en.md
index f9424f8f1a29fcf001c4e7976086512b22f6e858..06fd5717564c99e3bb46835a2bd5071dff665f23 100644
--- a/doc/howto/cluster/cluster_train_en.md
+++ b/doc/howto/cluster/cmd_argument_en.md
@@ -1,40 +1,7 @@
-# Distributed Training
-
-## Introduction
-
-In this article, we'll explain how to run distributed training jobs with PaddlePaddle on different types of clusters. The diagram below shows the main architecture of a distributed training job:
-
-
-
-- Data shard: the training data is split into multiple partitions, and each trainer uses one partition of the whole dataset for training.
-- Trainer: each trainer reads its data shard and trains the neural network. The trainer then uploads the calculated "gradients" to the parameter servers and waits for the parameters to be optimized on the parameter server side. When that finishes, the trainer downloads the optimized parameters and continues training.
-- Parameter server: every parameter server stores part of the whole neural network's parameters. The servers run the optimization calculations when gradients are uploaded from trainers and then send the updated parameters back to the trainers.
-
-PaddlePaddle supports both synchronous stochastic gradient descent (SGD) and asynchronous SGD.
-
-When training with synchronous SGD, PaddlePaddle uses an internal "synchronization barrier" which makes gradient updates and parameter downloads happen in strict order. On the other hand, asynchronous SGD won't wait for all trainers to finish uploading at a single step, which increases the parallelism of distributed training: parameter servers do not depend on each other and optimize parameters concurrently; parameter servers do not wait for trainers, so trainers also work concurrently. But asynchronous SGD introduces more randomness and noise into the gradients.
-
-## Preparations
-1. Prepare your computer cluster. It's normally a bunch of Linux servers connected by LAN. Each server will be assigned a unique IP address. The computers in the cluster can be called "nodes".
-2. Install PaddlePaddle on every node. If you are going to take advantage of GPU cards, you'll also need to install the proper drivers and CUDA libraries. To install PaddlePaddle please read [this build and install](http://www.paddlepaddle.org/docs/develop/documentation/en/getstarted/build_and_install/index_en.html) document. We strongly recommend using [Docker installation](http://www.paddlepaddle.org/docs/develop/documentation/en/getstarted/build_and_install/docker_install_en.html).
-
-After installation, you can check the version by typing the below command (run a docker container if using docker: `docker run -it paddlepaddle/paddle:[tag] /bin/bash`):
-
-```bash
-$ paddle version
-PaddlePaddle 0.10.0rc, compiled with
- with_avx: ON
- with_gpu: OFF
- with_double: OFF
- with_python: ON
- with_rdma: OFF
- with_timer: OFF
-```
-
-We'll take `doc/howto/usage/cluster/src/word2vec` as an example to introduce distributed training using PaddlePaddle v2 API.
-
## Command-line arguments
+We'll take `doc/howto/cluster/src/word2vec` as an example to introduce distributed training using the PaddlePaddle v2 API.
+
### Starting parameter server
Type the below command to start a parameter server which will wait for trainers to connect:
@@ -171,21 +138,3 @@ Your workspace may looks like:
- `train_data_dir`: the directory containing the training data. Mount it from a storage service or copy the training data here before the job starts.
- `test_data_dir`: the directory containing the test data.
-
-## Use cluster platforms or cluster management tools
-
-PaddlePaddle supports running jobs on several platforms including:
-- [Kubernetes](http://kubernetes.io) open-source system for automating deployment, scaling, and management of containerized applications from Google.
-- [OpenMPI](https://www.open-mpi.org) Mature high performance parallel computing framework.
-- [Fabric](http://www.fabfile.org) A cluster management tool. Write scripts to submit jobs or manage the cluster.
-
-We'll introduce cluster job management on these platforms. The examples can be found under [cluster_train_v2](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/scripts/cluster_train_v2).
-
-These cluster platforms provide an API or environment variables for the training processes when the job is dispatched to different nodes, such as the node ID, node IP, or total number of nodes.
-
-## Use different clusters
-
- - [fabric](fabric_en.md)
- - [openmpi](openmpi_en.md)
- - [kubernetes](k8s_en.md)
- - [kubernetes on AWS](k8s_aws_en.md)
diff --git a/doc/howto/cluster/index_cn.rst b/doc/howto/cluster/index_cn.rst
new file mode 100644
index 0000000000000000000000000000000000000000..c68b2655b65b192814b94f0013fa92b0733b9afa
--- /dev/null
+++ b/doc/howto/cluster/index_cn.rst
@@ -0,0 +1,10 @@
+Distributed Training
+====================
+
+.. toctree::
+ :maxdepth: 1
+
+ introduction_cn.md
+ preparations_cn.md
+ cmd_argument_cn.md
+ multi_cluster/index_cn.rst
diff --git a/doc/howto/cluster/index_en.rst b/doc/howto/cluster/index_en.rst
new file mode 100644
index 0000000000000000000000000000000000000000..af957e06cd7930ce63569a1bafdde47a1d34eb69
--- /dev/null
+++ b/doc/howto/cluster/index_en.rst
@@ -0,0 +1,10 @@
+Distributed Training
+====================
+
+.. toctree::
+ :maxdepth: 1
+
+ introduction_en.md
+ preparations_en.md
+ cmd_argument_en.md
+ multi_cluster/index_en.rst
diff --git a/doc/howto/cluster/introduction_cn.md b/doc/howto/cluster/introduction_cn.md
new file mode 100644
index 0000000000000000000000000000000000000000..562008a898414a6566d74d08cfeb18fb9d57582a
--- /dev/null
+++ b/doc/howto/cluster/introduction_cn.md
@@ -0,0 +1,13 @@
+## Overview
+
+This section explains how to run distributed training with PaddlePaddle on different cluster frameworks. The architecture of a distributed training job is shown in the figure below:
+
+
+
+- Data shard: the data used to train the neural network is split into multiple partitions, with one partition for each trainer.
+- Trainer: after starting, each trainer reads its partition of the data, runs the "forward" and "backward" computation of the neural network, and communicates with the parameter servers. After training on a certain amount of data, it uploads the computed gradients and downloads the optimized, updated parameters.
+- Parameter server: each parameter server stores only a portion of the network's parameters. It receives the gradients uploaded by the trainers, performs the parameter optimization update, and sends the updated parameters back to each trainer.
+
+In this way, the distributed cooperation of trainers and parameter servers carries out SGD training of the neural network. PaddlePaddle supports both synchronous stochastic gradient descent (SGD) and asynchronous SGD.
+
+When training a neural network with synchronous SGD, PaddlePaddle uses a synchronization barrier so that gradients are submitted and parameters are updated in an ordered fashion. Asynchronous SGD does not wait for all trainers to submit their gradients before updating the parameters, which greatly improves parallelism: parameter servers do not depend on each other and receive gradients and update parameters concurrently; a parameter server does not wait for every trainer to submit gradients before moving on; and trainers do not depend on each other and train the model in parallel. Although asynchronous SGD improves the parallelism of parameter updates, it cannot guarantee that parameters stay in sync: at any moment the parameters stored on one parameter server may be newer than those on another, so compared with synchronous SGD the gradients carry more noise.
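+
+The contrast between the two modes can be sketched in a few lines of plain Python. This is a minimal, framework-agnostic illustration of the two update rules only, not PaddlePaddle's actual implementation; the toy parameter vector and the function names are hypothetical.
+
+```python
+import threading
+
+# Toy parameter "server": one shared parameter vector protected by a lock.
+parameters = [0.0] * 4
+lock = threading.Lock()
+
+def apply_sgd(gradients, learning_rate=0.01):
+    # One SGD step: w <- w - lr * grad.
+    with lock:
+        for i in range(len(parameters)):
+            parameters[i] -= learning_rate * gradients[i]
+
+def synchronous_round(gradients_from_all_trainers):
+    # Synchronous SGD: a barrier collects one gradient from every trainer,
+    # the gradients are averaged, and the parameters are updated exactly once.
+    n = len(gradients_from_all_trainers)
+    averaged = [sum(g[i] for g in gradients_from_all_trainers) / n
+                for i in range(len(parameters))]
+    apply_sgd(averaged)
+
+def asynchronous_submit(gradients_from_one_trainer):
+    # Asynchronous SGD: each gradient is applied as soon as it arrives, so
+    # updates interleave and a trainer may read slightly stale parameters.
+    apply_sgd(gradients_from_one_trainer)
+```
+
+In the synchronous case every trainer waits at the barrier, which is what keeps the parameters consistent everywhere; dropping that wait is exactly what makes the asynchronous variant more parallel but noisier.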
diff --git a/doc/howto/cluster/introduction_en.md b/doc/howto/cluster/introduction_en.md
new file mode 100644
index 0000000000000000000000000000000000000000..eb70d7cf35ab729e0da4c6a3a2e732c26905f584
--- /dev/null
+++ b/doc/howto/cluster/introduction_en.md
@@ -0,0 +1,13 @@
+## Introduction
+
+In this section, we'll explain how to run distributed training jobs with PaddlePaddle on different types of clusters. The diagram below shows the main architecture of a distributed training job:
+
+
+
+- Data shard: the training data is split into multiple partitions, and each trainer uses one partition of the whole dataset for training.
+- Trainer: each trainer reads its data shard and trains the neural network. The trainer then uploads the calculated "gradients" to the parameter servers and waits for the parameters to be optimized on the parameter server side. When that finishes, the trainer downloads the optimized parameters and continues training.
+- Parameter server: every parameter server stores part of the whole neural network's parameters. The servers run the optimization calculations when gradients are uploaded from trainers and then send the updated parameters back to the trainers.
+
+PaddlePaddle supports both synchronous stochastic gradient descent (SGD) and asynchronous SGD.
+
+When training with synchronous SGD, PaddlePaddle uses an internal "synchronization barrier" which makes gradient updates and parameter downloads happen in strict order. On the other hand, asynchronous SGD won't wait for all trainers to finish uploading at a single step, which increases the parallelism of distributed training: parameter servers do not depend on each other and optimize parameters concurrently; parameter servers do not wait for trainers, so trainers also work concurrently. But asynchronous SGD introduces more randomness and noise into the gradients.
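+
+The contrast between the two modes can be sketched in a few lines of plain Python. This is a minimal, framework-agnostic illustration of the two update rules only, not PaddlePaddle's actual implementation; the toy parameter vector and the function names are hypothetical.
+
+```python
+import threading
+
+# Toy parameter "server": one shared parameter vector protected by a lock.
+parameters = [0.0] * 4
+lock = threading.Lock()
+
+def apply_sgd(gradients, learning_rate=0.01):
+    # One SGD step: w <- w - lr * grad.
+    with lock:
+        for i in range(len(parameters)):
+            parameters[i] -= learning_rate * gradients[i]
+
+def synchronous_round(gradients_from_all_trainers):
+    # Synchronous SGD: a barrier collects one gradient from every trainer,
+    # the gradients are averaged, and the parameters are updated exactly once.
+    n = len(gradients_from_all_trainers)
+    averaged = [sum(g[i] for g in gradients_from_all_trainers) / n
+                for i in range(len(parameters))]
+    apply_sgd(averaged)
+
+def asynchronous_submit(gradients_from_one_trainer):
+    # Asynchronous SGD: each gradient is applied as soon as it arrives, so
+    # updates interleave and a trainer may read slightly stale parameters.
+    apply_sgd(gradients_from_one_trainer)
+```
+
+In the synchronous case every trainer waits at the barrier, which is what keeps the parameters consistent everywhere; dropping that wait is exactly what makes the asynchronous variant more parallel but noisier.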
diff --git a/doc/howto/cluster/fabric_cn.md b/doc/howto/cluster/multi_cluster/fabric_cn.md
similarity index 100%
rename from doc/howto/cluster/fabric_cn.md
rename to doc/howto/cluster/multi_cluster/fabric_cn.md
diff --git a/doc/howto/cluster/fabric_en.md b/doc/howto/cluster/multi_cluster/fabric_en.md
similarity index 100%
rename from doc/howto/cluster/fabric_en.md
rename to doc/howto/cluster/multi_cluster/fabric_en.md
diff --git a/doc/howto/cluster/multi_cluster/index_cn.rst b/doc/howto/cluster/multi_cluster/index_cn.rst
new file mode 100644
index 0000000000000000000000000000000000000000..ef56b6ddb38e59f20f7248de1ceb952c7627ce76
--- /dev/null
+++ b/doc/howto/cluster/multi_cluster/index_cn.rst
@@ -0,0 +1,20 @@
+Running on Different Clusters
+=============================
+
+PaddlePaddle can build distributed training jobs on a variety of distributed computing platforms, including:
+
+- `Kubernetes <http://kubernetes.io>`_: Google's open-source container-cluster orchestration framework, providing a complete solution for large-scale production clusters.
+- `OpenMPI <https://www.open-mpi.org>`_: a mature high-performance parallel computing framework.
+- `Fabric <http://www.fabfile.org>`_: a cluster management tool; you can use ``Fabric`` to write scripts that submit jobs and manage the cluster.
+
+For each cluster platform, we describe how to start and stop cluster jobs. All of the examples can be found in `cluster_train_v2 <https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/scripts/cluster_train_v2>`_.
+
+When a training job is scheduled onto a cluster, the distributed computing platform usually provides the parameters the job needs, such as the node ID, node IP, and the total number of nodes, through an API or environment variables. A launcher script can read these values to configure each trainer, as in the sketch below.
+
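+As a minimal sketch, a launcher script could read such variables before handing control to the trainer (Python 2.7, like the rest of these docs). The variable names below are hypothetical; each platform defines its own equivalents.
+
+.. code-block:: python
+
+    import os
+
+    # Hypothetical variable names; Kubernetes, OpenMPI and Fabric each
+    # expose their own equivalents.
+    trainer_id = int(os.getenv("TRAINER_ID", "0"))
+    trainer_count = int(os.getenv("TRAINER_COUNT", "1"))
+    pserver_ips = os.getenv("PSERVER_IPS", "127.0.0.1").split(",")
+
+    print "trainer %d of %d, parameter servers: %s" % (
+        trainer_id, trainer_count, ", ".join(pserver_ips))
+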
+.. toctree::
+ :maxdepth: 1
+
+ fabric_cn.md
+ openmpi_cn.md
+ k8s_cn.md
+ k8s_distributed_cn.md
+ k8s_aws_cn.md
diff --git a/doc/howto/cluster/multi_cluster/index_en.rst b/doc/howto/cluster/multi_cluster/index_en.rst
new file mode 100644
index 0000000000000000000000000000000000000000..dac7aaef085c80851c1bbb89250faf2151de4ca6
--- /dev/null
+++ b/doc/howto/cluster/multi_cluster/index_en.rst
@@ -0,0 +1,19 @@
+Use different clusters
+======================
+
+PaddlePaddle supports running jobs on several platforms, including:
+
+- `Kubernetes <http://kubernetes.io>`_: an open-source system from Google for automating the deployment, scaling, and management of containerized applications.
+- `OpenMPI <https://www.open-mpi.org>`_: a mature high-performance parallel computing framework.
+- `Fabric <http://www.fabfile.org>`_: a cluster management tool; write scripts to submit jobs or manage the cluster.
+
+We'll introduce cluster job management on these platforms. The examples can be found under `cluster_train_v2 <https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/scripts/cluster_train_v2>`_.
+
+These cluster platforms provide an API or environment variables for the training processes when the job is dispatched to different nodes, such as the node ID, node IP, and the total number of nodes. A launcher script can read these values to configure each trainer, as in the sketch below.
+
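+As a minimal sketch, a launcher script could read such variables before handing control to the trainer (Python 2.7, like the rest of these docs). The variable names below are hypothetical; each platform defines its own equivalents.
+
+.. code-block:: python
+
+    import os
+
+    # Hypothetical variable names; Kubernetes, OpenMPI and Fabric each
+    # expose their own equivalents.
+    trainer_id = int(os.getenv("TRAINER_ID", "0"))
+    trainer_count = int(os.getenv("TRAINER_COUNT", "1"))
+    pserver_ips = os.getenv("PSERVER_IPS", "127.0.0.1").split(",")
+
+    print "trainer %d of %d, parameter servers: %s" % (
+        trainer_id, trainer_count, ", ".join(pserver_ips))
+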
+.. toctree::
+ :maxdepth: 1
+
+ fabric_en.md
+ openmpi_en.md
+ k8s_en.md
+ k8s_aws_en.md
diff --git a/doc/howto/cluster/k8s_aws_cn.md b/doc/howto/cluster/multi_cluster/k8s_aws_cn.md
similarity index 100%
rename from doc/howto/cluster/k8s_aws_cn.md
rename to doc/howto/cluster/multi_cluster/k8s_aws_cn.md
diff --git a/doc/howto/cluster/k8s_aws_en.md b/doc/howto/cluster/multi_cluster/k8s_aws_en.md
similarity index 100%
rename from doc/howto/cluster/k8s_aws_en.md
rename to doc/howto/cluster/multi_cluster/k8s_aws_en.md
diff --git a/doc/howto/cluster/k8s_cn.md b/doc/howto/cluster/multi_cluster/k8s_cn.md
similarity index 100%
rename from doc/howto/cluster/k8s_cn.md
rename to doc/howto/cluster/multi_cluster/k8s_cn.md
diff --git a/doc/howto/cluster/k8s_distributed_cn.md b/doc/howto/cluster/multi_cluster/k8s_distributed_cn.md
similarity index 100%
rename from doc/howto/cluster/k8s_distributed_cn.md
rename to doc/howto/cluster/multi_cluster/k8s_distributed_cn.md
diff --git a/doc/howto/cluster/k8s_en.md b/doc/howto/cluster/multi_cluster/k8s_en.md
similarity index 100%
rename from doc/howto/cluster/k8s_en.md
rename to doc/howto/cluster/multi_cluster/k8s_en.md
diff --git a/doc/howto/cluster/openmpi_cn.md b/doc/howto/cluster/multi_cluster/openmpi_cn.md
similarity index 100%
rename from doc/howto/cluster/openmpi_cn.md
rename to doc/howto/cluster/multi_cluster/openmpi_cn.md
diff --git a/doc/howto/cluster/openmpi_en.md b/doc/howto/cluster/multi_cluster/openmpi_en.md
similarity index 100%
rename from doc/howto/cluster/openmpi_en.md
rename to doc/howto/cluster/multi_cluster/openmpi_en.md
diff --git a/doc/howto/cluster/preparations_cn.md b/doc/howto/cluster/preparations_cn.md
new file mode 100644
index 0000000000000000000000000000000000000000..ce40697e703503b66f6306e15ebdb0ce1329991d
--- /dev/null
+++ b/doc/howto/cluster/preparations_cn.md
@@ -0,0 +1,16 @@
+## Preparations
+
+1. Prepare your compute cluster. A cluster usually consists of a group of Linux servers (from a few machines up to thousands), connected over a LAN, where each server has a unique IP address (or a hostname resolvable through DNS). Each machine in the cluster is usually called a "node".
+1. Install PaddlePaddle on every node in the cluster. To use GPUs, you also need to install the corresponding GPU driver and CUDA on each node. See [build_and_install](http://www.paddlepaddle.org/docs/develop/documentation/zh/getstarted/build_and_install/index_cn.html) for the available installation methods. We recommend the [Docker](http://www.paddlepaddle.org/docs/develop/documentation/zh/getstarted/build_and_install/docker_install_cn.html) installation to get PaddlePaddle up and running quickly.
+
+After installation, run the command below to check the installed version (for Docker installs, enter the container first: `docker run -it paddlepaddle/paddle:[tag] /bin/bash`):
+```bash
+$ paddle version
+PaddlePaddle 0.10.0, compiled with
+ with_avx: ON
+ with_gpu: OFF
+ with_double: OFF
+ with_python: ON
+ with_rdma: OFF
+ with_timer: OFF
+```
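+
+To check a whole cluster in one pass, you can loop over the nodes and run `paddle version` remotely. Below is a minimal sketch (Python 2.7, like the rest of these docs) assuming passwordless ssh access to every node; the node list is hypothetical and should be replaced with your own hosts.
+
+```python
+import subprocess
+
+# Hypothetical node list; replace with the IPs or hostnames of your cluster.
+nodes = ["192.168.0.11", "192.168.0.12", "192.168.0.13"]
+
+for node in nodes:
+    try:
+        # Run `paddle version` on the remote node over ssh.
+        output = subprocess.check_output(["ssh", node, "paddle", "version"])
+        first_line = output.splitlines()[0] if output else "(no output)"
+        print "%s: %s" % (node, first_line)
+    except subprocess.CalledProcessError:
+        print "%s: paddle is missing or failed to run" % node
+```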
diff --git a/doc/howto/cluster/preparations_en.md b/doc/howto/cluster/preparations_en.md
new file mode 100644
index 0000000000000000000000000000000000000000..4b77b293907ae0548134fc65ceed3aa0ed0b845d
--- /dev/null
+++ b/doc/howto/cluster/preparations_en.md
@@ -0,0 +1,17 @@
+## Preparations
+
+1. Prepare your computer cluster. It's normally a bunch of Linux servers connected by LAN. Each server will be assigned a unique IP address. The computers in the cluster can be called "nodes".
+2. Install PaddlePaddle on every node. If you are going to take advantage of GPU cards, you'll also need to install the proper drivers and CUDA libraries. To install PaddlePaddle please read [this build and install](http://www.paddlepaddle.org/docs/develop/documentation/en/getstarted/build_and_install/index_en.html) document. We strongly recommend using [Docker installation](http://www.paddlepaddle.org/docs/develop/documentation/en/getstarted/build_and_install/docker_install_en.html).
+
+After installation, you can check the version by typing the below command (run a docker container if using docker: `docker run -it paddlepaddle/paddle:[tag] /bin/bash`):
+
+```bash
+$ paddle version
+PaddlePaddle 0.10.0rc, compiled with
+ with_avx: ON
+ with_gpu: OFF
+ with_double: OFF
+ with_python: ON
+ with_rdma: OFF
+ with_timer: OFF
+```
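+
+To check a whole cluster in one pass, you can loop over the nodes and run `paddle version` remotely. Below is a minimal sketch (Python 2.7, like the rest of these docs) assuming passwordless ssh access to every node; the node list is hypothetical and should be replaced with your own hosts.
+
+```python
+import subprocess
+
+# Hypothetical node list; replace with the IPs or hostnames of your cluster.
+nodes = ["192.168.0.11", "192.168.0.12", "192.168.0.13"]
+
+for node in nodes:
+    try:
+        # Run `paddle version` on the remote node over ssh.
+        output = subprocess.check_output(["ssh", node, "paddle", "version"])
+        first_line = output.splitlines()[0] if output else "(no output)"
+        print "%s: %s" % (node, first_line)
+    except subprocess.CalledProcessError:
+        print "%s: paddle is missing or failed to run" % node
+```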
diff --git a/doc/howto/index_cn.rst b/doc/howto/index_cn.rst
index 37a34c113f31054a3dc3f80fc348245fb6d16a5c..dd39ef9e79d7dc7f710aacfdcf03e01d3d0d30ba 100644
--- a/doc/howto/index_cn.rst
+++ b/doc/howto/index_cn.rst
@@ -5,7 +5,7 @@
:maxdepth: 1
cmd_parameter/index_cn.rst
- cluster/cluster_train_cn.md
+ cluster/index_cn.rst
capi/index_cn.rst
rnn/index_cn.rst
optimization/gpu_profiling_cn.rst
diff --git a/doc/howto/index_en.rst b/doc/howto/index_en.rst
index 3ba76d6aad1c92c1757fc7c3ec378eb4e1aa7cb9..ae8b86f75b5de770312fb2fdc46db490a18e5ff6 100644
--- a/doc/howto/index_en.rst
+++ b/doc/howto/index_en.rst
@@ -5,6 +5,6 @@ HOW TO
:maxdepth: 1
cmd_parameter/index_en.rst
- cluster/cluster_train_en.md
+ cluster/index_en.rst
rnn/index_en.rst
optimization/gpu_profiling_en.rst