diff --git a/doc/v2/howto/capi/compile_paddle_lib_en.md b/doc/v2/howto/capi/compile_paddle_lib_en.md
index 11d69b9b79c1a41898d3060d3fe25a31330334a3..6212a3081116d988630706e83d2349dd200b73ab 100644
--- a/doc/v2/howto/capi/compile_paddle_lib_en.md
+++ b/doc/v2/howto/capi/compile_paddle_lib_en.md
@@ -1,3 +1,175 @@
## Install and Build
-TBD
+### Download & Install
+
+Download the latest C-API development package from the CI system and install it. You can find the required version in the table below:
+
+
+### From source
+
+Users can also build the C-API library from the PaddlePaddle source code with the following compilation options:
+
+| Options      | Value             |
+| ------------ | ----------------- |
+| WITH_C_API   | ON                |
+| WITH_PYTHON  | OFF (recommended) |
+| WITH_SWIG_PY | OFF (recommended) |
+| WITH_GOLANG  | OFF (recommended) |
+| WITH_GPU     | ON/OFF            |
+| WITH_MKL     | ON/OFF            |
+
+It is best to use the recommended values above to avoid linking against unnecessary libraries. Set the other compilation options as needed.
+
+Pull the latest code from GitHub with the following snippet and configure the compilation options (replace PADDLE_ROOT with the installation path of the PaddlePaddle C-API inference library):
+
+```shell
+PADDLE_ROOT=/path/of/capi
+git clone https://github.com/PaddlePaddle/Paddle.git
+cd Paddle
+mkdir build
+cd build
+cmake -DCMAKE_INSTALL_PREFIX=$PADDLE_ROOT \
+ -DCMAKE_BUILD_TYPE=Release \
+ -DWITH_C_API=ON \
+ -DWITH_SWIG_PY=OFF \
+ -DWITH_GOLANG=OFF \
+ -DWITH_PYTHON=OFF \
+ -DWITH_MKL=OFF \
+ -DWITH_GPU=OFF \
+ ..
+```
+
+After CMake generates the Makefile, run `make && make install`. After a successful compilation, everything required by the C-API, namely (1) the PaddlePaddle inference library and header files and (2) the third-party libraries and header files, will be installed into the `PADDLE_ROOT` directory.
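+
+For example (the `-j` parallel build level below is only an illustration; pick whatever suits your machine):
+
+```shell
+make -j$(nproc)
+make install
+```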
+
+If the compilation succeeds, you will see the following directory structure under `PADDLE_ROOT` (PaddlePaddle header files and libraries, plus third-party libraries and header files that may be needed depending on the linking method):
+
+```text
+├── include
+│   └── paddle
+│       ├── arguments.h
+│       ├── capi.h
+│       ├── capi_private.h
+│       ├── config.h
+│       ├── error.h
+│       ├── gradient_machine.h
+│       ├── main.h
+│       ├── matrix.h
+│       ├── paddle_capi.map
+│       └── vector.h
+├── lib
+│   ├── libpaddle_capi_engine.a
+│   ├── libpaddle_capi_layers.a
+│   ├── libpaddle_capi_shared.so
+│   └── libpaddle_capi_whole.a
+└── third_party
+    ├── gflags
+    │   ├── include
+    │   │   └── gflags
+    │   │       ├── gflags_completions.h
+    │   │       ├── gflags_declare.h
+    │   │       ...
+    │   └── lib
+    │       └── libgflags.a
+    ├── glog
+    │   ├── include
+    │   │   └── glog
+    │   │       ├── config.h
+    │   │       ...
+    │   └── lib
+    │       └── libglog.a
+    ├── openblas
+    │   ├── include
+    │   │   ├── cblas.h
+    │   │   ...
+    │   └── lib
+    │       ...
+    ├── protobuf
+    │   ├── include
+    │   │   └── google
+    │   │       └── protobuf
+    │   │           ...
+    │   └── lib
+    │       └── libprotobuf-lite.a
+    └── zlib
+        ├── include
+        │   ...
+        └── lib
+            ...
+
+```
+
+### Linking Description
+
+There are three linking methods; example link commands are given after this list:
+
+1. Linking with the dynamic library `libpaddle_capi_shared.so` (this is the most convenient way and is **recommended unless you have special requirements**). Note the following:
+   1. For a CPU-only build that uses `OpenBLAS`, you only need to link the single library `libpaddle_capi_shared.so` to develop a prediction program with the C-API.
+   1. For a CPU-only build that uses `MKL`, you also need to link the MKL libraries directly, because `MKL` ships its own dynamic libraries.
+   1. For a GPU build, the CUDA libraries are loaded dynamically when the prediction program runs; make sure the CUDA libraries can be found through the `LD_LIBRARY_PATH` environment variable.
+
+2. Linking with the static library `libpaddle_capi_whole.a`. Note the following:
+   1. Specify the `-Wl,--whole-archive` linking option.
+   1. Explicitly link third-party libraries such as `gflags`, `glog`, `libz`, and `protobuf`; you can find them under the `PADDLE_ROOT/third_party` directory.
+   1. If the C-API was compiled with OpenBLAS, explicitly link `libopenblas.a`.
+   1. If the C-API was compiled with MKL, explicitly link the MKL dynamic libraries.
+
+3. Linking with the static libraries `libpaddle_capi_layers.a` and `libpaddle_capi_engine.a`. Note the following:
+   1. This linking method is mainly used for mobile prediction.
+   1. `libpaddle_capi_whole.a` is split into these two static libraries to reduce the size of the linked binary.
+   1. Specify `-Wl,--whole-archive -lpaddle_capi_layers` and `-Wl,--no-whole-archive -lpaddle_capi_engine` for linking.
+   1. The third-party dependencies must be linked explicitly, the same as in method 2 above.
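+
+For illustration, the two most common cases might look like the sketch below (library and directory names follow the `PADDLE_ROOT` layout shown above; `main.c`, the OpenBLAS flags and the exact set of system libraries are assumptions that depend on your build):
+
+```shell
+# Case 1: link against the shared library (recommended).
+gcc main.c -I$PADDLE_ROOT/include -L$PADDLE_ROOT/lib -lpaddle_capi_shared -o inference
+
+# Case 2: link against the whole-archive static library plus the third-party dependencies.
+gcc main.c -I$PADDLE_ROOT/include \
+    -Wl,--whole-archive $PADDLE_ROOT/lib/libpaddle_capi_whole.a -Wl,--no-whole-archive \
+    $PADDLE_ROOT/third_party/gflags/lib/libgflags.a \
+    $PADDLE_ROOT/third_party/glog/lib/libglog.a \
+    $PADDLE_ROOT/third_party/protobuf/lib/libprotobuf-lite.a \
+    $PADDLE_ROOT/third_party/zlib/lib/libz.a \
+    -L$PADDLE_ROOT/third_party/openblas/lib -lopenblas \
+    -lstdc++ -lm -lpthread -o inference
+```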
diff --git a/doc/v2/howto/cluster/multi_cluster/k8s_distributed_en.md b/doc/v2/howto/cluster/multi_cluster/k8s_distributed_en.md
index bc3d50b3ffd3b703a3a656caa1f96bdcf683f68b..dee1b7554f97af17989c3f7739d8feea3b6b8e3f 100644
--- a/doc/v2/howto/cluster/multi_cluster/k8s_distributed_en.md
+++ b/doc/v2/howto/cluster/multi_cluster/k8s_distributed_en.md
@@ -1,3 +1,372 @@
-# Kubernetes Distributed
+# Distributed Training on Kubernetes
-TBD
+We introduced how to create a PaddlePaddle Job with a single node on Kubernetes in the
+previous document.
+In this article, we will introduce how to create a PaddlePaddle job with multiple nodes
+on a Kubernetes cluster.
+
+## Overall Architecture
+
+Before creating a training job, the user needs to slice the training data and deploy
+the Python scripts along with it into the distributed file system
+(we can use different types of Kubernetes Volumes to mount different distributed
+file systems). Before training starts, the program copies the training data into the
+container, and the models are saved to the same path during training. The overall architecture
+is as follows:
+
+![PaddlePaddle on Kubernetes Architecture](src/k8s-paddle-arch.png)
+
+The above figure describes a distributed training architecture with 3 nodes; each
+Pod mounts a folder of the distributed file system through a Kubernetes Volume to save training data and models.
+Kubernetes created 3 Pods for this training phase and scheduled them on
+3 nodes, and each Pod has a PaddlePaddle container. After the containers are created,
+PaddlePaddle starts up the communication between PServer and Trainer and reads the training
+data for this training job.
+
+As described above, we can start a PaddlePaddle distributed training job on a
+Kubernetes-ready cluster with the following steps:
+
+1. [Build PaddlePaddle Docker Image](#build-a-docker-image)
+1. [Split training data and upload to the distributed file system](#prepare-training-data)
+1. [Edit a YAML file and create a Kubernetes Job](#create-a-job)
+1. [Check the output](#check-the-output)
+
+The following sections describe each of these steps.
+
+### Build a Docker Image
+
+The training Docker image needs to package the `paddle pserver` and `paddle trainer` runtimes, as well as two extra steps that run before training kicks off:
+
+- Copying the training data into the container.
+- Generating the initialization arguments for the `Paddle PServer` and `Paddle Trainer` processes.
+
+Since the official PaddlePaddle Docker image already has the runtimes we need, we take it as the base image and pack some additional scripts for the steps mentioned above to build our training image. For more detail, please see the following link:
+- https://github.com/PaddlePaddle/Paddle/blob/develop/doc/howto/usage/cluster/src/k8s_train/Dockerfile
+
+
+```bash
+$ cd doc/howto/usage/k8s/src/k8s_train
+$ docker build -t [YOUR_REPO]/paddle:mypaddle .
+```
+
+Then upload the new Docker image to a Docker registry:
+
+```bash
+docker push [YOUR_REPO]/paddle:mypaddle
+```
+
+**[NOTE]**: in the above commands, `[YOUR_REPO]` represents your Docker repository;
+replace it with your actual repository name. Below, `[YOUR_REPO]/paddle:mypaddle` refers to
+the Docker image built in this step.
+
+### Prepare Training Data
+
+We can download and split the training data by creating a Kubernetes Job, or customize the image
+by editing [k8s_train](./src/k8s_train/).
+
+Before creating the Job, we need to bind a [persistentVolumeClaim](https://kubernetes.io/docs/user-guide/persistent-volumes) for the particular
+distributed file system in use; the generated dataset will be saved on this volume.
+
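+For reference, a minimal PersistentVolumeClaim might look like the sketch below (the claim name `mfs` matches the Job that follows; the access mode and storage size are assumptions that depend on your cluster and file system):
+
+```bash
+# Sketch only: create a PVC named "mfs" that the data-preparation Job will mount.
+cat <<EOF | kubectl create -f -
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: mfs
+spec:
+  accessModes:
+    - ReadWriteMany
+  resources:
+    requests:
+      storage: 10Gi
+EOF
+```
+
+The data-preparation Job below mounts this claim and writes the split dataset to it:
+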
+```yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: paddle-data
+spec:
+  template:
+    metadata:
+      name: pi
+    spec:
+      hostNetwork: true
+      containers:
+      - name: paddle-data
+        image: paddlepaddle/paddle-tutorial:k8s_data
+        imagePullPolicy: Always
+        volumeMounts:
+        - mountPath: "/mnt"
+          name: nfs
+        env:
+        - name: OUT_DIR
+          value: /home/work/mfs/paddle-cluster-job
+        - name: SPLIT_COUNT
+          value: "3"
+      volumes:
+      - name: nfs
+        persistentVolumeClaim:
+          claimName: mfs
+      restartPolicy: Never
+```
+
+Create the Job with the following command:
+
+```bash
+> kubectl create -f xxx.yaml
+```
+
+If it is created successfully, you will see output like this:
+
+```bash
+[root@paddle-kubernetes-node0 nfsdir]$ tree -d
+.
+`-- paddle-cluster-job
+    |-- 0
+    |   `-- data
+    |-- 1
+    |   `-- data
+    |-- 2
+    |   `-- data
+    |-- output
+    |-- quick_start
+```
+
+The `paddle-cluster-job` above is the name of this training job; we need 3
+PaddlePaddle training nodes, and the split training data is saved under the `paddle-cluster-job` path.
+The folders `0`, `1` and `2` represent the `trainer_id` of each node, the `quick_start` folder stores the training data, and the `output` folder stores the models and logs.
+
+
+### Create a Job
+
+Kubernetes allows users to create objects with YAML files, and we can use the command-line
+tool to create them.
+
+The Job YAML file describes which Docker image is used in this training job, how many nodes are created, the startup arguments of the `Paddle PServer/Trainer` processes, and the type of Volumes. You can find the details of the YAML fields in the
+[Kubernetes Job API](http://kubernetes.io/docs/api-reference/batch/v1/definitions/#_v1_job).
+The following is an example for this training job:
+
+```yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: paddle-cluster-job
+spec:
+  parallelism: 3
+  completions: 3
+  template:
+    metadata:
+      name: paddle-cluster-job
+    spec:
+      volumes:
+      - name: jobpath
+        hostPath:
+          path: /home/work/mfs
+      containers:
+      - name: trainer
+        image: [YOUR_REPO]/paddle:mypaddle
+        command: ["bin/bash", "-c", "/root/start.sh"]
+        env:
+        - name: JOB_NAME
+          value: paddle-cluster-job
+        - name: JOB_PATH
+          value: /home/jobpath
+        - name: JOB_NAMESPACE
+          value: default
+        - name: TRAIN_CONFIG_DIR
+          value: recommendation
+        - name: CONF_PADDLE_NIC
+          value: eth0
+        - name: CONF_PADDLE_PORT
+          value: "7164"
+        - name: CONF_PADDLE_PORTS_NUM
+          value: "2"
+        - name: CONF_PADDLE_PORTS_NUM_SPARSE
+          value: "2"
+        - name: CONF_PADDLE_GRADIENT_NUM
+          value: "3"
+        volumeMounts:
+        - name: jobpath
+          mountPath: /home/jobpath
+      restartPolicy: Never
+```
+
+In the above YAML file:
+- `metadata.name`: the job name.
+- `parallelism`: the Kubernetes Job creates `parallelism` Pods at the same time.
+- `completions`: the Job reaches the succeeded status only when the number of successfully
+  finished Pods (exit code 0) equals `completions`.
+- `volumeMounts`: the `name` field `jobpath` is a key; the `mountPath` field is the path
+  inside the container. We define `jobpath` in the `volumes` field and use `hostPath`
+  to configure the host path we want to mount.
+- `env`: the environment variables of the container; we pass the startup arguments with
+  this approach. Some details are as follows:
+  - JOB_PATH: the mount path in the container.
+  - JOB_NAME: the job name.
+  - TRAIN_CONFIG_DIR: the job path in the container; the training data path can be found by
+    combining it with JOB_NAME.
+  - CONF_PADDLE_NIC: the argument `--nics` of the `Paddle PServer` process, the network
+    device name.
+  - CONF_PADDLE_PORT: the argument `--port` of the `Paddle PServer` process.
+  - CONF_PADDLE_PORTS_NUM: the argument `--ports_num` of `Paddle PServer`, the number of ports
+    used for dense parameter updates.
+  - CONF_PADDLE_PORTS_NUM_SPARSE: the argument `--ports_num_for_sparse` of `Paddle PServer`,
+    the number of ports used for sparse parameter updates.
+  - CONF_PADDLE_GRADIENT_NUM: the number of training nodes, the argument
+    `--num_gradient_servers` of `Paddle PServer` and `Paddle Trainer`.
+
+You can find more detailed information [here](http://www.paddlepaddle.org/docs/develop/documentation/zh/howto/usage/cmd_parameter/detail_introduction_cn.html).
+
+Once the YAML file is ready, we can create the Job with the Kubernetes command-line tool:
+
+```bash
+kubectl create -f job.yaml
+```
+
+Upon successful creation, Kubernetes creates 3 Pods as PaddlePaddle training nodes,
+pulls the Docker image and starts training.
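+
+For example, you can watch the Pods created by the Job (the label selector below relies on the standard `job-name` label that Kubernetes attaches to a Job's Pods; the Pod name passed to `kubectl logs` is a placeholder):
+
+```bash
+kubectl get pods -l job-name=paddle-cluster-job
+kubectl logs -f <pod-name>
+```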
+
+
+### Check the Output
+
+During training, we can check the logs and the output models, which are stored in
+the `output` folder.
+
+**NOTE**: `node_0`, `node_1` and `node_2` represent the
+`trainer_id` of the PaddlePaddle training job rather than the node id of Kubernetes.
+
+```bash
+[root@paddle-kubernetes-node0 output]# tree -d
+.
+├── node_0
+│   ├── server.log
+│   └── train.log
+├── node_1
+│   ├── server.log
+│   └── train.log
+├── node_2
+......
+├── pass-00002
+│   ├── done
+│   ├── ___embedding_0__.w0
+│   ├── ___embedding_1__.w0
+......
+```
+
+We can check the status of each training Pod by viewing its logs:
+
+```bash
+[root@paddle-kubernetes-node0 node_0]# cat train.log
+I1116 09:10:17.123121 50 Util.cpp:155] commandline:
+ /usr/local/bin/../opt/paddle/bin/paddle_trainer
+ --nics=eth0 --port=7164
+ --ports_num=2 --comment=paddle_process_by_paddle
+ --pservers=192.168.129.66,192.168.223.143,192.168.129.71
+ --ports_num_for_sparse=2 --config=./trainer_config.py
+ --trainer_count=4 --num_passes=10 --use_gpu=0
+ --log_period=50 --dot_period=10 --saving_period=1
+ --local=0 --trainer_id=0
+ --save_dir=/home/jobpath/paddle-cluster-job/output
+I1116 09:10:17.123440 50 Util.cpp:130] Calling runInitFunctions
+I1116 09:10:17.123764 50 Util.cpp:143] Call runInitFunctions done.
+[WARNING 2016-11-16 09:10:17,227 default_decorators.py:40] please use keyword arguments in paddle config.
+[INFO 2016-11-16 09:10:17,239 networks.py:1282] The input order is [movie_id, title, genres, user_id, gender, age, occupation, rating]
+[INFO 2016-11-16 09:10:17,239 networks.py:1289] The output order is [__square_error_cost_0__]
+I1116 09:10:17.392917 50 Trainer.cpp:170] trainer mode: Normal
+I1116 09:10:17.613910 50 PyDataProvider2.cpp:257] loading dataprovider dataprovider::process
+I1116 09:10:17.680917 50 PyDataProvider2.cpp:257] loading dataprovider dataprovider::process
+I1116 09:10:17.681543 50 GradientMachine.cpp:134] Initing parameters..
+I1116 09:10:18.012390 50 GradientMachine.cpp:141] Init parameters done.
+I1116 09:10:18.018641 50 ParameterClient2.cpp:122] pserver 0 192.168.129.66:7164
+I1116 09:10:18.018950 50 ParameterClient2.cpp:122] pserver 1 192.168.129.66:7165
+I1116 09:10:18.019069 50 ParameterClient2.cpp:122] pserver 2 192.168.223.143:7164
+I1116 09:10:18.019492 50 ParameterClient2.cpp:122] pserver 3 192.168.223.143:7165
+I1116 09:10:18.019716 50 ParameterClient2.cpp:122] pserver 4 192.168.129.71:7164
+I1116 09:10:18.019836 50 ParameterClient2.cpp:122] pserver 5 192.168.129.71:7165
+```
+
+## Some Additional Details
+
+### Using Environment Variables
+
+Usually we use environment variables to configure the PaddlePaddle Job running on
+Kubernetes. `start_paddle.py` provides a startup script that converts the environment variables
+into the startup arguments of the PaddlePaddle processes:
+
+```python
+API = "/api/v1/namespaces/"
+JOBSELECTOR = "labelSelector=job-name="
+JOB_PATH = os.getenv("JOB_PATH") + "/" + os.getenv("JOB_NAME")
+JOB_PATH_OUTPUT = JOB_PATH + "/output"
+JOBNAME = os.getenv("JOB_NAME")
+NAMESPACE = os.getenv("JOB_NAMESPACE")
+PADDLE_NIC = os.getenv("CONF_PADDLE_NIC")
+PADDLE_PORT = os.getenv("CONF_PADDLE_PORT")
+PADDLE_PORTS_NUM = os.getenv("CONF_PADDLE_PORTS_NUM")
+PADDLE_PORTS_NUM_SPARSE = os.getenv("CONF_PADDLE_PORTS_NUM_SPARSE")
+PADDLE_SERVER_NUM = os.getenv("CONF_PADDLE_GRADIENT_NUM")
+```
+
+### Communication between Pods
+
+At the beginning of `start_paddle.py`, it initializes and parses the arguments.
+
+```python
+    parser = argparse.ArgumentParser(prog="start_paddle.py",
+                                     description='simple tool for k8s')
+    args, train_args_list = parser.parse_known_args()
+    train_args = refine_unknown_args(train_args_list)
+    train_args_dict = dict(zip(train_args[:-1:2], train_args[1::2]))
+    podlist = getPodList()
+```
+
+It then queries the status of all the Pods of this Job with the function `getPodList()`, and fetches the `trainer_id` with the function `getIdMap(podlist)` once all Pods are in the `RUNNING` state.
+
+```python
+    podlist = getPodList()
+    # need to wait until all pods are running
+    while not isPodAllRunning(podlist):
+        time.sleep(10)
+        podlist = getPodList()
+    idMap = getIdMap(podlist)
+```
+
+**NOTE**: `getPodList()` fetches all the Pods in the current namespace; if some unrelated
+Pods are already running, this may cause errors. We will use [StatefulSets](https://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets) instead of
+Kubernetes Pods or ReplicaSets in the future.
+
+The function `getIdMap(podlist)` fetches the IP addresses of the Pods in `podlist` and then sorts them
+to generate each `trainer_id`.
+
+```python
+def getIdMap(podlist):
+    '''
+    generate trainer_id by ip
+    '''
+    ips = []
+    for pod in podlist["items"]:
+        ips.append(pod["status"]["podIP"])
+    ips.sort()
+    idMap = {}
+    for i in range(len(ips)):
+        idMap[ips[i]] = i
+    return idMap
+```
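+
+For example (a sketch; the IP addresses are illustrative and only the fields that `getIdMap()` reads are included), three Pods are mapped to `trainer_id` 0, 1 and 2 by their sorted IP addresses:
+
+```python
+podlist = {"items": [
+    {"status": {"podIP": "192.168.129.71"}},
+    {"status": {"podIP": "192.168.129.66"}},
+    {"status": {"podIP": "192.168.223.143"}},
+]}
+idMap = getIdMap(podlist)
+# resulting mapping: 192.168.129.66 -> 0, 192.168.129.71 -> 1, 192.168.223.143 -> 2
+```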
+
+After getting the `idMap`, we can generate the arguments of `Paddle PServer` and `Paddle Trainer`
+and start them with `startPaddle(idMap, train_args_dict)`.
+
+### Create Job
+
+The main goal of `startPaddle` is to generate the arguments of the `Paddle PServer` and
+`Paddle Trainer` processes. Taking `Paddle Trainer` as an example, it parses the
+environment variables to get `PADDLE_NIC`, `PADDLE_PORT`, `PADDLE_PORTS_NUM`, etc., and
+finally finds `trainerId` in `idMap` according to the local IP address.
+
+```python
+    program = 'paddle train'
+    args = " --nics=" + PADDLE_NIC
+    args += " --port=" + str(PADDLE_PORT)
+    args += " --ports_num=" + str(PADDLE_PORTS_NUM)
+    args += " --comment=" + "paddle_process_by_paddle"
+    ip_string = ""
+    for ip in idMap.keys():
+        ip_string += (ip + ",")
+    ip_string = ip_string.rstrip(",")
+    args += " --pservers=" + ip_string
+    args_ext = ""
+    for key, value in train_args_dict.items():
+        args_ext += (' --' + key + '=' + value)
+    localIP = socket.gethostbyname(socket.gethostname())
+    trainerId = idMap[localIP]
+    args += " " + args_ext + " --trainer_id=" + \
+        str(trainerId) + " --save_dir=" + JOB_PATH_OUTPUT
+```
diff --git a/paddle/fluid/framework/parallel_executor.cc b/paddle/fluid/framework/parallel_executor.cc
index 74945fb4f2f745b6ca9c48adb0c8b9e6ae1e94a4..f393105fe82bfad70246952deada8e296c851ef5 100644
--- a/paddle/fluid/framework/parallel_executor.cc
+++ b/paddle/fluid/framework/parallel_executor.cc
@@ -174,12 +174,17 @@ void ParallelExecutor::SplitTensorToPlaces(
const std::unordered_map<std::string, LoDTensor> &feed_tensors) {
for (auto it : feed_tensors) {
auto lod_tensors = it.second.SplitLoDTensor(member_->places_);
+ PADDLE_ENFORCE_EQ(
+ member_->places_.size(), lod_tensors.size(),
+ "The number of samples of current batch is less than the count of "
+ "devices, currently, it is not allowed. (%d vs %d)",
+ member_->places_.size(), lod_tensors.size());
for (size_t j = 0; j < member_->places_.size(); ++j) {
// TODO(panxy0718): Do I need to delete this var?
- member_->local_scopes_[j]
- ->Var(it.first)
- ->GetMutable<LoDTensor>()
- ->ShareDataWith(lod_tensors[j]);
+ auto t =
+ member_->local_scopes_[j]->Var(it.first)->GetMutable<LoDTensor>();
+ t->ShareDataWith(lod_tensors[j]);
+ t->set_lod(lod_tensors[j].lod());
}
}
}
diff --git a/paddle/fluid/operators/batch_norm_op.cc b/paddle/fluid/operators/batch_norm_op.cc
index 36049ee6a4a0d2a251b6d10cf1ff05a9d9845089..c9939e8602ed341d37784ca292a55326899e8e65 100644
--- a/paddle/fluid/operators/batch_norm_op.cc
+++ b/paddle/fluid/operators/batch_norm_op.cc
@@ -13,6 +13,7 @@ See the License for the specific language governing permissions and
limitations under the License. */
#include "paddle/fluid/operators/batch_norm_op.h"
+#include
#include "paddle/fluid/framework/data_layout.h"
namespace paddle {
diff --git a/paddle/fluid/operators/batch_norm_op.cu.cc b/paddle/fluid/operators/batch_norm_op.cu.cc
index 6ceacc39924a7558e380aaf563aaf234f1bf30a5..eecb58e11ef57b550c79c040e6933ed6e52e2e87 100644
--- a/paddle/fluid/operators/batch_norm_op.cu.cc
+++ b/paddle/fluid/operators/batch_norm_op.cu.cc
@@ -13,9 +13,8 @@ See the License for the specific language governing permissions and
limitations under the License. */
#include "paddle/fluid/operators/batch_norm_op.h"
-#include "paddle/fluid/framework/data_layout.h"
-
#include
+#include "paddle/fluid/framework/data_layout.h"
#include "paddle/fluid/operators/math/math_function.h"
#include "paddle/fluid/platform/cudnn_helper.h"
#include "paddle/fluid/platform/float16.h"
diff --git a/paddle/fluid/operators/batch_size_like.h b/paddle/fluid/operators/batch_size_like.h
index 0bdf27e620a3a7c7b62b955f708a5e2aad1a6986..dd51a11fbe6ad5e528197b67536518c4b31fa355 100644
--- a/paddle/fluid/operators/batch_size_like.h
+++ b/paddle/fluid/operators/batch_size_like.h
@@ -13,7 +13,8 @@ See the License for the specific language governing permissions and
limitations under the License. */
#pragma once
-
+#include
+#include
#include "paddle/fluid/framework/op_registry.h"
#include "paddle/fluid/operators/math/math_function.h"
diff --git a/paddle/fluid/operators/box_coder_op.h b/paddle/fluid/operators/box_coder_op.h
index 3c7cac1cd17042994287effc31a918ebd4353c4c..77fc6c2b62af42e6526b889aeef2d9bab795baec 100644
--- a/paddle/fluid/operators/box_coder_op.h
+++ b/paddle/fluid/operators/box_coder_op.h
@@ -10,6 +10,7 @@ See the License for the specific language governing permissions and
limitations under the License. */
#pragma once
+#include
#include "paddle/fluid/framework/op_registry.h"
#include "paddle/fluid/operators/math/math_function.h"
diff --git a/paddle/fluid/operators/compare_op.cc b/paddle/fluid/operators/compare_op.cc
index 9a139ab27ec53395a8d1ab1347dbce93ea68fd8e..3a6a357e81949014a70e5bae1ee0e1c8b9d0c2ce 100644
--- a/paddle/fluid/operators/compare_op.cc
+++ b/paddle/fluid/operators/compare_op.cc
@@ -13,6 +13,7 @@ See the License for the specific language governing permissions and
limitations under the License. */
#include "paddle/fluid/operators/compare_op.h"
+#include
#include "paddle/fluid/framework/op_registry.h"
namespace paddle {
diff --git a/paddle/fluid/operators/concat_op.cc b/paddle/fluid/operators/concat_op.cc
index 0eedd8ee51ebfff6f553d8e19e97c3a45a95fa6a..d65a7b34678cda38d5f8beb9154d61928f517ce0 100644
--- a/paddle/fluid/operators/concat_op.cc
+++ b/paddle/fluid/operators/concat_op.cc
@@ -13,6 +13,7 @@ See the License for the specific language governing permissions and
limitations under the License. */
#include "paddle/fluid/operators/concat_op.h"
+#include
#include
namespace paddle {
diff --git a/paddle/fluid/operators/cond_op.h b/paddle/fluid/operators/cond_op.h
index a04fae2182005d4eb08305e943449977bfb637f9..d3888923dbdeee122fb3045a839c0ba639b892b1 100644
--- a/paddle/fluid/operators/cond_op.h
+++ b/paddle/fluid/operators/cond_op.h
@@ -13,6 +13,7 @@ See the License for the specific language governing permissions and
limitations under the License. */
#pragma once
+#include
#include
#include "glog/logging.h"
#include "paddle/fluid/framework/ddim.h"
diff --git a/paddle/fluid/operators/conv_transpose_op.cc b/paddle/fluid/operators/conv_transpose_op.cc
index b2a3cfc89f18eff24c941c664b1184b4485ab895..08f5939d42a41d235a94eff16cf2f558068d6aaa 100644
--- a/paddle/fluid/operators/conv_transpose_op.cc
+++ b/paddle/fluid/operators/conv_transpose_op.cc
@@ -13,6 +13,8 @@ See the License for the specific language governing permissions and
limitations under the License. */
#include "paddle/fluid/operators/conv_transpose_op.h"
+#include
+#include
namespace paddle {
namespace operators {
diff --git a/paddle/fluid/operators/conv_transpose_op.h b/paddle/fluid/operators/conv_transpose_op.h
index d4e4b641ece9ed120904ded6f8baed65a2666213..bfc0177c2a0da1627fbca532764fdae8167b6b2a 100644
--- a/paddle/fluid/operators/conv_transpose_op.h
+++ b/paddle/fluid/operators/conv_transpose_op.h
@@ -13,7 +13,7 @@ See the License for the specific language governing permissions and
limitations under the License. */
#pragma once
-
+#include
#include "paddle/fluid/framework/eigen.h"
#include "paddle/fluid/framework/op_registry.h"
#include "paddle/fluid/operators/math/im2col.h"
diff --git a/paddle/fluid/operators/crf_decoding_op.h b/paddle/fluid/operators/crf_decoding_op.h
index 2b2a733fb9f162755e5c548fec617937d86689dd..3f5fab3b382bea97f43e4bc1b2cd436c956ba264 100644
--- a/paddle/fluid/operators/crf_decoding_op.h
+++ b/paddle/fluid/operators/crf_decoding_op.h
@@ -13,6 +13,7 @@ See the License for the specific language governing permissions and
limitations under the License. */
#pragma once
+#include
#include "paddle/fluid/framework/eigen.h"
#include "paddle/fluid/framework/op_registry.h"
#include "paddle/fluid/operators/math/math_function.h"
diff --git a/paddle/fluid/operators/crop_op.h b/paddle/fluid/operators/crop_op.h
index c5ac6849789587f2f41588f79bd538f7b79a7478..f05c2e23284e3a24cf48442996f671ec6084c391 100644
--- a/paddle/fluid/operators/crop_op.h
+++ b/paddle/fluid/operators/crop_op.h
@@ -13,7 +13,8 @@ See the License for the specific language governing permissions and
limitations under the License. */
#pragma once
-
+#include
+#include
#include "paddle/fluid/framework/eigen.h"
#include "paddle/fluid/framework/op_registry.h"
#include "paddle/fluid/operators/strided_memcpy.h"
diff --git a/paddle/fluid/operators/elementwise_op_function.h b/paddle/fluid/operators/elementwise_op_function.h
index 0b4238436ffcc586fe8bc7abbe4cfbc1654dcb88..415182201a7a9e11d8ea8c62b92849b5ea3bac3e 100644
--- a/paddle/fluid/operators/elementwise_op_function.h
+++ b/paddle/fluid/operators/elementwise_op_function.h
@@ -13,14 +13,15 @@ See the License for the specific language governing permissions and
limitations under the License. */
#pragma once
+#include
#include "paddle/fluid/framework/eigen.h"
#include "paddle/fluid/framework/op_registry.h"
#include "paddle/fluid/framework/operator.h"
#include "paddle/fluid/platform/transform.h"
#ifdef __NVCC__
+#include
#include
-#include "paddle/fluid/platform/cuda_helper.h"
constexpr int ELEMWISE_MAX_BLOCK_DIM = 1024;
#endif
@@ -43,35 +44,35 @@ namespace operators {
*/
inline void get_mid_dims(const framework::DDim& x_dims,
const framework::DDim& y_dims, const int axis,
- int& pre, int& n, int& post) {
- pre = 1;
- n = 1;
- post = 1;
+ int* pre, int* n, int* post) {
+ *pre = 1;
+ *n = 1;
+ *post = 1;
for (int i = 0; i < axis; ++i) {
- pre *= x_dims[i];
+ (*pre) *= x_dims[i];
}
for (int i = 0; i < y_dims.size(); ++i) {
PADDLE_ENFORCE_EQ(x_dims[i + axis], y_dims[i],
"Broadcast dimension mismatch.");
- n *= y_dims[i];
+ (*n) *= y_dims[i];
}
for (int i = axis + y_dims.size(); i < x_dims.size(); ++i) {
- post *= x_dims[i];
+ (*post) *= x_dims[i];
}
}
-inline void trim_trailing_singular_dims(framework::DDim& dims) {
+inline void trim_trailing_singular_dims(framework::DDim* dims) {
// Remove trailing dimensions of size 1 for y
- auto actual_dims_size = dims.size();
+ auto actual_dims_size = dims->size();
for (; actual_dims_size != 0; --actual_dims_size) {
- if (dims[actual_dims_size - 1] != 1) break;
+ if ((*dims)[actual_dims_size - 1] != 1) break;
}
- if (actual_dims_size != dims.size()) {
- auto actual_dims = framework::vectorize(dims);
+ if (actual_dims_size != dims->size()) {
+ auto actual_dims = framework::vectorize(*dims);
actual_dims.resize(actual_dims_size);
- dims = framework::make_ddim(actual_dims);
+ *dims = framework::make_ddim(actual_dims);
}
}
@@ -159,7 +160,7 @@ class RowwiseTransformIterator
RowwiseTransformIterator, const T*>
super_t;
HOSTDEVICE RowwiseTransformIterator(const T* x, int n)
- : super_t(x), begin_(x), n_(n){};
+ : super_t(x), begin_(x), n_(n) {}
friend class thrust::iterator_core_access;
private:
@@ -179,7 +180,7 @@ class MidWiseTransformIterator
MidWiseTransformIterator, const T*>
super_t;
HOSTDEVICE MidWiseTransformIterator(const T* x, int n, int post)
- : super_t(x), begin_(x), n_(n), post_(post){};
+ : super_t(x), begin_(x), n_(n), post_(post) {}
friend class thrust::iterator_core_access;
private:
@@ -333,6 +334,55 @@ static void ElemwiseGradBroadcast1CPU(const T* x, const T* y, const T* out,
}
}
#ifdef __NVCC__
+
+// __shfl_down has been deprecated as of CUDA 9.0.
+#if CUDA_VERSION < 9000
+template <typename T>
+__forceinline__ __device__ T __shfl_down_sync(unsigned, T val, int delta) {
+ return __shfl_down(val, delta);
+}
+#define CREATE_SHFL_MASK(mask, predicate) mask = 0u;
+#else
+#define FULL_WARP_MASK 0xFFFFFFFF
+#define CREATE_SHFL_MASK(mask, predicate) \
+ mask = __ballot_sync(FULL_WARP_MASK, (predicate))
+#endif
+
+template <typename T>
+__device__ T reduceSum(T val, int tid, int len) {
+ // TODO(zcd): The warp size should be taken from the
+ // parameters of the GPU but not specified as 32 simply.
+ // To make the reduceSum more efficiently,
+ // I use Warp-Level Parallelism and assume the Warp size
+ // is 32 which may be different for different GPU,
+ // but most card's warp size is 32.
+ __shared__ T shm[32];
+ const int warpSize = 32;
+ unsigned mask = 0u;
+ CREATE_SHFL_MASK(mask, tid < len);
+
+ for (int offset = warpSize / 2; offset > 0; offset /= 2)
+ val += __shfl_down_sync(mask, val, offset);
+
+ if (tid < warpSize) shm[tid] = 0;
+
+ __syncthreads();
+
+ if (tid % warpSize == 0) {
+ shm[tid / warpSize] = val;
+ }
+
+ CREATE_SHFL_MASK(mask, tid < warpSize);
+
+ if (tid < warpSize) {
+ val = shm[tid];
+ for (int offset = warpSize / 2; offset > 0; offset /= 2)
+ val += __shfl_down_sync(mask, val, offset);
+ }
+
+ return val;
+}
+
template
static __global__ void ElemwiseGradBroadcast1CUDAKernel(
const T* x, const T* y, const T* out, const T* dout, int h, int w,
@@ -355,7 +405,7 @@ static __global__ void ElemwiseGradBroadcast1CUDAKernel(
if (dy) {
h = h > ELEMWISE_MAX_BLOCK_DIM ? ELEMWISE_MAX_BLOCK_DIM : h;
- val = platform::reduceSum(val, tid, h);
+ val = reduceSum(val, tid, h);
if (threadIdx.x == 0) {
dy[j] = val;
}
@@ -432,7 +482,7 @@ static __global__ void ElemwiseGradBroadcast2CUDAKernel(
if (dy) {
int h = pre * post;
h = h > ELEMWISE_MAX_BLOCK_DIM ? ELEMWISE_MAX_BLOCK_DIM : h;
- val = platform::reduceSum(val, tid, h);
+ val = reduceSum(val, tid, h);
if (threadIdx.x == 0) {
dy[j] = val;
}
@@ -472,11 +522,11 @@ void ElemwiseGradCompute(const framework::ExecutionContext& ctx,
auto y_dim = y.dims();
axis = (axis == -1 ? x_dim.size() - y_dim.size() : axis);
- trim_trailing_singular_dims(y_dim);
+ trim_trailing_singular_dims(&y_dim);
axis = (y_dim.size() == 0) ? x_dim.size() : axis;
int pre, n, post;
- get_mid_dims(x_dim, y_dim, axis, pre, n, post);
+ get_mid_dims(x_dim, y_dim, axis, &pre, &n, &post);
if (post == 1) {
int h = pre;
int w = n;
@@ -514,7 +564,7 @@ void ElemwiseGradCompute(const framework::ExecutionContext& ctx,
}
}
}
-};
+}
template
@@ -543,11 +593,11 @@ void ElementwiseGradCompute(const framework::ExecutionContext& ctx,
}
axis = (axis == -1 ? x_dims.size() - y_dims.size() : axis);
- trim_trailing_singular_dims(y_dims);
+ trim_trailing_singular_dims(&y_dims);
axis = (y_dims.size() == 0) ? x_dims.size() : axis;
int pre, n, post;
- get_mid_dims(x_dims, y_dims, axis, pre, n, post);
+ get_mid_dims(x_dims, y_dims, axis, &pre, &n, &post);
if (post == 1) {
broadcastfunctor f;
@@ -582,11 +632,11 @@ void ElementwiseComputeEx(const framework::ExecutionContext& ctx,
axis = (axis == -1 ? x_dims.size() - y_dims.size() : axis);
PADDLE_ENFORCE(axis >= 0 && axis < x_dims.size(),
"Axis should be in range [0, x_dims)");
- trim_trailing_singular_dims(y_dims);
+ trim_trailing_singular_dims(&y_dims);
axis = (y_dims.size() == 0) ? x_dims.size() : axis;
int pre, n, post;
- get_mid_dims(x_dims, y_dims, axis, pre, n, post);
+ get_mid_dims(x_dims, y_dims, axis, &pre, &n, &post);
if (post == 1) {
functor.RunRowWise(n, pre);
return;
diff --git a/paddle/fluid/operators/math/math_function.cu b/paddle/fluid/operators/math/math_function.cu
index 82e12943148a806bae719c722944d6a9d5236b7c..e53183603fec54ceef68873cfd97b4b985b0d437 100644
--- a/paddle/fluid/operators/math/math_function.cu
+++ b/paddle/fluid/operators/math/math_function.cu
@@ -39,13 +39,14 @@ void gemm(
cublasOperation_t cuTransB =
(transB == CblasNoTrans) ? CUBLAS_OP_N : CUBLAS_OP_T;
- float h_alpha = static_cast<float>(alpha);
- float h_beta = static_cast<float>(beta);
-
// TODO(kexinzhao): add processing code for compute capability < 53 case
PADDLE_ENFORCE_GE(context.GetComputeCapability(), 53,
"cublas fp16 gemm requires GPU compute capability >= 53");
+#if CUDA_VERSION >= 8000
+ float h_alpha = static_cast<float>(alpha);
+ float h_beta = static_cast<float>(beta);
+
cublasGemmAlgo_t algo = CUBLAS_GEMM_DFALT;
#if CUDA_VERSION >= 9000
if (context.GetComputeCapability() >= 70) {
@@ -56,7 +57,7 @@ void gemm(
PADDLE_ENFORCE(platform::dynload::cublasSetMathMode(context.cublas_handle(),
CUBLAS_DEFAULT_MATH));
}
-#endif
+#endif // CUDA_VERSION >= 9000
// cublasHgemm does true FP16 computation which is slow for non-Volta
// GPUs. So use cublasGemmEx instead which does pesudo FP16 computation:
@@ -66,6 +67,18 @@ void gemm(
context.cublas_handle(), cuTransB, cuTransA, N, M, K, &h_alpha, B,
CUDA_R_16F, ldb, A, CUDA_R_16F, lda, &h_beta, C, CUDA_R_16F, N,
CUDA_R_32F, algo));
+#else
+ // CUDA 7.5 does not support cublasGemmEx, hence we fall back to use hgemm
+ const half h_alpha = static_cast<half>(alpha);
+ const half h_beta = static_cast<half>(beta);
+ const half* h_A = reinterpret_cast<const half*>(A);
+ const half* h_B = reinterpret_cast<const half*>(B);
+ half* h_C = reinterpret_cast<half*>(C);
+
+ PADDLE_ENFORCE(platform::dynload::cublasHgemm(
+ context.cublas_handle(), cuTransB, cuTransA, N, M, K, &h_alpha, h_B, ldb,
+ h_A, lda, &h_beta, h_C, N));
+#endif // CUDA_VERSION >= 8000
}
template <>
diff --git a/paddle/fluid/platform/cuda_helper.h b/paddle/fluid/platform/cuda_helper.h
index a4ea4f21e3c16c9292cf67863616924e9d9f8aba..881d611d4ac26f992036f639097815aff625227b 100644
--- a/paddle/fluid/platform/cuda_helper.h
+++ b/paddle/fluid/platform/cuda_helper.h
@@ -62,53 +62,5 @@ CUDA_ATOMIC_WRAPPER(Add, double) {
}
#endif
-// __shfl_down has been deprecated as of CUDA 9.0.
-#if CUDA_VERSION < 9000
-template <typename T>
-__forceinline__ __device__ T __shfl_down_sync(unsigned, T val, int delta) {
- return __shfl_down(val, delta);
-}
-#define CREATE_SHFL_MASK(mask, predicate) mask = 0u;
-#else
-#define FULL_WARP_MASK 0xFFFFFFFF
-#define CREATE_SHFL_MASK(mask, predicate) \
- mask = __ballot_sync(FULL_WARP_MASK, (predicate))
-#endif
-
-template <typename T>
-__device__ T reduceSum(T val, int tid, int len) {
- // TODO(zcd): The warp size should be taken from the
- // parameters of the GPU but not specified as 32 simply.
- // To make the reduceSum more efficiently,
- // I use Warp-Level Parallelism and assume the Warp size
- // is 32 which may be different for different GPU,
- // but most card's warp size is 32.
- __shared__ T shm[32];
- const int warpSize = 32;
- unsigned mask = 0u;
- CREATE_SHFL_MASK(mask, tid < len);
-
- for (int offset = warpSize / 2; offset > 0; offset /= 2)
- val += __shfl_down_sync(mask, val, offset);
-
- if (tid < warpSize) shm[tid] = 0;
-
- __syncthreads();
-
- if (tid % warpSize == 0) {
- shm[tid / warpSize] = val;
- }
-
- CREATE_SHFL_MASK(mask, tid < warpSize);
-
- if (tid < warpSize) {
- val = shm[tid];
- for (int offset = warpSize / 2; offset > 0; offset /= 2)
- val += __shfl_down_sync(mask, val, offset);
- }
-
- return val;
-}
-
} // namespace platform
} // namespace paddle
diff --git a/paddle/fluid/platform/dynload/cublas.cc b/paddle/fluid/platform/dynload/cublas.cc
index eb541579a136de2a84ecc9773e0c312b405f7e86..361d3439b844e9f68d3fba0a0e41ec457118a4a9 100644
--- a/paddle/fluid/platform/dynload/cublas.cc
+++ b/paddle/fluid/platform/dynload/cublas.cc
@@ -28,6 +28,10 @@ CUBLAS_BLAS_ROUTINE_EACH(DEFINE_WRAP);
CUBLAS_BLAS_ROUTINE_EACH_R2(DEFINE_WRAP);
#endif
+#ifdef CUBLAS_BLAS_ROUTINE_EACH_R3
+CUBLAS_BLAS_ROUTINE_EACH_R3(DEFINE_WRAP);
+#endif
+
} // namespace dynload
} // namespace platform
} // namespace paddle
diff --git a/paddle/fluid/platform/dynload/cublas.h b/paddle/fluid/platform/dynload/cublas.h
index a41018d350e89881888d5e31089c2b9ecd76f6c0..1ab55d6b9bf8fdbd14c9c2bd978e3e99dba3e73e 100644
--- a/paddle/fluid/platform/dynload/cublas.h
+++ b/paddle/fluid/platform/dynload/cublas.h
@@ -71,7 +71,6 @@ extern void *cublas_dso_handle;
__macro(cublasDgemm_v2); \
__macro(cublasHgemm); \
__macro(cublasSgemmEx); \
- __macro(cublasGemmEx); \
__macro(cublasSgeam_v2); \
__macro(cublasDgeam_v2); \
__macro(cublasCreate_v2); \
@@ -83,11 +82,6 @@ extern void *cublas_dso_handle;
__macro(cublasDgemmBatched); \
__macro(cublasCgemmBatched); \
__macro(cublasZgemmBatched); \
- __macro(cublasSgemmStridedBatched); \
- __macro(cublasDgemmStridedBatched); \
- __macro(cublasCgemmStridedBatched); \
- __macro(cublasZgemmStridedBatched); \
- __macro(cublasHgemmStridedBatched); \
__macro(cublasSgetrfBatched); \
__macro(cublasSgetriBatched); \
__macro(cublasDgetrfBatched); \
@@ -95,10 +89,24 @@ extern void *cublas_dso_handle;
CUBLAS_BLAS_ROUTINE_EACH(DECLARE_DYNAMIC_LOAD_CUBLAS_WRAP)
+// APIs available after CUDA 8.0
+#if CUDA_VERSION >= 8000
+#define CUBLAS_BLAS_ROUTINE_EACH_R2(__macro) \
+ __macro(cublasGemmEx); \
+ __macro(cublasSgemmStridedBatched); \
+ __macro(cublasDgemmStridedBatched); \
+ __macro(cublasCgemmStridedBatched); \
+ __macro(cublasZgemmStridedBatched); \
+ __macro(cublasHgemmStridedBatched);
+
+CUBLAS_BLAS_ROUTINE_EACH_R2(DECLARE_DYNAMIC_LOAD_CUBLAS_WRAP)
+#endif
+
// APIs available after CUDA 9.0
#if CUDA_VERSION >= 9000
-#define CUBLAS_BLAS_ROUTINE_EACH_R2(__macro) __macro(cublasSetMathMode);
-CUBLAS_BLAS_ROUTINE_EACH_R2(DECLARE_DYNAMIC_LOAD_CUBLAS_WRAP)
+#define CUBLAS_BLAS_ROUTINE_EACH_R3(__macro) __macro(cublasSetMathMode);
+
+CUBLAS_BLAS_ROUTINE_EACH_R3(DECLARE_DYNAMIC_LOAD_CUBLAS_WRAP)
#endif
#undef DECLARE_DYNAMIC_LOAD_CUBLAS_WRAP
diff --git a/python/paddle/fluid/distribute_transpiler.py b/python/paddle/fluid/distribute_transpiler.py
index 3c6be913200716ae4f70e2b48ee8faf8078223d2..0ec3ebc7e3dba6e4cf89c8a76622761d210276cf 100644
--- a/python/paddle/fluid/distribute_transpiler.py
+++ b/python/paddle/fluid/distribute_transpiler.py
@@ -278,11 +278,21 @@ class DistributeTranspiler:
# we don't need to create them when grad arrives.
# change client side var name to origin name by
# removing ".trainer_%d" suffix
+
suff_idx = v.name.find(".trainer_")
if suff_idx >= 0:
orig_var_name = v.name[:suff_idx]
else:
orig_var_name = v.name
+ # NOTE: single_trainer_var must be created for multi-trainer
+ # case to merge grads from multiple trainers
+ single_trainer_var = \
+ pserver_program.global_block().create_var(
+ name=orig_var_name,
+ persistable=True,
+ type=v.type,
+ dtype=v.dtype,
+ shape=v.shape)
if self.trainers > 1:
for trainer_id in xrange(self.trainers):
var = pserver_program.global_block().create_var(
@@ -293,12 +303,6 @@ class DistributeTranspiler:
shape=v.shape)
recv_inputs.append(var)
else:
- single_trainer_var = pserver_program.global_block().create_var(
- name=orig_var_name,
- persistable=True,
- type=v.type,
- dtype=v.dtype,
- shape=v.shape)
recv_inputs.append(single_trainer_var)
# step3
diff --git a/python/paddle/fluid/parallel_executor.py b/python/paddle/fluid/parallel_executor.py
index b93f2f974ca28cfd8d03c0dbbf1d401620a15e53..24dfa6144ae9584f1678e662716da123352430dd 100644
--- a/python/paddle/fluid/parallel_executor.py
+++ b/python/paddle/fluid/parallel_executor.py
@@ -87,7 +87,8 @@ class ParallelExecutor(object):
# performance. Worth tunning for other models in the future.
num_threads = len(self._places)
else:
- min(len(self._places) * 2, multiprocessing.cpu_count())
+ num_threads = min(
+ len(self._places) * 2, multiprocessing.cpu_count())
main = main_program
main = main if main else framework.default_main_program()