Commit f5e5baf7 authored by Travis CI

Deploy to GitHub Pages: 4c2b3b6e

Parent 0b061960
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: abb235454c522821afda02c2aa921d6f
tags: 645f666f9bcd5a90fca523b33c5a78b7
Installing from Sources
=======================

* [1. Download and Setup](#download)
* [2. Requirements](#requirements)
* [3. Build on Ubuntu](#ubuntu)
* [4. Build on Mac OS X](#mac)

## <span id="download">Download and Setup</span>

You can download PaddlePaddle from the [github source](https://github.com/gangliao/Paddle).
To compile the source code, your computer must be equipped with GCC >=4.6 or Clang.
PaddlePaddle supports several build options. To enable an option, first install the related libraries.
<html>
<table>
<thead>
<tr>
<th scope="col" class="left">Option</th>
<th scope="col" class="left">Description</th>
</tr>
</thead>
<tbody>
<tr><td class="left">WITH_GPU</td><td class="left">Compile with GPU mode.</td></tr>
<tr><td class="left">WITH_DOUBLE</td><td class="left">Compile with double-precision floating point; default: single precision.</td></tr>
<tr><td class="left">WITH_GLOG</td><td class="left">Compile with glog; if not found, use an internal logging implementation.</td></tr>
<tr><td class="left">WITH_GFLAGS</td><td class="left">Compile with gflags; if not found, use an internal flags implementation.</td></tr>
<tr><td class="left">WITH_TESTING</td><td class="left">Compile with gtest for PaddlePaddle's unit tests.</td></tr>
<tr><td class="left">WITH_DOC</td><td class="left">Compile to generate PaddlePaddle's documentation; default: disabled (OFF).</td></tr>
<tr><td class="left">WITH_SWIG_PY</td><td class="left">Compile with the Python prediction API; default: disabled (OFF).</td></tr>
<tr><td class="left">WITH_STYLE_CHECK</td><td class="left">Compile with code style checks; default: enabled (ON).</td></tr>
</tbody>
</table>
</html>
**Note:**

- The GPU version works best with CUDA Toolkit 7.5 and cuDNN v5.
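Before turning on options such as WITH_GLOG, WITH_GFLAGS or WITH_TESTING, the corresponding libraries must be installed. As a rough sketch for Ubuntu (the package names below are assumptions for Ubuntu 14.04; verify them for your distribution):

```bash
# Hypothetical Ubuntu package names for the optional dependencies above.
sudo apt-get install libgoogle-glog-dev libgflags-dev libgtest-dev swig
```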
As a simple example, consider the following:
- **Only CPU**

```bash
cmake .. -DWITH_GPU=OFF
```

- **GPU**

```bash
cmake .. -DWITH_GPU=ON
```

- **GPU with doc and swig**
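```bash
cmake .. -DWITH_GPU=ON -DWITH_DOC=ON -DWITH_SWIG_PY=ON
```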
Finally, you can build PaddlePaddle:
```bash
# you can add build options here, such as:
cmake .. -DWITH_GPU=ON -DCMAKE_INSTALL_PREFIX=<path to install>
# please use sudo make install, if you want to install PaddlePaddle into the system
make -j `nproc` && make install
# set the PaddlePaddle installation path in ~/.bashrc
export PATH=<path to install>/bin:$PATH
```

**Note:**

If you set `WITH_SWIG_PY=ON`, the related Python dependencies also need to be installed.
Otherwise, PaddlePaddle will automatically install the Python dependencies
the first time a user runs a paddle command, such as `paddle version` or `paddle train`.
This may require sudo privileges:

```bash
# you can run
sudo pip install <path to install>/opt/paddle/share/wheels/*.whl
# or just run
sudo paddle version
```
## <span id="mac">Building on Mac OS X</span>
### Prerequisites
This guide is based on Mac OS X 10.11 (El Capitan). Note that if you are running an up-to-date version of OS X,
you will already have Python 2.7.10 and NumPy 1.8 installed.
The best option is to use the package manager Homebrew to handle installations and upgrades for you.
To install [homebrew](http://brew.sh/), first open a terminal window (you can find Terminal in the Utilities folder in Applications), and issue the command:
```bash
# install brew
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
# install pip
easy_install pip
```
### Install Dependencies
- **CPU Dependencies**
```bash
# Install fundamental dependencies
brew install glog gflags cmake protobuf openblas

# Install Google Test on Mac OS X
# Download gtest 1.7.0
wget https://github.com/google/googletest/archive/release-1.7.0.tar.gz -O googletest-release-1.7.0.tar.gz
tar -xvf googletest-release-1.7.0.tar.gz && cd googletest-release-1.7.0

# Build gtest
mkdir build && cd build && cmake ..
make

# Install the gtest library
sudo cp -r ../include/gtest /usr/local/include/
sudo cp lib*.a /usr/local/lib
```
- **GPU Dependencies (optional)**
To build the GPU version, you will need the following installed:

1. a CUDA-capable GPU
2. Mac OS X 10.11 or later
3. the Clang compiler and toolchain installed using Xcode
4. NVIDIA CUDA Toolkit (available at http://developer.nvidia.com/cuda-downloads)
5. NVIDIA cuDNN Library (available at https://developer.nvidia.com/cudnn)
The CUDA development environment relies on tight integration with the host development environment,
including the host compiler and C runtime libraries, and is therefore only supported on
distribution versions that have been qualified for this CUDA Toolkit release.
1. After downloading cuDNN library, issue the following commands:
```bash
sudo tar -xzf cudnn-7.5-osx-x64-v5.0-ga.tgz -C /usr/local
sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib/libcudnn*
```
2. Then you need to set the DYLD\_LIBRARY\_PATH and PATH environment variables in ~/.bashrc.
```bash
export DYLD_LIBRARY_PATH=/usr/local/cuda/lib:$DYLD_LIBRARY_PATH
export PATH=/usr/local/cuda/bin:$PATH
```
### Build and Install
As usual, the best option is to create a build folder under the paddle project directory.
```bash
mkdir build && cd build
cmake ..
```
CMake first checks for PaddlePaddle's dependencies in the system default paths. After you install some of the optional
libraries, the corresponding build options will be set automatically (for instance, glog, gtest and gflags).
If a dependency is still not found, you can set its location manually based on the CMake error messages on your screen; see the sketch below.
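For instance, one generic way to point CMake at libraries installed in a non-default prefix (a sketch using standard CMake behavior, not a PaddlePaddle-specific flag; the path is illustrative):

```bash
# Tell CMake to also search /opt/local when locating dependencies.
cmake .. -DCMAKE_PREFIX_PATH=/opt/local
```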
As a simple example, consider the following:
- **Only CPU**
```bash
cmake .. -DWITH_GPU=OFF
```
- **GPU**
```bash
cmake .. -DWITH_GPU=ON
```
- **GPU with doc and swig**
```bash
cmake .. -DWITH_GPU=ON -DWITH_DOC=ON -DWITH_SWIG_PY=ON
```
Finally, you can build PaddlePaddle:
```bash
# you can add build options here, such as:
cmake .. -DWITH_GPU=ON -DCMAKE_INSTALL_PREFIX=<installation path>
# please use sudo make install, if you want to install PaddlePaddle into the system
make -j `nproc` && make install
# set PaddlePaddle installation path in ~/.bashrc
export PATH=<installation path>/bin:$PATH
```
**Note:**
If you set `WITH_SWIG_PY=ON`, the related Python dependencies also need to be installed.
Otherwise, PaddlePaddle will automatically install the Python dependencies
the first time a user runs a paddle command, such as `paddle version` or `paddle train`.
This may require sudo privileges:
```bash
# you can run
sudo pip install <path to install>/opt/paddle/share/wheels/*.whl
# or just run
sudo paddle version
```
We sincerely appreciate your contributions. You can use the fork and pull request
workflow to merge your code.
## Code Requirements

- Your code must be fully documented by
  [doxygen](http://www.stack.nl/~dimitri/doxygen/) style.
- Make sure the compiler option WITH\_STYLE\_CHECK is on and the compiler
  passes the code style check.
It's just that simple.
## Clone

Paddle is currently using the [git-flow branching model](http://nvie.com/posts/a-successful-git-branching-model/).
The **develop** branch is the main branch, and users' branches are feature branches.

Once you've created a fork, you can use your favorite git client to clone your
repo or just head straight to the command line:

```shell
# Clone your fork to your local machine
git clone --branch develop https://github.com/USERNAME/Paddle.git
```

If your repository doesn't contain a **develop** branch, create one yourself.

```shell
git clone https://github.com/USERNAME/Paddle.git Paddle
cd Paddle
git checkout -b develop  # create the develop branch
git remote add upstream https://github.com/baidu/Paddle.git  # add upstream to baidu/Paddle
git pull upstream develop  # update to upstream
```
Then you can start developing by making a local development branch:

```shell
git checkout -b MY_COOL_STUFF_BRANCH
```
## Commit
Commit your changes with the following command lines:

```shell
# show the working tree status
git status
# add modified files
git add xx
env EDITOR=vim git commit  # You can write your comments with vim/nano/emacs.
```
The first line of the commit message is the title. The second and later lines
are the details, if any.
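For example, a commit message following this convention might look like the following (contents are illustrative):

```
Fix module name in quick start data provider

The data provider module should be dataprovider_bow, not
dataprovider_pow. Update the network configuration accordingly.
```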
Update your fork with the latest upstream changes:

```shell
git pull --rebase upstream develop
```

If there are no unique commits locally, git will simply perform a fast-forward.
Now, your local develop branch is up-to-date with everything modified upstream.
```shell
# push to your repository on GitHub
git push -u origin MY_COOL_STUFF_BRANCH  # create remote branch MY_COOL_STUFF_BRANCH on origin
```
## Pull Request
In case of conflict, you need to do the update manually. You need to do the following on
your local repository:
```shell ```shell
git checkout MY_COOL_STUFF_BRANCH git checkout MY_COOL_STUFF_BRANCH
git pull --rebase upstream HEAD git pull upstream develop
# You may need to resolve the conflict according to the git prompt. # You may need to resolve the conflict according to the git prompt.
# Make and test your code. # Make and test your code.
git push -f origin HEAD git push origin MY_COOL_STUFF_BRANCH
``` ```
Now your Pull Request is updated with the latest version. Now your Pull Request is updated with the latest version.
## Revise your pull request

When you revise your pull request according to reviewers' comments, please use `git commit` instead of `git commit --amend` to commit your changes, so that the reviewers can see the difference between the new pull request and the old one.

The possible commands are:

```shell
git checkout MY_COOL_STUFF_BRANCH
git pull upstream develop    # update local to the newest code base
# Some conflicts may occur here.
# Develop your cool stuff.
env EDITOR=vim git commit    # add your revision log
git push origin MY_COOL_STUFF_BRANCH
```
Docker installation guide
=========================
PaddlePaddle provides the `Docker <https://www.docker.com/>`_ image. `Docker`_ is a lightweight container utility. The performance of PaddlePaddle in a `Docker`_ container is basically the same as running it on normal Linux, and `Docker`_ is a very convenient way to deliver the binary release of Linux programs.

.. note::

   The `Docker`_ image is the recommended way to run PaddlePaddle.

PaddlePaddle Docker images
--------------------------
There are 12 `images <https://hub.docker.com/r/paddledev/paddle/tags/>`_ for PaddlePaddle. The image name is :code:`paddledev/paddle`, and the tags are:

+-----------------+------------------+------------------------+-----------------------+
|                 | normal           | devel                  | demo                  |
+=================+==================+========================+=======================+
| CPU             | cpu-latest       | cpu-devel-latest       | cpu-demo-latest       |
+-----------------+------------------+------------------------+-----------------------+
| GPU             | gpu-latest       | gpu-devel-latest       | gpu-demo-latest       |
+-----------------+------------------+------------------------+-----------------------+
| CPU WITHOUT AVX | cpu-noavx-latest | cpu-devel-noavx-latest | cpu-demo-noavx-latest |
+-----------------+------------------+------------------------+-----------------------+
| GPU WITHOUT AVX | gpu-noavx-latest | gpu-devel-noavx-latest | gpu-demo-noavx-latest |
+-----------------+------------------+------------------------+-----------------------+
The three columns stand for:

* normal: the Docker image only contains the PaddlePaddle binary.
* devel: the Docker image contains the PaddlePaddle binary, the source code, and the essential build environment.
* demo: the Docker image contains the dependencies needed to run the PaddlePaddle demos.

And the four rows stand for:

* CPU: CPU version; requires a CPU with :code:`AVX` instructions.
* GPU: GPU version; also requires a CPU with :code:`AVX` instructions.
* CPU WITHOUT AVX: CPU version; supports most CPUs, even those without :code:`AVX` instructions.
* GPU WITHOUT AVX: GPU version; supports most CPUs, even those without :code:`AVX` instructions.
Users can choose a version depending on their machine. The following script detects whether your CPU supports :code:`AVX`:

.. code-block:: bash

   if cat /proc/cpuinfo | grep -q avx ; then echo "Support AVX"; else echo "Not support AVX"; fi
If the output is :code:`Support AVX`, you can choose an AVX version of PaddlePaddle; otherwise, you need to select a :code:`noavx` version. For example, the CPU develop version of PaddlePaddle is :code:`paddledev/paddle:cpu-devel-latest`.

The PaddlePaddle images don't contain any entry command. You need to write your own entry command to use the image. See the :code:`Remote Access` part, or just use the following command to run :code:`bash`:
.. code-block:: bash

   docker run -it paddledev/paddle:cpu-latest /bin/bash
Download and Run Docker images
------------------------------
You first have to install Docker on a machine with Linux kernel version 3.10+. Refer to the official guide https://docs.docker.com/engine/installation/ for further information.

You can use :code:`docker pull` to download the images first, or just launch a container with :code:`docker run`:
.. code-block:: bash

   docker run -it paddledev/paddle:cpu-latest
If you want to launch a container with GPU support, you need to set some environment variables at the same time:

.. code-block:: bash

   export CUDA_SO="$(\ls /usr/lib64/libcuda* | xargs -I{} echo '-v {}:{}') $(\ls /usr/lib64/libnvidia* | xargs -I{} echo '-v {}:{}')"
   export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}')
   docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddle:gpu-latest
Some notes for Docker
---------------------

Performance
+++++++++++
Since Docker is based on lightweight virtual containers, the CPU computing performance is mostly preserved. The GPU driver and devices are all mapped into the container, so the GPU computing performance is not seriously affected either.
If you use a high-performance NIC, such as RDMA (RoCE 40GbE or IB 56GbE) or Ethernet (…)
Remote access
+++++++++++++

If you want to enable ssh access in the background, you need to build an image by yourself. Please refer to the official guide https://docs.docker.com/engine/reference/builder/ for further information.

Following is a simple Dockerfile with ssh:

.. code-block:: bash

   FROM paddledev/paddle
   MAINTAINER PaddlePaddle dev team <paddle-dev@baidu.com>

   RUN apt-get update
   RUN apt-get install -y openssh-server
   RUN mkdir /var/run/sshd
   RUN echo 'root:root' | chpasswd

   RUN sed -ri 's/^PermitRootLogin\s+.*/PermitRootLogin yes/' /etc/ssh/sshd_config
   RUN sed -ri 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config

   EXPOSE 22
   CMD ["/usr/sbin/sshd", "-D"]

Then you can build an image with the Dockerfile and launch a container:

.. code-block:: bash

   # cd into the Dockerfile directory
   docker build . -t paddle_ssh
   # run the container, and map host machine port 8022 to container port 22
   docker run -d -p 8022:22 --name paddle_ssh_machine paddle_ssh
Now you can ssh to port 8022 to access the container; the username is root, and the password is also root:

.. code-block:: bash

   ssh -p 8022 root@YOUR_HOST_MACHINE
You can stop and delete the container as follows:

.. code-block:: bash

   # stop
   docker stop paddle_ssh_machine
   # delete
   docker rm paddle_ssh_machine
Install PaddlePaddle
--------------------

.. toctree::
   :maxdepth: 1
   :glob:

   install_*
   internal/install_from_jumbo.md
   docker_install.rst
   ubuntu_install.rst
Build from Source
-----------------

.. warning::

   Please use the :code:`deb` package or the :code:`docker` image to install PaddlePaddle. The building guide is meant for hacking on or contributing to PaddlePaddle.

If you want to hack on and contribute to the PaddlePaddle source code, the following guides can help you:

.. toctree::
   :maxdepth: 1
   :glob:

   build_from_source.md
   contribute_to_paddle.md
Debian Package installation guide
=================================

PaddlePaddle supports the :code:`deb` package. The installation of this :code:`deb` package is tested on Ubuntu 14.04, but it should support other Debian-based Linux distributions, too.

There are four versions of the debian package: :code:`cpu`, :code:`gpu`, :code:`cpu-noavx` and :code:`gpu-noavx`. The :code:`noavx` versions support CPUs that do not have :code:`AVX` instructions. The download url of the :code:`deb` packages is: https://github.com/baidu/Paddle/releases/

After downloading the PaddlePaddle deb packages, you can install them with :code:`gdebi`:

.. code-block:: bash

   gdebi paddle-*.deb

If :code:`gdebi` is not installed, you can use :code:`sudo apt-get install gdebi` to install it.

Or you can use the following commands to install PaddlePaddle:

.. code-block:: bash

   dpkg -i paddle-*.deb
   apt-get install -f

If you use the GPU version of the deb package, you need to install the CUDA toolkit and cuDNN, and set the related environment variables (such as LD_LIBRARY_PATH) first. It is normal for `dpkg -i` to report errors; `apt-get install -f` will continue installing paddle and its dependencies.
# Distributed Training

In this article, we explain how to run distributed Paddle training jobs on clusters. We will create the distributed version of the single-process training example, [recommendation](https://github.com/baidu/Paddle/tree/develop/demo/recommendation).

[Scripts](https://github.com/baidu/Paddle/tree/develop/paddle/scripts/cluster_train) used in this article launch distributed jobs via SSH. They also work as a reference for users running more sophisticated cluster management systems like MPI and Kubernetes.

## Prerequisites

1. The aforementioned scripts use a Python library [fabric](http://www.fabfile.org/) to run SSH commands. We can use `pip` to install fabric:

   ```bash
   pip install fabric
   ```

1. We need to install PaddlePaddle on all nodes in the cluster. To enable GPUs, we need to install CUDA in `/usr/local/cuda`; otherwise Paddle would report errors at runtime.

1. Set the `ROOT_DIR` variable in [`cluster_train/conf.py`] on all nodes. For convenience, we often create a Unix user `paddle` on all nodes and set `ROOT_DIR=/home/paddle`. In this way, we can write public SSH keys into `/home/paddle/.ssh/authorized_keys` so that user `paddle` can SSH to all nodes without a password, as sketched below.
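An illustrative setup, assuming a Unix user `paddle` exists on every node (host names and paths are placeholders):

```bash
# In cluster_train/conf.py, set the workspace root used on all nodes:
#     ROOT_DIR = "/home/paddle"
# Then distribute a public SSH key so `paddle` can log in without a password:
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id paddle@node1    # repeat for each node in the cluster
```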
## Prepare Job Workspace

We refer to the directory where we put dependent libraries, config files, etc., as the *workspace*.

The train/test data should be prepared before launching a cluster job. To satisfy the requirement that train/test data can be placed in a different directory from the workspace, Paddle locates train/test data through index files named `train.list` and `test.list`, which are referenced in the model config file. So the train/test data directory also contains these two list files. All local training demos already provide scripts to help you create the two files, and all nodes in a cluster job handle them with the same logic.
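A list file is simply an index with one data file path per line; for example, a `train.list` might look like this (paths are illustrative):

```
data/train/part-00000
data/train/part-00001
```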
# Quick Start

This tutorial will teach the basics of deep learning (DL), including how to implement many different models in PaddlePaddle. You will learn how to:

- Prepare data into the standardized format that PaddlePaddle accepts.
You need to add a data provider definition `define_py_data_sources2` in our network configuration. This definition specifies:

- The path of the training and testing data (`data/train.list`, `data/test.list`).
- The location of the data provider file (`dataprovider_bow`).
- The function to call to get data (`process`).
- Additional arguments or data. Here it passes the path of the word dictionary.

A sketch of such a call is shown below.
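Assuming the word dictionary path has been read into `word_dict` (the argument values here are illustrative, following the description above):

```python
define_py_data_sources2(train_list='data/train.list',
                        test_list='data/test.list',
                        module='dataprovider_bow',
                        obj='process',
                        args={'dictionary': word_dict})
```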
The scripts for data downloading, network configuration, and training are included with the tutorial. From the results table, the word embedding model is 15MB with an 8.484% error rate (`trainer_config.emb.py`).
# Semantic Role Labeling Tutorial #

Semantic role labeling (SRL) is a form of shallow semantic parsing whose goal is to discover the predicate-argument structure of each predicate in a given input sentence. SRL is useful as an intermediate step in a wide range of natural language processing tasks, such as information extraction, automatic document categorization and question answering. An instance is as follows [1]:

[ <sub>A0</sub> He ] [ <sub>AM-MOD</sub> would ][ <sub>AM-NEG</sub> n’t ] [ <sub>V</sub> accept] [ <sub>A1</sub> anything of value ] from [<sub>A2</sub> those he was writing about ].

- V: verb
- A0: acceptor
- A1: thing accepted
- A2: accepted-from
- A3: attribute
- AM-MOD: modal
- AM-NEG: negation

Given the verb "accept", the chunks in the sentence play certain semantic roles. Here, the label scheme is from the Penn Proposition Bank.

To this date, most of the successful SRL systems are built on top of some form of parsing results, where pre-defined feature templates over the syntactic structure are used. This tutorial presents an end-to-end system using a deep bidirectional long short-term memory (DB-LSTM) network [2] to solve the SRL task, which largely outperforms the previous state-of-the-art systems. The system regards the SRL task as a sequence labeling problem.

## Data Description

The relevant paper [2] takes the data set of the CoNLL-2005 & 2012 Shared Tasks for training and testing. According to the data license, the demo adopts the test data set of CoNLL-2005, which can be obtained from its website.
To download and process the original data, users just need to execute the following command:

```bash
cd data
./get_data.sh
```

Several new files appear in the `data` directory as follows.
```bash
conll05st-release: the test data set of the CoNLL-2005 shared task
test.wsj.words: the Wall Street Journal data sentences
test.wsj.props: the propositional arguments
src.dict: the dictionary of words in sentences
tgt.dict: the labels dictionary
feature: the extracted features from the data set
```
## Training

### DB-LSTM

Please refer to the Sentiment Analysis demo to learn more about the long short-term memory unit.

Unlike the Bidirectional-LSTM used in the Sentiment Analysis demo, the DB-LSTM adopts another way to stack LSTM layers. First, a standard LSTM processes the sequence in the forward direction. The input and output of this LSTM layer are taken by the next LSTM layer as input, and processed in the reversed direction. These two standard LSTM layers compose a pair of LSTMs. We then stack LSTM layers pair after pair to obtain the deep LSTM model; a configuration sketch follows the figure below.

The following figure shows a temporally expanded 2-layer DB-LSTM network.

<center>
![pic](./network_arch.png)
</center>
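A minimal sketch of this pair-wise stacking with `paddle.trainer_config_helpers` (layer sizes, names and depth are illustrative; this is not the demo's actual `db_lstm.py`):

```python
from paddle.trainer_config_helpers import *

word = data_layer(name='word', size=10000)        # hypothetical vocabulary size
emb = embedding_layer(input=word, size=128)

hidden = mixed_layer(size=512, input=[full_matrix_projection(input=emb)])
lstm = lstmemory(input=hidden)                    # layer 1: forward direction
for depth in range(1, 4):                         # layers 2-4, alternating direction
    hidden = mixed_layer(size=512,
                         input=[full_matrix_projection(input=hidden),
                                full_matrix_projection(input=lstm)])
    lstm = lstmemory(input=hidden, reverse=(depth % 2 == 1))
```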
### Features

Two input features play an essential role in this pipeline: predicate (pred) and argument (argu). Two other features, predicate context (ctx-p) and region mark (m<sub>r</sub>), are also adopted, because a single predicate word cannot exactly describe the predicate information, especially when the same word appears more than once in a sentence. With the predicate context, the ambiguity can be largely eliminated. Similarly, we use the region mark m<sub>r</sub> = 1 to denote that an argument position is inside the predicate context region, and m<sub>r</sub> = 0 otherwise. These four simple features are all we need for our SRL system. The features of one sample with context size set to 1 are shown as follows [2]:

<center>
![pic](./feature.jpg)
</center>

In this sample, the corresponding labeled sentence is:

[ <sub>A1</sub> A record date ] has [ <sub>AM-NEG</sub> n't ] been [ <sub>V</sub> set ] .

In the demo, we adopt the feature template as above, consisting of `argument`, `predicate`, `ctx-p (p=-1,0,1)` and `mark`, and use the `B/I/O` scheme to label each argument. These features and labels are stored in the `feature` file, separated by `\t`.
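As a rough illustration of the region-mark feature (a sketch, not the demo's actual extraction code):

```python
# Region mark: 1 if the token lies inside the predicate context window, else 0.
def region_marks(sent_len, pred_pos, ctx_size=1):
    lo, hi = pred_pos - ctx_size, pred_pos + ctx_size
    return [1 if lo <= i <= hi else 0 for i in range(sent_len)]

# For a 6-token sentence with the predicate at position 3:
# region_marks(6, 3) == [0, 0, 1, 1, 1, 0]
```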
### Data Provider

`dataprovider.py` is the Python file that wraps the data. The `hook()` function defines the data slots for the network. The six features and the label are all index slots.

```
def hook(settings, word_dict, label_dict, **kwargs):
    settings.word_dict = word_dict
    settings.label_dict = label_dict
    # all inputs are of integral and sequential type
    settings.slots = [
        integer_value_sequence(len(word_dict)),
        integer_value_sequence(len(word_dict)),
        integer_value_sequence(len(word_dict)),
        integer_value_sequence(len(word_dict)),
        integer_value_sequence(len(word_dict)),
        integer_value_sequence(2),
        integer_value_sequence(len(label_dict))]
```
The corresponding data iterator is as follows:

```
@provider(use_seq=True, init_hook=hook)
def process(obj, file_name):
    with open(file_name, 'r') as fdata:
        for line in fdata:
            sentence, predicate, ctx_n1, ctx_0, ctx_p1, mark, label = \
                line.strip().split('\t')
            words = sentence.split()
            sen_len = len(words)
            word_slot = [obj.word_dict.get(w, UNK_IDX) for w in words]

            predicate_slot = [obj.word_dict.get(predicate, UNK_IDX)] * sen_len
            ctx_n1_slot = [obj.word_dict.get(ctx_n1, UNK_IDX)] * sen_len
            ctx_0_slot = [obj.word_dict.get(ctx_0, UNK_IDX)] * sen_len
            ctx_p1_slot = [obj.word_dict.get(ctx_p1, UNK_IDX)] * sen_len

            marks = mark.split()
            mark_slot = [int(w) for w in marks]

            label_list = label.split()
            label_slot = [obj.label_dict.get(w) for w in label_list]

            yield word_slot, predicate_slot, ctx_n1_slot, ctx_0_slot, ctx_p1_slot, mark_slot, label_slot
```
The `process` function yields 7 lists: the six features and the labels.
### Neural Network Config

`db_lstm.py` is the neural network config file that loads the dictionaries and defines the data provider module and the network architecture during the training procedure.

Seven `data_layer`s load instances from the data provider. The six features are transformed into embeddings respectively and merged by `mixed_layer`. The deep bidirectional LSTM layers extract features for the softmax layer. The objective function is the cross entropy of the labels.
### Run Training

The script for training is `train.sh`; users just need to execute:

```bash
./train.sh
```

The content of `train.sh`:

```
paddle train \
  --config=./db_lstm.py \
  --save_dir=./output \
  --trainer_count=4 \
  --log_period=10 \
  --num_passes=500 \
  --use_gpu=false \
  --show_parameter_stats_period=10 \
  --test_all_data_in_one_period=1 \
2>&1 | tee 'train.log'
```
- \--config=./db_lstm.py : network config file.
- \--save_dir=./output: output path to save models.
- \--trainer_count=4 : set thread number (or GPU count).
- \--log_period=10 : print a log every 10 batches.
- \--num_passes=500: set the number of passes; one pass in PaddlePaddle means training all samples in the dataset once.
- \--use_gpu=false: use CPU to train; set it to true if you installed the GPU version of PaddlePaddle and want to train on GPU.
- \--show_parameter_stats_period=10: show parameter statistics every 10 batches.
- \--test_all_data_in_one_period=1: test all data in every testing period.

After training, the models will be saved in the directory `output`.
### Run testing

The script for testing is `test.sh`; users just need to execute:

```bash
./test.sh
```

The main part of `test.sh`:

```
paddle train \
  --config=./db_lstm.py \
  --model_list=$model_list \
  --job=test \
  --config_args=is_test=1 \
```
- \--config=./db_lstm.py: network config file
- \--model_list=$model_list.list: model list file
- \--job=test: indicate the test job
- \--config_args=is_test=1: flag to indicate test
### Run prediction

The script for prediction is `predict.sh`; users just need to execute:

```bash
./predict.sh
```

In `predict.sh`, users should provide the network config file, model path, label file, word dictionary file, and feature file:

```
python predict.py \
     -c $config_file \
     -w $model_path \
     -l $label_file \
     -d $dict_file \
     -i $input_file
```
`predict.py` is the main executable Python script, which includes the following functions: loading the model, loading the data, and making predictions. The network model outputs the probability distribution of the labels. In the demo, we take the label with the maximum probability as the result; users can also implement beam search or Viterbi decoding upon the probability distribution matrix, as sketched below.

After prediction, the result is saved in `predict.res`.
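A rough sketch of the greedy decoding described above (a hypothetical helper, not part of `predict.py`), assuming `probs` is a `(sequence_length, num_labels)` probability matrix and `labels` maps a label index to its string:

```python
import numpy as np

def greedy_decode(probs, labels):
    """Pick the most probable label per position (no beam search / Viterbi)."""
    return [labels[i] for i in np.argmax(probs, axis=1)]
```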
## Reference

[1] Martha Palmer, Dan Gildea, and Paul Kingsbury. The Proposition Bank: An Annotated Corpus of Semantic Roles, Computational Linguistics, 31(1), 2005.

[2] Zhou, Jie, and Wei Xu. "End-to-end learning of semantic role labeling using recurrent neural networks." Proceedings of the Annual Meeting of the Association for Computational Linguistics. 2015.
PaddlePaddle Documentation

User Guide
----------

* [Introduction](introduction/index.md)
* [Quick Start](demo/quick_start/index_en.md)
* [Build and Installation](build/index.rst)
* [Contribute Code](build/contribute_to_paddle.md)
* [User Interface](ui/index.md)
* [Model Config Interface](ui/api/trainer_config_helpers/index.rst)
* [Example and Demo](demo/index.md)
* [Cluster Train](cluster/index.md)
# Introduction
PaddlePaddle is a deep learning platform open-sourced by Baidu. With PaddlePaddle, you can easily train a classic neural network within a couple of lines of configuration, or you can build sophisticated models that provide state-of-the-art performance on difficult learning tasks like sentiment analysis, machine translation, image captioning and so on.
## 1. A Classic Problem
Now, to give you a hint of what using PaddlePaddle looks like, let's start with a fundamental learning problem - <a href="https://en.wikipedia.org/wiki/Simple_linear_regression">**simple linear regression**</a>: you have observed a set of two-dimensional data points of `X` and `Y`, where `X` is an explanatory variable and `Y` is the corresponding dependent variable, and you want to recover the underlying correlation between `X` and `Y`. Linear regression can be used in many practical scenarios. For example, `X` can be a variable about house size, and `Y` a variable about house price. You can build a model that captures the relationship between them by observing real estate markets.
## 2. Prepare the Data
Suppose the true relationship can be characterized as `Y = 2X + 0.3`, let's see how to recover this pattern only from observed data. Here is a piece of python code that feeds synthetic data to PaddlePaddle. The code is pretty self-explanatory, the only extra thing you need to add for PaddlePaddle is a definition of input data types.
```python
# dataprovider.py
from paddle.trainer.PyDataProvider2 import *
import random
# define data types of input: 2 real numbers
@provider(input_types=[dense_vector(1), dense_vector(1)], use_seq=False)
def process(settings, input_file):
for i in xrange(2000):
x = random.random()
yield [x], [2*x+0.3]
```
## 3. Train a Neural Network in PaddlePaddle
To recover this relationship between `X` and `Y`, we use a neural network with one layer of linear activation units and a square error cost layer. Don't worry if you are not familiar with these terminologies, it's just saying that we are starting from a random line `Y' = wX + b` , then we gradually adapt `w` and `b` to minimize the difference between `Y'` and `Y`. Here is what it looks like in PaddlePaddle:
```python
# trainer_config.py
from paddle.trainer_config_helpers import *
# 1. read data. Suppose you saved above python code as dataprovider.py
data_file = 'empty.list'
with open(data_file, 'w') as f: f.writelines(' ')
define_py_data_sources2(train_list=data_file, test_list=None,
module='dataprovider', obj='process',args={})
# 2. learning algorithm
settings(batch_size=12, learning_rate=1e-3, learning_method=MomentumOptimizer())
# 3. Network configuration
x = data_layer(name='x', size=1)
y = data_layer(name='y', size=1)
y_predict = fc_layer(input=x, param_attr=ParamAttr(name='w'), size=1, act=LinearActivation(), bias_attr=ParamAttr(name='b'))
cost = regression_cost(input=y_predict, label=y)
outputs(cost)
```
Some of the most fundamental usages of PaddlePaddle are demonstrated:
- The first part shows how to feed data into PaddlePaddle. In general cases, PaddlePaddle reads raw data from a list of files, and then does some user-defined processing to get the real input. In this case, we only need to create a placeholder file since we are generating synthetic data on the fly.
- The second part describes learning algorithm. It defines in what ways adjustments are made to model parameters. PaddlePaddle provides a rich set of optimizers, but a simple momentum based optimizer will suffice here, and it processes 12 data points each time.
- Finally, the network configuration. It usually is as simple as "stacking" layers. Three kinds of layers are used in this configuration:
- **Data Layer**: a network always starts with one or more data layers. They provide input data to the rest of the network. In this problem, two data layers are used respectively for `X` and `Y`.
- **FC Layer**: FC layer is short for Fully Connected Layer, which connects all the input units to current layer and does the actual computation specified as activation function. Computation layers like this are the fundamental building blocks of a deeper model.
- **Cost Layer**: in the training phase, cost layers are usually the last layers of the network. They measure the performance of the current model, and provide guidance to adjust parameters.
Now that everything is ready, you can train the network with a simple command line call:
```
paddle train --config=trainer_config.py --save_dir=./output --num_passes=30
```
This means that PaddlePaddle will train this network on the synthetic dataset for 30 passes, and save all the models under the path `./output`. You will see from the messages printed out during the training phase that the model cost is decreasing as time goes by, which indicates we are getting a closer guess.
## 4. Evaluate the Model
Usually, a different dataset that was left out during the training phase should be used to evaluate the models. However, we are lucky enough to know the real answer: `w=2, b=0.3`, thus a better option is to check out the model parameters directly.

In PaddlePaddle, training is just to get a collection of model parameters, which are `w` and `b` in this case. Each parameter is saved in an individual file in the popular `numpy` array format. Here is the code that reads parameters from the last pass.
```python
import numpy as np
import os
def load(file_name):
with open(file_name, 'rb') as f:
f.read(16) # skip header for float type.
return np.fromfile(f, dtype=np.float32)
print 'w=%.6f, b=%.6f' % (load('output/pass-00029/w'), load('output/pass-00029/b'))
# w=1.999743, b=0.300137
```
<center> ![](./parameters.png) </center>
Although it starts from a random guess, you can see that the value of `w` changes quickly towards 2 and `b` changes quickly towards 0.3. In the end, the predicted line is almost identical to the real answer.
There, you have recovered the underlying pattern between `X` and `Y` only from observed data.
## 5. Where to Go from Here
- <a href="../build/index.html"> Build and Installation </a>
- <a href="../demo/quick_start/index_en.html">Quick Start</a>
- <a href="../demo/index.html">Example and Demo</a>
SumOfSquaresCostLayer
`````````````````````
.. doxygenclass:: paddle::SumOfSquaresCostLayer
   :members:

SumCostLayer
````````````
.. doxygenclass:: paddle::SumCostLayer
   :members:

CosSimLayer
-----------
.. doxygenclass:: paddle::CosSimLayer
   :members:
===========
Activations
===========

BaseActivation
==============

.. automodule:: paddle.trainer_config_helpers.activations
   :members: BaseActivation
   :noindex:

LinearActivation
================

.. automodule:: paddle.trainer_config_helpers.activations
   :members: LinearActivation
   :noindex:

LogActivation
=============

.. automodule:: paddle.trainer_config_helpers.activations
   :members: LogActivation
   :noindex:

SquareActivation
================

.. automodule:: paddle.trainer_config_helpers.activations
   :members: SquareActivation
   :noindex:

STanhActivation
===============

.. automodule:: paddle.trainer_config_helpers.activations
   :members: STanhActivation
   :noindex:
Activations
===========
.. toctree::
:maxdepth: 3
activations.rst
==========
Evaluators
==========
Base Base
==== ====
.. automodule:: paddle.trainer_config_helpers.evaluators .. automodule:: paddle.trainer_config_helpers.evaluators
......
Evaluators
==========
.. toctree::
:maxdepth: 3
evaluators.rst
# Model Config Interface Model Config Interface
======================
* [Optimizer](optimizers_index.rst) .. toctree::
* [Data Source](data_sources.rst) :maxdepth: 1
* [Layers](layers_index.rst)
* [Activations](activations_index.rst) optimizers.rst
* [Poolings](poolings_index.rst) data_sources.rst
* [Networks](networks_index.rst) layers.rst
* [Evaluators](evaluators_index.rst) activations.rst
* [Parameter and Extra Layer Attribute](attrs.rst) poolings.rst
networks.rst
evaluators.rst
attrs.rst
======
Layers
======
Base Base
====== ======
...@@ -46,6 +50,12 @@ conv_operator ...@@ -46,6 +50,12 @@ conv_operator
:members: conv_operator :members: conv_operator
:noindex: :noindex:
conv_projection
---------------
.. automodule:: paddle.trainer_config_helpers.layers
:members: conv_projection
:noindex:
conv_shift_layer conv_shift_layer
------------------ ------------------
.. automodule:: paddle.trainer_config_helpers.layers .. automodule:: paddle.trainer_config_helpers.layers
...@@ -71,6 +81,18 @@ img_pool_layer ...@@ -71,6 +81,18 @@ img_pool_layer
-------------- --------------
.. automodule:: paddle.trainer_config_helpers.layers .. automodule:: paddle.trainer_config_helpers.layers
:members: img_pool_layer :members: img_pool_layer
:noindex:
spp_layer
--------------
.. automodule:: paddle.trainer_config_helpers.layers
:members: spp_layer
:noindex:
maxout_layer
------------
.. automodule:: paddle.trainer_config_helpers.layers
:members: maxout_layer
:noindex: :noindex:
Norm Layer Norm Layer
...@@ -130,6 +152,12 @@ gru_step_layer ...@@ -130,6 +152,12 @@ gru_step_layer
Recurrent Layer Group Recurrent Layer Group
===================== =====================
memory
------
.. automodule:: paddle.trainer_config_helpers.layers
:members: memory
:noindex:
recurrent_group recurrent_group
--------------- ---------------
.. automodule:: paddle.trainer_config_helpers.layers .. automodule:: paddle.trainer_config_helpers.layers
...@@ -163,6 +191,12 @@ embedding_layer ...@@ -163,6 +191,12 @@ embedding_layer
:members: embedding_layer :members: embedding_layer
:noindex: :noindex:
scaling_projection
------------------
.. automodule:: paddle.trainer_config_helpers.layers
:members: scaling_projection
:noindex:
dotmul_projection dotmul_projection
----------------- -----------------
.. automodule:: paddle.trainer_config_helpers.layers .. automodule:: paddle.trainer_config_helpers.layers
...@@ -242,6 +276,12 @@ expand_layer ...@@ -242,6 +276,12 @@ expand_layer
:members: expand_layer :members: expand_layer
:noindex: :noindex:
repeat_layer
------------
.. automodule:: paddle.trainer_config_helpers.layers
:members: repeat_layer
:noindex:
Math Layers Math Layers
=========== ===========
...@@ -263,6 +303,12 @@ interpolation_layer ...@@ -263,6 +303,12 @@ interpolation_layer
:members: interpolation_layer :members: interpolation_layer
:noindex: :noindex:
bilinear_interp_layer
----------------------
.. automodule:: paddle.trainer_config_helpers.layers
:members: bilinear_interp_layer
:noindex:
power_layer power_layer
----------- -----------
.. automodule:: paddle.trainer_config_helpers.layers .. automodule:: paddle.trainer_config_helpers.layers
...@@ -371,12 +417,24 @@ ctc_layer ...@@ -371,12 +417,24 @@ ctc_layer
:members: ctc_layer :members: ctc_layer
:noindex: :noindex:
nce_layer
-----------
.. automodule:: paddle.trainer_config_helpers.layers
:members: nce_layer
:noindex:
hsigmoid hsigmoid
--------- ---------
.. automodule:: paddle.trainer_config_helpers.layers .. automodule:: paddle.trainer_config_helpers.layers
:members: hsigmoid :members: hsigmoid
:noindex: :noindex:
sum_cost
---------
.. automodule:: paddle.trainer_config_helpers.layers
:members: sum_cost
:noindex:
Check Layer Check Layer
============ ============
......
Layers
======
.. toctree::
:maxdepth: 3
layers.rst
========
Networks
========
The networks module contains pieces of neural networks that combine multiple layers.
NLP NLP
=== ===
...@@ -111,4 +117,3 @@ outputs ...@@ -111,4 +117,3 @@ outputs
.. automodule:: paddle.trainer_config_helpers.networks .. automodule:: paddle.trainer_config_helpers.networks
:members: outputs :members: outputs
:noindex: :noindex:
Networks
========
The networks module contains pieces of neural networks that combine multiple layers.
.. toctree::
:maxdepth: 3
networks.rst
==========
Optimizers
==========
BaseSGDOptimizer BaseSGDOptimizer
================ ================
.. automodule:: paddle.trainer_config_helpers.optimizers .. automodule:: paddle.trainer_config_helpers.optimizers
...@@ -51,4 +55,3 @@ settings ...@@ -51,4 +55,3 @@ settings
.. automodule:: paddle.trainer_config_helpers.optimizers .. automodule:: paddle.trainer_config_helpers.optimizers
:members: settings :members: settings
:noindex: :noindex:
Optimizers
==========
.. toctree::
:maxdepth: 3
optimizers.rst
========
Poolings
========
BasePoolingType BasePoolingType
=============== ===============
.. automodule:: paddle.trainer_config_helpers.poolings .. automodule:: paddle.trainer_config_helpers.poolings
...@@ -27,4 +31,3 @@ SquareRootNPooling ...@@ -27,4 +31,3 @@ SquareRootNPooling
.. automodule:: paddle.trainer_config_helpers.poolings .. automodule:: paddle.trainer_config_helpers.poolings
:members: SquareRootNPooling :members: SquareRootNPooling
:noindex: :noindex:
Poolings
========
These pooling types are used for sequence input, not for images.
.. toctree::
:maxdepth: 3
poolings.rst
...@@ -183,7 +183,7 @@ It looks like there are a lot of arguments. However, most of them are for develo ...@@ -183,7 +183,7 @@ It looks like there are a lot of arguments. However, most of them are for develo
</tr> </tr>
<tr> <tr>
<td class="left" rowspan = "5">GPU</td><td class="left">gpu_id</td> <td class="left" rowspan = "6">GPU</td><td class="left">gpu_id</td>
<td class="left">√</td><td class="left">√</td><td class="left">√</td><td class="left">√</td> <td class="left">√</td><td class="left">√</td><td class="left">√</td><td class="left">√</td>
</tr> </tr>
...@@ -207,6 +207,11 @@ It looks like there are a lot of arguments. However, most of them are for develo ...@@ -207,6 +207,11 @@ It looks like there are a lot of arguments. However, most of them are for develo
<td class="left">√</td><td class="left">√</td><td class="left">√</td><td class="left">√</td> <td class="left">√</td><td class="left">√</td><td class="left">√</td><td class="left">√</td>
</tr> </tr>
<tr>
<td class="left">cudnn_conv_workspace_limit_in_mb</td>
<td class="left">√</td><td class="left">√</td><td class="left">√</td><td class="left">√</td>
</tr>
<tr> <tr>
<td class="left" rowspan = "4">RNN</td> <td class="left" rowspan = "4">RNN</td>
<td class="left">beam_size</td> <td class="left">beam_size</td>
......
...@@ -163,6 +163,10 @@ ...@@ -163,6 +163,10 @@
- Choose the path from which to dynamically load the NVIDIA CUDA library, for instance, /usr/local/cuda/lib64. [Default]: LD_LIBRARY_PATH - Choose the path from which to dynamically load the NVIDIA CUDA library, for instance, /usr/local/cuda/lib64. [Default]: LD_LIBRARY_PATH
- type: string (default: "", null) - type: string (default: "", null)
* `--cudnn_conv_workspace_limit_in_mb`
  - Specify the cuDNN maximum workspace limit for convolution layers, in MB; the default is 4096 MB (4 GB).
  - type: int32 (default: 4096)
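  - For example (illustrative values), `paddle train --config=trainer_config.py --cudnn_conv_workspace_limit_in_mb=1024` caps the workspace at 1 GB.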
## NLP: RNN/LSTM/GRU ## NLP: RNN/LSTM/GRU
* `--rnn_use_batch` * `--rnn_use_batch`
- Whether to use batch method for calculation in simple RecurrentLayer. - Whether to use batch method for calculation in simple RecurrentLayer.
......
...@@ -226,6 +226,106 @@ var Scorer = { ...@@ -226,6 +226,106 @@ var Scorer = {
}; };
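// Lookup table of character codes that act as word separators when the
// search tokenizer splits a query; built from single codes plus ranges.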
var splitChars = (function() {
var result = {};
var singles = [96, 180, 187, 191, 215, 247, 749, 885, 903, 907, 909, 930, 1014, 1648,
1748, 1809, 2416, 2473, 2481, 2526, 2601, 2609, 2612, 2615, 2653, 2702,
2706, 2729, 2737, 2740, 2857, 2865, 2868, 2910, 2928, 2948, 2961, 2971,
2973, 3085, 3089, 3113, 3124, 3213, 3217, 3241, 3252, 3295, 3341, 3345,
3369, 3506, 3516, 3633, 3715, 3721, 3736, 3744, 3748, 3750, 3756, 3761,
3781, 3912, 4239, 4347, 4681, 4695, 4697, 4745, 4785, 4799, 4801, 4823,
4881, 5760, 5901, 5997, 6313, 7405, 8024, 8026, 8028, 8030, 8117, 8125,
8133, 8181, 8468, 8485, 8487, 8489, 8494, 8527, 11311, 11359, 11687, 11695,
11703, 11711, 11719, 11727, 11735, 12448, 12539, 43010, 43014, 43019, 43587,
43696, 43713, 64286, 64297, 64311, 64317, 64319, 64322, 64325, 65141];
var i, j, start, end;
for (i = 0; i < singles.length; i++) {
result[singles[i]] = true;
}
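  // Inclusive ranges of character codes that are likewise separators.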
var ranges = [[0, 47], [58, 64], [91, 94], [123, 169], [171, 177], [182, 184], [706, 709],
[722, 735], [741, 747], [751, 879], [888, 889], [894, 901], [1154, 1161],
[1318, 1328], [1367, 1368], [1370, 1376], [1416, 1487], [1515, 1519], [1523, 1568],
[1611, 1631], [1642, 1645], [1750, 1764], [1767, 1773], [1789, 1790], [1792, 1807],
[1840, 1868], [1958, 1968], [1970, 1983], [2027, 2035], [2038, 2041], [2043, 2047],
[2070, 2073], [2075, 2083], [2085, 2087], [2089, 2307], [2362, 2364], [2366, 2383],
[2385, 2391], [2402, 2405], [2419, 2424], [2432, 2436], [2445, 2446], [2449, 2450],
[2483, 2485], [2490, 2492], [2494, 2509], [2511, 2523], [2530, 2533], [2546, 2547],
[2554, 2564], [2571, 2574], [2577, 2578], [2618, 2648], [2655, 2661], [2672, 2673],
[2677, 2692], [2746, 2748], [2750, 2767], [2769, 2783], [2786, 2789], [2800, 2820],
[2829, 2830], [2833, 2834], [2874, 2876], [2878, 2907], [2914, 2917], [2930, 2946],
[2955, 2957], [2966, 2968], [2976, 2978], [2981, 2983], [2987, 2989], [3002, 3023],
[3025, 3045], [3059, 3076], [3130, 3132], [3134, 3159], [3162, 3167], [3170, 3173],
[3184, 3191], [3199, 3204], [3258, 3260], [3262, 3293], [3298, 3301], [3312, 3332],
[3386, 3388], [3390, 3423], [3426, 3429], [3446, 3449], [3456, 3460], [3479, 3481],
[3518, 3519], [3527, 3584], [3636, 3647], [3655, 3663], [3674, 3712], [3717, 3718],
[3723, 3724], [3726, 3731], [3752, 3753], [3764, 3772], [3774, 3775], [3783, 3791],
[3802, 3803], [3806, 3839], [3841, 3871], [3892, 3903], [3949, 3975], [3980, 4095],
[4139, 4158], [4170, 4175], [4182, 4185], [4190, 4192], [4194, 4196], [4199, 4205],
[4209, 4212], [4226, 4237], [4250, 4255], [4294, 4303], [4349, 4351], [4686, 4687],
[4702, 4703], [4750, 4751], [4790, 4791], [4806, 4807], [4886, 4887], [4955, 4968],
[4989, 4991], [5008, 5023], [5109, 5120], [5741, 5742], [5787, 5791], [5867, 5869],
[5873, 5887], [5906, 5919], [5938, 5951], [5970, 5983], [6001, 6015], [6068, 6102],
[6104, 6107], [6109, 6111], [6122, 6127], [6138, 6159], [6170, 6175], [6264, 6271],
[6315, 6319], [6390, 6399], [6429, 6469], [6510, 6511], [6517, 6527], [6572, 6592],
[6600, 6607], [6619, 6655], [6679, 6687], [6741, 6783], [6794, 6799], [6810, 6822],
[6824, 6916], [6964, 6980], [6988, 6991], [7002, 7042], [7073, 7085], [7098, 7167],
[7204, 7231], [7242, 7244], [7294, 7400], [7410, 7423], [7616, 7679], [7958, 7959],
[7966, 7967], [8006, 8007], [8014, 8015], [8062, 8063], [8127, 8129], [8141, 8143],
[8148, 8149], [8156, 8159], [8173, 8177], [8189, 8303], [8306, 8307], [8314, 8318],
[8330, 8335], [8341, 8449], [8451, 8454], [8456, 8457], [8470, 8472], [8478, 8483],
[8506, 8507], [8512, 8516], [8522, 8525], [8586, 9311], [9372, 9449], [9472, 10101],
[10132, 11263], [11493, 11498], [11503, 11516], [11518, 11519], [11558, 11567],
[11622, 11630], [11632, 11647], [11671, 11679], [11743, 11822], [11824, 12292],
[12296, 12320], [12330, 12336], [12342, 12343], [12349, 12352], [12439, 12444],
[12544, 12548], [12590, 12592], [12687, 12689], [12694, 12703], [12728, 12783],
[12800, 12831], [12842, 12880], [12896, 12927], [12938, 12976], [12992, 13311],
[19894, 19967], [40908, 40959], [42125, 42191], [42238, 42239], [42509, 42511],
[42540, 42559], [42592, 42593], [42607, 42622], [42648, 42655], [42736, 42774],
[42784, 42785], [42889, 42890], [42893, 43002], [43043, 43055], [43062, 43071],
[43124, 43137], [43188, 43215], [43226, 43249], [43256, 43258], [43260, 43263],
[43302, 43311], [43335, 43359], [43389, 43395], [43443, 43470], [43482, 43519],
[43561, 43583], [43596, 43599], [43610, 43615], [43639, 43641], [43643, 43647],
[43698, 43700], [43703, 43704], [43710, 43711], [43715, 43738], [43742, 43967],
[44003, 44015], [44026, 44031], [55204, 55215], [55239, 55242], [55292, 55295],
[57344, 63743], [64046, 64047], [64110, 64111], [64218, 64255], [64263, 64274],
[64280, 64284], [64434, 64466], [64830, 64847], [64912, 64913], [64968, 65007],
[65020, 65135], [65277, 65295], [65306, 65312], [65339, 65344], [65371, 65381],
[65471, 65473], [65480, 65481], [65488, 65489], [65496, 65497]];
for (i = 0; i < ranges.length; i++) {
start = ranges[i][0];
end = ranges[i][1];
for (j = start; j <= end; j++) {
result[j] = true;
}
}
return result;
})();
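// Split a query into terms, treating any character in splitChars as a
// delimiter. Unlike the old /\W+/ split it replaces (see below), this keeps
// non-ASCII words intact, e.g. splitQuery("hello,世界") -> ["hello", "世界"].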
function splitQuery(query) {
var result = [];
var start = -1;
for (var i = 0; i < query.length; i++) {
if (splitChars[query.charCodeAt(i)]) {
if (start !== -1) {
result.push(query.slice(start, i));
start = -1;
}
} else if (start === -1) {
start = i;
}
}
if (start !== -1) {
result.push(query.slice(start));
}
return result;
}
/** /**
* Search Module * Search Module
*/ */
...@@ -324,7 +424,7 @@ var Search = { ...@@ -324,7 +424,7 @@ var Search = {
var searchterms = []; var searchterms = [];
var excluded = []; var excluded = [];
var hlterms = []; var hlterms = [];
var tmp = query.split(/\W+/); var tmp = splitQuery(query);
var objectterms = []; var objectterms = [];
for (i = 0; i < tmp.length; i++) { for (i = 0; i < tmp.length; i++) {
if (tmp[i] !== "") { if (tmp[i] !== "") {
......
...@@ -330,7 +330,7 @@ Its <strong>output function</strong> simply takes <span class="math">\(x_t\)</sp ...@@ -330,7 +330,7 @@ Its <strong>output function</strong> simply takes <span class="math">\(x_t\)</sp
</div> </div>
<div class="footer" role="contentinfo"> <div class="footer" role="contentinfo">
&#169; Copyright 2016, PaddlePaddle developers. &#169; Copyright 2016, PaddlePaddle developers.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.8. Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.9.
</div> </div>
</body> </body>
</html> </html>
\ No newline at end of file
...@@ -31,7 +31,7 @@ ...@@ -31,7 +31,7 @@
<link rel="top" title="PaddlePaddle documentation" href="../index.html" /> <link rel="top" title="PaddlePaddle documentation" href="../index.html" />
<link rel="up" title="Build And Install PaddlePaddle" href="index.html" /> <link rel="up" title="Build And Install PaddlePaddle" href="index.html" />
<link rel="next" title="Contribute to PaddlePaddle" href="contribute_to_paddle.html" /> <link rel="next" title="Contribute to PaddlePaddle" href="contribute_to_paddle.html" />
<link rel="prev" title="Build And Install PaddlePaddle" href="index.html" /> <link rel="prev" title="Debian Package installation guide" href="ubuntu_install.html" />
<script> <script>
var _hmt = _hmt || []; var _hmt = _hmt || [];
(function() { (function() {
...@@ -57,7 +57,7 @@ var _hmt = _hmt || []; ...@@ -57,7 +57,7 @@ var _hmt = _hmt || [];
<a href="contribute_to_paddle.html" title="Contribute to PaddlePaddle" <a href="contribute_to_paddle.html" title="Contribute to PaddlePaddle"
accesskey="N">next</a> |</li> accesskey="N">next</a> |</li>
<li class="right" > <li class="right" >
<a href="index.html" title="Build And Install PaddlePaddle" <a href="ubuntu_install.html" title="Debian Package installation guide"
accesskey="P">previous</a> |</li> accesskey="P">previous</a> |</li>
<li class="nav-item nav-item-0"><a href="../index.html">PaddlePaddle documentation</a> &#187;</li> <li class="nav-item nav-item-0"><a href="../index.html">PaddlePaddle documentation</a> &#187;</li>
<li class="nav-item nav-item-1"><a href="index.html" accesskey="U">Build And Install PaddlePaddle</a> &#187;</li> <li class="nav-item nav-item-1"><a href="index.html" accesskey="U">Build And Install PaddlePaddle</a> &#187;</li>
...@@ -75,7 +75,6 @@ var _hmt = _hmt || []; ...@@ -75,7 +75,6 @@ var _hmt = _hmt || [];
<li><a class="reference external" href="#download">1. Download and Setup</a></li> <li><a class="reference external" href="#download">1. Download and Setup</a></li>
<li><a class="reference external" href="#requirements">2. Requirements</a></li> <li><a class="reference external" href="#requirements">2. Requirements</a></li>
<li><a class="reference external" href="#ubuntu">3. Build on Ubuntu</a></li> <li><a class="reference external" href="#ubuntu">3. Build on Ubuntu</a></li>
<li><a class="reference external" href="#mac">4. Build on Mac OS X</a></li>
</ul> </ul>
<div class="section" id="span-id-download-download-and-setup-span"> <div class="section" id="span-id-download-download-and-setup-span">
<span id="span-id-download-download-and-setup-span"></span><h2><span id="download">Download and Setup</span><a class="headerlink" href="#span-id-download-download-and-setup-span" title="Permalink to this headline"></a></h2> <span id="span-id-download-download-and-setup-span"></span><h2><span id="download">Download and Setup</span><a class="headerlink" href="#span-id-download-download-and-setup-span" title="Permalink to this headline"></a></h2>
...@@ -100,51 +99,26 @@ var _hmt = _hmt || []; ...@@ -100,51 +99,26 @@ var _hmt = _hmt || [];
<div class="section" id="options"> <div class="section" id="options">
<span id="options"></span><h3>Options<a class="headerlink" href="#options" title="Permalink to this headline"></a></h3> <span id="options"></span><h3>Options<a class="headerlink" href="#options" title="Permalink to this headline"></a></h3>
<p>PaddlePaddle supports some build options. To enable it, first you need to install the related libraries.</p> <p>PaddlePaddle supports some build options. To enable it, first you need to install the related libraries.</p>
<style type="text/css"> <p><html></p>
.tg {border-collapse:collapse;border-spacing:0;border-color:#ccc;} <table>
.tg td{font-family:Arial, sans-serif;font-size:14px;padding:10px 5px;border-style:solid;border-width:0px;overflow:hidden;word-break:normal;border-color:#ccc;color:#333;background-color:#fff;border-top-width:1px;border-bottom-width:1px;} <thead>
.tg th{font-family:Arial, sans-serif;font-size:14px;font-weight:normal;padding:10px 5px;border-style:solid;border-width:0px;overflow:hidden;word-break:normal;border-color:#ccc;color:#333;background-color:#f0f0f0;border-top-width:1px;border-bottom-width:1px;} <tr>
.tg .tg-yw4l{vertical-align:top} <th scope="col" class="left">Optional</th>
.tg .tg-9hbo{font-weight:bold;vertical-align:top} <th scope="col" class="left">Description</th>
</style> </tr>
<table class="tg"> </thead>
<tr> <tbody>
<th class="tg-yw4l">Optional</th> <tr><td class="left">WITH_GPU</td><td class="left">Compile with GPU mode.</td></tr>
<th class="tg-yw4l">Description</th> <tr><td class="left">WITH_DOUBLE</td><td class="left">Compile with double precision floating-point, default: single precision.</td></tr>
</tr> <tr><td class="left">WITH_GLOG</td><td class="left">Compile with glog. If not found, default: an internal log implementation.</td></tr>
<tr> <tr><td class="left">WITH_GFLAGS</td><td class="left">Compile with gflags. If not found, default: an internal flag implementation.</td></tr>
<td class="tg-9hbo">WITH_GPU</td> <tr><td class="left">WITH_TESTING</td><td class="left">Compile with gtest for PaddlePaddle's unit testing.</td></tr>
<td class="tg-yw4l">Compile with GPU mode.</td> <tr><td class="left">WITH_DOC</td><td class="left"> Compile to generate PaddlePaddle's docs, default: disabled (OFF).</td></tr>
</tr> <tr><td class="left">WITH_SWIG_PY</td><td class="left">Compile with python predict API, default: disabled (OFF).</td></tr>
<tr> <tr><td class="left">WITH_STYLE_CHECK</td><td class="left">Compile with code style check, default: enabled (ON).</td></tr>
<td class="tg-9hbo">WITH_DOUBLE</td> </tbody>
<td class="tg-yw4l">Compile with double precision floating-point, default: single precision.</td> </table>
</tr> </html><p><strong>Note:</strong></p>
<tr>
<td class="tg-9hbo">WITH_GLOG</td>
<td class="tg-yw4l">Compile with glog. If not found, default: an internal log implementation.</td>
</tr>
<tr>
<td class="tg-9hbo">WITH_GFLAGS</td>
<td class="tg-yw4l">Compile with gflags. If not found, default: an internal flag implementation.</td>
</tr>
<tr>
<td class="tg-9hbo">WITH_TESTING</td>
<td class="tg-yw4l">Compile with gtest for PaddlePaddle's unit testing.</td>
</tr>
<tr>
<td class="tg-9hbo">WITH_DOC</td>
<td class="tg-yw4l">Compile to generate PaddlePaddle's docs, default: disabled (OFF)</td>
</tr>
<tr>
<td class="tg-9hbo">WITH_SWIG_PY</td>
<td class="tg-yw4l">Compile with python predict API, default: disabled (OFF).</td>
</tr>
<tr>
<td class="tg-9hbo">WITH_STYLE_CHECK</td>
<td class="tg-yw4l">Compile with code style check, default: enabled (ON).</td>
</tr>
</table><p><strong>Note:</strong></p>
<ul class="simple"> <ul class="simple">
<li>The GPU version works best with Cuda Toolkit 7.5 and cuDNN v5.</li> <li>The GPU version works best with Cuda Toolkit 7.5 and cuDNN v5.</li>
<li>Other versions like Cuda Toolkit 6.5, 7.0, 8.0 and cuDNN v2, v3, v4 are also supported.</li> <li>Other versions like Cuda Toolkit 6.5, 7.0, 8.0 and cuDNN v2, v3, v4 are also supported.</li>
...@@ -241,12 +215,12 @@ If still not found, you can manually set it based on CMake error information fro ...@@ -241,12 +215,12 @@ If still not found, you can manually set it based on CMake error information fro
<p>As a simple example, consider the following:</p> <p>As a simple example, consider the following:</p>
<ul> <ul>
<li><p class="first"><strong>Only CPU</strong></p> <li><p class="first"><strong>Only CPU</strong></p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>cmake .. -DWITH_GPU<span class="o">=</span>OFF -DWITH_DOC<span class="o">=</span>OFF <div class="highlight-bash"><div class="highlight"><pre><span></span>cmake .. -DWITH_GPU<span class="o">=</span>OFF
</pre></div> </pre></div>
</div> </div>
</li> </li>
<li><p class="first"><strong>GPU</strong></p> <li><p class="first"><strong>GPU</strong></p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>cmake .. -DWITH_GPU<span class="o">=</span>ON -DWITH_DOC<span class="o">=</span>OFF <div class="highlight-bash"><div class="highlight"><pre><span></span>cmake .. -DWITH_GPU<span class="o">=</span>ON
</pre></div> </pre></div>
</div> </div>
</li> </li>
...@@ -258,7 +232,7 @@ If still not found, you can manually set it based on CMake error information fro ...@@ -258,7 +232,7 @@ If still not found, you can manually set it based on CMake error information fro
</ul> </ul>
<p>Finally, you can build PaddlePaddle:</p> <p>Finally, you can build PaddlePaddle:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span><span class="c1"># you can add build option here, such as: </span> <div class="highlight-bash"><div class="highlight"><pre><span></span><span class="c1"># you can add build option here, such as: </span>
cmake .. -DWITH_GPU<span class="o">=</span>ON -DWITH_DOC<span class="o">=</span>OFF -DCMAKE_INSTALL_PREFIX<span class="o">=</span>&lt;path to install&gt; cmake .. -DWITH_GPU<span class="o">=</span>ON -DCMAKE_INSTALL_PREFIX<span class="o">=</span>&lt;path to install&gt;
<span class="c1"># please use sudo make install, if you want to install PaddlePaddle into the system</span> <span class="c1"># please use sudo make install, if you want to install PaddlePaddle into the system</span>
make -j <span class="sb">`</span>nproc<span class="sb">`</span> <span class="o">&amp;&amp;</span> make install make -j <span class="sb">`</span>nproc<span class="sb">`</span> <span class="o">&amp;&amp;</span> make install
<span class="c1"># set PaddlePaddle installation path in ~/.bashrc</span> <span class="c1"># set PaddlePaddle installation path in ~/.bashrc</span>
...@@ -278,120 +252,6 @@ sudo paddle version ...@@ -278,120 +252,6 @@ sudo paddle version
</div> </div>
</div> </div>
</div> </div>
<div class="section" id="span-id-mac-building-on-mac-os-x-span">
<span id="span-id-mac-building-on-mac-os-x-span"></span><h2><span id="mac">Building on Mac OS X</span><a class="headerlink" href="#span-id-mac-building-on-mac-os-x-span" title="Permalink to this headline"></a></h2>
<div class="section" id="prerequisites">
<span id="prerequisites"></span><h3>Prerequisites<a class="headerlink" href="#prerequisites" title="Permalink to this headline"></a></h3>
<p>This guide is based on Mac OS X 10.11 (El Capitan). Note that if you are running an up to date version of OS X,
you will already have Python 2.7.10 and Numpy 1.8 installed.</p>
<p>The best option is to use the package manager homebrew to handle installations and upgrades for you.
To install <a class="reference external" href="http://brew.sh/">homebrew</a>, first open a terminal window (you can find Terminal in the Utilities folder in Applications), and issue the command:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span><span class="c1"># install brew</span>
/usr/bin/ruby -e <span class="s2">&quot;</span><span class="k">$(</span>curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install<span class="k">)</span><span class="s2">&quot;</span>
<span class="c1"># install pip</span>
easy_install pip
</pre></div>
</div>
</div>
<div class="section" id="install-dependencies">
<span id="id1"></span><h3>Install Dependencies<a class="headerlink" href="#install-dependencies" title="Permalink to this headline"></a></h3>
<ul>
<li><p class="first"><strong>CPU Dependencies</strong></p>
<div class="highlight-bash"><div class="highlight"><pre><span></span><span class="c1"># Install fundamental dependents </span>
brew install glog gflags cmake protobuf openblas
<span class="c1"># Install google test on Mac OS X</span>
<span class="c1"># Download gtest 1.7.0</span>
wget https://github.com/google/googletest/archive/release-1.7.0.tar.gz
tar -xvf release-1.7.0.tar.gz <span class="o">&amp;&amp;</span> <span class="nb">cd</span> googletest-release-1.7.0
<span class="c1"># Build gtest</span>
mkdir build <span class="o">&amp;&amp;</span> <span class="nb">cd</span> build <span class="o">&amp;&amp;</span> cmake ..
make
<span class="c1"># Install gtest library</span>
sudo cp -r ../include/gtest /usr/local/include/
sudo cp lib*.a /usr/local/lib
</pre></div>
</div>
</li>
<li><p class="first"><strong>GPU Dependencies(optional)</strong></p>
<p>To build GPU version, you will need the following installed:</p>
<div class="highlight-default"><div class="highlight"><pre><span></span> <span class="mf">1.</span> <span class="n">a</span> <span class="n">CUDA</span><span class="o">-</span><span class="n">capable</span> <span class="n">GPU</span>
<span class="mf">2.</span> <span class="n">Mac</span> <span class="n">OS</span> <span class="n">X</span> <span class="mf">10.11</span> <span class="ow">or</span> <span class="n">later</span>
<span class="mf">2.</span> <span class="n">the</span> <span class="n">Clang</span> <span class="n">compiler</span> <span class="ow">and</span> <span class="n">toolchain</span> <span class="n">installed</span> <span class="n">using</span> <span class="n">Xcode</span>
<span class="mf">3.</span> <span class="n">NVIDIA</span> <span class="n">CUDA</span> <span class="n">Toolkit</span> <span class="p">(</span><span class="n">available</span> <span class="n">at</span> <span class="n">http</span><span class="p">:</span><span class="o">//</span><span class="n">developer</span><span class="o">.</span><span class="n">nvidia</span><span class="o">.</span><span class="n">com</span><span class="o">/</span><span class="n">cuda</span><span class="o">-</span><span class="n">downloads</span><span class="p">)</span>
<span class="mf">4.</span> <span class="n">NVIDIA</span> <span class="n">cuDNN</span> <span class="n">Library</span> <span class="p">(</span><span class="n">availabel</span> <span class="n">at</span> <span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">developer</span><span class="o">.</span><span class="n">nvidia</span><span class="o">.</span><span class="n">com</span><span class="o">/</span><span class="n">cudnn</span><span class="p">)</span>
</pre></div>
</div>
<p>The CUDA development environment relies on tight integration with the host development environment,
including the host compiler and C runtime libraries, and is therefore only supported on
distribution versions that have been qualified for this CUDA Toolkit release.</p>
<ol>
<li><p class="first">After downloading cuDNN library, issue the following commands:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>sudo tar -xzf cudnn-7.5-osx-x64-v5.0-ga.tgz -C /usr/local
sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
</pre></div>
</div>
</li>
<li><p class="first">Then you need to set DYLD_LIBRARY_PATH, PATH environment variables in ~/.bashrc.</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span><span class="nb">export</span> <span class="nv">DYLD_LIBRARY_PATH</span><span class="o">=</span>/usr/local/cuda/lib:<span class="nv">$DYLD_LIBRARY_PATH</span>
<span class="nb">export</span> <span class="nv">PATH</span><span class="o">=</span>/usr/local/cuda/bin:<span class="nv">$PATH</span>
</pre></div>
</div>
</li>
</ol>
</li>
</ul>
</div>
<div class="section" id="build-and-install">
<span id="id2"></span><h3>Build and Install<a class="headerlink" href="#build-and-install" title="Permalink to this headline"></a></h3>
<p>As usual, the best option is to create a build folder under the paddle project directory.</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>mkdir build <span class="o">&amp;&amp;</span> <span class="nb">cd</span> build
cmake ..
</pre></div>
</div>
<p>CMake first checks for PaddlePaddle&#8217;s dependencies in the system default paths. After you install an optional
library, the corresponding build option is set automatically (for instance, glog, gtest and gflags).
If a dependency is still not found, you can set it manually based on the CMake error information on your screen.</p>
<p>As a simple example, consider the following:</p>
<ul>
<li><p class="first"><strong>Only CPU</strong></p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>cmake .. -DWITH_GPU<span class="o">=</span>OFF -DWITH_DOC<span class="o">=</span>OFF
</pre></div>
</div>
</li>
<li><p class="first"><strong>GPU</strong></p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>cmake .. -DWITH_GPU<span class="o">=</span>ON -DWITH_DOC<span class="o">=</span>OFF
</pre></div>
</div>
</li>
<li><p class="first"><strong>GPU with doc and swig</strong></p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>cmake .. -DWITH_GPU<span class="o">=</span>ON -DWITH_DOC<span class="o">=</span>ON -DWITH_SWIG_PY<span class="o">=</span>ON
</pre></div>
</div>
</li>
</ul>
<p>Finally, you can build PaddlePaddle:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span><span class="c1"># you can add build option here, such as: </span>
cmake .. -DWITH_GPU<span class="o">=</span>ON -DWITH_DOC<span class="o">=</span>OFF -DCMAKE_INSTALL_PREFIX<span class="o">=</span>&lt;installation path&gt;
<span class="c1"># please use sudo make install, if you want to install PaddlePaddle into the system</span>
make -j <span class="sb">`</span>nproc<span class="sb">`</span> <span class="o">&amp;&amp;</span> make install
<span class="c1"># set PaddlePaddle installation path in ~/.bashrc</span>
<span class="nb">export</span> <span class="nv">PATH</span><span class="o">=</span>&lt;installation path&gt;/bin:<span class="nv">$PATH</span>
</pre></div>
</div>
<p><strong>Note:</strong></p>
<p>If you set <code class="docutils literal"><span class="pre">WITH_SWIG_PY=ON</span></code>, the related python dependencies also need to be installed.
Otherwise, PaddlePaddle will automatically install the python dependencies
the first time a user runs a paddle command, such as <code class="docutils literal"><span class="pre">paddle</span> <span class="pre">version</span></code> or <code class="docutils literal"><span class="pre">paddle</span> <span class="pre">train</span></code>.
This may require sudo privileges:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span><span class="c1"># you can run</span>
sudo pip install &lt;path to install&gt;/opt/paddle/share/wheels/*.whl
<span class="c1"># or just run </span>
sudo paddle version
</pre></div>
</div>
</div>
</div>
</div> </div>
...@@ -414,19 +274,13 @@ sudo paddle version ...@@ -414,19 +274,13 @@ sudo paddle version
<li><a class="reference internal" href="#build-and-install">Build and Install</a></li> <li><a class="reference internal" href="#build-and-install">Build and Install</a></li>
</ul> </ul>
</li> </li>
<li><a class="reference internal" href="#span-id-mac-building-on-mac-os-x-span"><span id="mac">Building on Mac OS X</span></a><ul>
<li><a class="reference internal" href="#prerequisites">Prerequisites</a></li>
<li><a class="reference internal" href="#install-dependencies">Install Dependencies</a></li>
<li><a class="reference internal" href="#build-and-install">Build and Install</a></li>
</ul>
</li>
</ul> </ul>
</li> </li>
</ul> </ul>
<h4>Previous topic</h4> <h4>Previous topic</h4>
<p class="topless"><a href="index.html" <p class="topless"><a href="ubuntu_install.html"
title="previous chapter">Build And Install PaddlePaddle</a></p> title="previous chapter">Debian Package installation guide</a></p>
<h4>Next topic</h4> <h4>Next topic</h4>
<p class="topless"><a href="contribute_to_paddle.html" <p class="topless"><a href="contribute_to_paddle.html"
title="next chapter">Contribute to PaddlePaddle</a></p> title="next chapter">Contribute to PaddlePaddle</a></p>
...@@ -464,7 +318,7 @@ sudo paddle version ...@@ -464,7 +318,7 @@ sudo paddle version
<a href="contribute_to_paddle.html" title="Contribute to PaddlePaddle" <a href="contribute_to_paddle.html" title="Contribute to PaddlePaddle"
>next</a> |</li> >next</a> |</li>
<li class="right" > <li class="right" >
<a href="index.html" title="Build And Install PaddlePaddle" <a href="ubuntu_install.html" title="Debian Package installation guide"
>previous</a> |</li> >previous</a> |</li>
<li class="nav-item nav-item-0"><a href="../index.html">PaddlePaddle documentation</a> &#187;</li> <li class="nav-item nav-item-0"><a href="../index.html">PaddlePaddle documentation</a> &#187;</li>
<li class="nav-item nav-item-1"><a href="index.html" >Build And Install PaddlePaddle</a> &#187;</li> <li class="nav-item nav-item-1"><a href="index.html" >Build And Install PaddlePaddle</a> &#187;</li>
...@@ -472,7 +326,7 @@ sudo paddle version ...@@ -472,7 +326,7 @@ sudo paddle version
</div> </div>
<div class="footer" role="contentinfo"> <div class="footer" role="contentinfo">
&#169; Copyright 2016, PaddlePaddle developers. &#169; Copyright 2016, PaddlePaddle developers.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.8. Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.9.
</div> </div>
</body> </body>
</html> </html>
\ No newline at end of file
...@@ -30,7 +30,7 @@ ...@@ -30,7 +30,7 @@
<link rel="search" title="Search" href="../search.html" /> <link rel="search" title="Search" href="../search.html" />
<link rel="top" title="PaddlePaddle documentation" href="../index.html" /> <link rel="top" title="PaddlePaddle documentation" href="../index.html" />
<link rel="up" title="Build And Install PaddlePaddle" href="index.html" /> <link rel="up" title="Build And Install PaddlePaddle" href="index.html" />
<link rel="next" title="Docker installation guide" href="docker_install.html" /> <link rel="next" title="User Interface" href="../ui/index.html" />
<link rel="prev" title="Installing from Sources" href="build_from_source.html" /> <link rel="prev" title="Installing from Sources" href="build_from_source.html" />
<script> <script>
var _hmt = _hmt || []; var _hmt = _hmt || [];
...@@ -54,7 +54,7 @@ var _hmt = _hmt || []; ...@@ -54,7 +54,7 @@ var _hmt = _hmt || [];
<a href="../py-modindex.html" title="Python Module Index" <a href="../py-modindex.html" title="Python Module Index"
>modules</a> |</li> >modules</a> |</li>
<li class="right" > <li class="right" >
<a href="docker_install.html" title="Docker installation guide" <a href="../ui/index.html" title="User Interface"
accesskey="N">next</a> |</li> accesskey="N">next</a> |</li>
<li class="right" > <li class="right" >
<a href="build_from_source.html" title="Installing from Sources" <a href="build_from_source.html" title="Installing from Sources"
...@@ -76,7 +76,7 @@ workflow to merge your code.</p> ...@@ -76,7 +76,7 @@ workflow to merge your code.</p>
<div class="section" id="code-requirements"> <div class="section" id="code-requirements">
<span id="code-requirements"></span><h2>Code Requirements<a class="headerlink" href="#code-requirements" title="Permalink to this headline"></a></h2> <span id="code-requirements"></span><h2>Code Requirements<a class="headerlink" href="#code-requirements" title="Permalink to this headline"></a></h2>
<ul class="simple"> <ul class="simple">
<li>Your code mush be fully documented by <li>Your code must be fully documented by
<a class="reference external" href="http://www.stack.nl/~dimitri/doxygen/">doxygen</a> style.</li> <a class="reference external" href="http://www.stack.nl/~dimitri/doxygen/">doxygen</a> style.</li>
<li>Make sure the compiler option WITH_STYLE_CHECK is on and the compiler <li>Make sure the compiler option WITH_STYLE_CHECK is on and the compiler
passes the code style check.</li> passes the code style check.</li>
...@@ -92,14 +92,24 @@ It&#8217;s just that simple.</p> ...@@ -92,14 +92,24 @@ It&#8217;s just that simple.</p>
</div> </div>
<div class="section" id="clone"> <div class="section" id="clone">
<span id="clone"></span><h2>Clone<a class="headerlink" href="#clone" title="Permalink to this headline"></a></h2> <span id="clone"></span><h2>Clone<a class="headerlink" href="#clone" title="Permalink to this headline"></a></h2>
<p>Paddle currently uses the <a class="reference external" href="http://nvie.com/posts/a-successful-git-branching-model/">git-flow branching model</a>.
The <strong>develop</strong> branch is the main branch, and users&#8217; branches are feature branches.</p>
<p>Once you&#8217;ve created a fork, you can use your favorite git client to clone your <p>Once you&#8217;ve created a fork, you can use your favorite git client to clone your
repo or just head straight to the command line:</p> repo or just head straight to the command line:</p>
<div class="highlight-shell"><div class="highlight"><pre><span></span><span class="c1"># Clone your fork to your local machine</span> <div class="highlight-shell"><div class="highlight"><pre><span></span><span class="c1"># Clone your fork to your local machine</span>
git clone https://github.com/USERNAME/Paddle.git git clone --branch develop https://github.com/USERNAME/Paddle.git
</pre></div>
</div>
<p>If your repository doesn&#8217;t contain the <strong>develop</strong> branch, just create it yourself.</p>
<div class="highlight-shell"><div class="highlight"><pre><span></span>git clone https://github.com/USERNAME/Paddle.git Paddle
<span class="nb">cd</span> Paddle
git checkout -b develop <span class="c1"># create develop branch.</span>
git remote add upstream https://github.com/baidu/Paddle.git <span class="c1"># add upstream to baidu/Paddle</span>
git pull upstream develop <span class="c1"># update to upstream</span>
</pre></div> </pre></div>
</div> </div>
<p>Then you can start to develop by making a local development branch:</p> <p>Then you can start to develop by making a local development branch:</p>
<div class="highlight-shell"><div class="highlight"><pre><span></span>git checkout -b MY_COOL_STUFF_BRANCH origin/master <div class="highlight-shell"><div class="highlight"><pre><span></span>git checkout -b MY_COOL_STUFF_BRANCH
</pre></div> </pre></div>
</div> </div>
</div> </div>
...@@ -110,7 +120,7 @@ git clone https://github.com/USERNAME/Paddle.git ...@@ -110,7 +120,7 @@ git clone https://github.com/USERNAME/Paddle.git
git status git status
<span class="c1"># add modified files</span> <span class="c1"># add modified files</span>
git add xx git add xx
git commit -m <span class="s2">&quot;commit info&quot;</span> env <span class="nv">EDITOR</span><span class="o">=</span>vim git commit <span class="c1"># You can write your comments by vim/nano/emacs.</span>
</pre></div> </pre></div>
</div> </div>
<p>The first line of commit information is the title. The second and later lines <p>The first line of commit information is the title. The second and later lines
...@@ -129,7 +139,7 @@ git remote -v ...@@ -129,7 +139,7 @@ git remote -v
</pre></div> </pre></div>
</div> </div>
<p>Update your fork with the latest upstream changes:</p> <p>Update your fork with the latest upstream changes:</p>
<div class="highlight-shell"><div class="highlight"><pre><span></span>git pull --rebase upstream HEAD <div class="highlight-shell"><div class="highlight"><pre><span></span>git pull --rebase upstream develop
</pre></div> </pre></div>
</div> </div>
<p>If there are no unique commits locally, git will simply perform a fast-forward. <p>If there are no unique commits locally, git will simply perform a fast-forward.
...@@ -140,7 +150,7 @@ probably shouldn&#8217;t be), you may have to deal with conflicts.</p> ...@@ -140,7 +150,7 @@ probably shouldn&#8217;t be), you may have to deal with conflicts.</p>
<div class="section" id="push-to-github"> <div class="section" id="push-to-github">
<span id="push-to-github"></span><h2>Push to GitHub<a class="headerlink" href="#push-to-github" title="Permalink to this headline"></a></h2> <span id="push-to-github"></span><h2>Push to GitHub<a class="headerlink" href="#push-to-github" title="Permalink to this headline"></a></h2>
<div class="highlight-shell"><div class="highlight"><pre><span></span><span class="c1"># push to your repository in Github</span> <div class="highlight-shell"><div class="highlight"><pre><span></span><span class="c1"># push to your repository in Github</span>
git push origin HEAD git push -u origin MY_COOL_STUFF_BRANCH <span class="c1"># create remote branch MY_COOL_STUFF_BRANCH to origin.</span>
</pre></div> </pre></div>
</div> </div>
</div> </div>
...@@ -157,14 +167,27 @@ by clicking the &#8220;Update Branch&#8221; button in your pull request page. Ho ...@@ -157,14 +167,27 @@ by clicking the &#8220;Update Branch&#8221; button in your pull request page. Ho
of conflict, you need to do the update manually. You need to do the following on of conflict, you need to do the update manually. You need to do the following on
your local repository:</p> your local repository:</p>
<div class="highlight-shell"><div class="highlight"><pre><span></span>git checkout MY_COOL_STUFF_BRANCH <div class="highlight-shell"><div class="highlight"><pre><span></span>git checkout MY_COOL_STUFF_BRANCH
git pull --rebase upstream HEAD git pull upstream develop
<span class="c1"># You may need to resolve the conflict according to the git prompt.</span> <span class="c1"># You may need to resolve the conflict according to the git prompt.</span>
<span class="c1"># Make and test your code.</span> <span class="c1"># Make and test your code.</span>
git push -f origin HEAD git push origin MY_COOL_STUFF_BRANCH
</pre></div> </pre></div>
</div> </div>
<p>Now your Pull Request is updated with the latest version.</p> <p>Now your Pull Request is updated with the latest version.</p>
</div> </div>
<div class="section" id="revise-your-pull-request">
<span id="revise-your-pull-request"></span><h2>Revise your pull request<a class="headerlink" href="#revise-your-pull-request" title="Permalink to this headline"></a></h2>
<p>When you revise your pull request according to the reviewer&#8217;s comments, please use &#8216;git commit&#8217; instead of &#8216;git commit --amend&#8217; to commit your changes, so that the reviewers can see the difference between the new pull request and the old one.</p>
<p>The possible commands are</p>
<div class="highlight-shell"><div class="highlight"><pre><span></span>git checkout MY_COOL_STUFF_BRANCH
git pull upstream develop <span class="c1"># update local to newest code base.</span>
<span class="c1"># May be some conflicts will occured.</span>
<span class="c1"># And develop your cool stuff</span>
env <span class="nv">EDITOR</span><span class="o">=</span>vim git commit <span class="c1"># add your revise log</span>
git push origin MY_COOL_STUFF_BRANCH
</pre></div>
</div>
</div>
</div> </div>
...@@ -184,6 +207,7 @@ git push -f origin HEAD ...@@ -184,6 +207,7 @@ git push -f origin HEAD
<li><a class="reference internal" href="#push-to-github">Push to GitHub</a></li> <li><a class="reference internal" href="#push-to-github">Push to GitHub</a></li>
<li><a class="reference internal" href="#pull-request">Pull Request</a></li> <li><a class="reference internal" href="#pull-request">Pull Request</a></li>
<li><a class="reference internal" href="#update-your-pull-request-with-the-lastest-version">Update your pull request with the lastest version</a></li> <li><a class="reference internal" href="#update-your-pull-request-with-the-lastest-version">Update your pull request with the lastest version</a></li>
<li><a class="reference internal" href="#revise-your-pull-request">Revise your pull request</a></li>
</ul> </ul>
</li> </li>
</ul> </ul>
...@@ -192,8 +216,8 @@ git push -f origin HEAD ...@@ -192,8 +216,8 @@ git push -f origin HEAD
<p class="topless"><a href="build_from_source.html" <p class="topless"><a href="build_from_source.html"
title="previous chapter">Installing from Sources</a></p> title="previous chapter">Installing from Sources</a></p>
<h4>Next topic</h4> <h4>Next topic</h4>
<p class="topless"><a href="docker_install.html" <p class="topless"><a href="../ui/index.html"
title="next chapter">Docker installation guide</a></p> title="next chapter">User Interface</a></p>
<div role="note" aria-label="source link"> <div role="note" aria-label="source link">
<h3>This Page</h3> <h3>This Page</h3>
<ul class="this-page-menu"> <ul class="this-page-menu">
...@@ -225,7 +249,7 @@ git push -f origin HEAD ...@@ -225,7 +249,7 @@ git push -f origin HEAD
<a href="../py-modindex.html" title="Python Module Index" <a href="../py-modindex.html" title="Python Module Index"
>modules</a> |</li> >modules</a> |</li>
<li class="right" > <li class="right" >
<a href="docker_install.html" title="Docker installation guide" <a href="../ui/index.html" title="User Interface"
>next</a> |</li> >next</a> |</li>
<li class="right" > <li class="right" >
<a href="build_from_source.html" title="Installing from Sources" <a href="build_from_source.html" title="Installing from Sources"
...@@ -236,7 +260,7 @@ git push -f origin HEAD ...@@ -236,7 +260,7 @@ git push -f origin HEAD
</div> </div>
<div class="footer" role="contentinfo"> <div class="footer" role="contentinfo">
&#169; Copyright 2016, PaddlePaddle developers. &#169; Copyright 2016, PaddlePaddle developers.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.8. Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.9.
</div> </div>
</body> </body>
</html> </html>
\ No newline at end of file
...@@ -31,7 +31,7 @@ ...@@ -31,7 +31,7 @@
<link rel="top" title="PaddlePaddle documentation" href="../index.html" /> <link rel="top" title="PaddlePaddle documentation" href="../index.html" />
<link rel="up" title="Build And Install PaddlePaddle" href="index.html" /> <link rel="up" title="Build And Install PaddlePaddle" href="index.html" />
<link rel="next" title="Debian Package installation guide" href="ubuntu_install.html" /> <link rel="next" title="Debian Package installation guide" href="ubuntu_install.html" />
<link rel="prev" title="Contribute to PaddlePaddle" href="contribute_to_paddle.html" /> <link rel="prev" title="Build And Install PaddlePaddle" href="index.html" />
<script> <script>
var _hmt = _hmt || []; var _hmt = _hmt || [];
(function() { (function() {
...@@ -57,7 +57,7 @@ var _hmt = _hmt || []; ...@@ -57,7 +57,7 @@ var _hmt = _hmt || [];
<a href="ubuntu_install.html" title="Debian Package installation guide" <a href="ubuntu_install.html" title="Debian Package installation guide"
accesskey="N">next</a> |</li> accesskey="N">next</a> |</li>
<li class="right" > <li class="right" >
<a href="contribute_to_paddle.html" title="Contribute to PaddlePaddle" <a href="index.html" title="Build And Install PaddlePaddle"
accesskey="P">previous</a> |</li> accesskey="P">previous</a> |</li>
<li class="nav-item nav-item-0"><a href="../index.html">PaddlePaddle documentation</a> &#187;</li> <li class="nav-item nav-item-0"><a href="../index.html">PaddlePaddle documentation</a> &#187;</li>
<li class="nav-item nav-item-1"><a href="index.html" accesskey="U">Build And Install PaddlePaddle</a> &#187;</li> <li class="nav-item nav-item-1"><a href="index.html" accesskey="U">Build And Install PaddlePaddle</a> &#187;</li>
...@@ -70,64 +70,115 @@ var _hmt = _hmt || []; ...@@ -70,64 +70,115 @@ var _hmt = _hmt || [];
<div class="body" role="main"> <div class="body" role="main">
<div class="section" id="docker-installation-guide"> <div class="section" id="docker-installation-guide">
<span id="docker-installation-guide"></span><h1>Docker installation guide<a class="headerlink" href="#docker-installation-guide" title="Permalink to this headline"></a></h1> <h1>Docker installation guide<a class="headerlink" href="#docker-installation-guide" title="Permalink to this headline"></a></h1>
<p>PaddlePaddle provides some pre-compiled binaries, including Docker images and Ubuntu deb packages. Contributions of installation packages for other Linux distributions (such as Ubuntu, CentOS, Debian, Gentoo and so on) are welcome. We recommend using Docker images to deploy PaddlePaddle.</p> <p>PaddlePaddle provides a <a class="reference external" href="https://www.docker.com/">Docker</a> image. <a class="reference external" href="https://www.docker.com/">Docker</a> is a lightweight container utility. The performance of PaddlePaddle in a <a class="reference external" href="https://www.docker.com/">Docker</a> container is basically the same as running it on a normal Linux host, and <a class="reference external" href="https://www.docker.com/">Docker</a> is a very convenient way to deliver binary releases of Linux programs.</p>
<div class="section" id="docker-installation"> <div class="admonition note">
<span id="docker-installation"></span><h2>Docker installation<a class="headerlink" href="#docker-installation" title="Permalink to this headline"></a></h2> <p class="first admonition-title">Note</p>
<p>Docker is a tool designed to make it easier to create, deploy, and run applications by using containers.</p> <p class="last">The <a class="reference external" href="https://www.docker.com/">Docker</a> image is the recommended way to run PaddlePaddle.</p>
</div>
<div class="section" id="paddlepaddle-docker-images"> <div class="section" id="paddlepaddle-docker-images">
<span id="paddlepaddle-docker-images"></span><h3>PaddlePaddle Docker images<a class="headerlink" href="#paddlepaddle-docker-images" title="Permalink to this headline"></a></h3> <h2>PaddlePaddle Docker images<a class="headerlink" href="#paddlepaddle-docker-images" title="Permalink to this headline"></a></h2>
<p>There are six Docker images:</p> <p>There are 12 <a class="reference external" href="https://hub.docker.com/r/paddledev/paddle/tags/">images</a> for PaddlePaddle; the image name is <code class="code docutils literal"><span class="pre">paddle-dev/paddle</span></code> and the tags are:</p>
<table border="1" class="docutils">
<colgroup>
<col width="21%" />
<col width="22%" />
<col width="29%" />
<col width="28%" />
</colgroup>
<thead valign="bottom">
<tr class="row-odd"><th class="head">&nbsp;</th>
<th class="head">normal</th>
<th class="head">devel</th>
<th class="head">demo</th>
</tr>
</thead>
<tbody valign="top">
<tr class="row-even"><td>CPU</td>
<td>cpu-latest</td>
<td>cpu-devel-latest</td>
<td>cpu-demo-latest</td>
</tr>
<tr class="row-odd"><td>GPU</td>
<td>gpu-latest</td>
<td>gpu-devel-latest</td>
<td>gpu-demo-latest</td>
</tr>
<tr class="row-even"><td>CPU WITHOUT AVX</td>
<td>cpu-noavx-latest</td>
<td>cpu-devel-noavx-latest</td>
<td>cpu-demo-noavx-latest</td>
</tr>
<tr class="row-odd"><td>GPU WITHOUT AVX</td>
<td>gpu-noavx-latest</td>
<td>gpu-devel-noavx-latest</td>
<td>gpu-demo-noavx-latest</td>
</tr>
</tbody>
</table>
<p>And the three columns are:</p>
<ul class="simple">
<li>normal: The docker image only contains the PaddlePaddle binary.</li>
<li>devel: The docker image contains the PaddlePaddle binary, source code and essential build environment.</li>
<li>demo: The docker image contains the dependencies to run the PaddlePaddle demos.</li>
</ul>
<p>And the four rows are:</p>
<ul class="simple"> <ul class="simple">
<li>paddledev/paddle:cpu-latest: PaddlePaddle CPU binary image.</li> <li>CPU: CPU Version. Supports CPUs with <code class="code docutils literal"><span class="pre">AVX</span></code> instructions.</li>
<li>paddledev/paddle:gpu-latest: PaddlePaddle GPU binary image.</li> <li>GPU: GPU Version. Supports GPU, and requires a CPU with <code class="code docutils literal"><span class="pre">AVX</span></code> instructions.</li>
<li>paddledev/paddle:cpu-devel-latest: PaddlePaddle CPU binary image plus source code.</li> <li>CPU WITHOUT AVX: CPU Version that supports most CPUs, even those without <code class="code docutils literal"><span class="pre">AVX</span></code> instructions.</li>
<li>paddledev/paddle:gpu-devel-latest: PaddlePaddle GPU binary image plus source code.</li> <li>GPU WITHOUT AVX: GPU Version that supports most CPUs, even those without <code class="code docutils literal"><span class="pre">AVX</span></code> instructions.</li>
<li>paddledev/paddle:cpu-demo-latest: PaddlePaddle CPU binary image plus source code and demo</li>
<li>paddledev/paddle:gpu-demo-latest: PaddlePaddle GPU binary image plus source code and demo</li>
</ul> </ul>
<p>Tags with latest will be replaced by a released version.</p> <p>Users can choose whichever version suits their machine. The following script can help you detect whether your CPU supports <code class="code docutils literal"><span class="pre">AVX</span></code>:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span><span class="k">if</span> cat /proc/cpuinfo <span class="p">|</span> grep -q avx <span class="p">;</span> <span class="k">then</span> <span class="nb">echo</span> <span class="s2">&quot;Support AVX&quot;</span><span class="p">;</span> <span class="k">else</span> <span class="nb">echo</span> <span class="s2">&quot;Not support AVX&quot;</span><span class="p">;</span> <span class="k">fi</span>
</pre></div>
</div>
<p>If the output is <code class="code docutils literal"><span class="pre">Support</span> <span class="pre">AVX</span></code>, then you can choose the AVX version of PaddlePaddle; otherwise, you need to select the <code class="code docutils literal"><span class="pre">noavx</span></code> version of PaddlePaddle. For example, the CPU develop version of PaddlePaddle is <code class="code docutils literal"><span class="pre">paddle-dev/paddle:cpu-devel-latest</span></code>.</p>
<p>The PaddlePaddle images don&#8217;t contain any entry command. You need to provide an entry command to use these images. See the <code class="code docutils literal"><span class="pre">Remote</span> <span class="pre">Access</span></code> part, or just use the following command to run a <code class="code docutils literal"><span class="pre">bash</span></code> shell:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>docker run -it paddledev/paddle:cpu-latest /bin/bash
</pre></div>
</div>
</div> </div>
<div class="section" id="download-and-run-docker-images"> <div class="section" id="download-and-run-docker-images">
<span id="download-and-run-docker-images"></span><h3>Download and Run Docker images<a class="headerlink" href="#download-and-run-docker-images" title="Permalink to this headline"></a></h3> <h2>Download and Run Docker images<a class="headerlink" href="#download-and-run-docker-images" title="Permalink to this headline"></a></h2>
<p>You first have to install Docker on a machine running Linux kernel version 3.10+. You can refer to the official guide https://docs.docker.com/engine/installation/ for further information.</p> <p>You first have to install Docker on a machine running Linux kernel version 3.10+. You can refer to the official guide <a class="reference external" href="https://docs.docker.com/engine/installation/">https://docs.docker.com/engine/installation/</a> for further information.</p>
<p>You can use <code class="docutils literal"><span class="pre">docker</span> <span class="pre">pull</span></code> to download images first, or just launch a container with <code class="docutils literal"><span class="pre">docker</span> <span class="pre">run</span></code>:</p> <p>You can use <code class="code docutils literal"><span class="pre">docker</span> <span class="pre">pull</span></code> to download images first, or just launch a container with <code class="code docutils literal"><span class="pre">docker</span> <span class="pre">run</span></code>:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>docker run -it paddledev/paddle:cpu-latest <div class="highlight-bash"><div class="highlight"><pre><span></span>docker run -it paddledev/paddle:cpu-latest
</pre></div> </pre></div>
</div> </div>
<p>If you want to launch a container with GPU support, you also need to set some environment variables:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>export CUDA_SO=&quot;$(\ls /usr/lib64/libcuda* | xargs -I{} echo &#39;-v {}:{}&#39;) $(\ls /usr/lib64/libnvidia* | xargs -I{} echo &#39;-v {}:{}&quot; <div class="highlight-bash"><div class="highlight"><pre><span></span><span class="nb">export</span> <span class="nv">CUDA_SO</span><span class="o">=</span><span class="s2">&quot;</span><span class="k">$(</span><span class="se">\l</span>s /usr/lib64/libcuda* <span class="p">|</span> xargs -I<span class="o">{}</span> <span class="nb">echo</span> <span class="s1">&#39;-v {}:{}&#39;</span><span class="k">)</span><span class="s2"> </span><span class="k">$(</span><span class="se">\l</span>s /usr/lib64/libnvidia* <span class="p">|</span> xargs -I<span class="o">{}</span> <span class="nb">echo</span> <span class="s1">&#39;-v {}:{}&#39;</span><span class="k">)</span><span class="s2">&quot;</span>
export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo &#39;--device {}:{}&#39;) <span class="nb">export</span> <span class="nv">DEVICES</span><span class="o">=</span><span class="k">$(</span><span class="se">\l</span>s /dev/nvidia* <span class="p">|</span> xargs -I<span class="o">{}</span> <span class="nb">echo</span> <span class="s1">&#39;--device {}:{}&#39;</span><span class="k">)</span>
docker run -it paddledev/paddle:gpu-latest docker run <span class="si">${</span><span class="nv">CUDA_SO</span><span class="si">}</span> <span class="si">${</span><span class="nv">DEVICES</span><span class="si">}</span> -it paddledev/paddle:gpu-latest
</pre></div> </pre></div>
</div> </div>
</div>
<div class="section" id="some-notes-for-docker">
<h2>Some notes for Docker<a class="headerlink" href="#some-notes-for-docker" title="Permalink to this headline">¶</a></h2>
<div class="section" id="performance">
<h3>Performance<a class="headerlink" href="#performance" title="Permalink to this headline">¶</a></h3>
<p>Since Docker is based on lightweight virtual containers, CPU computing performance is largely preserved. The GPU driver and devices are mapped into the container, so GPU computing performance is not seriously affected either.</p>
<p>If you use a high-performance NIC, such as RDMA (RoCE 40GbE or IB 56GbE) or Ethernet (10GbE), it is recommended to use the &#8220;--net=host&#8221; option.</p>
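<p>A minimal sketch of that option; the container then shares the host&#8217;s network stack, so no port mapping is needed:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>docker run -it --net=host paddledev/paddle:cpu-latest /bin/bash
</pre></div>
</div>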
</div>
<div class="section" id="remote-access">
<h3>Remote access<a class="headerlink" href="#remote-access" title="Permalink to this headline">¶</a></h3>
<p>If you want to enable SSH access in the background, you need to build an image by yourself. Please refer to the official guide <a class="reference external" href="https://docs.docker.com/engine/reference/builder/">https://docs.docker.com/engine/reference/builder/</a> for further information.</p>
<p>The following is a simple Dockerfile with SSH:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>FROM paddledev/paddle <div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">FROM</span> <span class="n">paddledev</span><span class="o">/</span><span class="n">paddle</span><span class="p">:</span><span class="n">cpu</span><span class="o">-</span><span class="n">latest</span>
MAINTAINER PaddlePaddle dev team &lt;paddle-dev@baidu.com&gt; <span class="n">MAINTAINER</span> <span class="n">PaddlePaddle</span> <span class="n">dev</span> <span class="n">team</span> <span class="o">&lt;</span><span class="n">paddle</span><span class="o">-</span><span class="n">dev</span><span class="nd">@baidu</span><span class="o">.</span><span class="n">com</span><span class="o">&gt;</span>
RUN apt-get update <span class="n">RUN</span> <span class="n">apt</span><span class="o">-</span><span class="n">get</span> <span class="n">update</span>
RUN apt-get install -y openssh-server <span class="n">RUN</span> <span class="n">apt</span><span class="o">-</span><span class="n">get</span> <span class="n">install</span> <span class="o">-</span><span class="n">y</span> <span class="n">openssh</span><span class="o">-</span><span class="n">server</span>
RUN mkdir /var/run/sshd <span class="n">RUN</span> <span class="n">mkdir</span> <span class="o">/</span><span class="n">var</span><span class="o">/</span><span class="n">run</span><span class="o">/</span><span class="n">sshd</span>
RUN <span class="nb">echo</span> <span class="s1">&#39;root:root&#39;</span> <span class="p">|</span> chpasswd <span class="n">RUN</span> <span class="n">echo</span> <span class="s1">&#39;root:root&#39;</span> <span class="o">|</span> <span class="n">chpasswd</span>
RUN sed -ri <span class="s1">&#39;s/^PermitRootLogin\s+.*/PermitRootLogin yes/&#39;</span> /etc/ssh/sshd_config <span class="n">RUN</span> <span class="n">sed</span> <span class="o">-</span><span class="n">ri</span> <span class="s1">&#39;s/^PermitRootLogin\s+.*/PermitRootLogin yes/&#39;</span> <span class="o">/</span><span class="n">etc</span><span class="o">/</span><span class="n">ssh</span><span class="o">/</span><span class="n">sshd_config</span>
RUN sed -ri <span class="s1">&#39;s/UsePAM yes/#UsePAM yes/g&#39;</span> /etc/ssh/sshd_config <span class="n">RUN</span> <span class="n">sed</span> <span class="o">-</span><span class="n">ri</span> <span class="s1">&#39;s/UsePAM yes/#UsePAM yes/g&#39;</span> <span class="o">/</span><span class="n">etc</span><span class="o">/</span><span class="n">ssh</span><span class="o">/</span><span class="n">sshd_config</span>
EXPOSE 22 <span class="n">EXPOSE</span> <span class="mi">22</span>
CMD <span class="o">[</span><span class="s2">&quot;/usr/sbin/sshd&quot;</span>, <span class="s2">&quot;-D&quot;</span><span class="o">]</span> <span class="n">CMD</span> <span class="p">[</span><span class="s2">&quot;/usr/sbin/sshd&quot;</span><span class="p">,</span> <span class="s2">&quot;-D&quot;</span><span class="p">]</span>
</pre></div> </pre></div>
</div> </div>
<p>Then you can build an image from the Dockerfile and launch a container:</p>
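<p>A minimal sketch of this step, assuming the image is tagged <code class="code docutils literal"><span class="pre">paddle_ssh</span></code> and the container is named <code class="code docutils literal"><span class="pre">paddle_ssh_machine</span></code>; both names are illustrative:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span># build the image from the Dockerfile in the current directory
docker build -t paddle_ssh .
# start it in the background, mapping host port 8022 to sshd on port 22
docker run -d -p 8022:22 --name paddle_ssh_machine paddle_ssh
# log in with the root password set in the Dockerfile above
ssh -p 8022 root@localhost
# stop and remove the container when done
docker stop paddle_ssh_machine
docker rm paddle_ssh_machine
</pre></div>
</div>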
</div>
</div>
</div>
</div>
<div class="section" id="install-paddlepaddle"> <div class="section" id="install-paddlepaddle">
<h2>Install PaddlePaddle<a class="headerlink" href="#install-paddlepaddle" title="Permalink to this headline"></a></h2> <h2>Install PaddlePaddle<a class="headerlink" href="#install-paddlepaddle" title="Permalink to this headline"></a></h2>
<div class="toctree-wrapper compound"> <div class="toctree-wrapper compound">
<ul>
<li class="toctree-l1"><a class="reference internal" href="docker_install.html">Docker installation guide</a></li>
<li class="toctree-l1"><a class="reference internal" href="ubuntu_install.html">Debian Package installation guide</a></li>
</ul>
</div>
</div>
<div class="section" id="build-from-source">
<h2>Build from Source<a class="headerlink" href="#build-from-source" title="Permalink to this headline">¶</a></h2>
<div class="admonition warning">
<p class="first admonition-title">Warning</p>
<p class="last">Please use <code class="code docutils literal"><span class="pre">deb</span></code> package or <code class="code docutils literal"><span class="pre">docker</span></code> image to install paddle. The building guide is used for hacking or contributing to PaddlePaddle.</p>
</div>
<p>If you want to hack on and contribute to the PaddlePaddle source code, the following guides can help you:</p>
<div class="toctree-wrapper compound">
<ul>
<li class="toctree-l1"><a class="reference internal" href="build_from_source.html">Installing from Sources</a></li>
<li class="toctree-l1"><a class="reference internal" href="contribute_to_paddle.html">Contribute to PaddlePaddle</a></li>
</ul>
</div>
</div>
<div class="section" id="docker-and-debian-package-installation">
<h2>Docker and Debian Package installation<a class="headerlink" href="#docker-and-debian-package-installation" title="Permalink to this headline"></a></h2>
<p>Note: The installation packages are still in pre-release
state and your experience of installation may not be smooth.</p>
<p>If you want to pack docker image, the following guide can help you:</p>
<div class="toctree-wrapper compound">
<ul>
<li class="toctree-l1"><a class="reference internal" href="docker_install.html">Docker installation guide</a></li>
<li class="toctree-l1"><a class="reference internal" href="ubuntu_install.html">Debian Package installation guide</a></li>
</ul>
</div>
</div>
</div>
<div class="section" id="debian-package-installation-guide"> <div class="section" id="debian-package-installation-guide">
<span id="debian-package-installation-guide"></span><h1>Debian Package installation guide<a class="headerlink" href="#debian-package-installation-guide" title="Permalink to this headline"></a></h1> <h1>Debian Package installation guide<a class="headerlink" href="#debian-package-installation-guide" title="Permalink to this headline"></a></h1>
<div class="section" id="debian-package-installation"> <p>PaddlePaddle supports <code class="code docutils literal"><span class="pre">deb</span></code> pacakge. The installation of this <code class="code docutils literal"><span class="pre">deb</span></code> package is tested in ubuntu 14.04, but it should be support other debian based linux, too.</p>
<span id="debian-package-installation"></span><h2>Debian Package installation<a class="headerlink" href="#debian-package-installation" title="Permalink to this headline"></a></h2> <p>There are four versions of debian package, <code class="code docutils literal"><span class="pre">cpu</span></code>, <code class="code docutils literal"><span class="pre">gpu</span></code>, <code class="code docutils literal"><span class="pre">cpu-noavx</span></code>, <code class="code docutils literal"><span class="pre">gpu-noavx</span></code>. And <code class="code docutils literal"><span class="pre">noavx</span></code> version is used to support CPU which does not contain <code class="code docutils literal"><span class="pre">AVX</span></code> instructions. The download url of <code class="code docutils literal"><span class="pre">deb</span></code> package is : <a class="reference external" href="https://github.com/baidu/Paddle/releases/">https://github.com/baidu/Paddle/releases/</a></p>
<p>Currently , PaddlePaddle only provides ubuntu14.04 debian packages. <p>After downloading PaddlePaddle deb packages, you can use <code class="code docutils literal"><span class="pre">gdebi</span></code> install.</p>
There are two versions package, including CPU and GPU. The download address is:</p> <div class="highlight-bash"><div class="highlight"><pre><span></span>gdebi paddle-*.deb
<p>https://github.com/baidu/Paddle/releases/tag/V0.8.0b0</p>
<p>After downloading PaddlePaddle deb packages, you can run:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>dpkg -i paddle-0.8.0b-cpu.deb
apt-get install -f
</pre></div> </pre></div>
</div> </div>
<p>And if you use GPU version deb package, you need to install CUDA toolkit and cuDNN, and set related environment variables(such as LD_LIBRARY_PATH) first. It is normal when <code class="docutils literal"><span class="pre">dpkg</span> <span class="pre">-i</span></code> get errors. <code class="docutils literal"><span class="pre">apt-get</span> <span class="pre">install</span> <span class="pre">-f</span></code> will continue install paddle, and install dependences.</p> <p>If <code class="code docutils literal"><span class="pre">gdebi</span></code> is not installed, you can use <code class="code docutils literal"><span class="pre">sudo</span> <span class="pre">apt-get</span> <span class="pre">install</span> <span class="pre">gdebi</span></code> to install it.</p>
<p><strong>Note</strong></p> <p>Or you can use following commands to install PaddlePaddle.</p>
<p>PaddlePaddle package only supports x86 CPU with AVX instructions. If not, you have to download and build from source code.</p> <div class="highlight-bash"><div class="highlight"><pre><span></span>dpkg -i paddle-*.deb
apt-get install -f
</pre></div>
</div> </div>
<p>And if you use GPU version deb package, you need to install CUDA toolkit and cuDNN, and set related environment variables(such as LD_LIBRARY_PATH) first. It is normal when <cite>dpkg -i</cite> get errors. <cite>apt-get install -f</cite> will continue install paddle, and install dependences.</p>
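<p>For example, a minimal sketch, assuming CUDA is installed under <code class="code docutils literal"><span class="pre">/usr/local/cuda</span></code>; adjust the paths to your installation:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span># make the CUDA and cuDNN shared libraries visible to the paddle binaries
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
</pre></div>
</div>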
</div>
<h1>Cluster Train<a class="headerlink" href="#cluster-train" title="Permalink to this headline">¶</a></h1>
<div class="toctree-wrapper compound"> <div class="toctree-wrapper compound">
<ul> <ul>
<li class="toctree-l1"><a class="reference internal" href="opensource/cluster_train.html">Cluster Training</a><ul> <li class="toctree-l1"><a class="reference internal" href="opensource/cluster_train.html">Distributed Training</a><ul>
<li class="toctree-l2"><a class="reference internal" href="opensource/cluster_train.html#pre-requirements">Pre-requirements</a></li> <li class="toctree-l2"><a class="reference internal" href="opensource/cluster_train.html#prerequisite">Prerequisite</a><ul>
<li class="toctree-l2"><a class="reference internal" href="opensource/cluster_train.html#prepare-job-workspace">Prepare Job Workspace</a></li>
<li class="toctree-l2"><a class="reference internal" href="opensource/cluster_train.html#prepare-cluster-job-configuration">Prepare Cluster Job Configuration</a><ul>
<li class="toctree-l3"><a class="reference internal" href="opensource/cluster_train.html#launching-cluster-job">Launching Cluster Job</a></li> <li class="toctree-l3"><a class="reference internal" href="opensource/cluster_train.html#launching-cluster-job">Launching Cluster Job</a></li>
<li class="toctree-l3"><a class="reference internal" href="opensource/cluster_train.html#kill-cluster-job">Kill Cluster Job</a></li> <li class="toctree-l3"><a class="reference internal" href="opensource/cluster_train.html#kill-cluster-job">Kill Cluster Job</a></li>
<li class="toctree-l3"><a class="reference internal" href="opensource/cluster_train.html#check-cluster-training-result">Check Cluster Training Result</a></li> <li class="toctree-l3"><a class="reference internal" href="opensource/cluster_train.html#check-cluster-training-result">Check Cluster Training Result</a></li>
</ul>
</li>
</ul>
</li>
</ul>
</div>
</div>
<div class="section" id="cluster-training"> <div class="section" id="distributed-training">
<span id="cluster-training"></span><h1>Cluster Training<a class="headerlink" href="#cluster-training" title="Permalink to this headline"></a></h1> <span id="distributed-training"></span><h1>Distributed Training<a class="headerlink" href="#distributed-training" title="Permalink to this headline"></a></h1>
<p>We provide some simple scripts <code class="docutils literal"><span class="pre">paddle/scripts/cluster_train</span></code> to help you to launch cluster training Job to harness PaddlePaddle&#8217;s distributed trainning. For MPI and other cluster scheduler refer this naive script to implement more robust cluster training platform by yourself.</p> <p>In this article, we explain how to run distributed Paddle training jobs on clusters. We will create the distributed version of the single-process training example, <a class="reference external" href="https://github.com/baidu/Paddle/tree/develop/demo/recommendation">recommendation</a>.</p>
<p>The following cluster demo is based on RECOMMENDATION local training demo in PaddlePaddle <code class="docutils literal"><span class="pre">demo/recommendation</span></code> directory. Assuming you enter the <code class="docutils literal"><span class="pre">paddle/scripts/cluster_train/</span></code> directory.</p> <p><a class="reference external" href="https://github.com/baidu/Paddle/tree/develop/paddle/scripts/cluster_train">Scripts</a> used in this article launch distributed jobs via SSH. They also work as a reference for users running more sophisticated cluster management systems like MPI and Kubernetes.</p>
<div class="section" id="pre-requirements"> <div class="section" id="prerequisite">
<span id="pre-requirements"></span><h2>Pre-requirements<a class="headerlink" href="#pre-requirements" title="Permalink to this headline"></a></h2> <span id="prerequisite"></span><h2>Prerequisite<a class="headerlink" href="#prerequisite" title="Permalink to this headline"></a></h2>
<p>Firstly,</p> <ol>
<div class="highlight-bash"><div class="highlight"><pre><span></span>pip install fabric <li><p class="first">Aforementioned scripts use a Python library <a class="reference external" href="http://www.fabfile.org/">fabric</a> to run SSH commands. We can use <code class="docutils literal"><span class="pre">pip</span></code> to install fabric:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>
</pre></div> </pre></div>
</div> </div>
<p>Secondly, go through installing scripts to install PaddlePaddle at all nodes to make sure demo can run as local mode. For CUDA enabled training, we assume that CUDA is installed in <code class="docutils literal"><span class="pre">/usr/local/cuda</span></code>, otherwise missed cuda runtime libraries error could be reported at cluster runtime. In one word, the local training environment should be well prepared for the simple scripts.</p> </li>
<p>Then you should prepare same ROOT_DIR directory in all nodes. ROOT_DIR is from in cluster_train/conf.py. Assuming that the ROOT_DIR = /home/paddle, you can create <code class="docutils literal"><span class="pre">paddle</span></code> user account as well, at last <code class="docutils literal"><span class="pre">paddle.py</span></code> can ssh connections to all nodes with <code class="docutils literal"><span class="pre">paddle</span></code> user automatically.</p> </ol>
<p>At last you can create ssh mutual trust relationship between all nodes for easy ssh login, otherwise <code class="docutils literal"><span class="pre">password</span></code> should be provided at runtime from <code class="docutils literal"><span class="pre">paddle.py</span></code>.</p> <p>pip install fabric</p>
</div> <div class="highlight-default"><div class="highlight"><pre><span></span>
<div class="section" id="prepare-job-workspace"> 1. We need to install PaddlePaddle on all nodes in the cluster. To enable GPUs, we need to install CUDA in `/usr/local/cuda`; otherwise Paddle would report errors at runtime.
<span id="prepare-job-workspace"></span><h2>Prepare Job Workspace<a class="headerlink" href="#prepare-job-workspace" title="Permalink to this headline"></a></h2>
<p><code class="docutils literal"><span class="pre">Job</span> <span class="pre">workspace</span></code> is defined as one package directory which contains dependency libraries, train data, test data, model config file and all other related file dependencies.</p> 1. Set the `ROOT_DIR` variable in [`cluster_train/conf.py`] on all nodes. For convenience, we often create a Unix user `paddle` on all nodes and set `ROOT_DIR=/home/paddle`. In this way, we can write public SSH keys into `/home/paddle/.ssh/authorized_keys` so that user `paddle` can SSH to all nodes without password.
</div>
<div class="section" id="prepare-job-workspace">
<span id="prepare-job-workspace"></span><h2>Prepare Job Workspace<a class="headerlink" href="#prepare-job-workspace" title="Permalink to this headline">¶</a></h2>
<p>We refer to the directory where we put dependent libraries, config files, etc., as the <em>workspace</em>.</p>
<p>The <code class="docutils literal"><span class="pre">train/test</span></code> data should be prepared before launching the cluster job. To satisfy the requirement that train/test data can be placed in a different directory from the workspace, Paddle refers to train/test data through index files named <code class="docutils literal"><span class="pre">train.list</span></code>/<code class="docutils literal"><span class="pre">test.list</span></code>, which are used in the model config file. So the train/test data also comes with these two list files. All local training demos already provide scripts to help you create the two files, and all nodes in a cluster job handle files with the same logic under normal conditions.</p>
<p>Generally, you can use the same model file from local training for cluster training. Keep in mind that the <code class="docutils literal"><span class="pre">batch_size</span></code> set in the <code class="docutils literal"><span class="pre">setting</span></code> function in the model file means the batch size on <code class="docutils literal"><span class="pre">each</span></code> node of the cluster job, rather than the total batch size, if synchronous SGD is used.</p>
<p>The following steps are based on the demo/recommendation demo in the demo directory.</p>
<p>Go through the demo/recommendation tutorial doc until the <code class="docutils literal"><span class="pre">Train</span></code> section; at that point you will have the train/test data and the model configuration file. Finally, just use demo/recommendation as the workspace for cluster training.</p>
<p>Your workspace should look as follows:</p>
<div class="highlight-default"><div class="highlight"><pre><span></span>.
|-- common_utils.py
|-- data
|   |-- config.json
|   |-- config_generator.py
|   |-- meta.bin
|   |-- meta_config.json
|   |-- meta_generator.py
|   |-- ml-1m
|   |-- ml_data.sh
|   |-- ratings.dat.test
|   |-- ratings.dat.train
|   |-- split.py
|   |-- test.list
|   `-- train.list
|-- dataprovider.py
|-- evaluate.sh
|-- prediction.py
|-- preprocess.sh
|-- requirements.txt
|-- run.sh
`-- trainer_config.py
</pre></div>
</div>
<p>Not all of these files are needed for cluster training, but it&#8217;s not necessary to remove the unused ones.</p>
<p><code class="docutils literal"><span class="pre">trainer_config.py</span></code>
Indicates the model config file.</p>
<p><code class="docutils literal"><span class="pre">train.list</span></code> and <code class="docutils literal"><span class="pre">test.list</span></code>
File indexes. They store the relative or absolute paths of all train/test data files on the current node.</p>
<p><code class="docutils literal"><span class="pre">dataprovider.py</span></code>
Used to read train/test samples. It is the same as in local training.</p>
<p><code class="docutils literal"><span class="pre">data</span></code>
All files in the data directory are referred to by train.list/test.list, which are in turn referred to by the data provider.</p>
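<p>As a concrete illustration of the index-file convention above, a hypothetical <code class="docutils literal"><span class="pre">train.list</span></code> for this workspace is just a plain text file with one data path per line:</p>
<div class="highlight-default"><div class="highlight"><pre><span></span>data/ratings.dat.train
</pre></div>
</div>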
<div class="section" id="prepare-cluster-job-configuration"> | |&#8211; ratings.dat.test
<span id="prepare-cluster-job-configuration"></span><h2>Prepare Cluster Job Configuration<a class="headerlink" href="#prepare-cluster-job-configuration" title="Permalink to this headline"></a></h2> | |&#8211; ratings.dat.train
<p>The options below must be carefully set in cluster_train/conf.py</p> | |&#8211; split.py
<p><code class="docutils literal"><span class="pre">HOSTS</span></code> all nodes hostname or ip that will run cluster job. You can also append user and ssh port with hostname, such as root&#64;192.168.100.17:9090.</p> | |&#8211; test.list
<p><code class="docutils literal"><span class="pre">ROOT_DIR</span></code> workspace ROOT directory for placing JOB workspace directory</p> | <code class="docutils literal"><span class="pre">--</span> <span class="pre">train.list</span> <span class="pre">|--</span> <span class="pre">dataprovider.py</span> <span class="pre">|--</span> <span class="pre">evaluate.sh</span> <span class="pre">|--</span> <span class="pre">prediction.py</span> <span class="pre">|--</span> <span class="pre">preprocess.sh</span> <span class="pre">|--</span> <span class="pre">requirements.txt</span> <span class="pre">|--</span> <span class="pre">run.sh</span></code>&#8211; trainer_config.py</p>
<p><code class="docutils literal"><span class="pre">PADDLE_NIC</span></code> the NIC(Network Interface Card) interface name for cluster communication channel, such as eth0 for ethternet, ib0 for infiniband.</p> <div class="highlight-default"><div class="highlight"><pre><span></span>Not all of these files are needed for cluster training, but it&#39;s not necessary to remove useless files.
<p><code class="docutils literal"><span class="pre">PADDLE_PORT</span></code> port number for cluster commnunication channel</p>
<p><code class="docutils literal"><span class="pre">PADDLE_PORTS_NUM</span></code> the number of port used for cluster communication channle. if the number of cluster nodes is small(less than 5~6nodes), recommend you set it to larger, such as 2 ~ 8, for better network performance.</p> ```trainer_config.py```
<p><code class="docutils literal"><span class="pre">PADDLE_PORTS_NUM_FOR_SPARSE</span></code> the number of port used for sparse updater cluster commnunication channel. if sparse remote update is used, set it like <code class="docutils literal"><span class="pre">PADDLE_PORTS_NUM</span></code></p> Indicates the model config file.
<p><code class="docutils literal"><span class="pre">LD_LIBRARY_PATH</span></code> set addtional LD_LIBRARY_PATH for cluster job. You can use it to set CUDA libraries path.</p>
<p>Default Configuration as follow:</p> ```train.list``` and ```test.list```
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">HOSTS</span> <span class="o">=</span> <span class="p">[</span> File index. It stores all relative or absolute file paths of all train/test data at current node.
<span class="s2">&quot;root@192.168.100.17&quot;</span><span class="p">,</span>
<span class="s2">&quot;root@192.168.100.18&quot;</span><span class="p">,</span> ```dataprovider.py```
<span class="p">]</span> used to read train/test samples. It&#39;s same as local training.
<span class="sd">&#39;&#39;&#39;</span> ```data```
<span class="sd">workspace configuration</span> all files in data directory are refered by train.list/test.list which are refered by data provider.
<span class="sd">&#39;&#39;&#39;</span>
<span class="c1">#root dir for workspace</span> ## Prepare Cluster Job Configuration
<span class="n">ROOT_DIR</span> <span class="o">=</span> <span class="s2">&quot;/home/paddle&quot;</span>
The options below must be carefully set in cluster_train/conf.py
<span class="sd">&#39;&#39;&#39;</span>
<span class="sd">network configuration</span> ```HOSTS``` all nodes hostname or ip that will run cluster job. You can also append user and ssh port with hostname, such as root@192.168.100.17:9090.
<span class="sd">&#39;&#39;&#39;</span>
<span class="c1">#pserver nics</span> ```ROOT_DIR``` workspace ROOT directory for placing JOB workspace directory
<span class="n">PADDLE_NIC</span> <span class="o">=</span> <span class="s2">&quot;eth0&quot;</span>
<span class="c1">#pserver port</span> ```PADDLE_NIC``` the NIC(Network Interface Card) interface name for cluster communication channel, such as eth0 for ethternet, ib0 for infiniband.
<span class="n">PADDLE_PORT</span> <span class="o">=</span> <span class="mi">7164</span>
<span class="c1">#pserver ports num</span> ```PADDLE_PORT``` port number for cluster commnunication channel
<span class="n">PADDLE_PORTS_NUM</span> <span class="o">=</span> <span class="mi">2</span>
<span class="c1">#pserver sparse ports num</span> ```PADDLE_PORTS_NUM``` the number of port used for cluster communication channle. if the number of cluster nodes is small(less than 5~6nodes), recommend you set it to larger, such as 2 ~ 8, for better network performance.
<span class="n">PADDLE_PORTS_NUM_FOR_SPARSE</span> <span class="o">=</span> <span class="mi">2</span>
```PADDLE_PORTS_NUM_FOR_SPARSE``` the number of port used for sparse updater cluster commnunication channel. if sparse remote update is used, set it like ```PADDLE_PORTS_NUM```
<span class="c1">#environments setting for all processes in cluster job</span>
<span class="n">LD_LIBRARY_PATH</span><span class="o">=</span><span class="s2">&quot;/usr/local/cuda/lib64:/usr/lib64&quot;</span> ```LD_LIBRARY_PATH``` set addtional LD_LIBRARY_PATH for cluster job. You can use it to set CUDA libraries path.
Default Configuration as follow:
```python
HOSTS = [
&quot;root@192.168.100.17&quot;,
&quot;root@192.168.100.18&quot;,
]
&#39;&#39;&#39;
workspace configuration
&#39;&#39;&#39;
#root dir for workspace
ROOT_DIR = &quot;/home/paddle&quot;
&#39;&#39;&#39;
network configuration
&#39;&#39;&#39;
#pserver nics
PADDLE_NIC = &quot;eth0&quot;
#pserver port
PADDLE_PORT = 7164
#pserver ports num
PADDLE_PORTS_NUM = 2
#pserver sparse ports num
PADDLE_PORTS_NUM_FOR_SPARSE = 2
#environments setting for all processes in cluster job
LD_LIBRARY_PATH=&quot;/usr/local/cuda/lib64:/usr/lib64&quot;
</pre></div> </pre></div>
</div> </div>
<div class="section" id="launching-cluster-job"> <div class="section" id="launching-cluster-job">
...@@ -209,10 +227,8 @@ It provides stderr and stdout of trainer process. Check error log if training cr ...@@ -209,10 +227,8 @@ It provides stderr and stdout of trainer process. Check error log if training cr
<div class="sphinxsidebarwrapper"> <div class="sphinxsidebarwrapper">
<h3><a href="../../index.html">Table Of Contents</a></h3> <h3><a href="../../index.html">Table Of Contents</a></h3>
<ul> <ul>
<li><a class="reference internal" href="#">Cluster Training</a><ul> <li><a class="reference internal" href="#">Distributed Training</a><ul>
<li><a class="reference internal" href="#pre-requirements">Pre-requirements</a></li> <li><a class="reference internal" href="#prerequisite">Prerequisite</a><ul>
<li><a class="reference internal" href="#prepare-job-workspace">Prepare Job Workspace</a></li>
<li><a class="reference internal" href="#prepare-cluster-job-configuration">Prepare Cluster Job Configuration</a><ul>
<li><a class="reference internal" href="#launching-cluster-job">Launching Cluster Job</a></li> <li><a class="reference internal" href="#launching-cluster-job">Launching Cluster Job</a></li>
<li><a class="reference internal" href="#kill-cluster-job">Kill Cluster Job</a></li> <li><a class="reference internal" href="#kill-cluster-job">Kill Cluster Job</a></li>
<li><a class="reference internal" href="#check-cluster-training-result">Check Cluster Training Result</a></li> <li><a class="reference internal" href="#check-cluster-training-result">Check Cluster Training Result</a></li>
...@@ -271,7 +287,7 @@ It provides stderr and stdout of trainer process. Check error log if training cr ...@@ -271,7 +287,7 @@ It provides stderr and stdout of trainer process. Check error log if training cr
</div> </div>
<div class="footer" role="contentinfo"> <div class="footer" role="contentinfo">
&#169; Copyright 2016, PaddlePaddle developers. &#169; Copyright 2016, PaddlePaddle developers.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.8. Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.9.
</div> </div>
</body> </body>
</html> </html>
\ No newline at end of file
<div class="section" id="quick-start-tutorial"> <div class="section" id="quick-start">
<span id="quick-start-tutorial"></span><h1>Quick Start Tutorial<a class="headerlink" href="#quick-start-tutorial" title="Permalink to this headline"></a></h1> <span id="quick-start"></span><h1>Quick Start<a class="headerlink" href="#quick-start" title="Permalink to this headline"></a></h1>
<p>This tutorial will teach the basics of deep learning (DL), including how to implement many different models in PaddlePaddle. You will learn how to:</p> <p>This tutorial will teach the basics of deep learning (DL), including how to implement many different models in PaddlePaddle. You will learn how to:</p>
<ul class="simple"> <ul class="simple">
<li>Prepare data in the standardized format that PaddlePaddle accepts.</li> <li>Prepare data in the standardized format that PaddlePaddle accepts.</li>
...@@ -205,7 +205,7 @@ var _hmt = _hmt || []; ...@@ -205,7 +205,7 @@ var _hmt = _hmt || [];
<p>You need to add a data provider definition <code class="docutils literal"><span class="pre">define_py_data_sources2</span></code> in your network configuration. This definition specifies:</p> <p>You need to add a data provider definition <code class="docutils literal"><span class="pre">define_py_data_sources2</span></code> in your network configuration. This definition specifies:</p>
<ul class="simple"> <ul class="simple">
<li>The path of the training and testing data (<code class="docutils literal"><span class="pre">data/train.list</span></code>, <code class="docutils literal"><span class="pre">data/test.list</span></code>).</li> <li>The path of the training and testing data (<code class="docutils literal"><span class="pre">data/train.list</span></code>, <code class="docutils literal"><span class="pre">data/test.list</span></code>).</li>
<li>The location of the data provider file (<code class="docutils literal"><span class="pre">dataprovider_pow</span></code>).</li> <li>The location of the data provider file (<code class="docutils literal"><span class="pre">dataprovider_bow</span></code>).</li>
<li>The function to call to get data. (<code class="docutils literal"><span class="pre">process</span></code>).</li> <li>The function to call to get data. (<code class="docutils literal"><span class="pre">process</span></code>).</li>
<li>Additional arguments or data. Here it passes the path of the word dictionary (a sketch follows this list).</li> <li>Additional arguments or data. Here it passes the path of the word dictionary (a sketch follows this list).</li>
</ul> </ul>
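Taken together, the items above map onto a single call in the network configuration. A minimal sketch, assuming an illustrative dictionary path and keyword name `dictionary` (the file lists, module name, and function name come from the list above):

```python
from paddle.trainer_config_helpers import define_py_data_sources2

# Wire the train/test file lists to the `process` function defined in
# dataprovider_bow.py; `args` is forwarded to the provider's init hook.
define_py_data_sources2(
    train_list='data/train.list',
    test_list='data/test.list',
    module='dataprovider_bow',             # dataprovider_bow.py on the Python path
    obj='process',                         # the function that yields samples
    args={'dictionary': 'data/dict.txt'})  # assumed key and path, for illustration
```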
...@@ -502,7 +502,7 @@ mv rank-00000 result.txt ...@@ -502,7 +502,7 @@ mv rank-00000 result.txt
<td class="left">Word embedding</td> <td class="left">Word embedding</td>
<td class="left"> 15MB </td> <td class="left"> 15MB </td>
<td class="left"> 8.484%</td> <td class="left"> 8.484%</td>
<td class="left">trainer_config.bow.py</td> <td class="left">trainer_config.emb.py</td>
</tr><tr> </tr><tr>
<td class="left">Convolution model</td> <td class="left">Convolution model</td>
<td class="left"> 16MB </td> <td class="left"> 16MB </td>
...@@ -575,7 +575,7 @@ mv rank-00000 result.txt ...@@ -575,7 +575,7 @@ mv rank-00000 result.txt
<div class="sphinxsidebarwrapper"> <div class="sphinxsidebarwrapper">
<h3><a href="../../index.html">Table Of Contents</a></h3> <h3><a href="../../index.html">Table Of Contents</a></h3>
<ul> <ul>
<li><a class="reference internal" href="#">Quick Start Tutorial</a><ul> <li><a class="reference internal" href="#">Quick Start</a><ul>
<li><a class="reference internal" href="#install">Install</a></li> <li><a class="reference internal" href="#install">Install</a></li>
<li><a class="reference internal" href="#overview">Overview</a></li> <li><a class="reference internal" href="#overview">Overview</a></li>
<li><a class="reference internal" href="#preprocess-data-into-standardized-format">Preprocess data into standardized format</a></li> <li><a class="reference internal" href="#preprocess-data-into-standardized-format">Preprocess data into standardized format</a></li>
...@@ -605,8 +605,8 @@ mv rank-00000 result.txt ...@@ -605,8 +605,8 @@ mv rank-00000 result.txt
</ul> </ul>
<h4>Previous topic</h4> <h4>Previous topic</h4>
<p class="topless"><a href="../../index.html" <p class="topless"><a href="../../introduction/index.html"
title="previous chapter">PaddlePaddle Documentation</a></p> title="previous chapter">Introduction</a></p>
<h4>Next topic</h4> <h4>Next topic</h4>
<p class="topless"><a href="../../build/index.html" <p class="topless"><a href="../../build/index.html"
title="next chapter">Build And Install PaddlePaddle</a></p> title="next chapter">Build And Install PaddlePaddle</a></p>
...@@ -644,14 +644,14 @@ mv rank-00000 result.txt ...@@ -644,14 +644,14 @@ mv rank-00000 result.txt
<a href="../../build/index.html" title="Build And Install PaddlePaddle" <a href="../../build/index.html" title="Build And Install PaddlePaddle"
>next</a> |</li> >next</a> |</li>
<li class="right" > <li class="right" >
<a href="../../index.html" title="PaddlePaddle Documentation" <a href="../../introduction/index.html" title="Introduction"
>previous</a> |</li> >previous</a> |</li>
<li class="nav-item nav-item-0"><a href="../../index.html">PaddlePaddle documentation</a> &#187;</li> <li class="nav-item nav-item-0"><a href="../../index.html">PaddlePaddle documentation</a> &#187;</li>
</ul> </ul>
</div> </div>
<div class="footer" role="contentinfo"> <div class="footer" role="contentinfo">
&#169; Copyright 2016, PaddlePaddle developers. &#169; Copyright 2016, PaddlePaddle developers.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.8. Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.9.
</div> </div>
</body> </body>
</html> </html>
\ No newline at end of file
...@@ -244,7 +244,7 @@ entries and/or test entries</li> ...@@ -244,7 +244,7 @@ entries and/or test entries</li>
</div> </div>
<div class="footer" role="contentinfo"> <div class="footer" role="contentinfo">
&#169; Copyright 2016, PaddlePaddle developers. &#169; Copyright 2016, PaddlePaddle developers.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.8. Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.9.
</div> </div>
</body> </body>
</html> </html>
\ No newline at end of file
...@@ -121,7 +121,6 @@ It specifies the field types and file names: 1) there are four types of field fo ...@@ -121,7 +121,6 @@
<span class="s2">&quot;fields&quot;</span><span class="p">:</span> <span class="p">[</span><span class="s2">&quot;id&quot;</span><span class="p">,</span> <span class="s2">&quot;title&quot;</span><span class="p">,</span> <span class="s2">&quot;genres&quot;</span><span class="p">]</span> <span class="s2">&quot;fields&quot;</span><span class="p">:</span> <span class="p">[</span><span class="s2">&quot;id&quot;</span><span class="p">,</span> <span class="s2">&quot;title&quot;</span><span class="p">,</span> <span class="s2">&quot;genres&quot;</span><span class="p">]</span>
<span class="p">}</span> <span class="p">}</span>
<span class="p">}</span> <span class="p">}</span>
</pre></div> </pre></div>
</div> </div>
</div> </div>
...@@ -410,8 +409,8 @@ cp ml-1m/ratings.dat.test . ...@@ -410,8 +409,8 @@ cp ml-1m/ratings.dat.test .
<span class="c1"># load meta file</span> <span class="c1"># load meta file</span>
<span class="n">meta</span> <span class="o">=</span> <span class="n">pickle</span><span class="o">.</span><span class="n">load</span><span class="p">(</span><span class="n">f</span><span class="p">)</span> <span class="n">meta</span> <span class="o">=</span> <span class="n">pickle</span><span class="o">.</span><span class="n">load</span><span class="p">(</span><span class="n">f</span><span class="p">)</span>
<span class="n">settings</span><span class="p">(</span><span class="n">batch_size</span><span class="o">=</span><span class="mi">1600</span><span class="p">,</span> <span class="n">learning_rate</span><span class="o">=</span><span class="mf">1e-3</span><span class="p">,</span> <span class="n">settings</span><span class="p">(</span>
<span class="n">learning_method</span><span class="o">=</span><span class="n">RMSPropOptimizer</span><span class="p">())</span> <span class="n">batch_size</span><span class="o">=</span><span class="mi">1600</span><span class="p">,</span> <span class="n">learning_rate</span><span class="o">=</span><span class="mf">1e-3</span><span class="p">,</span> <span class="n">learning_method</span><span class="o">=</span><span class="n">RMSPropOptimizer</span><span class="p">())</span>
<span class="k">def</span> <span class="nf">construct_feature</span><span class="p">(</span><span class="n">name</span><span class="p">):</span> <span class="k">def</span> <span class="nf">construct_feature</span><span class="p">(</span><span class="n">name</span><span class="p">):</span>
...@@ -442,11 +441,10 @@ cp ml-1m/ratings.dat.test . ...@@ -442,11 +441,10 @@ cp ml-1m/ratings.dat.test .
<span class="n">slot_name</span> <span class="o">=</span> <span class="n">each_meta</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="s1">&#39;name&#39;</span><span class="p">,</span> <span class="s1">&#39;</span><span class="si">%s</span><span class="s1">_id&#39;</span> <span class="o">%</span> <span class="n">name</span><span class="p">)</span> <span class="n">slot_name</span> <span class="o">=</span> <span class="n">each_meta</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="s1">&#39;name&#39;</span><span class="p">,</span> <span class="s1">&#39;</span><span class="si">%s</span><span class="s1">_id&#39;</span> <span class="o">%</span> <span class="n">name</span><span class="p">)</span>
<span class="k">if</span> <span class="n">type_name</span> <span class="o">==</span> <span class="s1">&#39;id&#39;</span><span class="p">:</span> <span class="k">if</span> <span class="n">type_name</span> <span class="o">==</span> <span class="s1">&#39;id&#39;</span><span class="p">:</span>
<span class="n">slot_dim</span> <span class="o">=</span> <span class="n">each_meta</span><span class="p">[</span><span class="s1">&#39;max&#39;</span><span class="p">]</span> <span class="n">slot_dim</span> <span class="o">=</span> <span class="n">each_meta</span><span class="p">[</span><span class="s1">&#39;max&#39;</span><span class="p">]</span>
<span class="n">embedding</span> <span class="o">=</span> <span class="n">embedding_layer</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">data_layer</span><span class="p">(</span><span class="n">slot_name</span><span class="p">,</span> <span class="n">embedding</span> <span class="o">=</span> <span class="n">embedding_layer</span><span class="p">(</span>
<span class="n">size</span><span class="o">=</span><span class="n">slot_dim</span><span class="p">),</span> <span class="nb">input</span><span class="o">=</span><span class="n">data_layer</span><span class="p">(</span>
<span class="n">size</span><span class="o">=</span><span class="mi">256</span><span class="p">)</span> <span class="n">slot_name</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="n">slot_dim</span><span class="p">),</span> <span class="n">size</span><span class="o">=</span><span class="mi">256</span><span class="p">)</span>
<span class="n">fusion</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">fc_layer</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">embedding</span><span class="p">,</span> <span class="n">fusion</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">fc_layer</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">embedding</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">256</span><span class="p">))</span>
<span class="n">size</span><span class="o">=</span><span class="mi">256</span><span class="p">))</span>
<span class="k">elif</span> <span class="n">type_name</span> <span class="o">==</span> <span class="s1">&#39;embedding&#39;</span><span class="p">:</span> <span class="k">elif</span> <span class="n">type_name</span> <span class="o">==</span> <span class="s1">&#39;embedding&#39;</span><span class="p">:</span>
<span class="n">is_seq</span> <span class="o">=</span> <span class="n">each_meta</span><span class="p">[</span><span class="s1">&#39;seq&#39;</span><span class="p">]</span> <span class="o">==</span> <span class="s1">&#39;sequence&#39;</span> <span class="n">is_seq</span> <span class="o">=</span> <span class="n">each_meta</span><span class="p">[</span><span class="s1">&#39;seq&#39;</span><span class="p">]</span> <span class="o">==</span> <span class="s1">&#39;sequence&#39;</span>
<span class="n">slot_dim</span> <span class="o">=</span> <span class="nb">len</span><span class="p">(</span><span class="n">each_meta</span><span class="p">[</span><span class="s1">&#39;dict&#39;</span><span class="p">])</span> <span class="n">slot_dim</span> <span class="o">=</span> <span class="nb">len</span><span class="p">(</span><span class="n">each_meta</span><span class="p">[</span><span class="s1">&#39;dict&#39;</span><span class="p">])</span>
...@@ -454,17 +452,14 @@ cp ml-1m/ratings.dat.test . ...@@ -454,17 +452,14 @@ cp ml-1m/ratings.dat.test .
<span class="n">embedding</span> <span class="o">=</span> <span class="n">embedding_layer</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">din</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">256</span><span class="p">)</span> <span class="n">embedding</span> <span class="o">=</span> <span class="n">embedding_layer</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">din</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">256</span><span class="p">)</span>
<span class="k">if</span> <span class="n">is_seq</span><span class="p">:</span> <span class="k">if</span> <span class="n">is_seq</span><span class="p">:</span>
<span class="n">fusion</span><span class="o">.</span><span class="n">append</span><span class="p">(</span> <span class="n">fusion</span><span class="o">.</span><span class="n">append</span><span class="p">(</span>
<span class="n">text_conv_pool</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">embedding</span><span class="p">,</span> <span class="n">context_len</span><span class="o">=</span><span class="mi">5</span><span class="p">,</span> <span class="n">text_conv_pool</span><span class="p">(</span>
<span class="n">hidden_size</span><span class="o">=</span><span class="mi">256</span><span class="p">))</span> <span class="nb">input</span><span class="o">=</span><span class="n">embedding</span><span class="p">,</span> <span class="n">context_len</span><span class="o">=</span><span class="mi">5</span><span class="p">,</span> <span class="n">hidden_size</span><span class="o">=</span><span class="mi">256</span><span class="p">))</span>
<span class="k">else</span><span class="p">:</span> <span class="k">else</span><span class="p">:</span>
<span class="n">fusion</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">fc_layer</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">embedding</span><span class="p">,</span> <span class="n">fusion</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">fc_layer</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">embedding</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">256</span><span class="p">))</span>
<span class="n">size</span><span class="o">=</span><span class="mi">256</span><span class="p">))</span>
<span class="k">elif</span> <span class="n">type_name</span> <span class="o">==</span> <span class="s1">&#39;one_hot_dense&#39;</span><span class="p">:</span> <span class="k">elif</span> <span class="n">type_name</span> <span class="o">==</span> <span class="s1">&#39;one_hot_dense&#39;</span><span class="p">:</span>
<span class="n">slot_dim</span> <span class="o">=</span> <span class="nb">len</span><span class="p">(</span><span class="n">each_meta</span><span class="p">[</span><span class="s1">&#39;dict&#39;</span><span class="p">])</span> <span class="n">slot_dim</span> <span class="o">=</span> <span class="nb">len</span><span class="p">(</span><span class="n">each_meta</span><span class="p">[</span><span class="s1">&#39;dict&#39;</span><span class="p">])</span>
<span class="n">hidden</span> <span class="o">=</span> <span class="n">fc_layer</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">data_layer</span><span class="p">(</span><span class="n">slot_name</span><span class="p">,</span> <span class="n">slot_dim</span><span class="p">),</span> <span class="n">hidden</span> <span class="o">=</span> <span class="n">fc_layer</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">data_layer</span><span class="p">(</span><span class="n">slot_name</span><span class="p">,</span> <span class="n">slot_dim</span><span class="p">),</span> <span class="n">size</span><span class="o">=</span><span class="mi">256</span><span class="p">)</span>
<span class="n">size</span><span class="o">=</span><span class="mi">256</span><span class="p">)</span> <span class="n">fusion</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">fc_layer</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">hidden</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">256</span><span class="p">))</span>
<span class="n">fusion</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">fc_layer</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">hidden</span><span class="p">,</span>
<span class="n">size</span><span class="o">=</span><span class="mi">256</span><span class="p">))</span>
<span class="k">return</span> <span class="n">fc_layer</span><span class="p">(</span><span class="n">name</span><span class="o">=</span><span class="s2">&quot;</span><span class="si">%s</span><span class="s2">_fusion&quot;</span> <span class="o">%</span> <span class="n">name</span><span class="p">,</span> <span class="nb">input</span><span class="o">=</span><span class="n">fusion</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">256</span><span class="p">)</span> <span class="k">return</span> <span class="n">fc_layer</span><span class="p">(</span><span class="n">name</span><span class="o">=</span><span class="s2">&quot;</span><span class="si">%s</span><span class="s2">_fusion&quot;</span> <span class="o">%</span> <span class="n">name</span><span class="p">,</span> <span class="nb">input</span><span class="o">=</span><span class="n">fusion</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">256</span><span class="p">)</span>
...@@ -473,11 +468,17 @@ cp ml-1m/ratings.dat.test . ...@@ -473,11 +468,17 @@ cp ml-1m/ratings.dat.test .
<span class="n">user_feature</span> <span class="o">=</span> <span class="n">construct_feature</span><span class="p">(</span><span class="s2">&quot;user&quot;</span><span class="p">)</span> <span class="n">user_feature</span> <span class="o">=</span> <span class="n">construct_feature</span><span class="p">(</span><span class="s2">&quot;user&quot;</span><span class="p">)</span>
<span class="n">similarity</span> <span class="o">=</span> <span class="n">cos_sim</span><span class="p">(</span><span class="n">a</span><span class="o">=</span><span class="n">movie_feature</span><span class="p">,</span> <span class="n">b</span><span class="o">=</span><span class="n">user_feature</span><span class="p">)</span> <span class="n">similarity</span> <span class="o">=</span> <span class="n">cos_sim</span><span class="p">(</span><span class="n">a</span><span class="o">=</span><span class="n">movie_feature</span><span class="p">,</span> <span class="n">b</span><span class="o">=</span><span class="n">user_feature</span><span class="p">)</span>
<span class="k">if</span> <span class="ow">not</span> <span class="n">is_predict</span><span class="p">:</span> <span class="k">if</span> <span class="ow">not</span> <span class="n">is_predict</span><span class="p">:</span>
<span class="n">outputs</span><span class="p">(</span><span class="n">regression_cost</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">similarity</span><span class="p">,</span> <span class="n">outputs</span><span class="p">(</span>
<span class="n">label</span><span class="o">=</span><span class="n">data_layer</span><span class="p">(</span><span class="s1">&#39;rating&#39;</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">1</span><span class="p">)))</span> <span class="n">regression_cost</span><span class="p">(</span>
<span class="nb">input</span><span class="o">=</span><span class="n">similarity</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="n">data_layer</span><span class="p">(</span>
<span class="n">define_py_data_sources2</span><span class="p">(</span><span class="s1">&#39;data/train.list&#39;</span><span class="p">,</span> <span class="s1">&#39;data/test.list&#39;</span><span class="p">,</span> <span class="n">module</span><span class="o">=</span><span class="s1">&#39;dataprovider&#39;</span><span class="p">,</span> <span class="s1">&#39;rating&#39;</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">1</span><span class="p">)))</span>
<span class="n">obj</span><span class="o">=</span><span class="s1">&#39;process&#39;</span><span class="p">,</span> <span class="n">args</span><span class="o">=</span><span class="p">{</span><span class="s1">&#39;meta&#39;</span><span class="p">:</span> <span class="n">meta</span><span class="p">})</span>
<span class="n">define_py_data_sources2</span><span class="p">(</span>
<span class="s1">&#39;data/train.list&#39;</span><span class="p">,</span>
<span class="s1">&#39;data/test.list&#39;</span><span class="p">,</span>
<span class="n">module</span><span class="o">=</span><span class="s1">&#39;dataprovider&#39;</span><span class="p">,</span>
<span class="n">obj</span><span class="o">=</span><span class="s1">&#39;process&#39;</span><span class="p">,</span>
<span class="n">args</span><span class="o">=</span><span class="p">{</span><span class="s1">&#39;meta&#39;</span><span class="p">:</span> <span class="n">meta</span><span class="p">})</span>
<span class="k">else</span><span class="p">:</span> <span class="k">else</span><span class="p">:</span>
<span class="n">outputs</span><span class="p">(</span><span class="n">similarity</span><span class="p">)</span> <span class="n">outputs</span><span class="p">(</span><span class="n">similarity</span><span class="p">)</span>
</pre></div> </pre></div>
...@@ -527,6 +528,7 @@ features.</p> ...@@ -527,6 +528,7 @@ features.</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">paddle.trainer.PyDataProvider2</span> <span class="kn">import</span> <span class="o">*</span> <div class="highlight-python"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">paddle.trainer.PyDataProvider2</span> <span class="kn">import</span> <span class="o">*</span>
<span class="kn">import</span> <span class="nn">common_utils</span> <span class="c1"># parse</span> <span class="kn">import</span> <span class="nn">common_utils</span> <span class="c1"># parse</span>
<span class="k">def</span> <span class="nf">hook</span><span class="p">(</span><span class="n">settings</span><span class="p">,</span> <span class="n">meta</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span> <span class="k">def</span> <span class="nf">hook</span><span class="p">(</span><span class="n">settings</span><span class="p">,</span> <span class="n">meta</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span>
<span class="sd">&quot;&quot;&quot;</span> <span class="sd">&quot;&quot;&quot;</span>
<span class="sd"> Init hook is invoked before process data. It will set obj.slots and store</span> <span class="sd"> Init hook is invoked before process data. It will set obj.slots and store</span>
...@@ -553,6 +555,7 @@ features.</p> ...@@ -553,6 +555,7 @@ features.</p>
<span class="n">settings</span><span class="o">.</span><span class="n">input_types</span> <span class="o">=</span> <span class="n">headers</span> <span class="n">settings</span><span class="o">.</span><span class="n">input_types</span> <span class="o">=</span> <span class="n">headers</span>
<span class="n">settings</span><span class="o">.</span><span class="n">meta</span> <span class="o">=</span> <span class="n">meta</span> <span class="n">settings</span><span class="o">.</span><span class="n">meta</span> <span class="o">=</span> <span class="n">meta</span>
<span class="nd">@provider</span><span class="p">(</span><span class="n">init_hook</span><span class="o">=</span><span class="n">hook</span><span class="p">,</span> <span class="n">cache</span><span class="o">=</span><span class="n">CacheType</span><span class="o">.</span><span class="n">CACHE_PASS_IN_MEM</span><span class="p">)</span> <span class="nd">@provider</span><span class="p">(</span><span class="n">init_hook</span><span class="o">=</span><span class="n">hook</span><span class="p">,</span> <span class="n">cache</span><span class="o">=</span><span class="n">CacheType</span><span class="o">.</span><span class="n">CACHE_PASS_IN_MEM</span><span class="p">)</span>
<span class="k">def</span> <span class="nf">process</span><span class="p">(</span><span class="n">settings</span><span class="p">,</span> <span class="n">filename</span><span class="p">):</span> <span class="k">def</span> <span class="nf">process</span><span class="p">(</span><span class="n">settings</span><span class="p">,</span> <span class="n">filename</span><span class="p">):</span>
<span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="n">filename</span><span class="p">,</span> <span class="s1">&#39;r&#39;</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span> <span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="n">filename</span><span class="p">,</span> <span class="s1">&#39;r&#39;</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
...@@ -755,7 +758,7 @@ Prediction Score is 3.13 ...@@ -755,7 +758,7 @@ Prediction Score is 3.13
</div> </div>
<div class="footer" role="contentinfo"> <div class="footer" role="contentinfo">
&#169; Copyright 2016, PaddlePaddle developers. &#169; Copyright 2016, PaddlePaddle developers.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.8. Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.9.
</div> </div>
</body> </body>
</html> </html>
\ No newline at end of file
...@@ -146,7 +146,7 @@ var _hmt = _hmt || []; ...@@ -146,7 +146,7 @@ var _hmt = _hmt || [];
</div> </div>
<div class="footer" role="contentinfo"> <div class="footer" role="contentinfo">
&#169; Copyright 2016, PaddlePaddle developers. &#169; Copyright 2016, PaddlePaddle developers.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.8. Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.9.
</div> </div>
</body> </body>
</html> </html>
\ No newline at end of file
...@@ -325,7 +325,7 @@ feature: the extracted features from data <span class="nb">set</span> ...@@ -325,7 +325,7 @@ feature: the extracted features from data <span class="nb">set</span>
</div> </div>
<div class="footer" role="contentinfo"> <div class="footer" role="contentinfo">
&#169; Copyright 2016, PaddlePaddle developers. &#169; Copyright 2016, PaddlePaddle developers.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.8. Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.9.
</div> </div>
</body> </body>
</html> </html>
\ No newline at end of file
...@@ -148,7 +148,7 @@ var _hmt = _hmt || []; ...@@ -148,7 +148,7 @@ var _hmt = _hmt || [];
</div> </div>
<div class="footer" role="contentinfo"> <div class="footer" role="contentinfo">
&#169; Copyright 2016, PaddlePaddle developers. &#169; Copyright 2016, PaddlePaddle developers.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.8. Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.9.
</div> </div>
</body> </body>
</html> </html>
\ No newline at end of file
...@@ -453,7 +453,7 @@ exists or change the model path.</p> ...@@ -453,7 +453,7 @@ exists or change the model path.</p>
</div> </div>
<div class="footer" role="contentinfo"> <div class="footer" role="contentinfo">
&#169; Copyright 2016, PaddlePaddle developers. &#169; Copyright 2016, PaddlePaddle developers.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.8. Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.9.
</div> </div>
</body> </body>
</html> </html>
\ No newline at end of file
...@@ -158,7 +158,7 @@ var _hmt = _hmt || []; ...@@ -158,7 +158,7 @@ var _hmt = _hmt || [];
</div> </div>
<div class="footer" role="contentinfo"> <div class="footer" role="contentinfo">
&#169; Copyright 2016, PaddlePaddle developers. &#169; Copyright 2016, PaddlePaddle developers.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.8. Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.9.
</div> </div>
</body> </body>
</html> </html>
\ No newline at end of file
...@@ -514,7 +514,7 @@ var _hmt = _hmt || []; ...@@ -514,7 +514,7 @@ var _hmt = _hmt || [];
</div> </div>
<div class="footer" role="contentinfo"> <div class="footer" role="contentinfo">
&#169; Copyright 2016, PaddlePaddle developers. &#169; Copyright 2016, PaddlePaddle developers.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.8. Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.9.
</div> </div>
</body> </body>
</html> </html>
\ No newline at end of file
...@@ -133,7 +133,7 @@ var _hmt = _hmt || []; ...@@ -133,7 +133,7 @@ var _hmt = _hmt || [];
</div> </div>
<div class="footer" role="contentinfo"> <div class="footer" role="contentinfo">
&#169; Copyright 2016, PaddlePaddle developers. &#169; Copyright 2016, PaddlePaddle developers.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.8. Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.9.
</div> </div>
</body> </body>
</html> </html>
\ No newline at end of file
...@@ -512,7 +512,7 @@ add_test<span class="o">(</span>NAME test_FCGrad ...@@ -512,7 +512,7 @@ add_test<span class="o">(</span>NAME test_FCGrad
</div> </div>
<div class="footer" role="contentinfo"> <div class="footer" role="contentinfo">
&#169; Copyright 2016, PaddlePaddle developers. &#169; Copyright 2016, PaddlePaddle developers.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.8. Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.9.
</div> </div>
</body> </body>
</html> </html>
\ No newline at end of file
...@@ -29,7 +29,7 @@ ...@@ -29,7 +29,7 @@
<link rel="index" title="Index" href="genindex.html" /> <link rel="index" title="Index" href="genindex.html" />
<link rel="search" title="Search" href="search.html" /> <link rel="search" title="Search" href="search.html" />
<link rel="top" title="PaddlePaddle documentation" href="#" /> <link rel="top" title="PaddlePaddle documentation" href="#" />
<link rel="next" title="Quick Start Tutorial" href="demo/quick_start/index_en.html" /> <link rel="next" title="Introduction" href="introduction/index.html" />
<script> <script>
var _hmt = _hmt || []; var _hmt = _hmt || [];
(function() { (function() {
...@@ -52,7 +52,7 @@ var _hmt = _hmt || []; ...@@ -52,7 +52,7 @@ var _hmt = _hmt || [];
<a href="py-modindex.html" title="Python Module Index" <a href="py-modindex.html" title="Python Module Index"
>modules</a> |</li> >modules</a> |</li>
<li class="right" > <li class="right" >
<a href="demo/quick_start/index_en.html" title="Quick Start Tutorial" <a href="introduction/index.html" title="Introduction"
accesskey="N">next</a> |</li> accesskey="N">next</a> |</li>
<li class="nav-item nav-item-0"><a href="#">PaddlePaddle documentation</a> &#187;</li> <li class="nav-item nav-item-0"><a href="#">PaddlePaddle documentation</a> &#187;</li>
</ul> </ul>
...@@ -69,6 +69,7 @@ var _hmt = _hmt || []; ...@@ -69,6 +69,7 @@ var _hmt = _hmt || [];
<span id="user-guide"></span><h2>User Guide<a class="headerlink" href="#user-guide" title="Permalink to this headline"></a></h2> <span id="user-guide"></span><h2>User Guide<a class="headerlink" href="#user-guide" title="Permalink to this headline"></a></h2>
<div class="toctree-wrapper compound"> <div class="toctree-wrapper compound">
<ul> <ul>
<li class="toctree-l1"><a class="reference internal" href="introduction/index.html">Introduction</a></li>
<li class="toctree-l1"><a class="reference internal" href="demo/quick_start/index_en.html">Quick Start</a></li> <li class="toctree-l1"><a class="reference internal" href="demo/quick_start/index_en.html">Quick Start</a></li>
<li class="toctree-l1"><a class="reference internal" href="build/index.html">Build and Installation</a></li> <li class="toctree-l1"><a class="reference internal" href="build/index.html">Build and Installation</a></li>
<li class="toctree-l1"><a class="reference internal" href="build/contribute_to_paddle.html">Contribute Code</a></li> <li class="toctree-l1"><a class="reference internal" href="build/contribute_to_paddle.html">Contribute Code</a></li>
...@@ -116,8 +117,8 @@ var _hmt = _hmt || []; ...@@ -116,8 +117,8 @@ var _hmt = _hmt || [];
</ul> </ul>
<h4>Next topic</h4> <h4>Next topic</h4>
<p class="topless"><a href="demo/quick_start/index_en.html" <p class="topless"><a href="introduction/index.html"
title="next chapter">Quick Start Tutorial</a></p> title="next chapter">Introduction</a></p>
<div role="note" aria-label="source link"> <div role="note" aria-label="source link">
<h3>This Page</h3> <h3>This Page</h3>
<ul class="this-page-menu"> <ul class="this-page-menu">
...@@ -149,14 +150,14 @@ var _hmt = _hmt || []; ...@@ -149,14 +150,14 @@ var _hmt = _hmt || [];
<a href="py-modindex.html" title="Python Module Index" <a href="py-modindex.html" title="Python Module Index"
>modules</a> |</li> >modules</a> |</li>
<li class="right" > <li class="right" >
<a href="demo/quick_start/index_en.html" title="Quick Start Tutorial" <a href="introduction/index.html" title="Introduction"
>next</a> |</li> >next</a> |</li>
<li class="nav-item nav-item-0"><a href="#">PaddlePaddle documentation</a> &#187;</li> <li class="nav-item nav-item-0"><a href="#">PaddlePaddle documentation</a> &#187;</li>
</ul> </ul>
</div> </div>
<div class="footer" role="contentinfo"> <div class="footer" role="contentinfo">
&#169; Copyright 2016, PaddlePaddle developers. &#169; Copyright 2016, PaddlePaddle developers.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.8. Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.9.
</div> </div>
</body> </body>
</html> </html>
\ No newline at end of file
...@@ -30,7 +30,7 @@ ...@@ -30,7 +30,7 @@
<link rel="search" title="Search" href="search.html" /> <link rel="search" title="Search" href="search.html" />
<link rel="top" title="PaddlePaddle documentation" href="index.html" /> <link rel="top" title="PaddlePaddle documentation" href="index.html" />
<link rel="next" title="Layers Documents" href="source/gserver/layers/index.html" /> <link rel="next" title="Layers Documents" href="source/gserver/layers/index.html" />
<link rel="prev" title="Cluster Training" href="cluster/opensource/cluster_train.html" /> <link rel="prev" title="Distributed Training" href="cluster/opensource/cluster_train.html" />
<script> <script>
var _hmt = _hmt || []; var _hmt = _hmt || [];
(function() { (function() {
...@@ -56,7 +56,7 @@ var _hmt = _hmt || []; ...@@ -56,7 +56,7 @@ var _hmt = _hmt || [];
<a href="source/gserver/layers/index.html" title="Layers Documents" <a href="source/gserver/layers/index.html" title="Layers Documents"
accesskey="N">next</a> |</li> accesskey="N">next</a> |</li>
<li class="right" > <li class="right" >
<a href="cluster/opensource/cluster_train.html" title="Cluster Training" <a href="cluster/opensource/cluster_train.html" title="Distributed Training"
accesskey="P">previous</a> |</li> accesskey="P">previous</a> |</li>
<li class="nav-item nav-item-0"><a href="index.html">PaddlePaddle documentation</a> &#187;</li> <li class="nav-item nav-item-0"><a href="index.html">PaddlePaddle documentation</a> &#187;</li>
</ul> </ul>
...@@ -72,7 +72,6 @@ var _hmt = _hmt || []; ...@@ -72,7 +72,6 @@ var _hmt = _hmt || [];
<div class="toctree-wrapper compound"> <div class="toctree-wrapper compound">
<ul> <ul>
<li class="toctree-l1"><a class="reference internal" href="source/gserver/layers/index.html">Layer Source Code Document</a></li> <li class="toctree-l1"><a class="reference internal" href="source/gserver/layers/index.html">Layer Source Code Document</a></li>
<li class="toctree-l1"><a class="reference internal" href="ui/api/trainer_config_helpers/layers_index.html">Layer Python API Document</a></li>
</ul> </ul>
</div> </div>
</div> </div>
...@@ -85,7 +84,7 @@ var _hmt = _hmt || []; ...@@ -85,7 +84,7 @@ var _hmt = _hmt || [];
<div class="sphinxsidebarwrapper"> <div class="sphinxsidebarwrapper">
<h4>Previous topic</h4> <h4>Previous topic</h4>
<p class="topless"><a href="cluster/opensource/cluster_train.html" <p class="topless"><a href="cluster/opensource/cluster_train.html"
title="previous chapter">Cluster Training</a></p> title="previous chapter">Distributed Training</a></p>
<h4>Next topic</h4> <h4>Next topic</h4>
<p class="topless"><a href="source/gserver/layers/index.html" <p class="topless"><a href="source/gserver/layers/index.html"
title="next chapter">Layers Documents</a></p> title="next chapter">Layers Documents</a></p>
...@@ -123,14 +122,14 @@ var _hmt = _hmt || []; ...@@ -123,14 +122,14 @@ var _hmt = _hmt || [];
<a href="source/gserver/layers/index.html" title="Layers Documents" <a href="source/gserver/layers/index.html" title="Layers Documents"
>next</a> |</li> >next</a> |</li>
<li class="right" > <li class="right" >
<a href="cluster/opensource/cluster_train.html" title="Cluster Training" <a href="cluster/opensource/cluster_train.html" title="Distributed Training"
>previous</a> |</li> >previous</a> |</li>
<li class="nav-item nav-item-0"><a href="index.html">PaddlePaddle documentation</a> &#187;</li> <li class="nav-item nav-item-0"><a href="index.html">PaddlePaddle documentation</a> &#187;</li>
</ul> </ul>
</div> </div>
<div class="footer" role="contentinfo"> <div class="footer" role="contentinfo">
&#169; Copyright 2016, PaddlePaddle developers. &#169; Copyright 2016, PaddlePaddle developers.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.8. Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.9.
</div> </div>
</body> </body>
</html> </html>
\ No newline at end of file
...@@ -125,7 +125,7 @@ var _hmt = _hmt || []; ...@@ -125,7 +125,7 @@ var _hmt = _hmt || [];
</div> </div>
<div class="footer" role="contentinfo"> <div class="footer" role="contentinfo">
&#169; Copyright 2016, PaddlePaddle developers. &#169; Copyright 2016, PaddlePaddle developers.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.8. Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.9.
</div> </div>
</body> </body>
</html> </html>
\ No newline at end of file
...@@ -114,7 +114,7 @@ var _hmt = _hmt || []; ...@@ -114,7 +114,7 @@ var _hmt = _hmt || [];
</div> </div>
<div class="footer" role="contentinfo"> <div class="footer" role="contentinfo">
&#169; Copyright 2016, PaddlePaddle developers. &#169; Copyright 2016, PaddlePaddle developers.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.8. Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.9.
</div> </div>
</body> </body>
</html> </html>
\ No newline at end of file
...@@ -149,7 +149,7 @@ var _hmt = _hmt || []; ...@@ -149,7 +149,7 @@ var _hmt = _hmt || [];
</div> </div>
<div class="footer" role="contentinfo"> <div class="footer" role="contentinfo">
&#169; Copyright 2016, PaddlePaddle developers. &#169; Copyright 2016, PaddlePaddle developers.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.8. Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.9.
</div> </div>
</body> </body>
</html> </html>
\ No newline at end of file
...@@ -155,7 +155,7 @@ var _hmt = _hmt || []; ...@@ -155,7 +155,7 @@ var _hmt = _hmt || [];
</div> </div>
<div class="footer" role="contentinfo"> <div class="footer" role="contentinfo">
&#169; Copyright 2016, PaddlePaddle developers. &#169; Copyright 2016, PaddlePaddle developers.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.8. Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.9.
</div> </div>
</body> </body>
</html> </html>
\ No newline at end of file
...@@ -141,7 +141,7 @@ var _hmt = _hmt || []; ...@@ -141,7 +141,7 @@ var _hmt = _hmt || [];
</div> </div>
<div class="footer" role="contentinfo"> <div class="footer" role="contentinfo">
&#169; Copyright 2016, PaddlePaddle developers. &#169; Copyright 2016, PaddlePaddle developers.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.8. Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.9.
</div> </div>
</body> </body>
</html> </html>
\ No newline at end of file
...@@ -146,7 +146,7 @@ var _hmt = _hmt || []; ...@@ -146,7 +146,7 @@ var _hmt = _hmt || [];
</div> </div>
<div class="footer" role="contentinfo"> <div class="footer" role="contentinfo">
&#169; Copyright 2016, PaddlePaddle developers. &#169; Copyright 2016, PaddlePaddle developers.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.8. Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.9.
</div> </div>
</body> </body>
</html> </html>
\ No newline at end of file
...@@ -95,6 +95,13 @@ var _hmt = _hmt || []; ...@@ -95,6 +95,13 @@ var _hmt = _hmt || [];
<p>HL_FLOAT_MIN: 2.2250738585072014e-308 </p> <p>HL_FLOAT_MIN: 2.2250738585072014e-308 </p>
</dd></dl> </dd></dl>
<dl class="macro">
<dt id="c.EXP_MAX_INPUT">
<span class="target" id="paddlehl__base_8h_1acdfa5ecb592af041069633c09a119caa"></span><code class="descname">EXP_MAX_INPUT</code><a class="headerlink" href="#c.EXP_MAX_INPUT" title="Permalink to this definition"></a></dt>
<dd><p>The maximum input value for exp, used to avoid overflow.</p>
<p>Currently it is only used for the tanh function. </p>
</dd></dl>
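The clamping idea is easy to reproduce outside the library. A minimal Python sketch, assuming an illustrative cap of 40.0 (the actual `EXP_MAX_INPUT` value is defined in `hl_base.h` and may differ):

```python
import math

EXP_MAX_INPUT = 40.0  # illustrative cap, not the library's constant

def safe_exp(x):
    # Clamp the argument so exp() cannot overflow.
    return math.exp(min(x, EXP_MAX_INPUT))

def tanh_via_exp(x):
    # tanh(x) = 2 / (1 + exp(-2x)) - 1. For large negative x, the
    # unclamped exp(-2x) overflows even though tanh(x) just tends to -1.
    return 2.0 / (1.0 + safe_exp(-2.0 * x)) - 1.0

print(tanh_via_exp(-1000.0))  # -1.0; math.exp(2000.0) would raise OverflowError
```

Since tanh saturates long before the cap is reached, the clamp only changes the result in regions where the answer is already ±1 to machine precision.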
<dl class="macro"> <dl class="macro">
<dt id="c.DIVUP"> <dt id="c.DIVUP">
<span class="target" id="paddlehl__base_8h_1ae637c5ecf04d7a87be9ce233d85abae0"></span><code class="descname">DIVUP</code><span class="sig-paren">(</span>x, y<span class="sig-paren">)</span><a class="headerlink" href="#c.DIVUP" title="Permalink to this definition"></a></dt> <span class="target" id="paddlehl__base_8h_1ae637c5ecf04d7a87be9ce233d85abae0"></span><code class="descname">DIVUP</code><span class="sig-paren">(</span>x, y<span class="sig-paren">)</span><a class="headerlink" href="#c.DIVUP" title="Permalink to this definition"></a></dt>
...@@ -289,17 +296,12 @@ var _hmt = _hmt || []; ...@@ -289,17 +296,12 @@ var _hmt = _hmt || [];
<dd><em>#include &lt;hl_base.h&gt;</em><p>Lstm value. </p> <dd><em>#include &lt;hl_base.h&gt;</em><p>Lstm value. </p>
<p><dl class="docutils"> <p><dl class="docutils">
<dt><strong>Parameters</strong></dt> <dt><strong>Parameters</strong></dt>
<dd><ul class="breatheparameterlist first last"> <dd><ul class="breatheparameterlist first last simple">
<li><code class="first docutils literal"><span class="pre">gateValue</span></code> - <p>input value. </p> <li><code class="docutils literal"><span class="pre">gateValue</span></code>: input value. </li>
</li> <li><code class="docutils literal"><span class="pre">prevStateValue</span></code>: previous state value. </li>
<li><code class="first docutils literal"><span class="pre">prevStateValue</span></code> - <p>previous state value. </p> <li><code class="docutils literal"><span class="pre">stateValue</span></code>: state value. </li>
</li> <li><code class="docutils literal"><span class="pre">stateActiveValue</span></code>: state active value. </li>
<li><code class="first docutils literal"><span class="pre">stateValue</span></code> - <p>state value. </p> <li><code class="docutils literal"><span class="pre">outputValue</span></code>: output value. </li>
</li>
<li><code class="first docutils literal"><span class="pre">stateActiveValue</span></code> - <p>state active value. </p>
</li>
<li><code class="first docutils literal"><span class="pre">outputValue</span></code> - <p>output value. </p>
</li>
</ul> </ul>
</dd> </dd>
</dl> </dl>
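To see how these five buffers relate, here is a textbook LSTM step written with the same names. This is a sketch only: HPPL packs all four gates into `gateValue`, and the actual gate layout and activations are not shown in this header, so the ordering assumed below is illustrative.

```python
import math

def lstm_step(gate_value, prev_state_value):
    # gate_value: assumed (input, forget, output, candidate) pre-activations.
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    i, f, o, g = gate_value
    state_value = sigmoid(i) * math.tanh(g) + sigmoid(f) * prev_state_value
    state_active_value = math.tanh(state_value)      # stateActiveValue
    output_value = sigmoid(o) * state_active_value   # outputValue
    return state_value, state_active_value, output_value

print(lstm_step(gate_value=(0.5, 0.5, 0.5, 0.5), prev_state_value=0.1))
```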
...@@ -355,17 +357,12 @@ var _hmt = _hmt || []; ...@@ -355,17 +357,12 @@ var _hmt = _hmt || [];
<dd><em>#include &lt;hl_base.h&gt;</em><p>Lstm gradient. </p> <dd><em>#include &lt;hl_base.h&gt;</em><p>Lstm gradient. </p>
<p><dl class="docutils"> <p><dl class="docutils">
<dt><strong>Parameters</strong></dt> <dt><strong>Parameters</strong></dt>
<dd><ul class="breatheparameterlist first last"> <dd><ul class="breatheparameterlist first last simple">
<li><code class="first docutils literal"><span class="pre">gateGrad</span></code> - <p>input gradient. </p> <li><code class="docutils literal"><span class="pre">gateGrad</span></code>: input gradient. </li>
</li> <li><code class="docutils literal"><span class="pre">prevStateGrad</span></code>: previous state gradient. </li>
<li><code class="first docutils literal"><span class="pre">prevStateGrad</span></code> - <p>previous state gradient. </p> <li><code class="docutils literal"><span class="pre">stateGrad</span></code>: state gradient. </li>
</li> <li><code class="docutils literal"><span class="pre">stateActiveGrad</span></code>: state active gradient. </li>
<li><code class="first docutils literal"><span class="pre">stateGrad</span></code> - <p>state gradient. </p> <li><code class="docutils literal"><span class="pre">outputGrad</span></code>: output gradient. </li>
</li>
<li><code class="first docutils literal"><span class="pre">stateActiveGrad</span></code> - <p>state active gradient. </p>
</li>
<li><code class="first docutils literal"><span class="pre">outputGrad</span></code> - <p>output gradient. </p>
</li>
</ul> </ul>
</dd> </dd>
</dl> </dl>
...@@ -421,19 +418,13 @@ var _hmt = _hmt || []; ...@@ -421,19 +418,13 @@ var _hmt = _hmt || [];
<dd><em>#include &lt;hl_base.h&gt;</em><p>Gru value. </p> <dd><em>#include &lt;hl_base.h&gt;</em><p>Gru value. </p>
<p><dl class="docutils"> <p><dl class="docutils">
<dt><strong>Parameters</strong></dt> <dt><strong>Parameters</strong></dt>
<dd><ul class="breatheparameterlist first last"> <dd><ul class="breatheparameterlist first last simple">
<li><code class="first docutils literal"><span class="pre">gateWeight</span></code> - <p>gate weight (updateGate + resetGate). </p> <li><code class="docutils literal"><span class="pre">gateWeight</span></code>: gate weight (updateGate + resetGate). </li>
</li> <li><code class="docutils literal"><span class="pre">stateWeight</span></code>: frame state weight. </li>
<li><code class="first docutils literal"><span class="pre">stateWeight</span></code> - <p>frame state weight. </p> <li><code class="docutils literal"><span class="pre">gateValue</span></code>: gate value results. </li>
</li> <li><code class="docutils literal"><span class="pre">resetOutputValue</span></code>: resetOutput value. </li>
<li><code class="first docutils literal"><span class="pre">gateValue</span></code> - <p>gate value results. </p> <li><code class="docutils literal"><span class="pre">outputValue</span></code>: output value. </li>
</li> <li><code class="docutils literal"><span class="pre">prevOutValue</span></code>: previous output value. </li>
<li><code class="first docutils literal"><span class="pre">resetOutputValue</span></code> - <p>resetOutput value. </p>
</li>
<li><code class="first docutils literal"><span class="pre">outputValue</span></code> - <p>output value. </p>
</li>
<li><code class="first docutils literal"><span class="pre">prevOutValue</span></code> - <p>previous output value. </p>
</li>
</ul> </ul>
</dd> </dd>
</dl> </dl>
...@@ -479,19 +470,13 @@ var _hmt = _hmt || []; ...@@ -479,19 +470,13 @@ var _hmt = _hmt || [];
<dd><em>#include &lt;hl_base.h&gt;</em><p>Gru gradient. </p> <dd><em>#include &lt;hl_base.h&gt;</em><p>Gru gradient. </p>
<p><dl class="docutils"> <p><dl class="docutils">
<dt><strong>Parameters</strong></dt> <dt><strong>Parameters</strong></dt>
<dd><ul class="breatheparameterlist first last"> <dd><ul class="breatheparameterlist first last simple">
<li><code class="first docutils literal"><span class="pre">gateWeightGrad</span></code> - <p>gate weight gradient. </p> <li><code class="docutils literal"><span class="pre">gateWeightGrad</span></code>: gate weight gradient. </li>
</li> <li><code class="docutils literal"><span class="pre">stateWeightGrad</span></code>: frame state weight gradient. </li>
<li><code class="first docutils literal"><span class="pre">stateWeightGrad</span></code> - <p>frame state weight gradient. </p> <li><code class="docutils literal"><span class="pre">gateGrad</span></code>: gate gradient results. </li>
</li> <li><code class="docutils literal"><span class="pre">resetOutputGrad</span></code>: resetOutput gradient. </li>
<li><code class="first docutils literal"><span class="pre">gateGrad</span></code> - <p>gate gradient results. </p> <li><code class="docutils literal"><span class="pre">outputGrad</span></code>: output gradient. </li>
</li> <li><code class="docutils literal"><span class="pre">prevOutGrad</span></code>: previous output gradient. </li>
<li><code class="first docutils literal"><span class="pre">resetOutputGrad</span></code> - <p>resetOutput gradient. </p>
</li>
<li><code class="first docutils literal"><span class="pre">outputGrad</span></code> - <p>output gradient. </p>
</li>
<li><code class="first docutils literal"><span class="pre">prevOutGrad</span></code> - <p>previous output gradient. </p>
</li>
</ul> </ul>
</dd> </dd>
</dl> </dl>
...@@ -537,19 +522,13 @@ var _hmt = _hmt || []; ...@@ -537,19 +522,13 @@ var _hmt = _hmt || [];
<dd><em>#include &lt;hl_base.h&gt;</em><p>HPPL sparse matrix. </p> <dd><em>#include &lt;hl_base.h&gt;</em><p>HPPL sparse matrix. </p>
<p><dl class="docutils"> <p><dl class="docutils">
<dt><strong>Parameters</strong></dt> <dt><strong>Parameters</strong></dt>
<dd><ul class="breatheparameterlist first last"> <dd><ul class="breatheparameterlist first last simple">
<li><code class="first docutils literal"><span class="pre">matrix</span></code> - <p>sparse matrix. </p> <li><code class="docutils literal"><span class="pre">matrix</span></code>: sparse matrix. </li>
</li> <li><code class="docutils literal"><span class="pre">format</span></code>: matrix format. </li>
<li><code class="first docutils literal"><span class="pre">format</span></code> - <p>matrix format. </p> <li><code class="docutils literal"><span class="pre">type</span></code>: the type of matrix values. </li>
</li> <li><code class="docutils literal"><span class="pre">rows</span></code>: matrix rows. </li>
<li><code class="first docutils literal"><span class="pre">type</span></code> - <p>the type of matrix values. </p> <li><code class="docutils literal"><span class="pre">cols</span></code>: matrix columns. </li>
</li> <li><code class="docutils literal"><span class="pre">nnz</span></code>: nonzero values of sparse matrix. </li>
<li><code class="first docutils literal"><span class="pre">rows</span></code> - <p>matrix rows. </p>
</li>
<li><code class="first docutils literal"><span class="pre">cols</span></code> - <p>matrix columns. </p>
</li>
<li><code class="first docutils literal"><span class="pre">nnz</span></code> - <p>nonzero values of sparse matrix. </p>
</li>
</ul> </ul>
</dd> </dd>
</dl> </dl>
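For intuition about how `rows`, `cols`, and `nnz` fit together, a small Python sketch of the CSR layout, one common sparse format (an illustration of the format family selected by `format`, not the HPPL struct itself):

```python
# A 3x4 matrix with nnz = 5 nonzeros:
#   [[5, 0, 0, 2],
#    [0, 0, 3, 0],
#    [1, 0, 0, 4]]
rows, cols, nnz = 3, 4, 5
values  = [5, 2, 3, 1, 4]  # the nnz nonzero values, stored row by row
col_idx = [0, 3, 2, 0, 3]  # column index of each stored value
row_ptr = [0, 2, 3, 5]     # row i occupies values[row_ptr[i]:row_ptr[i+1]]

def get(i, j):
    # Look up element (i, j) by scanning row i's slice of the value array.
    for k in range(row_ptr[i], row_ptr[i + 1]):
        if col_idx[k] == j:
            return values[k]
    return 0

assert get(2, 3) == 4 and get(1, 0) == 0
```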
...@@ -641,11 +620,9 @@ var _hmt = _hmt || []; ...@@ -641,11 +620,9 @@ var _hmt = _hmt || [];
<dd><p>Initialize cudnn. </p> <dd><p>Initialize cudnn. </p>
<p><dl class="docutils"> <p><dl class="docutils">
<dt><strong>Parameters</strong></dt> <dt><strong>Parameters</strong></dt>
<dd><ul class="breatheparameterlist first last"> <dd><ul class="breatheparameterlist first last simple">
<li><code class="first docutils literal"><span class="pre">cudnn_handle</span></code> - <p>Cudnn handle. </p> <li><code class="docutils literal"><span class="pre">cudnn_handle</span></code>: Cudnn handle. </li>
</li> <li><code class="docutils literal"><span class="pre">stream</span></code>: Cudnn stream. </li>
<li><code class="first docutils literal"><span class="pre">stream</span></code> - <p>Cudnn stream. </p>
</li>
</ul> </ul>
</dd> </dd>
</dl> </dl>
...@@ -658,11 +635,9 @@ var _hmt = _hmt || []; ...@@ -658,11 +635,9 @@ var _hmt = _hmt || [];
<dd><p>Initialize cublas. </p> <dd><p>Initialize cublas. </p>
<p><dl class="docutils"> <p><dl class="docutils">
<dt><strong>Parameters</strong></dt> <dt><strong>Parameters</strong></dt>
<dd><ul class="breatheparameterlist first last"> <dd><ul class="breatheparameterlist first last simple">
<li><code class="first docutils literal"><span class="pre">cublas_handle</span></code> - <p>Cublas handle. </p> <li><code class="docutils literal"><span class="pre">cublas_handle</span></code>: Cublas handle. </li>
</li> <li><code class="docutils literal"><span class="pre">stream</span></code>: Cuda stream. </li>
<li><code class="first docutils literal"><span class="pre">stream</span></code> - <p>Cuda stream. </p>
</li>
</ul> </ul>
</dd> </dd>
</dl> </dl>
...@@ -675,9 +650,8 @@ var _hmt = _hmt || []; ...@@ -675,9 +650,8 @@ var _hmt = _hmt || [];
<dd><p>Initialize cudnn tensor descriptor. </p> <dd><p>Initialize cudnn tensor descriptor. </p>
<p><dl class="docutils"> <p><dl class="docutils">
<dt><strong>Parameters</strong></dt> <dt><strong>Parameters</strong></dt>
<dd><ul class="breatheparameterlist first last"> <dd><ul class="breatheparameterlist first last simple">
<li><code class="first docutils literal"><span class="pre">cudnn_desc</span></code> - <p>Cudnn tensor descriptor. </p> <li><code class="docutils literal"><span class="pre">cudnn_desc</span></code>: Cudnn tensor descriptor. </li>
</li>
</ul> </ul>
</dd> </dd>
</dl> </dl>
...@@ -700,31 +674,19 @@ var _hmt = _hmt || []; ...@@ -700,31 +674,19 @@ var _hmt = _hmt || [];
<dd><p>Thread resource structure. </p>
<p><dl class="docutils">
<dt><strong>Parameters</strong></dt>
<dd><ul class="breatheparameterlist first last simple">
<li><code class="docutils literal"><span class="pre">stream[HPPL_STREAM_END]</span></code>: streams for the thread. </li>
<li><code class="docutils literal"><span class="pre">handle</span></code>: cuBLAS handle. </li>
<li><code class="docutils literal"><span class="pre">gen</span></code>: cuRAND generator. </li>
<li><code class="docutils literal"><span class="pre">cudnn_handle</span></code>: cuDNN handle. </li>
<li><code class="docutils literal"><span class="pre">cudnn_desc</span></code>: cuDNN image descriptor. </li>
<li><code class="docutils literal"><span class="pre">*gen_mutex</span></code>: lock guarding the generator. </li>
<li><code class="docutils literal"><span class="pre">*gpu_mem</span></code>: HPPL GPU memory. </li>
<li><code class="docutils literal"><span class="pre">*cpu_mem</span></code>: HPPL CPU memory. </li>
<li><code class="docutils literal"><span class="pre">event</span></code>: event associated with gpu_mem. </li>
<li><code class="docutils literal"><span class="pre">device</span></code>: thread device context. </li>
<li><code class="docutils literal"><span class="pre">major</span></code>: GPU compute capability (major version). </li>
<li><code class="docutils literal"><span class="pre">is_init</span></code>: whether the thread has been initialized. </li>
</ul>
</dd>
</dl>
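<p>Putting the documented fields together, the following is a rough sketch of such a per-thread resource structure; the member types (and the placeholder value of HPPL_STREAM_END) are assumptions based on the descriptions above, not the actual HPPL declaration.</p>
<div class="highlight"><pre>
#include &lt;cublas_v2.h&gt;
#include &lt;cuda_runtime.h&gt;
#include &lt;cudnn.h&gt;
#include &lt;curand.h&gt;
#include &lt;pthread.h&gt;

enum { HPPL_STREAM_END = 4 };  // placeholder; the real value comes from HPPL headers

// Rough sketch assembled from the field list above; member types are
// assumptions, not the actual HPPL declaration.
struct ThreadResource {
  cudaStream_t stream[HPPL_STREAM_END];  // streams for this thread
  cublasHandle_t handle;                 // cuBLAS handle
  curandGenerator_t gen;                 // cuRAND generator
  cudnnHandle_t cudnn_handle;            // cuDNN handle
  cudnnTensorDescriptor_t cudnn_desc;    // cuDNN image descriptor
  pthread_mutex_t* gen_mutex;            // lock guarding gen
  void* gpu_mem;                         // HPPL GPU memory
  void* cpu_mem;                         // HPPL CPU memory
  cudaEvent_t event;                     // event associated with gpu_mem
  int device;                            // device bound to this thread
  int major;                             // compute capability, major version
  bool is_init;                          // whether the thread is initialized
};
</pre></div>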
...@@ -872,7 +834,7 @@ var _hmt = _hmt || [];
</div>
<div class="footer" role="contentinfo">
&#169; Copyright 2016, PaddlePaddle developers.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.9.
</div>
</body>
</html>
...@@ -90,12 +90,12 @@ var _hmt = _hmt || [];
<dl class="function"> <dl class="function">
<dt id="_CPPv2N6paddle18ActivationFunctionD0Ev"> <dt id="_CPPv2N6paddle18ActivationFunctionD0Ev">
<span id="paddle::ActivationFunction::~ActivationFunction"></span>virtual <span class="target" id="paddleclasspaddle_1_1ActivationFunction_1a97992626b45a327e1d9f405b51d11deb"></span><code class="descname">~ActivationFunction</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#_CPPv2N6paddle18ActivationFunctionD0Ev" title="Permalink to this definition"></a></dt> <span id="paddle::ActivationFunction::~ActivationFunction"></span><span class="target" id="paddleclasspaddle_1_1ActivationFunction_1a97992626b45a327e1d9f405b51d11deb"></span><em class="property">virtual</em> <code class="descname">~ActivationFunction</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#_CPPv2N6paddle18ActivationFunctionD0Ev" title="Permalink to this definition"></a></dt>
<dd></dd></dl> <dd></dd></dl>
<dl class="function"> <dl class="function">
<dt id="_CPPv2N6paddle18ActivationFunction7forwardER8Argument"> <dt id="_CPPv2N6paddle18ActivationFunction7forwardER8Argument">
<span id="paddle::ActivationFunction::forward__ArgumentR"></span>virtual <span class="target" id="paddleclasspaddle_1_1ActivationFunction_1a3d2d82f5548cff294b93d37f08559fe4"></span>void <code class="descname">forward</code><span class="sig-paren">(</span><a class="reference internal" href="../../parameter/parameter/parameter.html#_CPPv2N6paddle8ArgumentE" title="paddle::Argument">Argument</a> &amp;<em>act</em><span class="sig-paren">)</span> = 0<a class="headerlink" href="#_CPPv2N6paddle18ActivationFunction7forwardER8Argument" title="Permalink to this definition"></a></dt> <span id="paddle::ActivationFunction::forward__ArgumentR"></span><span class="target" id="paddleclasspaddle_1_1ActivationFunction_1a3d2d82f5548cff294b93d37f08559fe4"></span><em class="property">virtual</em> void <code class="descname">forward</code><span class="sig-paren">(</span><a class="reference internal" href="../../parameter/parameter/parameter.html#_CPPv2N6paddle8ArgumentE" title="paddle::Argument">Argument</a> &amp;<em>act</em><span class="sig-paren">)</span> = 0<a class="headerlink" href="#_CPPv2N6paddle18ActivationFunction7forwardER8Argument" title="Permalink to this definition"></a></dt>
<dd><p>Foward propagation. </p> <dd><p>Foward propagation. </p>
<p>act.value &lt;- f(act.value), where f is the activation function. Suppose that before calling <a class="reference internal" href="#paddleclasspaddle_1_1ActivationFunction_1a3d2d82f5548cff294b93d37f08559fe4"><span class="std std-ref">forward()</span></a>, act.value is x and after <a class="reference internal" href="#paddleclasspaddle_1_1ActivationFunction_1a3d2d82f5548cff294b93d37f08559fe4"><span class="std std-ref">forward()</span></a> is called, act.value is y, then y = f(x).</p> <p>act.value &lt;- f(act.value), where f is the activation function. Suppose that before calling <a class="reference internal" href="#paddleclasspaddle_1_1ActivationFunction_1a3d2d82f5548cff294b93d37f08559fe4"><span class="std std-ref">forward()</span></a>, act.value is x and after <a class="reference internal" href="#paddleclasspaddle_1_1ActivationFunction_1a3d2d82f5548cff294b93d37f08559fe4"><span class="std std-ref">forward()</span></a> is called, act.value is y, then y = f(x).</p>
<p>Usually, act is <a class="reference internal" href="../layers/layer.html#paddleclasspaddle_1_1Layer_1a955c467f7d96f46d8e2ea0e995137097"><span class="std std-ref">Layer::output_</span></a> </p> <p>Usually, act is <a class="reference internal" href="../layers/layer.html#paddleclasspaddle_1_1Layer_1a955c467f7d96f46d8e2ea0e995137097"><span class="std std-ref">Layer::output_</span></a> </p>
...@@ -103,7 +103,7 @@ var _hmt = _hmt || [];
<dl class="function"> <dl class="function">
<dt id="_CPPv2N6paddle18ActivationFunction8backwardER8Argument"> <dt id="_CPPv2N6paddle18ActivationFunction8backwardER8Argument">
<span id="paddle::ActivationFunction::backward__ArgumentR"></span>virtual <span class="target" id="paddleclasspaddle_1_1ActivationFunction_1aa567f66ac2dea1f209b6de8134634c4a"></span>void <code class="descname">backward</code><span class="sig-paren">(</span><a class="reference internal" href="../../parameter/parameter/parameter.html#_CPPv2N6paddle8ArgumentE" title="paddle::Argument">Argument</a> &amp;<em>act</em><span class="sig-paren">)</span> = 0<a class="headerlink" href="#_CPPv2N6paddle18ActivationFunction8backwardER8Argument" title="Permalink to this definition"></a></dt> <span id="paddle::ActivationFunction::backward__ArgumentR"></span><span class="target" id="paddleclasspaddle_1_1ActivationFunction_1aa567f66ac2dea1f209b6de8134634c4a"></span><em class="property">virtual</em> void <code class="descname">backward</code><span class="sig-paren">(</span><a class="reference internal" href="../../parameter/parameter/parameter.html#_CPPv2N6paddle8ArgumentE" title="paddle::Argument">Argument</a> &amp;<em>act</em><span class="sig-paren">)</span> = 0<a class="headerlink" href="#_CPPv2N6paddle18ActivationFunction8backwardER8Argument" title="Permalink to this definition"></a></dt>
<dd><p>Backward propagaion. </p> <dd><p>Backward propagaion. </p>
<p>x and y are defined in the above comment for <a class="reference internal" href="#paddleclasspaddle_1_1ActivationFunction_1a3d2d82f5548cff294b93d37f08559fe4"><span class="std std-ref">forward()</span></a>.<ul class="simple"> <p>x and y are defined in the above comment for <a class="reference internal" href="#paddleclasspaddle_1_1ActivationFunction_1a3d2d82f5548cff294b93d37f08559fe4"><span class="std std-ref">forward()</span></a>.<ul class="simple">
<li>Before calling <a class="reference internal" href="#paddleclasspaddle_1_1ActivationFunction_1aa567f66ac2dea1f209b6de8134634c4a"><span class="std std-ref">backward()</span></a>, act.grad = dE / dy, where E is the error/cost</li> <li>Before calling <a class="reference internal" href="#paddleclasspaddle_1_1ActivationFunction_1aa567f66ac2dea1f209b6de8134634c4a"><span class="std std-ref">backward()</span></a>, act.grad = dE / dy, where E is the error/cost</li>
...@@ -113,8 +113,8 @@ var _hmt = _hmt || [];
</dd></dl>
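<p>To make the forward()/backward() contract concrete, here is a minimal, self-contained sketch that uses tanh as f and a plain std::vector in place of paddle::Argument's value/grad matrices; the class is an illustration, not PaddlePaddle code.</p>
<div class="highlight"><pre>
#include &lt;cmath&gt;
#include &lt;cstddef&gt;
#include &lt;vector&gt;

// Illustrative only: the forward()/backward() contract described above.
struct TanhDemo {
  // act.value &lt;- f(act.value): value holds x on entry and y = tanh(x) on exit.
  void forward(std::vector&lt;double&gt;&amp; value) {
    for (double&amp; v : value) v = std::tanh(v);
  }
  // grad holds dE/dy on entry and dE/dx on exit; value holds y (post-forward).
  void backward(std::vector&lt;double&gt;&amp; grad, const std::vector&lt;double&gt;&amp; value) {
    for (std::size_t i = 0; i &lt; grad.size(); ++i)
      grad[i] *= 1.0 - value[i] * value[i];  // tanh'(x) = 1 - y^2
  }
};
</pre></div>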
<dl class="function"> <dl class="function">
<dt id="_CPPv2N6paddle18ActivationFunction7getNameEv"> <dt id="_CPPv2NK6paddle18ActivationFunction7getNameEv">
<span id="paddle::ActivationFunction::getName"></span>virtual <span class="target" id="paddleclasspaddle_1_1ActivationFunction_1aea8876019ab52ac0329d5f856ea1458c"></span><em class="property">const</em> std::string &amp;<code class="descname">getName</code><span class="sig-paren">(</span><span class="sig-paren">)</span> const = 0<a class="headerlink" href="#_CPPv2N6paddle18ActivationFunction7getNameEv" title="Permalink to this definition"></a></dt> <span id="paddle::ActivationFunction::getNameC"></span><span class="target" id="paddleclasspaddle_1_1ActivationFunction_1aea8876019ab52ac0329d5f856ea1458c"></span><em class="property">virtual</em> <em class="property">const</em> std::string &amp;<code class="descname">getName</code><span class="sig-paren">(</span><span class="sig-paren">)</span> <em class="property">const</em> = 0<a class="headerlink" href="#_CPPv2NK6paddle18ActivationFunction7getNameEv" title="Permalink to this definition"></a></dt>
<dd></dd></dl> <dd></dd></dl>
</div>
...@@ -125,6 +125,11 @@ var _hmt = _hmt || [];
<span id="paddle::ActivationFunction::create__ssCR"></span><span class="target" id="paddleclasspaddle_1_1ActivationFunction_1ace0afca87b53041d4e2e603a536bc230"></span><a class="reference internal" href="#_CPPv2N6paddle18ActivationFunctionE" title="paddle::ActivationFunction">ActivationFunction</a> *<code class="descname">create</code><span class="sig-paren">(</span><em class="property">const</em> std::string &amp;<em>type</em><span class="sig-paren">)</span><a class="headerlink" href="#_CPPv2N6paddle18ActivationFunction6createERKNSt6stringE" title="Permalink to this definition"></a></dt> <span id="paddle::ActivationFunction::create__ssCR"></span><span class="target" id="paddleclasspaddle_1_1ActivationFunction_1ace0afca87b53041d4e2e603a536bc230"></span><a class="reference internal" href="#_CPPv2N6paddle18ActivationFunctionE" title="paddle::ActivationFunction">ActivationFunction</a> *<code class="descname">create</code><span class="sig-paren">(</span><em class="property">const</em> std::string &amp;<em>type</em><span class="sig-paren">)</span><a class="headerlink" href="#_CPPv2N6paddle18ActivationFunction6createERKNSt6stringE" title="Permalink to this definition"></a></dt>
<dd></dd></dl> <dd></dd></dl>
<dl class="function">
<dt id="_CPPv2N6paddle18ActivationFunction21getAllRegisteredTypesEv">
<span id="paddle::ActivationFunction::getAllRegisteredTypes"></span><span class="target" id="paddleclasspaddle_1_1ActivationFunction_1af651fa503dc54ab52d807803ac00f78b"></span>std::vector&lt;std::string&gt; <code class="descname">getAllRegisteredTypes</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#_CPPv2N6paddle18ActivationFunction21getAllRegisteredTypesEv" title="Permalink to this definition"></a></dt>
<dd></dd></dl>
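<p>A hedged usage sketch of the two factory members above; the "sigmoid" type name and the ownership of the returned pointer are assumptions for illustration, and the sketch presumes the relevant PaddlePaddle header is on the include path.</p>
<div class="highlight"><pre>
#include &lt;iostream&gt;
#include &lt;string&gt;
#include &lt;vector&gt;
// plus the PaddlePaddle header declaring paddle::ActivationFunction

void listAndCreate() {
  // Enumerate every registered activation type name.
  for (const std::string&amp; name :
       paddle::ActivationFunction::getAllRegisteredTypes())
    std::cout &lt;&lt; name &lt;&lt; "\n";
  // Instantiate one by name; "sigmoid" is an assumed registered type.
  paddle::ActivationFunction* act = paddle::ActivationFunction::create("sigmoid");
  std::cout &lt;&lt; act-&gt;getName() &lt;&lt; "\n";  // prints the activation's name
}
</pre></div>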
</div>
</dd></dl>
...@@ -184,7 +189,7 @@ var _hmt = _hmt || [];
</div>
<div class="footer" role="contentinfo">
&#169; Copyright 2016, PaddlePaddle developers.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.9.
</div>
</body>
</html>
...@@ -213,6 +213,7 @@ var _hmt = _hmt || [];
<li class="toctree-l3"><a class="reference internal" href="layer.html#rankingcost">RankingCost</a></li> <li class="toctree-l3"><a class="reference internal" href="layer.html#rankingcost">RankingCost</a></li>
<li class="toctree-l3"><a class="reference internal" href="layer.html#softbinaryclasscrossentropy">SoftBinaryClassCrossEntropy</a></li> <li class="toctree-l3"><a class="reference internal" href="layer.html#softbinaryclasscrossentropy">SoftBinaryClassCrossEntropy</a></li>
<li class="toctree-l3"><a class="reference internal" href="layer.html#sumofsquarescostlayer">SumOfSquaresCostLayer</a></li> <li class="toctree-l3"><a class="reference internal" href="layer.html#sumofsquarescostlayer">SumOfSquaresCostLayer</a></li>
<li class="toctree-l3"><a class="reference internal" href="layer.html#sumcostlayer">SumCostLayer</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="layer.html#cossimlayer">CosSimLayer</a></li>
...@@ -294,7 +295,7 @@ var _hmt = _hmt || [];
</div>
<div class="footer" role="contentinfo">
&#169; Copyright 2016, PaddlePaddle developers.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.4.9.
</div>
</body>
</html>