This document describes the RNN (Recurrent Neural Network) operator and how it is implemented in PaddlePaddle. The RNN op requires that all instances in a mini-batch have the same length. We will have a more flexible dynamic RNN operator in the future.
## RNN Algorithm Implementation
<p align="center">
<img src="./images/rnn.jpg"/>
</p>
The above diagram shows an RNN unrolled into a full network.
There are several important concepts here:
- *step-net*: the sub-graph that runs at each step.
- *memory*, $h_t$, the state of the current step.
- *ex-memory*, $h_{t-1}$, the state of the previous step.
- *initial memory value*, the ex-memory of the first step.
### Step-scope
There could be local variables defined in each step-net. The PaddlePaddle runtime realizes these variables in *step-scopes*, which are created for each step.
<p align="center">
<img src="./images/rnn.png"/><br/>
Figure 2. The RNN's data flow.
</p>
Please be aware that every step runs the same step-net. Each step does the following:

1. Creates the step-scope.
2. Realizes local variables, including step-outputs, in the step-scope.
3. Runs the step-net, which uses the above-mentioned variables.
The RNN operator will compose its output from the step outputs in each of the step scopes, as sketched below.
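To make this control flow concrete, here is a minimal Python sketch (plain dictionaries stand in for scopes; this is not the PaddlePaddle API) of how the RNN operator drives its step-scopes:

```python
def run_rnn(step_net, step_inputs, init_memory):
    """Toy model of the RNN operator's control flow.

    step_net: a function that reads and writes a step-scope.
    step_inputs: one input tensor per time step.
    init_memory: the ex-memory of the first step.
    """
    step_scopes = []
    memory = init_memory
    for x in step_inputs:
        scope = {"x": x, "h_prev": memory}  # 1. create the step-scope
        step_net(scope)                     # 2.-3. realize locals and run the step-net
        memory = scope["h"]                 # this step's memory is the next ex-memory
        step_scopes.append(scope)
    # compose the RNN output from the step-outputs kept in the step-scopes
    return [s["h"] for s in step_scopes]
```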
### Memory and Ex-memory
Let's give more details about memory and ex-memory using a simple example:
$$
h_t = U h_{t-1} + W x_t
$$

where $h_t$ and $h_{t-1}$ are the memory and ex-memory (previous memory) of step $t$, respectively.
In the implementation, we can make an ex-memory variable either "refer to" the memory variable of the previous step,
or copy the memory value of the previous step to the current ex-memory variable.
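For example, with NumPy arrays standing in for the real tensors, one step of this recurrence is:

```python
import numpy as np

U = np.random.randn(4, 4)  # weight applied to the ex-memory
W = np.random.randn(4, 3)  # weight applied to the step input
h_prev = np.zeros(4)       # ex-memory h_{t-1}
x_t = np.random.randn(3)   # input x_t of step t

h_t = U @ h_prev + W @ x_t  # memory h_t of step t
```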
### Usage in Python
For more information on Block, please refer to the [design doc](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/block.md).
We can define an RNN's step-net using a Block:
```python
import paddle as pd

X = some_op()  # x is some operator's output, and is a LoDTensor
```
- `rnn.add_input`: indicates that the parameter is a variable that will be segmented into step-inputs.
- `rnn.add_memory`: creates a variable used as the memory.
- `rnn.add_outputs`: marks the variables that will be concatenated across steps into the RNN output (see the sketch below).
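Putting the three calls together, a step-net definition could look like the following sketch. The helper names (`pd.create_rnn_op`, `rnn.stepnet`, `h.pre_state`, `pd.matmul`, `pd.add_two`) are illustrative assumptions rather than a confirmed API:

```python
import paddle as pd

X = some_op()  # a LoDTensor produced by an upstream operator
a = some_op()  # the initial memory value

W = pd.Variable(shape=[20, 30])  # applied to the step input
U = pd.Variable(shape=[30, 30])  # applied to the ex-memory

rnn = pd.create_rnn_op(output_num=1)
with rnn.stepnet():
    x = rnn.add_input(X)        # X will be segmented into step-inputs
    h = rnn.add_memory(init=a)  # h.pre_state is the ex-memory
    new_state = pd.add_two(pd.matmul(W, x), pd.matmul(U, h.pre_state))
    h.update(new_state)         # write the memory of the current step
    rnn.add_outputs(h)          # h is concatenated across steps into the output

out = rnn()
```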
### Nested RNN and LoDTensor
An RNN whose step-net includes other RNN operators is known as a *nested RNN*.
For example, we could have a 2-level RNN, where the top level corresponds to paragraphs, and the lower level corresponds to sentences. Each step of the higher level RNN also receives an input from the corresponding step of the lower level, and additionally the output from the previous time step at the same level.
The following figure illustrates the feeding of text into the lower level, one sentence at each step, and the feeding of step outputs to the top level. The final top level output is about the whole text.
In the above example, the construction of the `top_level_rnn` calls `lower_level_rnn`. The input is a LoDTensor. The top level RNN segments the input text data into paragraphs, and the lower level RNN segments each paragraph into sentences.
By default, the `RNNOp` will concatenate the outputs from all the time steps.
If `output_all_steps` is set to False, it will only output the final time step.
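A sketch of the two-level construction described above (again with assumed helper names, including `pd.fc` and `pd.concat`) might look like this; note the inner RNN sets `output_all_steps=False` so each outer step receives one sentence vector:

```python
import paddle as pd

def lower_level_rnn(paragraph):
    # paragraph is one step-input of the outer RNN: a sequence of sentences
    rnn = pd.create_rnn_op(output_num=1)
    with rnn.stepnet():
        sentence = rnn.add_input(paragraph)
        h = rnn.add_memory(shape=[20, 30])
        h.update(pd.fc(input=sentence, size=[20, 30]))
        rnn.add_outputs(h, output_all_steps=False)  # only the last step's state
    return rnn()

top_level_rnn = pd.create_rnn_op(output_num=1)
with top_level_rnn.stepnet():
    paragraph = top_level_rnn.add_input(text)  # text is a 2-level LoDTensor
    sentence_vec = lower_level_rnn(paragraph)  # run the inner RNN per paragraph
    h = top_level_rnn.add_memory(shape=[20, 30])
    # each outer step sees the lower-level output and its own previous state
    h.update(pd.fc(input=pd.concat([sentence_vec, h.pre_state]), size=[20, 30]))
    top_level_rnn.add_outputs(h, output_all_steps=False)
```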
In tasks such as machine translation and visual captioning,
a [sequence decoder](https://github.com/PaddlePaddle/book/blob/develop/08.machine_translation/README.md) is necessary to generate sequences, one word at a time.
This documentation describes how to implement the sequence decoder as an operator.
## Beam Search based Decoder
The [beam search algorithm](https://en.wikipedia.org/wiki/Beam_search) is necessary when generating sequences. It is a heuristic search algorithm that explores the paths by expanding the most promising node in a limited set.
In the old version of PaddlePaddle, the C++ class `RecurrentGradientMachine` implements the general sequence decoder based on beam search. Due to the complexity involved, the implementation relies on a lot of special data structures and is quite hard for users to customize.
There are a lot of heuristic tricks in sequence generation tasks, so the flexibility of the sequence decoder is very important to users.
During the refactoring of PaddlePaddle, some new concepts were proposed, such as [LoDTensor](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/lod_tensor.md) and [TensorArray](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/tensor_array.md), that can better support sequence usage. They can also help make the implementation of the beam search based sequence decoder **more transparent and modular**.
For example, the RNN states, candidate IDs and probabilities of beam search can all be represented as `LoDTensors`;
the selected candidates' IDs in each time step can be stored in a `TensorArray` and `Pack`ed to the sentences translated.
## Changing LoD's absolute offset to relative offsets
The current `LoDTensor` is designed to store levels of variable-length sequences. It stores several arrays of integers, where each array represents a level.
The integers in each level represent the begin and end (not inclusive) offsets of a sequence **in the underlying tensor**;
let's call this format the **absolute-offset LoD** for clarity.
The absolute-offset LoD can retrieve any sequence very quickly but fails to represent empty sequences. For example, a two-level LoD is as follows:
```python
[[0, 3, 9]
 [0, 2, 3, 3, 3, 9]]
```

The first level tells that there are two sequences:
while on the second level, there are several empty sequences that both begin and end at `3`.
It is impossible to tell how many empty second-level sequences exist in the first-level sequences.
There are many scenarios that rely on empty sequence representation, for example in machine translation or visual captioning, where one instance has no translation or the candidate set of a prefix is empty.
So let's introduce another format of LoD. It stores **the offsets of the lower level sequences** and is called the **relative-offset** LoD.
For example, to represent the same sequences as in the above data:
```python
[[0, 3, 5]
 [0, 2, 3, 3, 3, 9]]
```
the first level represents that there are two sequences, whose offsets in the second-level LoD are `[0, 3)` and `[3, 5)`.
The second level is the same as in the absolute-offset example because the lower level is a tensor.
This makes it easy to tell which first-level sequence each empty second-level sequence belongs to.
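As a small plain-Python illustration, the per-sequence lengths (and hence the empty sequences) fall directly out of a relative-offset level:

```python
def seq_lens(level):
    """Lengths of the sequences described by one LoD level."""
    return [level[i + 1] - level[i] for i in range(len(level) - 1)]

lod = [[0, 3, 5],            # first level: offsets into the second level
       [0, 2, 3, 3, 3, 9]]   # second level: offsets into the underlying tensor

print(seq_lens(lod[0]))  # [3, 2]: two top-level sequences
print(seq_lens(lod[1]))  # [2, 1, 0, 0, 6]: the zeros are empty sequences
```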
The following examples are based on relative-offset LoD.
## Usage in a simple machine translation model
Let's start from a simple machine translation model that is simplified from the [machine translation chapter](https://github.com/PaddlePaddle/book/tree/develop/08.machine_translation) to draw a blueprint of what a sequence decoder can do and how to use it.
The model has an encoder that learns the semantic vector from a sequence, and a decoder which uses the sequence decoder to generate new sentences.
**Encoder**
```python
...

def generate():
    ...
    # which means there are 2 sentences to translate
    # - the first sentence has 1 translation prefix, the offsets are [0, 1)
    # - the second sentence has 2 translation prefixes, the offsets are [1, 3) and [3, 6)
    # the target_word.lod is
    # [[0, 1, 6]
    #  [0, 2, 4, 7, 9, 12]]
    # which means there are 2 sentences to translate, with 1 and 5 prefixes respectively
    ...

    translation_ids, translation_scores = decoder()
```
The `decoder.beam_search` is an operator that, given the candidates and the scores of translations including the candidates,
returns the result of the beam search algorithm.
In this way, users can customize anything on the inputs or outputs of beam search. For example:
1. Make the corresponding elements in `topk_generated_scores` zero or some small values so that beam_search will discard these candidates (see the sketch below).
2. Remove some specific candidates in `selected_ids`.
3. Get the final `translation_ids` and remove the unwanted translation sequences from it.
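For instance, the first kind of pruning can be a simple masking step before `beam_search` runs, sketched here with NumPy standing in for the real tensors:

```python
import numpy as np

# toy scores for 4 candidate prefixes; suppose some task-specific heuristic
# decides that candidates 1 and 3 should be pruned
topk_generated_scores = np.array([0.50, 0.20, 0.90, 0.10])
pruned = [1, 3]

# zero (or nearly zero) scores make beam_search discard these candidates
topk_generated_scores[pruned] = 1e-9
```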
The implementation of the sequence decoder can reuse the C++ class [RNNAlgorithm](https://github.com/Superjom/Paddle/blob/68cac3c0f8451fe62a4cdf156747d6dc0ee000b3/paddle/operators/dynamic_recurrent_op.h#L30),
so the python syntax is quite similar to that of an [RNN](https://github.com/Superjom/Paddle/blob/68cac3c0f8451fe62a4cdf156747d6dc0ee000b3/doc/design/block.md#blocks-with-for-and-rnnop).
Both of them are two-level `LoDTensors`:
- The first level represents `batch_size` of (source) sentences.
- The second level represents the candidate ID sets for each translation prefix.
For example, there might be 3 source sentences to translate, which have 2, 3, and 1 candidates respectively.
Unlike an RNN, in the sequence decoder the previous state and the current state have different LoDs and shapes, so a `lod_expand` operator is used to expand the LoD of the previous state to fit the current state (a small sketch follows the example below).
For example, the previous state:
* LoD is `[0, 1, 3][0, 2, 5, 6]`
* content of tensor is `a1 a2 b1 b2 b3 c1`
the current state is stored in `encoder_ctx_expanded`:
* LoD is `[0, 2, 7][0, 3, 5, 8, 9, 11, 11]`
* the content is
- a1 a1 a1 (a1 has 3 candidates, so the state is copied 3 times, once for each candidate)
- a2 a2
- b1 b1 b1
- b2
- b3 b3
- None (c1 has 0 candidates, so c1 is dropped)
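In plain Python, the effect of `lod_expand` on this example is simply to copy each previous state once per candidate of the current step (prefixes with zero candidates are dropped):

```python
def lod_expand(states, num_candidates):
    """states: one entry per prefix of the previous step;
    num_candidates: the number of candidates each prefix has now."""
    expanded = []
    for state, n in zip(states, num_candidates):
        expanded.extend([state] * n)
    return expanded

prev = ["a1", "a2", "b1", "b2", "b3", "c1"]
print(lod_expand(prev, [3, 2, 3, 1, 2, 0]))
# ['a1', 'a1', 'a1', 'a2', 'a2', 'b1', 'b1', 'b1', 'b2', 'b3', 'b3']
```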
The benefit of the relative-offset LoD is that the empty candidate set can be represented naturally.
The status in each time step can be stored in a `TensorArray`, and `Pack`ed to a final LoDTensor. The corresponding syntax is:
```python
decoder.output(selected_ids)
decoder.output(selected_generation_scores)
```
The `selected_ids` are the candidate ids for the prefixes, and will be `Pack`ed by `TensorArray` to a two-level `LoDTensor`, where the first level represents the source sequences and the second level represents the generated sequences.
Packing the `selected_scores` will get a `LoDTensor` that stores the scores of each translation candidate.
Packing the `selected_generation_scores` will get a `LoDTensor`, and each tail is the probability of the translation.
According to the image above, the only phase that changes the LoD is beam search.
## Beam search design
The beam search algorithm will be implemented as one method of the sequence decoder and has 3 inputs:
1. `topk_ids`, the top K candidate ids for each prefix.
2. `topk_scores`, the corresponding scores for `topk_ids`.
3. `generated_scores`, the scores of the prefixes.
All of these are LoDTensors, so that the sequence affiliation is clear. Beam search will keep a beam for each prefix and select a smaller candidate set for each prefix.
It will return three variables:
1. `selected_ids`, the final candidates that the beam search function selected for the next step.
2. `selected_scores`, the scores for the candidates.
3. `generated_scores`, the updated scores for each prefix (with the new candidates appended); see the sketch after this list.
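Schematically, one beam-search step maps the three inputs to the three outputs like this (pure Python over per-prefix lists, not the operator's real signature):

```python
def beam_search_step(topk_ids, topk_scores, generated_scores, beam_size):
    selected_ids, selected_scores, new_generated_scores = [], [], []
    for ids, scores, prefix_score in zip(topk_ids, topk_scores, generated_scores):
        # keep at most beam_size candidates per prefix, best first
        kept = sorted(zip(ids, scores), key=lambda c: c[1], reverse=True)[:beam_size]
        selected_ids.append([i for i, _ in kept])
        selected_scores.append([s for _, s in kept])
        # score of each extended prefix = prefix score + candidate score
        new_generated_scores.append([prefix_score + s for _, s in kept])
    return selected_ids, selected_scores, new_generated_scores
```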
## Introducing the LoD-based `Pack` and `Unpack` methods in `TensorArray`
The `selected_ids`, `selected_scores` and `generated_scores` are LoDTensors that exist at each time step,
so it is natural to store them in arrays.
Currently, PaddlePaddle has a module called `TensorArray` which can store an array of tensors. It is better to store the results of beam search in a `TensorArray`.
The `Pack` and `UnPack` in `TensorArray` are used to pack the tensors in the array into an `LoDTensor` and to split an `LoDTensor` into an array of tensors. `TensorArray` needs some extensions to support packing or unpacking an array of `LoDTensors`.
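Conceptually, `Pack` concatenates the per-step results while recording the step boundaries as a new LoD level, and `UnPack` is its inverse; a toy sketch (ignoring that the real `Pack` also needs to regroup candidates by source sequence):

```python
def pack(tensor_array):
    """Concatenate per-step lists into one flat list plus a relative-offset
    level recording where each step's slice begins and ends."""
    flat, offsets = [], [0]
    for step in tensor_array:
        flat.extend(step)
        offsets.append(len(flat))
    return flat, offsets

def unpack(flat, offsets):
    """Split the flat list back into per-step lists."""
    return [flat[offsets[i]:offsets[i + 1]] for i in range(len(offsets) - 1)]
```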
We want the building procedure to generate Docker images, so that we can run PaddlePaddle applications on Kubernetes clusters. We also want to build .deb packages, so that enterprise users can run PaddlePaddle applications without Docker.

We want to make the building procedures:

1. Static, easy to reproduce.
1. Generate python `whl` packages that can be used widely across many distributions.
1. Build different binaries per release to satisfy different environments:
   - Binaries for different CUDA and CUDNN versions, like CUDA 7.5, 8.0, 9.0
   - Binaries containing only capi
   - Binaries for python with or without wide unicode support.
1. Build Docker images with PaddlePaddle pre-installed, so that we can run PaddlePaddle applications directly in Docker or on Kubernetes clusters.
We want to minimize the size of the generated Docker images and .deb packages so as to reduce the download time.
To achieve this, we created the repo https://github.com/PaddlePaddle/buildtools, which provides several docker images that are `manylinux1` sufficient. We can then build PaddlePaddle using these images to generate the corresponding `whl` binaries.
We want to encapsulate the building tools and dependencies in a *development* Docker image so as to ease tool installation for developers.
## Run The Build
Developers use various editors (emacs, vim, Eclipse, Jupyter Notebook), so the development Docker image contains only building tools, not editing tools. Developers are supposed to git clone the source code onto their development computers and map the code into the development container.
### Build Environments
We want the procedure and tools to also work with testing, continuous integration, and releasing. So we need two Docker images for each version of PaddlePaddle:

1. `paddle:<version>-dev`

   This development image contains only the development tools and standardizes the building procedure. It is published as `paddlepaddle/paddle:<version>-dev` and is also generated from https://github.com/PaddlePaddle/buildtools. Users include:

   - developers -- no longer need to install development tools on the host, and can build their current work on the host (development computer).
   - release engineers -- use this to build the official release from a certain branch/tag on Github.com.
   - document writers / Website developers -- our documents are in the source repo in the form of .md/.rst files and comments in source code. We need tools to extract the information, typeset, and generate Web pages.

   Of course, developers can install building tools on their development computers. But different versions of PaddlePaddle might require different sets or versions of building tools. Also, it makes collaborative debugging easier if all developers use a unified development environment.

   The development image should include the following tools:

   - gcc/clang
   - nvcc
   - Python
   - sphinx
   - woboq
   - sshd

   Many developers work on a remote computer with GPU; they could ssh into the computer and `docker exec` into the development container. However, running `sshd` in the container allows developers to ssh into the container directly.

1. `paddle:<version>`

   This is the production image, generated using the development image. This image might have multiple variants:

   - GPU/AVX `paddle:<version>-gpu`
   - GPU/no-AVX `paddle:<version>-gpu-noavx`
   - no-GPU/AVX `paddle:<version>`
   - no-GPU/no-AVX `paddle:<version>-noavx`

   We allow users to choose between GPU and no-GPU because the GPU version image is much larger than the no-GPU version. We also allow the choice between AVX and no-AVX, because some cloud providers don't provide AVX-enabled VMs.

### Start Build

Choose one docker image that suits your environment and run the following command to start a build:

```bash
docker run --rm -v $PWD:/paddle -e "WITH_GPU=OFF" -e "WITH_AVX=ON" -e "WITH_TESTING=OFF" -e "RUN_TEST=OFF" -e "PYTHON_ABI=cp27-cp27mu" paddlepaddle/paddle_manylinux_devel /paddle/paddle/scripts/docker/build.sh
```

This command mounts the source directory on the host into `/paddle` in the container and runs the build script `/paddle/paddle/scripts/docker/build.sh` inside the container. When the script writes to `/paddle/build` in the container, it actually writes to `$PWD/build` on the host. After the build finishes, you can find the output `whl` package under `build/python/dist`.

### Build Options

Users can specify the following Docker build arguments with either "ON" or "OFF" value:
| Option | Default | Description |
| ------ | -------- | ----------- |
| `WITH_GPU` | OFF | Generates NVIDIA CUDA GPU code and relies on CUDA libraries. |
| `WITH_AVX` | OFF | Set to "ON" to enable AVX support. |
| `WITH_TESTING` | ON | Build unit tests binaries. |
| `WITH_MKLDNN` | ON | Build with [Intel® MKL DNN](https://github.com/01org/mkl-dnn) support. |
| `WITH_MKLML` | ON | Build with [Intel® MKL](https://software.intel.com/en-us/mkl) support. |
| `WITH_GOLANG` | ON | Build fault-tolerant parameter server written in go. |
| `WITH_SWIG_PY` | ON | Build with SWIG python API support. |
| `WITH_C_API` | OFF | Build capi libraries for inference. |
| `WITH_PYTHON` | ON | Build with python support. Turn this off if build is only for capi. |
| `WITH_STYLE_CHECK` | ON | Check the code style when building. |
| `PYTHON_ABI` | "" | Build for different python ABI support, can be cp27-cp27m or cp27-cp27mu |
| `RUN_TEST` | OFF | Run unit tests immediately after the build. |
| `WITH_DOC` | OFF | Build docs after building binaries. |
| `WOBOQ` | OFF | Generate WOBOQ code viewer under `build/woboq_out` |
## Development Environment

Here we describe how to use the above two images. We start from considering our daily development environment.

Developers work on a computer, which is usually a laptop or desktop. A principle here is that the source code lies on the development computer (host), so that editors like Eclipse can parse the source code to support auto-completion.

## Docker Images

You can get the latest PaddlePaddle docker images by `docker pull paddlepaddle/paddle:<version>` or build one yourself.

### Official Docker Releases

Building PaddlePaddle docker images is quite simple since PaddlePaddle can be installed by just running `pip install`. A sample `Dockerfile` is:
```dockerfile
FROM nvidia/cuda:7.5-cudnn5-runtime-centos6
RUN yum install -y centos-release-SCL
RUN yum install -y python27
# This whl package is generated by previous build steps.
RUN pip install /paddlepaddle-0.10.0-cp27-cp27mu-linux_x86_64.whl && rm -f /*.whl
```
Then build the image by running `docker build -t [REPO]/paddle:[TAG] .` under
the directory containing your own `Dockerfile`.
- NOTE: you can choose different base images for your environment; you can find all the versions [here](https://hub.docker.com/r/nvidia/cuda/).

## Usages
### Build the Development Docker Image
The following commands check out the source code to the host and build the development image `paddle:dev`:
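The document does not spell these commands out at this point; a plausible sketch, assuming the `Dockerfile` mentioned below sits at the root of the source tree, is:

```bash
# check out the source code onto the host (development computer)
git clone https://github.com/PaddlePaddle/Paddle paddle
cd paddle
# build the development image from the Dockerfile in the root source tree
docker build -t paddle:dev .
```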
The `docker build` command assumes that `Dockerfile` is in the root source tree. Note that in this design, this `Dockerfile` is the only one in our repo.
Users can specify a Ubuntu mirror server for faster downloading:
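The exact invocation is not given here; assuming the `Dockerfile` accepts a build argument named `UBUNTU_MIRROR` (a hypothetical interface, not confirmed by this document), it might look like:

```bash
# the build-arg name and mirror URL are illustrative only
docker build -t paddle:dev --build-arg UBUNTU_MIRROR=http://mirrors.163.com .
```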
### Build PaddlePaddle from Source Code
Given the development image `paddle:dev`, the following command builds PaddlePaddle from the source tree on the development computer (host):
```bash
docker run --rm -v $PWD:/paddle -e "WITH_GPU=OFF" -e "WITH_AVX=ON" -e "WITH_TESTING=OFF" -e "RUN_TEST=OFF" paddle:dev
```

This command mounts the source directory on the host into `/paddle` in the container, so the default entry point of `paddle:dev`, `build.sh`, builds the source code with possible local changes. When it writes to `/paddle/build` in the container, it actually writes to `$PWD/build` on the host.

`build.sh` builds the following:

- PaddlePaddle binaries,
- `$PWD/build/paddle-<version>.deb` for production installation, and
- `$PWD/build/Dockerfile`, which builds the production Docker image.

The build arguments (`WITH_GPU`, `WITH_AVX`, `WITH_TESTING`, and so on) are the ones described in the Build Options table above. Once you've built the unit tests (`WITH_TESTING=ON`), you can run them manually with the following command:

```bash
docker run --rm -v $PWD:/paddle -e "WITH_GPU=OFF" -e "WITH_AVX=ON" paddle:dev sh -c "cd /paddle/build; make coverall"
```

Setting `RUN_TEST=ON` runs the unit tests right after building; you can't run the unit tests without building them first.

### Run PaddlePaddle Book In Docker

Our [book repo](https://github.com/paddlepaddle/book) also provides a docker image to start a Jupyter notebook inside docker so that you can run this book using docker:

```bash
docker run -d -p 8888:8888 paddlepaddle/book
```

Please refer to https://github.com/paddlepaddle/book if you want to build this docker image yourself.
### Build the Production Docker Image
The following command builds the production image:
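The command itself is elided here; since `build.sh` generates `$PWD/build/Dockerfile` as described above, a plausible sketch is:

```bash
# build the production image from the Dockerfile that build.sh generated
docker build -t paddle -f build/Dockerfile ./build
```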
This production image is minimal -- it includes binary `paddle`, the shared library `libpaddle.so`, and Python runtime.
### Run PaddlePaddle Applications
Again, the development happens on the host. Suppose that we have a simple application program in `a.py`; we can test and run it using the production image:
```bash
docker run --rm -it -v $PWD:/work paddle /work/a.py
```
But this works only if all dependencies of `a.py` are in the production image. If this is not the case, we need to build a new Docker image from the production image with more dependencies installed.
### Build and Run PaddlePaddle Applications
We need a Dockerfile in https://github.com/paddlepaddle/book that builds the Docker image `paddlepaddle/book:<version>`, based on the PaddlePaddle production image:
```dockerfile
FROM paddlepaddle/paddle:<version>
RUN pip install -U matplotlib jupyter ...
COPY . /book
EXPOSE 8080
CMD ["jupyter"]
```
The book image is an example of a PaddlePaddle application image. We can build it:
```bash
git clone https://github.com/paddlepaddle/book
cd book
docker build -t book .
```
### Build and Run Distributed Applications
In our [API design doc](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/api.md#distributed-training), we proposed an API that starts a distributed training job on a cluster. This API needs to build a PaddlePaddle application into a Docker image as above and call kubectl to run it on the cluster. This API might need to generate a Dockerfile like the above and call `docker build`.
Of course, we can manually build an application image and launch the job using the kubectl tool:
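No concrete command is given in this document; a hypothetical invocation (the registry, image, and job names are illustrative only) could look like:

```bash
# push the application image built above to a registry the cluster can reach
docker tag book myregistry/book:latest
docker push myregistry/book:latest
# launch it on the cluster
kubectl run paddle-book --image=myregistry/book:latest
```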
### Reading source code with woboq codebrowser
For developers who are interested in the C++ source code, please use `-e "WOBOQ=ON"` to enable building the C++ source code into HTML pages using [Woboq codebrowser](https://github.com/woboq/woboq_codebrowser).
- The following command builds PaddlePaddle, generates HTML pages from C++ source code, and writes HTML pages into `$HOME/woboq_out` on the host:
```bash
docker run -v $PWD:/paddle -v $HOME/woboq_out:/woboq_out -e "WITH_GPU=OFF" -e "WITH_AVX=ON" -e "WITH_TEST=ON" -e "WOBOQ=ON" paddlepaddle/paddle:latest-dev
```
- You can open the generated HTML files in your Web browser. Or, if you want to run a Nginx container to serve them for a wider audience, you can run: