Build from Sources

How To Build

PaddlePaddle mainly uses CMake together with GCC/G++ as its compile tools. We recommend using our pre-built Docker images to run the build, so that you do not have to install the dependencies yourself. Several build-environment Docker images are available.

If you choose not to use a Docker image for your build, you need to install the Compile Dependencies listed below before running the build.

Then run:

git clone https://github.com/PaddlePaddle/Paddle.git
cd Paddle
# if you are using Docker, run the following command to build CPU-only binaries
docker run -it -v $PWD:/paddle -e "WITH_GPU=OFF" -e "WITH_TESTING=OFF" paddlepaddle/paddle_manylinux_devel:cuda8.0_cudnn5 bash -x /paddle/paddle/scripts/docker/build.sh
# otherwise, run the following commands
mkdir build
cd build
cmake -DWITH_GPU=OFF -DWITH_TESTING=OFF ..
make
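
To speed up a local (non-Docker) build, make can be run with multiple parallel jobs; this is a minor variation on the command above:

# use all available CPU cores for compilation
make -j$(nproc)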

When the compilation finishes, the output whl package can be found under build/python/dist. You can then install the whl on the local machine or copy it to the target machine.

pip install build/python/dist/*.whl
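
As a quick sanity check after installation (assuming the wheel installs the top-level paddle module), you can try importing it:

python -c "import paddle"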

Run Tests

If you wish to run the tests, follow the steps below:

When using Docker, setting RUN_TEST=ON and WITH_TESTING=ON runs the tests immediately after the build. Setting WITH_GPU=ON also allows the tests to run on GPU (a GPU variant of the command is sketched below).

docker run -it -v $PWD:/paddle -e "WITH_GPU=OFF" -e "WITH_TESTING=ON" -e "RUN_TEST=ON" paddlepaddle/paddle_manylinux_devel:cuda8.0_cudnn5 bash -x /paddle/paddle/scripts/docker/build.sh
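
For GPU tests, a sketch of the same command with GPU enabled; this assumes nvidia-docker (or an equivalent GPU-enabled Docker runtime) is installed on the host:

nvidia-docker run -it -v $PWD:/paddle -e "WITH_GPU=ON" -e "WITH_TESTING=ON" -e "RUN_TEST=ON" paddlepaddle/paddle_manylinux_devel:cuda8.0_cudnn5 bash -x /paddle/paddle/scripts/docker/build.sh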

If you don’t use Docker, simply running ctest will start the tests:

mkdir build
cd build
cmake -DWITH_GPU=OFF -DWITH_TESTING=ON ..
make
ctest
# run a single test like test_mul_op
ctest -R test_mul_op
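
ctest also accepts the usual CTest flags, for example to print the output of failing tests or to run tests in parallel:

ctest --output-on-failure
ctest -j $(nproc)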

Compile Dependencies

PaddlePaddle needs the following dependencies when compiling; other dependencies will be downloaded automatically. An example installation command for Ubuntu is sketched after the table.

PaddlePaddle Compile Dependencies
Dependency   Version   Description
CMake        >=3.5
GCC          4.8.2     Recommend devtools2 for CentOS
Python       2.7.x     Need libpython2.7.so
pip          >=9.0
numpy
SWIG         >=2.0
Go           >=1.8     Optional
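
As an illustration only (package names differ across distributions and releases, and the versions shipped by your distribution may be older than required), the dependencies above can be installed on Ubuntu roughly as follows:

apt-get install -y cmake gcc g++ python2.7 python2.7-dev python-pip swig golang
pip install --upgrade pip numpy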

Build Options

Build options include whether to build binaries for CPU or GPU, which BLAS library to use, and so on. You can pass these settings when running cmake. For a detailed cmake tutorial, please refer to the CMake documentation.

Bool Type Options

You can pass such options by adding -D arguments, for example:

cmake .. -DWITH_GPU=OFF
Bool Type Options
Option             Description                                           Default
WITH_GPU           Build with GPU support                                ON
WITH_C_API         Build only the C-API                                  OFF
WITH_DOUBLE        Build with double precision                           OFF
WITH_DSO           Dynamically load CUDA libraries                       ON
WITH_AVX           Build with AVX support                                ON
WITH_PYTHON        Build with integrated Python interpreter              ON
WITH_STYLE_CHECK   Check code style when building                        ON
WITH_TESTING       Build unit tests                                      ON
WITH_DOC           Build documentation                                   OFF
WITH_SWIG_PY       Build Python SWIG interface for V2 API                Auto
WITH_GOLANG        Build fault-tolerant parameter server written in Go   ON
WITH_MKL           Use MKL as BLAS library, otherwise use OpenBLAS       ON
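
Options can be combined in a single cmake invocation; for example, a CPU-only build that skips the style check but keeps unit tests:

cmake .. -DWITH_GPU=OFF -DWITH_STYLE_CHECK=OFF -DWITH_TESTING=ON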

BLAS

PaddlePaddle supports MKL and OpenBLAS as the BLAS library. By default it uses MKL. If you are using MKL and your machine supports AVX2, MKL-DNN will also be downloaded and used; see the MKL-DNN documentation for more details.

If you choose not to use MKL, OpenBLAS will be used instead.
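
For example, to build against OpenBLAS instead of MKL, disable the WITH_MKL option listed above:

cmake .. -DWITH_MKL=OFF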

CUDA/cuDNN

PaddlePaddle automatically finds CUDA and cuDNN when compiling and running. The parameter -DCUDA_ARCH_NAME=Auto can be used to detect the SM architecture automatically, which speeds up the build.

PaddlePaddle can be built with cuDNN v5.1 or any later version, and we intend to keep up with the latest cuDNN releases. Be sure to run with the same version of cuDNN that you built against.
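
For example, a GPU build that lets the build system detect the local SM architecture automatically:

cmake .. -DWITH_GPU=ON -DCUDA_ARCH_NAME=Auto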

Pass Compile Options

You can pass compile options to point the build at the intended BLAS/CUDA/cuDNN libraries. When running the cmake command, it searches system paths such as /usr/lib:/usr/local/lib and then the paths that you passed to cmake, e.g.:

cmake .. -DWITH_GPU=ON -DWITH_TESTING=OFF -DCUDNN_ROOT=/opt/cudnnv5

NOTE: These options only take effect the first time cmake is run; if you want to change them later, you need to clean the CMake cache or remove the build directory (rm -rf) and run cmake again.
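
For example, to change CUDNN_ROOT after cmake has already been run once, clear the cached configuration (or start from a fresh build directory) and configure again:

rm -f CMakeCache.txt
cmake .. -DWITH_GPU=ON -DWITH_TESTING=OFF -DCUDNN_ROOT=/opt/cudnnv5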