## Install and Build

### Download & Install

Download the latest C-API development package from the CI system and install it. You can find the required version in the table below:
| Version Tips           | C-API      |
|------------------------|------------|
| cpu_avx_mkl            | paddle.tgz |
| cpu_avx_openblas       | paddle.tgz |
| cpu_noavx_openblas     | paddle.tgz |
| cuda7.5_cudnn5_avx_mkl | paddle.tgz |
| cuda8.0_cudnn5_avx_mkl | paddle.tgz |
| cuda8.0_cudnn7_avx_mkl | paddle.tgz |
| cuda9.0_cudnn7_avx_mkl | paddle.tgz |
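As a sketch, installing a downloaded package might look like the following. The download URL below is a placeholder (an assumption), so substitute the actual CI link for your configuration from the table above:

```shell
# Hypothetical example: the URL is a placeholder, not a real download link.
# Pick the paddle.tgz that matches your configuration from the table above.
PADDLE_ROOT=/path/of/capi
mkdir -p "$PADDLE_ROOT"
wget -O paddle.tgz "https://example.com/path/to/paddle.tgz"
tar -xzf paddle.tgz -C "$PADDLE_ROOT" --strip-components=1
```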
### From source

Users can also compile the C-API library from the PaddlePaddle source code, using the following compilation options:
| Options      | Value             |
|--------------|-------------------|
| WITH_C_API   | ON                |
| WITH_PYTHON  | OFF (recommended) |
| WITH_SWIG_PY | OFF (recommended) |
| WITH_GOLANG  | OFF (recommended) |
| WITH_GPU     | ON/OFF            |
| WITH_MKL     | ON/OFF            |
It is best to use the recommended values to avoid linking unnecessary libraries. Set the other compilation options as needed.

Pull the latest code from GitHub and configure the compilation options (replace `PADDLE_ROOT` with the installation path of the PaddlePaddle C-API inference library):

```shell
PADDLE_ROOT=/path/of/capi
git clone https://github.com/PaddlePaddle/Paddle.git
cd Paddle
mkdir build
cd build
cmake -DCMAKE_INSTALL_PREFIX=$PADDLE_ROOT \
      -DCMAKE_BUILD_TYPE=Release \
      -DWITH_C_API=ON \
      -DWITH_SWIG_PY=OFF \
      -DWITH_GOLANG=OFF \
      -DWITH_PYTHON=OFF \
      -DWITH_MKL=OFF \
      -DWITH_GPU=OFF \
      ..
```

After the above generates a Makefile, run `make && make install`. On successful compilation, everything the C-API requires ((1) the PaddlePaddle inference library and header files; (2) third-party libraries and header files) is installed into the `PADDLE_ROOT` directory.

If the compilation succeeds, you will see the following directory structure under `PADDLE_ROOT` (PaddlePaddle header files and libraries, plus third-party libraries and header files; which of the latter you need depends on the linking method):

```text
├── include
│   └── paddle
│       ├── arguments.h
│       ├── capi.h
│       ├── capi_private.h
│       ├── config.h
│       ├── error.h
│       ├── gradient_machine.h
│       ├── main.h
│       ├── matrix.h
│       ├── paddle_capi.map
│       └── vector.h
├── lib
│   ├── libpaddle_capi_engine.a
│   ├── libpaddle_capi_layers.a
│   ├── libpaddle_capi_shared.so
│   └── libpaddle_capi_whole.a
└── third_party
    ├── gflags
    │   ├── include
    │   │   └── gflags
    │   │       ├── gflags_completions.h
    │   │       ├── gflags_declare.h
    │   │       ...
    │   └── lib
    │       └── libgflags.a
    ├── glog
    │   ├── include
    │   │   └── glog
    │   │       ├── config.h
    │   │       ...
    │   └── lib
    │       └── libglog.a
    ├── openblas
    │   ├── include
    │   │   ├── cblas.h
    │   │   ...
    │   └── lib
    │       ...
    ├── protobuf
    │   ├── include
    │   │   └── google
    │   │       └── protobuf
    │   │           ...
    │   └── lib
    │       └── libprotobuf-lite.a
    └── zlib
        ├── include
        │   ...
        └── lib
            ...
```

### Linking

There are three ways to link against the C-API:

1. Link against the dynamic library `libpaddle_capi_shared.so` (the most convenient way; **recommended unless you have special requirements**):
   1. If the C-API was compiled for CPU with `OpenBLAS`, you only need to link `libpaddle_capi_shared.so` to build an inference program.
   2. If the C-API was compiled for CPU with `MKL`, you must also link the MKL libraries directly, since `MKL` ships its own dynamic library.
   3. If the C-API was compiled for GPU, the CUDA libraries are loaded dynamically when the inference program runs; add the CUDA library path to the `LD_LIBRARY_PATH` environment variable.
2. Link against the static library `libpaddle_capi_whole.a`:
   1. Specify the `-Wl,--whole-archive` linker option.
   2. Explicitly link the third-party libraries such as `gflags`, `glog`, `libz`, and `protobuf`; you can find them under the `PADDLE_ROOT/third_party` directory.
   3. If the C-API was compiled with OpenBLAS, explicitly link `libopenblas.a`.
   4. If the C-API was compiled with MKL, explicitly link the MKL dynamic libraries.
3. Link against the static libraries `libpaddle_capi_layers.a` and `libpaddle_capi_engine.a`:
   1. This method is mainly intended for inference on mobile devices.
   2. It splits `libpaddle_capi_whole.a` into two smaller static libraries to reduce the size of the linked binary.
   3. Specify `-Wl,--whole-archive -lpaddle_capi_layers` and `-Wl,--no-whole-archive -lpaddle_capi_engine` when linking.
   4. The third-party dependencies must be linked explicitly, as in method 2.
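As an illustration of the three methods, the link lines for a hypothetical inference program `main.c` might look as follows. This is a sketch: the program name, the exact set of third-party archives, and the trailing system libraries are assumptions to adapt to your build (a CPU + OpenBLAS build is assumed; MKL builds would link the MKL dynamic libraries instead of `libopenblas.a`):

```shell
PADDLE_ROOT=/path/of/capi
THIRD=$PADDLE_ROOT/third_party

# Method 1: dynamic library (recommended) -- one library is enough.
gcc main.c -I$PADDLE_ROOT/include \
    -L$PADDLE_ROOT/lib -lpaddle_capi_shared -o inference

# Method 2: whole static library; third-party archives linked explicitly.
gcc main.c -I$PADDLE_ROOT/include \
    -Wl,--whole-archive $PADDLE_ROOT/lib/libpaddle_capi_whole.a \
    -Wl,--no-whole-archive \
    $THIRD/gflags/lib/libgflags.a \
    $THIRD/glog/lib/libglog.a \
    $THIRD/protobuf/lib/libprotobuf-lite.a \
    $THIRD/openblas/lib/libopenblas.a \
    -lz -lpthread -lstdc++ -lm -o inference

# Method 3: split static libraries (mainly for mobile); only the layers
# archive needs --whole-archive, which keeps the binary smaller.
gcc main.c -I$PADDLE_ROOT/include -L$PADDLE_ROOT/lib \
    -Wl,--whole-archive -lpaddle_capi_layers \
    -Wl,--no-whole-archive -lpaddle_capi_engine \
    $THIRD/gflags/lib/libgflags.a \
    $THIRD/glog/lib/libglog.a \
    $THIRD/protobuf/lib/libprotobuf-lite.a \
    $THIRD/openblas/lib/libopenblas.a \
    -lz -lpthread -lstdc++ -lm -o inference
```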