# How to compile Paddle Serving

([简体中文](./COMPILE_CN.md)|English)

## Compilation environment requirements

|            module            |              version              |
| :--------------------------: | :-------------------------------: |
|              OS              |             CentOS 7              |
|             gcc              |          4.8.5 and later          |
|           gcc-c++            |          4.8.5 and later          |
|            cmake             |          3.2.0 and later          |
|            Python            |  2.7.2 and later / 3.6 and later  |
|              Go              |          1.9.2 and later          |
|             git              |         2.17.1 and later          |
|         glibc-static         |               2.17                |
|        openssl-devel         |              1.0.2k               |
|         bzip2-devel          |          1.0.6 and later          |
| python-devel / python3-devel | 2.7.5 and later / 3.6.8 and later |
|         sqlite-devel         |         3.7.17 and later          |
|           patchelf           |           0.9 and later           |
|           libXext            |               1.3.3               |
|            libSM             |               1.2.2               |
|          libXrender          |              0.9.10               |

It is recommended to use Docker for compilation. We have prepared the Paddle Serving compilation environment for you; see [this document](DOCKER_IMAGES.md).

This document takes Python2 as an example to show how to compile Paddle Serving. If you want to compile with Python3, just adjust the Python options passed to cmake (see the sketch after this list):

- Set `-DPYTHON_INCLUDE_DIR` to `$PYTHONROOT/include/python3.6m/`
- Set `-DPYTHON_LIBRARIES` to `$PYTHONROOT/lib64/libpython3.6.so`
- Set `-DPYTHON_EXECUTABLE` to `$PYTHONROOT/bin/python3.6`
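
For example, a CPU server build configured for Python 3.6 could look like this (a sketch; `server-build-cpu-py3` is just an arbitrary build directory name, and the paths assume Python 3.6 lives under `$PYTHONROOT`):

```shell
mkdir server-build-cpu-py3 && cd server-build-cpu-py3
cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python3.6m/ \
    -DPYTHON_LIBRARIES=$PYTHONROOT/lib64/libpython3.6.so \
    -DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python3.6 \
    -DSERVER=ON ..
make -j10
```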

## Get Code

```shell
git clone https://github.com/PaddlePaddle/Serving
cd Serving && git submodule update --init --recursive
```

## PYTHONROOT Setting
D
Dong Daxiang 已提交
46

```shell
# For example, if the path of python is /usr/bin/python, set PYTHONROOT to /usr
export PYTHONROOT=/usr/
```

In the default CentOS 7 image we provide, the Python path is `/usr/bin/python`. If you want to use our CentOS 6 image, you need to set it with `export PYTHONROOT=/usr/local/python2.7/`.

## Install Python dependencies

```shell
pip install -r python/requirements.txt
```

If Python3 is used, replace `pip` with `pip3`.

## GOPATH Setting


The default GOPATH is `$HOME/go`; you can set it to another path if you prefer.
```shell
export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin
```

## Get Go packages

```shell
# enable Go modules and use a mainland-China module proxy
go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct
# install the protoc plugins and gRPC packages needed by the build
go get -u github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway@v1.15.2
go get -u github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger@v1.15.2
go get -u github.com/golang/protobuf/protoc-gen-go@v1.4.3
go get -u google.golang.org/grpc@v1.33.0
```

## Compile Server

J
Jiawei Wang 已提交
89
### Compile with the CPU version of the Paddle inference library

``` shell
mkdir server-build-cpu && cd server-build-cpu
cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ \
    -DPYTHON_LIBRARIES=$PYTHONROOT/lib/libpython2.7.so \
    -DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python \
    -DSERVER=ON ..
make -j10
```

You can execute `make install` to put the targets under the `./output` directory. To do this, add `-DCMAKE_INSTALL_PREFIX=./output` to the cmake command shown above.
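
For instance, the CPU build above with the install step added would be (same commands, plus the install prefix):

```shell
cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ \
    -DPYTHON_LIBRARIES=$PYTHONROOT/lib/libpython2.7.so \
    -DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python \
    -DCMAKE_INSTALL_PREFIX=./output \
    -DSERVER=ON ..
make -j10
make install
```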

### Compile with the GPU version of the Paddle inference library

``` shell
mkdir server-build-gpu && cd server-build-gpu
cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ \
    -DPYTHON_LIBRARIES=$PYTHONROOT/lib/libpython2.7.so \
    -DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python \
    -DCUDA_TOOLKIT_ROOT_DIR=${CUDA_PATH} \
    -DCUDNN_LIBRARY=${CUDNN_LIBRARY} \
    -DSERVER=ON \
    -DWITH_GPU=ON ..
make -j10
```

### Compile with the TensorRT version of the Paddle inference library

``` shell
mkdir server-build-trt && cd server-build-trt
cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ \
    -DPYTHON_LIBRARIES=$PYTHONROOT/lib/libpython2.7.so \
    -DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python \
    -DTENSORRT_ROOT=${TENSORRT_LIBRARY_PATH} \
    -DCUDA_TOOLKIT_ROOT_DIR=${CUDA_PATH} \
    -DCUDNN_LIBRARY=${CUDNN_LIBRARY} \
    -DSERVER=ON \
    -DWITH_GPU=ON \
    -DWITH_TRT=ON ..
make -j10
```

Execute `make install` to put the targets under the `./output` directory.

**Attention:** After the compilation is successful, you need to set the path of `SERVING_BIN`. See [Note](https://github.com/PaddlePaddle/Serving/blob/develop/doc/COMPILE.md#Note) for details.

## Compile Client

``` shell
mkdir client-build && cd client-build
cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ \
      -DPYTHON_LIBRARIES=$PYTHONROOT/lib/libpython2.7.so \
      -DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python \
      -DCLIENT=ON ..
make -j10
```

Execute `make install` to put the targets under the `./output` directory.

## Compile the App

```bash
mkdir app-build && cd app-build
cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ \
    -DPYTHON_LIBRARIES=$PYTHONROOT/lib/libpython2.7.so \
    -DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python \
    -DAPP=ON ..
make
```

## Install wheel package

Whichever part you compile (client, server, or App), the wheel package is generated under `python/dist/` in the corresponding build directory (`server-build-cpu`, `server-build-gpu`, `client-build`, `app-build`). Install it with pip after compilation.
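
For example, after the CPU server build above, the install step might look like this (a sketch; the exact wheel filename depends on the version you compiled, and with Python3 use `pip3`):

```shell
pip install server-build-cpu/python/dist/paddle_serving_server-*.whl
```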

## Note

When running the Python server, it checks the `SERVING_BIN` environment variable. If you want to use your own compiled binary, set this variable to the path of the corresponding binary, usually `export SERVING_BIN=${BUILD_DIR}/core/general-server/serving`.
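
For example, if you compiled the CPU server in `server-build-cpu` and run from the Serving source root, the setting might look like this (a sketch; adjust the path to your actual build directory):

```shell
export SERVING_BIN=$(pwd)/server-build-cpu/core/general-server/serving
```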

## Verify

Please use the examples under `python/examples` to verify that the build works.
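
A quick smoke test, assuming the compiled server wheel is installed and the `fit_a_line` example with its `get_data.sh` script is present (model name and port are illustrative):

```shell
cd python/examples/fit_a_line
sh get_data.sh
# start a server with the downloaded example model on port 9292
python -m paddle_serving_server.serve --model uci_housing_model --port 9292
```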



## CMake Option Description

| Compile Options  |                    Description             | Default |
| :--------------: | :----------------------------------------: | :--: |
|     WITH_AVX     | Compile Paddle Serving with AVX intrinsics | OFF  |
|     WITH_MKL     |  Compile Paddle Serving with MKL support   | OFF  |
|     WITH_GPU     |   Compile Paddle Serving with NVIDIA GPU   | OFF  |
|  CUDNN_LIBRARY   |    Define CuDNN library and header path    |      |
| CUDA_TOOLKIT_ROOT_DIR |       Define CUDA PATH                |      |
|   TENSORRT_ROOT  |           Define TensorRT PATH             |      |
|      CLIENT      |       Compile Paddle Serving Client        | OFF  |
|      SERVER      |       Compile Paddle Serving Server        | OFF  |
|       APP        |     Compile Paddle Serving App package     | OFF  |
| WITH_ELASTIC_CTR |        Compile ELASTIC-CTR solution        | OFF  |
|       PACK       |           Build the whl package            | OFF  |
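
As an illustration, a server build that also enables MKL might combine the options like this (a sketch; enable only the options your environment supports):

```shell
cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ \
    -DPYTHON_LIBRARIES=$PYTHONROOT/lib/libpython2.7.so \
    -DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python \
    -DWITH_MKL=ON \
    -DSERVER=ON ..
```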

### WITH_GPU Option

Paddle Serving supports prediction on the GPU through the PaddlePaddle inference library. The WITH_GPU option is used to detect basic libraries such as CUDA/CUDNN on the system. If an appropriate version is detected, the GPU Kernel will be compiled when PaddlePaddle is compiled.

To compile the Paddle Serving GPU version on bare metal, you need to install these basic libraries:

- CUDA
- CuDNN

To compile the TensorRT version, you need to install the TensorRT library.

Note here:

1. The versions of base libraries such as CUDA/CuDNN installed on the system where Serving is compiled must be compatible with the actual GPU device. For example, the Tesla V100 card requires at least CUDA 9.0. If the base libraries used during compilation are too old, the generated GPU code will not be compatible with the actual hardware, which can cause the Serving process to fail to start or lead to serious problems such as a coredump.
2. On the system that runs Paddle Serving, install a CUDA driver compatible with the actual GPU device, along with base libraries compatible with the CUDA/CuDNN versions used during compilation. If the CUDA/CuDNN versions installed on the running system are lower than those used at compile time, some CUDA function calls may fail, among other problems.


For reference, the following table lists the base library versions matching each PaddlePaddle release:

|          |  CUDA   |          CuDNN           | TensorRT |
| :----:   | :-----: | :----------------------: | :----:   |
| post9    |  9.0    | CuDNN 7.3.1 for CUDA 9.0 |          |
| post10   |  10.0   | CuDNN 7.5.1 for CUDA 10.0|          |
| trt      |  10.1   | CuDNN 7.5.1 for CUDA 10.1| 6.0.1.5  |

### How to make the compiler detect the CuDNN library

Download the corresponding cuDNN version from the NVIDIA developer website and decompress it, then add `-DCUDNN_LIBRARY` to the cmake command to specify the cuDNN library path.
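
A minimal sketch, assuming the archive was unpacked to the hypothetical path `/usr/local/cudnn`:

```shell
# hypothetical location of the unpacked cuDNN archive; adjust to your system
export CUDNN_LIBRARY=/usr/local/cudnn/lib64/libcudnn.so
```

Then pass `-DCUDNN_LIBRARY=${CUDNN_LIBRARY}` to the cmake command, as in the GPU build example above.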