# How to compile PaddleServing

([简体中文](./Compile_CN.md)|English)

## Overview

Compiling Paddle Serving involves the following steps:

- Compilation Environment Preparation: select the most suitable image according to the needs of the model and the operating environment
- Download the Serving Code Repo: download the Serving code repository and perform initialization operations as needed
- Environment Variable Preparation: determine the Python environment variables according to the running environment; a GPU environment additionally needs environment variables such as CUDA, CUDNN, and TensorRT
- Compilation: compile the `paddle-serving-server`, `paddle-serving-client`, and `paddle-serving-app` whl packages
- Install Related Whl Packages: install the three compiled whl packages and set the `SERVING_BIN` environment variable

In addition, for some C++ secondary development scenarios, we also provide an OpenCV linking solution.

## Compilation Environment Requirements

|            module            |              version              |
| :--------------------------: | :-------------------------------: |
|              OS              |     Ubuntu16 and 18/CentOS 7      |
|             gcc              |     5.4.0 (CUDA 10.1) and 8.2.0     |
|           gcc-c++            |     5.4.0 (CUDA 10.1) and 8.2.0     |
|            cmake             |          3.2.0 and later          |
|            Python            |          3.6.0 and later          |
|              Go              |          1.17.2 and later          |
|             git              |         2.17.1 and later          |
|         glibc-static         |               2.17                |
|        openssl-devel         |              1.0.2k               |
|         bzip2-devel          |          1.0.6 and later          |
|        python3-devel         |          3.6.0 and later          |
|         sqlite-devel         |         3.7.17 and later          |
|           patchelf           |                0.9                |
|           libXext            |               1.3.3               |
|            libSM             |               1.2.2               |
|          libXrender          |              0.9.10               |

It is recommended to compile inside Docker. We have prepared a Paddle Serving compilation environment for you with the above compilation dependencies configured. For details, please refer to [this document](DOCKER_IMAGES_CN.md).

We provide development images for five environments: CPU, CUDA10.1 + CUDNN7, CUDA10.2 + CUDNN7, CUDA10.2 + CUDNN8, and CUDA11.2 + CUDNN8. Our Serving development images cover all of these environments. At the same time, we also support the Paddle development images.

The Serving development image is the image provided with the Serving suite for compiling and debugging prediction services across the various prediction environments. The Paddle development image is the image released on the official Paddle website for compiling, developing, and training models; supporting it allows Paddle developers to use Serving directly in the same container. Developers who used earlier versions of Serving will already be familiar with the Serving development image, while developers from the Paddle training framework ecosystem will be more at home with the existing Paddle development images. To accommodate both habits, we fully support both sets of images.

|  Environment           |   Serving Dev Image Tag               |    OS      | Paddle Dev Image Tag       |  OS            |
| :--------------------------: | :-------------------------------: | :-------------: | :-------------------: | :----------------: |
|  CPU                         | 0.8.0-devel                       |  Ubuntu 16.04   | 2.2.2                 | Ubuntu 18.04        |
|  CUDA10.1 + Cudnn7             | 0.8.0-cuda10.1-cudnn7-devel       |  Ubuntu 16.04   | N/A                     | N/A                 |
|  CUDA10.2 + Cudnn7             | 0.8.0-cuda10.2-cudnn7-devel       |  Ubuntu 16.04   | 2.2.2-gpu-cuda10.2-cudnn7 | Ubuntu 16.04        |
|  CUDA10.2 + Cudnn8             | 0.8.0-cuda10.2-cudnn8-devel       |  Ubuntu 16.04   | N/A                    |  N/A                 |
|  CUDA11.2 + Cudnn8             | 0.8.0-cuda11.2-cudnn8-devel       |  Ubuntu 16.04   | 2.2.2-gpu-cuda11.2-cudnn8 | Ubuntu 18.04        |

First pull the image for the environment you need. In the **Environment** column of the table above, every entry except CPU (the Cuda** + Cudnn** ones) is a GPU environment.

You can use the Serving development images:
```
docker pull registry.baidubce.com/paddlepaddle/serving:${Serving Dev Image Tag}

# For GPU Image
nvidia-docker run --rm -it registry.baidubce.com/paddlepaddle/serving:${Serving Dev Image Tag} bash

# For CPU Image
docker run --rm -it registry.baidubce.com/paddlepaddle/serving:${Serving Dev Image Tag} bash
```

You can also use the Paddle development images:
```
docker pull registry.baidubce.com/paddlepaddle/paddle:${Paddle Dev Image Tag}

# For GPU Image
nvidia-docker run --rm -it registry.baidubce.com/paddlepaddle/paddle:${Paddle Dev Image Tag} bash

# For CPU Image
docker run --rm -it registry.baidubce.com/paddlepaddle/paddle:${Paddle Dev Image Tag} bash
```

## Download the Serving Code Repo
**Note: If you are using the Paddle development image, you need to manually run `bash tools/paddle_env_install.sh` after downloading the code repo (as shown in the code box below).**
```
git clone https://github.com/PaddlePaddle/Serving
cd Serving && git submodule update --init --recursive

# Paddle development image needs to run the following commands, Serving development image does not need to run
bash tools/paddle_env_install.sh
```

## Environment Variables Preparation

**Set PYTHON environment variable**

If you are using a Serving development image, follow the steps below to determine the Python version to compile against and set the corresponding environment variables. A total of three environment variables need to be set: `PYTHON_INCLUDE_DIR`, `PYTHON_LIBRARIES`, and `PYTHON_EXECUTABLE`. Below we take Python 3.7 as an example to show how to set them.

1) Set `PYTHON_INCLUDE_DIR`

Search the directory where Python.h is located
```
find / -name Python.h
```
Usually the result looks like `**/include/python3.7/Python.h`; we only need its directory. For example, if `/usr/include/python3.7/Python.h` is found, then we only need `export PYTHON_INCLUDE_DIR=/usr/include/python3.7/`.
If nothing is found, it means either 1) the Python development headers are not installed and Python needs to be reinstalled, or 2) you have insufficient permissions to view the relevant system directories.

2) Set `PYTHON_LIBRARIES`

Search for libpython3.7.so or libpython3.7m.so
```
find / -name libpython3.7.so
find / -name libpython3.7m.so
```
Usually the result looks like `**/lib/libpython3.7.so` or `**/lib/x86_64-linux-gnu/libpython3.7.so`; we only need its directory. For example, if `/usr/local/lib/libpython3.7.so` is found, then we only need `export PYTHON_LIBRARIES=/usr/local/lib`.
If nothing is found, it means either 1) Python was compiled statically and a dynamically compiled Python needs to be reinstalled, or 2) you have insufficient permissions to view the relevant system directories.

3) Set `PYTHON_EXECUTABLE`

View the python3.7 path directly
```
which python3.7
```
If the result is `/usr/local/bin/python3.7`, then directly set `export PYTHON_EXECUTABLE=/usr/local/bin/python3.7`.
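As an alternative to the `find` searches above, the three variables can be derived from the target interpreter itself through the standard `sysconfig` module. The following is a minimal sketch (the helper function and the idea of generating `export` lines are ours, not part of Serving); run it with the exact interpreter you want to build against, e.g. `python3.7`:

```python
# Sketch: print export lines for PYTHON_EXECUTABLE / PYTHON_INCLUDE_DIR /
# PYTHON_LIBRARIES, derived from the running interpreter via sysconfig.
import sys
import sysconfig

def python_env_exports() -> list:
    """Return the three `export` lines as strings."""
    return [
        "export PYTHON_EXECUTABLE=" + sys.executable,
        "export PYTHON_INCLUDE_DIR=" + sysconfig.get_path("include"),
        # LIBDIR is the directory containing libpythonX.Y(m).so on Linux
        "export PYTHON_LIBRARIES=" + (sysconfig.get_config_var("LIBDIR") or ""),
    ]

if __name__ == "__main__":
    print("\n".join(python_env_exports()))
```

Note that this prints `PYTHON_LIBRARIES` as a directory; elsewhere in this document it is sometimes set to the full path of `libpython3.7m.so`. Both forms appear in the commands in this document, so use whichever your image expects.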

Setting these three environment variables correctly is very important. After they are set, we can perform the following operations (the following uses the PYTHON environment of the Paddle CUDA 11.2 development image; for other images, please change `PYTHON_INCLUDE_DIR`, `PYTHON_LIBRARIES`, and `PYTHON_EXECUTABLE` accordingly).

```
# The following three environment variables match the Paddle Cuda11.2 development image; other images may need different values
export PYTHON_INCLUDE_DIR=/usr/include/python3.7m/
export PYTHON_LIBRARIES=/usr/lib/x86_64-linux-gnu/libpython3.7m.so
export PYTHON_EXECUTABLE=/usr/bin/python3.7

export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin

python3.7 -m pip install -r python/requirements.txt
 
go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct
go install github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway@v1.15.2
go install github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger@v1.15.2
go install github.com/golang/protobuf/protoc-gen-go@v1.4.3
go install google.golang.org/grpc@v1.33.0
go env -w GO111MODULE=auto
```

If you are a GPU user, you also need to set `CUDA_PATH`, `CUDNN_LIBRARY`, `CUDA_CUDART_LIBRARY`, and `TENSORRT_LIBRARY_PATH`.
```
export CUDA_PATH='/usr/local/cuda'
export CUDNN_LIBRARY='/usr/local/cuda/lib64/'
export CUDA_CUDART_LIBRARY="/usr/local/cuda/lib64/"
export TENSORRT_LIBRARY_PATH="/usr/"
```
The meaning of environment variables is shown in the table below.

| cmake environment variable | meaning | GPU environment considerations | whether Docker environment is needed |
|-----------------------|-------------------------------------|-------------------------------|--------------------|
| CUDA_TOOLKIT_ROOT_DIR | cuda installation path, usually /usr/local/cuda | Required for all environments | No (/usr/local/cuda) |
| CUDNN_LIBRARY | The directory where libcudnn.so.* is located, usually /usr/local/cuda/lib64/ | Required for all environments | No (/usr/local/cuda/lib64/) |
| CUDA_CUDART_LIBRARY | The directory where libcudart.so.* is located, usually /usr/local/cuda/lib64/ | Required for all environments | No (/usr/local/cuda/lib64/) |
| TENSORRT_ROOT | The upper level directory of the directory where libnvinfer.so.* is located, depends on the TensorRT installation directory | Required for all environments | No (/usr) |
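Because a wrong path in these variables only fails deep into the cmake/make run, it can help to verify first that each directory actually contains the expected shared library. A small sketch (the `has_lib` helper is ours; the default paths are taken from the table above and may differ on your machine):

```python
# Sketch: check that a directory contains a shared library matching a pattern,
# so a typo in CUDNN_LIBRARY etc. is caught before a long cmake/make run.
import glob
import os

def has_lib(directory: str, pattern: str) -> bool:
    """Return True if `directory` contains at least one file matching `pattern`."""
    return bool(glob.glob(os.path.join(directory, pattern)))

if __name__ == "__main__":
    # Default locations from the table above; adjust for your machine.
    checks = [
        ("CUDNN_LIBRARY", "/usr/local/cuda/lib64/", "libcudnn.so*"),
        ("CUDA_CUDART_LIBRARY", "/usr/local/cuda/lib64/", "libcudart.so*"),
        ("TENSORRT_ROOT", "/usr/lib/x86_64-linux-gnu/", "libnvinfer.so*"),
    ]
    for name, directory, pattern in checks:
        status = "ok" if has_lib(directory, pattern) else "MISSING"
        print(f"{name}: {directory}{pattern} -> {status}")
```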

## Compilation

We need to compile three targets in total: `paddle-serving-server`, `paddle-serving-client`, and `paddle-serving-app`. Among them, `paddle-serving-server` must be built as either the CPU or the GPU version.

### Compile paddle-serving-server

For the CPU version, run:
```
mkdir build_server
cd build_server
cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR \
     -DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
     -DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
     -DSERVER=ON \
     -DWITH_GPU=OFF ..
make -j20
cd ..
```

For the GPU version, run:
```
mkdir build_server
cd build_server
cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR \
     -DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
     -DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
     -DCUDA_TOOLKIT_ROOT_DIR=${CUDA_PATH} \
     -DCUDNN_LIBRARY=${CUDNN_LIBRARY} \
     -DCUDA_CUDART_LIBRARY=${CUDA_CUDART_LIBRARY} \
     -DTENSORRT_ROOT=${TENSORRT_LIBRARY_PATH} \
     -DSERVER=ON \
     -DWITH_GPU=ON ..
make -j20
cd ..
```

### Compile paddle-serving-client and paddle-serving-app

Next, we can compile the client and app. The compilation commands for these two packages are the same on all platforms and do not distinguish between CPU and GPU versions.
```
# Compile paddle-serving-client
mkdir build_client
cd build_client
cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR \
    -DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
    -DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
    -DCLIENT=ON ..
make -j10
cd ..

# Compile paddle-serving-app
mkdir build_app
cd build_app
cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR \
    -DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
    -DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
    -DAPP=ON ..
make -j10
cd ..
```

## Install Related Whl Packages
```
pip3.7 install build_server/python/dist/*.whl
pip3.7 install build_client/python/dist/*.whl
pip3.7 install build_app/python/dist/*.whl
export SERVING_BIN=${PWD}/build_server/core/general-server/serving
```
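Since an invalid `SERVING_BIN` only surfaces when the server starts, a quick sanity check right after setting it can save time. A minimal sketch (the `serving_bin_ok` helper is ours, not part of Serving):

```python
# Sketch: verify that SERVING_BIN points at an existing executable file.
import os

def serving_bin_ok(path: str) -> bool:
    """Return True if `path` is a regular file with the executable bit set."""
    return bool(path) and os.path.isfile(path) and os.access(path, os.X_OK)

if __name__ == "__main__":
    bin_path = os.environ.get("SERVING_BIN", "")
    print("SERVING_BIN ok" if serving_bin_ok(bin_path)
          else "SERVING_BIN missing or not executable")
```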

## Precautions

Note the last line `export SERVING_BIN` in the previous section. When the python server runs, it checks the `SERVING_BIN` environment variable. If you want to use a binary you compiled yourself, set this variable to the path of that binary, i.e. `export SERVING_BIN=${BUILD_DIR}/core/general-server/serving`, where `BUILD_DIR` is the absolute path of `build_server`. For example, you can `cd build_server` and execute `export SERVING_BIN=${PWD}/core/general-server/serving`.

## Enable WITH_OPENCV option to compile C++ Server

**Note:** You only need to do this when you are doing secondary development on the Paddle Serving C++ code and the newly added code depends on the OpenCV library.

To compile the Serving C++ Server with the WITH_OPENCV option enabled, an installed OpenCV library is required. If it is not installed yet, you can refer to the instructions later in this document to compile and install the OpenCV library.

Taking a build with the WITH_OPENCV option and the CPU version of the Paddle inference library as an example, add the `-DOPENCV_DIR=${OPENCV_DIR}` and `-DWITH_OPENCV=ON` options on top of the compilation commands above.
``` shell
OPENCV_DIR=your_opencv_dir #`your_opencv_dir` is the installation path of the opencv library.
mkdir build_server && cd build_server
cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR/ \
    -DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
    -DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
    -DOPENCV_DIR=${OPENCV_DIR} \
    -DWITH_OPENCV=ON \
    -DSERVER=ON ..
make -j10
```

**Note:** After the compilation is successful, you need to set the `SERVING_BIN` path.

## Attachment: CMake option description

| Compilation Options | Description | Default |
| :--------------: | :----------------------------------------: | :--: |
| WITH_AVX | Compile Paddle Serving with AVX intrinsics | OFF |
| WITH_MKL | Compile Paddle Serving with MKL support | OFF |
| WITH_GPU | Compile Paddle Serving with NVIDIA GPU | OFF |
| WITH_TRT | Compile Paddle Serving with TensorRT | OFF |
| WITH_OPENCV | Compile Paddle Serving with OPENCV | OFF |
| CUDNN_LIBRARY | Define CUDNN library and header path | |
| CUDA_TOOLKIT_ROOT_DIR | Define CUDA PATH | |
| TENSORRT_ROOT | Define TensorRT PATH | |
| CLIENT | Compile Paddle Serving Client | OFF |
| SERVER | Compile Paddle Serving Server | OFF |
| APP | Compile Paddle Serving App package | OFF |
| PACK | Compile for whl | OFF |

### WITH_GPU option

Paddle Serving supports prediction on GPUs through the PaddlePaddle inference library. The WITH_GPU option triggers detection of basic libraries such as CUDA/CUDNN on the system; if suitable versions are detected, the GPU kernels of the operators will be compiled when the PaddlePaddle inference library is built.

To compile the Paddle Serving GPU version on bare metal, you need to install these basic libraries:

- CUDA
- CUDNN

To compile the TensorRT version, you need to install the TensorRT library.

The things to note here are:

1. The CUDA/CUDNN and other basic libraries installed on the system where Serving is compiled must be compatible with the actual GPU device. For example, a Tesla V100 card requires at least CUDA 9.0. If the versions of basic libraries such as CUDA used during compilation are too low, the generated GPU code will be incompatible with the actual hardware device; the Serving process may fail to start, or serious problems such as a coredump may occur.
2. On the system that runs Paddle Serving, install a CUDA driver compatible with the actual GPU device, and basic libraries compatible with the CUDA/CUDNN versions used during compilation. If the CUDA/CUDNN versions installed on the running system are lower than those used at compile time, it may cause strange cuda function call failures and other problems.

For reference, the following table lists the CUDA, CUDNN, and TensorRT versions matched in the Paddle Serving images:

| Tag | CUDA | CUDNN | TensorRT |
| :----: | :-----: | :----------: | :----: |
| post101 | 10.1 | CUDNN 7.6.5 | 6.0.1 |
| post102 | 10.2 | CUDNN 8.0.5 | 7.1.3 |
| post11 | 11.0 | CUDNN 8.0.4 | 7.1.3 |

### Attachment: How to make the Paddle Serving compilation system detect the CUDNN library

After downloading the corresponding version of CUDNN from the official website of NVIDIA developer and decompressing it locally, add the `-DCUDNN_LIBRARY` parameter to the cmake compilation command and specify the path of the CUDNN library.

## Attachment: Compile and install OpenCV library
**Note:** You only need to do this when you need to include the OpenCV library in your C++ code.

* First, download the source package for compilation in a Linux environment from the OpenCV official website. Taking OpenCV 3.4.7 as an example, the download commands are as follows.

```
wget https://github.com/opencv/opencv/archive/3.4.7.tar.gz
tar -xf 3.4.7.tar.gz
```

After extraction, you will see the folder `opencv-3.4.7/` in the current directory.

* Compile OpenCV. Set the OpenCV source path (`root_path`) and installation path (`install_path`), enter the OpenCV source directory, and compile as follows.

```shell
root_path=your_opencv_root_path
install_path=${root_path}/opencv3

rm -rf build
mkdir build
cd build

cmake .. \
    -DCMAKE_INSTALL_PREFIX=${install_path} \
    -DCMAKE_BUILD_TYPE=Release \
    -DBUILD_SHARED_LIBS=OFF \
    -DWITH_IPP=OFF \
    -DBUILD_IPP_IW=OFF \
    -DWITH_LAPACK=OFF \
    -DWITH_EIGEN=OFF \
    -DCMAKE_INSTALL_LIBDIR=lib64 \
    -DWITH_ZLIB=ON \
    -DBUILD_ZLIB=ON \
    -DWITH_JPEG=ON \
    -DBUILD_JPEG=ON \
    -DWITH_PNG=ON \
    -DBUILD_PNG=ON \
    -DWITH_TIFF=ON \
    -DBUILD_TIFF=ON

make -j
make install
```


Here, `root_path` is the path of the downloaded OpenCV source code and `install_path` is the OpenCV installation path. After `make install` completes, the OpenCV header files and library files are generated in this folder; they are used to compile code that references the OpenCV library.

The final file structure under the installation path is as follows.

```
opencv3/
|-- bin
|-- include
|-- lib
|-- lib64
|-- share
```