From ba785f3f6b2c0d35550787656a6bb4db1f9712cb Mon Sep 17 00:00:00 2001
From: bjjwwang
Date: Mon, 15 Nov 2021 19:56:09 +0800
Subject: [PATCH] fix compile cn and en

---
 doc/Compile_CN.md |   2 +-
 doc/Compile_EN.md | 352 +++++++++++++++++++++++++---------------------
 2 files changed, 189 insertions(+), 165 deletions(-)

diff --git a/doc/Compile_CN.md b/doc/Compile_CN.md
index cd39ba12..d4326e0f 100644
--- a/doc/Compile_CN.md
+++ b/doc/Compile_CN.md
@@ -42,7 +42,7 @@

 我们提供了五个环境的开发镜像,分别是CPU, Cuda10.1+Cudnn7, Cuda10.2+Cudnn7,Cuda10.2+Cudnn8, Cuda11.2+Cudnn8。我们提供了Serving开发镜像涵盖以上环境。与此同时,我们也支持Paddle开发镜像。

-其中Serving镜像名是 **paddlepaddle/serving:${Serving开发镜像Tag}**, Paddle开发镜像名是 **paddlepaddle/paddle:${Paddle开发镜像Tag}**。为了防止用户对两套镜像出现混淆,我们分别解释一下两套镜像的由来。
+其中Serving镜像名是 **paddlepaddle/serving:${Serving开发镜像Tag}**(如果网络不佳可以访问**registry.baidubce.com/paddlepaddle/serving:${Serving开发镜像Tag}**), Paddle开发镜像名是 **paddlepaddle/paddle:${Paddle开发镜像Tag}**。为了防止用户对两套镜像出现混淆,我们分别解释一下两套镜像的由来。

 Serving开发镜像是Serving套件为了支持各个预测环境提供的用于编译、调试预测服务的镜像,Paddle开发镜像是Paddle在官网发布的用于编译、开发、训练模型使用镜像。为了让Paddle开发者能够在同一个容器内直接使用Serving。对于上个版本就已经使用Serving用户的开发者来说,Serving开发镜像应该不会感到陌生。但对于熟悉Paddle训练框架生态的开发者,目前应该更熟悉已有的Paddle开发镜像。为了适应所有用户的不同习惯,我们对这两套镜像都做了充分的支持。

diff --git a/doc/Compile_EN.md b/doc/Compile_EN.md
index c8cb3205..64804e2a 100644
--- a/doc/Compile_EN.md
+++ b/doc/Compile_EN.md
@@ -6,15 +6,15 @@

 Compiling Paddle Serving is divided into the following steps

-- Compilation environment preparation: According to the needs of the model and operating environment, select the most suitable image
-- Download the code library: Download the Serving code library, and perform initialization operations as needed
-- Environment variable preparation: According to the needs of the running environment, determine the various environment variables of Python. For example, the GPU environment also needs to determine the environment variables such as Cuda, Cudnn, TensorRT and so on.
-- Official compilation: Compile `paddle-serving-server`, `paddle-serving-client`, `paddle-serving-app` related whl packages
-- Install related whl packages: install the three compiled whl packages, and set the SERVING_BIN environment variable
+- Compilation Environment Preparation: Select the most suitable image according to the needs of the model and operating environment
+- Download the Serving Code Repo: Download the Serving repository, and perform initialization operations as needed
+- Environment Variable Preparation: Determine the Python environment variables required by the running environment; a GPU environment additionally needs environment variables such as Cuda, Cudnn, and TensorRT
+- Compilation: Compile the `paddle-serving-server`, `paddle-serving-client`, and `paddle-serving-app` whl packages
+- Install Related Whl Packages: Install the three compiled whl packages, and set the SERVING_BIN environment variable

 In addition, for some C++ secondary development scenarios, we also provide OPENCV binding solutions.
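+For orientation, here are the five steps condensed into a single CPU-only shell session. This is only a sketch: the image tag, Python paths, and build options are illustrative examples taken from the detailed sections below and may need adjusting for your environment.
+
+```shell
+docker pull paddlepaddle/serving:0.7.0-devel            # 1. compilation environment
+docker run --rm -it paddlepaddle/serving:0.7.0-devel bash
+git clone https://github.com/PaddlePaddle/Serving       # 2. code repo
+cd Serving && git submodule update --init --recursive
+export PYTHON_INCLUDE_DIR=/usr/include/python3.7/       # 3. environment variables
+export PYTHON_LIBRARIES=/usr/local/lib
+export PYTHON_EXECUTABLE=/usr/local/bin/python3.7
+mkdir build_server && cd build_server                   # 4. compilation
+cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR \
+      -DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
+      -DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
+      -DSERVER=ON -DWITH_GPU=OFF ..
+make -j20 && cd ..
+pip3.7 install build_server/python/dist/*.whl           # 5. install whl packages
+```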
-## Compilation environment requirements
+## Compilation Environment Requirements

 | module | version |
 | :--------------------------: | :-------------------------------: |
 | gcc-c++ | 5.4.0(Cuda 10.1) and 8.2.0 |
 | cmake | 3.2.0 and later |
 | Python | 3.6.0 and later |
-| Go | 1.9.2 and later |
+| Go | 1.17.2 and later |
 | git | 2.17.1 and later |
 | glibc-static | 2.17 |
 | openssl-devel | 1.0.2k |
 | libSM | 1.2.2 |
 | libXrender | 0.9.10 |

-It is recommended to use Docker for compilation. We have prepared the Paddle Serving compilation environment for you, see [this document](Docker_Images_EN.md).
+Docker compilation is recommended. We have prepared the Paddle Serving compilation environment for you, with all of the above compilation dependencies already configured. For details, please refer to [this document](Docker_Images_EN.md).

-## Get Code
+We provide development images for five environments: CPU, Cuda10.1+Cudnn7, Cuda10.2+Cudnn7, Cuda10.2+Cudnn8, and Cuda11.2+Cudnn8. Serving development images cover all of these environments. In addition, we also support the corresponding Paddle development images.

-``` python
-git clone https://github.com/PaddlePaddle/Serving
-cd Serving && git submodule update --init --recursive
+The Serving image name is **paddlepaddle/serving:${Serving development image Tag}** (if your network connection is poor, you can pull from **registry.baidubce.com/paddlepaddle/serving:${Serving development image Tag}** instead), and the Paddle development image name is **paddlepaddle/paddle:${Paddle development image Tag}**. To prevent users from confusing the two sets of images, we explain the origin of each below.
+
+The Serving development images are provided by the Serving suite for compiling and debugging prediction services in each supported prediction environment, while the Paddle development images are released on the Paddle official website for compiling, developing, and training models; supporting the latter lets Paddle developers use Serving directly in the same container. Developers who used Serving in a previous version will already be familiar with the Serving development images, while developers coming from the Paddle training framework ecosystem will be more familiar with the existing Paddle development images. To accommodate both habits, we fully support both sets of images.
+
+| Environment | Serving Dev Image Tag | OS | Paddle Dev Image Tag | OS |
+| :--------------------------: | :-------------------------------: | :-------------: | :-------------------: | :----------------: |
+| CPU | 0.7.0-devel | Ubuntu 16.04 | 2.2.0 | Ubuntu 18.04 |
+| Cuda10.1+Cudnn7 | 0.7.0-cuda10.1-cudnn7-devel | Ubuntu 16.04 | None | None |
+| Cuda10.2+Cudnn7 | 0.7.0-cuda10.2-cudnn7-devel | Ubuntu 16.04 | 2.2.0-cuda10.2-cudnn7 | Ubuntu 16.04 |
+| Cuda10.2+Cudnn8 | 0.7.0-cuda10.2-cudnn8-devel | Ubuntu 16.04 | None | None |
+| Cuda11.2+Cudnn8 | 0.7.0-cuda11.2-cudnn8-devel | Ubuntu 16.04 | 2.2.0-cuda11.2-cudnn8 | Ubuntu 18.04 |
+
+First, pull the image for the environment you need. In the **Environment** column of the table above, every entry except CPU (i.e., Cuda**+Cudnn**) is a GPU environment.
+
+You can use the Serving Dev Images as follows.
 ```
+docker pull paddlepaddle/serving:${Serving Dev Image Tag}

-## PYTHONROOT settings
+# For GPU images
+nvidia-docker run --rm -it paddlepaddle/serving:${Serving Dev Image Tag} bash

-```shell
-# For example, the path of python is /usr/bin/python, you can set PYTHONROOT
-export PYTHONROOT=/usr
+# For CPU images
+docker run --rm -it paddlepaddle/serving:${Serving Dev Image Tag} bash
 ```

-If you are using a Docker development image, please follow the following to determine the Python version to be compiled, and set the corresponding environment variables
+You can also use the Paddle Dev Images.

+## Download the Serving Code Repo
+**Note: If you are using a Paddle development image, you need to run `bash tools/paddle_env_install.sh` manually after downloading the repo (see the last line of the code block below).**
 ```
-#Python3.6
-export PYTHONROOT=/usr/local/
-export PYTHON_INCLUDE_DIR=$PYTHONROOT/include/python3.6m
-export PYTHON_LIBRARIES=$PYTHONROOT/lib/libpython3.6m.so
-export PYTHON_EXECUTABLE=$PYTHONROOT/bin/python3.6
-
-#Python3.7
-export PYTHONROOT=/usr/local/
-export PYTHON_INCLUDE_DIR=$PYTHONROOT/include/python3.7m
-export PYTHON_LIBRARIES=$PYTHONROOT/lib/libpython3.7m.so
-export PYTHON_EXECUTABLE=$PYTHONROOT/bin/python3.7
-
-#Python3.8
-export PYTHONROOT=/usr/local/
-export PYTHON_INCLUDE_DIR=$PYTHONROOT/include/python3.8
-export PYTHON_LIBRARIES=$PYTHONROOT/lib/libpython3.8.so
-export PYTHON_EXECUTABLE=$PYTHONROOT/bin/python3.8
+git clone https://github.com/PaddlePaddle/Serving
+cd Serving && git submodule update --init --recursive
+# Only needed in the Paddle development image; the Serving development image can skip it
+bash tools/paddle_env_install.sh
 ```

-## Install Python dependencies
+## Environment Variables Preparation

-```shell
-pip install -r python/requirements.txt -i https://mirror.baidu.com/pypi/simple
-```
+**Set PYTHON environment variables**

-If you use other Python version, please use the right `pip` accordingly.
+If you are using a Serving development image, please follow the steps below to determine the Python version to compile against and set the corresponding environment variables. Three environment variables need to be set in total: `PYTHON_INCLUDE_DIR`, `PYTHON_LIBRARIES`, and `PYTHON_EXECUTABLE`. Below we take Python 3.7 as an example to show how to set them.

-## GOPATH Setting
-The default GOPATH is set to `$HOME/go`, you can also set it to other values. **If it is the Docker environment provided by Serving, you do not need to set up.**
+1) Set `PYTHON_INCLUDE_DIR`

-```shell
-export GOPATH=$HOME/go
-export PATH=$PATH:$GOPATH/bin
+Search for the directory containing Python.h
 ```
+find / -name Python.h
+```
+Usually this finds something like `**/include/python3.7/Python.h`, and we only need its directory: for example, if you find `/usr/include/python3.7/Python.h`, simply run `export PYTHON_INCLUDE_DIR=/usr/include/python3.7/`.
+If nothing is found, either 1) the Python development package is not installed and needs to be installed, or 2) you do not have sufficient permissions to search the relevant system directories.
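+
+Alternatively (a sketch, assuming the target interpreter is already on PATH as `python3.7`), the interpreter can report its own include directory through the standard `sysconfig` module:
+```
+python3.7 -c "import sysconfig; print(sysconfig.get_paths()['include'])"
+```
+If this prints `/usr/include/python3.7`, set `export PYTHON_INCLUDE_DIR=/usr/include/python3.7/` as above.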
-## Get go packages +2) Set `PYTHON_LIBRARIES` -```shell -go env -w GO111MODULE=on -go env -w GOPROXY=https://goproxy.cn,direct -go get -u github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway@v1.15.2 -go get -u github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger@v1.15.2 -go get -u github.com/golang/protobuf/protoc-gen-go@v1.4.3 -go get -u google.golang.org/grpc@v1.33.0 -go env -w GO111MODULE=auto +Search for libpython3.7.so +``` +find / -name libpython3.7.so ``` +Usually there will be something similar to `**/lib/libpython3.7.so` or `**/lib/x86_64-linux-gnu/libpython3.7.so`, we only need to take its folder directory, For example, find `/usr/local/lib/libpython3.7.so`, then we only need `export PYTHON_LIBRARIES=/usr/local/lib`. +If it is not found, it means 1) Statically compiling Python, you need to reinstall the dynamically compiled Python 2) The county is not enough to view the relevant system catalogs. +3) Set `PYTHON_EXECUTABLE` -## Compile Server +View the python3.7 path directly +``` +which python3.7 +``` +If the result is `/usr/local/bin/python3.7`, then directly set `export PYTHON_EXECUTABLE=/usr/local/bin/python3.7`. -### Integrated CPU version paddle inference library +It is very important to set these three environment variables. After the settings are completed, we can perform the following operations (the following is the PYTHON environment of the development image of Paddle Cuda 11.2, if it is another image, please change the corresponding `PYTHON_INCLUDE_DIR`, `PYTHON_LIBRARIES` , `PYTHON_EXECUTABLE`). -``` shell -mkdir server-build-cpu && cd server-build-cpu -cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR \ - -DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \ - -DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \ - -DSERVER=ON .. -make -j10 ``` +# The following three environment variables are the environment of Paddle development mirror Cuda11.2, such as other mirrors may need to be modified +export PYTHON_INCLUDE_DIR=/usr/include/python3.7m/ +export PYTHON_LIBRARIES=/usr/lib/x86_64-linux-gnu/libpython3.7m.so +export PYTHON_EXECUTABLE=/usr/bin/python3.7 -you can execute `make install` to put targets under directory `./output`, you need to add`-DCMAKE_INSTALL_PREFIX=./output`to specify output path to cmake command shown above. +export GOPATH=$HOME/go +export PATH=$PATH:$GOPATH/bin -### Integrated GPU version paddle inference library +python -m install -r python/requirements.txt + +go env -w GO111MODULE=on +go env -w GOPROXY=https://goproxy.cn,direct +go install github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway@v1.15.2 +go install github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger@v1.15.2 +go install github.com/golang/protobuf/protoc-gen-go@v1.4.3 +go install google.golang.org/grpc@v1.33.0 +go env -w GO111MODULE=auto +``` -Compared with CPU environment, GPU environment needs to refer to the following table, -**It should be noted that the following table is used as a reference for non-Docker compilation environment. The Docker compilation environment has been configured with relevant parameters and does not need to be specified in cmake process. ** +If you are a GPU user, you need to set additional `CUDA_PATH`, `CUDNN_LIBRARY`, `CUDA_CUDART_LIBRARY` and `TENSORRT_LIBRARY_PATH`. +``` +export CUDA_PATH='/usr/local/cuda' +export CUDNN_LIBRARY='/usr/local/cuda/lib64/' +export CUDA_CUDART_LIBRARY="/usr/local/cuda/lib64/" +export TENSORRT_LIBRARY_PATH="/usr/" +``` +The meaning of environment variables is shown in the table below. 
| cmake environment variable | meaning | GPU environment considerations | whether Docker environment is needed |
|-----------------------|-------------------------------------|-------------------------------|--------------------|
| CUDA_TOOLKIT_ROOT_DIR | cuda installation path, usually /usr/local/cuda | Required for all environments | No (/usr/local/cuda) |
| CUDNN_LIBRARY | The directory where libcudnn.so.* is located, usually /usr/local/cuda/lib64/ | Required for all environments | No (/usr/local/cuda/lib64/) |
| CUDA_CUDART_LIBRARY | The directory where libcudart.so.* is located, usually /usr/local/cuda/lib64/ | Required for all environments | No (/usr/local/cuda/lib64/) |
-| TENSORRT_ROOT | The upper level directory of the directory where libnvinfer.so.* is located, depends on the TensorRT installation directory | Cuda 9.0/10.0 does not need, other needs | No (/usr) |
+| TENSORRT_ROOT | The upper level directory of the directory where libnvinfer.so.* is located, depends on the TensorRT installation directory | Required for all environments | No (/usr) |

-If not in Docker environment, users can refer to the following execution methods. The specific path is subject to the current environment, and the code is only for reference.TENSORRT_LIBRARY_PATH is related to the TensorRT version and should be set according to the actual situation。For example, in the cuda10.1 environment, the TensorRT version is 6.0 (/usr/local/TensorRT6-cuda10.1-cudnn7/targets/x86_64-linux-gnu/),In the cuda10.2 and cuda11.0 environment, the TensorRT version is 7.1 (/usr/local/TensorRT-7.1.3.4/targets/x86_64-linux-gnu/).
-``` shell
-export CUDA_PATH='/usr/local/cuda'
-export CUDNN_LIBRARY='/usr/local/cuda/lib64/'
-export CUDA_CUDART_LIBRARY="/usr/local/cuda/lib64/"
-export TENSORRT_LIBRARY_PATH="/usr/local/TensorRT6-cuda10.1-cudnn7/targets/x86_64-linux-gnu/"
+## Compilation

-mkdir server-build-gpu && cd server-build-gpu
-cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR \
-    -DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
-    -DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
-    -DCUDA_TOOLKIT_ROOT_DIR=${CUDA_PATH} \
-    -DCUDNN_LIBRARY=${CUDNN_LIBRARY} \
-    -DCUDA_CUDART_LIBRARY=${CUDA_CUDART_LIBRARY} \
-    -DTENSORRT_ROOT=${TENSORRT_LIBRARY_PATH} \
-    -DSERVER=ON \
-    -DWITH_GPU=ON ..
-make -j10
-```
-
-Execute `make install` to put the target output in the `./output` directory.
+We need to compile three targets in total, namely `paddle-serving-server`, `paddle-serving-client`, and `paddle-serving-app`, of which `paddle-serving-server` comes in separate CPU and GPU versions.

-### Compile C++ Server under the condition of WITH_OPENCV=ON
-**Note:** Only when you need to redevelop the paddle serving C + + part, and the new code depends on the OpenCV library, you need to do so.
+### Compile paddle-serving-server

-First of all , OpenCV library should be installed, if not, please refer to the `Compile and install OpenCV` section later in this article.
+For the CPU version, run:
+```
+mkdir build_server
+cd build_server
+cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR \
+    -DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
+    -DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
+    -DSERVER=ON \
+    -DWITH_GPU=OFF ..
+make -j20
+cd ..
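+# Optional sanity check: per the Precautions section below, the compiled
+# server binary is expected at this path
+ls build_server/core/general-server/serving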
+```

-In the compile command, add `DOPENCV_DIR=${OPENCV_DIR}` and `DWITH_OPENCV=ON`,for example:
-``` shell
-OPENCV_DIR=your_opencv_dir #`your_opencv_dir` is the installation path of OpenCV library。
-mkdir server-build-cpu && cd server-build-cpu
-cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR/ \
-    -DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
-    -DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
-    -DOPENCV_DIR=${OPENCV_DIR} \
-    -DWITH_OPENCV=ON \
-    -DSERVER=ON ..
-make -j10
+For the GPU version, run:
+```
+mkdir build_server
+cd build_server
+cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR \
+    -DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
+    -DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
+    -DCUDA_TOOLKIT_ROOT_DIR=${CUDA_PATH} \
+    -DCUDNN_LIBRARY=${CUDNN_LIBRARY} \
+    -DCUDA_CUDART_LIBRARY=${CUDA_CUDART_LIBRARY} \
+    -DTENSORRT_ROOT=${TENSORRT_LIBRARY_PATH} \
+    -DSERVER=ON \
+    -DWITH_GPU=ON ..
+make -j20
+cd ..
 ```

-**Note:** After the compilation is successful, you need to set the `SERVING_BIN` path, see the following [Notes](Compile_EN.md#Notes).
-## Compile Client
+### Compile paddle-serving-client and paddle-serving-app

-``` shell
-mkdir client-build && cd client-build
+Next, we can continue to compile the client and app. The compilation commands for these two packages are the same on all platforms and do not distinguish between the CPU and GPU versions.
+```
+# Compile paddle-serving-client
+mkdir build_client
+cd build_client
 cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR \
     -DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
     -DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
     -DCLIENT=ON ..
 make -j10
+cd ..

-```
-
-execute `make install` to put targets under directory `./output`
-
-## Compile the App
-
-```bash
-mkdir app-build && cd app-build
+# Compile paddle-serving-app
+mkdir build_app
+cd build_app
 cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR \
     -DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
     -DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
     -DAPP=ON ..
-make
+make -j10
+cd ..
 ```

+## Install Related Whl Packages
+
+Use the `pip` that matches the Python version you compiled against; here `pip3.7` matches the Python 3.7 example above.
+```
+pip3.7 install build_server/python/dist/*.whl
+pip3.7 install build_client/python/dist/*.whl
+pip3.7 install build_app/python/dist/*.whl
+export SERVING_BIN=${PWD}/build_server/core/general-server/serving
+```

-## Install wheel package
-
-Regardless of the client, server or App part, after compiling, install the whl package in `python/dist/` in the temporary directory(`server-build-cpu`, `server-build-gpu`, `client-build`,`app-build`) of the compilation process.
-for example:cd server-build-cpu/python/dist && pip install -U xxxxx.whl
+## Precautions
+
+Note the last line `export SERVING_BIN` in the previous section. When running the python server, the `SERVING_BIN` environment variable is checked; if you want to use a binary you compiled yourself, set this variable to the path of that binary, i.e. `export SERVING_BIN=${BUILD_DIR}/core/general-server/serving`, where BUILD_DIR is the absolute path of `build_server`. Equivalently, you can cd into `build_server` and run `export SERVING_BIN=${PWD}/core/general-server/serving`.

+## Enable WITH_OPENCV option to compile C++ Server

-## Note
-
-When running the python server, it will check the `SERVING_BIN` environment variable.
If you want to use your own compiled binary file, set the environment variable to the path of the corresponding binary file, usually`export SERVING_BIN=${BUILD_DIR}/core/general-server/serving`.
-BUILD_DIR is the absolute path of server build CPU or server build GPU。
-for example: cd server-build-cpu && export SERVING_BIN=${PWD}/core/general-server/serving
+**Note:** You only need to do this when you need to do secondary development on the Paddle Serving C++ part and the newly added code depends on the OpenCV library.

+Compiling the Serving C++ Server with the WITH_OPENCV option enabled requires an installed OpenCV library. If it is not installed yet, refer to the instructions later in this document to compile and install OpenCV.

+Taking the CPU version with WITH_OPENCV enabled as an example, add the `-DOPENCV_DIR=${OPENCV_DIR}` and `-DWITH_OPENCV=ON` options on top of the compilation command above.
+``` shell
+OPENCV_DIR=your_opencv_dir # `your_opencv_dir` is the installation path of the OpenCV library
+mkdir build_server && cd build_server
+cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR \
+    -DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
+    -DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
+    -DOPENCV_DIR=${OPENCV_DIR} \
+    -DWITH_OPENCV=ON \
+    -DSERVER=ON ..
+make -j10
+```
+**Note:** After the compilation is successful, you need to set the `SERVING_BIN` path; see [Precautions](#precautions) above.

-## Verify
-Please use the example under `python/examples` to verify.

-## CMake Option Description
+## Attachment: CMake Option Description

-| Compile Options | Description | Default |
-| :--------------: | :----------------------------------------: | :--: |
-| WITH_AVX | Compile Paddle Serving with AVX intrinsics | OFF |
-| WITH_MKL | Compile Paddle Serving with MKL support | OFF |
-| WITH_GPU | Compile Paddle Serving with NVIDIA GPU | OFF |
-| WITH_OPENCV | Compile Paddle Serving with OPENCV | OFF |
-| CUDNN_LIBRARY | Define CuDNN library and header path | |
-| CUDA_TOOLKIT_ROOT_DIR | Define CUDA PATH | |
-| TENSORRT_ROOT | Define TensorRT PATH | |
-| CLIENT | Compile Paddle Serving Client | OFF |
-| SERVER | Compile Paddle Serving Server | OFF |
-| APP | Compile Paddle Serving App package | OFF |
-| PACK | Compile for whl | OFF |
+| Compilation Options | Description | Default |
+| :--------------: | :----------------------------------------: | :--: |
+| WITH_AVX | Compile Paddle Serving with AVX intrinsics | OFF |
+| WITH_MKL | Compile Paddle Serving with MKL support | OFF |
+| WITH_GPU | Compile Paddle Serving with NVIDIA GPU | OFF |
+| WITH_TRT | Compile Paddle Serving with TensorRT | OFF |
+| WITH_OPENCV | Compile Paddle Serving with OPENCV | OFF |
+| CUDNN_LIBRARY | Define CuDNN library and header path | |
+| CUDA_TOOLKIT_ROOT_DIR | Define CUDA PATH | |
+| TENSORRT_ROOT | Define TensorRT PATH | |
+| CLIENT | Compile Paddle Serving Client | OFF |
+| SERVER | Compile Paddle Serving Server | OFF |
+| APP | Compile Paddle Serving App package | OFF |
+| PACK | Compile for whl | OFF |

-### WITH_GPU Option
+### WITH_GPU option

-Paddle Serving supports prediction on the GPU through the PaddlePaddle inference library. The WITH_GPU option is used to detect basic libraries such as CUDA/CUDNN on the system. If an appropriate version is detected, the GPU Kernel will be compiled when PaddlePaddle is compiled.
+Paddle Serving supports prediction on the GPU through the PaddlePaddle inference library. The WITH_GPU option is used to detect basic libraries such as CUDA/CUDNN on the system.
If a suitable version is detected, the GPU version of the OP kernels will be compiled when PaddlePaddle is compiled.

To compile the Paddle Serving GPU version on bare metal, you need to install these basic libraries:
-- CUDA
-- CuDNN
+- CUDA
+- CuDNN

To compile the TensorRT version, you need to install the TensorRT library.

-Note here:
-
-1. The basic library versions such as CUDA/CUDNN installed on the system where Serving is compiled, needs to be compatible with the actual GPU device. For example, the Tesla V100 card requires at least CUDA 9.0. If the version of the basic library such as CUDA used during compilation is too low, the generated GPU code is not compatible with the actual hardware device, which will cause the Serving process to fail to start or serious problems such as coredump.
-2. Install the CUDA driver compatible with the actual GPU device on the system running Paddle Serving, and install the basic library compatible with the CUDA/CuDNN version used during compilation. If the version of CUDA/CuDNN installed on the system running Paddle Serving is lower than the version used at compile time, it may cause some cuda function call failures and other problems.
+The things to note here are:
+
+1. The versions of basic libraries such as CUDA/CUDNN installed on the system where Serving is compiled need to be compatible with the actual GPU device. For example, a Tesla V100 card requires at least CUDA 9.0. If the version of basic libraries such as CUDA used during compilation is too low, the generated GPU code will be incompatible with the actual hardware device, and the Serving process will fail to start, or serious problems such as a coredump may occur.
+2. On the system running Paddle Serving, install a CUDA driver compatible with the actual GPU device, and install basic libraries compatible with the CUDA/CuDNN versions used during compilation. If the CUDA/CuDNN versions installed on the system running Paddle Serving are lower than those used during compilation, it may cause strange problems such as cuda function call failures.

-The following is the base library version matching relationship used by the PaddlePaddle release version for reference:
+The following is the matching relationship between the Cuda, Cudnn, and TensorRT versions in the Paddle Serving images, for reference:

-|         | CUDA | CuDNN       | TensorRT |
-| :----:  | :--: | :---------: | :------: |
-| post101 | 10.1 | CuDNN 7.6.5 | 6.0.1    |
-| post102 | 10.2 | CuDNN 8.0.5 | 7.1.3    |
-| post11  | 11.0 | CuDNN 8.0.4 | 7.1.3    |
+|         | CUDA | CuDNN       | TensorRT |
+| :----:  | :--: | :---------: | :------: |
+| post101 | 10.1 | CuDNN 7.6.5 | 6.0.1    |
+| post102 | 10.2 | CuDNN 8.0.5 | 7.1.3    |
+| post11  | 11.0 | CuDNN 8.0.4 | 7.1.3    |

-### How to make the compiler detect the CuDNN library
+### Attachment: How to make the Paddle Serving compilation system detect the CuDNN library

-Download the corresponding CUDNN version from NVIDIA developer official website and decompressing it, add `-DCUDNN_ROOT` to cmake command, to specify the path of CUDNN.
-
-## Compile and install OpenCV
-**Note:** You need to do this only if you need to import the opencv library into your C + + code.
+After downloading the corresponding version of CuDNN from the NVIDIA developer official website and decompressing it locally, add the `-DCUDNN_LIBRARY` parameter to the cmake compilation command to specify the path of the CuDNN library.
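+
+For example (a sketch only; `/usr/local/cudnn-8.0.5` is an illustrative decompression path, not a required one):
+``` shell
+# CUDNN_LIBRARY should point at the directory containing libcudnn.so.*
+cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR \
+    -DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
+    -DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
+    -DCUDNN_LIBRARY=/usr/local/cudnn-8.0.5/lib64/ \
+    -DSERVER=ON \
+    -DWITH_GPU=ON ..
+```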
+## Attachment: Compile and install OpenCV library
+**Note:** You only need to do this when you need to include the OpenCV library in your C++ code.

-* First of all, you need to download the source code compiled package in the Linux environment from the OpenCV official website. Taking OpenCV3.4.7 as an example, the download command is as follows.
+* First, download the source package for the Linux environment from the OpenCV official website. Taking OpenCV 3.4.7 as an example, the download commands are as follows.

 ```
 wget https://github.com/opencv/opencv/archive/3.4.7.tar.gz
 tar -xf 3.4.7.tar.gz
 ```

-Finally, you can see the folder of `opencv-3.4.7/` in the current directory.
-
-* Compile OpenCV, the OpenCV source path (`root_path`) and installation path (`install_path`) should be set by yourself. Enter the OpenCV source code path and compile it in the following way.
+After decompression, you can see the folder `opencv-3.4.7/` in the current directory.
+
+* Compile OpenCV, setting the OpenCV source path (`root_path`) and installation path (`install_path`) yourself. Enter the OpenCV source path and compile as follows.

 ```shell
 root_path=your_opencv_root_path
@@ -311,11 +336,10 @@ make -j
 make install
 ```

-Among them, `root_path` is the downloaded OpenCV source code path, and `install_path` is the installation path of OpenCV. After `make install` is completed, the OpenCV header file and library file will be generated in this folder for later source code compilation.
-
+Among them, `root_path` is the path of the downloaded OpenCV source code and `install_path` is the installation path of OpenCV. After `make install` completes, the OpenCV header files and library files will be generated under this folder, and they are used to compile code that references the OpenCV library.

-The final file structure under the OpenCV installation path is as follows.
+The final file structure under the installation path is as follows.

 ```
 opencv3/