We **strongly recommend** building **Paddle Serving** in Docker. For more images, please refer to the [Docker Image List](Docker_Images_CN.md).
**Tip-1**: This project only supports <mark>**Python 3.6/3.7/3.8/3.9**</mark>; all subsequent Python/Pip operations must use the correct Python version.
**Tip-2**: The GPU environments in the following examples all use cuda11.2-cudnn8. If you deploy with Python Pipeline and need Nvidia TensorRT to optimize prediction performance, please refer to [Supported Docker Images and Instructions](#4.-Supported-Docker-Images-and-Instruction) to choose another version.
<a name="1"></a>

## 1. Start the Docker Container
<mark>**Both the Serving development image and the Paddle development image are supported. Choose one of the two options described in chapters 1.1 and 1.2.**</mark> Deploying the Serving service on the Paddle docker image requires installing additional dependency libraries, so here we use the Serving development image directly.
### 1.1 Serving Dev Images (choose one of CPU/GPU)
| Environment | Serving Development Image Tag | Operating System | Paddle Development Image Tag | Operating System |
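For example, a CPU development container can be started as follows. This is a minimal sketch: the registry address follows the usual Paddle Serving convention, but the `0.8.3-devel` tag is an assumption, so use the tag from the table above or from the [Docker Image List](Docker_Images_CN.md).

```
# Pull the Serving development image (tag is an example; replace with the tag you need)
docker pull registry.baidubce.com/paddlepaddle/serving:0.8.3-devel

# Start and enter the container, mapping port 9292 for later serving examples
docker run -it --name paddle_serving_dev -p 9292:9292 \
    registry.baidubce.com/paddlepaddle/serving:0.8.3-devel bash
```

For GPU images, start the container with `nvidia-docker` (or `docker run --gpus all`) instead, so that the GPU is visible inside the container.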
## 2. Install Paddle Serving related whl Packages

Install the Paddle Serving whl packages. There are three packages: client, app and server; the server package is further divided into CPU and GPU versions. Install the ones that match your environment.
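For reference, an installation might look like the following. This is a sketch: the `0.8.3` version and the `.post112` suffix (for the cuda11.2-cudnn8 environment used in these examples) are assumptions taken from version numbers mentioned elsewhere in this document; adjust them to the release you actually use.

```
# Client and app packages (common to CPU and GPU deployments)
pip3 install paddle-serving-client==0.8.3
pip3 install paddle-serving-app==0.8.3

# Server package: choose ONE of the following
pip3 install paddle-serving-server==0.8.3                # CPU
pip3 install paddle-serving-server-gpu==0.8.3.post112    # GPU, cuda11.2-cudnn8
```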
The paddle-serving-server and paddle-serving-server-gpu installation packages support Centos 6/7, Ubuntu 16/18 and Windows 10.
The paddle-serving-client and paddle-serving-app installation packages support Linux and Windows; paddle-serving-client only supports Python 3.6/3.7/3.8/3.9.
## 3. Install Paddle related Python libraries
**You only need to install them when you use the `paddle_serving_client.convert` command or the `Python Pipeline framework`.**
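For example, a CPU-only deployment could install the Paddle wheel as below. This is a sketch: `2.2.2` matches the default `--paddle_version` mentioned later in this document, and the GPU variant with a CUDA suffix is an assumption; check the official PaddlePaddle installation guide for the exact package matching your CUDA version.

```
# CPU inference library
pip3 install paddlepaddle==2.2.2

# GPU inference library for the cuda11.2-cudnn8 environment (suffix is an example;
# GPU wheels with CUDA suffixes may require the extra package index described in
# the PaddlePaddle installation guide)
# pip3 install paddlepaddle-gpu==2.2.2.post112
```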
The standalone dependencies of the Serving and Paddle wheel packages are downloaded into `serving_dependent_wheels/` and `paddle_dependent_wheels/` under the `py3x_offline_whls` directory.
For **Windows 10 users**, please refer to the document [Paddle Serving Guide for Windows Platform](Windows_Tutorial_CN.md).
The Serving and Paddle wheel packages can be installed locally by running the `install.py` script, whose parameters are as follows:
```
python3 install.py
--python_version : Python version for installing wheels, one of [py36, py37, py38, py39], py37 default.
--device : Type of devices, one of [cpu, gpu], cpu default.
--cuda_version : CUDA version for GPU, one of [101, 102, 112, empty], empty default.
--serving_version : Version of Serving, one of [0.8.3, no_install], 0.8.3 default.
--paddle_version : Version of Paddle, one of [2.2.2, no_install], 2.2.2 default.
```
**2. Specify the `SERVING_BIN` path in the environment variable**
After completing step 1 of the installation, you can skip this step if you only use the Python Pipeline mode.

If you start the service from the command line with C++ Serving, you need to export the environment variable `SERVING_BIN` in the command-line window or service launcher so that the local serving binary is used to run the service; see the sketch below.

Since a binary package containing all versions would be about 20 GB, download links are provided per version. Manually `wget` the specified version into the `serving_bin` directory, decompress it, and export the path as an environment variable.
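A minimal sketch of this step, assuming a GPU (cuda11.2) build of Serving 0.8.3 whose archive has already been downloaded into `serving_bin/`; the archive and directory names below are examples and depend on the version you download.

```
cd serving_bin
tar -xzf serving-gpu-112-0.8.3.tar.gz                      # archive name is an example
export SERVING_BIN=${PWD}/serving-gpu-112-0.8.3/serving    # point to the extracted binary
```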
2. Install only the py39 version of the Serving CPU wheel package: set `--paddle_version="no_install"` to skip installing the Paddle prediction library and `--device="cpu"` to select the CPU version, as shown in the command below.
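Using the parameters listed above, this example can be run as follows (adjust the values to your own environment):

```
python3 install.py --python_version="py39" --device="cpu" --paddle_version="no_install"
```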
When the above steps are completed, you can run the environment check function from the command line; it automatically runs Paddle Serving examples to verify the environment-related configuration.
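A possible way to invoke the check is shown below; the `check` sub-command of `paddle_serving_server.serve` is assumed here, so consult the Paddle Serving documentation if your installed version exposes the check differently.

```
python3 -m paddle_serving_server.serve check
```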