Unverified commit 567dc666, authored by TeslaZhao, committed by GitHub

Merge branch 'develop' into develop

@@ -7,10 +7,10 @@
 | module       | version                                   |
 | :----------: | :---------------------------------------: |
 | OS           | Ubuntu16 and 18/CentOS 7                  |
-| gcc          | 4.8.5(Cuda 9.0 and 10.0) and 8.2(Others)  |
-| gcc-c++      | 4.8.5(Cuda 9.0 and 10.0) and 8.2(Others)  |
+| gcc          | 5.4.0(Cuda 10.1) and 8.2.0                |
+| gcc-c++      | 5.4.0(Cuda 10.1) and 8.2.0                |
 | cmake        | 3.2.0 and later                           |
-| Python       | 2.7.2 and later / 3.5.1 and later         |
+| Python       | 3.6.0 and later                           |
 | Go           | 1.9.2 and later                           |
 | git          | 2.17.1 and later                          |
 | glibc-static | 2.17                                      |
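As a quick sanity check against the requirements above, the following commands print the installed versions of the main build dependencies (a minimal sketch, not part of the original build steps):

```shell
# Print the versions of the toolchain components listed in the table above
gcc --version | head -n1
g++ --version | head -n1
cmake --version | head -n1
python3 --version
go version
git --version
ldd --version | head -n1   # reports the glibc version
```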
@@ -42,18 +42,6 @@ export PYTHONROOT=/usr
 If you are using a Docker development image, please follow the steps below to determine the Python version to be compiled, and set the corresponding environment variables.
 ```
-#Python 2.7
-export PYTHONROOT=/usr/local/python2.7.15/
-export PYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/
-export PYTHON_LIBRARIES=$PYTHONROOT/lib/libpython2.7.so
-export PYTHON_EXECUTABLE=$PYTHONROOT/bin/python2.7
-#Python 3.5
-export PYTHONROOT=/usr/local/python3.5.1
-export PYTHON_INCLUDE_DIR=$PYTHONROOT/include/python3.5m
-export PYTHON_LIBRARIES=$PYTHONROOT/lib/libpython3.5m.so
-export PYTHON_EXECUTABLE=$PYTHONROOT/bin/python3.5
 #Python3.6
 export PYTHONROOT=/usr/local/
 export PYTHON_INCLUDE_DIR=$PYTHONROOT/include/python3.6m
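If you are not sure where these files live on your machine, the same three variables can also be derived from the interpreter itself. This is only a sketch, and it assumes a `python3.6` that was built with `--enable-shared`:

```shell
# Derive PYTHON_INCLUDE_DIR / PYTHON_LIBRARIES / PYTHON_EXECUTABLE from the interpreter
export PYTHON_EXECUTABLE=$(which python3.6)
export PYTHON_INCLUDE_DIR=$($PYTHON_EXECUTABLE -c "import sysconfig; print(sysconfig.get_paths()['include'])")
export PYTHON_LIBRARIES=$($PYTHON_EXECUTABLE -c "import sysconfig, os; print(os.path.join(sysconfig.get_config_var('LIBDIR'), sysconfig.get_config_var('LDLIBRARY')))")
echo $PYTHON_INCLUDE_DIR $PYTHON_LIBRARIES $PYTHON_EXECUTABLE
```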
@@ -108,9 +96,9 @@ go get -u google.golang.org/grpc@v1.33.0
 ``` shell
 mkdir server-build-cpu && cd server-build-cpu
-cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ \
-      -DPYTHON_LIBRARIES=$PYTHONROOT/lib/libpython2.7.so \
-      -DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python \
+cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR \
+      -DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
+      -DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
       -DSERVER=ON ..
 make -j10
 ```
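The GPU server build follows the same pattern; the sketch below only adds the `WITH_GPU` switch that the GPU-related sections of this document refer to. The build directory name and the assumption that no extra CUDA/TensorRT flags are needed are mine, not taken from this diff:

```shell
# Sketch of a GPU server build; WITH_GPU is the option discussed later in this document
mkdir server-build-gpu && cd server-build-gpu
cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR \
      -DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
      -DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
      -DWITH_GPU=ON \
      -DSERVER=ON ..
make -j10
```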
@@ -176,9 +164,9 @@ make -j10
 ``` shell
 mkdir client-build && cd client-build
-cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ \
-      -DPYTHON_LIBRARIES=$PYTHONROOT/lib/libpython2.7.so \
-      -DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python \
+cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR \
+      -DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
+      -DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
       -DCLIENT=ON ..
 make -j10
 ```
@@ -191,9 +179,9 @@ execute `make install` to put targets under directory `./output`
 ```bash
 mkdir app-build && cd app-build
-cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ \
-      -DPYTHON_LIBRARIES=$PYTHONROOT/lib/libpython2.7.so \
-      -DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python \
+cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR \
+      -DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
+      -DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
       -DAPP=ON ..
 make
 ```
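After each build, `make install` places the targets under `./output`; installing the resulting Python wheels is then a single pip call per build directory. The exact wheel locations below are assumptions and may differ in your tree:

```shell
# Sketch: install whatever wheels the server/client/app builds produced
# (wheel paths depend on your build and are assumptions here)
python3.6 -m pip install $(find server-build-cpu -name "*.whl")
python3.6 -m pip install $(find client-build -name "*.whl")
python3.6 -m pip install $(find app-build -name "*.whl")
```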
@@ -257,8 +245,6 @@ The following is the base library version matching relationship used by the Padd
 |         | CUDA | CuDNN       | TensorRT |
 | :-----: | :--: | :---------: | :------: |
-| post9   | 9.0  | CuDNN 7.6.4 |          |
-| post10  | 10.0 | CuDNN 7.6.5 |          |
 | post101 | 10.1 | CuDNN 7.6.5 | 6.0.1    |
 | post102 | 10.2 | CuDNN 8.0.5 | 7.1.3    |
 | post11  | 11.0 | CuDNN 8.0.4 | 7.1.3    |
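The `postXX` tag in the first column is the suffix appended to the GPU package version at install time. As an illustration only (the package name and version shown are assumptions; check the release notes for the exact ones), a CUDA 10.1 machine would install something like:

```shell
# Hypothetical example: pick the post tag that matches your CUDA version
pip install paddle-serving-server-gpu==0.6.0.post101
```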
......
@@ -7,10 +7,10 @@
 | Component    | Version requirement                       |
 | :----------: | :---------------------------------------: |
 | OS           | Ubuntu16 and 18/CentOS 7                  |
-| gcc          | 4.8.5(Cuda 9.0 and 10.0) and 8.2(Others)  |
-| gcc-c++      | 4.8.5(Cuda 9.0 and 10.0) and 8.2(Others)  |
+| gcc          | 5.4.0(Cuda 10.1) and 8.2.0                |
+| gcc-c++      | 5.4.0(Cuda 10.1) and 8.2.0                |
 | cmake        | 3.2.0 and later                           |
-| Python       | 2.7.2 and later / 3.5.1 and later         |
+| Python       | 3.6.0 and later                           |
 | Go           | 1.9.2 and later                           |
 | git          | 2.17.1 and later                          |
 | glibc-static | 2.17                                      |
@@ -41,18 +41,6 @@ export PYTHONROOT=/usr
 If you are using a Docker development image, please follow the steps below to determine the Python version to be compiled, and set the corresponding environment variables.
 ```
-#Python 2.7
-export PYTHONROOT=/usr/local/python2.7.15/
-export PYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/
-export PYTHON_LIBRARIES=$PYTHONROOT/lib/libpython2.7.so
-export PYTHON_EXECUTABLE=$PYTHONROOT/bin/python2.7
-#Python 3.5
-export PYTHONROOT=/usr/local/python3.5.1
-export PYTHON_INCLUDE_DIR=$PYTHONROOT/include/python3.5m
-export PYTHON_LIBRARIES=$PYTHONROOT/lib/libpython3.5m.so
-export PYTHON_EXECUTABLE=$PYTHONROOT/bin/python3.5
 #Python3.6
 export PYTHONROOT=/usr/local/
 export PYTHON_INCLUDE_DIR=$PYTHONROOT/include/python3.6m
@@ -259,8 +247,6 @@ Paddle Serving supports prediction on GPU through the PaddlePaddle inference library. The WITH_GPU option
 |         | CUDA | CuDNN       | TensorRT |
 | :-----: | :--: | :---------: | :------: |
-| post9   | 9.0  | CuDNN 7.6.4 |          |
-| post10  | 10.0 | CuDNN 7.6.5 |          |
 | post101 | 10.1 | CuDNN 7.6.5 | 6.0.1    |
 | post102 | 10.2 | CuDNN 8.0.5 | 7.1.3    |
 | post11  | 11.0 | CuDNN 8.0.4 | 7.1.3    |
......
@@ -31,17 +31,10 @@ If you want to customize your Serving based on source code, use the version with
 | Description | OS | TAG | Dockerfile |
 | :---------: | :-: | :-: | :--------: |
-| CPU runtime | CentOS7 | latest | [Dockerfile](../tools/Dockerfile) |
-| CPU development | CentOS7 | latest-devel | [Dockerfile.devel](../tools/Dockerfile.devel) |
+| CPU development | Ubuntu16 | latest-devel | [Dockerfile.devel](../tools/Dockerfile.devel) |
+| GPU (cuda10.1-cudnn7-tensorRT6-gcc54) development | Ubuntu16 | latest-cuda10.1-cudnn7-gcc54-devel | [Dockerfile.cuda10.1-cudnn7-gcc54.devel](../tools/Dockerfile.cuda10.1-cudnn7-gcc54.devel) |
-| GPU (cuda9.0-cudnn7) runtime | CentOS7 | latest-cuda9.0-cudnn7 | [Dockerfile.cuda9.0-cudnn7](../tools/Dockerfile.cuda9.0-cudnn7) |
-| GPU (cuda9.0-cudnn7) development | CentOS7 | latest-cuda9.0-cudnn7-devel | [Dockerfile.cuda9.0-cudnn7.devel](../tools/Dockerfile.cuda9.0-cudnn7.devel) |
-| GPU (cuda10.0-cudnn7) runtime | CentOS7 | latest-cuda10.0-cudnn7 | [Dockerfile.cuda10.0-cudnn7](../tools/Dockerfile.cuda10.0-cudnn7) |
-| GPU (cuda10.0-cudnn7) development | CentOS7 | latest-cuda10.0-cudnn7-devel | [Dockerfile.cuda10.0-cudnn7.devel](../tools/Dockerfile.cuda10.0-cudnn7.devel) |
-| GPU (cuda10.1-cudnn7-tensorRT6) runtime | Ubuntu16 | latest-cuda10.1-cudnn7 | [Dockerfile.cuda10.1-cudnn7](../tools/Dockerfile.cuda10.1-cudnn7) |
 | GPU (cuda10.1-cudnn7-tensorRT6) development | Ubuntu16 | latest-cuda10.1-cudnn7-devel | [Dockerfile.cuda10.1-cudnn7.devel](../tools/Dockerfile.cuda10.1-cudnn7.devel) |
-| GPU (cuda10.2-cudnn8-tensorRT7) runtime | Ubuntu16 | latest-cuda10.2-cudnn8 | [Dockerfile.cuda10.2-cudnn8](../tools/Dockerfile.cuda10.2-cudnn8) |
 | GPU (cuda10.2-cudnn8-tensorRT7) development | Ubuntu16 | latest-cuda10.2-cudnn8-devel | [Dockerfile.cuda10.2-cudnn8.devel](../tools/Dockerfile.cuda10.2-cudnn8.devel) |
-| GPU (cuda11-cudnn8-tensorRT7) runtime | Ubuntu18 | latest-cuda11-cudnn8 | [Dockerfile.cuda11-cudnn8](../tools/Dockerfile.cuda11-cudnn8) |
 | GPU (cuda11-cudnn8-tensorRT7) development | Ubuntu18 | latest-cuda11-cudnn8-devel | [Dockerfile.cuda11-cudnn8.devel](../tools/Dockerfile.cuda11-cudnn8.devel) |
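The development images in the table can be pulled directly from the public registry that the Java image section below also uses; a minimal sketch, using a tag taken from the table above:

```shell
# Sketch: pull and enter the CPU development image
docker pull registry.baidubce.com/paddlepaddle/serving:latest-devel
docker run --rm -it registry.baidubce.com/paddlepaddle/serving:latest-devel bash
```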
 **Java Client:**
@@ -68,34 +61,18 @@ Develop Images:
 | Env      | Version | Docker images tag                 | OS        | Gcc Version |
 |----------|---------|-----------------------------------|-----------|-------------|
-| CPU      | 0.5.0   | 0.5.0-devel                       | Ubuntu 16 | 8.2.0       |
+| CPU      | >=0.5.0 | 0.6.0-devel                       | Ubuntu 16 | 8.2.0       |
 |          | <=0.4.0 | 0.4.0-devel                       | CentOS 7  | 4.8.5       |
-| Cuda9.0  | 0.5.0   | 0.5.0-cuda9.0-cudnn7-devel        | Ubuntu 16 | 4.8.5       |
-|          | <=0.4.0 | 0.4.0-cuda9.0-cudnn7-devel        | CentOS 7  | 4.8.5       |
-| Cuda10.0 | 0.5.0   | 0.5.0-cuda10.0-cudnn7-devel       | Ubuntu 16 | 4.8.5       |
-|          | <=0.4.0 | 0.4.0-cuda10.0-cudnn7-devel       | CentOS 7  | 4.8.5       |
-| Cuda10.1 | 0.5.0   | 0.5.0-cuda10.1-cudnn7-devel       | Ubuntu 16 | 8.2.0       |
-|          | <=0.4.0 | 0.4.0-cuda10.1-cudnn7-devel       | CentOS 7  | 4.8.5       |
-| Cuda10.2 | 0.5.0   | 0.5.0-cuda10.2-cudnn8-devel       | Ubuntu 16 | 8.2.0       |
+| Cuda10.1 | >=0.5.0 | 0.6.0-cuda10.1-cudnn7-devel       | Ubuntu 16 | 8.2.0       |
+|          | 0.6.0   | 0.5.0-cuda10.1-cudnn7-gcc54-devel | Ubuntu 16 | 5.4.0       |
+|          | <=0.4.0 | 0.4.0-cuda10.1-cudnn7-devel       | CentOS 7  | 4.8.5       |
+| Cuda10.2 | >=0.5.0 | 0.5.0-cuda10.2-cudnn8-devel       | Ubuntu 16 | 8.2.0       |
 |          | <=0.4.0 | Nan                               | Nan       | Nan         |
-| Cuda11.0 | 0.5.0   | 0.5.0-cuda11.0-cudnn8-devel       | Ubuntu 18 | 8.2.0       |
+| Cuda11.0 | >=0.5.0 | 0.6.0-cuda11.0-cudnn8-devel       | Ubuntu 18 | 8.2.0       |
 |          | <=0.4.0 | Nan                               | Nan       | Nan         |
 Running Images:
-| Env      | Version | Docker images tag     | OS        | Gcc Version |
-|----------|---------|-----------------------|-----------|-------------|
-| CPU      | 0.5.0   | 0.5.0                 | Ubuntu 16 | 8.2.0       |
-|          | <=0.4.0 | 0.4.0                 | CentOS 7  | 4.8.5       |
-| Cuda9.0  | 0.5.0   | 0.5.0-cuda9.0-cudnn7  | Ubuntu 16 | 4.8.5       |
-|          | <=0.4.0 | 0.4.0-cuda9.0-cudnn7  | CentOS 7  | 4.8.5       |
-| Cuda10.0 | 0.5.0   | 0.5.0-cuda10.0-cudnn7 | Ubuntu 16 | 4.8.5       |
-|          | <=0.4.0 | 0.4.0-cuda10.0-cudnn7 | CentOS 7  | 4.8.5       |
-| Cuda10.1 | 0.5.0   | 0.5.0-cuda10.1-cudnn7 | Ubuntu 16 | 8.2.0       |
-|          | <=0.4.0 | 0.4.0-cuda10.1-cudnn7 | CentOS 7  | 4.8.5       |
-| Cuda10.2 | 0.5.0   | 0.5.0-cuda10.2-cudnn8 | Ubuntu 16 | 8.2.0       |
-|          | <=0.4.0 | Nan                   | Nan       | Nan         |
-| Cuda11.0 | 0.5.0   | 0.5.0-cuda11.0-cudnn8 | Ubuntu 18 | 8.2.0       |
-|          | <=0.4.0 | Nan                   | Nan       | Nan         |
+Running images are lighter than develop images, and there are many of them because of the different combinations of Python version and device environment. If you want to know more about them, please check the document [Paddle Serving on Kubernetes](PADDLE_SERVING_ON_KUBERNETES.md).
 **Tips:** If you want to use the CPU server and a GPU server (version >= 0.5.0) at the same time, check the gcc version: only the Cuda10.1/10.2/11 images can run together with the CPU server, because they share the same gcc version (8.2).
@@ -34,18 +34,11 @@
 | Image | OS | TAG | Dockerfile |
 | :---: | :-: | :-: | :--------: |
-| CPU runtime image | CentOS7 | latest | [Dockerfile](../tools/Dockerfile) |
-| CPU development image | CentOS7 | latest-devel | [Dockerfile.devel](../tools/Dockerfile.devel) |
-| GPU (cuda9.0-cudnn7) runtime image | CentOS7 | latest-cuda9.0-cudnn7 | [Dockerfile.cuda9.0-cudnn7](../tools/Dockerfile.cuda9.0-cudnn7) |
-| GPU (cuda9.0-cudnn7) development image | CentOS7 | latest-cuda9.0-cudnn7-devel | [Dockerfile.cuda9.0-cudnn7.devel](../tools/Dockerfile.cuda9.0-cudnn7.devel) |
-| GPU (cuda10.0-cudnn7) runtime image | CentOS7 | latest-cuda10.0-cudnn7 | [Dockerfile.cuda10.0-cudnn7](../tools/Dockerfile.cuda10.0-cudnn7) |
-| GPU (cuda10.0-cudnn7) development image | CentOS7 | latest-cuda10.0-cudnn7-devel | [Dockerfile.cuda10.0-cudnn7.devel](../tools/Dockerfile.cuda10.0-cudnn7.devel) |
-| GPU (cuda10.1-cudnn7-tensorRT6) runtime image | Ubuntu16 | latest-cuda10.1-cudnn7 | [Dockerfile.cuda10.1-cudnn7](../tools/Dockerfile.cuda10.1-cudnn7) |
-| GPU (cuda10.1-cudnn7-tensorRT6) development image | Ubuntu16 | latest-cuda10.1-cudnn7-devel | [Dockerfile.cuda10.1-cudnn7.devel](../tools/Dockerfile.cuda10.1-cudnn7.devel) |
-| GPU (cuda10.2-cudnn8-tensorRT7) runtime image | Ubuntu16 | latest-cuda10.2-cudnn8 | [Dockerfile.cuda10.2-cudnn8](../tools/Dockerfile.cuda10.2-cudnn8) |
-| GPU (cuda10.2-cudnn8-tensorRT7) development image | Ubuntu16 | latest-cuda10.2-cudnn8-devel | [Dockerfile.cuda10.2-cudnn8.devel](../tools/Dockerfile.cuda10.2-cudnn8.devel) |
-| GPU (cuda11-cudnn8-tensorRT7) runtime image | Ubuntu18 | latest-cuda11-cudnn8 | [Dockerfile.cuda11-cudnn8](../tools/Dockerfile.cuda11-cudnn8) |
-| GPU (cuda11-cudnn8-tensorRT7) development image | Ubuntu18 | latest-cuda11-cudnn8-devel | [Dockerfile.cuda11-cudnn8.devel](../tools/Dockerfile.cuda11-cudnn8.devel) |
+| CPU development | Ubuntu16 | latest-devel | [Dockerfile.devel](../tools/Dockerfile.devel) |
+| GPU (cuda10.1-cudnn7-tensorRT6-gcc54) development | Ubuntu16 | latest-cuda10.1-cudnn7-gcc54-devel | [Dockerfile.cuda10.1-cudnn7-gcc54.devel](../tools/Dockerfile.cuda10.1-cudnn7-gcc54.devel) |
+| GPU (cuda10.1-cudnn7-tensorRT6) development | Ubuntu16 | latest-cuda10.1-cudnn7-devel | [Dockerfile.cuda10.1-cudnn7.devel](../tools/Dockerfile.cuda10.1-cudnn7.devel) |
+| GPU (cuda10.2-cudnn8-tensorRT7) development | Ubuntu16 | latest-cuda10.2-cudnn8-devel | [Dockerfile.cuda10.2-cudnn8.devel](../tools/Dockerfile.cuda10.2-cudnn8.devel) |
+| GPU (cuda11-cudnn8-tensorRT7) development | Ubuntu18 | latest-cuda11-cudnn8-devel | [Dockerfile.cuda11-cudnn8.devel](../tools/Dockerfile.cuda11-cudnn8.devel) |
 **Java Image:**
 ```
@@ -70,36 +63,22 @@ registry.baidubce.com/paddlepaddle/serving:xpu-beta
 Build images:
 Develop images:
 | Env      | Version | Docker images tag                 | OS        | Gcc Version |
 |----------|---------|-----------------------------------|-----------|-------------|
-| CPU      | 0.5.0   | 0.5.0-devel                       | Ubuntu 16 | 8.2.0       |
+| CPU      | >=0.5.0 | 0.6.0-devel                       | Ubuntu 16 | 8.2.0       |
 |          | <=0.4.0 | 0.4.0-devel                       | CentOS 7  | 4.8.5       |
-| Cuda9.0  | 0.5.0   | 0.5.0-cuda9.0-cudnn7-devel        | Ubuntu 16 | 4.8.5       |
-|          | <=0.4.0 | 0.4.0-cuda9.0-cudnn7-devel        | CentOS 7  | 4.8.5       |
-| Cuda10.0 | 0.5.0   | 0.5.0-cuda10.0-cudnn7-devel       | Ubuntu 16 | 4.8.5       |
-|          | <=0.4.0 | 0.4.0-cuda10.0-cudnn7-devel       | CentOS 7  | 4.8.5       |
-| Cuda10.1 | 0.5.0   | 0.5.0-cuda10.1-cudnn7-devel       | Ubuntu 16 | 8.2.0       |
-|          | <=0.4.0 | 0.4.0-cuda10.1-cudnn7-devel       | CentOS 7  | 4.8.5       |
-| Cuda10.2 | 0.5.0   | 0.5.0-cuda10.2-cudnn8-devel       | Ubuntu 16 | 8.2.0       |
+| Cuda10.1 | >=0.5.0 | 0.6.0-cuda10.1-cudnn7-devel       | Ubuntu 16 | 8.2.0       |
+|          | 0.6.0   | 0.5.0-cuda10.1-cudnn7-gcc54-devel | Ubuntu 16 | 5.4.0       |
+|          | <=0.4.0 | 0.4.0-cuda10.1-cudnn7-devel       | CentOS 7  | 4.8.5       |
+| Cuda10.2 | >=0.5.0 | 0.5.0-cuda10.2-cudnn8-devel       | Ubuntu 16 | 8.2.0       |
 |          | <=0.4.0 | Nan                               | Nan       | Nan         |
-| Cuda11.0 | 0.5.0   | 0.5.0-cuda11.0-cudnn8-devel       | Ubuntu 18 | 8.2.0       |
+| Cuda11.0 | >=0.5.0 | 0.6.0-cuda11.0-cudnn8-devel       | Ubuntu 18 | 8.2.0       |
 |          | <=0.4.0 | Nan                               | Nan       | Nan         |
 Running images:
-| Env      | Version | Docker images tag     | OS        | Gcc Version |
-|----------|---------|-----------------------|-----------|-------------|
-| CPU      | 0.5.0   | 0.5.0                 | Ubuntu 16 | 8.2.0       |
-|          | <=0.4.0 | 0.4.0                 | CentOS 7  | 4.8.5       |
-| Cuda9.0  | 0.5.0   | 0.5.0-cuda9.0-cudnn7  | Ubuntu 16 | 4.8.5       |
-|          | <=0.4.0 | 0.4.0-cuda9.0-cudnn7  | CentOS 7  | 4.8.5       |
-| Cuda10.0 | 0.5.0   | 0.5.0-cuda10.0-cudnn7 | Ubuntu 16 | 4.8.5       |
-|          | <=0.4.0 | 0.4.0-cuda10.0-cudnn7 | CentOS 7  | 4.8.5       |
-| Cuda10.1 | 0.5.0   | 0.5.0-cuda10.1-cudnn7 | Ubuntu 16 | 8.2.0       |
-|          | <=0.4.0 | 0.4.0-cuda10.1-cudnn7 | CentOS 7  | 4.8.5       |
-| Cuda10.2 | 0.5.0   | 0.5.0-cuda10.2-cudnn8 | Ubuntu 16 | 8.2.0       |
-|          | <=0.4.0 | Nan                   | Nan       | Nan         |
-| Cuda11.0 | 0.5.0   | 0.5.0-cuda11.0-cudnn8 | Ubuntu 18 | 8.2.0       |
-|          | <=0.4.0 | Nan                   | Nan       | Nan         |
+Running images are lighter than develop images, and because of the many combinations of Python version and runtime environment there are a large number of them. For details, please check the document [Paddle Serving on Kubernetes](PADDLE_SERVING_ON_KUBERNETES.md).
 **Note:** If, on version 0.5.0 or later, you need to run the CPU server and the GPU server in the same container, you must choose a Cuda10.1/10.2/11 image, because those images have the same gcc version as the CPU environment.
@@ -186,7 +186,7 @@ wget https://paddle-serving.bj.bcebos.com/others/centos_ssl.tar && \
 (2) Cuda and Cudnn dynamic libraries: the file names are usually `libcudart.so.$CUDA_VERSION` and `libcudnn.so.$CUDNN_VERSION`. For example, Cuda9 uses `libcudart.so.9.0` and Cudnn7 uses `libcudnn.so.7`. For the version matching between Cuda/Cudnn and Serving, see the [list of all Serving images](DOCKER_IMAGES_CN.md#%E9%99%84%E5%BD%95%E6%89%80%E6%9C%89%E9%95%9C%E5%83%8F%E5%88%97%E8%A1%A8).
-(3) Cuda10.1 and later versions require TensorRT. For a script that installs the TensorRT files, see [install_trt.sh](../tools/dockerfile/build_scripts/install_trt.sh).
+(3) Cuda10.1 and later versions require TensorRT. For a script that installs the TensorRT files, see [install_trt.sh](../tools/dockerfiles/build_scripts/install_trt.sh).
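A quick way to check which of the libraries mentioned in (2) and (3) are visible inside your container is to query the dynamic linker cache (a sketch; library locations vary by installation):

```shell
# Sketch: list the CUDA / cuDNN / TensorRT libraries the dynamic linker can find
ldconfig -p | grep -E "libcudart|libcudnn|libnvinfer"
```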
 ## Deployment issues
......
 # Building a prediction service cluster
-In [Client configuration](CLIENT_CONFIGURE.md) we have already seen that, by configuring the client SDK configuration file predictors.prototxt appropriately, a prediction cluster with multiple replicas and multiple variants can be built. Taking an image classification task as an example, the following simulates, on a single machine, a single-variant multi-replica prediction cluster and a multi-variant prediction cluster.
+In [Client configuration](../CLIENT_CONFIGURE.md) we have already seen that, by configuring the client SDK configuration file predictors.prototxt appropriately, a prediction cluster with multiple replicas and multiple variants can be built. Taking an image classification task as an example, the following simulates, on a single machine, a single-variant multi-replica prediction cluster and a multi-variant prediction cluster.
 ## 1. A prediction cluster with a single variant and multiple replicas
......
@@ -75,7 +75,7 @@ service ImageClassifyService {
 #### 2.2.2 Example configuration
-For details of the Serving-side configuration, please refer to [Serving-side configuration](SERVING_CONFIGURE.md)
+For details of the Serving-side configuration, please refer to [Serving-side configuration](../SERVING_CONFIGURE.md)
 The following configuration file chains ReaderOP, ClassifyOP and WriteJsonOP into one workflow (for concepts such as OP and workflow, see the [design document](DESIGN.md))
@@ -392,4 +392,4 @@ predictors {
 }
 }
 ```
-For detailed client configuration options, please refer to [CLIENT CONFIGURATION](CLIENT_CONFIGURE.md)
+For detailed client configuration options, please refer to [CLIENT CONFIGURATION](../CLIENT_CONFIGURE.md)
@@ -126,7 +126,7 @@ A Paddle Serving instance can load multiple models at the same time, each model using one Service
 ![Invocation hierarchy](../multi-variants.png)
 One Service corresponds to one prediction model, and the model has one endpoint. Different versions of the model are implemented through multiple variants under that endpoint:
-For the same model prediction service, multiple variants can be configured, and each variant has its own list of downstream IPs. The client code can assign a relative weight to each variant in order to adjust the traffic ratio (see the description of variant_weight_list in section 3.2 of [Client configuration](CLIENT_CONFIGURE.md)).
+For the same model prediction service, multiple variants can be configured, and each variant has its own list of downstream IPs. The client code can assign a relative weight to each variant in order to adjust the traffic ratio (see the description of variant_weight_list in section 3.2 of [Client configuration](../CLIENT_CONFIGURE.md)).
 ![Client-side proxy](../client-side-proxy.png)
@@ -143,7 +143,7 @@ A Paddle Serving instance can load multiple models at the same time, each model using one Service
 ### 5.1 Data compression methods
-Baidu-rpc has built-in data compression methods such as snappy, gzip and zlib, which can be configured in the configuration file (see the introduction to compress_type in section 3.1 of [Client configuration](CLIENT_CONFIGURE.md))
+Baidu-rpc has built-in data compression methods such as snappy, gzip and zlib, which can be configured in the configuration file (see the introduction to compress_type in section 3.1 of [Client configuration](../CLIENT_CONFIGURE.md))
 ### 5.2 C++ SDK API
......
# An image for building paddle binaries
# Use cuda devel base image for both cpu and gpu environment
# When you modify it, please be aware of cudnn-runtime version
FROM hub.baidubce.com/ctr/cuda:10.1-cudnn7-devel-ubuntu16.04
MAINTAINER PaddlePaddle Authors <paddle-dev@baidu.com>
# ENV variables
ARG WITH_GPU
ARG WITH_AVX
ENV WITH_GPU=${WITH_GPU:-ON}
ENV WITH_AVX=${WITH_AVX:-ON}
ENV HOME /root
# Add bash enhancements
COPY tools/dockerfiles/root/ /root/
# Prepare packages for Python
RUN apt-get update && \
apt-get install -y make build-essential libssl-dev zlib1g-dev libbz2-dev \
libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev \
xz-utils tk-dev libffi-dev liblzma-dev
RUN apt-get update && \
apt-get install -y --allow-downgrades --allow-change-held-packages \
patchelf git python-pip python-dev python-opencv openssh-server bison \
wget unzip unrar tar xz-utils bzip2 gzip coreutils ntp \
curl sed grep graphviz libjpeg-dev zlib1g-dev \
python-matplotlib unzip \
automake locales clang-format swig \
liblapack-dev liblapacke-dev libcurl4-openssl-dev \
net-tools libtool module-init-tools vim && \
apt-get clean -y
RUN ln -s /usr/lib/x86_64-linux-gnu/libssl.so /usr/lib/libssl.so.10 && \
ln -s /usr/lib/x86_64-linux-gnu/libcrypto.so /usr/lib/libcrypto.so.10
RUN wget https://github.com/koalaman/shellcheck/releases/download/v0.7.1/shellcheck-v0.7.1.linux.x86_64.tar.xz -O shellcheck-v0.7.1.linux.x86_64.tar.xz && \
tar -xf shellcheck-v0.7.1.linux.x86_64.tar.xz && cp shellcheck-v0.7.1/shellcheck /usr/bin/shellcheck && \
rm -rf shellcheck-v0.7.1.linux.x86_64.tar.xz shellcheck-v0.7.1
# install cmake
WORKDIR /home
RUN wget -q https://cmake.org/files/v3.16/cmake-3.16.0-Linux-x86_64.tar.gz && tar -zxvf cmake-3.16.0-Linux-x86_64.tar.gz && rm cmake-3.16.0-Linux-x86_64.tar.gz
ENV PATH=/home/cmake-3.16.0-Linux-x86_64/bin:$PATH
# Install Python3.6
RUN mkdir -p /root/python_build/ && wget -q https://www.sqlite.org/2018/sqlite-autoconf-3250300.tar.gz && \
tar -zxf sqlite-autoconf-3250300.tar.gz && cd sqlite-autoconf-3250300 && \
./configure -prefix=/usr/local && make -j8 && make install && cd ../ && rm sqlite-autoconf-3250300.tar.gz
RUN wget -q https://www.python.org/ftp/python/3.6.0/Python-3.6.0.tgz && \
tar -xzf Python-3.6.0.tgz && cd Python-3.6.0 && \
CFLAGS="-Wformat" ./configure --prefix=/usr/local/ --enable-shared > /dev/null && \
make -j8 > /dev/null && make altinstall > /dev/null && ldconfig && cd .. && rm -rf Python-3.6.0*
# Install Python3.7
RUN wget -q https://www.python.org/ftp/python/3.7.0/Python-3.7.0.tgz && \
tar -xzf Python-3.7.0.tgz && cd Python-3.7.0 && \
CFLAGS="-Wformat" ./configure --prefix=/usr/local/ --enable-shared > /dev/null && \
make -j8 > /dev/null && make altinstall > /dev/null && ldconfig && cd .. && rm -rf Python-3.7.0*
# Install Python3.8
RUN wget -q https://www.python.org/ftp/python/3.8.0/Python-3.8.0.tgz && \
tar -xzf Python-3.8.0.tgz && cd Python-3.8.0 && \
CFLAGS="-Wformat" ./configure --prefix=/usr/local/ --enable-shared > /dev/null && \
make -j8 > /dev/null && make altinstall > /dev/null && ldconfig && cd .. && rm -rf Python-3.8.0*
ENV LD_LIBRARY_PATH=/usr/local/lib:${LD_LIBRARY_PATH}
RUN ln -sf /usr/local/bin/python3.6 /usr/local/bin/python3 && ln -sf /usr/local/bin/python3.6 /usr/bin/python3 && ln -sf /usr/local/bin/pip3.6 /usr/local/bin/pip3 && ln -sf /usr/local/bin/pip3.6 /usr/bin/pip3
RUN rm -r /root/python_build
# Install Go and glide
RUN wget -qO- https://dl.google.com/go/go1.14.linux-amd64.tar.gz | \
tar -xz -C /usr/local && \
mkdir /root/go && \
mkdir /root/go/bin && \
mkdir /root/go/src && \
echo "GOROOT=/usr/local/go" >> /root/.bashrc && \
echo "GOPATH=/root/go" >> /root/.bashrc && \
echo "PATH=/usr/local/go/bin:/root/go/bin:$PATH" >> /root/.bashrc
ENV GOROOT=/usr/local/go GOPATH=/root/go
# PATH must not be set in the same ENV line as GOROOT, otherwise docker build cannot find GOROOT.
ENV PATH=/usr/local/go/bin:/root/go/bin:${PATH}
# Install TensorRT
# The following TensorRT.tar.gz is not the default official one; we made two minor changes:
# 1. Removed unnecessary files to keep the library small. TensorRT.tar.gz now contains only include and lib,
#    and its size is about one third of the official one.
# 2. Manually added ~IPluginFactory() to the IPluginFactory class in NvInfer.h; otherwise it does not work in Paddle.
# See https://github.com/PaddlePaddle/Paddle/issues/10129 for details.
# Downgrade TensorRT
COPY tools/dockerfiles/build_scripts /build_scripts
RUN bash /build_scripts/install_trt.sh
RUN rm -rf /build_scripts
# git credential to skip password typing
RUN git config --global credential.helper store
# Fix locales to en_US.UTF-8
RUN localedef -i en_US -f UTF-8 en_US.UTF-8
RUN apt-get install libprotobuf-dev -y
# Older versions of patchelf limited the size of the files being processed and were fixed in this pr.
# https://github.com/NixOS/patchelf/commit/ba2695a8110abbc8cc6baf0eea819922ee5007fa
# So install a newer version here.
RUN wget -q https://paddle-ci.cdn.bcebos.com/patchelf_0.10-2_amd64.deb && \
dpkg -i patchelf_0.10-2_amd64.deb
# Configure OpenSSH server. c.f. https://docs.docker.com/engine/examples/running_ssh_service
RUN mkdir /var/run/sshd && echo 'root:root' | chpasswd && sed -ri 's/^PermitRootLogin\s+.*/PermitRootLogin yes/' /etc/ssh/sshd_config && sed -ri 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config
CMD source ~/.bashrc
# ccache 3.7.9
RUN wget https://paddle-ci.gz.bcebos.com/ccache-3.7.9.tar.gz && \
tar xf ccache-3.7.9.tar.gz && mkdir /usr/local/ccache-3.7.9 && cd ccache-3.7.9 && \
./configure -prefix=/usr/local/ccache-3.7.9 && \
make -j8 && make install && \
ln -s /usr/local/ccache-3.7.9/bin/ccache /usr/local/bin/ccache
RUN python3.8 -m pip install --upgrade pip requests && \
python3.7 -m pip install --upgrade pip requests && \
python3.6 -m pip install --upgrade pip requests
RUN wget https://paddle-serving.bj.bcebos.com/others/centos_ssl.tar && \
tar xf centos_ssl.tar && rm -rf centos_ssl.tar && \
mv libcrypto.so.1.0.2k /usr/lib/libcrypto.so.1.0.2k && mv libssl.so.1.0.2k /usr/lib/libssl.so.1.0.2k && \
ln -sf /usr/lib/libcrypto.so.1.0.2k /usr/lib/libcrypto.so.10 && \
ln -sf /usr/lib/libssl.so.1.0.2k /usr/lib/libssl.so.10 && \
ln -sf /usr/lib/libcrypto.so.10 /usr/lib/libcrypto.so && \
ln -sf /usr/lib/libssl.so.10 /usr/lib/libssl.so
EXPOSE 22
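To build an image from this Dockerfile, run docker build from the repository root so that the `tools/dockerfiles/...` COPY paths resolve. The image tag below is only a placeholder, and the `-f` path is the Dockerfile location referenced earlier in this document; adjust it to where the file actually lives in your checkout:

```shell
# Sketch: build from the repository root (tag name is a placeholder, -f path may differ)
docker build -t paddle-serving:cuda10.1-cudnn7-gcc54-devel \
    -f tools/Dockerfile.cuda10.1-cudnn7-gcc54.devel .
```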
@@ -825,7 +825,7 @@ function main() {
     if [ -f ${log_dir}error_models.txt ]; then
         cat ${log_dir}error_models.txt
         echo "error occurred!"
-        # exit 1
+        exit 1
     fi
 }
......