Commit 2df69769 authored by TeslaZhao

pick #1527

Parent 0ae8d87e
......@@ -55,12 +55,12 @@ The goal of Paddle Serving is to provide high-performance, flexible and easy-to-
This chapter guides you through the installation and deployment steps. It is strongly recommended to use Docker to deploy Paddle Serving; if you do not use Docker, ignore the Docker-related steps. Paddle Serving can be deployed on cloud servers using Kubernetes and runs on many common hardware platforms such as ARM CPU, Intel CPU, Nvidia GPU, and Kunlun XPU. The latest development kit of the develop branch is compiled and generated every day for developers to use. A minimal pull command is sketched after the list below.
- [Install Paddle Serving using docker(stable wheel packages)](doc/Install_EN.md)
- [Install Paddle Serving using docker](doc/Install_EN.md)
- [Build Paddle Serving from Source with Docker](doc/Compile_EN.md)
- [Deploy Paddle Serving on Kubernetes](doc/Run_On_Kubernetes_CN.md)
- [Deploy Paddle Serving with Security gateway(Chinese)](doc/Serving_Auth_Docker_CN.md)
- [Deploy Paddle Serving on more hardwares](doc/Run_On_XPU_EN.md)
- [Latest Wheel packages(not stable)](doc/Latest_Packages_CN.md)
- [Latest Wheel packages](doc/Latest_Packages_CN.md)
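As a quick orientation, here is a minimal sketch of the Docker route; the registry path and the 0.7.0-devel tag are assumptions based on the environment tables in the install documents, so substitute the tag that matches your hardware:
```
# A minimal sketch, assuming the CPU development image tag 0.7.0-devel
# (registry path is an assumption; see the install doc for the exact tag)
docker pull registry.baidubce.com/paddlepaddle/serving:0.7.0-devel
docker run -p 9292:9292 --name serving_test -dit registry.baidubce.com/paddlepaddle/serving:0.7.0-devel bash
docker exec -it serving_test bash
```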
> Use
......
......@@ -51,12 +51,12 @@ Paddle Serving, built on the deep learning framework PaddlePaddle, aims to help deep learning developers
> Deployment
This chapter guides you through the installation and deployment steps. It is strongly recommended to use Docker to deploy Paddle Serving; if you do not use Docker, skip the Docker-related steps. On cloud servers, Paddle Serving can be deployed with Kubernetes. To compile or use Paddle Serving on heterogeneous hardware such as ARM CPU and Kunlun XPU, read the documents below. The latest development kit of the develop branch is compiled and generated every day for developers to use.
- [Install Paddle Serving using docker (stable wheel packages)](doc/Install_CN.md)
- [Install Paddle Serving using docker](doc/Install_CN.md)
- [Build Paddle Serving from source](doc/Compile_CN.md)
- [Deploy Paddle Serving on a Kubernetes cluster](doc/Run_On_Kubernetes_CN.md)
- [Deploy Paddle Serving with a security gateway](doc/Serving_Auth_Docker_CN.md)
- [Deploy Paddle Serving on heterogeneous hardware](doc/Run_On_XPU_CN.md)
- [Latest wheel packages (unstable)](doc/Latest_Packages_CN.md)
- [Latest wheel packages](doc/Latest_Packages_CN.md)
> Usage
......
......@@ -51,9 +51,9 @@ The Serving development image is the image the Serving suite provides for compiling and debugging in each prediction environment
| :--------------------------: | :-------------------------------: | :-------------: | :-------------------: | :----------------: |
| CPU                          | 0.7.0-devel                       |  Ubuntu 16.04   | 2.2.0                 | Ubuntu 18.04       |
| Cuda10.1+Cudnn7              | 0.7.0-cuda10.1-cudnn7-devel       |  Ubuntu 16.04   | None                  | None               |
| Cuda10.2+Cudnn7 | 0.7.0-cuda10.2-cudnn7-devel | Ubuntu 16.04 | 2.2.0-cuda10.2-cudnn7 | Ubuntu 16.04 |
| Cuda10.2+Cudnn7 | 0.7.0-cuda10.2-cudnn7-devel | Ubuntu 16.04 | 2.2.0-gpu-cuda10.2-cudnn7 | Ubuntu 16.04 |
| Cuda10.2+Cudnn8              | 0.7.0-cuda10.2-cudnn8-devel       |  Ubuntu 16.04   | None                  | None               |
| Cuda11.2+Cudnn8 | 0.7.0-cuda11.2-cudnn8-devel | Ubuntu 16.04 | 2.2.0-cuda11.2-cudnn8 | Ubuntu 18.04 |
| Cuda11.2+Cudnn8 | 0.7.0-cuda11.2-cudnn8-devel | Ubuntu 16.04 | 2.2.0-gpu-cuda11.2-cudnn8 | Ubuntu 18.04 |
First, pull the image that matches the environment you need. Under the **Environment** column of the table above, everything except CPU (the Cuda**+Cudnn** rows) is a GPU environment.
You can use the Serving development image.
......
......@@ -47,9 +47,9 @@ The Serving development image is the image the Serving suite provides for compiling and debugging prediction services in various prediction environments
| :--------------------------: | :-------------------------------: | :-------------: | :-------------------: | :----------------: |
| CPU                          | 0.7.0-devel                       |  Ubuntu 16.04   | 2.2.0                 | Ubuntu 18.04       |
| Cuda10.1+Cudnn7              | 0.7.0-cuda10.1-cudnn7-devel       |  Ubuntu 16.04   | None                  | None               |
| Cuda10.2+Cudnn7 | 0.7.0-cuda10.2-cudnn7-devel | Ubuntu 16.04 | 2.2.0-cuda10.2-cudnn7 | Ubuntu 16.04 |
| Cuda10.2+Cudnn7 | 0.7.0-cuda10.2-cudnn7-devel | Ubuntu 16.04 | 2.2.0-gpu-cuda10.2-cudnn7 | Ubuntu 16.04 |
| Cuda10.2+Cudnn8              | 0.7.0-cuda10.2-cudnn8-devel       |  Ubuntu 16.04   | None                  | None               |
| Cuda11.2+Cudnn8 | 0.7.0-cuda11.2-cudnn8-devel | Ubuntu 16.04 | 2.2.0-cuda11.2-cudnn8 | Ubuntu 18.04 |
| Cuda11.2+Cudnn8 | 0.7.0-cuda11.2-cudnn8-devel | Ubuntu 16.04 | 2.2.0-gpu-cuda11.2-cudnn8 | Ubuntu 18.04 |
We first need to pull the image that matches our target environment. Under the **Environment** column in the table above, everything except CPU (the Cuda**+Cudnn** rows) belongs to the GPU environment.
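For example, here is a minimal pull of a GPU development image; the registry path is an assumption based on Paddle Serving's published images, and the tag should match your Cuda/Cudnn row in the table above:
```
# A minimal sketch: pull and enter the Cuda10.2+Cudnn7 development image
# (registry path is an assumption; pick the tag for your environment)
docker pull registry.baidubce.com/paddlepaddle/serving:0.7.0-cuda10.2-cudnn7-devel
nvidia-docker run -p 9292:9292 --name serving_dev -dit registry.baidubce.com/paddlepaddle/serving:0.7.0-cuda10.2-cudnn7-devel bash
nvidia-docker exec -it serving_dev bash
```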
......
......@@ -43,8 +43,8 @@ bash Serving/tools/paddle_env_install.sh
**GPU:**
```
# Start GPU Docker
docker pull paddlepaddle/paddle:2.2.0-cuda10.2-cudnn7
nvidia-docker run -p 9292:9292 --name test -dit paddlepaddle/paddle:2.2.0-cuda10.2-cudnn7 bash
docker pull paddlepaddle/paddle:2.2.0-gpu-cuda10.2-cudnn7
nvidia-docker run -p 9292:9292 --name test -dit paddlepaddle/paddle:2.2.0-gpu-cuda10.2-cudnn7 bash
nvidia-docker exec -it test bash
git clone https://github.com/PaddlePaddle/Serving
......@@ -97,8 +97,8 @@ pip3 install https://paddle-inference-lib.bj.bcebos.com/2.2.0/python/Linux/GPU/x
| :--------------------------: | :-------------------------------: | :-------------: | :-------------------: | :----------------: |
| CPU                          | 0.7.0-devel                       |  Ubuntu 16.04   | 2.2.0                 | Ubuntu 18.04       |
| Cuda10.1+Cudnn7              | 0.7.0-cuda10.1-cudnn7-devel       |  Ubuntu 16.04   | None                  | None               |
| Cuda10.2+Cudnn7 | 0.7.0-cuda10.2-cudnn7-devel | Ubuntu 16.04 | 2.2.0-cuda10.2-cudnn7 | Ubuntu 16.04 |
| Cuda10.2+Cudnn7 | 0.7.0-cuda10.2-cudnn7-devel | Ubuntu 16.04 | 2.2.0-gpu-cuda10.2-cudnn7 | Ubuntu 16.04 |
| Cuda10.2+Cudnn8              | 0.7.0-cuda10.2-cudnn8-devel       |  Ubuntu 16.04   | None                  | None               |
| Cuda11.2+Cudnn8 | 0.7.0-cuda11.2-cudnn8-devel | Ubuntu 16.04 | 2.2.0-cuda11.2-cudnn8 | Ubuntu 18.04 |
| Cuda11.2+Cudnn8 | 0.7.0-cuda11.2-cudnn8-devel | Ubuntu 16.04 | 2.2.0-gpu-cuda11.2-cudnn8 | Ubuntu 18.04 |
For **Windows 10 users**, please refer to the document [Paddle Serving Guide for Windows Platform](Windows_Tutorial_CN.md).
......@@ -43,8 +43,8 @@ bash Serving/tools/paddle_env_install.sh
**GPU:**
```
# Start GPU Docker
docker pull paddlepaddle/paddle:2.2.0-cuda10.2-cudnn7
nvidia-docker run -p 9292:9292 --name test -dit paddlepaddle/paddle:2.2.0-cuda10.2-cudnn7 bash
docker pull paddlepaddle/paddle:2.2.0-gpu-cuda10.2-cudnn7
nvidia-docker run -p 9292:9292 --name test -dit paddlepaddle/paddle:2.2.0-gpu-cuda10.2-cudnn7 bash
nvidia-docker exec -it test bash
git clone https://github.com/PaddlePaddle/Serving
......@@ -100,8 +100,8 @@ pip3 install https://paddle-inference-lib.bj.bcebos.com/2.2.0/python/Linux/GPU/x
| :--------------------------: | :-------------------------------: | :-------------: | :-------------------: | :----------------: |
| CPU                          | 0.7.0-devel                       |  Ubuntu 16.04   | 2.2.0                 | Ubuntu 18.04       |
| Cuda10.1+Cudnn7              | 0.7.0-cuda10.1-cudnn7-devel       |  Ubuntu 16.04   | None                  | None               |
| Cuda10.2+Cudnn7 | 0.7.0-cuda10.2-cudnn7-devel | Ubuntu 16.04 | 2.2.0-cuda10.2-cudnn7 | Ubuntu 16.04 |
| Cuda10.2+Cudnn7 | 0.7.0-cuda10.2-cudnn7-devel | Ubuntu 16.04 | 2.2.0-gpu-cuda10.2-cudnn7 | Ubuntu 16.04 |
| Cuda10.2+Cudnn8              | 0.7.0-cuda10.2-cudnn8-devel       |  Ubuntu 16.04   | None                  | None               |
| Cuda11.2+Cudnn8 | 0.7.0-cuda11.2-cudnn8-devel | Ubuntu 16.04 | 2.2.0-cuda11.2-cudnn8 | Ubuntu 18.04 |
| Cuda11.2+Cudnn8 | 0.7.0-cuda11.2-cudnn8-devel | Ubuntu 16.04 | 2.2.0-gpu-cuda11.2-cudnn8 | Ubuntu 18.04 |
For **Windows 10 users**, please refer to the document [Paddle Serving Guide for Windows Platform](Windows_Tutorial_CN.md).
# Latest Wheel Packages
## CPU server
### Python 3
```
# Compiled with gcc 8.2
https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server-0.0.0-py3-none-any.whl
```
## GPU server
### Python 3
```
# cuda10.1 Cudnn 7 with TensorRT 6, compiled with gcc 8.2
https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_gpu-0.0.0.post101-py3-none-any.whl
# cuda10.2 Cudnn 7 with TensorRT 6, compiled with gcc 5.4
https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_gpu-0.0.0.post102-py3-none-any.whl
# cuda10.2 Cudnn 8 with TensorRT 7, compiled with gcc 8.2
https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_gpu-0.0.0.post1028-py3-none-any.whl
# cuda11.2 Cudnn 8 with TensorRT 8 (beta), compiled with gcc 8.2
https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_gpu-0.0.0.post112-py3-none-any.whl
```
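pip can install these wheels directly from the URL; a minimal sketch, assuming the Cuda 10.2 + Cudnn 7 develop build listed above:
```
# A minimal sketch: install the develop GPU server wheel for cuda10.2 cudnn7
pip3 install https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_gpu-0.0.0.post102-py3-none-any.whl
```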
## Client
### Python 3.6
```
https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_client-0.0.0-cp36-none-any.whl
```
### Python 3.7
```
https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_client-0.0.0-cp37-none-any.whl
```
### Python 3.8
```
https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_client-0.0.0-cp38-none-any.whl
```
## App
### Python 3
```
https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_app-0.0.0-py3-none-any.whl
```
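The client wheel must match your Python minor version; a minimal sketch, assuming Python 3.7 (cp37), installing the client and app wheels listed above:
```
# A minimal sketch: the client wheel is Python-version specific (cp37 assumed)
pip3 install https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_client-0.0.0-cp37-none-any.whl
pip3 install https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_app-0.0.0-py3-none-any.whl
```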
## Paddle-Serving-Server (x86 CPU/GPU)
| | develop whl | develop bin | stable whl | stable bin |
|---------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------|
| cpu-avx-mkl | [paddle_serving_server-0.0.0-py3-none-any.whl ](https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server-0.0.0-py3-none-any.whl) | [serving-cpu-avx-mkl-0.0.0.tar.gz](https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-cpu-avx-mkl-0.0.0.tar.gz) | [paddle_serving_server-0.7.0-py3-none-any.whl ](https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server-0.7.0-py3-none-any.whl) | [serving-cpu-avx-mkl-0.7.0.tar.gz](https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-cpu-avx-mkl-0.7.0.tar.gz) |
| cpu-avx-openblas | [paddle_serving_server-0.0.0-py3-none-any.whl ](https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server-0.0.0-py3-none-any.whl) | [serving-cpu-avx-openblas-0.0.0.tar.gz](https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-cpu-avx-openblas-0.0.0.tar.gz) | [paddle_serving_server-0.7.0-py3-none-any.whl ](https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server-0.7.0-py3-none-any.whl) | [serving-cpu-avx-openblas-0.7.0.tar.gz](https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-cpu-avx-openblas-0.7.0.tar.gz) |
| cpu-noavx-openblas | [paddle_serving_server-0.0.0-py3-none-any.whl ](https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server-0.0.0-py3-none-any.whl) | [ serving-cpu-noavx-openblas-0.0.0.tar.gz ]( https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-cpu-noavx-openblas-0.0.0.tar.gz) | [paddle_serving_server-0.7.0-py3-none-any.whl ](https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server-0.7.0-py3-none-any.whl) | [ serving-cpu-noavx-openblas-0.7.0.tar.gz ]( https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-cpu-noavx-openblas-0.7.0.tar.gz) |
| cuda10.1-cudnn7-TensorRT6 | [paddle_serving_server_gpu-0.0.0.post101-py3-none-any.whl ](https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_gpu-0.0.0.post101-py3-none-any.whl) | [serving-gpu-101-0.0.0.tar.gz](https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-gpu-101-0.0.0.tar.gz) | [paddle_serving_server_gpu-0.7.0.post101-py3-none-any.whl ](https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_gpu-0.7.0.post101-py3-none-any.whl) | [serving-gpu-101-0.7.0.tar.gz](https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-gpu-101-0.7.0.tar.gz) |
| cuda10.2-cudnn7-TensorRT6 | [paddle_serving_server_gpu-0.0.0.post102-py3-none-any.whl ](https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_gpu-0.0.0.post102-py3-none-any.whl) | [serving-gpu-102-0.0.0.tar.gz](https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-gpu-102-0.0.0.tar.gz) | [paddle_serving_server_gpu-0.7.0.post102-py3-none-any.whl ](https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_gpu-0.7.0.post102-py3-none-any.whl) | [serving-gpu-102-0.7.0.tar.gz](https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-gpu-102-0.7.0.tar.gz) |
| cuda10.2-cudnn8-TensorRT7 | [paddle_serving_server_gpu-0.0.0.post1028-py3-none-any.whl ](https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_gpu-0.0.0.post1028-py3-none-any.whl) | [ serving-gpu-1028-0.0.0.tar.gz]( https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-gpu-1028-0.0.0.tar.gz ) | [paddle_serving_server_gpu-0.7.0.post1028-py3-none-any.whl ](https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_gpu-0.7.0.post1028-py3-none-any.whl) | [ serving-gpu-1028-0.7.0.tar.gz]( https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-gpu-1028-0.7.0.tar.gz ) |
| cuda11.2-cudnn8-TensorRT8 | [paddle_serving_server_gpu-0.0.0.post112-py3-none-any.whl ](https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_gpu-0.0.0.post112-py3-none-any.whl) | [ serving-gpu-112-0.0.0.tar.gz]( https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-gpu-112-0.0.0.tar.gz ) | [paddle_serving_server_gpu-0.7.0.post112-py3-none-any.whl ](https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_gpu-0.7.0.post112-py3-none-any.whl) | [ serving-gpu-112-0.7.0.tar.gz]( https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-gpu-112-0.7.0.tar.gz ) |
### Binary Package
Most users do not need to read this section. However, if you deploy Paddle Serving on a machine without network access, the binary executable tar file cannot be downloaded there. Therefore, all download links for the various environments are given here.
### Bin links
```
# CPU AVX MKL
https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-cpu-avx-mkl-0.0.0.tar.gz
# CPU AVX OPENBLAS
https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-cpu-avx-openblas-0.0.0.tar.gz
# CPU NOAVX OPENBLAS
https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-cpu-noavx-openblas-0.0.0.tar.gz
# Cuda 10.1
https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-gpu-101-0.0.0.tar.gz
# Cuda 10.2 + Cudnn 7
https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-gpu-102-0.0.0.tar.gz
# Cuda 10.2 + Cudnn 8
https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-gpu-1028-0.0.0.tar.gz
# Cuda 11.2
https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-gpu-112-0.0.0.tar.gz
```
### How to set up SERVING_BIN offline?
- Download the serving server whl package and bin package, and make sure they target the same environment.
- Download the serving client whl and serving app whl, paying attention to the Python version.
- `pip install` the wheels and `tar xf` the binary package, then `export SERVING_BIN=$PWD/serving-gpu-cuda11-0.0.0/serving` (taking Cuda 11 as the example); see the sketch below.
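A minimal end-to-end sketch, assuming the Cuda 10.2 + Cudnn 8 packages were downloaded beforehand; the file names and the extracted directory name are assumptions keyed to the links above, so substitute the packages for your environment:
```
# A minimal offline sketch (file and directory names are assumptions;
# use the packages you actually downloaded for your environment)
pip3 install paddle_serving_server_gpu-0.0.0.post1028-py3-none-any.whl
pip3 install paddle_serving_client-0.0.0-cp37-none-any.whl
pip3 install paddle_serving_app-0.0.0-py3-none-any.whl
tar xf serving-gpu-1028-0.0.0.tar.gz
export SERVING_BIN=$PWD/serving-gpu-1028-0.0.0/serving
```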
## paddle-serving-client
| | develop whl | stable whl |
|-----------------------|--------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------|
| Python3.6             | [paddle_serving_client-0.0.0-cp36-none-any.whl](https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_client-0.0.0-cp36-none-any.whl)   | [paddle_serving_client-0.7.0-cp36-none-any.whl](https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_client-0.7.0-cp36-none-any.whl)   |
| Python3.7 | [paddle_serving_client-0.0.0-cp37-none-any.whl](https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_client-0.0.0-cp37-none-any.whl) | [paddle_serving_client-0.7.0-cp37-none-any.whl](https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_client-0.7.0-cp37-none-any.whl) |
| Python3.8 | [paddle_serving_client-0.0.0-cp38-none-any.whl](https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_client-0.0.0-cp38-none-any.whl) | [paddle_serving_client-0.7.0-cp38-none-any.whl](https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_client-0.7.0-cp38-none-any.whl) |
## paddle-serving-app
| | develop whl | stable whl |
|---------|------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------|
| Python3 | [paddle_serving_app-0.0.0-py3-none-any.whl](https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_app-0.0.0-py3-none-any.whl) | [ paddle_serving_app-0.7.0-py3-none-any.whl ]( https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_app-0.7.0-py3-none-any.whl) |
## Baidu Kunlun users
......