Commit 9abdde1f authored by TeslaZhao

Update doc

Parent 668ddfad
@@ -4,10 +4,16 @@
- [1. Use devel docker images](#1)
  - [Serving devel images](#1.1)
    - [CPU images](#1.1.1)
    - [GPU images](#1.1.2)
    - [ARM & XPU images](#1.1.3)
  - [Paddle devel images](#1.2)
    - [CPU images](#1.2.1)
    - [GPU images](#1.2.2)
- [2. Install Wheel Packages](#2)
  - [Online Install](#2.1)
  - [Offline Install](#2.2)
  - [ARM & XPU Install](#2.3)
- [3. Installation Check](#3)
@@ -33,33 +39,52 @@
| CUDA10.2 + cuDNN 7 | 0.9.0-cuda10.2-cudnn7-devel | Ubuntu 16 | 2.3.0-gpu-cuda10.2-cudnn7 | Ubuntu 18 |
| CUDA10.2 + cuDNN 8 | 0.9.0-cuda10.2-cudnn8-devel | Ubuntu 16 | None | Ubuntu 18 |
| CUDA11.2 + cuDNN 8 | 0.9.0-cuda11.2-cudnn8-devel | Ubuntu 16 | 2.3.0-gpu-cuda11.2-cudnn8 | Ubuntu 18 |
| ARM + XPU | xpu-arm | CentOS 8.3 | None | None |

For **Windows 10 users**, please refer to the document [Paddle Serving guide for the Windows platform](Windows_Tutorial_CN.md).

<a name="1.1"></a>
### 1.1 Serving Devel Images (choose one of CPU/GPU)
<a name="1.1.1"></a>
**CPU:**
```
# Start the CPU Docker container
docker pull registry.baidubce.com/paddlepaddle/serving:0.9.0-devel
docker run -p 9292:9292 --name test_cpu -dit registry.baidubce.com/paddlepaddle/serving:0.9.0-devel bash
docker exec -it test_cpu bash
git clone https://github.com/PaddlePaddle/Serving
```
<a name="1.1.2"></a>
**GPU:**
```
# Start the GPU Docker container
docker pull registry.baidubce.com/paddlepaddle/serving:0.9.0-cuda11.2-cudnn8-devel
nvidia-docker run -p 9292:9292 --name test_gpu -dit registry.baidubce.com/paddlepaddle/serving:0.9.0-cuda11.2-cudnn8-devel bash
nvidia-docker exec -it test_gpu bash
git clone https://github.com/PaddlePaddle/Serving
```
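Before building inside the GPU container, it can help to confirm that the GPU is actually visible to it; a minimal check, assuming the NVIDIA container runtime is correctly set up on the host:
```shell
# Should list the host GPUs from inside the running container
nvidia-docker exec -it test_gpu nvidia-smi
```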
<a name="1.1.3"></a>
**ARM & XPU:**
```
docker pull registry.baidubce.com/paddlepaddle/serving:xpu-arm
docker run -p 9292:9292 --name test_arm_xpu -dit registry.baidubce.com/paddlepaddle/serving:xpu-arm bash
docker exec -it test_arm_xpu bash
git clone https://github.com/PaddlePaddle/Serving
```
<a name="1.2"></a>
### 1.2 Paddle Devel Images (choose one of CPU/GPU)
<a name="1.2.1"></a>
**CPU:**
```
### Start the CPU Docker container
@@ -71,6 +96,9 @@ git clone https://github.com/PaddlePaddle/Serving
### The Paddle devel image needs to run the following script to install the dependencies required by Serving
bash Serving/tools/paddle_env_install.sh
```
<a name="1.2.2"></a>
**GPU:**
```
### Start the GPU Docker container
@@ -103,6 +131,7 @@ pip3 install -r python/requirements.txt
<a name="2.1"></a>
### 2.1 Online Install
Online installation downloads and installs the packages from `pypi`.
```shell
pip3 install paddle-serving-client==0.9.0 -i https://pypi.tuna.tsinghua.edu.cn/simple
@@ -158,6 +187,7 @@ pip3 install https://paddle-inference-lib.bj.bcebos.com/2.3.0/python/Linux/GPU/x
<a name="2.2"></a>
### 2.2 Offline Install
Offline installation downloads all Paddle and Serving packages and dependent libraries in advance, then copies them into a no-network or weak-network environment for installation.
**1. Install the offline Wheel packages**
@@ -223,6 +253,24 @@ python3 install.py --cuda_version="" --python_version="py39" --device="cpu" --se
python3 install.py --cuda_version="112" --python_version="py36" --device="GPU" --serving_version="no_install" --paddle_version="2.3.0"
```
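Once the downloaded wheels have been copied to the offline machine, a minimal sketch of installing them without touching the network; the directory name `./offline_whl` is only an illustration and it assumes all required wheels, including dependencies, are present there:
```shell
# --no-index blocks access to PyPI; --find-links points pip at the local wheel directory
pip3 install --no-index --find-links=./offline_whl paddle-serving-client paddle-serving-app paddle-serving-server
```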
<a name="2.3"></a>
### 2.3 Install Wheel Packages on ARM & XPU
Since relatively few users run on ARM and XPU, the Wheel packages for this environment are provided separately below. Note that `paddle_serving_client` is only provided for `py36`; if you need other versions, please contact us.
```
pip3.6 install https://paddle-serving.bj.bcebos.com/test-dev/whl/arm/paddle_serving_app-0.9.0-py3-none-any.whl
pip3.6 install https://paddle-serving.bj.bcebos.com/test-dev/whl/arm/paddle_serving_client-0.9.0-cp36-none-any.whl
pip3.6 install https://paddle-serving.bj.bcebos.com/test-dev/whl/arm/paddle_serving_server_xpu-0.9.0.post2-py3-none-any.whl
```
Binary package download address:
```
wget https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-xpu-aarch64-0.9.0.tar.gz
```
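A sketch of one way to use the binary package after downloading; the extracted directory layout and the `SERVING_BIN` path below are assumptions, so adjust them to whatever the archive actually contains:
```shell
tar zxvf serving-xpu-aarch64-0.9.0.tar.gz
# Point Paddle Serving at the extracted serving binary (assumed path inside the archive)
export SERVING_BIN=$(pwd)/serving-xpu-aarch64-0.9.0/serving
```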
<a name="3"></a>
## 3. Installation Check
...
@@ -4,10 +4,16 @@
- [1. Use devel docker](#1)
  - [Serving devel images](#1.1)
    - [CPU images](#1.1.1)
    - [GPU images](#1.1.2)
    - [ARM & XPU images](#1.1.3)
  - [Paddle devel images](#1.2)
    - [CPU images](#1.2.1)
    - [GPU images](#1.2.2)
- [2. Install Wheel Packages](#2)
  - [Online Install](#2.1)
  - [Offline Install](#2.2)
  - [ARM & XPU Install](#2.3)
- [3. Installation Check](#3)
We **strongly recommend** that you build **Paddle Serving** in Docker. For more images, please refer to [Docker Image List](Docker_Images_CN.md).
@@ -28,6 +34,7 @@
| CUDA10.2 + cuDNN 7 | 0.9.0-cuda10.2-cudnn7-devel | Ubuntu 16 | 2.3.0-gpu-cuda10.2-cudnn7 | Ubuntu 18 |
| CUDA10.2 + cuDNN 8 | 0.9.0-cuda10.2-cudnn8-devel | Ubuntu 16 | None | None |
| CUDA11.2 + cuDNN 8 | 0.9.0-cuda11.2-cudnn8-devel | Ubuntu 16 | 2.3.0-gpu-cuda11.2-cudnn8 | Ubuntu 18 |
| ARM + XPU | xpu-arm | CentOS 8.3 | None | None |

For **Windows 10 users**, please refer to the document [Paddle Serving Guide for Windows Platform](Windows_Tutorial_CN.md).
@@ -36,46 +43,68 @@ For **Windows 10 users**, please refer to the document [Paddle Serving Guide for
### 1.1 Serving Devel Images (choose one of CPU/GPU)
<a name="1.1.1"></a>
**CPU:**
```
# Start CPU Docker Container
docker pull registry.baidubce.com/paddlepaddle/serving:0.9.0-devel
docker run -p 9292:9292 --name test_cpu -dit registry.baidubce.com/paddlepaddle/serving:0.9.0-devel bash
docker exec -it test_cpu bash
git clone https://github.com/PaddlePaddle/Serving
```
<a name="1.1.2"></a>
**GPU:**
```
# Start GPU Docker Container
docker pull registry.baidubce.com/paddlepaddle/serving:0.9.0-cuda11.2-cudnn8-devel
nvidia-docker run -p 9292:9292 --name test_gpu -dit registry.baidubce.com/paddlepaddle/serving:0.9.0-cuda11.2-cudnn8-devel bash
nvidia-docker exec -it test_gpu bash
git clone https://github.com/PaddlePaddle/Serving
```
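Optionally, you can verify that the container sees the GPU before proceeding; a quick sketch, assuming the NVIDIA container runtime is installed on the host:
```shell
# Expect the host GPUs to be listed from inside the container
nvidia-docker exec -it test_gpu nvidia-smi
```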
<a name="1.1.3"></a>
**ARM & XPU:**
```
docker pull registry.baidubce.com/paddlepaddle/serving:xpu-arm
docker run -p 9292:9292 --name test_arm_xpu -dit registry.baidubce.com/paddlepaddle/serving:xpu-arm bash
docker exec -it test_arm_xpu bash
git clone https://github.com/PaddlePaddle/Serving
```
<a name="1.2"></a>
### 1.2 Paddle Devel Images (choose one of CPU/GPU)
<a name="1.2.1"></a>
**CPU:**
```shell
# Start CPU Docker Container
docker pull registry.baidubce.com/paddlepaddle/paddle:2.3.0
docker run -p 9292:9292 --name test_cpu -dit registry.baidubce.com/paddlepaddle/paddle:2.3.0 bash
docker exec -it test_cpu bash
git clone https://github.com/PaddlePaddle/Serving
### The Paddle devel image needs to run the following script to install the dependencies required by Serving
bash Serving/tools/paddle_env_install.sh
```
<a name="1.2.2"></a>
**GPU:**
```shell
### Start GPU Docker
nvidia-docker pull registry.baidubce.com/paddlepaddle/paddle:2.3.0-gpu-cuda11.2-cudnn8
nvidia-docker run -p 9292:9292 --name test_gpu -dit registry.baidubce.com/paddlepaddle/paddle:2.3.0-gpu-cuda11.2-cudnn8 bash
nvidia-docker exec -it test_gpu bash
git clone https://github.com/PaddlePaddle/Serving
### The Paddle devel image needs to run the following script to install the dependencies required by Serving
bash Serving/tools/paddle_env_install.sh
```
@@ -98,6 +127,7 @@ Install the service whl package. There are three types of client, app and server
<a name="2.1"></a>
### 2.1 Online Install
Online installation downloads and installs the packages from `pypi`.
```shell
pip3 install paddle-serving-client==0.9.0 -i https://pypi.tuna.tsinghua.edu.cn/simple
@@ -152,6 +182,7 @@ pip3 install https://paddle-inference-lib.bj.bcebos.com/2.3.0/python/Linux/GPU/x
<a name="2.2"></a>
### 2.2 Offline Install
Offline installation downloads all Paddle and Serving packages and dependent libraries in advance, then installs them in a no-network or weak-network environment.
**1. Install offline wheel packages**
@@ -210,6 +241,22 @@ python3 install.py --cuda_version="" --python_version="py39" --device="cpu" --se
python3 install.py --cuda_version="112" --python_version="py36" --device="GPU" --serving_version="no_install" --paddle_version="2.3.0"
```
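After transferring the wheels to the target machine, they can be installed entirely from local files; a sketch under the assumption that every required wheel (dependencies included) sits in a local directory, hypothetically `./offline_whl`:
```shell
# Install from the local directory only, without contacting PyPI
pip3 install --no-index --find-links=./offline_whl paddle-serving-client paddle-serving-app paddle-serving-server
```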
<a name="2.3"></a>
### 2.3 ARM & XPU Install
Since relatively few users run on ARM and XPU, the Wheel packages for this environment are provided separately below. `paddle_serving_client` is only provided for `py36`; if you need other versions, please contact us.
```
pip3.6 install https://paddle-serving.bj.bcebos.com/test-dev/whl/arm/paddle_serving_app-0.9.0-py3-none-any.whl
pip3.6 install https://paddle-serving.bj.bcebos.com/test-dev/whl/arm/paddle_serving_client-0.9.0-cp36-none-any.whl
pip3.6 install https://paddle-serving.bj.bcebos.com/test-dev/whl/arm/paddle_serving_server_xpu-0.9.0.post2-py3-none-any.whl
```
Binary package download address:
```
wget https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-xpu-aarch64-0.9.0.tar.gz
```
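As a rough sketch of using the binary package once downloaded (the extracted directory name and the binary path are assumptions; check the archive contents first):
```shell
tar zxvf serving-xpu-aarch64-0.9.0.tar.gz
# SERVING_BIN is read by paddle_serving_server to locate a custom serving binary (assumed path)
export SERVING_BIN=$(pwd)/serving-xpu-aarch64-0.9.0/serving
```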
<a name="3"></a>
## 3. Installation Check
...
@@ -48,22 +48,23 @@
### Wheel Package Links
Kunlun Wheel packages for the ARM CPU environment:
```shell
# paddle-serving-server
wget https://paddle-serving.bj.bcebos.com/test-dev/whl/arm/paddle_serving_server_xpu-0.9.0.post2-py3-none-any.whl
# paddle-serving-client
wget https://paddle-serving.bj.bcebos.com/test-dev/whl/arm/paddle_serving_client-0.9.0-cp36-none-any.whl
# paddle-serving-app
wget https://paddle-serving.bj.bcebos.com/test-dev/whl/arm/paddle_serving_app-0.9.0-py3-none-any.whl
# SERVING BIN
wget https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-xpu-aarch64-0.9.0.tar.gz
```
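After downloading, a minimal sketch of installing the wheels from the local files (assuming Python 3.6 and the file names above):
```shell
pip3.6 install paddle_serving_app-0.9.0-py3-none-any.whl
pip3.6 install paddle_serving_client-0.9.0-cp36-none-any.whl
pip3.6 install paddle_serving_server_xpu-0.9.0.post2-py3-none-any.whl
```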
Kunlun Wheel package (v0.9.0) for the x86 CPU environment:
```shell
https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_xpu-0.9.0.post2-py3-none-any.whl
```
@@ -46,23 +46,22 @@ for kunlun user who uses arm-xpu or x86-xpu can download the wheel packages as f
### Wheel Package Links
For ARM Kunlun users:
```shell
# paddle-serving-server
wget https://paddle-serving.bj.bcebos.com/test-dev/whl/arm/paddle_serving_server_xpu-0.9.0.post2-py3-none-any.whl
# paddle-serving-client
wget https://paddle-serving.bj.bcebos.com/test-dev/whl/arm/paddle_serving_client-0.9.0-cp36-none-any.whl
# paddle-serving-app
wget https://paddle-serving.bj.bcebos.com/test-dev/whl/arm/paddle_serving_app-0.9.0-py3-none-any.whl
# SERVING BIN
wget https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-xpu-aarch64-0.9.0.tar.gz
```
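Once downloaded, the wheels can be installed from the local files; a sketch that assumes Python 3.6 and the file names above:
```shell
pip3.6 install paddle_serving_app-0.9.0-py3-none-any.whl
pip3.6 install paddle_serving_client-0.9.0-cp36-none-any.whl
pip3.6 install paddle_serving_server_xpu-0.9.0.post2-py3-none-any.whl
```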
For x86 XPU users, the wheel package is here:
```shell
https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_xpu-0.9.0.post2-py3-none-any.whl
```