Commit 9f4e7fe6 authored by H HydrogenSulfate

refine serving docs

Parent b574a47d
@@ -5,10 +5,10 @@
- [1. Introduction](#1)
- [2. Installation of Serving](#2)
- [3. Service Deployment for Image Classification](#3)
  - [3.1 Model Conversion](#3.1)
  - [3.2 Service Deployment and Request](#3.2)
- [4. Service Deployment for Image Recognition](#4)
  - [4.1 Model Conversion](#4.1)
  - [4.2 Service Deployment and Request](#4.2)
- [5. FAQ](#5)
@@ -24,7 +24,7 @@ This section, exemplified by HTTP deployment of prediction service, describes ho
It is officially recommended to use Docker for installing and deploying the Serving environment. First, pull the Docker image and create a container from it.
```shell
docker pull paddlepaddle/serving:0.7.0-cuda10.2-cudnn7-devel
nvidia-docker run -p 9292:9292 --name test -dit paddlepaddle/serving:0.7.0-cuda10.2-cudnn7-devel bash
nvidia-docker exec -it test bash
```
@@ -32,22 +32,22 @@ nvidia-docker exec -it test bash
Once inside the Docker container, install the Serving-related Python packages.
```shell
python3.7 -m pip install paddle-serving-client==0.7.0
python3.7 -m pip install paddle-serving-server==0.7.0 # CPU
python3.7 -m pip install paddle-serving-app==0.7.0
python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post102 # GPU with CUDA10.2 + TensorRT6
# For other GPU environments, confirm the CUDA and TensorRT versions before choosing which command to run
python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post101 # GPU with CUDA10.1 + TensorRT6
python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post112 # GPU with CUDA11.2 + TensorRT8
```
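If you are unsure which CUDA version the machine runs, and therefore which `post` suffix to install, you can query the GPU environment first; the commands below are standard NVIDIA tools and are assumed to be available inside the container.
```shell
# Show the driver version and the highest CUDA version it supports
nvidia-smi
# Show the CUDA toolkit version actually installed (if present)
nvcc --version
```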
- Speed up the installation by switching to a mirror with `-i https://pypi.tuna.tsinghua.edu.cn/simple`.
- For other environment configuration and installation, please refer to [Install Paddle Serving using docker](https://github.com/PaddlePaddle/Serving/blob/v0.7.0/doc/Install_EN.md)
- To deploy CPU services, install the CPU version of serving-server with the following command.
```shell
python3.7 -m pip install paddle-serving-server
```
<a name="3"></a>
@@ -85,7 +85,7 @@ When using PaddleServing for service deployment, you need to convert the saved i
```
The parameters in the above command are described in the following table:
| parameter | type | default value | description |
| --------- | ---- | ------------- | ----------- |
| `dirname` | str | - | The storage path of the model file to be converted. The program structure file and parameter file are saved in this directory. |
| `model_filename` | str | None | The name of the file storing the model Inference Program structure to be converted. If set to None, `__model__` is used as the default filename. |
| `params_filename` | str | None | The name of the file storing all parameters of the model to be converted. It needs to be specified if and only if all model parameters are stored in a single binary file. If the model parameters are stored in separate files, set it to None. |
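For reference, a typical invocation of the conversion tool is sketched below; the directory and file names (`ResNet50_vd_infer`, `inference.pdmodel`, `inference.pdiparams`, `ResNet50_serving`, `ResNet50_client`) are illustrative and must match your actual inference model.
```shell
# Sketch: convert an inference model into Serving server/client formats
python3.7 -m paddle_serving_client.convert \
    --dirname ./ResNet50_vd_infer/ \
    --model_filename inference.pdmodel \
    --params_filename inference.pdiparams \
    --serving_server ./ResNet50_serving/ \
    --serving_client ./ResNet50_client/
```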
@@ -156,6 +156,7 @@ test_cpp_serving_client.py # Script for sending C++ serving prediction requests
```log
{'err_no': 0, 'err_msg': '', 'key': ['label', 'prob'], 'value': ["['daisy']", '[0.9341402053833008]'], 'tensors ': []}
```
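For context, a prediction like the one above can be requested either with the bundled HTTP client script or with a plain HTTP POST; the script name `pipeline_http_client.py`, the port `18080`, and the `imagenet` route below follow the default pipeline configuration and may differ in your `config.yml`.
```shell
# Send a request with the provided client script (assumed filename)
python3.7 pipeline_http_client.py
# Or POST a base64-encoded image directly (port and route are assumptions)
curl -X POST http://127.0.0.1:18080/imagenet/prediction \
    -H 'Content-Type: application/json' \
    -d '{"key": ["image"], "value": ["'"$(base64 -w0 daisy.jpg)"'"]}'
```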
- Stop the service
If the service program is running in the foreground, press `Ctrl+C` to terminate it; if it is running in the background, use the `kill` command to close the related processes, or run the following command in the directory where the service was started:
```bash
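# Assumed stop command; the exact subcommand may vary across Serving versions
python3.7 -m paddle_serving_server.serve stop
```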
@@ -175,7 +176,7 @@ Different from Python Serving, the C++ Serving client calls C++ OP to predict, s
# One-click compile and install the Serving server, and set SERVING_BIN
source ./build_server.sh python3.7
```
**Note:** The paths set in [build_server.sh](../../../deploy/paddleserving/build_server.sh#L55-L62) may need to be adjusted to the actual machine environment (CUDA version, Python version, etc.) before compiling. If you encounter a non-network error while running build_server.sh, you can copy the commands in the script to the terminal and run them one by one.
- Modify the client file `ResNet50_client/serving_client_conf.prototxt`: change the value after `feed_type:` to 20, change the value after the first `shape:` to 1, and delete the remaining `shape` fields (a sketch of the resulting `feed_var` section follows the snippet below).
@@ -187,6 +188,7 @@ Different from Python Serving, the C++ Serving client calls C++ OP to predict, s
```log
shape: 1
}
```
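After the modification, the `feed_var` section typically looks like the following sketch; the `name` and `alias_name` values depend on the exported model and are shown here only for illustration.
```log
feed_var {
  name: "inputs"
  alias_name: "inputs"
  is_lod_feed: false
  feed_type: 20
  shape: 1
}
```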
- Modify part of the code of [`test_cpp_serving_client`](../../../deploy/paddleserving/test_cpp_serving_client.py)
  1. Modify the [`load_client_config`](../../../deploy/paddleserving/test_cpp_serving_client.py#L28) part of the code, and change the path after `load_client_config` to `ResNet50_client/serving_client_conf.prototxt`.
  2. Modify the [`feed={"inputs": image}`](../../../deploy/paddleserving/test_cpp_serving_client.py#L45) part of the code, and change `inputs` to match the `name` of the `feed_var` field in `ResNet50_client/serving_client_conf.prototxt`. Since `name` in some models' client files is `x` rather than `inputs`, pay attention to this when deploying those models with C++ Serving. After these changes, the client can be run as sketched below.
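With the C++ Serving server started and the client file modified, a prediction request can be issued from the `deploy/paddleserving` directory; this invocation is a sketch and assumes the test image path is configured inside the script.
```shell
# Run the C++ Serving client (assumed working directory: PaddleClas/deploy/paddleserving)
python3.7 test_cpp_serving_client.py
```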
@@ -250,7 +252,7 @@ When using PaddleServing for image recognition service deployment, **need to con
--serving_server ./general_PPLCNet_x2_5_lite_v1.0_serving/ \
--serving_client ./general_PPLCNet_x2_5_lite_v1.0_client/
```
The parameters of the above command have the same meaning as in [3.1 Model Conversion](#3.1)
After the recognition inference model is converted, the folders `general_PPLCNet_x2_5_lite_v1.0_serving/` and `general_PPLCNet_x2_5_lite_v1.0_client/` will appear in the current directory. Modify the `alias` names in `serving_server_conf.prototxt` under both directories: change `alias_name` in `fetch_var` to `features`. The modified `serving_server_conf.prototxt` looks like the sketch below.
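A sketch of the modified `fetch_var` section; the `name`, `fetch_type`, and `shape` values below are illustrative and depend on the exported model.
```log
fetch_var {
  name: "save_infer_model/scale_0.tmp_1"
  alias_name: "features"
  is_lod_fetch: true
  fetch_type: 1
  shape: 512
}
```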
@@ -294,7 +296,7 @@ When using PaddleServing for image recognition service deployment, **need to con
--serving_server ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/ \
--serving_client ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/
```
The parameters of the above command have the same meaning as in [3.1 Model Conversion](#3.1)
After the general detection inference model is converted, the folders `picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/` and `picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/` will appear in the current directory, with the following structure:
```shell
@@ -361,7 +363,7 @@ When using PaddleServing for image recognition service deployment, **need to con
```
After a successful run, the model prediction results will be printed in the terminal, as shown below:
```log
{'err_no': 0, 'err_msg': '', 'key': ['result'], 'value': ["[{'bbox': [345, 95, 524, 576], 'rec_docs': '红牛-强化型', 'rec_scores': 0.79903316}]"], 'tensors': []}
```
<a name="4.2.2"></a>
@@ -411,7 +413,7 @@ Different from Python Serving, the C++ Serving client calls C++ OP to predict, s
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0614 03:01:36.273097 6084 naming_service_thread.cpp:202] brpc::policy::ListNamingService("127.0.0.1:9400"): added 1
I0614 03:01:37.393564 6084 general_model.cpp:490] [client]logid=0,client_cost=1107.82ms,server_cost=1101.75ms.
[{'bbox': [345, 95, 524, 585], 'rec_docs': '红牛-强化型', 'rec_scores': 0.8073724}]
```
- Stop the service:
......
@@ -4,10 +4,10 @@
- [1. Introduction](#1)
- [2. Installation of Serving](#2)
- [3. Service Deployment for Image Classification](#3)
  - [3.1 Model Conversion](#3.1)
  - [3.2 Service Deployment and Request](#3.2)
    - [3.2.1 Python Serving](#3.2.1)
    - [3.2.2 C++ Serving](#3.2.2)
- [4. Service Deployment for Image Recognition](#4)
  - [4.1 Model Conversion](#4.1)
  - [4.2 Service Deployment and Request](#4.2)
@@ -39,21 +39,21 @@ docker exec -it test bash
After entering the Docker container, install the Serving-related Python packages.
```shell
python3.7 -m pip install paddle-serving-client==0.7.0
python3.7 -m pip install paddle-serving-app==0.7.0
python3.7 -m pip install faiss-cpu==1.7.1post2
# For a CPU deployment environment:
python3.7 -m pip install paddle-serving-server==0.7.0 # CPU
python3.7 -m pip install paddlepaddle==2.2.0 # CPU
# For a GPU deployment environment:
python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post102 # GPU with CUDA10.2 + TensorRT6
python3.7 -m pip install paddlepaddle-gpu==2.2.0 # GPU with CUDA10.2
# For other GPU environments, confirm the environment before choosing which command to run
python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post101 # GPU with CUDA10.1 + TensorRT6
python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post112 # GPU with CUDA11.2 + TensorRT8
```
* If the installation is too slow, switch to a mirror with `-i https://pypi.tuna.tsinghua.edu.cn/simple` to speed it up.
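To verify the installation, you can list the installed Paddle-related packages; this is a generic pip check rather than a PaddleClas-specific command.
```shell
# Confirm that the Serving packages and PaddlePaddle are installed
python3.7 -m pip list | grep -i paddle
```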
@@ -180,7 +180,7 @@ test_cpp_serving_client.py # Script for sending C++ serving prediction requests via rpc
# One-click compile and install the Serving server, and set SERVING_BIN
source ./build_server.sh python3.7
```
**Note:** The paths set in [build_server.sh](../../../deploy/paddleserving/build_server.sh#L55-L62) may need to be adjusted to the actual machine environment (CUDA version, Python version, etc.) before compiling. If you encounter a non-network error while running build_server.sh, you can copy the commands in the script to the terminal and run them one by one.
- Modify the client file `ResNet50_vd_client/serving_client_conf.prototxt`: change the value after `feed_type:` to 20, change the value after the first `shape:` to 1, and delete the remaining `shape` fields.
```log
@@ -256,7 +256,7 @@ test_cpp_serving_client.py # Script for sending C++ serving prediction requests via rpc
--serving_server ./general_PPLCNet_x2_5_lite_v1.0_serving/ \
--serving_client ./general_PPLCNet_x2_5_lite_v1.0_client/
```
The parameters of the above command have the same meaning as in [3.1 Model Conversion](#3.1)
After the general recognition inference model is converted, the folders `general_PPLCNet_x2_5_lite_v1.0_serving/` and `general_PPLCNet_x2_5_lite_v1.0_client/` will appear in the current directory, with the following structure:
```shell
├── general_PPLCNet_x2_5_lite_v1.0_serving/
@@ -278,7 +278,7 @@ test_cpp_serving_client.py # Script for sending C++ serving prediction requests via rpc
--serving_server ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/ \
--serving_client ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/
```
The parameters of the above command have the same meaning as in [3.1 Model Conversion](#3.1)
After the recognition inference model is converted, the folders `general_PPLCNet_x2_5_lite_v1.0_serving/` and `general_PPLCNet_x2_5_lite_v1.0_client/` will appear in the current directory. Modify the `alias` names in `serving_server_conf.prototxt` under both directories: change `alias_name` in `fetch_var` to `features`. The modified `serving_server_conf.prototxt` reads as follows:
@@ -381,7 +381,7 @@ test_cpp_serving_client.py # Script for sending C++ serving prediction requests via rpc
```
**Note:** The paths set in [build_server.sh](../../../deploy/paddleserving/build_server.sh#L55-L62) may need to be adjusted to the actual machine environment (CUDA version, Python version, etc.) before compiling.
- The input and output formats used by C++ Serving differ from those of Python Serving, so the following commands must be run to copy the 4 prototxt files provided for C++ Serving over the corresponding 4 prototxt files in the folders obtained in [3.1](#31-模型转换).
```shell
# Enter the PaddleClas/deploy directory
cd PaddleClas/deploy/
......