Unverified commit d486dee2, authored by HydrogenSulfate, committed by GitHub

Merge pull request #1975 from HydrogenSulfate/polish_serving_docs

Polish serving docs
......@@ -78,7 +78,7 @@ PP-ShiTu图像识别快速体验:[点击这里](./docs/zh_CN/quick_start/quick
- 推理部署
- [基于python预测引擎推理](docs/zh_CN/inference_deployment/python_deploy.md#1)
- [基于C++预测引擎推理](docs/zh_CN/inference_deployment/cpp_deploy.md)
- [服务化部署](docs/zh_CN/inference_deployment/classification_serving_deploy.md)
- [端侧部署](docs/zh_CN/inference_deployment/paddle_lite_deploy.md)
- [Paddle2ONNX模型转化与预测](deploy/paddle2onnx/readme.md)
- [模型压缩](deploy/slim/README.md)
......@@ -93,7 +93,7 @@ PP-ShiTu图像识别快速体验:[点击这里](./docs/zh_CN/quick_start/quick
- 推理部署
- [基于python预测引擎推理](docs/zh_CN/inference_deployment/python_deploy.md#2)
- [基于C++预测引擎推理](deploy/cpp_shitu/readme.md)
- [服务化部署](docs/zh_CN/inference_deployment/recognition_serving_deploy.md)
- [端侧部署](deploy/lite_shitu/README.md)
- PP系列骨干网络模型
- [PP-HGNet](docs/zh_CN/models/PP-HGNet.md)
简体中文 | [English](readme_en.md)
# 基于 PaddleHub Serving 的服务部署
PaddleClas 支持通过 PaddleHub 快速进行服务化部署。目前支持图像分类的部署,图像识别的部署敬请期待。
## 目录
- [1. 简介](#1-简介)
- [2. 准备环境](#2-准备环境)
- [3. 下载推理模型](#3-下载推理模型)
- [4. 安装服务模块](#4-安装服务模块)
- [5. 启动服务](#5-启动服务)
- [5.1 命令行启动](#51-命令行启动)
- [5.2 配置文件启动](#52-配置文件启动)
- [6. 发送预测请求](#6-发送预测请求)
- [7. 自定义修改服务模块](#7-自定义修改服务模块)
<a name="1"></a>
## 1. 简介
hubserving 服务部署配置服务包 `clas` 下包含 3 个必选文件,目录如下:
```shell
deploy/hubserving/clas/
├── __init__.py # 空文件,必选
├── config.json # 配置文件,可选,使用配置启动服务时作为参数传入
├── module.py # 主模块,必选,包含服务的完整逻辑
└── params.py # 参数文件,必选,包含模型路径、前后处理参数等参数
```
<a name="2"></a>
## 2. 准备环境
```shell
# 安装 paddlehub,建议安装 2.1.0 版本
python3.7 -m pip install paddlehub==2.1.0 --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple
```
<a name="3"></a>
## 3. 下载推理模型
安装服务模块前,需要准备推理模型并放到正确路径,默认模型路径为:
* 分类推理模型结构文件:`PaddleClas/inference/inference.pdmodel`
* 分类推理模型权重文件:`PaddleClas/inference/inference.pdiparams`
**注意**:
* 模型文件路径可在 `PaddleClas/deploy/hubserving/clas/params.py` 中查看和修改:
```python
"inference_model_dir": "../inference/"
```
* 模型文件(包括 `.pdmodel` 与 `.pdiparams`)的名称必须为 `inference`。
* 我们提供了大量基于 ImageNet-1k 数据集的预训练模型,模型列表及下载地址详见[模型库概览](../../docs/zh_CN/algorithm_introduction/ImageNet_models.md),也可以使用自己训练转换好的模型。
<a name="4"></a>
## 4. 安装服务模块
* 在 Linux 环境下,安装示例如下:
```shell
cd PaddleClas/deploy
# 安装服务模块:
hub install hubserving/clas/
```
* 在 Windows 环境下(文件夹的分隔符为`\`),安装示例如下:
```shell
cd PaddleClas\deploy
# 安装服务模块:
hub install hubserving\clas\
```
<a name="5"></a>
## 5. 启动服务
<a name="5.1"></a>
### 5.1 命令行启动
该方式仅支持使用 CPU 预测。启动命令:
```shell
hub serving start \
--modules clas_system \
--port 8866
```
这样就完成了一个服务化 API 的部署,使用默认端口号 8866。
**参数说明**:
| 参数 | 用途 |
| ------------------ | ----------------------------------------------------------------------------------------------------------------------------- |
| --modules/-m | [**必选**] PaddleHub Serving 预安装模型,以多个 Module==Version 键值对的形式列出<br>*`当不指定 Version 时,默认选择最新版本`* |
| --port/-p | [**可选**] 服务端口,默认为 8866 |
| --use_multiprocess | [**可选**] 是否启用并发方式,默认为单进程方式,推荐多核 CPU 机器使用此方式<br>*`Windows 操作系统只支持单进程方式`* |
| --workers | [**可选**] 在并发方式下指定的并发任务数,默认为 `2*cpu_count-1`,其中 `cpu_count` 为 CPU 核数 |
更多部署细节详见 [PaddleHub Serving模型一键服务部署](https://paddlehub.readthedocs.io/zh_CN/release-v2.1/tutorial/serving.html)
<a name="5.2"></a>
### 5.2 配置文件启动
该方式仅支持使用 CPU 或 GPU 预测。启动命令:
```shell
hub serving start -c config.json
```
其中,`config.json` 格式如下:
```json
{
"modules_info": {
......@@ -97,92 +131,109 @@ $ hub serving start --modules Module1==Version1 \
}
```
**参数说明**:
* `init_args` 中的可配参数与 `module.py` 中的 `_initialize` 函数接口一致。其中,
- 当 `use_gpu` 为 `true` 时,表示使用 GPU 启动服务。
- 当 `enable_mkldnn` 为 `true` 时,表示使用 MKL-DNN 加速。
* `predict_args` 中的可配参数与 `module.py` 中的 `predict` 函数接口一致。
**注意**:
* 使用配置文件启动服务时,将使用配置文件中的参数设置,其他命令行参数将被忽略;
* 如果使用 GPU 预测(即,`use_gpu` 置为 `true`),则需要在启动服务之前,设置 `CUDA_VISIBLE_DEVICES` 环境变量来指定所使用的 GPU 卡号,如:`export CUDA_VISIBLE_DEVICES=0`;
* **`use_gpu` 不可与 `use_multiprocess` 同时为 `true`**;
* **`use_gpu` 与 `enable_mkldnn` 同时为 `true` 时,将忽略 `enable_mkldnn`,而使用 GPU**。
如使用 GPU 3 号卡启动服务:
```shell
cd PaddleClas/deploy
export CUDA_VISIBLE_DEVICES=3
hub serving start -c hubserving/clas/config.json
```
<a name="6"></a>
## 6. 发送预测请求
配置好服务端后,可使用以下命令发送预测请求,获取预测结果:
```shell
cd PaddleClas/deploy
python3.7 hubserving/test_hubserving.py \
--server_url http://127.0.0.1:8866/predict/clas_system \
--image_file ./hubserving/ILSVRC2012_val_00006666.JPEG \
--batch_size 8
```
**预测输出**
```log
The result(s): class_ids: [57, 67, 68, 58, 65], label_names: ['garter snake, grass snake', 'diamondback, diamondback rattlesnake, Crotalus adamanteus', 'sidewinder, horned rattlesnake, Crotalus cerastes', 'water snake', 'sea snake'], scores: [0.21915, 0.15631, 0.14794, 0.13177, 0.12285]
The average time of prediction cost: 2.970 s/image
The average time cost: 3.014 s/image
The average top-1 score: 0.110
```
**脚本参数说明**:
* **server_url**:服务地址,格式为`http://[ip_address]:[port]/predict/[module_name]`。
* **image_path**:测试图像路径,可以是单张图片路径,也可以是图像集合目录路径。
* **batch_size**:[**可选**] 以 `batch_size` 大小为单位进行预测,默认为 `1`。
* **resize_short**:[**可选**] 预处理时,按短边调整大小,默认为 `256`。
* **crop_size**:[**可选**] 预处理时,居中裁剪的大小,默认为 `224`。
* **normalize**:[**可选**] 预处理时,是否进行 `normalize`,默认为 `True`。
* **to_chw**:[**可选**] 预处理时,是否调整为 `CHW` 顺序,默认为 `True`。
**注意**:如果使用 `Transformer` 系列模型,如 `DeiT_***_384`, `ViT_***_384` 等,请注意模型的输入数据尺寸,需要指定`--resize_short=384 --crop_size=384`。
**返回结果格式说明**:
返回结果为列表(list),包含 top-k 个分类结果,以及对应的得分,还有此图片预测耗时,具体如下:
```
list: 返回结果
└─list: 第一张图片结果
├── list: 前 k 个分类结果,依 score 递减排序
├── list: 前 k 个分类结果对应的 score,依 score 递减排序
└─ float: 该图分类耗时,单位秒
```
**说明:** 如果需要增加、删除、修改返回字段,可对相应模块进行修改,完整流程参考下一节自定义修改服务模块。
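下面给出一个简要的解析示例(仅为示意代码,并非仓库中的实现;`server_response` 为假设的变量名),演示客户端如何按上述结构取出第一张图片的结果:
```python
# 示意代码:按上述返回结构解析第一张图片的结果(数据为假设值)
server_response = [
    [["daisy", "rose"], [0.934, 0.021], 0.056],  # 第一张图片:前 k 个类别、对应得分、耗时(秒)
]
top_k_names, top_k_scores, cost_seconds = server_response[0]
for name, score in zip(top_k_names, top_k_scores):
    print(f"{name}: {score:.5f}")
print(f"该图分类耗时:{cost_seconds:.3f} 秒")
```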
<a name="7"></a>
## 7. 自定义修改服务模块
如果需要修改服务逻辑,需要进行以下操作:
1. 停止服务
```shell
hub serving stop --port/-p XXXX
```
2. 到相应的 `module.py` 和 `params.py` 等文件中根据实际需求修改代码。`module.py` 修改后需要重新安装(`hub install hubserving/clas/`)并部署。在进行部署前,可先通过 `python3.7 hubserving/clas/module.py` 命令来快速测试准备部署的代码。
3. 卸载旧服务包
```shell
hub uninstall clas_system
```
4. 安装修改后的新服务包
```shell
hub install hubserving/clas/
```
5. 重新启动服务
```shell
hub serving start -m clas_system
```
**注意**:
常用参数可在 `PaddleClas/deploy/hubserving/clas/params.py` 中修改:
* 更换模型,需要修改模型文件路径参数:
```python
"inference_model_dir":
```
* 更改后处理时返回的 `top-k` 结果数量:
```python
'topk':
```
* 更改后处理时的 label 与 class id 对应映射文件:
```python
'class_id_map_file':
```
为了避免不必要的延时以及能够以 batch_size 进行预测,数据预处理逻辑(包括 `resize`、`crop` 等操作)均在客户端完成,因此需要在 [PaddleClas/deploy/hubserving/test_hubserving.py#L41-L47](./test_hubserving.py#L41-L47) 以及 [PaddleClas/deploy/hubserving/test_hubserving.py#L51-L76](./test_hubserving.py#L51-L76) 中修改数据预处理逻辑相关代码
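下面给出一个简化的示意实现(并非仓库中 test_hubserving.py 的原始代码),按上文脚本参数的默认值展示客户端预处理(按短边缩放、居中裁剪、normalize、转为 CHW)的大致流程;其中均值、方差取常见的 ImageNet 数值,属于假设:
```python
import numpy as np
from PIL import Image

def preprocess(img_path, resize_short=256, crop_size=224, normalize=True, to_chw=True):
    """示意实现:按短边缩放 -> 居中裁剪 -> normalize -> HWC 转 CHW。"""
    img = Image.open(img_path).convert("RGB")
    w, h = img.size
    scale = resize_short / min(w, h)                       # 短边缩放到 resize_short
    img = img.resize((round(w * scale), round(h * scale)))
    w, h = img.size
    left, top = (w - crop_size) // 2, (h - crop_size) // 2
    img = img.crop((left, top, left + crop_size, top + crop_size))  # 居中裁剪
    arr = np.asarray(img).astype("float32") / 255.0
    if normalize:
        # 假设使用常见的 ImageNet 均值与方差,具体数值以仓库实现为准
        arr = (arr - np.array([0.485, 0.456, 0.406])) / np.array([0.229, 0.224, 0.225])
    if to_chw:
        arr = arr.transpose((2, 0, 1))
    return arr
```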
English | [简体中文](readme.md)
# Service deployment based on PaddleHub Serving
PaddleClas supports rapid service deployment through PaddleHub. Currently, the deployment of image classification is supported. Please look forward to the deployment of image recognition.
## Catalogue
- [1 Introduction](#1-introduction)
- [2. Prepare the environment](#2-prepare-the-environment)
- [3. Download the inference model](#3-download-the-inference-model)
- [4. Install the service module](#4-install-the-service-module)
- [5. Start service](#5-start-service)
- [5.1 Start with command line parameters](#51-start-with-command-line-parameters)
- [5.2 Start with configuration file](#52-start-with-configuration-file)
- [6. Send prediction requests](#6-send-prediction-requests)
- [7. User defined service module modification](#7-user-defined-service-module-modification)
<a name="1"></a>
## 1 Introduction
The hubserving service deployment configuration service package `clas` contains 3 required files, the directories are as follows:
```shell
deploy/hubserving/clas/
├── __init__.py # Empty file, required
├── config.json # Configuration file, optional, passed in as a parameter when starting the service with configuration
├── module.py # The main module, required, contains the complete logic of the service
└── params.py # Parameter file, required, including model path, pre- and post-processing parameters and other parameters
```
<a name="2"></a>
## 2. Prepare the environment
```shell
# Install paddlehub, version 2.1.0 is recommended
python3.7 -m pip install paddlehub==2.1.0 --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple
```
<a name="3"></a>
## 3. Download the inference model
Before installing the service module, you need to prepare the inference model and put it in the correct path. The default model path is:
* Classification inference model structure file: `PaddleClas/inference/inference.pdmodel`
* Classification inference model weight file: `PaddleClas/inference/inference.pdiparams`
**Notice**:
* Model file paths can be viewed and modified in `PaddleClas/deploy/hubserving/clas/params.py`:
```python
"inference_model_dir": "../inference/"
```
* Model files (including `.pdmodel` and `.pdiparams`) must be named `inference`.
* We provide a large number of pre-trained models based on the ImageNet-1k dataset. For the model list and download address, see [Model Library Overview](../../docs/en/algorithm_introduction/ImageNet_models_en.md), or you can use your own trained and converted models.
<a name="4"></a>
## 4. Install the service module
* In the Linux environment, the installation example is as follows:
```shell
cd PaddleClas/deploy
# Install the service module:
hub install hubserving/clas/
```
* In the Windows environment (the folder separator is `\`), the installation example is as follows:
```shell
cd PaddleClas\deploy
# Install the service module:
hub install hubserving\clas\
```
<a name="5"></a>
## 5. Start service
<a name="5.1"></a>
### 5.1 Start with command line parameters
This method only supports prediction using CPU. Start command:
```shell
hub serving start \
--modules clas_system \
--port 8866
```
This completes the deployment of a service API, using the default port number 8866.
**Parameter Description**:
| parameters | uses |
| ------------------ | ------------------- |
| --modules/-m | [**required**] PaddleHub Serving pre-installed model, listed in the form of multiple Module==Version key-value pairs<br>*`When no Version is specified, the latest version is selected by default`* |
| --port/-p | [**OPTIONAL**] Service port, default is 8866 |
| --use_multiprocess | [**Optional**] Whether to enable the concurrent mode, the default is single-process mode, it is recommended to use this mode for multi-core CPU machines<br>*`Windows operating system only supports single-process mode`* |
| --workers | [**Optional**] The number of concurrent tasks specified in concurrent mode, the default is `2*cpu_count-1`, where `cpu_count` is the number of CPU cores |
For more deployment details, see [PaddleHub Serving Model One-Click Service Deployment](https://paddlehub.readthedocs.io/zh_CN/release-v2.1/tutorial/serving.html)
<a name="5.2"></a>
### 5.2 Start with configuration file
This method only supports prediction using CPU or GPU. Start command:
```shell
hub serving start -c config.json
```
Among them, the format of `config.json` is as follows:
```json
{
"modules_info": {
......@@ -96,104 +129,110 @@ Wherein, the format of `config.json` is as follows:
"workers": 2
}
```
**Parameter Description**:
* The configurable parameters in `init_args` are consistent with the `_initialize` function interface in `module.py`. Among them,
- When `use_gpu` is `true`, it means to use GPU to start the service.
- When `enable_mkldnn` is `true`, it means to use MKL-DNN acceleration.
* The configurable parameters in `predict_args` are consistent with the `predict` function interface in `module.py`.
**Notice**:
* When using the configuration file to start the service, the parameter settings in the configuration file will be used, and other command line parameters will be ignored;
* If you use GPU prediction (ie, `use_gpu` is set to `true`), you need to set the `CUDA_VISIBLE_DEVICES` environment variable to specify the GPU card number used before starting the service, such as: `export CUDA_VISIBLE_DEVICES=0`;
* **`use_gpu` and `use_multiprocess` cannot both be `true` at the same time**;
* **When both `use_gpu` and `enable_mkldnn` are `true`, `enable_mkldnn` will be ignored and the GPU will be used**.
If you use GPU No. 3 card to start the service:
```shell
cd PaddleClas/deploy
export CUDA_VISIBLE_DEVICES=3
hub serving start -c hubserving/clas/config.json
```
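Putting the parameters described above together, the snippet below is a hypothetical sketch (written as a Python dict, not the repository's actual `config.json`); the key names come from the description above and the values are placeholders:
```python
# Hypothetical illustration only -- consult deploy/hubserving/clas/config.json for the real file.
hub_serving_config = {
    "modules_info": {
        "clas_system": {
            "init_args": {"use_gpu": False, "enable_mkldnn": False},  # mirrors _initialize() in module.py
            "predict_args": {},                                       # mirrors predict() in module.py
        }
    },
    "port": 8866,
    "use_multiprocess": False,
    "workers": 2,
}
```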
<a name="6"></a>
## 6. Send prediction requests
After configuring the server, you can use the following command to send a prediction request to get the prediction result:
```shell
cd PaddleClas/deploy
python3.7 hubserving/test_hubserving.py \
--server_url http://127.0.0.1:8866/predict/clas_system \
--image_file ./hubserving/ILSVRC2012_val_00006666.JPEG \
--batch_size 8
```
**Predicted output**
```log
The result(s): class_ids: [57, 67, 68, 58, 65], label_names: ['garter snake, grass snake', 'diamondback, diamondback rattlesnake, Crotalus adamanteus', 'sidewinder, horned rattlesnake, Crotalus cerastes', 'water snake', 'sea snake'], scores: [0.21915, 0.15631, 0.14794, 0.13177, 0.12285]
The average time of prediction cost: 2.970 s/image
The average time cost: 3.014 s/image
The average top-1 score: 0.110
```
**Script parameter description**:
* **server_url**: Service address, the format is `http://[ip_address]:[port]/predict/[module_name]`.
* **image_path**: The test image path, which can be a single image path or an image collection directory path.
* **batch_size**: [**OPTIONAL**] Make predictions in `batch_size` size, default is `1`.
* **resize_short**: [**optional**] When preprocessing, resize by short edge, default is `256`.
* **crop_size**: [**Optional**] The size of the center crop during preprocessing, the default is `224`.
* **normalize**: [**Optional**] Whether to perform `normalize` during preprocessing, the default is `True`.
* **to_chw**: [**Optional**] Whether to adjust to `CHW` order when preprocessing, the default is `True`.
**Note**: If you use `Transformer` series models, such as `DeiT_***_384`, `ViT_***_384`, etc., please pay attention to the input data size of the model; you need to specify `--resize_short=384 --crop_size=384`.
**Return result format description**:
The returned result is a list (list), including the top-k classification results, the corresponding scores, and the time-consuming prediction of this image, as follows:
```shell
list: return result
└──list: first image result
├── list: the top k classification results, sorted in descending order of score
├── list: the scores corresponding to the first k classification results, sorted in descending order of score
└── float: The image classification time, in seconds
```
**Note:** If you need to add, delete or modify the returned fields, you can modify the corresponding module. For the details, refer to the user-defined modification service module in the next section.
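As a small illustration (not code from the repository), a response with the structure above could be unpacked on the client side like this; `server_response` is a hypothetical variable:
```python
# Hypothetical example of unpacking one image's result from the structure described above.
server_response = [
    [["daisy", "rose"], [0.934, 0.021], 0.056],  # first image: top-k names, top-k scores, time cost (s)
]
top_k_names, top_k_scores, cost_seconds = server_response[0]
for name, score in zip(top_k_names, top_k_scores):
    print(f"{name}: {score:.5f}")
print(f"prediction took {cost_seconds:.3f} s")
```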
<a name="7"></a>
## 7. User defined service module modification
If you need to modify the service logic, you need to do the following:
1. Stop the service
```shell
hub serving stop --port/-p XXXX
```
2. Go to the corresponding `module.py` and `params.py` and other files to modify the code according to actual needs. `module.py` needs to be reinstalled after modification (`hub install hubserving/clas/`) and deployed. Before deploying, you can use the `python3.7 hubserving/clas/module.py` command to quickly test the code ready for deployment.
3. Uninstall the old service pack
```shell
hub uninstall clas_system
```
4. Install the new modified service pack
```shell
hub install hubserving/clas/
```
5. Restart the service
```shell
hub serving start -m clas_system
```
**Notice**:
Common parameters can be modified in `PaddleClas/deploy/hubserving/clas/params.py`:
* To replace the model, you need to modify the model file path parameters:
```python
"inference_model_dir":
```
* Change the number of `top-k` results returned when postprocessing:
```python
'topk':
```
* The label-to-class-id mapping file used in post-processing:
```python
'class_id_map_file':
```
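Taken together, the entries above might look roughly like the following sketch (a hedged illustration rather than the verbatim contents of `params.py`; the label-map path is an assumption):
```python
# Hedged sketch of the parameters discussed above, not the verbatim params.py.
cfg = {
    "inference_model_dir": "../inference/",  # folder holding inference.pdmodel / inference.pdiparams
    "topk": 5,                               # number of top-k results returned after post-processing
    "class_id_map_file": "../ppcls/utils/imagenet1k_label_list.txt",  # assumed label/class-id map path
}
```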
In order to avoid unnecessary latency and to allow prediction with a configurable batch_size, the data preprocessing logic (including `resize`, `crop` and other operations) is completed on the client side, so the related preprocessing code needs to be modified in [PaddleClas/deploy/hubserving/test_hubserving.py#L41-L47](./test_hubserving.py#L41-L47) and [PaddleClas/deploy/hubserving/test_hubserving.py#L51-L76](./test_hubserving.py#L51-L76).
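For reference, a minimal sketch of that client-side preprocessing, using the default values listed in section 6 (an illustration, not the repository's exact implementation):
```python
import numpy as np
from PIL import Image

def preprocess(img_path, resize_short=256, crop_size=224, normalize=True, to_chw=True):
    """Sketch of the client-side preprocessing: resize by short edge, center crop, normalize, HWC->CHW."""
    img = Image.open(img_path).convert("RGB")
    w, h = img.size
    scale = resize_short / min(w, h)                      # resize so the short edge equals resize_short
    img = img.resize((round(w * scale), round(h * scale)))
    w, h = img.size
    left, top = (w - crop_size) // 2, (h - crop_size) // 2
    img = img.crop((left, top, left + crop_size, top + crop_size))  # center crop
    arr = np.asarray(img).astype("float32") / 255.0
    if normalize:
        # assumed ImageNet mean/std; the exact values live in the repository's client script
        arr = (arr - np.array([0.485, 0.456, 0.406])) / np.array([0.229, 0.224, 0.225])
    if to_chw:
        arr = arr.transpose((2, 0, 1))
    return arr
```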
../../docs/zh_CN/inference_deployment/classification_serving_deploy.md
\ No newline at end of file
../../docs/en/inference_deployment/classification_serving_deploy_en.md
\ No newline at end of file
../../../docs/zh_CN/inference_deployment/recognition_serving_deploy.md
\ No newline at end of file
../../../docs/en/inference_deployment/recognition_serving_deploy_en.md
\ No newline at end of file
gpu_id=$1
# PP-ShiTu CPP serving script
if [[ -n "${gpu_id}" ]]; then
nohup python3.7 -m paddle_serving_server.serve \
--model ../../models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving ../../models/general_PPLCNet_x2_5_lite_v1.0_serving \
--op GeneralPicodetOp GeneralFeatureExtractOp \
--port 9400 --gpu_id="${gpu_id}" > log_PPShiTu.txt 2>&1 &
else
nohup python3.7 -m paddle_serving_server.serve \
--model ../../models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving ../../models/general_PPLCNet_x2_5_lite_v1.0_serving \
--op GeneralPicodetOp GeneralFeatureExtractOp \
--port 9400 > log_PPShiTu.txt 2>&1 &
fi
gpu_id=$1
# ResNet50_vd CPP serving script
if [[ -n "${gpu_id}" ]]; then
nohup python3.7 -m paddle_serving_server.serve \
--model ./ResNet50_vd_serving \
--op GeneralClasOp \
--port 9292 &
else
nohup python3.7 -m paddle_serving_server.serve \
--model ./ResNet50_vd_serving \
--op GeneralClasOp \
--port 9292 --gpu_id="${gpu_id}" &
fi
English | [简体中文](../../zh_CN/inference_deployment/classification_serving_deploy.md)
# Classification model service deployment
## Table of contents
- [1 Introduction](#1-introduction)
- [2. Serving installation](#2-serving-installation)
- [3. Image Classification Service Deployment](#3-image-classification-service-deployment)
- [3.1 Model conversion](#31-model-conversion)
- [3.2 Service deployment and request](#32-service-deployment-and-request)
- [3.2.1 Python Serving](#321-python-serving)
- [3.2.2 C++ Serving](#322-c-serving)
<a name="1"></a>
## 1 Introduction
[Paddle Serving](https://github.com/PaddlePaddle/Serving) aims to help deep learning developers easily deploy online prediction services. It supports one-click deployment of industrial-grade services, highly concurrent and efficient communication between client and server, and client development in multiple programming languages.
This section takes HTTP prediction service deployment as an example to introduce how to deploy a PaddleClas model service with PaddleServing. Currently, only deployment on the Linux platform is supported; the Windows platform is not supported.
<a name="2"></a>
## 2. Serving installation
The Serving official website recommends using Docker to install and deploy the Serving environment. First, pull the Docker image and create a Serving-based container.
```shell
# start GPU docker
docker pull paddlepaddle/serving:0.7.0-cuda10.2-cudnn7-devel
nvidia-docker run -p 9292:9292 --name test -dit paddlepaddle/serving:0.7.0-cuda10.2-cudnn7-devel bash
nvidia-docker exec -it test bash
# start CPU docker
docker pull paddlepaddle/serving:0.7.0-devel
docker run -p 9292:9292 --name test -dit paddlepaddle/serving:0.7.0-devel bash
docker exec -it test bash
```
After entering docker, you need to install Serving-related python packages.
```shell
python3.7 -m pip install paddle-serving-client==0.7.0
python3.7 -m pip install paddle-serving-app==0.7.0
python3.7 -m pip install faiss-cpu==1.7.1post2
#If it is a CPU deployment environment:
python3.7 -m pip install paddle-serving-server==0.7.0 #CPU
python3.7 -m pip install paddlepaddle==2.2.0 # CPU
#If it is a GPU deployment environment
python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post102 # GPU with CUDA10.2 + TensorRT6
python3.7 -m pip install paddlepaddle-gpu==2.2.0 # GPU with CUDA10.2
#Other GPU environments need to confirm the environment and then choose which one to execute
python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post101 # GPU with CUDA10.1 + TensorRT6
python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post112 # GPU with CUDA11.2 + TensorRT8
```
* If the installation speed is too slow, you can change the source through `-i https://pypi.tuna.tsinghua.edu.cn/simple` to speed up the installation process.
* For other environment configuration installation, please refer to: [Install Paddle Serving with Docker](https://github.com/PaddlePaddle/Serving/blob/v0.7.0/doc/Install_EN.md)
<a name="3"></a>
## 3. Image Classification Service Deployment
The following takes the classic ResNet50_vd model as an example to introduce how to deploy the image classification service.
<a name="3.1"></a>
### 3.1 Model conversion
When using PaddleServing for service deployment, you need to convert the saved inference model into a Serving model.
- Go to the working directory:
```shell
cd deploy/paddleserving
```
- Download and unzip the inference model for ResNet50_vd:
```shell
# Download ResNet50_vd inference model
wget -nc https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/ResNet50_vd_infer.tar
# Decompress the ResNet50_vd inference model
tar xf ResNet50_vd_infer.tar
```
- Use the paddle_serving_client command to convert the downloaded inference model into a model format for easy server deployment:
```shell
# Convert ResNet50_vd model
python3.7 -m paddle_serving_client.convert \
--dirname ./ResNet50_vd_infer/ \
--model_filename inference.pdmodel \
--params_filename inference.pdiparams \
--serving_server ./ResNet50_vd_serving/ \
--serving_client ./ResNet50_vd_client/
```
The specific meaning of the parameters in the above command is shown in the following table
| parameter | type | default value | description |
| --------- | ---- | ------------- | ----------- |
| `dirname` | str | - | The storage path of the model file to be converted. The program structure file and parameter file are saved in this directory. |
| `model_filename` | str | None | The name of the file storing the model Inference Program structure that needs to be converted. If set to None, use `__model__` as the default filename |
| `params_filename` | str | None | File name where all parameters of the model to be converted are stored. It needs to be specified if and only if all model parameters are stored in a single binary file. If the model parameters are stored in separate files, set it to None |
| `serving_server` | str | `"serving_server"` | The storage path of the converted model files and configuration files. Default is serving_server |
| `serving_client` | str | `"serving_client"` | The converted client configuration file storage path. Default is serving_client |
After the ResNet50_vd inference model conversion is completed, there will be additional `ResNet50_vd_serving` and `ResNet50_vd_client` folders in the current folder, with the following structure:
```shell
├── ResNet50_vd_serving/
│ ├── inference.pdiparams
│ ├── inference.pdmodel
│ ├── serving_server_conf.prototxt
│ └── serving_server_conf.stream.prototxt
└── ResNet50_vd_client/
├── serving_client_conf.prototxt
└── serving_client_conf.stream.prototxt
```
- In order to be compatible with the deployment of different models, Serving provides input/output renaming: when a different model is deployed for inference, only the `alias_name` in the configuration file needs to be modified, and the deployment can be completed without changing the code. Therefore, after the conversion, modify the alias names in the `serving_server_conf.prototxt` files under `ResNet50_vd_serving` and `ResNet50_vd_client` respectively, changing the `alias_name` in `fetch_var` to `prediction`. The modified `serving_server_conf.prototxt` looks like this:
```log
feed_var {
name: "inputs"
alias_name: "inputs"
is_lod_tensor: false
feed_type: 1
shape: 3
shape: 224
shape: 224
}
fetch_var {
name: "save_infer_model/scale_0.tmp_1"
alias_name: "prediction"
is_lod_tensor: false
fetch_type: 1
shape: 1000
}
```
<a name="3.2"></a>
### 3.2 Service deployment and request
The paddleserving directory contains the code for starting the pipeline service, the C++ serving service and sending the prediction request, mainly including:
```shell
__init__.py
classification_web_service.py # Script to start the pipeline server
config.yml # Configuration file to start the pipeline service
pipeline_http_client.py # Script for sending pipeline prediction requests in http mode
pipeline_rpc_client.py # Script for sending pipeline prediction requests in rpc mode
readme.md # Classification model service deployment document
run_cpp_serving.sh # Script to start the C++ Serving deployment
test_cpp_serving_client.py # Script for sending C++ serving prediction requests in rpc mode
```
<a name="3.2.1"></a>
#### 3.2.1 Python Serving
- Start the service:
```shell
# Start the service and save the running log in log.txt
python3.7 classification_web_service.py &>log.txt &
```
- send request:
```shell
# send service request
python3.7 pipeline_http_client.py
```
After a successful run, the results of the model prediction are printed in the terminal as follows (a minimal sketch of such a request is shown at the end of this subsection):
```log
{'err_no': 0, 'err_msg': '', 'key': ['label', 'prob'], 'value': ["['daisy']", '[0.9341402053833008]'], 'tensors': []}
```
- turn off the service
If the service program is running in the foreground, you can press `Ctrl+C` to terminate the server program; if it is running in the background, you can use the kill command to close related processes, or you can execute the following command in the path where the service program is started to terminate the server program:
```bash
python3.7 -m paddle_serving_server.serve stop
```
After the execution is completed, the `Process stopped` message appears, indicating that the service was successfully shut down.
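For reference, the request sent by `pipeline_http_client.py` boils down to a single JSON POST. The sketch below is an illustration only: the port `18080` and route name `imagenet` are assumptions that must match whatever `config.yml` actually configures.
```python
# Hedged sketch of a pipeline HTTP request; the URL is an assumption based on config.yml.
import base64
import requests

url = "http://127.0.0.1:18080/imagenet/prediction"  # assumed http_port and service name
with open("daisy.jpg", "rb") as f:                   # hypothetical test image
    image_b64 = base64.b64encode(f.read()).decode("utf8")

payload = {"key": ["image"], "value": [image_b64]}   # pipeline web service request format
resp = requests.post(url, json=payload)
print(resp.json())  # e.g. {'err_no': 0, ..., 'value': ["['daisy']", ...]}
```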
<a name="3.2.2"></a>
#### 3.2.2 C++ Serving
Different from Python Serving, the C++ Serving client calls C++ OP to predict, so before starting the service, you need to compile and install the serving server package, and set `SERVING_BIN`.
- Compile and install the Serving server package
```shell
# Enter the working directory
cd PaddleClas/deploy/paddleserving
# One-click compile and install Serving server, set SERVING_BIN
source ./build_server.sh python3.7
```
**Note:** The paths set in [build_server.sh](./build_server.sh#L55-L62) may need to be adjusted for the actual machine environment (CUDA version, Python version, etc.) before compiling.
- Modify the client file `ResNet50_client/serving_client_conf.prototxt`: change the field after `feed_type:` to 20, change the field after the first `shape:` to 1, and delete the rest of the `shape` fields.
```log
feed_var {
name: "inputs"
alias_name: "inputs"
is_lod_tensor: false
feed_type: 20
shape: 1
}
```
- Modify part of the code of [`test_cpp_serving_client`](./test_cpp_serving_client.py)
1. Modify the [`feed={"inputs": image}`](./test_cpp_serving_client.py#L28) part of the code, and change the path after `load_client_config` to `ResNet50_client/serving_client_conf.prototxt` .
2. Modify the [`feed={"inputs": image}`](./test_cpp_serving_client.py#L45) part of the code, and change `inputs` to be the same as the `feed_var` field in `ResNet50_client/serving_client_conf.prototxt` name` is the same. Since `name` in some model client files is `x` instead of `inputs` , you need to pay attention to this when using these models for C++ Serving deployment.
- Start the service:
```shell
# Start the service, the service runs in the background, and the running log is saved in nohup.txt
# CPU deployment
sh run_cpp_serving.sh
# GPU deployment and specify card 0
sh run_cpp_serving.sh 0
```
- send request:
```shell
# send service request
python3.7 test_cpp_serving_client.py
```
After a successful run, the results of the model prediction will be printed in the cmd window, and the results are as follows:
```log
prediction: daisy, probability: 0.9341399073600769
```
- close the service:
If the service program is running in the foreground, you can press `Ctrl+C` to terminate the server program; if it is running in the background, you can use the kill command to close related processes, or you can execute the following command in the path where the service program is started to terminate the server program:
```bash
python3.7 -m paddle_serving_server.serve stop
```
After the execution is completed, the `Process stopped` message appears, indicating that the service was successfully shut down.
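To make the C++ Serving client flow above more concrete, here is a trimmed-down sketch of what `test_cpp_serving_client.py` essentially does. It is an illustration under assumptions (image handling is simplified; the feed name and config path follow the modifications described above), not the repository's exact script.
```python
# Hedged sketch of an RPC request against the C++ Serving service started above.
import base64
from paddle_serving_client import Client

client = Client()
client.load_client_config("ResNet50_client/serving_client_conf.prototxt")  # path set in step 1 above
client.connect(["127.0.0.1:9292"])  # port used by run_cpp_serving.sh

with open("daisy.jpg", "rb") as f:  # hypothetical test image
    image = base64.b64encode(f.read()).decode("utf8")  # feed_type 20 sends the image as a string

# "inputs" must match the feed_var name in serving_client_conf.prototxt ("x" for some models)
fetch_map = client.predict(feed={"inputs": image}, fetch=["prediction"], batch=False)
print(fetch_map)
```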
## 4. FAQ
**Q1**: No result is returned after the request is sent or an output decoding error is prompted
**A1**: Do not set the proxy when starting the service and sending the request. You can close the proxy before starting the service and sending the request. The command to close the proxy is:
```shell
unset https_proxy
unset http_proxy
```
**Q2**: Nothing happens after starting the service.
**A2**: Check whether the path corresponding to `model_config` in `config.yml` exists and whether the folder name is correct.
For more service deployment types, such as `RPC prediction service`, you can refer to Serving's [github official website](https://github.com/PaddlePaddle/Serving/tree/v0.9.0/examples)
English | [简体中文](../../zh_CN/inference_deployment/paddle_hub_serving_deploy.md)
# Service deployment based on PaddleHub Serving
---
PaddleClas supports rapid service deployment through PaddleHub. Currently, the deployment of image classification is supported. Please look forward to the deployment of image recognition.
## Catalogue
- [1. Introduction](#1)
- [2. Prepare the environment](#2)
- [3. Download inference model](#3)
......@@ -16,97 +15,101 @@ PaddleClas supports rapid service deployment through Paddlehub. At present, it s
- [6. Send prediction requests](#6)
- [7. User defined service module modification](#7)
<a name="1"></a>
## 1. Introduction
The hubserving service deployment configuration service package `clas` contains 3 required files, the directories are as follows:
```shell
deploy/hubserving/clas/
├── __init__.py # Empty file, required
├── config.json # Configuration file, optional, passed in as a parameter when starting the service with configuration
├── module.py # The main module, required, contains the complete logic of the service
└── params.py # Parameter file, required, including model path, pre- and post-processing parameters and other parameters
```
<a name="2"></a>
## 2. Prepare the environment
```shell
# Install paddlehub, version 2.1.0 is recommended
python3.7 -m pip install paddlehub==2.1.0 --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple
```
<a name="3"></a>
## 3. Download the inference model
Before installing the service module, you need to prepare the inference model and put it in the correct path. The default model path is:
* Classification inference model structure file: `PaddleClas/inference/inference.pdmodel`
* Classification inference model weight file: `PaddleClas/inference/inference.pdiparams`
**Notice**:
* Model file paths can be viewed and modified in `PaddleClas/deploy/hubserving/clas/params.py`:
```python
"inference_model_dir": "../inference/"
```
* Model files (including `.pdmodel` and `.pdiparams`) must be named `inference`.
* We provide a large number of pre-trained models based on the ImageNet-1k dataset. For the model list and download address, see [Model Library Overview](../algorithm_introduction/ImageNet_models.md), or you can use your own trained and converted models.
<a name="4"></a>
## 4. Install the service module
* In the Linux environment, the installation example is as follows:
```shell
cd PaddleClas/deploy
# Install the service module:
hub install hubserving/clas/
```
* In the Windows environment (the folder separator is `\`), the installation example is as follows:
```shell
cd PaddleClas\deploy
# Install the service module:
hub install hubserving\clas\
```
<a name="5"></a>
## 5. Start service
<a name="5.1"></a>
### 5.1 Start with command line parameters
This method only supports prediction using CPU. Start command:
```shell
hub serving start \
--modules clas_system \
--port 8866
```
This completes the deployment of a service API, using the default port number 8866.
**Parameter Description**:
|parameters|uses|
|-|-|
|--modules/-m| [**required**] PaddleHub Serving pre-installed model, listed in the form of multiple Module==Version key-value pairs<br>*`When no Version is specified, the latest version is selected by default`*|
|--port/-p| [**OPTIONAL**] Service port, default is 8866|
|--use_multiprocess| [**Optional**] Whether to enable the concurrent mode, the default is single-process mode, it is recommended to use this mode for multi-core CPU machines<br>*`Windows operating system only supports single-process mode`*|
|--workers| [**Optional**] The number of concurrent tasks specified in concurrent mode, the default is `2*cpu_count-1`, where `cpu_count` is the number of CPU cores|
For more deployment details, see [PaddleHub Serving Model One-Click Service Deployment](https://paddlehub.readthedocs.io/zh_CN/release-v2.1/tutorial/serving.html)
<a name="5.2"></a>
### 5.2 Start with configuration file
This method only supports prediction using CPU or GPU. Start command:
```shell
hub serving start -c config.json
```
Among them, the format of `config.json` is as follows:
```json
{
......@@ -127,18 +130,19 @@ Wherein, the format of `config.json` is as follows:
}
```
**Parameter Description**:
* The configurable parameters in `init_args` are consistent with the `_initialize` function interface in `module.py`. Among them,
- When `use_gpu` is `true`, it means to use GPU to start the service.
- When `enable_mkldnn` is `true`, it means to use MKL-DNN acceleration.
* The configurable parameters in `predict_args` are consistent with the `predict` function interface in `module.py`.
**Notice**:
* When using the configuration file to start the service, the parameter settings in the configuration file will be used, and other command line parameters will be ignored;
* If you use GPU prediction (ie, `use_gpu` is set to `true`), you need to set the `CUDA_VISIBLE_DEVICES` environment variable to specify the GPU card number used before starting the service, such as: `export CUDA_VISIBLE_DEVICES=0`;
* **`use_gpu` and `use_multiprocess` cannot both be `true` at the same time**;
* **When both `use_gpu` and `enable_mkldnn` are `true`, `enable_mkldnn` will be ignored and the GPU will be used**.
If you use GPU No. 3 card to start the service:
```shell
cd PaddleClas/deploy
......@@ -149,88 +153,86 @@ hub serving start -c hubserving/clas/config.json
<a name="6"></a>
## 6. Send prediction requests
After configuring the server, you can use the following command to send a prediction request to get the prediction result:
```shell
cd PaddleClas/deploy
python3.7 hubserving/test_hubserving.py \
--server_url http://127.0.0.1:8866/predict/clas_system \
--image_file ./hubserving/ILSVRC2012_val_00006666.JPEG \
--batch_size 8
```
**Predicted output**
```log
The result(s): class_ids: [57, 67, 68, 58, 65], label_names: ['garter snake, grass snake', 'diamondback, diamondback rattlesnake, Crotalus adamanteus', 'sidewinder, horned rattlesnake, Crotalus cerastes', 'water snake', 'sea snake'], scores: [0.21915, 0.15631, 0.14794, 0.13177, 0.12285]
The average time of prediction cost: 2.970 s/image
The average time cost: 3.014 s/image
The average top-1 score: 0.110
```
**Script parameter description**:
* **server_url**: Service address, the format is `http://[ip_address]:[port]/predict/[module_name]`.
* **image_path**: The test image path, which can be a single image path or an image collection directory path.
* **batch_size**: [**OPTIONAL**] Make predictions in `batch_size` size, default is `1`.
* **resize_short**: [**optional**] When preprocessing, resize by short edge, default is `256`.
* **crop_size**: [**Optional**] The size of the center crop during preprocessing, the default is `224`.
* **normalize**: [**Optional**] Whether to perform `normalize` during preprocessing, the default is `True`.
* **to_chw**: [**Optional**] Whether to adjust to `CHW` order when preprocessing, the default is `True`.
**Note**: If you use `Transformer` series models, such as `DeiT_***_384`, `ViT_***_384`, etc., please pay attention to the input data size of the model; you need to specify `--resize_short=384 --crop_size=384`.
**Return result format description**:
The returned result is a list (list), including the top-k classification results, the corresponding scores, and the time-consuming prediction of this image, as follows:
```shell
list: return result
└──list: first image result
├── list: the top k classification results, sorted in descending order of score
├── list: the scores corresponding to the first k classification results, sorted in descending order of score
└── float: The image classification time, in seconds
```
**Note:** If you need to add, delete or modify the returned fields, you can modify the corresponding module. For the details, refer to the user-defined modification service module in the next section.
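As a quick illustration (hypothetical data, not repository code), such a result can be unpacked as follows:
```python
# Unpack one image's result from the structure described above (hypothetical values).
first_image = [["daisy", "rose"], [0.934, 0.021], 0.056]  # top-k names, top-k scores, cost in seconds
names, scores, cost = first_image
print(list(zip(names, scores)), f"cost={cost:.3f}s")
```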
<a name="7"></a>
## 7. User defined service module modification
If you need to modify the service logic, you need to do the following:
1. Stop the service
```shell
hub serving stop --port/-p XXXX
```
2. Go to the corresponding `module.py` and `params.py` and other files to modify the code according to actual needs. `module.py` needs to be reinstalled after modification (`hub install hubserving/clas/`) and deployed. Before deploying, you can use the `python3.7 hubserving/clas/module.py` command to quickly test the code ready for deployment.
3. Uninstall the old service pack
```shell
hub uninstall clas_system
```
4. Install the new modified service pack
```shell
hub install hubserving/clas/
```
5. Restart the service
```shell
hub serving start -m clas_system
```
**Notice**:
Common parameters can be modified in `PaddleClas/deploy/hubserving/clas/params.py`:
* To replace the model, you need to modify the model file path parameters:
```python
"inference_model_dir":
```
* Change the number of `top-k` results returned when postprocessing:
```python
'topk':
```
* The mapping file corresponding to the lable and class id when changing the post-processing:
```python
'class_id_map_file':
```
In order to avoid unnecessary delay and be able to predict with batch_size, data preprocessing logic (including `resize`, `crop` and other operations) is completed on the client side, so it needs to be in [PaddleClas/deploy/hubserving/test_hubserving.py# L41-L47](../../../deploy/hubserving/test_hubserving.py#L41-L47) and [PaddleClas/deploy/hubserving/test_hubserving.py#L51-L76](../../../deploy/hubserving/test_hubserving.py#L51-L76) Modify the data preprocessing logic related code.
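For reference, the following is an illustrative sketch (not the exact code in `test_hubserving.py`) of the resize-by-short-side and center-crop preprocessing described above, reusing the demo image from the prediction example:

```python
import cv2

def preprocess(img, resize_short=256, crop_size=224):
    # Resize so that the short side equals `resize_short`, then take a center crop of `crop_size`.
    h, w = img.shape[:2]
    scale = resize_short / min(h, w)
    img = cv2.resize(img, (int(round(w * scale)), int(round(h * scale))))
    h, w = img.shape[:2]
    top = (h - crop_size) // 2
    left = (w - crop_size) // 2
    return img[top:top + crop_size, left:left + crop_size, :]

img = preprocess(cv2.imread("./hubserving/ILSVRC2012_val_00006666.JPEG"))
print(img.shape)  # expected: (224, 224, 3)
```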
# Model Service Deployment
## Catalogue
- [1. Introduction](#1)
- [2. Installation of Serving](#2)
- [3. Service Deployment for Image Classification](#3)
- [3.1 Model Transformation](#3.1)
- [3.2 Service Deployment and Request](#3.2)
- [4. Service Deployment for Image Recognition](#4)
- [4.1 Model Transformation](#4.1)
- [4.2 Service Deployment and Request](#4.2)
- [5. FAQ](#5)
<a name="1"></a>
## 1. Introduction
[Paddle Serving](https://github.com/PaddlePaddle/Serving) is designed to help deep learning developers easily deploy online prediction services. It supports one-click deployment of industrial-grade services, high-concurrency and efficient communication between client and server, and client development in multiple programming languages.
This section takes HTTP prediction service deployment as an example and describes how to deploy model services in PaddleClas with PaddleServing. Currently, only deployment on the Linux platform is supported; the Windows platform is not supported.
<a name="2"></a>
## 2. Installation of Serving
The Serving official documentation recommends using docker for installation and environment deployment. First, pull the docker image and create a Serving-based container.
```
docker pull paddlepaddle/serving:0.7.0-cuda10.2-cudnn7-devel
nvidia-docker run -p 9292:9292 --name test -dit paddlepaddle/serving:0.7.0-cuda10.2-cudnn7-devel bash
nvidia-docker exec -it test bash
```
Once you are in docker, install the Serving-related python packages.
```
pip3 install paddle-serving-client==0.7.0
pip3 install paddle-serving-server==0.7.0 # CPU
pip3 install paddle-serving-app==0.7.0
pip3 install paddle-serving-server-gpu==0.7.0.post102 #GPU with CUDA10.2 + TensorRT6
# For other GPU environments, confirm the environment before choosing which one to execute
pip3 install paddle-serving-server-gpu==0.7.0.post101 # GPU with CUDA10.1 + TensorRT6
pip3 install paddle-serving-server-gpu==0.7.0.post112 # GPU with CUDA11.2 + TensorRT8
```
- Speed up the installation process by replacing the source with `-i https://pypi.tuna.tsinghua.edu.cn/simple`.
- For other environment configuration and installation, please refer to [Install Paddle Serving using docker](https://github.com/PaddlePaddle/Serving/blob/v0.7.0/doc/Install_EN.md)
- To deploy CPU services, please install the CPU version of serving-server with the following command.
```
pip install paddle-serving-server
```
<a name="3"></a>
## 3. Service Deployment for Image Classification
<a name="3.1"></a>
### 3.1 Model Transformation
When adopting PaddleServing for service deployment, the saved inference model needs to be converted to a Serving model. The following part takes the classic ResNet50_vd model as an example to introduce the deployment of image classification service.
- Enter the working directory:
```
cd deploy/paddleserving
```
- Download the inference model of ResNet50_vd:
```
# Download and decompress the ResNet50_vd model
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/ResNet50_vd_infer.tar && tar xf ResNet50_vd_infer.tar
```
- Convert the downloaded inference model into a format that is readily deployable by Server with the help of paddle_serving_client.
```
# Convert the ResNet50_vd model
python3 -m paddle_serving_client.convert --dirname ./ResNet50_vd_infer/ \
--model_filename inference.pdmodel \
--params_filename inference.pdiparams \
--serving_server ./ResNet50_vd_serving/ \
--serving_client ./ResNet50_vd_client/
```
After the transformation, `ResNet50_vd_serving` and `ResNet50_vd_client` will be added to the current folder in the following format:
```
|- ResNet50_vd_serving/
  |- inference.pdiparams
  |- inference.pdmodel
  |- serving_server_conf.prototxt
  |- serving_server_conf.stream.prototxt
|- ResNet50_vd_client/
  |- serving_client_conf.prototxt
  |- serving_client_conf.stream.prototxt
```
Having obtained the model files, modify the alias name in `serving_server_conf.prototxt` under the directory `ResNet50_vd_serving` by changing `alias_name` in `fetch_var` to `prediction`.
**Notes**: Serving supports input and output renaming to ensure its compatibility with the deployment of different models. In this case, modifying the alias_name of the configuration file is the only step needed to complete the inference and deployment of all kinds of models. The modified serving_server_conf.prototxt is shown below:
```
feed_var {
name: "inputs"
alias_name: "inputs"
is_lod_tensor: false
feed_type: 1
shape: 3
shape: 224
shape: 224
}
fetch_var {
name: "save_infer_model/scale_0.tmp_1"
alias_name: "prediction"
is_lod_tensor: true
fetch_type: 1
shape: -1
}
```
<a name="3.2"></a>
### 3.2 Service Deployment and Request
The `paddleserving` directory contains the code to start the pipeline service and to send prediction requests, including:
```
__init__.py
config.yml # Configuration file for starting the service
pipeline_http_client.py # Script for sending pipeline prediction requests by http
pipeline_rpc_client.py # Script for sending pipeline prediction requests by rpc
classification_web_service.py # Script for starting the pipeline server
```
- Start the service:
```
# Start the service and the run log is saved in log.txt
python3 classification_web_service.py &>log.txt &
```
Once the service is successfully started, a log will be printed in log.txt similar to the following ![img](../../../deploy/paddleserving/imgs/start_server.png)
- Send request:
```
# Send service request
python3 pipeline_http_client.py
```
After the request is processed successfully, the prediction results will be printed in the terminal window, as shown in the following example: ![img](../../../deploy/paddleserving/imgs/results.png)
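For reference, the following is a minimal sketch of the kind of HTTP request `pipeline_http_client.py` sends; the service name (`imagenet`), HTTP port (`18080`), and test image path are assumptions and should be checked against `config.yml` and the script itself:
```python
import base64
import json

import requests

# Assumed endpoint: a pipeline web service named "imagenet" listening on HTTP port 18080 (see config.yml).
url = "http://127.0.0.1:18080/imagenet/prediction"

with open("./daisy.jpg", "rb") as f:  # hypothetical test image path
    image = base64.b64encode(f.read()).decode("utf8")

data = {"key": ["image"], "value": [image]}
resp = requests.post(url=url, data=json.dumps(data))
print(resp.json())  # e.g. {'err_no': 0, 'err_msg': '', 'key': ['label', 'prob'], 'value': [...], 'tensors': []}
```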
<a name="4"></a>
## 4. Service Deployment for Image Recognition
When using PaddleServing for service deployment, the saved inference model needs to be converted to a Serving model. The following part, exemplified by the ultra-lightweight model for image recognition in PP-ShiTu, details the deployment of image recognition service.
<a name="4.1"></a>
### 4.1 Model Transformation
- Download inference models for general detection and general recognition
```
cd deploy
# Download and decompress general recogntion models
wget -P models/ https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/general_PPLCNet_x2_5_lite_v1.0_infer.tar
cd models
tar -xf general_PPLCNet_x2_5_lite_v1.0_infer.tar
# Download and decompress general detection models
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.tar
tar -xf picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.tar
```
- Convert the inference model for recognition into a Serving model:
```
# Convert the recognition model
python3 -m paddle_serving_client.convert --dirname ./general_PPLCNet_x2_5_lite_v1.0_infer/ \
--model_filename inference.pdmodel \
--params_filename inference.pdiparams \
--serving_server ./general_PPLCNet_x2_5_lite_v1.0_serving/ \
--serving_client ./general_PPLCNet_x2_5_lite_v1.0_client/
```
After the transformation, `general_PPLCNet_x2_5_lite_v1.0_serving/` and `general_PPLCNet_x2_5_lite_v1.0_client/` will be added to the current folder. Modify the alias name in `serving_server_conf.prototxt` under the directory `general_PPLCNet_x2_5_lite_v1.0_serving/` by changing `alias_name` in `fetch_var` to `features`. The modified `serving_server_conf.prototxt` is similar to the following:
```
feed_var {
name: "x"
alias_name: "x"
is_lod_tensor: false
feed_type: 1
shape: 3
shape: 224
shape: 224
}
fetch_var {
name: "save_infer_model/scale_0.tmp_1"
alias_name: "features"
is_lod_tensor: true
fetch_type: 1
shape: -1
}
```
- Convert the inference model for detection into a Serving model:
```
# Convert the general detection model
python3 -m paddle_serving_client.convert --dirname ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer/ \
--model_filename inference.pdmodel \
--params_filename inference.pdiparams \
--serving_server ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/ \
--serving_client ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/
```
After the transformation, `picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/` and `picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/` will be added to the current folder.
**Note:** The alias name in the serving_server_conf.prototxt under the directory`picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/` requires no modification.
- Download and decompress the constructed search library index
```
cd ../
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/data/drink_dataset_v1.0.tar && tar -xf drink_dataset_v1.0.tar
```
<a name="4.2"></a>
### 4.2 Service Deployment and Request
**Note:** Since the recognition service involves multiple models, the PipeLine deployment method is adopted for better performance. This deployment method does not currently support the Windows platform.
- Enter the working directory
```
cd ./deploy/paddleserving/recognition
```
The `paddleserving` directory contains the code to start the pipeline service and to send prediction requests, including:
```
__init__.py
config.yml # Configuration file for starting the service
pipeline_http_client.py # Script for sending pipeline prediction requests by http
pipeline_rpc_client.py # Script for sending pipeline prediction requests by rpc
recognition_web_service.py # Script for starting the pipeline server
```
- Start the service:
```
# Start the service and the run log is saved in log.txt
python3 recognition_web_service.py &>log.txt &
```
Once the service is successfully started, a log will be printed in log.txt similar to the following ![img](../../../deploy/paddleserving/imgs/start_server_shitu.png)
- Send request:
```
python3 pipeline_http_client.py
```
After the request is processed successfully, the prediction results will be printed in the terminal window, as shown in the following example: ![img](../../../deploy/paddleserving/imgs/results_shitu.png)
<a name="5"></a>
## 5. FAQ
**Q1**: After sending a request, no result is returned or the output is prompted with a decoding error.
**A1**: Please turn off the proxy before starting the service and sending requests, try the following command:
```
unset https_proxy
unset http_proxy
```
For more types of service deployment, such as `RPC prediction services`, you can refer to the [github official website](https://github.com/PaddlePaddle/Serving/tree/v0.7.0/examples) of Serving.
English | [简体中文](../../zh_CN/inference_deployment/recognition_serving_deploy.md)
# Recognition model service deployment
## Table of contents
- [1. Introduction](#1-introduction)
- [2. Serving installation](#2-serving-installation)
- [3. Image recognition service deployment](#3-image-recognition-service-deployment)
- [3.1 Model conversion](#31-model-conversion)
- [3.2 Service deployment and request](#32-service-deployment-and-request)
- [3.2.1 Python Serving](#321-python-serving)
- [3.2.2 C++ Serving](#322-c-serving)
- [4. FAQ](#4-faq)
<a name="1"></a>
## 1. Introduction
[Paddle Serving](https://github.com/PaddlePaddle/Serving) aims to help deep learning developers easily deploy online prediction services. It supports one-click deployment of industrial-grade services, high-concurrency and efficient communication between client and server, and client development in multiple programming languages.
This section takes HTTP prediction service deployment as an example to introduce how to use PaddleServing to deploy the model service in PaddleClas. Currently, only Linux platform deployment is supported; the Windows platform is not supported yet.
<a name="2"></a>
## 2. Serving installation
The Serving official documentation recommends using docker to install and deploy the Serving environment. First, pull the docker image and create a Serving-based container.
```shell
# start GPU docker
docker pull paddlepaddle/serving:0.7.0-cuda10.2-cudnn7-devel
nvidia-docker run -p 9292:9292 --name test -dit paddlepaddle/serving:0.7.0-cuda10.2-cudnn7-devel bash
nvidia-docker exec -it test bash
# start CPU docker
docker pull paddlepaddle/serving:0.7.0-devel
docker run -p 9292:9292 --name test -dit paddlepaddle/serving:0.7.0-devel bash
docker exec -it test bash
```
After entering docker, you need to install Serving-related python packages.
```shell
python3.7 -m pip install paddle-serving-client==0.7.0
python3.7 -m pip install paddle-serving-app==0.7.0
python3.7 -m pip install faiss-cpu==1.7.1post2
#If it is a CPU deployment environment:
python3.7 -m pip install paddle-serving-server==0.7.0 #CPU
python3.7 -m pip install paddlepaddle==2.2.0 # CPU
#If it is a GPU deployment environment
python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post102 # GPU with CUDA10.2 + TensorRT6
python3.7 -m pip install paddlepaddle-gpu==2.2.0 # GPU with CUDA10.2
#Other GPU environments need to confirm the environment and then choose which one to execute
python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post101 # GPU with CUDA10.1 + TensorRT6
python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post112 # GPU with CUDA11.2 + TensorRT8
```
* If the installation speed is too slow, you can change the source through `-i https://pypi.tuna.tsinghua.edu.cn/simple` to speed up the installation process.
* For other environment configuration installation, please refer to: [Install Paddle Serving with Docker](https://github.com/PaddlePaddle/Serving/blob/v0.7.0/doc/Install_CN.md)
<a name="3"></a>
## 3. Image recognition service deployment
When using PaddleServing for image recognition service deployment, **all of the saved inference models involved need to be converted to Serving models**. The following takes the ultra-lightweight image recognition model in PP-ShiTu as an example to introduce the deployment of the image recognition service.
<a name="3.1"></a>
### 3.1 Model conversion
- Go to the working directory:
```shell
cd deploy/
```
- Download generic detection inference model and generic recognition inference model
```shell
# Create and enter the models folder
mkdir models
cd models
# Download and unzip the generic recognition model
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/general_PPLCNet_x2_5_lite_v1.0_infer.tar
tar -xf general_PPLCNet_x2_5_lite_v1.0_infer.tar
# Download and unzip the generic detection model
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.tar
tar -xf picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.tar
```
- Convert the generic recognition inference model to the Serving model:
```shell
# Convert the generic recognition model
python3.7 -m paddle_serving_client.convert \
--dirname ./general_PPLCNet_x2_5_lite_v1.0_infer/ \
--model_filename inference.pdmodel \
--params_filename inference.pdiparams \
--serving_server ./general_PPLCNet_x2_5_lite_v1.0_serving/ \
--serving_client ./general_PPLCNet_x2_5_lite_v1.0_client/
```
The meaning of the parameters of the above command is described in the parameter table at the end of this section.
After the recognition inference model is converted, the folders `general_PPLCNet_x2_5_lite_v1.0_serving/` and `general_PPLCNet_x2_5_lite_v1.0_client/` will be added to the current folder. Modify the alias name in `serving_server_conf.prototxt` under both directories by changing `alias_name` in `fetch_var` to `features`. The content of the modified `serving_server_conf.prototxt` is as follows:
```log
feed_var {
name: "x"
alias_name: "x"
is_lod_tensor: false
feed_type: 1
shape: 3
shape: 224
shape: 224
}
fetch_var {
name: "save_infer_model/scale_0.tmp_1"
alias_name: "features"
is_lod_tensor: false
fetch_type: 1
shape: 512
}
```
After the conversion of the general recognition inference model is completed, there will be additional `general_PPLCNet_x2_5_lite_v1.0_serving/` and `general_PPLCNet_x2_5_lite_v1.0_client/` folders in the current folder, with the following structure:
```shell
├── general_PPLCNet_x2_5_lite_v1.0_serving/
│ ├── inference.pdiparams
│ ├── inference.pdmodel
│ ├── serving_server_conf.prototxt
│ └── serving_server_conf.stream.prototxt
└── general_PPLCNet_x2_5_lite_v1.0_client/
├── serving_client_conf.prototxt
└── serving_client_conf.stream.prototxt
```
- Convert general detection inference model to Serving model:
```shell
# Convert generic detection model
python3.7 -m paddle_serving_client.convert --dirname ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer/ \
--model_filename inference.pdmodel \
--params_filename inference.pdiparams \
--serving_server ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/ \
--serving_client ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/
```
The meaning of the parameters of the above command is described in the parameter table at the end of this section.
After the conversion of the general detection inference model is completed, there will be additional folders `picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/` and `picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/` in the current folder, with the following structure:
```shell
├── picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/
│ ├── inference.pdiparams
│ ├── inference.pdmodel
│ ├── serving_server_conf.prototxt
│ └── serving_server_conf.stream.prototxt
└── picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/
├── serving_client_conf.prototxt
└── serving_client_conf.stream.prototxt
```
The specific meaning of the parameters in the above command is shown in the following table
| parameter | type | default value | description |
| ----------------- | ---- | ------------------ | ----------------------------------------------------- |
| `dirname` | str | - | The storage path of the model file to be converted. The program structure file and parameter file are saved in this directory.|
| `model_filename` | str | None | The name of the file storing the model Inference Program structure that needs to be converted. If set to None, use `__model__` as the default filename |
| `params_filename` | str | None | The name of the file that stores all parameters of the model that need to be transformed. It needs to be specified if and only if all model parameters are stored in a single binary file. If the model parameters are stored in separate files, set it to None |
| `serving_server` | str | `"serving_server"` | The storage path of the converted model files and configuration files. Default is serving_server |
| `serving_client` | str | `"serving_client"` | The storage path of the converted client configuration files. Defaults to `serving_client` |
- Download and unzip the index of the retrieval library that has been built
```shell
# Go back to the deploy directory
cd ../
# Download the built retrieval library index
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/data/drink_dataset_v1.0.tar
# Decompress the built retrieval library index
tar -xf drink_dataset_v1.0.tar
```
<a name="3.2"></a>
### 3.2 Service deployment and request
**Note:** The recognition service involves multiple models, so the PipeLine deployment method is adopted for performance reasons. The Pipeline deployment method currently does not support the Windows platform.
- Go to the working directory
```shell
cd ./deploy/paddleserving/recognition
```
The paddleserving directory contains code to start the Python Pipeline service, the C++ Serving service, and send prediction requests, including:
```shell
__init__.py
config.yml # The configuration file to start the python pipeline service
pipeline_http_client.py # Script for sending pipeline prediction requests in http mode
pipeline_rpc_client.py # Script for sending pipeline prediction requests in rpc mode
recognition_web_service.py # Script to start the pipeline server
readme.md # Recognition model service deployment documents
run_cpp_serving.sh # Script to start C++ Pipeline Serving deployment
test_cpp_serving_client.py # Script for sending C++ Pipeline serving prediction requests by rpc
```
<a name="3.2.1"></a>
#### 3.2.1 Python Serving
- Start the service:
```shell
# Start the service and save the running log in log.txt
python3.7 recognition_web_service.py &>log.txt &
```
- Send a request:
```shell
python3.7 pipeline_http_client.py
```
After the request completes successfully, the prediction results are printed in the client terminal, as shown below:
```log
{'err_no': 0, 'err_msg': '', 'key': ['result'], 'value': ["[{'bbox': [345, 95, 524, 576], 'rec_docs': 'Red Bull-Enhanced', 'rec_scores': 0.79903316}]"], 'tensors': []}
```
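Note that each entry of `value` is a stringified Python list rather than nested JSON; a minimal parsing sketch (assuming `resp` holds the decoded response shown above) is:
```python
import ast

# `resp` is assumed to be the decoded JSON response shown above.
resp = {
    "err_no": 0,
    "err_msg": "",
    "key": ["result"],
    "value": ["[{'bbox': [345, 95, 524, 576], 'rec_docs': 'Red Bull-Enhanced', 'rec_scores': 0.79903316}]"],
    "tensors": [],
}

if resp["err_no"] == 0:
    detections = ast.literal_eval(resp["value"][0])  # each entry of `value` is a stringified list
    for det in detections:
        print(det["rec_docs"], det["rec_scores"], det["bbox"])
```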
<a name="3.2.2"></a>
#### 3.2.2 C++ Serving
Different from Python Serving, the C++ Serving client calls C++ OP to predict, so before starting the service, you need to compile and install the serving server package, and set `SERVING_BIN`.
- Compile and install the Serving server package
```shell
# Enter the working directory
cd PaddleClas/deploy/paddleserving
# One-click compile and install Serving server, set SERVING_BIN
source ./build_server.sh python3.7
```
**Note:** The path set by [build_server.sh](../build_server.sh#L55-L62) may need to be modified according to the actual machine environment such as CUDA, python version, etc., and then compiled.
- The input and output format used by C++ Serving differs from that of Python Serving, so you need to execute the following commands to copy the 4 prototxt files provided under `paddleserving/recognition/preprocess/` over the corresponding prototxt files in the folders obtained in [3.1 Model conversion](#31-model-conversion).
```shell
# Enter PaddleClas/deploy directory
cd PaddleClas/deploy/
# Overwrite prototxt file
\cp ./paddleserving/recognition/preprocess/general_PPLCNet_x2_5_lite_v1.0_serving/*.prototxt ./models/general_PPLCNet_x2_5_lite_v1.0_serving/
\cp ./paddleserving/recognition/preprocess/general_PPLCNet_x2_5_lite_v1.0_client/*.prototxt ./models/general_PPLCNet_x2_5_lite_v1.0_client/
\cp ./paddleserving/recognition/preprocess/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/*.prototxt ./models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/
\cp ./paddleserving/recognition/preprocess/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/*.prototxt ./models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/
```
- Start the service:
```shell
# Enter the working directory
cd PaddleClas/deploy/paddleserving/recognition
# The default port number is 9400; the running log is saved in log_PPShiTu.txt by default
# CPU deployment
sh run_cpp_serving.sh
# GPU deployment, and specify card 0
sh run_cpp_serving.sh 0
```
- Send a request:
```shell
# send service request
python3.7 test_cpp_serving_client.py
```
After a successful run, the results of the model predictions are printed in the client's terminal window as follows:
```log
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0614 03:01:36.273097 6084 naming_service_thread.cpp:202] brpc::policy::ListNamingService("127.0.0.1:9400"): added 1
I0614 03:01:37.393564 6084 general_model.cpp:490] [client]logid=0,client_cost=1107.82ms,server_cost=1101.75ms.
[{'bbox': [345, 95, 524, 585], 'rec_docs': 'Red Bull-Enhanced', 'rec_scores': 0.8073724}]
```
- Close the service:
If the service program is running in the foreground, you can press `Ctrl+C` to terminate the server program; if it is running in the background, you can use the kill command to close related processes, or you can execute the following command in the path where the service program is started to terminate the server program:
```bash
python3.7 -m paddle_serving_server.serve stop
```
After the execution is completed, the `Process stopped` message appears, indicating that the service was successfully shut down.
<a name="4"></a>
## 4. FAQ
**Q1**: No result is returned after a request is sent, or a decoding error is reported.
**A1**: Do not set a proxy when starting the service and sending requests. You can turn the proxy off before starting the service and before sending requests with the following commands:
```shell
unset https_proxy
unset http_proxy
```
**Q2**: Nothing happens after the service is started.
**A2**: Check whether the path corresponding to `model_config` in `config.yml` exists and whether the folder name is correct. A quick way to verify this is sketched below.
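A minimal sketch of such a check is given here; it assumes the standard pipeline config layout (`op -> <op_name> -> local_service_conf -> model_config`), which should be confirmed against your local `config.yml`:
```python
import os

import yaml

# Assumed layout: op -> <op_name> -> local_service_conf -> model_config
with open("config.yml") as f:
    conf = yaml.safe_load(f)

for op_name, op_conf in conf.get("op", {}).items():
    model_dir = op_conf.get("local_service_conf", {}).get("model_config")
    status = "exists" if model_dir and os.path.isdir(model_dir) else "MISSING"
    print(f"{op_name}: {model_dir} -> {status}")
```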
For more service deployment types, such as `RPC prediction service`, you can refer to Serving's [github official website](https://github.com/PaddlePaddle/Serving/tree/v0.9.0/examples)
简体中文 | [English](../../en/inference_deployment/classification_serving_deploy_en.md)
# 分类模型服务化部署
## 目录
- [1. 简介](#1-简介)
- [2. Serving 安装](#2-serving-安装)
- [3. 图像分类服务部署](#3-图像分类服务部署)
- [3.1 模型转换](#31-模型转换)
- [3.2 服务部署和请求](#32-服务部署和请求)
- [3.2.1 Python Serving](#321-python-serving)
- [3.2.2 C++ Serving](#322-c-serving)
- [4.FAQ](#4faq)
<a name="1"></a>
## 1. 简介
[Paddle Serving](https://github.com/PaddlePaddle/Serving) 旨在帮助深度学习开发者轻松部署在线预测服务,支持一键部署工业级的服务能力、客户端和服务端之间高并发和高效通信、并支持多种编程语言开发客户端。
该部分以 HTTP 预测服务部署为例,介绍怎样在 PaddleClas 中使用 PaddleServing 部署模型服务。目前只支持 Linux 平台部署,暂不支持 Windows 平台。
<a name="2"></a>
## 2. Serving 安装
Serving 官网推荐使用 docker 安装并部署 Serving 环境。首先需要拉取 docker 环境并创建基于 Serving 的 docker。
```shell
# 启动GPU docker
docker pull paddlepaddle/serving:0.7.0-cuda10.2-cudnn7-devel
nvidia-docker run -p 9292:9292 --name test -dit paddlepaddle/serving:0.7.0-cuda10.2-cudnn7-devel bash
nvidia-docker exec -it test bash
# 启动CPU docker
docker pull paddlepaddle/serving:0.7.0-devel
docker run -p 9292:9292 --name test -dit paddlepaddle/serving:0.7.0-devel bash
docker exec -it test bash
```
进入 docker 后,需要安装 Serving 相关的 python 包。
```shell
python3.7 -m pip install paddle-serving-client==0.7.0
python3.7 -m pip install paddle-serving-app==0.7.0
python3.7 -m pip install faiss-cpu==1.7.1post2
#若为CPU部署环境:
python3.7 -m pip install paddle-serving-server==0.7.0 # CPU
python3.7 -m pip install paddlepaddle==2.2.0 # CPU
#若为GPU部署环境
python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post102 # GPU with CUDA10.2 + TensorRT6
python3.7 -m pip install paddlepaddle-gpu==2.2.0 # GPU with CUDA10.2
#其他GPU环境需要确认环境再选择执行哪一条
python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post101 # GPU with CUDA10.1 + TensorRT6
python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post112 # GPU with CUDA11.2 + TensorRT8
```
* 如果安装速度太慢,可以通过 `-i https://pypi.tuna.tsinghua.edu.cn/simple` 更换源,加速安装过程。
* 其他环境配置安装请参考:[使用Docker安装Paddle Serving](https://github.com/PaddlePaddle/Serving/blob/v0.7.0/doc/Install_CN.md)
<a name="3"></a>
## 3. 图像分类服务部署
下面以经典的 ResNet50_vd 模型为例,介绍如何部署图像分类服务。
<a name="3.1"></a>
### 3.1 模型转换
使用 PaddleServing 做服务化部署时,需要将保存的 inference 模型转换为 Serving 模型。
- 进入工作目录:
```shell
cd deploy/paddleserving
```
- 下载并解压 ResNet50_vd 的 inference 模型:
```shell
# 下载 ResNet50_vd inference 模型
wget -nc https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/ResNet50_vd_infer.tar
# 解压 ResNet50_vd inference 模型
tar xf ResNet50_vd_infer.tar
```
- 用 paddle_serving_client 命令把下载的 inference 模型转换成易于 Server 部署的模型格式:
```shell
# 转换 ResNet50_vd 模型
python3.7 -m paddle_serving_client.convert \
--dirname ./ResNet50_vd_infer/ \
--model_filename inference.pdmodel \
--params_filename inference.pdiparams \
--serving_server ./ResNet50_vd_serving/ \
--serving_client ./ResNet50_vd_client/
```
上述命令中参数具体含义如下表所示
| 参数 | 类型 | 默认值 | 描述 |
| ----------------- | ---- | ------------------ | ------------------------------------------------------------ |
| `dirname` | str | - | 需要转换的模型文件存储路径,Program结构文件和参数文件均保存在此目录。 |
| `model_filename` | str | None | 存储需要转换的模型Inference Program结构的文件名称。如果设置为None,则使用 `__model__` 作为默认的文件名 |
| `params_filename` | str | None | 存储需要转换的模型所有参数的文件名称。当且仅当所有模型参数被保>存在一个单独的二进制文件中,它才需要被指定。如果模型参数是存储在各自分离的文件中,设置它的值为None |
| `serving_server` | str | `"serving_server"` | 转换后的模型文件和配置文件的存储路径。默认值为serving_server |
| `serving_client` | str | `"serving_client"` | 转换后的客户端配置文件存储路径。默认值为serving_client |
ResNet50_vd 推理模型转换完成后,会在当前文件夹多出 `ResNet50_vd_serving` 和 `ResNet50_vd_client` 的文件夹,具备如下结构:
```shell
├── ResNet50_vd_serving/
│ ├── inference.pdiparams
│ ├── inference.pdmodel
│ ├── serving_server_conf.prototxt
│ └── serving_server_conf.stream.prototxt
└── ResNet50_vd_client/
├── serving_client_conf.prototxt
└── serving_client_conf.stream.prototxt
```
- Serving 为了兼容不同模型的部署,提供了输入输出重命名的功能。让不同的模型在推理部署时,只需要修改配置文件的 `alias_name` 即可,无需修改代码即可完成推理部署。因此在转换完毕后需要分别修改 `ResNet50_vd_serving` 下的文件 `serving_server_conf.prototxt``ResNet50_vd_client` 下的文件 `serving_client_conf.prototxt`,将 `fetch_var``alias_name:` 后的字段改为 `prediction`,修改后的 `serving_server_conf.prototxt``serving_client_conf.prototxt` 如下所示:
```log
feed_var {
name: "inputs"
alias_name: "inputs"
is_lod_tensor: false
feed_type: 1
shape: 3
shape: 224
shape: 224
}
fetch_var {
name: "save_infer_model/scale_0.tmp_1"
alias_name: "prediction"
is_lod_tensor: false
fetch_type: 1
shape: 1000
}
```
<a name="3.2"></a>
### 3.2 服务部署和请求
paddleserving 目录包含了启动 pipeline 服务、C++ serving服务和发送预测请求的代码,主要包括:
```shell
__init__.py
classification_web_service.py # 启动pipeline服务端的脚本
config.yml # 启动pipeline服务的配置文件
pipeline_http_client.py # http方式发送pipeline预测请求的脚本
pipeline_rpc_client.py # rpc方式发送pipeline预测请求的脚本
readme.md # 分类模型服务化部署文档
run_cpp_serving.sh # 启动C++ Serving部署的脚本
test_cpp_serving_client.py # rpc方式发送C++ serving预测请求的脚本
```
<a name="3.2.1"></a>
#### 3.2.1 Python Serving
- 启动服务:
```shell
# 启动服务,运行日志保存在 log.txt
python3.7 classification_web_service.py &>log.txt &
```
- 发送请求:
```shell
# 发送服务请求
python3.7 pipeline_http_client.py
```
成功运行后,模型预测的结果会打印在客户端中,如下所示:
```log
{'err_no': 0, 'err_msg': '', 'key': ['label', 'prob'], 'value': ["['daisy']", '[0.9341402053833008]'], 'tensors': []}
```
- 关闭服务
如果服务程序在前台运行,可以按下`Ctrl+C`来终止服务端程序;如果在后台运行,可以使用kill命令关闭相关进程,也可以在启动服务程序的路径下执行以下命令来终止服务端程序:
```bash
python3.7 -m paddle_serving_server.serve stop
```
执行完毕后出现`Process stopped`信息表示成功关闭服务。
<a name="3.2.2"></a>
#### 3.2.2 C++ Serving
与Python Serving不同,C++ Serving客户端调用 C++ OP来预测,因此在启动服务之前,需要编译并安装 serving server包,并设置 `SERVING_BIN`
- 编译并安装Serving server包
```shell
# 进入工作目录
cd PaddleClas/deploy/paddleserving
# 一键编译安装Serving server、设置 SERVING_BIN
source ./build_server.sh python3.7
```
**注:**[build_server.sh](./build_server.sh#L55-L62)所设定的路径可能需要根据实际机器上的环境如CUDA、python版本等作一定修改,然后再编译。
- 修改客户端文件 `ResNet50_vd_client/serving_client_conf.prototxt` ,将 `feed_type:` 后的字段改为20,将第一个 `shape:` 后的字段改为1并删掉其余的 `shape` 字段。
```log
feed_var {
name: "inputs"
alias_name: "inputs"
is_lod_tensor: false
feed_type: 20
shape: 1
}
```
- 修改 [`test_cpp_serving_client`](./test_cpp_serving_client.py) 的部分代码
1. 修改 [`load_client_config`](./test_cpp_serving_client.py#L28) 处的代码,将 `load_client_config` 后的路径改为 `ResNet50_vd_client/serving_client_conf.prototxt`
2. 修改 [`feed={"inputs": image}`](./test_cpp_serving_client.py#L45) 处的代码,将 `inputs` 改为与 `ResNet50_vd_client/serving_client_conf.prototxt``feed_var` 字段下面的 `name` 一致。由于部分模型client文件中的 `name``x` 而不是 `inputs` ,因此使用这些模型进行C++ Serving部署时需要注意这一点。
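完成上述两处修改后,客户端的核心调用大致如下(仅为示意草图,并非仓库中的完整实现;端口号、输入编码方式均为假设,请以 `run_cpp_serving.sh` 与实际脚本为准):
```python
import base64

from paddle_serving_client import Client

client = Client()
# 对应第 1 处修改:加载 ResNet50_vd_client 的配置文件
client.load_client_config("ResNet50_vd_client/serving_client_conf.prototxt")
client.connect(["127.0.0.1:9292"])  # 端口仅为示例,需与 run_cpp_serving.sh 中实际启动的端口一致

# feed_type 为 20 时输入为字符串,这里假设传入 base64 编码后的图像内容
with open("./daisy.jpg", "rb") as f:  # 示例图像路径,仅为假设
    image = base64.b64encode(f.read()).decode("utf8")

# 对应第 2 处修改:feed 的 key 与 serving_client_conf.prototxt 中 feed_var 的 name(inputs)一致
fetch_map = client.predict(feed={"inputs": image}, fetch=["prediction"], batch=False)
print(fetch_map)
```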
- 启动服务:
```shell
# 启动服务, 服务在后台运行,运行日志保存在 nohup.txt
# CPU部署
bash run_cpp_serving.sh
# GPU部署并指定0号卡
bash run_cpp_serving.sh 0
```
- 发送请求:
```shell
# 发送服务请求
python3.7 test_cpp_serving_client.py
```
成功运行后,模型预测的结果会打印在客户端中,如下所示:
```log
prediction: daisy, probability: 0.9341399073600769
```
- 关闭服务:
如果服务程序在前台运行,可以按下`Ctrl+C`来终止服务端程序;如果在后台运行,可以使用kill命令关闭相关进程,也可以在启动服务程序的路径下执行以下命令来终止服务端程序:
```bash
python3.7 -m paddle_serving_server.serve stop
```
执行完毕后出现`Process stopped`信息表示成功关闭服务。
## 4.FAQ
**Q1**: 发送请求后没有结果返回或者提示输出解码报错
**A1**: 启动服务和发送请求时不要设置代理,可以在启动服务前和发送请求前关闭代理,关闭代理的命令是:
```shell
unset https_proxy
unset http_proxy
```
**Q2**: 启动服务后没有任何反应
**A2**: 可以检查`config.yml``model_config`对应的路径是否存在,文件夹命名是否正确
更多的服务部署类型,如 `RPC 预测服务` 等,可以参考 Serving 的[github 官网](https://github.com/PaddlePaddle/Serving/tree/v0.9.0/examples)
......@@ -5,13 +5,13 @@ PaddleClas 在 Windows 平台下基于 `Visual Studio 2019 Community` 进行了
-----
## 目录
* [1. 前置条件](#1)
* [1.1 下载 PaddlePaddle C++ 预测库 paddle_inference_install_dir](#1.1)
* [1.2 安装配置 OpenCV](#1.2)
* [1.1 下载 PaddlePaddle C++ 预测库 paddle_inference_install_dir](#1.1)
* [1.2 安装配置 OpenCV](#1.2)
* [2. 使用 Visual Studio 2019 编译](#2)
* [3. 预测](#3)
* [3.1 准备 inference model](#3.1)
* [3.2 运行预测](#3.2)
* [3.3 注意事项](#3.3)
* [3.1 准备 inference model](#3.1)
* [3.2 运行预测](#3.2)
* [3.3 注意事项](#3.3)
<a name='1'></a>
## 1. 前置条件
......
......@@ -91,9 +91,9 @@ python3 tools/export_model.py \
导出的 inference 模型文件可用于预测引擎进行推理部署,根据不同的部署方式/平台,可参考:
* [Python 预测](./python_deploy.md)
* [C++ 预测](./cpp_deploy.md)(目前仅支持分类模型)
* [Python Whl 预测](./whl_deploy.md)(目前仅支持分类模型)
* [PaddleHub Serving 部署](./paddle_hub_serving_deploy.md)(目前仅支持分类模型)
* [PaddleServing 部署](./paddle_serving_deploy.md)
* [PaddleLite 部署](./paddle_lite_deploy.md)(目前仅支持分类模型)
* [Python 预测](./inference/python_deploy.md)
* [C++ 预测](./inference/cpp_deploy.md)(目前仅支持分类模型)
* [Python Whl 预测](./inference/whl_deploy.md)(目前仅支持分类模型)
* [PaddleHub Serving 部署](./deployment/paddle_hub_serving_deploy.md)(目前仅支持分类模型)
* [PaddleServing 部署](./deployment/paddle_serving_deploy.md)
* [PaddleLite 部署](./deployment/paddle_lite_deploy.md)(目前仅支持分类模型)
简体中文 | [English](../../en/inference_deployment/paddle_hub_serving_deploy_en.md)
# 基于 PaddleHub Serving 的服务部署
PaddleClas 支持通过 PaddleHub 快速进行服务化部署。目前支持图像分类的部署,图像识别的部署敬请期待。
---
## 目录
- [1. 简介](#1)
......@@ -22,20 +22,20 @@ PaddleClas 支持通过 PaddleHub 快速进行服务化部署。目前支持图
hubserving 服务部署配置服务包 `clas` 下包含 3 个必选文件,目录如下:
```
hubserving/clas/
└─ __init__.py 空文件,必选
└─ config.json 配置文件,可选,使用配置启动服务时作为参数传入
└─ module.py 主模块,必选,包含服务的完整逻辑
└─ params.py 参数文件,必选,包含模型路径、前后处理参数等参数
```shell
deploy/hubserving/clas/
├── __init__.py # 空文件,必选
├── config.json # 配置文件,可选,使用配置启动服务时作为参数传入
├── module.py # 主模块,必选,包含服务的完整逻辑
└── params.py # 参数文件,必选,包含模型路径、前后处理参数等参数
```
<a name="2"></a>
## 2. 准备环境
```shell
# 安装 paddlehub,请安装 2.0 版本
pip3 install paddlehub==2.1.0 --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple
# 安装 paddlehub,建议安装 2.1.0 版本
python3.7 -m pip install paddlehub==2.1.0 --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple
```
......@@ -53,30 +53,27 @@ pip3 install paddlehub==2.1.0 --upgrade -i https://pypi.tuna.tsinghua.edu.cn/sim
```python
"inference_model_dir": "../inference/"
```
需要注意,
* 模型文件(包括 `.pdmodel``.pdiparams`)名称必须为 `inference`
* 我们也提供了大量基于 ImageNet-1k 数据集的预训练模型,模型列表及下载地址详见[模型库概览](../algorithm_introduction/ImageNet_models.md),也可以使用自己训练转换好的模型。
* 模型文件(包括 `.pdmodel``.pdiparams`)的名称必须为 `inference`
* 我们提供了大量基于 ImageNet-1k 数据集的预训练模型,模型列表及下载地址详见[模型库概览](../algorithm_introduction/ImageNet_models.md),也可以使用自己训练转换好的模型。
<a name="4"></a>
## 4. 安装服务模块
针对 Linux 环境和 Windows 环境,安装命令如下。
* 在 Linux 环境下,安装示例如下:
```shell
cd PaddleClas/deploy
# 安装服务模块:
hub install hubserving/clas/
```
```shell
cd PaddleClas/deploy
# 安装服务模块:
hub install hubserving/clas/
```
* 在 Windows 环境下(文件夹的分隔符为`\`),安装示例如下:
```shell
cd PaddleClas\deploy
# 安装服务模块:
hub install hubserving\clas\
```
```shell
cd PaddleClas\deploy
# 安装服务模块:
hub install hubserving\clas\
```
<a name="5"></a>
......@@ -84,36 +81,34 @@ hub install hubserving\clas\
<a name="5.1"></a>
### 5.1 命令行命令启动
### 5.1 命令行启动
该方式仅支持使用 CPU 预测。启动命令:
```shell
$ hub serving start --modules Module1==Version1 \
--port XXXX \
--use_multiprocess \
--workers \
```
hub serving start \
--modules clas_system
--port 8866
```
这样就完成了一个服务化 API 的部署,使用默认端口号 8866。
**参数说明**:
|参数|用途|
|-|-|
|--modules/-m| [**必选**] PaddleHub Serving 预安装模型,以多个 Module==Version 键值对的形式列出<br>*`当不指定 Version 时,默认选择最新版本`*|
|--port/-p| [**可选**] 服务端口,默认为 8866|
|参数|用途|
|-|-|
|--modules/-m| [**必选**] PaddleHub Serving 预安装模型,以多个 Module==Version 键值对的形式列出<br>*`当不指定 Version 时,默认选择最新版本`*|
|--port/-p| [**可选**] 服务端口,默认为 8866|
|--use_multiprocess| [**可选**] 是否启用并发方式,默认为单进程方式,推荐多核 CPU 机器使用此方式<br>*`Windows 操作系统只支持单进程方式`*|
|--workers| [**可选**] 在并发方式下指定的并发任务数,默认为 `2*cpu_count-1`,其中 `cpu_count` 为 CPU 核数|
如按默认参数启动服务:```hub serving start -m clas_system```
这样就完成了一个服务化 API 的部署,使用默认端口号 8866。
|--workers| [**可选**] 在并发方式下指定的并发任务数,默认为 `2*cpu_count-1`,其中 `cpu_count` 为 CPU 核数|
更多部署细节详见 [PaddleHub Serving模型一键服务部署](https://paddlehub.readthedocs.io/zh_CN/release-v2.1/tutorial/serving.html)
<a name="5.2"></a>
### 5.2 配置文件启动
该方式仅支持使用 CPU 或 GPU 预测。启动命令:
```hub serving start -c config.json```
```shell
hub serving start -c config.json
```
其中,`config.json` 格式如下:
......@@ -163,12 +158,21 @@ hub serving start -c hubserving/clas/config.json
```shell
cd PaddleClas/deploy
python hubserving/test_hubserving.py server_url image_path
```
python3.7 hubserving/test_hubserving.py \
--server_url http://127.0.0.1:8866/predict/clas_system \
--image_file ./hubserving/ILSVRC2012_val_00006666.JPEG \
--batch_size 8
```
**预测输出**
```log
The result(s): class_ids: [57, 67, 68, 58, 65], label_names: ['garter snake, grass snake', 'diamondback, diamondback rattlesnake, Crotalus adamanteus', 'sidewinder, horned rattlesnake, Crotalus cerastes', 'water snake', 'sea snake'], scores: [0.21915, 0.15631, 0.14794, 0.13177, 0.12285]
The average time of prediction cost: 2.970 s/image
The average time cost: 3.014 s/image
The average top-1 score: 0.110
```
**脚本参数说明**:
* **server_url**:服务地址,格式为
`http://[ip_address]:[port]/predict/[module_name]`
* **server_url**:服务地址,格式为`http://[ip_address]:[port]/predict/[module_name]`。
* **image_path**:测试图像路径,可以是单张图片路径,也可以是图像集合目录路径。
* **batch_size**:[**可选**] 以 `batch_size` 大小为单位进行预测,默认为 `1`。
* **resize_short**:[**可选**] 预处理时,按短边调整大小,默认为 `256`。
......@@ -178,41 +182,44 @@ python hubserving/test_hubserving.py server_url image_path
**注意**:如果使用 `Transformer` 系列模型,如 `DeiT_***_384`, `ViT_***_384` 等,请注意模型的输入数据尺寸,需要指定`--resize_short=384 --crop_size=384`。
访问示例:
```shell
python hubserving/test_hubserving.py --server_url http://127.0.0.1:8866/predict/clas_system --image_file ./hubserving/ILSVRC2012_val_00006666.JPEG --batch_size 8
```
**返回结果格式说明**:
返回结果为列表(list),包含 top-k 个分类结果,以及对应的得分,还有此图片预测耗时,具体如下:
```
```shell
list: 返回结果
└─ list: 第一张图片结果
─ list: 前 k 个分类结果,依 score 递减排序
─ list: 前 k 个分类结果对应的 score,依 score 递减排序
└─ float: 该图分类耗时,单位秒
└─list: 第一张图片结果
├── list: 前 k 个分类结果,依 score 递减排序
├── list: 前 k 个分类结果对应的 score,依 score 递减排序
└─ float: 该图分类耗时,单位秒
```
<a name="7"></a>
## 7. 自定义修改服务模块
如果需要修改服务逻辑,需要进行以下操作:
如果需要修改服务逻辑,需要进行以下操作:
1. 停止服务
```hub serving stop --port/-p XXXX```
1. 停止服务
```shell
hub serving stop --port/-p XXXX
```
2. 到相应的 `module.py` 和 `params.py` 等文件中根据实际需求修改代码。`module.py` 修改后需要重新安装(`hub install hubserving/clas/`)并部署。在进行部署前,可通过 `python hubserving/clas/module.py` 测试已安装服务模块
2. 到相应的 `module.py` 和 `params.py` 等文件中根据实际需求修改代码。`module.py` 修改后需要重新安装(`hub install hubserving/clas/`)并部署。在进行部署前,可先通过 `python3.7 hubserving/clas/module.py` 命令来快速测试准备部署的代码
3. 卸载旧服务包
```hub uninstall clas_system```
3. 卸载旧服务包
```shell
hub uninstall clas_system
```
4. 安装修改后的新服务包
```hub install hubserving/clas/```
4. 安装修改后的新服务包
```shell
hub install hubserving/clas/
```
5.重新启动服务
```hub serving start -m clas_system```
5. 重新启动服务
```shell
hub serving start -m clas_system
```
**注意**:
常用参数可在 `PaddleClas/deploy/hubserving/clas/params.py` 中修改:
......@@ -229,4 +236,4 @@ list: 返回结果
'class_id_map_file':
```
为了避免不必要的延时以及能够以 batch_size 进行预测,数据预处理逻辑(包括 `resize`、`crop` 等操作)均在客户端完成,因此需要在 `PaddleClas/deploy/hubserving/test_hubserving.py#L35-L52` 中修改
为了避免不必要的延时以及能够以 batch_size 进行预测,数据预处理逻辑(包括 `resize`、`crop` 等操作)均在客户端完成,因此需要在 [PaddleClas/deploy/hubserving/test_hubserving.py#L41-L47](../../../deploy/hubserving/test_hubserving.py#L41-L47) 以及 [PaddleClas/deploy/hubserving/test_hubserving.py#L51-L76](../../../deploy/hubserving/test_hubserving.py#L51-L76) 中修改数据预处理逻辑相关代码
......@@ -231,9 +231,9 @@ adb push imgs/tabby_cat.jpg /data/local/tmp/arm_cpu/
```shell
clas_model_file ./MobileNetV3_large_x1_0.nb # 模型文件地址
label_path ./imagenet1k_label_list.txt # 类别映射文本文件
label_path ./imagenet1k_label_list.txt # 类别映射文本文件
resize_short_size 256 # resize之后的短边边长
crop_size 224 # 裁剪后用于预测的边长
crop_size 224 # 裁剪后用于预测的边长
visualize 0 # 是否进行可视化,如果选择的话,会在当前文件夹下生成名为clas_result.png的图像文件
num_threads 1 # 线程数,默认是1。
precision FP32 # 精度类型,可以选择 FP32 或者 INT8,默认是 FP32。
......@@ -263,4 +263,3 @@ A1:如果已经走通了上述步骤,更换模型只需要替换 `.nb` 模
Q2:换一个图测试怎么做?
A2:替换 debug 下的测试图像为你想要测试的图像,使用 ADB 再次 push 到手机上即可。
# 模型服务化部署
--------
## 目录
- [1. 简介](#1)
- [2. Serving 安装](#2)
- [3. 图像分类服务部署](#3)
- [3.1 模型转换](#3.1)
- [3.2 服务部署和请求](#3.2)
- [3.2.1 Python Serving](#3.2.1)
- [3.2.2 C++ Serving](#3.2.2)
- [4. 图像识别服务部署](#4)
- [4.1 模型转换](#4.1)
- [4.2 服务部署和请求](#4.2)
- [4.2.1 Python Serving](#4.2.1)
- [4.2.2 C++ Serving](#4.2.2)
- [5. FAQ](#5)
<a name="1"></a>
## 1. 简介
[Paddle Serving](https://github.com/PaddlePaddle/Serving) 旨在帮助深度学习开发者轻松部署在线预测服务,支持一键部署工业级的服务能力、客户端和服务端之间高并发和高效通信、并支持多种编程语言开发客户端。
该部分以 HTTP 预测服务部署为例,介绍怎样在 PaddleClas 中使用 PaddleServing 部署模型服务。目前只支持 Linux 平台部署,暂不支持 Windows 平台。
<a name="2"></a>
## 2. Serving 安装
Serving 官网推荐使用 docker 安装并部署 Serving 环境。首先需要拉取 docker 环境并创建基于 Serving 的 docker。
```shell
# 启动GPU docker
docker pull paddlepaddle/serving:0.7.0-cuda10.2-cudnn7-devel
nvidia-docker run -p 9292:9292 --name test -dit paddlepaddle/serving:0.7.0-cuda10.2-cudnn7-devel bash
nvidia-docker exec -it test bash
# 启动CPU docker
docker pull paddlepaddle/serving:0.7.0-devel
docker run -p 9292:9292 --name test -dit paddlepaddle/serving:0.7.0-devel bash
docker exec -it test bash
```
进入 docker 后,需要安装 Serving 相关的 python 包。
```shell
pip3 install paddle-serving-client==0.7.0
pip3 install paddle-serving-app==0.7.0
pip3 install faiss-cpu==1.7.1post2
#若为CPU部署环境:
pip3 install paddle-serving-server==0.7.0 # CPU
pip3 install paddlepaddle==2.2.0 # CPU
#若为GPU部署环境
pip3 install paddle-serving-server-gpu==0.7.0.post102 # GPU with CUDA10.2 + TensorRT6
pip3 install paddlepaddle-gpu==2.2.0 # GPU with CUDA10.2
#其他GPU环境需要确认环境再选择执行哪一条
pip3 install paddle-serving-server-gpu==0.7.0.post101 # GPU with CUDA10.1 + TensorRT6
pip3 install paddle-serving-server-gpu==0.7.0.post112 # GPU with CUDA11.2 + TensorRT8
```
* 如果安装速度太慢,可以通过 `-i https://pypi.tuna.tsinghua.edu.cn/simple` 更换源,加速安装过程。
* 其他环境配置安装请参考: [使用Docker安装Paddle Serving](https://github.com/PaddlePaddle/Serving/blob/v0.7.0/doc/Install_CN.md)
<a name="3"></a>
## 3. 图像分类服务部署
<a name="3.1"></a>
### 3.1 模型转换
使用 PaddleServing 做服务化部署时,需要将保存的 inference 模型转换为 Serving 模型。下面以经典的 ResNet50_vd 模型为例,介绍如何部署图像分类服务。
- 进入工作目录:
```shell
cd deploy/paddleserving
```
- 下载 ResNet50_vd 的 inference 模型:
```shell
# 下载并解压 ResNet50_vd 模型
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/ResNet50_vd_infer.tar && tar xf ResNet50_vd_infer.tar
```
- 用 paddle_serving_client 把下载的 inference 模型转换成易于 Server 部署的模型格式:
```
# 转换 ResNet50_vd 模型
python3 -m paddle_serving_client.convert --dirname ./ResNet50_vd_infer/ \
--model_filename inference.pdmodel \
--params_filename inference.pdiparams \
--serving_server ./ResNet50_vd_serving/ \
--serving_client ./ResNet50_vd_client/
```
ResNet50_vd 推理模型转换完成后,会在当前文件夹多出 `ResNet50_vd_serving``ResNet50_vd_client` 的文件夹,具备如下格式:
```
|- ResNet50_vd_serving/
|- inference.pdiparams
|- inference.pdmodel
|- serving_server_conf.prototxt
|- serving_server_conf.stream.prototxt
|- ResNet50_vd_client
|- serving_client_conf.prototxt
|- serving_client_conf.stream.prototxt
```
得到模型文件之后,需要分别修改 `ResNet50_vd_serving``ResNet50_vd_client` 下文件 `serving_server_conf.prototxt` 中的 alias 名字:将 `fetch_var` 中的 `alias_name` 改为 `prediction`
**备注**: Serving 为了兼容不同模型的部署,提供了输入输出重命名的功能。这样,不同的模型在推理部署时,只需要修改配置文件的 alias_name 即可,无需修改代码即可完成推理部署。
修改后的 serving_server_conf.prototxt 如下所示:
```
feed_var {
name: "inputs"
alias_name: "inputs"
is_lod_tensor: false
feed_type: 1
shape: 3
shape: 224
shape: 224
}
fetch_var {
name: "save_infer_model/scale_0.tmp_1"
alias_name: "prediction"
is_lod_tensor: false
fetch_type: 1
shape: 1000
}
```
<a name="3.2"></a>
### 3.2 服务部署和请求
paddleserving 目录包含了启动 pipeline 服务、C++ serving服务和发送预测请求的代码,包括:
```shell
__init__.py
config.yml # 启动pipeline服务的配置文件
pipeline_http_client.py # http方式发送pipeline预测请求的脚本
pipeline_rpc_client.py # rpc方式发送pipeline预测请求的脚本
classification_web_service.py # 启动pipeline服务端的脚本
run_cpp_serving.sh # 启动C++ Serving部署的脚本
test_cpp_serving_client.py # rpc方式发送C++ serving预测请求的脚本
```
<a name="3.2.1"></a>
#### 3.2.1 Python Serving
- 启动服务:
```shell
# 启动服务,运行日志保存在 log.txt
python3 classification_web_service.py &>log.txt &
```
- 发送请求:
```shell
# 发送服务请求
python3 pipeline_http_client.py
```
成功运行后,模型预测的结果会打印在 cmd 窗口中,结果如下:
```
{'err_no': 0, 'err_msg': '', 'key': ['label', 'prob'], 'value': ["['daisy']", '[0.9341402053833008]'], 'tensors': []}
```
<a name="3.2.2"></a>
#### 3.2.2 C++ Serving
- 启动服务:
```shell
# 启动服务, 服务在后台运行,运行日志保存在 nohup.txt
sh run_cpp_serving.sh
```
- 发送请求:
```shell
# 发送服务请求
python3 test_cpp_serving_client.py
```
成功运行后,模型预测的结果会打印在 cmd 窗口中,结果如下:
```
prediction: daisy, probability: 0.9341399073600769
```
<a name="4"></a>
## 4.图像识别服务部署
使用 PaddleServing 做服务化部署时,需要将保存的 inference 模型转换为 Serving 模型。 下面以 PP-ShiTu 中的超轻量图像识别模型为例,介绍图像识别服务的部署。
<a name="4.1"></a>
## 4.1 模型转换
- 下载通用检测 inference 模型和通用识别 inference 模型
```
cd deploy
# 下载并解压通用识别模型
wget -P models/ https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/general_PPLCNet_x2_5_lite_v1.0_infer.tar
cd models
tar -xf general_PPLCNet_x2_5_lite_v1.0_infer.tar
# 下载并解压通用检测模型
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.tar
tar -xf picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.tar
```
- 转换识别 inference 模型为 Serving 模型:
```
# 转换识别模型
python3 -m paddle_serving_client.convert --dirname ./general_PPLCNet_x2_5_lite_v1.0_infer/ \
--model_filename inference.pdmodel \
--params_filename inference.pdiparams \
--serving_server ./general_PPLCNet_x2_5_lite_v1.0_serving/ \
--serving_client ./general_PPLCNet_x2_5_lite_v1.0_client/
```
识别推理模型转换完成后,会在当前文件夹多出 `general_PPLCNet_x2_5_lite_v1.0_serving/``general_PPLCNet_x2_5_lite_v1.0_client/` 的文件夹。分别修改 `general_PPLCNet_x2_5_lite_v1.0_serving/``general_PPLCNet_x2_5_lite_v1.0_client/` 目录下的 serving_server_conf.prototxt 中的 alias 名字: 将 `fetch_var` 中的 `alias_name` 改为 `features`
修改后的 serving_server_conf.prototxt 内容如下:
```
feed_var {
name: "x"
alias_name: "x"
is_lod_tensor: false
feed_type: 1
shape: 3
shape: 224
shape: 224
}
fetch_var {
name: "save_infer_model/scale_0.tmp_1"
alias_name: "features"
is_lod_tensor: false
fetch_type: 1
shape: 512
}
```
- 转换通用检测 inference 模型为 Serving 模型:
```
# 转换通用检测模型
python3 -m paddle_serving_client.convert --dirname ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer/ \
--model_filename inference.pdmodel \
--params_filename inference.pdiparams \
--serving_server ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/ \
--serving_client ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/
```
检测 inference 模型转换完成后,会在当前文件夹多出 `picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/``picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/` 的文件夹。
**注意:** 此处不需要修改 `picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/` 目录下的 serving_server_conf.prototxt 中的 alias 名字。
- 下载并解压已经构建后的检索库 index
```
cd ../
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/data/drink_dataset_v1.0.tar && tar -xf drink_dataset_v1.0.tar
```
<a name="4.2"></a>
## 4.2 服务部署和请求
**注意:** 识别服务涉及到多个模型,出于性能考虑采用 PipeLine 部署方式。Pipeline 部署方式当前不支持 windows 平台。
- 进入到工作目录
```shell
cd ./deploy/paddleserving/recognition
```
paddleserving 目录包含启动 Python Pipeline 服务、C++ Serving 服务和发送预测请求的代码,包括:
```
__init__.py
config.yml # 启动python pipeline服务的配置文件
pipeline_http_client.py # http方式发送pipeline预测请求的脚本
pipeline_rpc_client.py # rpc方式发送pipeline预测请求的脚本
recognition_web_service.py # 启动pipeline服务端的脚本
run_cpp_serving.sh # 启动C++ Pipeline Serving部署的脚本
test_cpp_serving_client.py # rpc方式发送C++ Pipeline serving预测请求的脚本
```
<a name="4.2.1"></a>
#### 4.2.1 Python Serving
- 启动服务:
```
# 启动服务,运行日志保存在 log.txt
python3 recognition_web_service.py &>log.txt &
```
- 发送请求:
```
python3 pipeline_http_client.py
```
成功运行后,模型预测的结果会打印在 cmd 窗口中,结果如下:
```
{'err_no': 0, 'err_msg': '', 'key': ['result'], 'value': ["[{'bbox': [345, 95, 524, 576], 'rec_docs': '红牛-强化型', 'rec_scores': 0.79903316}]"], 'tensors': []}
```
<a name="4.2.2"></a>
#### 4.2.2 C++ Serving
- 启动服务:
```shell
# 启动服务: 此处会在后台同时启动主体检测和特征提取服务,端口号分别为9293和9294;
# 运行日志分别保存在 log_mainbody_detection.txt 和 log_feature_extraction.txt中
sh run_cpp_serving.sh
```
- 发送请求:
```shell
# 发送服务请求
python3 test_cpp_serving_client.py
```
成功运行后,模型预测的结果会打印在 cmd 窗口中,结果如下所示:
```
[{'bbox': [345, 95, 524, 586], 'rec_docs': '红牛-强化型', 'rec_scores': 0.8016462}]
```
<a name="5"></a>
## 5.FAQ
**Q1**: 发送请求后没有结果返回或者提示输出解码报错
**A1**: 启动服务和发送请求时不要设置代理,可以在启动服务前和发送请求前关闭代理,关闭代理的命令是:
```
unset https_proxy
unset http_proxy
```
更多的服务部署类型,如 `RPC 预测服务` 等,可以参考 Serving 的[github 官网](https://github.com/PaddlePaddle/Serving/tree/v0.7.0/examples)
......@@ -8,9 +8,9 @@
- [1. 图像分类模型推理](#1)
- [2. PP-ShiTu模型推理](#2)
- [2.1 主体检测模型推理](#2.1)
- [2.2 特征提取模型推理](#2.2)
- [2.3 PP-ShiTu PipeLine推理](#2.3)
- [2.1 主体检测模型推理](#2.1)
- [2.2 特征提取模型推理](#2.2)
- [2.3 PP-ShiTu PipeLine推理](#2.3)
<a name="1"></a>
## 1. 图像分类推理
......
简体中文 | [English](../../en/inference_deployment/recognition_serving_deploy_en.md)
# 识别模型服务化部署
## 目录
- [1. 简介](#1-简介)
- [2. Serving 安装](#2-serving-安装)
- [3. 图像识别服务部署](#3-图像识别服务部署)
- [3.1 模型转换](#31-模型转换)
- [3.2 服务部署和请求](#32-服务部署和请求)
- [3.2.1 Python Serving](#321-python-serving)
- [3.2.2 C++ Serving](#322-c-serving)
- [4. FAQ](#4-faq)
<a name="1"></a>
## 1. 简介
[Paddle Serving](https://github.com/PaddlePaddle/Serving) 旨在帮助深度学习开发者轻松部署在线预测服务,支持一键部署工业级的服务能力、客户端和服务端之间高并发和高效通信、并支持多种编程语言开发客户端。
该部分以 HTTP 预测服务部署为例,介绍怎样在 PaddleClas 中使用 PaddleServing 部署模型服务。目前只支持 Linux 平台部署,暂不支持 Windows 平台。
<a name="2"></a>
## 2. Serving 安装
Serving 官网推荐使用 docker 安装并部署 Serving 环境。首先需要拉取 docker 环境并创建基于 Serving 的 docker。
```shell
# 启动GPU docker
docker pull paddlepaddle/serving:0.7.0-cuda10.2-cudnn7-devel
nvidia-docker run -p 9292:9292 --name test -dit paddlepaddle/serving:0.7.0-cuda10.2-cudnn7-devel bash
nvidia-docker exec -it test bash
# 启动CPU docker
docker pull paddlepaddle/serving:0.7.0-devel
docker run -p 9292:9292 --name test -dit paddlepaddle/serving:0.7.0-devel bash
docker exec -it test bash
```
进入 docker 后,需要安装 Serving 相关的 python 包。
```shell
python3.7 -m pip install paddle-serving-client==0.7.0
python3.7 -m pip install paddle-serving-app==0.7.0
python3.7 -m pip install faiss-cpu==1.7.1post2
#若为CPU部署环境:
python3.7 -m pip install paddle-serving-server==0.7.0 # CPU
python3.7 -m pip install paddlepaddle==2.2.0 # CPU
#若为GPU部署环境
python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post102 # GPU with CUDA10.2 + TensorRT6
python3.7 -m pip install paddlepaddle-gpu==2.2.0 # GPU with CUDA10.2
#其他GPU环境需要确认环境再选择执行哪一条
python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post101 # GPU with CUDA10.1 + TensorRT6
python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post112 # GPU with CUDA11.2 + TensorRT8
```
* 如果安装速度太慢,可以通过 `-i https://pypi.tuna.tsinghua.edu.cn/simple` 更换源,加速安装过程。
* 其他环境配置安装请参考:[使用Docker安装Paddle Serving](https://github.com/PaddlePaddle/Serving/blob/v0.7.0/doc/Install_CN.md)
<a name="3"></a>
## 3. 图像识别服务部署
使用 PaddleServing 做图像识别服务化部署时,**需要将保存的多个 inference 模型都转换为 Serving 模型**。 下面以 PP-ShiTu 中的超轻量图像识别模型为例,介绍图像识别服务的部署。
<a name="3.1"></a>
### 3.1 模型转换
- 进入工作目录:
```shell
cd deploy/
```
- 下载通用检测 inference 模型和通用识别 inference 模型
```shell
# 创建并进入models文件夹
mkdir models
cd models
# 下载并解压通用识别模型
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/general_PPLCNet_x2_5_lite_v1.0_infer.tar
tar -xf general_PPLCNet_x2_5_lite_v1.0_infer.tar
# 下载并解压通用检测模型
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.tar
tar -xf picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.tar
```
- 转换通用识别 inference 模型为 Serving 模型:
```shell
# 转换通用识别模型
python3.7 -m paddle_serving_client.convert \
--dirname ./general_PPLCNet_x2_5_lite_v1.0_infer/ \
--model_filename inference.pdmodel \
--params_filename inference.pdiparams \
--serving_server ./general_PPLCNet_x2_5_lite_v1.0_serving/ \
--serving_client ./general_PPLCNet_x2_5_lite_v1.0_client/
```
上述命令中各参数的含义,可参考本节后文的参数说明表格。
通用识别 inference 模型转换完成后,会在当前文件夹多出 `general_PPLCNet_x2_5_lite_v1.0_serving/``general_PPLCNet_x2_5_lite_v1.0_client/` 的文件夹,具备如下结构:
```shell
├── general_PPLCNet_x2_5_lite_v1.0_serving/
│ ├── inference.pdiparams
│ ├── inference.pdmodel
│ ├── serving_server_conf.prototxt
│ └── serving_server_conf.stream.prototxt
└── general_PPLCNet_x2_5_lite_v1.0_client/
├── serving_client_conf.prototxt
└── serving_client_conf.stream.prototxt
```
- 转换通用检测 inference 模型为 Serving 模型:
```shell
# 转换通用检测模型
python3.7 -m paddle_serving_client.convert --dirname ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer/ \
--model_filename inference.pdmodel \
--params_filename inference.pdiparams \
--serving_server ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/ \
--serving_client ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/
```
上述命令中各参数的含义,可参考本节后文的参数说明表格。
识别 inference 模型转换完成后,需要分别修改 `general_PPLCNet_x2_5_lite_v1.0_serving/` 和 `general_PPLCNet_x2_5_lite_v1.0_client/` 目录下 `serving_server_conf.prototxt` 中的 `alias` 名字:将 `fetch_var` 中的 `alias_name` 改为 `features`。修改后的 `serving_server_conf.prototxt` 内容如下
```log
feed_var {
name: "x"
alias_name: "x"
is_lod_tensor: false
feed_type: 1
shape: 3
shape: 224
shape: 224
}
fetch_var {
name: "save_infer_model/scale_0.tmp_1"
alias_name: "features"
is_lod_tensor: false
fetch_type: 1
shape: 512
}
```
通用检测 inference 模型转换完成后,会在当前文件夹多出 `picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/``picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/` 的文件夹,具备如下结构:
```shell
├── picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/
│ ├── inference.pdiparams
│ ├── inference.pdmodel
│ ├── serving_server_conf.prototxt
│ └── serving_server_conf.stream.prototxt
└── picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/
├── serving_client_conf.prototxt
└── serving_client_conf.stream.prototxt
```
上述命令中参数具体含义如下表所示
| 参数 | 类型 | 默认值 | 描述 |
| ----------------- | ---- | ------------------ | ------------------------------------------------------------ |
| `dirname` | str | - | 需要转换的模型文件存储路径,Program结构文件和参数文件均保存在此目录。 |
| `model_filename` | str | None | 存储需要转换的模型Inference Program结构的文件名称。如果设置为None,则使用 `__model__` 作为默认的文件名 |
| `params_filename` | str | None | 存储需要转换的模型所有参数的文件名称。当且仅当所有模型参数被保>存在一个单独的二进制文件中,它才需要被指定。如果模型参数是存储在各自分离的文件中,设置它的值为None |
| `serving_server` | str | `"serving_server"` | 转换后的模型文件和配置文件的存储路径。默认值为serving_server |
| `serving_client` | str | `"serving_client"` | 转换后的客户端配置文件存储路径。默认值为serving_client |
- 下载并解压已经构建后完成的检索库 index
```shell
# 回到deploy目录
cd ../
# 下载构建完成的检索库 index
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/data/drink_dataset_v1.0.tar
# 解压构建完成的检索库 index
tar -xf drink_dataset_v1.0.tar
```
<a name="3.2"></a>
### 3.2 服务部署和请求
**注意:** 识别服务涉及到多个模型,出于性能考虑采用 PipeLine 部署方式。Pipeline 部署方式当前不支持 windows 平台。
- 进入到工作目录
```shell
cd ./deploy/paddleserving/recognition
```
paddleserving 目录包含启动 Python Pipeline 服务、C++ Serving 服务和发送预测请求的代码,包括:
```shell
__init__.py
config.yml # 启动python pipeline服务的配置文件
pipeline_http_client.py # http方式发送pipeline预测请求的脚本
pipeline_rpc_client.py # rpc方式发送pipeline预测请求的脚本
recognition_web_service.py # 启动pipeline服务端的脚本
readme.md # 识别模型服务化部署文档
run_cpp_serving.sh # 启动C++ Pipeline Serving部署的脚本
test_cpp_serving_client.py # rpc方式发送C++ Pipeline serving预测请求的脚本
```
<a name="3.2.1"></a>
#### 3.2.1 Python Serving
- 启动服务:
```shell
# 启动服务,运行日志保存在 log.txt
python3.7 recognition_web_service.py &>log.txt &
```
- 发送请求:
```shell
python3.7 pipeline_http_client.py
```
成功运行后,模型预测的结果会打印在客户端中,如下所示:
```log
{'err_no': 0, 'err_msg': '', 'key': ['result'], 'value': ["[{'bbox': [345, 95, 524, 576], 'rec_docs': '红牛-强化型', 'rec_scores': 0.79903316}]"], 'tensors': []}
```
<a name="3.2.2"></a>
#### 3.2.2 C++ Serving
与Python Serving不同,C++ Serving客户端调用 C++ OP来预测,因此在启动服务之前,需要编译并安装 serving server包,并设置 `SERVING_BIN`
- 编译并安装Serving server包
```shell
# 进入工作目录
cd PaddleClas/deploy/paddleserving
# 一键编译安装Serving server、设置 SERVING_BIN
source ./build_server.sh python3.7
```
**注:**[build_server.sh](../build_server.sh#L55-L62)所设定的路径可能需要根据实际机器上的环境如CUDA、python版本等作一定修改,然后再编译。
- C++ Serving 使用的输入输出格式与 Python Serving 不同,因此需要执行以下命令,用 `paddleserving/recognition/preprocess/` 目录下提供的 4 个 prototxt 文件,覆盖 [3.1 模型转换](#31-模型转换) 中转换得到的文件夹内对应的 4 个 prototxt 文件。
```shell
# 进入PaddleClas/deploy目录
cd PaddleClas/deploy/
# 覆盖prototxt文件
\cp ./paddleserving/recognition/preprocess/general_PPLCNet_x2_5_lite_v1.0_serving/*.prototxt ./models/general_PPLCNet_x2_5_lite_v1.0_serving/
\cp ./paddleserving/recognition/preprocess/general_PPLCNet_x2_5_lite_v1.0_client/*.prototxt ./models/general_PPLCNet_x2_5_lite_v1.0_client/
\cp ./paddleserving/recognition/preprocess/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/*.prototxt ./models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/
\cp ./paddleserving/recognition/preprocess/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/*.prototxt ./models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/
```
- 启动服务:
```shell
# 进入工作目录
cd PaddleClas/deploy/paddleserving/recognition
# 端口号默认为9400;运行日志默认保存在 log_PPShiTu.txt 中
# CPU部署
bash run_cpp_serving.sh
# GPU部署,并指定第0号卡
bash run_cpp_serving.sh 0
```
- 发送请求:
```shell
# 发送服务请求
python3.7 test_cpp_serving_client.py
```
成功运行后,模型预测的结果会打印在客户端中,如下所示:
```log
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0614 03:01:36.273097 6084 naming_service_thread.cpp:202] brpc::policy::ListNamingService("127.0.0.1:9400"): added 1
I0614 03:01:37.393564 6084 general_model.cpp:490] [client]logid=0,client_cost=1107.82ms,server_cost=1101.75ms.
[{'bbox': [345, 95, 524, 585], 'rec_docs': '红牛-强化型', 'rec_scores': 0.8073724}]
```
- 关闭服务
如果服务程序在前台运行,可以按下`Ctrl+C`来终止服务端程序;如果在后台运行,可以使用kill命令关闭相关进程,也可以在启动服务程序的路径下执行以下命令来终止服务端程序:
```bash
python3.7 -m paddle_serving_server.serve stop
```
执行完毕后出现`Process stopped`信息表示成功关闭服务。
<a name="4"></a>
## 4. FAQ
**Q1**: 发送请求后没有结果返回或者提示输出解码报错
**A1**: 启动服务和发送请求时不要设置代理,可以在启动服务前和发送请求前关闭代理,关闭代理的命令是:
```shell
unset https_proxy
unset http_proxy
```
**Q2**: 启动服务后没有任何反应
**A2**: 可以检查`config.yml``model_config`对应的路径是否存在,文件夹命名是否正确
更多的服务部署类型,如 `RPC 预测服务` 等,可以参考 Serving 的[github 官网](https://github.com/PaddlePaddle/Serving/tree/v0.9.0/examples)