Commit a709d58e authored by H HydrogenSulfate

polish_serving_docs

Parent 6ed4ae5e
English|[Chinese](../../zh_CN/inference_deployment/paddle_hub_serving_deploy.md)
# Service deployment based on PaddleHub Serving

PaddleClas supports rapid service deployment through PaddleHub. Currently, the deployment of image classification is supported; the deployment of image recognition is coming soon.

---
## Catalogue

- [1. Introduction](#1)
- [2. Prepare the environment](#2)
- [3. Download the inference model](#3)
- [4. Install the service module](#4)
- [5. Start service](#5)
- [6. Send prediction requests](#6)
- [7. User defined service module modification](#7)
<a name="1"></a>
## 1. Introduction

The hubserving service package `clas` contains 3 required files (plus an optional configuration file); the directory layout is as follows:
```shell
deploy/hubserving/clas/
├── __init__.py # Empty file, required
├── config.json # Configuration file, optional, passed in as a parameter when starting the service with configuration
├── module.py # The main module, required, contains the complete logic of the service
└── params.py # Parameter file, required, including model path, pre- and post-processing parameters and other parameters
```
<a name="2"></a>
## 2. Prepare the environment

```shell
# Install paddlehub; version 2.1.0 is recommended
python3.7 -m pip install paddlehub==2.1.0 --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple
```
<a name="3"></a>
## 3. Download the inference model

Before installing the service module, you need to prepare the inference model and put it in the correct path. The default model path is:

* Classification inference model structure file: `PaddleClas/inference/inference.pdmodel`
* Classification inference model weight file: `PaddleClas/inference/inference.pdiparams`

**Notice**:
* The model file paths can be viewed and modified in `PaddleClas/deploy/hubserving/clas/params.py`:

  ```python
  "inference_model_dir": "../inference/"
  ```
* The model files (both `.pdmodel` and `.pdiparams`) must be named `inference`.
* We provide a large number of pre-trained models based on the ImageNet-1k dataset. For the model list and download addresses, see the [model library overview](../algorithm_introduction/ImageNet_models_en.md). You can also use your own trained and converted models (a download sketch follows this list).
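For illustration, the following sketch prepares a classification inference model at the default path; the download URL is an example only and the exact one should be taken from the model library page linked above.

```shell
# Example only: download an inference model and place its files at PaddleClas/inference/
cd PaddleClas
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/ResNet50_vd_infer.tar
tar -xf ResNet50_vd_infer.tar
mkdir -p inference
# The files must be named inference.pdmodel / inference.pdiparams
mv ResNet50_vd_infer/inference.pdmodel ResNet50_vd_infer/inference.pdiparams inference/
```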
<a name="4"></a>
## 4. Install the service module

* In the Linux environment, the installation example is as follows:

  ```shell
  cd PaddleClas/deploy
  # Install the service module:
  hub install hubserving/clas/
  ```

* In the Windows environment (the folder separator is `\`), the installation example is as follows:

  ```shell
  cd PaddleClas\deploy
  # Install the service module:
  hub install hubserving\clas\
  ```
<a name="5"></a>
## 5. Start service

<a name="5.1"></a>
### 5.1 Start with command line parameters

This method only supports prediction using the CPU. Start command:

```shell
hub serving start \
--modules clas_system \
--port 8866
```

This completes the deployment of a serviced API, using the default port number 8866.

**Parameter description**:

|parameters|usage|
|-|-|
|--modules/-m| [**required**] PaddleHub Serving pre-installed model, listed as multiple Module==Version key-value pairs<br>*`When Version is not specified, the latest version is selected by default`*|
|--port/-p| [**optional**] Service port, default is 8866|
|--use_multiprocess| [**optional**] Whether to enable concurrent mode; the default is single-process mode. Concurrent mode is recommended for multi-core CPU machines<br>*`The Windows operating system only supports single-process mode`*|
|--workers| [**optional**] The number of concurrent tasks in concurrent mode, default is `2*cpu_count-1`, where `cpu_count` is the number of CPU cores|

For more deployment details, see [PaddleHub Serving Model One-Click Service Deployment](https://paddlehub.readthedocs.io/zh_CN/release-v2.1/tutorial/serving.html).
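For example, combining the parameters above, a multi-core CPU machine could start the service in concurrent mode (a sketch; adjust the number of workers to your machine, and note that concurrent mode is not available on Windows):

```shell
hub serving start -m clas_system -p 8866 --use_multiprocess --workers 4
```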
<a name="5.2"></a>
### 5.2 Start with configuration file

This method supports prediction using either CPU or GPU. Start command:

```shell
hub serving start -c config.json
```

The format of `config.json` is as follows:
```json
{
    "modules_info": {
        "clas_system": {
            "init_args": {
                "version": "1.0.0",
                "use_gpu": true,
                "enable_mkldnn": false
            },
            "predict_args": {}
        }
    },
    "port": 8866,
    "use_multiprocess": false,
    "workers": 2
}
```
**Parameter description**:
* The configurable parameters in `init_args` are consistent with the `_initialize` function interface in `module.py`. Among them:
  - When `use_gpu` is `true`, the GPU is used to start the service.
  - When `enable_mkldnn` is `true`, MKL-DNN acceleration is used.
* The configurable parameters in `predict_args` are consistent with the `predict` function interface in `module.py`.

**Notice**:
* When the service is started with a configuration file, the parameter settings in the configuration file are used and other command line parameters are ignored;
* If you use GPU prediction (i.e. `use_gpu` is set to `true`), you need to set the `CUDA_VISIBLE_DEVICES` environment variable to specify the GPU card number before starting the service, for example `export CUDA_VISIBLE_DEVICES=0`;
* **`use_gpu` and `use_multiprocess` cannot both be `true` at the same time;**
* **when both `use_gpu` and `enable_mkldnn` are `true`, `enable_mkldnn` is ignored and the GPU is used.**
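For example, to serve on the CPU with MKL-DNN acceleration enabled, the relevant `init_args` fields could be set as follows (an illustrative sketch, not the shipped defaults):

```json
"init_args": {
    "version": "1.0.0",
    "use_gpu": false,
    "enable_mkldnn": true
}
```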
To start the service using GPU card No. 3, for example:

```shell
cd PaddleClas/deploy
export CUDA_VISIBLE_DEVICES=3
hub serving start -c hubserving/clas/config.json
```
<a name="6"></a>
## 6. Send prediction requests

After the service is started, you can use the following command to send a prediction request and obtain the prediction result:

```shell
cd PaddleClas/deploy
python3.7 hubserving/test_hubserving.py \
--server_url http://127.0.0.1:8866/predict/clas_system \
--image_file ./hubserving/ILSVRC2012_val_00006666.JPEG \
--batch_size 8
```

**Predicted output**

```log
The result(s): class_ids: [57, 67, 68, 58, 65], label_names: ['garter snake, grass snake', 'diamondback, diamondback rattlesnake, Crotalus adamanteus', 'sidewinder, horned rattlesnake, Crotalus cerastes', 'water snake', 'sea snake'], scores: [0.21915, 0.15631, 0.14794, 0.13177, 0.12285]
The average time of prediction cost: 2.970 s/image
The average time cost: 3.014 s/image
The average top-1 score: 0.110
```
**Script parameter description**:

* **server_url**: Service address, in the format `http://[ip_address]:[port]/predict/[module_name]`.
* **image_file**: Test image path; can be a single image path or an image directory path.
* **batch_size**: [**optional**] Predict in batches of `batch_size`, default is `1`.
* **resize_short**: [**optional**] Resize by the short edge during preprocessing, default is `256`.
* **crop_size**: [**optional**] Size of the center crop during preprocessing, default is `224`.
* **normalize**: [**optional**] Whether to normalize during preprocessing, default is `True`.
* **to_chw**: [**optional**] Whether to transpose to `CHW` order during preprocessing, default is `True`.

**Note**: If you use `Transformer` series models, such as `DeiT_***_384`, `ViT_***_384`, etc., please pay attention to the input data size of the model; you need to specify `--resize_short=384 --crop_size=384`.
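For example, combining the parameters above, sending a whole directory of images to a 384-input Transformer model served as `clas_system` might look like the following (the image directory path is an illustrative assumption):

```shell
python3.7 hubserving/test_hubserving.py \
--server_url http://127.0.0.1:8866/predict/clas_system \
--image_file ./images/ \
--batch_size 8 \
--resize_short=384 \
--crop_size=384
```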
**Return result format description**:

The returned result is a list, including the top-k classification results, the corresponding scores, and the prediction time for the image, as follows:

```
list: the returned results
└── list: the result of the first image
    ├── list: the top-k classification results, sorted in descending order of score
    ├── list: the scores corresponding to the top-k classification results, sorted in descending order of score
    └── float: the time cost of classifying the image, in seconds
```
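As an illustration, the nested structure above can be walked like this in Python (the values are made up and only mirror the documented layout):

```python
# Hypothetical result following the documented structure:
# one entry per image: [top-k labels, top-k scores, prediction time in seconds]
results = [
    [["water snake", "sea snake"], [0.132, 0.123], 2.97],
]
for labels, scores, elapsed in results:
    print(f"top-1: {labels[0]} (score={scores[0]:.3f}), cost={elapsed:.2f}s")
```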
**Note**: If you need to add, delete, or modify the returned fields, you can modify the corresponding `module.py`. For details, refer to the user-defined service module modification in the next section.
<a name="7"></a>
## 7. User defined service module modification

If you need to modify the service logic, the following steps are generally required:

1. Stop the service

   ```shell
   hub serving stop --port/-p XXXX
   ```

2. Modify the code in the corresponding `module.py`, `params.py`, and other files according to your actual needs. After modifying `module.py`, you need to reinstall it (`hub install hubserving/clas/`) and redeploy. Before deploying, you can use the `python3.7 hubserving/clas/module.py` command to quickly test the code to be deployed.

3. Uninstall the old service module

   ```shell
   hub uninstall clas_system
   ```

4. Install the modified service module

   ```shell
   hub install hubserving/clas/
   ```

5. Restart the service

   ```shell
   hub serving start -m clas_system
   ```

**Notice**:
Common parameters can be modified in `PaddleClas/deploy/hubserving/clas/params.py` (a combined sketch follows the list):

* To replace the model, modify the model file path parameter:

  ```python
  "inference_model_dir":
  ```
* To change the number of top-k results returned during post-processing:

  ```python
  'topk':
  ```
* To change the mapping file between labels and class IDs used during post-processing:

  ```python
  'class_id_map_file':
  ```
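Putting the three entries together, a hypothetical excerpt of `params.py` might look like this (the values are illustrative assumptions, not the shipped defaults):

```python
# Hypothetical excerpt of deploy/hubserving/clas/params.py; check the shipped file for the real defaults.
cfg = {
    # Directory containing inference.pdmodel / inference.pdiparams
    "inference_model_dir": "../inference/",
    # Number of top-k results returned after post-processing
    "topk": 5,
    # Mapping file between class ids and label names used in post-processing
    "class_id_map_file": "./utils/imagenet1k_label_list.txt",
}
```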
In order to avoid unnecessary delay and to be able to predict with a batch size, data preprocessing (including `resize`, `crop`, and other operations) is completed on the client side, so the code related to data preprocessing needs to be modified in [PaddleClas/deploy/hubserving/test_hubserving.py#L35-L52](../../../deploy/hubserving/test_hubserving.py) if necessary.
English|[Chinese](../../zh_CN/inference_deployment/paddle_serving_deploy.md)
# Model Service deployment

--------

## Catalogue
    - [3.2.1 Python Serving](#3.2.1)
    - [3.2.2 C++ Serving](#3.2.2)
- [4. Service Deployment for Image Recognition](#4)
  - [4.1 Model Transformation](#4.1)
  - [4.2 Service Deployment and Request](#4.2)
    - [4.2.1 Python Serving](#4.2.1)
    - [4.2.2 C++ Serving](#4.2.2)
- [5. FAQ](#5)
<a name="1"></a>
## 1. Introduction

This section takes the HTTP prediction service deployment as an example to introduce how to use PaddleServing to deploy the model service in PaddleClas. Currently, only Linux platform deployment is supported; the Windows platform is not supported.
<a name="2"></a>
## 2. Installation of Serving
The Serving official website recommends using Docker to install and deploy the Serving environment. First, you need to pull the Docker image and create a Serving-based container.
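As an illustration only, pulling a Serving development image and entering a container might look like the sketch below; the image name and tag are assumptions and should be taken from the official Serving installation guide.

```shell
# Hypothetical image tag; use the one given in the Serving installation docs.
docker pull registry.baidubce.com/paddlepaddle/serving:0.7.0-devel
docker run -dit --name paddle_serving registry.baidubce.com/paddlepaddle/serving:0.7.0-devel bash
docker exec -it paddle_serving bash
```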
<a name="3"></a>
## 3. Service Deployment for Image Classification

The following takes the classic ResNet50_vd model as an example to introduce how to deploy the image classification service.
<a name="3.1"></a>
### 3.1 Model Transformation

When using PaddleServing for service deployment, you need to convert the saved inference model into a Serving model.
- Go to the working directory:

  ```shell
  cd deploy/
  ```
<a name="3.2"></a>
### 3.2 Service Deployment and Request

The paddleserving directory contains the code for starting the pipeline service, the C++ Serving service, and sending prediction requests, including:
```shell
__init__.py
...
test_cpp_serving_client.py # Script for sending C++ serving prediction requests
```
<a name="4"></a>
## 4. Service Deployment for Image Recognition

In addition to the single-model deployment method introduced in [Chapter 3 Service Deployment for Image Classification](#3), this section introduces how to use detection + classification models to complete the multi-model **image recognition service deployment**.

When using PaddleServing for image recognition service deployment, **multiple saved inference models need to be converted to Serving models**. The following takes the ultra-lightweight image recognition model in PP-ShiTu as an example to introduce the deployment of the image recognition service.
<a name="4.1"></a>
### 4.1 Model Transformation

- Go to the working directory:

  ```shell
  cd deploy/
  ```
- Convert the general recognition inference model to the Serving format:

  ```shell
  python3.7 -m paddle_serving_client.convert --dirname ./general_PPLCNet_x2_5_lite_v1.0_infer/ \
                                             --model_filename inference.pdmodel \
                                             --params_filename inference.pdiparams \
                                             --serving_server ./general_PPLCNet_x2_5_lite_v1.0_serving/ \
                                             --serving_client ./general_PPLCNet_x2_5_lite_v1.0_client/
  ```
The meaning of the parameters of the above command is the same as in [3.1 Model Transformation](#3.1).

After the conversion of the general recognition inference model is completed, there will be additional `general_PPLCNet_x2_5_lite_v1.0_serving/` and `general_PPLCNet_x2_5_lite_v1.0_client/` folders in the current folder, with the following structure:

```shell
├── general_PPLCNet_x2_5_lite_v1.0_serving/
│   ├── inference.pdiparams
│   └── ...
```

- Convert the general detection inference model to the Serving format:

  ```shell
  python3.7 -m paddle_serving_client.convert --dirname ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer/ \
                                             --model_filename inference.pdmodel \
                                             --params_filename inference.pdiparams \
                                             --serving_server ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/ \
                                             --serving_client ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/
  ```
The meaning of the parameters of the above command is the same as in [3.1 Model Transformation](#3.1).

After the conversion of the general detection inference model is completed, there will be additional folders `picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/` and `picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/` in the current folder, with the following structure:

```shell
├── picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/
│   ├── inference.pdiparams
│   └── ...
```

- Unpack the drink dataset `drink_dataset_v1.0.tar` used by the recognition demo:

  ```shell
  tar -xf drink_dataset_v1.0.tar
  ```
<a name="4.2"></a>
### 4.2 Service Deployment and Request

**Note:** The recognition service involves multiple models, so the PipeLine deployment method is used for performance reasons. The Pipeline deployment method currently does not support the Windows platform.
- Go to the working directory:

  ```shell
  cd ./deploy/paddleserving/recognition
  ```
Simplified Chinese|[English](../../en/inference_deployment/paddle_hub_serving_deploy_en.md)
# Service deployment based on PaddleHub Serving

PaddleClas supports rapid service deployment through PaddleHub. Currently, the deployment of image classification is supported; the deployment of image recognition is coming soon.
The hubserving service package `clas` contains 3 required files (plus an optional configuration file); the directory layout is as follows:
```shell
deploy/hubserving/clas/
├── __init__.py # Empty file, required
├── config.json # Configuration file, optional; passed in as a parameter when starting the service with a configuration
├── module.py # Main module, required, contains the complete logic of the service
└── params.py # Parameter file, required, including model path, pre- and post-processing parameters, etc.
```
## 2. Prepare the environment

```shell
# Install paddlehub; version 2.1.0 is recommended
python3.7 -m pip install paddlehub==2.1.0 --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple
```
```python
"inference_model_dir": "../inference/"
```
* The model files (both `.pdmodel` and `.pdiparams`) must be named `inference`.
* We provide a large number of pre-trained models based on the ImageNet-1k dataset. For the model list and download addresses, see the [model library overview](../algorithm_introduction/ImageNet_models.md). You can also use your own trained and converted models.
<a name="4"></a>
## 4. Install the service module

* In the Linux environment, the installation example is as follows:

  ```shell
  cd PaddleClas/deploy
  # Install the service module:
  hub install hubserving/clas/
  ```

* In the Windows environment (the folder separator is `\`), the installation example is as follows:

  ```shell
  cd PaddleClas\deploy
  # Install the service module:
  hub install hubserving\clas\
  ```
<a name="5"></a>
## 5. Start service

<a name="5.1"></a>
### 5.1 Start with command line parameters

This method only supports prediction using the CPU. Start command:

```shell
hub serving start \
--modules clas_system \
--port 8866
```

This completes the deployment of a serviced API, using the default port number 8866.
**Parameter description**:

|parameters|usage|
|-|-|
|--modules/-m| [**required**] PaddleHub Serving pre-installed model, listed as multiple Module==Version key-value pairs<br>*`When Version is not specified, the latest version is selected by default`*|
|--port/-p| [**optional**] Service port, default is 8866|
|--use_multiprocess| [**optional**] Whether to enable concurrent mode; the default is single-process mode. Concurrent mode is recommended for multi-core CPU machines<br>*`The Windows operating system only supports single-process mode`*|
|--workers| [**optional**] The number of concurrent tasks in concurrent mode, default is `2*cpu_count-1`, where `cpu_count` is the number of CPU cores|

For more deployment details, see [PaddleHub Serving Model One-Click Service Deployment](https://paddlehub.readthedocs.io/zh_CN/release-v2.1/tutorial/serving.html).
<a name="5.2"></a>
### 5.2 Start with configuration file

This method supports prediction using either CPU or GPU. Start command:

```shell
hub serving start -c config.json
```

The format of `config.json` is as follows:
<a name="6"></a>
## 6. Send prediction requests

After the service is started, you can use the following command to send a prediction request and obtain the prediction result:
```shell
cd PaddleClas/deploy
python3.7 hubserving/test_hubserving.py \
--server_url http://127.0.0.1:8866/predict/clas_system \
--image_file ./hubserving/ILSVRC2012_val_00006666.JPEG \
--batch_size 8
```
**Predicted output**
```log
The result(s): class_ids: [57, 67, 68, 58, 65], label_names: ['garter snake, grass snake', 'diamondback, diamondback rattlesnake, Crotalus adamanteus', 'sidewinder, horned rattlesnake, Crotalus cerastes', 'water snake', 'sea snake'], scores: [0.21915, 0.15631, 0.14794, 0.13177, 0.12285]
The average time of prediction cost: 2.970 s/image
The average time cost: 3.014 s/image
The average top-1 score: 0.110
```
**Script parameter description**:

* **server_url**: Service address, in the format `http://[ip_address]:[port]/predict/[module_name]`.
* **image_file**: Test image path; can be a single image path or an image directory path.
* **batch_size**: [**optional**] Predict in batches of `batch_size`, default is `1`.
* **resize_short**: [**optional**] Resize by the short edge during preprocessing, default is `256`.
* **crop_size**: [**optional**] Size of the center crop during preprocessing, default is `224`.
* **normalize**: [**optional**] Whether to normalize during preprocessing, default is `True`.
* **to_chw**: [**optional**] Whether to transpose to `CHW` order during preprocessing, default is `True`.
**Note**: If you use `Transformer` series models, such as `DeiT_***_384`, `ViT_***_384`, etc., please pay attention to the input data size of the model; you need to specify `--resize_short=384 --crop_size=384`.
**Return result format description**:

The returned result is a list, including the top-k classification results, the corresponding scores, and the prediction time for the image, as follows:

```
list: the returned results
└── list: the result of the first image
    ├── list: the top-k classification results, sorted in descending order of score
    ├── list: the scores corresponding to the top-k classification results, sorted in descending order of score
    └── float: the time cost of classifying the image, in seconds
```
<a name="7"></a>
## 7. User defined service module modification

If you need to modify the service logic, the following steps are required:

1. Stop the service

   ```shell
   hub serving stop --port/-p XXXX
   ```

2. Modify the code in the corresponding `module.py`, `params.py`, and other files according to your actual needs. After modifying `module.py`, you need to reinstall it (`hub install hubserving/clas/`) and redeploy. Before deploying, you can use the `python3.7 hubserving/clas/module.py` command to quickly test the code to be deployed.
3. Uninstall the old service module

   ```shell
   hub uninstall clas_system
   ```

4. Install the modified service module

   ```shell
   hub install hubserving/clas/
   ```

5. Restart the service

   ```shell
   hub serving start -m clas_system
   ```
**Notice**:
Common parameters can be modified in `PaddleClas/deploy/hubserving/clas/params.py`:
* To change the mapping file between labels and class IDs used during post-processing:

  ```python
  'class_id_map_file':
  ```
In order to avoid unnecessary delay and to be able to predict with a batch size, data preprocessing (including `resize`, `crop`, and other operations) is completed on the client side, so the code related to data preprocessing needs to be modified in [PaddleClas/deploy/hubserving/test_hubserving.py#L35-L52](../../../deploy/hubserving/test_hubserving.py).
Simplified Chinese|[English](../../en/inference_deployment/paddle_serving_deploy_en.md)

# Model Service deployment

--------

## Catalogue
    - [3.2.1 Python Serving](#3.2.1)
    - [3.2.2 C++ Serving](#3.2.2)
- [4. Service Deployment for Image Recognition](#4)
  - [4.1 Model Transformation](#4.1)
  - [4.2 Service Deployment and Request](#4.2)
    - [4.2.1 Python Serving](#4.2.1)
    - [4.2.2 C++ Serving](#4.2.2)
- [5. FAQ](#5)
<a name="1"></a>

In addition to the single-model deployment method introduced in [Chapter 3 Service Deployment for Image Classification](#3), this section introduces how to use detection + classification models to complete the multi-model **image recognition service deployment**.

When using PaddleServing for image recognition service deployment, **multiple saved inference models need to be converted to Serving models**. The following takes the ultra-lightweight image recognition model in PP-ShiTu as an example to introduce the deployment of the image recognition service.
<a name="4.1"></a>
### 4.1 Model Transformation

- Go to the working directory:

  ```shell
  cd deploy/
  ```
- Unpack the drink dataset `drink_dataset_v1.0.tar` used by the recognition demo:

  ```shell
  tar -xf drink_dataset_v1.0.tar
  ```
<a name="4.2"></a>
### 4.2 Service Deployment and Request

**Note:** The recognition service involves multiple models, so the PipeLine deployment method is used for performance reasons. The Pipeline deployment method currently does not support the Windows platform.
- Go to the working directory:

  ```shell
  cd ./deploy/paddleserving/recognition
  ```