PaddleClas supports rapid service deployment through PaddleHub. Currently, only the deployment of image classification is supported; stay tuned for the deployment of image recognition.
---
## Catalogue
- [1. Introduction](#1)
- [2. Prepare the environment](#2)
- [3. Download inference model](#3)
...
- [6. Send prediction requests](#6)
- [7. User defined service module modification](#7)
<aname="1"></a>
<aname="1"></a>
## 1. Introduction
## 1 Introduction
The HubServing service deployment package `clas` contains 3 required files and 1 optional configuration file. The directory structure is as follows:
```shell
deploy/hubserving/clas/
├── __init__.py # Empty file, required
├── config.json # Configuration file, optional, passed in as a parameter when starting the service with configuration
├── module.py # The main module, required, contains the complete logic of the service
└── params.py # Parameter file, required, including model path, pre- and post-processing parameters and other parameters
```
* Classification inference model structure file: `PaddleClas/inference/inference.pdmodel`
* Classification inference model weight file: `PaddleClas/inference/inference.pdiparams`

**Notice**:
* The model file path can be viewed and modified in `PaddleClas/deploy/hubserving/clas/params.py`:

  ```python
  "inference_model_dir": "../inference/"
  ```
* Model files (both `.pdmodel` and `.pdiparams`) must be named with the prefix `inference`; see the illustrative directory layout after this list.
* We provide a large number of pre-trained models based on the ImageNet-1k dataset. For the model list and download addresses, see the [Model Library Overview](../algorithm_introduction/ImageNet_models_en.md), or use your own trained and converted models.
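With the default `inference_model_dir` shown above, the expected model directory layout is sketched below (only the two files named `inference` are strictly required):

```shell
PaddleClas/inference/
├── inference.pdmodel    # model structure file
└── inference.pdiparams  # model weights file
```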
<aname="4"></a>
<aname="4"></a>
## 4. Install Service Module
## 4. Install the service module
* In the Linux environment, the installation example is as follows:
```shell
cd PaddleClas/deploy
# Install the service module:
hub install hubserving/clas/
```
* In the Windows environment (the folder separator is `\`), the installation example is as follows:
```shell
cd PaddleClas\deploy
# Install the service module:
hub install hubserving\clas\
```
<a name="5"></a>
<a name="5"></a>
## 5. Start service
## 5. Start service
<a name="5.1"></a>
<a name="5.1"></a>
### 5.1 Start with command line parameters
### 5.1 Start with command line parameters
This method only supports prediction using the CPU. Start command:

```shell
hub serving start --modules Module1==Version1 \
--port XXXX \
--use_multiprocess \
--workers \
```

**Parameter Description**:

|parameters|usage|
|-|-|
|--modules/-m|[**required**] PaddleHub Serving pre-installed model, listed as multiple Module==Version key-value pairs<br>*`When Version is not specified, the latest version is selected by default`*|
|--port/-p|[**optional**] Service port, default is 8866|
|--use_multiprocess|[**optional**] Whether to enable concurrent mode; the default is single-process mode. Concurrent mode is recommended for multi-core CPU machines<br>*`The Windows operating system only supports single-process mode`*|
|--workers|[**optional**] The number of concurrent tasks specified in concurrent mode, the default is `2*cpu_count-1`, where `cpu_count` is the number of CPU cores|

For example, start the classification service:

```shell
hub serving start -m clas_system
```

This completes the deployment of a service API, using the default port number 8866.

For more deployment details, see [PaddleHub Serving Model One-Click Service Deployment](https://paddlehub.readthedocs.io/zh_CN/release-v2.1/tutorial/serving.html).
<a name="5.2"></a>
<a name="5.2"></a>
### 5.2 Start with configuration file
### 5.2 Start with configuration file
This method supports prediction using either the CPU or the GPU. Start command:

```shell
hub serving start -c config.json
```

The format of `config.json` is as follows:

```json
{
...
}
```
**Parameter Description**:

* The configurable parameters in `init_args` are consistent with the `_initialize` function interface in `module.py`. Among them:
  - When `use_gpu` is `true`, the GPU is used to start the service.
  - When `enable_mkldnn` is `true`, MKL-DNN acceleration is used.
* The configurable parameters in `predict_args` are consistent with the `predict` function interface in `module.py`.

**Notice**:
* When starting the service with a configuration file, the parameter settings in the configuration file are used and other command line parameters are ignored.
* If you use GPU prediction (that is, `use_gpu` is set to `true`), you need to set the `CUDA_VISIBLE_DEVICES` environment variable to specify the GPU card number before starting the service, for example: `export CUDA_VISIBLE_DEVICES=0`.
* **`use_gpu` and `use_multiprocess` cannot both be `true` at the same time.**
* **When both `use_gpu` and `enable_mkldnn` are `true`, `enable_mkldnn` is ignored and the GPU is used.**
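For orientation, a minimal `config.json` in the standard PaddleHub Serving format might look like the sketch below; the exact fields shipped with PaddleClas may differ, so treat the values as assumptions:

```json
{
    "modules_info": {
        "clas_system": {
            "init_args": {
                "version": "1.0.0",
                "use_gpu": true,
                "enable_mkldnn": false
            },
            "predict_args": {}
        }
    },
    "port": 8866,
    "use_multiprocess": false,
    "workers": 2
}
```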
For example, to start the service using GPU card No. 3:
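A sketch of the corresponding commands, assuming the service is started from the PaddleClas root directory and `use_gpu` is set to `true` in the configuration file:

```shell
# Make GPU card 3 visible to the service process, then start from the configuration file.
export CUDA_VISIBLE_DEVICES=3
hub serving start -c deploy/hubserving/clas/config.json
```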
When sending prediction requests with the test script `PaddleClas/deploy/hubserving/test_hubserving.py`, the following parameters can be configured:
* **image_path**: Test image path; it can be a single image path or an image directory path.
* **batch_size**: [**optional**] Make predictions in batches of `batch_size`, the default is `1`.
* **resize_short**: [**optional**] When preprocessing, resize by the short edge, the default is `256`.
* **crop_size**: [**optional**] The size of the center crop during preprocessing, the default is `224`.
* **normalize**: [**optional**] Whether to perform `normalize` during preprocessing, the default is `True`.
* **to_chw**: [**optional**] Whether to transpose to `CHW` order during preprocessing, the default is `True`.
**Notice**:
If you use `Transformer` series models, such as `DeiT_***_384`, `ViT_***_384`, etc., please pay attention to the input size of the model: you need to specify `--resize_short=384 --crop_size=384`.
**Eg.**
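The following is a sketch of a request, assuming the module was installed as `clas_system` and the service runs on the default port 8866; the image path is illustrative, and the flag names are assumed from the parameter list above, so check the script if they differ:

```shell
cd PaddleClas/deploy
# Send a prediction request; the server URL follows the pattern http://[ip]:[port]/predict/[module_name].
python3.7 hubserving/test_hubserving.py \
    --server_url http://127.0.0.1:8866/predict/clas_system \
    --image_path ./images/demo.jpg \
    --batch_size 8
```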
**Return result format description**:
The returned result is a list, including the top-k classification results, the corresponding scores, and the prediction time for each image, structured as follows:
```
list: the returned results
└─ list: the result of the first image
   ├─ list: the top-k classification results, sorted in descending order of score
   ├─ list: the scores corresponding to the top-k classification results, sorted in descending order of score
   └─ float: the prediction time for the image, in seconds
```
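For illustration, client code could unpack one entry of this structure as shown below; the values are made up, and whether class names or class IDs are returned depends on the `class_id_map_file` setting:

```python
# Illustrative walk over the nested result structure described above.
results = [
    [
        ["daisy", "rose", "tulip"],  # top-k classification results, descending by score
        [0.91, 0.05, 0.02],          # scores corresponding to the top-k results
        0.035,                       # prediction time for this image, in seconds
    ]
]

top_classes, top_scores, elapsed = results[0]
print(f"best: {top_classes[0]} ({top_scores[0]:.2f}), took {elapsed:.3f}s")
```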
**Note:** If you need to add, delete or modify the returned fields, you can modify the corresponding module. For details, refer to the next section on user-defined service module modification.
<a name="7"></a>
<a name="7"></a>
## 7. User defined service module modification
## 7. User defined service module modification
If you need to modify the service logic, the following steps are generally required:
If you need to modify the service logic, you need to do the following:
1. Stop the service:
```shell
hub serving stop --port/-p XXXX
```

2. Modify the code in the corresponding files, such as `module.py` and `params.py`, according to your actual needs. After modifying `module.py`, you need to reinstall the module (`hub install hubserving/clas/`) and redeploy it. Before redeploying, you can use the `python3.7 hubserving/clas/module.py` command to quickly test the code to be deployed.

3. Uninstall the old service module:
```shell
hub uninstall clas_system
```

4. Install the modified service module:
```shell
hub install hubserving/clas/
```

5. Restart the service:
```shell
hub serving start -m clas_system
```

**Notice**:
Common parameters can be modified in `PaddleClas/deploy/hubserving/clas/params.py`:
* To replace the model, modify the model file path parameter:
```python
"inference_model_dir":
```
* To change the number of `top-k` results returned during post-processing:
```python
'topk':
```
* To change the mapping file between labels and class IDs used during post-processing:
```python
'class_id_map_file':
```
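Putting these together, the relevant entries might look like the following sketch; the values and the exact structure of `params.py` are assumptions, so adapt them to the actual file:

```python
# Illustrative parameter values only; check deploy/hubserving/clas/params.py for the real structure.
config = {
    "inference_model_dir": "../inference/",  # directory containing inference.pdmodel / inference.pdiparams
    "topk": 5,                               # number of top-k results returned by post-processing
    "class_id_map_file": "../ppcls/utils/imagenet1k_label_list.txt",  # label <-> class id mapping file
}
```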
In order to avoid unnecessary delay and to support batch prediction, data preprocessing (including `resize`, `crop` and other operations) is completed on the client side, so modify the related code in [PaddleClas/deploy/hubserving/test_hubserving.py](../../../deploy/hubserving/test_hubserving.py#L35-L52) if necessary.
- [4. Service Deployment for Image Recognition](#4)
  - [4.1 Model Conversion](#4.1)
  - [4.2 Service Deployment and Request](#4.2)
    - [4.2.1 Python Serving](#4.2.1)
    - [4.2.2 C++ Serving](#4.2.2)
- [5. FAQ](#5)
<aname="1"></a>
<aname="1"></a>
...
@@ -22,7 +23,7 @@
...
@@ -22,7 +23,7 @@
This section takes the HTTP prediction service deployment as an example to introduce how to use PaddleServing to deploy the model service in PaddleClas. Currently, only deployment on the Linux platform is supported; the Windows platform is not supported yet.
<aname="2"></a>
<aname="2"></a>
## 2. Serving installation
## 2. Installation of Serving
The Serving official website recommends using docker to install and deploy the Serving environment. First, you need to pull the docker environment and create a Serving-based docker.
<a name="3"></a>
## 3. Service Deployment for Image Classification
The following takes the classic ResNet50_vd model as an example to introduce how to deploy the image classification service.
<aname="3.1"></a>
<aname="3.1"></a>
### 3.1 Model conversion
### 3.1 Model Transformation
When using PaddleServing for service deployment, you need to convert the saved inference model into a Serving model.
- Go to the working directory:
```shell
...
```
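The conversion step elided above is performed with the `paddle_serving_client.convert` tool; a sketch for the ResNet50_vd example is shown below, with the directory names as assumptions:

```shell
# Convert the inference model into Serving server/client models.
# Directory names are illustrative; point --dirname at your own inference model.
python3.7 -m paddle_serving_client.convert \
    --dirname ./ResNet50_vd_infer/ \
    --model_filename inference.pdmodel \
    --params_filename inference.pdiparams \
    --serving_server ./ResNet50_vd_serving/ \
    --serving_client ./ResNet50_vd_client/
```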
<aname="3.2"></a>
<aname="3.2"></a>
### 3.2 Service deployment and request
### 3.2 Service Deployment and Request
The paddleserving directory contains the code for starting the pipeline service, starting the C++ serving service, and sending prediction requests, including:
```shell
__init__.py
...
```
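As a usage sketch for the Python pipeline path, the commands might look like the following; the script names are assumptions about the partly elided listing above, so check the actual contents of `deploy/paddleserving/`:

```shell
# Start the pipeline web service in the background.
python3.7 classification_web_service.py &
# Send a prediction request to the pipeline service.
python3.7 pipeline_http_client.py
```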
<aname="4"></a>
<aname="4"></a>
## 4. Image recognition service deployment
## 4. Service Deployment for Image Recognition
In addition to the single-model deployment method introduced in [Chapter 3 Service Deployment for Image Classification](#3), we will now introduce how to use detection and classification models together to complete a multi-model **image recognition service deployment**.
When using PaddleServing for image recognition service deployment, **multiple saved inference models need to be converted into Serving models**. The following takes the ultra-lightweight image recognition model in PP-ShiTu as an example to introduce the deployment of the image recognition service.
<aname="4.1"></a>
<aname="4.1"></a>
## 4.1 Model conversion
### 4.1 Model Transformation
- Go to the working directory:
```shell
cd deploy/
```
...
The meaning of the parameters of the above command is the same as in [3.1 Model Conversion](#3.1).
After the conversion of the general recognition inference model is completed, there will be additional `general_PPLCNet_x2_5_lite_v1.0_serving/` and `general_PPLCNet_x2_5_lite_v1.0_client/` folders in the current directory, with the following structure:
```shell
├── general_PPLCNet_x2_5_lite_v1.0_serving/
│   ├── inference.pdiparams
...
```
The meaning of the parameters of the above command is the same as in [3.1 Model Conversion](#3.1).
After the conversion of the general detection inference model is completed, there will be additional `picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/` and `picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/` folders in the current directory, with the following structure:
...
```shell
tar -xf drink_dataset_v1.0.tar
```
<a name="4.2"></a>
<a name="4.2"></a>
## 4.2 Service deployment and request
### 4.2 Service Deployment and Request
**Note:** The recognition service involves multiple models, so the Pipeline deployment method is used for performance reasons. The Pipeline deployment method currently does not support the Windows platform.