PaddleClas provides two service deployment methods:
- Based on **PaddleHub Serving**: Code path is "`./deploy/hubserving`". Please refer to the [tutorial](../../deploy/hubserving/readme_en.md)
- Based on **PaddleServing**: Code path is "`./deploy/paddleserving`". If you prefer a retrieval-based image recognition service, please refer to the [tutorial](./recognition/README.md); if you'd like an image classification service, please follow this tutorial.
# Image Classification Service deployment based on PaddleServing
This document will introduce how to use [PaddleServing](https://github.com/PaddlePaddle/Serving/blob/develop/README.md) to deploy the ResNet50_vd model as a pipeline online service.
...
...
@@ -131,7 +131,7 @@ fetch_var {
config.yml # configuration file of starting the service
pipeline_http_client.py # script to send pipeline prediction request by http
pipeline_rpc_client.py # script to send pipeline prediction request by rpc
classification_web_service.py # start the script of the pipeline server
```
2. Run the following command to start the service.
...
...
@@ -147,7 +147,7 @@ fetch_var {
python3 pipeline_http_client.py
```
After successfully running, the predicted result of the model will be printed in the cmd window. An example of the result is:
![](./imgs/results.png)
Adjust the concurrency in `config.yml` to obtain the highest QPS, as sketched below.
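The concurrency is configured under the `op` field of `config.yml`; a minimal sketch (the same pattern is shown in the recognition section later in this document):
```
op:
  concurrency: 8
  ...
```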
# Product Recognition Service deployment based on PaddleServing
(English|[简体中文](./README_CN.md))
This document will introduce how to use [PaddleServing](https://github.com/PaddlePaddle/Serving/blob/develop/README.md) to deploy the retrieval-based product recognition model as a pipeline online service.
Some Key Features of Paddle Serving:
- Integrate with Paddle training pipeline seamlessly, most paddle models can be deployed with one line command.
- Industrial serving features supported, such as models management, online loading, online A/B testing etc.
- Highly concurrent and efficient communication between clients and servers supported.
For an introduction to the Paddle Serving deployment framework and a tutorial, please refer to the [document](https://github.com/PaddlePaddle/Serving/blob/develop/README.md).
Download the corresponding paddle whl package according to the environment; it is recommended to install version 2.1.0.
2. The steps to prepare the PaddleServing operating environment are as follows:
Install the serving server, which is used to start the service:
```
pip3 install paddle-serving-server==0.6.1 # for CPU
pip3 install paddle-serving-server-gpu==0.6.1 # for GPU
# For other GPU environments, confirm the CUDA and TensorRT versions of your environment, then execute the matching command below
pip3 install paddle-serving-server-gpu==0.6.1.post101 # GPU with CUDA10.1 + TensorRT6
pip3 install paddle-serving-server-gpu==0.6.1.post11 # GPU with CUDA11 + TensorRT7
```
3. Install the client to send requests to the service.
In the [download link](https://github.com/PaddlePaddle/Serving/blob/develop/doc/LATEST_PACKAGES.md), find the client installation package corresponding to your Python version.
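For example, a minimal sketch pinning the client to the same 0.6.1 version as the server above (verify the exact wheel for your Python version on the download page):
```
pip3 install paddle-serving-client==0.6.1
```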
**Note:** If you want to install the latest version of PaddleServing, refer to [link](https://github.com/PaddlePaddle/Serving/blob/develop/doc/LATEST_PACKAGES.md).
<a name="model-conversion"></a>
## Model conversion
When using PaddleServing for service deployment, you need to convert the saved inference model into a serving model that is easy to deploy.
The following assumes that the current working directory is the PaddleClas root directory.
Firstly, download the inference model of ResNet50_vd:
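A minimal sketch of the download and conversion steps; the download URL and folder names follow the PaddleClas model zoo naming pattern and should be verified against the current documentation:
```
# Download and unpack the product recognition inference model
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/product_ResNet50_vd_aliproduct_v1.0_infer.tar
tar -xf product_ResNet50_vd_aliproduct_v1.0_infer.tar

# Convert the inference model into a serving model
python3 -m paddle_serving_client.convert \
    --dirname ./product_ResNet50_vd_aliproduct_v1.0_infer/ \
    --model_filename inference.pdmodel \
    --params_filename inference.pdiparams \
    --serving_server ./product_ResNet50_vd_aliproduct_v1.0_serving/ \
    --serving_client ./product_ResNet50_vd_aliproduct_v1.0_client/
```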
After the ResNet50_vd inference model is converted, there will be additional folders of `product_ResNet50_vd_aliproduct_v1.0_serving` and `product_ResNet50_vd_aliproduct_v1.0_client` in the current folder, with the following format:
```
|- product_ResNet50_vd_aliproduct_v1.0_serving/
  |- __model__
  |- __params__
  |- serving_server_conf.prototxt
  |- serving_server_conf.stream.prototxt
|- product_ResNet50_vd_aliproduct_v1.0_client
  |- serving_client_conf.prototxt
  |- serving_client_conf.stream.prototxt
```
Once you have the model files for deployment, you need to change the alias name in `serving_server_conf.prototxt`: change `alias_name` in `fetch_var` to `features`.
The modified `serving_server_conf.prototxt` file is as follows:
```
feed_var {
name: "x"
alias_name: "x"
is_lod_tensor: false
feed_type: 1
shape: 3
shape: 224
shape: 224
}
fetch_var {
name: "save_infer_model/scale_0.tmp_1"
alias_name: "features"
is_lod_tensor: true
fetch_type: 1
shape: -1
}
```
Next, download and unpack the prebuilt index of the product gallery:
```
cd ../
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/data/recognition_demo_data_v1.1.tar && tar -xf recognition_demo_data_v1.1.tar
```
<a name="paddle-serving-pipeline-deployment"></a>
## Paddle Serving pipeline deployment
1. Download the PaddleClas code; if you have already downloaded it, you can skip this step.
The paddleserving directory contains the code to start the pipeline service and send prediction requests, including:
```
__init__.py
config.yml # configuration file of starting the service
pipeline_http_client.py # script to send pipeline prediction request by http
pipeline_rpc_client.py # script to send pipeline prediction request by rpc
recognition_web_service.py # start the script of the pipeline server
```
2. Run the following command to start the service.
```
# Start the service and save the running log in log.txt
python3 recognition_web_service.py &>log.txt &
```
After the service is successfully started, a log similar to the following will be printed in log.txt
![](../imgs/start_server_recog.png)
3. Send a service request
```
python3 pipeline_http_client.py
```
After successfully running, the predicted result of the model will be printed in the cmd window. An example of the result is:
![](../imgs/results_recog.png)
Adjust the concurrency in `config.yml` to obtain the highest QPS:
```
op:
  concurrency: 8
  ...
```
Multiple service requests can be sent at the same time if necessary.
The predicted performance data will be automatically written into the `PipelineServingLogs/pipeline.tracer` file.
<a name="faq"></a>
## FAQ
**Q1**: No result is returned after sending the request.
**A1**: Do not set a proxy when starting the service or sending requests. You can close the proxy before starting the service and before sending requests. The command to close the proxy is:
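```
unset https_proxy
unset http_proxy
```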
Among them, `-c` is used to specify the path of the configuration file, and `-o` is used to specify the parameters to be modified or added; `-o Arch.pretrained=False` means not to use the pre-trained model.
`-o Global.device=gpu` means to use the GPU for training. If you want to use the CPU for training, set `Global.device` to `cpu`.
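For illustration, a minimal sketch of such a training command, assuming the quick-start config path used elsewhere in this document:
```
python3 tools/train.py \
    -c ./configs/quick_start/MobileNetV3_large_x1_0.yaml \
    -o Arch.pretrained=False \
    -o Global.device=gpu
```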
Of course, you can also directly modify the configuration file to update the configuration. For specific configuration parameters, please refer to [Configuration Document](config_description_en.md).
...
...
@@ -54,12 +54,12 @@ After configuring the configuration file, you can finetune it by loading the pre
Among them, `-o Arch.pretrained` is used to set the path of the pretrained weights to load. When using it, you need to replace it with your own pretrained weights' path, or you can modify the path directly in the configuration file. You can also set it to `True` to use pretrained weights trained on ImageNet1k.
We also provide a lot of pre-trained models trained on the ImageNet-1k dataset. For the model list and download address, please refer to the [model library overview](../models/models_intro_en.md).
...
...
@@ -69,28 +69,26 @@ If the training process is terminated for some reasons, you can also load the ch
The configuration file does not need to be modified. You only need to add the `Global.checkpoints` parameter during training, which represents the path of the checkpoints. The parameter weights, learning rate, optimizer and other information will be loaded using this parameter.
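A minimal sketch of a resume-training command, assuming the checkpoint path shown in the notes below:
```
python3 tools/train.py \
    -c ./configs/quick_start/MobileNetV3_large_x1_0.yaml \
    -o Global.checkpoints="../output/MobileNetV3_large_x1_0/epoch_5" \
    -o Global.device=gpu
```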
**Note**:
* The parameter `-o last_epoch=5` means to record the number of the last training epoch as `5`, that is, the current training epoch count starts from `6`. The parameter defaults to `-1`, which means the epoch count starts from `0`.
* The `-o Global.checkpoints` parameter does not need to include the suffix of the checkpoints. The above training command will generate checkpoints as shown below during the training process. If you want to continue training from epoch `5`, just set `Global.checkpoints` to `../output/MobileNetV3_large_x1_0/epoch_5`; PaddleClas will automatically fill in the `pdopt` and `pdparams` suffixes.
```shell
output
├── MobileNetV3_large_x1_0
│   ├── best_model.pdopt
│   ├── best_model.pdparams
│   ├── best_model.pdstates
│   ├── epoch_1.pdopt
│   ├── epoch_1.pdparams
│   ├── epoch_1.pdstates
│   .
│   .
│   .
...
...
@@ -103,18 +101,15 @@ The model evaluation process can be started as follows.
The above command will use `./configs/quick_start/MobileNetV3_large_x1_0.yaml` as the configuration file to evaluate the model `./output/MobileNetV3_large_x1_0/best_model`. You can also set the evaluation by changing the parameters in the configuration file, or update the configuration with the `-o` parameter, as shown above.
Some of the configurable evaluation parameters are described as follows:
* `Arch.name`: Model name
* `Global.pretrained_model`: The path of the model file to be evaluated
**Note:** If the model is a dygraph type, you only need to specify the prefix of the model file when loading the model, instead of specifying the suffix, as in [1.3 Resume Training](#13-resume-training).
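For reference, a minimal sketch of the evaluation command described above (paths are illustrative):
```
python3 tools/eval.py \
    -c ./configs/quick_start/MobileNetV3_large_x1_0.yaml \
    -o Global.pretrained_model=./output/MobileNetV3_large_x1_0/best_model
```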
...
...
@@ -125,26 +120,15 @@ If you want to run PaddleClas on Linux with GPU, it is highly recommended to use
### 2.1 Model training
After preparing the configuration file, the training process can be started in the following way. `paddle.distributed.launch` specifies the GPU card numbers by setting `gpus`:
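A minimal sketch of a multi-GPU launch (the card list and config path are illustrative):
```
export CUDA_VISIBLE_DEVICES=0,1,2,3
python3 -m paddle.distributed.launch \
    --gpus="0,1,2,3" \
    tools/train.py \
    -c ./configs/quick_start/MobileNetV3_large_x1_0.yaml
```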
Among them, `Arch.pretrained` can be set to `True` or `False`; it can also be used to set the path of the pretrained weights to load. When using it, you need to replace it with your own pretrained weights' path, or you can modify the path directly in the configuration file.
There contains a lot of examples of model finetuning in [Quick Start](./quick_start_en.md). You can refer to this tutorial to finetune the model on a specific dataset.
...
...
@@ -175,26 +159,26 @@ If the training process is terminated for some reasons, you can also load the ch
The configuration file does not need to be modified. You only need to add the `Global.checkpoints` parameter during training, which represents the path of the checkpoints. The parameter weights, learning rate, optimizer and other information will be loaded using this parameter, as described in [1.3 Resume training](#13-resume-training).
### 2.4 Model evaluation
The model evaluation process can be started as follows.
+ `Infer.infer_imgs`: The path of the image file or folder to be predicted;
+ `Global.pretrained_model`: Weight file path, such as `./output/MobileNetV3_large_x1_0/best_model`;
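A minimal sketch of such a prediction command (the image path is illustrative):
```
python3 tools/infer.py \
    -c ./configs/quick_start/MobileNetV3_large_x1_0.yaml \
    -o Infer.infer_imgs=./test.jpeg \
    -o Global.pretrained_model=./output/MobileNetV3_large_x1_0/best_model
```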
<a name="model_inference"></a>
## 4. Use the inference model to predict
PaddlePaddle supports inference using prediction engines, which will be introduced next.
...
...
@@ -235,41 +205,38 @@ PaddlePaddle supports inference using prediction engines, which will be introduc
Firstly, you should export the inference model using `tools/export_model.py`.
Among them, the `Global.pretrained_model` parameter is used to specify the model file path; the path does not need to include the file suffix.
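A minimal sketch of the export command, assuming the standard `Global.save_inference_dir` parameter for the output directory (paths are illustrative):
```
python3 tools/export_model.py \
    -c ./configs/quick_start/MobileNetV3_large_x1_0.yaml \
    -o Global.pretrained_model=./output/MobileNetV3_large_x1_0/best_model \
    -o Global.save_inference_dir=./inference
```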
The above command will generate the model structure file (`inference.pdmodel`) and the model weight file (`inference.pdiparams`), and then the inference engine can be used for inference:
Go to the deploy directory:
```
cd deploy
```
Use the inference engine to predict. Because the label mapping file of the ImageNet1k dataset is used by default, you should set `PostProcess.Topk.class_id_map_file` to `None`.
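A minimal sketch of the prediction command; the script and config names follow the PaddleClas `deploy` directory layout, and the image path is illustrative:
```
python3 python/predict_cls.py \
    -c configs/inference_cls.yaml \
    -o Global.infer_imgs=./images/demo.jpg \
    -o Global.inference_model_dir=../inference/ \
    -o PostProcess.Topk.class_id_map_file=None
```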
+ `Global.infer_imgs`: The path of the image file to be predicted;
+ `Global.inference_model_dir`: The directory of the inference model files, such as `../inference/`;
+ `Global.use_tensorrt`: Whether to use TensorRT, default by `False`;
+ `Global.use_gpu`: Whether to use the GPU, default by `True`;
+ `Global.enable_mkldnn`: Whether to use `MKL-DNN`, default by `False`. It is valid only when `Global.use_gpu` is `False`;
+ `Global.use_fp16`: Whether to enable FP16, default by `False`;
**Note**: If you want to use `Transformer series models`, such as `DeiT_***_384`, `ViT_***_384`, etc., please pay attention to the input size of model, and need to set `resize_short=384`, `resize=384`.
If you want to evaluate the speed of the model, it is recommended to enable TensorRT to accelerate on GPU, and MKL-DNN on CPU.
`-c` is used to specify the path to the configuration file, and `-o` is used to specify the parameters that need to be modified or added. `-o Arch.Backbone.pretrained=True` indicates that the Backbone part uses the pre-trained model; in addition, `Arch.Backbone.pretrained` can also specify the path of a specific model weight file, which needs to be replaced with the path to your own pre-trained model weight file when using it. `-o Global.device=gpu` indicates that the GPU is used for training. If you want to use the CPU for training, you need to set `Global.device` to `cpu`.
For more detailed training configuration, you can also modify the corresponding configuration file of the model directly. Refer to the [configuration document](config_description_en.md) for specific configuration parameters.
Run the above commands and check the output log; an example is as follows:
msg=f"When using warm up, the value of \"Global.epochs\" must be greater than value of \"Optimizer.lr.warmup_epoch\". The value of \"Optimizer.lr.warmup_epoch\" has been set to {epochs}."
logger.warning(msg)
warmup_epoch=epochs
self.learning_rate=learning_rate
self.steps=(epochs-warmup_epoch)*step_each_epoch
self.end_lr=end_lr
...
...
@@ -56,7 +63,8 @@ class Linear(object):
    decay_steps=self.steps,
    end_lr=self.end_lr,
    power=self.power,
    last_epoch=self.last_epoch) if self.steps > 0 else self.learning_rate
if self.warmup_steps > 0:
    learning_rate = lr.LinearWarmup(
        learning_rate=learning_rate,
...
...
@@ -90,7 +98,11 @@ class Cosine(object):
warmup_start_lr=0.0,
last_epoch=-1,
**kwargs):
super().__init__()
if warmup_epoch >= epochs:
    msg = f"When using warm up, the value of \"Global.epochs\" must be greater than value of \"Optimizer.lr.warmup_epoch\". The value of \"Optimizer.lr.warmup_epoch\" has been set to {epochs}."
    logger.warning(msg)
    warmup_epoch = epochs
self.learning_rate = learning_rate
self.T_max = (epochs - warmup_epoch) * step_each_epoch
self.eta_min = eta_min
...
...
@@ -103,7 +115,8 @@ class Cosine(object):
    learning_rate=self.learning_rate,
    T_max=self.T_max,
    eta_min=self.eta_min,
    last_epoch=self.last_epoch) if self.T_max > 0 else self.learning_rate
if self.warmup_steps > 0:
    learning_rate = lr.LinearWarmup(
        learning_rate=learning_rate,
...
...
@@ -132,12 +145,17 @@ class Step(object):
learning_rate,
step_size,
step_each_epoch,
epochs,
gamma,
warmup_epoch=0,
warmup_start_lr=0.0,
last_epoch=-1,
**kwargs):
super().__init__()
if warmup_epoch >= epochs:
    msg = f"When using warm up, the value of \"Global.epochs\" must be greater than value of \"Optimizer.lr.warmup_epoch\". The value of \"Optimizer.lr.warmup_epoch\" has been set to {epochs}."
    logger.warning(msg)
    warmup_epoch = epochs
self.step_size = step_each_epoch * step_size
self.learning_rate = learning_rate
self.gamma = gamma
...
...
@@ -177,11 +195,16 @@ class Piecewise(object):
step_each_epoch,
decay_epochs,
values,
epochs,
warmup_epoch=0,
warmup_start_lr=0.0,
last_epoch=-1,
**kwargs):
super().__init__()
if warmup_epoch >= epochs:
    msg = f"When using warm up, the value of \"Global.epochs\" must be greater than value of \"Optimizer.lr.warmup_epoch\". The value of \"Optimizer.lr.warmup_epoch\" has been set to {epochs}."