Unverified commit 1fccb35b, authored by cuicheng01 and committed by GitHub

Merge branch 'develop' into PeleeNet_PR

Too many changes to display: to preserve performance, only 1,000 of 1,000+ files are shown.
@@ -12,3 +12,4 @@ build/
 log/
 nohup.out
 .DS_Store
+.idea
 include LICENSE.txt
 include README.md
 include docs/en/whl_en.md
-recursive-include deploy/python predict_cls.py preprocess.py postprocess.py det_preprocess.py
+recursive-include deploy/python *.py
+recursive-include deploy/configs *.yaml
 recursive-include deploy/utils get_image_list.py config.py logger.py predictor.py
 recursive-include ppcls/ *.py *.txt
\ No newline at end of file
-README_ch.md
+README_en.md
\ No newline at end of file
@@ -4,100 +4,130 @@

## Introduction

PaddleClas is an image recognition and image classification toolset prepared by PaddlePaddle for industry and academia, helping users train better vision models and put them into real applications.

<div align="center">
<img src="./docs/images/class_simple.gif" width = "600" />
<p>Results of the PULC practical image classification models</p>
</div>
&nbsp;
<div align="center">
<img src="./docs/images/recognition.gif" width = "400" />
<p>Results of the PP-ShiTu image recognition system</p>
</div>

## Recent updates

- 📢 A three-day live course runs **June 15 to June 17 at 20:30**, introducing the ultra-lightweight image classification solutions in detail: how the models are optimized for each scenario and how to use them, followed by end-to-end industrial case walkthroughs, hands-on guidance for common difficulties, and live Q&A. Scan the QR code to sign up!
<div align="center">
<img src="https://user-images.githubusercontent.com/45199522/173483779-2332f990-4941-4f8d-baee-69b62035fc31.png" width = "200" height = "200"/>
</div>
- 🔥️ 2022.6.15 Released the [PULC practical ultra-lightweight image classification solutions](docs/zh_CN/PULC/PULC_train.md): 3 ms CPU inference, accuracy on par with SwinTransformer, covering nine common tasks in person, vehicle and OCR scenarios.
- 2022.5.26 [PaddlePaddle industrial practice live course](http://aglc.cn/v-c4FAR), explaining the **ultra-lightweight key-area personnel access management solution**.
- 2022.5.23 Added the [personnel access management sample library](https://aistudio.baidu.com/aistudio/projectdetail/4094475); it can be tried out on AI Studio.
- 2022.5.20 Released [PP-HGNet](./docs/zh_CN/models/PP-HGNet.md) and [PP-LCNetv2](./docs/zh_CN/models/PP-LCNetV2.md).
- 2022.4.21 Added the [code](https://github.com/PaddlePaddle/PaddleClas/pull/1820/files) for the CVPR 2022 oral paper [MixFormer](https://arxiv.org/pdf/2204.02557.pdf).
- [more](./docs/zh_CN/others/update_history.md)

## Features

PaddleClas releases algorithms such as [PP-HGNet](docs/zh_CN/models/PP-HGNet.md), [PP-LCNetv2](docs/zh_CN/models/PP-LCNetV2.md), [PP-LCNet](docs/zh_CN/models/PP-LCNet.md) and the [SSLD semi-supervised knowledge distillation scheme](docs/zh_CN/advanced_tutorials/ssld.md), supports many image classification and recognition algorithms, and on this basis builds the [PULC ultra-lightweight image classification solutions](docs/zh_CN/PULC/PULC_quickstart.md) and the [PP-ShiTu image recognition system](./docs/zh_CN/quick_start/quick_start_recognition.md).

![](https://user-images.githubusercontent.com/19523330/173273046-239a42da-c88d-4c2c-94b1-2134557afa49.png)

## Join the technical exchange group

* Scan the WeChat/QQ QR codes below (add the assistant on WeChat and reply "C") to join the PaddleClas WeChat group for faster Q&A and exchanges with developers from all industries. We look forward to your joining.

<div align="center">
<img src="https://user-images.githubusercontent.com/80816848/164383225-e375eb86-716e-41b4-a9e0-4b8a3976c1aa.jpg" width="200"/>
<img src="https://user-images.githubusercontent.com/48054808/160531099-9811bbe6-cfbb-47d5-8bdb-c2b40684d7dd.png" width="200"/>
</div>

## Quick Start

Quick experience of the PULC ultra-lightweight image classification solutions: [click here](docs/zh_CN/PULC/PULC_quickstart.md)

Quick experience of PP-ShiTu image recognition: [click here](./docs/zh_CN/quick_start/quick_start_recognition.md)
## Tutorials

- [Environment setup](docs/zh_CN/installation/install_paddleclas.md)
- [PULC practical ultra-lightweight image classification solutions](docs/zh_CN/PULC/PULC_train.md)
  - [PULC quick start](docs/zh_CN/PULC/PULC_quickstart.md)
  - [PULC model zoo](docs/zh_CN/PULC/PULC_model_list.md)
  - [PULC person-exists classification model](docs/zh_CN/PULC/PULC_person_exists.md)
  - [PULC person attribute recognition model](docs/zh_CN/PULC/PULC_person_attribute.md)
  - [PULC safety-helmet classification model](docs/zh_CN/PULC/PULC_safety_helmet.md)
  - [PULC traffic-sign classification model](docs/zh_CN/PULC/PULC_traffic_sign.md)
  - [PULC vehicle attribute recognition model](docs/zh_CN/PULC/PULC_vehicle_attribute.md)
  - [PULC car-exists classification model](docs/zh_CN/PULC/PULC_car_exists.md)
  - [PULC text image orientation classification model](docs/zh_CN/PULC/PULC_text_image_orientation.md)
  - [PULC text-line orientation classification model](docs/zh_CN/PULC/PULC_textline_orientation.md)
  - [PULC language classification model](docs/zh_CN/PULC/PULC_language_classification.md)
  - [Model training](docs/zh_CN/PULC/PULC_train.md)
  - Inference and deployment
    - [Inference with the Python prediction engine](docs/zh_CN/inference_deployment/python_deploy.md#1)
    - [Inference with the C++ prediction engine](docs/zh_CN/inference_deployment/cpp_deploy.md)
    - [Serving deployment](docs/zh_CN/inference_deployment/classification_serving_deploy.md)
    - [On-device deployment](docs/zh_CN/inference_deployment/paddle_lite_deploy.md)
    - [Paddle2ONNX model conversion and prediction](deploy/paddle2onnx/readme.md)
  - [Model compression](deploy/slim/README.md)
- [Introduction to the PP-ShiTu image recognition system](#图像识别系统介绍)
  - [Image recognition quick start](docs/zh_CN/quick_start/quick_start_recognition.md)
  - Modules
    - [Mainbody detection](./docs/zh_CN/image_recognition_pipeline/mainbody_detection.md)
    - [Feature extraction model](./docs/zh_CN/image_recognition_pipeline/feature_extraction.md)
    - [Vector search](./docs/zh_CN/image_recognition_pipeline/vector_search.md)
    - [Hash encoding](docs/zh_CN/image_recognition_pipeline/)
  - [Model training](docs/zh_CN/models_training/recognition.md)
  - Inference and deployment
    - [Inference with the Python prediction engine](docs/zh_CN/inference_deployment/python_deploy.md#2)
    - [Inference with the C++ prediction engine](deploy/cpp_shitu/readme.md)
    - [Serving deployment](docs/zh_CN/inference_deployment/recognition_serving_deploy.md)
    - [On-device deployment](deploy/lite_shitu/README.md)
- PP-series backbone models
  - [PP-HGNet](docs/zh_CN/models/PP-HGNet.md)
  - [PP-LCNetv2](docs/zh_CN/models/PP-LCNetV2.md)
  - [PP-LCNet](docs/zh_CN/models/PP-LCNet.md)
- [SSLD semi-supervised knowledge distillation scheme](docs/zh_CN/advanced_tutorials/ssld.md)
- Cutting-edge algorithms
  - [Backbone networks and pretrained model zoo](docs/zh_CN/algorithm_introduction/ImageNet_models.md)
  - [Metric learning](docs/zh_CN/algorithm_introduction/metric_learning.md)
  - [Model compression](docs/zh_CN/algorithm_introduction/model_prune_quantization.md)
  - [Model distillation](docs/zh_CN/algorithm_introduction/knowledge_distillation.md)
  - [Data augmentation](docs/zh_CN/advanced_tutorials/DataAugmentation.md)
- [Industrial sample library](docs/zh_CN/samples)
- [A 30-minute quick start to image classification](docs/zh_CN/quick_start/quick_start_classification_new_user.md)
- FAQ
  - [Selected image recognition questions](docs/zh_CN/faq_series/faq_2021_s2.md)
  - [Selected image classification questions](docs/zh_CN/faq_series/faq_selected_30.md)
  - [Image classification FAQ, season 1](docs/zh_CN/faq_series/faq_2020_s1.md)
  - [Image classification FAQ, season 2](docs/zh_CN/faq_series/faq_2021_s1.md)
- [Community contribution guide](./docs/zh_CN/advanced_tutorials/how_to_contribute.md)
- [License](#许可证书)
- [Contribution](#贡献代码)
<a name="PULC超轻量图像分类方案"></a>
## PULC超轻量图像分类方案
<div align="center">
<img src="https://user-images.githubusercontent.com/19523330/173011854-b10fcd7a-b799-4dfd-a1cf-9504952a3c44.png" width = "800" />
</div>
PULC融合了骨干网络、数据增广、蒸馏等多种前沿算法,可以自动训练得到轻量且高精度的图像分类模型。
PaddleClas提供了覆盖人、车、OCR场景九大常见任务的分类模型,CPU推理3ms,精度比肩SwinTransformer。
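For a quick feel of the solution, the PULC models can be called through the PaddleClas Python wheel. The sketch below is a hedged example: it assumes a PaddleClas 2.4+ `paddleclas` wheel is installed and uses the `person_exists` task name together with a demo image path taken from the configs in this PR; see the PULC quick-start document for the authoritative usage and the full list of task names.

```python
# Minimal sketch: calling a PULC model through the paddleclas wheel.
# Assumes `pip install paddleclas` (2.4+); task and image names follow the PULC docs.
import paddleclas

# Downloads the pretrained ultra-lightweight model on first use.
model = paddleclas.PaddleClas(model_name="person_exists")

# predict() yields per-batch results (class ids, scores, label names).
result = model.predict(input_data="./images/PULC/person_exists/objects365_02035329.jpg")
print(next(result))
```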
<a name="图像识别系统介绍"></a> <a name="图像识别系统介绍"></a>
## PP-ShiTu图像识别系统介绍 ## PP-ShiTu图像识别系统
<div align="center"> <div align="center">
<img src="./docs/images/structure.jpg" width = "800" /> <img src="./docs/images/structure.jpg" width = "800" />
...@@ -105,6 +135,11 @@ PP-ShiTu图像识别快速体验:[点击这里](./docs/zh_CN/quick_start/quick ...@@ -105,6 +135,11 @@ PP-ShiTu图像识别快速体验:[点击这里](./docs/zh_CN/quick_start/quick
PP-ShiTu是一个实用的轻量级通用图像识别系统,主要由主体检测、特征学习和向量检索三个模块组成。该系统从骨干网络选择和调整、损失函数的选择、数据增强、学习率变换策略、正则化参数选择、预训练模型使用以及模型裁剪量化8个方面,采用多种策略,对各个模块的模型进行优化,最终得到在CPU上仅0.2s即可完成10w+库的图像识别的系统。更多细节请参考[PP-ShiTu技术方案](https://arxiv.org/pdf/2111.00775.pdf) PP-ShiTu是一个实用的轻量级通用图像识别系统,主要由主体检测、特征学习和向量检索三个模块组成。该系统从骨干网络选择和调整、损失函数的选择、数据增强、学习率变换策略、正则化参数选择、预训练模型使用以及模型裁剪量化8个方面,采用多种策略,对各个模块的模型进行优化,最终得到在CPU上仅0.2s即可完成10w+库的图像识别的系统。更多细节请参考[PP-ShiTu技术方案](https://arxiv.org/pdf/2111.00775.pdf)
<a name="分类效果展示"></a>
## PULC实用图像分类模型效果展示
<div align="center">
<img src="docs/images/classification.gif">
</div>
<a name="识别效果展示"></a> <a name="识别效果展示"></a>
## PP-ShiTu图像识别系统效果展示 ## PP-ShiTu图像识别系统效果展示
......
@@ -4,39 +4,41 @@

## Introduction

PaddleClas is an image classification and image recognition toolset for industry and academia, helping users train better computer vision models and apply them in real scenarios.

<div align="center">
<img src="./docs/images/class_simple_en.gif" width = "600" />

PULC demo images
</div>
&nbsp;

<div align="center">
<img src="./docs/images/recognition.gif" width = "400" />

PP-ShiTu demo images
</div>

**Recent updates**

- 2022.6.15 Released the [**P**ractical **U**ltra **L**ight-weight image **C**lassification solutions](./docs/en/PULC/PULC_quickstart_en.md). PULC models run inference within 3 ms on CPU devices, with accuracy on par with SwinTransformer. Nine practical classification models covering pedestrian, vehicle and OCR scenarios are also released.
- 2022.4.21 Added the related [code](https://github.com/PaddlePaddle/PaddleClas/pull/1820/files) of the CVPR 2022 oral paper [MixFormer](https://arxiv.org/pdf/2204.02557.pdf).
- 2021.09.17 Added the PP-LCNet series models developed by PaddleClas; these models show strong competitiveness on Intel CPUs. For an introduction to PP-LCNet, please refer to the [paper](https://arxiv.org/pdf/2109.15099.pdf) or the [PP-LCNet model introduction](docs/en/models/PP-LCNet_en.md). The metrics and pretrained models are available [here](docs/en/algorithm_introduction/ImageNet_models_en.md).
- 2021.06.29 Added the [Swin-transformer](docs/en/models/SwinTransformer_en.md) series models; the highest top-1 accuracy on the ImageNet-1k dataset reaches 87.2%. Training, evaluation and inference are all supported. Pretrained models can be downloaded [here](docs/en/algorithm_introduction/ImageNet_models_en.md#16).
- 2021.06.16 PaddleClas release/2.2. Added metric learning and vector search modules. Added product recognition, animation character recognition, vehicle recognition and logo recognition. Added 30 pretrained models of LeViT, Twins, TNT, DLA, HarDNet and RedNet, with accuracy roughly matching the papers.
- [more](./docs/en/others/update_history_en.md)

## Features

PaddleClas releases PP-HGNet, PP-LCNetv2, PP-LCNet and the **S**imple **S**emi-supervised **L**abel **D**istillation (SSLD) algorithm, and supports plenty of image classification and image recognition algorithms. Based on the algorithms above, PaddleClas releases the PP-ShiTu image recognition system and the [**P**ractical **U**ltra **L**ight-weight image **C**lassification solutions](docs/en/PULC/PULC_quickstart_en.md).

![](https://user-images.githubusercontent.com/19523330/173539361-68cf7ab1-7e3b-4e5e-b00f-1500719bd2a2.png)
## Welcome to Join the Technical Exchange Group

@@ -48,41 +50,57 @@ Four sample solutions are provided, including product recognition, vehicle recog
</div>

## Quick Start

Quick experience of the PP-ShiTu image recognition system: [Link](./docs/en/quick_start/quick_start_recognition_en.md)

Quick experience of the **P**ractical **U**ltra **L**ight-weight image **C**lassification models: [Link](docs/en/PULC/PULC_quickstart_en.md)

## Tutorials

- [Install Paddle](./docs/en/installation/install_paddle_en.md)
- [Install the PaddleClas Environment](./docs/en/installation/install_paddleclas_en.md)
- [Practical Ultra Light-weight image Classification solutions](./docs/en/PULC/PULC_train_en.md)
  - [PULC Quick Start](docs/en/PULC/PULC_quickstart_en.md)
  - [PULC Model Zoo](docs/en/PULC/PULC_model_list_en.md)
  - [PULC Classification Model of Someone or Nobody](docs/en/PULC/PULC_person_exists_en.md)
  - [PULC Recognition Model of Person Attribute](docs/en/PULC/PULC_person_attribute_en.md)
  - [PULC Classification Model of Wearing or Unwearing Safety Helmet](docs/en/PULC/PULC_safety_helmet_en.md)
  - [PULC Classification Model of Traffic Sign](docs/en/PULC/PULC_traffic_sign_en.md)
  - [PULC Recognition Model of Vehicle Attribute](docs/en/PULC/PULC_vehicle_attribute_en.md)
  - [PULC Classification Model of Containing or Uncontaining Car](docs/en/PULC/PULC_car_exists_en.md)
  - [PULC Classification Model of Text Image Orientation](docs/en/PULC/PULC_text_image_orientation_en.md)
  - [PULC Classification Model of Textline Orientation](docs/en/PULC/PULC_textline_orientation_en.md)
  - [PULC Classification Model of Language](docs/en/PULC/PULC_language_classification_en.md)
- [Quick Start of Recognition](./docs/en/quick_start/quick_start_recognition_en.md)
- [Introduction to Image Recognition Systems](#Introduction_to_Image_Recognition_Systems)
- [Image Recognition Demo images](#Rec_Demo_images)
- [PULC demo images](#Clas_Demo_images)
- Algorithms Introduction
  - [Backbone Network and Pre-trained Model Library](./docs/en/algorithm_introduction/ImageNet_models_en.md)
  - [Mainbody Detection](./docs/en/image_recognition_pipeline/mainbody_detection_en.md)
  - [Feature Learning](./docs/en/image_recognition_pipeline/feature_extraction_en.md)
  - [Vector Search](./deploy/vector_search/README.md)
- Inference Model Prediction
  - [Python Inference](./docs/en/inference_deployment/python_deploy_en.md)
  - [C++ Classification Inference](./deploy/cpp/readme_en.md), [C++ PP-ShiTu Inference](deploy/cpp_shitu/readme_en.md)
- Model Deploy (only classification is supported for now; recognition is coming soon)
  - [Hub Serving Deployment](./deploy/hubserving/readme_en.md)
  - [Mobile Deployment](./deploy/lite/readme_en.md)
  - [Inference Using whl](./docs/en/inference_deployment/whl_deploy_en.md)
- Advanced Tutorial
  - [Knowledge Distillation](./docs/en/advanced_tutorials/distillation/distillation_en.md)
  - [Model Quantization](./docs/en/algorithm_introduction/model_prune_quantization_en.md)
  - [Data Augmentation](./docs/en/advanced_tutorials/DataAugmentation_en.md)
- [License](#License)
- [Contribution](#Contribution)
<a name="Introduction_to_PULC"></a>
## Introduction to Practical Ultra Light-weight image Classification solutions
<div align="center">
<img src="https://user-images.githubusercontent.com/19523330/173011854-b10fcd7a-b799-4dfd-a1cf-9504952a3c44.png" width = "800" />
</div>
PULC solutions consist of a PP-LCNet light-weight backbone, SSLD pretrained models, an ensemble of data augmentation strategies, and SKL-UGI knowledge distillation.
PULC models run inference within 3 ms on CPU devices, with accuracy comparable to SwinTransformer. Nine practical models covering pedestrian, vehicle and OCR scenarios are also released.
<a name="Introduction_to_Image_Recognition_Systems"></a> <a name="Introduction_to_Image_Recognition_Systems"></a>
## Introduction to Image Recognition Systems ## Introduction to Image Recognition Systems
...@@ -97,8 +115,14 @@ Image recognition can be divided into three steps: ...@@ -97,8 +115,14 @@ Image recognition can be divided into three steps:
For a new unknown category, there is no need to retrain the model, just prepare images of new category, extract features and update retrieval database and the category can be recognised. For a new unknown category, there is no need to retrain the model, just prepare images of new category, extract features and update retrieval database and the category can be recognised.
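The retrieval-based design can be summarized in a few lines of Python. This is an illustrative sketch only, not the PP-ShiTu API: `detector`, `extractor`, `index` and `gallery_labels` are hypothetical stand-ins for the mainbody-detection model, the feature-extraction model, the vector-search backend and the gallery label table.

```python
# Illustrative sketch of a PP-ShiTu-style retrieval pipeline (not the real API).
# `detector`, `extractor` and `index` are hypothetical stand-ins for the
# mainbody-detection model, feature-extraction model and vector-search backend.

def recognize(image, detector, extractor, index, gallery_labels, top_k=5):
    results = []
    for box in detector(image):                  # 1. mainbody detection
        feat = extractor(image, box)             # 2. feature extraction
        ids, scores = index.search(feat, top_k)  # 3. vector retrieval
        results.append((box, [(gallery_labels[i], s) for i, s in zip(ids, scores)]))
    return results

def add_category(images, label, extractor, index, gallery_labels):
    # New categories only require indexing their features; no retraining is needed.
    for img in images:
        index.add(extractor(img, None))
        gallery_labels.append(label)
```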
<a name="Demo_images"></a> <a name="Clas_Demo_images"></a>
## Demo images [more](https://github.com/PaddlePaddle/PaddleClas/tree/release/2.2/docs/images/recognition/more_demo_images) ## PULC demo images
<div align="center">
<img src="docs/images/classification_en.gif">
</div>
<a name="Rec_Demo_images"></a>
## Image Recognition Demo images [more](https://github.com/PaddlePaddle/PaddleClas/tree/release/2.2/docs/images/recognition/more_demo_images)
- Product recognition
<div align="center">
<img src="https://user-images.githubusercontent.com/18028216/122769644-51604f80-d2d7-11eb-8290-c53b12a5c1f6.gif" width = "400" />
...
Global:
  infer_imgs: "./images/PULC/car_exists/objects365_00001507.jpeg"
  inference_model_dir: "./models/car_exists_infer"
  batch_size: 1
  use_gpu: True
  enable_mkldnn: False
  cpu_num_threads: 10
  enable_benchmark: True
  use_fp16: False
  ir_optim: True
  use_tensorrt: False
  gpu_mem: 8000
  enable_profile: False
PreProcess:
  transform_ops:
    - ResizeImage:
        resize_short: 256
    - CropImage:
        size: 224
    - NormalizeImage:
        scale: 0.00392157
        mean: [0.485, 0.456, 0.406]
        std: [0.229, 0.224, 0.225]
        order: ''
        channel_num: 3
    - ToCHWImage:
PostProcess:
  main_indicator: ThreshOutput
  ThreshOutput:
    threshold: 0.5
    label_0: no_car
    label_1: contains_car
  SavePreLabel:
    save_dir: ./pre_label/
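The `ThreshOutput` postprocess configured above turns the two-class output into a binary decision. A minimal sketch of the logic follows; it assumes the model emits a softmax probability for the positive class at index 1, and the function name is illustrative, not PaddleClas code:

```python
# Illustrative sketch of the ThreshOutput postprocess configured above
# (not the PaddleClas implementation; assumes a two-class softmax output).
def thresh_output(probs, threshold=0.5, label_0="no_car", label_1="contains_car"):
    """probs: [p_class0, p_class1] from the classifier."""
    score = probs[1]
    if score >= threshold:
        return {"class_id": 1, "score": score, "label_name": label_1}
    return {"class_id": 0, "score": 1 - score, "label_name": label_0}
```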
Global:
  infer_imgs: "./images/PULC/language_classification/word_35404.png"
  inference_model_dir: "./models/language_classification_infer"
  batch_size: 1
  use_gpu: True
  enable_mkldnn: False
  cpu_num_threads: 10
  enable_benchmark: True
  use_fp16: False
  ir_optim: True
  use_tensorrt: False
  gpu_mem: 8000
  enable_profile: False
PreProcess:
  transform_ops:
    - ResizeImage:
        size: [160, 80]
    - NormalizeImage:
        scale: 0.00392157
        mean: [0.485, 0.456, 0.406]
        std: [0.229, 0.224, 0.225]
        order: ''
        channel_num: 3
    - ToCHWImage:
PostProcess:
  main_indicator: Topk
  Topk:
    topk: 2
    class_id_map_file: "../ppcls/utils/PULC_label_list/language_classification_label_list.txt"
  SavePreLabel:
    save_dir: ./pre_label/
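The `Topk` postprocess used here keeps the `topk` highest-scoring classes and maps their ids to names through `class_id_map_file`. A rough, illustrative sketch of that logic (not the PaddleClas implementation):

```python
# Illustrative sketch of the Topk postprocess (not the PaddleClas implementation).
import numpy as np

def topk(scores, k=2, class_id_map=None):
    """scores: 1-D array of per-class probabilities; class_id_map: id -> name."""
    ids = np.argsort(scores)[::-1][:k]  # indices of the k largest scores
    return {
        "class_ids": ids.tolist(),
        "scores": [float(scores[i]) for i in ids],
        "label_names": [class_id_map[i] for i in ids] if class_id_map else [],
    }
```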
Global:
  infer_imgs: "./images/PULC/person_attribute/090004.jpg"
  inference_model_dir: "./models/person_attribute_infer"
  batch_size: 1
  use_gpu: True
  enable_mkldnn: True
  cpu_num_threads: 10
  benchmark: False
  use_fp16: False
  ir_optim: True
  use_tensorrt: False
  gpu_mem: 8000
  enable_profile: False
PreProcess:
  transform_ops:
    - ResizeImage:
        size: [192, 256]
    - NormalizeImage:
        scale: 1.0/255.0
        mean: [0.485, 0.456, 0.406]
        std: [0.229, 0.224, 0.225]
        order: ''
        channel_num: 3
    - ToCHWImage:
PostProcess:
  main_indicator: PersonAttribute
  PersonAttribute:
    threshold: 0.5  # default threshold
    glasses_threshold: 0.3  # threshold only for glasses
    hold_threshold: 0.6  # threshold only for hold
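Unlike the single-label heads above, `PersonAttribute` treats the output as independent per-attribute scores and applies a default threshold, with per-attribute overrides such as the looser glasses threshold. A hedged sketch of that idea (attribute names and layout are hypothetical; the real layout is defined by the model):

```python
# Illustrative sketch of per-attribute thresholding (not the PaddleClas
# implementation; the attribute names used here are hypothetical).
def person_attribute(scores, attribute_names, threshold=0.5,
                     glasses_threshold=0.3, hold_threshold=0.6):
    """scores: independent per-attribute sigmoid outputs, one per name."""
    overrides = {"glasses": glasses_threshold, "hold": hold_threshold}
    return [name for name, score in zip(attribute_names, scores)
            if score >= overrides.get(name, threshold)]
```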
Global:
  infer_imgs: "./images/PULC/person_exists/objects365_02035329.jpg"
  inference_model_dir: "./models/person_exists_infer"
  batch_size: 1
  use_gpu: True
  enable_mkldnn: False
  cpu_num_threads: 10
  enable_benchmark: True
  use_fp16: False
  ir_optim: True
  use_tensorrt: False
  gpu_mem: 8000
  enable_profile: False
PreProcess:
  transform_ops:
    - ResizeImage:
        resize_short: 256
    - CropImage:
        size: 224
    - NormalizeImage:
        scale: 0.00392157
        mean: [0.485, 0.456, 0.406]
        std: [0.229, 0.224, 0.225]
        order: ''
        channel_num: 3
    - ToCHWImage:
PostProcess:
  main_indicator: ThreshOutput
  ThreshOutput:
    threshold: 0.5
    label_0: nobody
    label_1: someone
  SavePreLabel:
    save_dir: ./pre_label/
Global:
  infer_imgs: "./images/PULC/safety_helmet/safety_helmet_test_1.png"
  inference_model_dir: "./models/safety_helmet_infer"
  batch_size: 1
  use_gpu: True
  enable_mkldnn: False
  cpu_num_threads: 10
  enable_benchmark: True
  use_fp16: False
  ir_optim: True
  use_tensorrt: False
  gpu_mem: 8000
  enable_profile: False
PreProcess:
  transform_ops:
    - ResizeImage:
        resize_short: 256
    - CropImage:
        size: 224
    - NormalizeImage:
        scale: 0.00392157
        mean: [0.485, 0.456, 0.406]
        std: [0.229, 0.224, 0.225]
        order: ''
        channel_num: 3
    - ToCHWImage:
PostProcess:
  main_indicator: ThreshOutput
  ThreshOutput:
    threshold: 0.5
    label_0: wearing_helmet
    label_1: unwearing_helmet
  SavePreLabel:
    save_dir: ./pre_label/
Global:
  infer_imgs: "./images/PULC/text_image_orientation/img_rot0_demo.jpg"
  inference_model_dir: "./models/text_image_orientation_infer"
  batch_size: 1
  use_gpu: True
  enable_mkldnn: False
  cpu_num_threads: 10
  enable_benchmark: True
  use_fp16: False
  ir_optim: True
  use_tensorrt: False
  gpu_mem: 8000
  enable_profile: False
PreProcess:
  transform_ops:
    - ResizeImage:
        resize_short: 256
    - CropImage:
        size: 224
    - NormalizeImage:
        scale: 0.00392157
        mean: [0.485, 0.456, 0.406]
        std: [0.229, 0.224, 0.225]
        order: ''
        channel_num: 3
    - ToCHWImage:
PostProcess:
  main_indicator: Topk
  Topk:
    topk: 2
    class_id_map_file: "../ppcls/utils/PULC_label_list/text_image_orientation_label_list.txt"
  SavePreLabel:
    save_dir: ./pre_label/
Global:
  infer_imgs: "./images/PULC/textline_orientation/textline_orientation_test_0_0.png"
  inference_model_dir: "./models/textline_orientation_infer"
  batch_size: 1
  use_gpu: True
  enable_mkldnn: True
  cpu_num_threads: 10
  enable_benchmark: True
  use_fp16: False
  ir_optim: True
  use_tensorrt: False
  gpu_mem: 8000
  enable_profile: False
PreProcess:
  transform_ops:
    - ResizeImage:
        size: [160, 80]
    - NormalizeImage:
        scale: 0.00392157
        mean: [0.485, 0.456, 0.406]
        std: [0.229, 0.224, 0.225]
        order: ''
        channel_num: 3
    - ToCHWImage:
PostProcess:
  main_indicator: Topk
  Topk:
    topk: 1
    class_id_map_file: "../ppcls/utils/PULC_label_list/textline_orientation_label_list.txt"
  SavePreLabel:
    save_dir: ./pre_label/
Global:
  infer_imgs: "./images/PULC/traffic_sign/99603_17806.jpg"
  inference_model_dir: "./models/traffic_sign_infer"
  batch_size: 1
  use_gpu: True
  enable_mkldnn: True
  cpu_num_threads: 10
  benchmark: False
  use_fp16: False
  ir_optim: True
  use_tensorrt: False
  gpu_mem: 8000
  enable_profile: False
PreProcess:
  transform_ops:
    - ResizeImage:
        resize_short: 256
    - CropImage:
        size: 224
    - NormalizeImage:
        scale: 0.00392157
        mean: [0.485, 0.456, 0.406]
        std: [0.229, 0.224, 0.225]
        order: ''
        channel_num: 3
    - ToCHWImage:
PostProcess:
  main_indicator: Topk
  Topk:
    topk: 5
    class_id_map_file: "../ppcls/utils/PULC_label_list/traffic_sign_label_list.txt"
  SavePreLabel:
    save_dir: ./pre_label/
Global:
  infer_imgs: "./images/PULC/vehicle_attribute/0002_c002_00030670_0.jpg"
  inference_model_dir: "./models/vehicle_attribute_infer"
  batch_size: 1
  use_gpu: True
  enable_mkldnn: True
  cpu_num_threads: 10
  benchmark: False
  use_fp16: False
  ir_optim: True
  use_tensorrt: False
  gpu_mem: 8000
  enable_profile: False
PreProcess:
  transform_ops:
    - ResizeImage:
        size: [256, 192]
    - NormalizeImage:
        scale: 1.0/255.0
        mean: [0.485, 0.456, 0.406]
        std: [0.229, 0.224, 0.225]
        order: ''
        channel_num: 3
    - ToCHWImage:
PostProcess:
  main_indicator: VehicleAttribute
  VehicleAttribute:
    color_threshold: 0.5
    type_threshold: 0.5
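`VehicleAttribute` post-processes two groups of scores, color and type, each gated by its own threshold. An illustrative sketch follows; how the model splits its output into the two groups is an assumption here, and the real layout is defined by the model:

```python
# Illustrative sketch of the VehicleAttribute postprocess (not the PaddleClas
# implementation; the separate color/type score arrays are an assumption).
import numpy as np

def vehicle_attribute(color_scores, type_scores,
                      color_threshold=0.5, type_threshold=0.5):
    color_id = int(np.argmax(color_scores))
    type_id = int(np.argmax(type_scores))
    color = color_id if color_scores[color_id] >= color_threshold else "unknown"
    vtype = type_id if type_scores[type_id] >= type_threshold else "unknown"
    return {"color": color, "type": vtype}
```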
Global:
  infer_imgs: "./images/Pedestrain_Attr.jpg"
  inference_model_dir: "../inference/"
  batch_size: 1
  use_gpu: True
  enable_mkldnn: False
  cpu_num_threads: 10
  enable_benchmark: True
  use_fp16: False
  ir_optim: True
  use_tensorrt: False
  gpu_mem: 8000
  enable_profile: False
PreProcess:
  transform_ops:
    - ResizeImage:
        size: [192, 256]
    - NormalizeImage:
        scale: 1.0/255.0
        mean: [0.485, 0.456, 0.406]
        std: [0.229, 0.224, 0.225]
        order: ''
        channel_num: 3
    - ToCHWImage:
PostProcess:
  main_indicator: PersonAttribute
  PersonAttribute:
    threshold: 0.5  # default threshold
    glasses_threshold: 0.3  # threshold only for glasses
    hold_threshold: 0.6  # threshold only for hold
@@ -8,7 +8,7 @@ Global:
   image_shape: [3, 640, 640]
   threshold: 0.2
   max_det_results: 5
-  labe_list:
+  label_list:
   - foreground
   use_gpu: True
...
 Global:
-  infer_imgs: "./images/ILSVRC2012_val_00000010.jpeg"
+  infer_imgs: "./images/ImageNet/ILSVRC2012_val_00000010.jpeg"
   inference_model_dir: "./models"
   batch_size: 1
   use_gpu: True
...
Global:
  infer_imgs: "./images/ImageNet/ILSVRC2012_val_00000010.jpeg"
  inference_model_dir: "./models/PPHGNet_tiny_calling_halfbody/"
  batch_size: 1
  use_gpu: True
  enable_mkldnn: True
  cpu_num_threads: 10
  enable_benchmark: True
  use_fp16: False
  ir_optim: True
  use_tensorrt: False
  gpu_mem: 8000
  enable_profile: False
PreProcess:
  transform_ops:
    - ResizeImage:
        resize_short: 224
    - NormalizeImage:
        scale: 0.00392157
        mean: [0.485, 0.456, 0.406]
        std: [0.229, 0.224, 0.225]
        order: ''
        channel_num: 3
    - ToCHWImage:
PostProcess:
  main_indicator: Topk
  Topk:
    topk: 2
    class_id_map_file: "../dataset/data/phone_label_list.txt"
  SavePreLabel:
    save_dir: ./pre_label/
 Global:
-  infer_imgs: "./images/ILSVRC2012_val_00000010.jpeg"
+  infer_imgs: "./images/ImageNet/ILSVRC2012_val_00000010.jpeg"
   inference_model_dir: "./models"
   batch_size: 1
   use_gpu: True
...
@@ -5,7 +5,7 @@ Global:
   image_shape: [3, 640, 640]
   threshold: 0.2
   max_det_results: 1
-  labe_list:
+  label_list:
   - foreground
 # inference engine config
...
@@ -8,7 +8,7 @@ Global:
   image_shape: [3, 640, 640]
   threshold: 0.2
   max_det_results: 5
-  labe_list:
+  label_list:
   - foreground
   use_gpu: True
...
@@ -8,7 +8,7 @@ Global:
   image_shape: [3, 640, 640]
   threshold: 0.2
   max_det_results: 5
-  labe_list:
+  label_list:
   - foreground
   use_gpu: True
...
@@ -8,7 +8,7 @@ Global:
   image_shape: [3, 640, 640]
   threshold: 0.2
   max_det_results: 5
-  labe_list:
+  label_list:
   - foreground
   use_gpu: True
...
@@ -8,7 +8,7 @@ Global:
   image_shape: [3, 640, 640]
   threshold: 0.2
   max_det_results: 5
-  labe_list:
+  label_list:
   - foreground
   use_gpu: True
...
@@ -8,7 +8,7 @@ Global:
   image_shape: [3, 640, 640]
   threshold: 0.2
   max_det_results: 5
-  labe_list:
+  label_list:
   - foreground
 # inference engine config
...
@@ -8,7 +8,7 @@ Global:
   image_shape: [3, 640, 640]
   threshold: 0.2
   max_det_results: 5
-  labe_list:
+  label_list:
   - foreground
   use_gpu: True
...
@@ -33,26 +33,26 @@ using namespace paddle_infer;

namespace Detection {
// Object Detection Result
struct ObjectResult {
  // Rectangle coordinates of detected object: left, right, top, down
  std::vector<int> rect;
  // Class id of detected object
  int class_id;
  // Confidence of detected object
  float confidence;
};

// Generate visualization colormap for each class
std::vector<int> GenerateColorMap(int num_class);

// Visualize Detection Result
cv::Mat VisualizeResult(const cv::Mat &img,
                        const std::vector<ObjectResult> &results,
                        const std::vector<std::string> &lables,
                        const std::vector<int> &colormap, const bool is_rbox);

class ObjectDetector {
public:
  explicit ObjectDetector(const YAML::Node &config_file) {
    this->use_gpu_ = config_file["Global"]["use_gpu"].as<bool>();
    if (config_file["Global"]["gpu_id"].IsDefined())
@@ -68,9 +68,9 @@ namespace Detection {
    this->threshold_ = config_file["Global"]["threshold"].as<float>();
    this->max_det_results_ = config_file["Global"]["max_det_results"].as<int>();
    this->image_shape_ =
        config_file["Global"]["image_shape"].as<std::vector<int>>();
    this->label_list_ =
        config_file["Global"]["label_list"].as<std::vector<std::string>>();
    this->ir_optim_ = config_file["Global"]["ir_optim"].as<bool>();
    this->batch_size_ = config_file["Global"]["batch_size"].as<int>();
@@ -83,19 +83,19 @@ namespace Detection {
                 const std::string &run_mode = "fluid");

  // Run predictor
  void Predict(const std::vector<cv::Mat> imgs, const int warmup = 0,
               const int repeats = 1,
               std::vector<ObjectResult> *result = nullptr,
               std::vector<int> *bbox_num = nullptr,
               std::vector<double> *times = nullptr);

  const std::vector<std::string> &GetLabelList() const {
    return this->label_list_;
  }

  const float &GetThreshold() const { return this->threshold_; }

private:
  bool use_gpu_ = true;
  int gpu_id_ = 0;
  int gpu_mem_ = 800;
@@ -109,7 +109,7 @@ namespace Detection {
  float threshold_ = 0.5;
  float max_det_results_ = 5;
  std::vector<int> image_shape_ = {3, 640, 640};
  std::vector<std::string> label_list_;
  bool ir_optim_ = true;
  bool det_permute_ = true;
  bool det_postprocess_ = true;
@@ -124,15 +124,15 @@ namespace Detection {
  void Preprocess(const cv::Mat &image_mat);

  // Postprocess result
  void Postprocess(const std::vector<cv::Mat> mats,
                   std::vector<ObjectResult> *result, std::vector<int> bbox_num,
                   bool is_rbox);

  std::shared_ptr<Predictor> predictor_;
  Preprocessor preprocessor_;
  ImageBlob inputs_;
  std::vector<float> output_data_;
  std::vector<int> out_bbox_num_data_;
};
} // namespace Detection
简体中文 | [English](readme_en.md)

# Service deployment based on PaddleHub Serving

PaddleClas supports rapid service deployment through PaddleHub. Image classification deployment is currently supported; image recognition deployment is coming soon.

## Contents
- [1. Introduction](#1-简介)
- [2. Prepare the environment](#2-准备环境)
- [3. Download the inference model](#3-下载推理模型)
- [4. Install the service module](#4-安装服务模块)
- [5. Start the service](#5-启动服务)
  - [5.1 Start from the command line](#51-命令行启动)
  - [5.2 Start from a configuration file](#52-配置文件启动)
- [6. Send prediction requests](#6-发送预测请求)
- [7. Customize the service module](#7-自定义修改服务模块)

<a name="1"></a>
## 1. Introduction

The hubserving service deployment package `clas` contains 3 required files; the directory is as follows:

```shell
deploy/hubserving/clas/
├── __init__.py # empty file, required
├── config.json # configuration file, optional, passed as a parameter when starting the service with a configuration
├── module.py # main module, required, contains the complete logic of the service
└── params.py # parameter file, required, including the model path and pre-/post-processing parameters
```
<a name="2"></a>
## 2. Prepare the environment

```shell
# install paddlehub; version 2.1.0 is recommended
python3.7 -m pip install paddlehub==2.1.0 --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple
```

<a name="3"></a>
## 3. Download the inference model

Before installing the service module, you need to prepare the inference model and put it in the correct path. The default model path is:

* classification inference model structure file: `PaddleClas/inference/inference.pdmodel`
* classification inference model weight file: `PaddleClas/inference/inference.pdiparams`

**Note**:
* The model file path can be viewed and modified in `PaddleClas/deploy/hubserving/clas/params.py`:
  ```python
  "inference_model_dir": "../inference/"
  ```
* The model files (both `.pdmodel` and `.pdiparams`) must be named `inference`.
* We provide a large number of pretrained models based on the ImageNet-1k dataset. For the model list and download addresses, see the [model zoo overview](../../docs/zh_CN/algorithm_introduction/ImageNet_models.md). You can also use your own trained and converted models.
<a name="4"></a>
## 4. Install the service module

* For Linux, the installation example is as follows:
  ```shell
  cd PaddleClas/deploy
  # install the service module:
  hub install hubserving/clas/
  ```

* For Windows (where the folder separator is `\`), the installation example is as follows:
  ```shell
  cd PaddleClas\deploy
  # install the service module:
  hub install hubserving\clas\
  ```
<a name="5"></a>
## 5. 启动服务
<a name="5.1"></a>
### 5.1 命令行启动
该方式仅支持使用 CPU 预测。启动命令:
### 4. 启动服务
#### 方式1. 命令行命令启动(仅支持CPU)
**启动命令:**
```shell ```shell
$ hub serving start --modules Module1==Version1 \ hub serving start \
--port XXXX \ --modules clas_system
--use_multiprocess \ --port 8866
--workers \
``` ```
这样就完成了一个服务化 API 的部署,使用默认端口号 8866。
**参数:** **参数说明**:
|参数|用途| | 参数 | 用途 |
|-|-| | ------------------ | ----------------------------------------------------------------------------------------------------------------------------- |
|--modules/-m| [**必选**] PaddleHub Serving预安装模型,以多个Module==Version键值对的形式列出<br>*`当不指定Version时,默认选择最新版本`*| | --modules/-m | [**必选**] PaddleHub Serving 预安装模型,以多个 Module==Version 键值对的形式列出<br>*`当不指定 Version 时,默认选择最新版本`* |
|--port/-p| [**可选**] 服务端口,默认为8866| | --port/-p | [**可选**] 服务端口,默认为 8866 |
|--use_multiprocess| [**可选**] 是否启用并发方式,默认为单进程方式,推荐多核CPU机器使用此方式<br>*`Windows操作系统只支持单进程方式`*| | --use_multiprocess | [**可选**] 是否启用并发方式,默认为单进程方式,推荐多核 CPU 机器使用此方式<br>*`Windows 操作系统只支持单进程方式`* |
|--workers| [**可选**] 在并发方式下指定的并发任务数,默认为`2*cpu_count-1`,其中`cpu_count`为CPU核数| | --workers | [**可选**] 在并发方式下指定的并发任务数,默认为 `2*cpu_count-1`,其中 `cpu_count` 为 CPU 核数 |
更多部署细节详见 [PaddleHub Serving模型一键服务部署](https://paddlehub.readthedocs.io/zh_CN/release-v2.1/tutorial/serving.html)
<a name="5.2"></a>
### 5.2 Start from a configuration file

This method supports prediction on CPU or GPU. Start command:

```shell
hub serving start -c config.json
```

The format of `config.json` is as follows:

```json
{
    "modules_info": {
@@ -97,92 +131,109 @@ $ hub serving start --modules Module1==Version1 \
}
```

**Parameter description**:
* The configurable parameters in `init_args` match the `_initialize` function interface in `module.py`. Among them,
  - when `use_gpu` is `true`, the service is started on GPU;
  - when `enable_mkldnn` is `true`, MKL-DNN acceleration is used.
* The configurable parameters in `predict_args` match the `predict` function interface in `module.py`.

**Note**:
* When starting the service from a configuration file, the parameter settings in the configuration file are used and other command-line parameters are ignored;
* If you predict on GPU (i.e. `use_gpu` is set to `true`), you need to set the `CUDA_VISIBLE_DEVICES` environment variable to specify the GPU card before starting the service, e.g. `export CUDA_VISIBLE_DEVICES=0`;
* **`use_gpu` cannot be `true` at the same time as `use_multiprocess`**;
* **when `use_gpu` and `enable_mkldnn` are both `true`, `enable_mkldnn` is ignored and the GPU is used**.

For example, to start the service on GPU card 3:

```shell
cd PaddleClas/deploy
export CUDA_VISIBLE_DEVICES=3
hub serving start -c hubserving/clas/config.json
```
<a name="6"></a>
## 6. Send prediction requests

After the service is configured, you can use the following command to send a prediction request and obtain the prediction result:

```shell
cd PaddleClas/deploy
python3.7 hubserving/test_hubserving.py \
--server_url http://127.0.0.1:8866/predict/clas_system \
--image_file ./hubserving/ILSVRC2012_val_00006666.JPEG \
--batch_size 8
```

**Prediction output**
```log
The result(s): class_ids: [57, 67, 68, 58, 65], label_names: ['garter snake, grass snake', 'diamondback, diamondback rattlesnake, Crotalus adamanteus', 'sidewinder, horned rattlesnake, Crotalus cerastes', 'water snake', 'sea snake'], scores: [0.21915, 0.15631, 0.14794, 0.13177, 0.12285]
The average time of prediction cost: 2.970 s/image
The average time cost: 3.014 s/image
The average top-1 score: 0.110
```

**Script parameters**:
* **server_url**: the service address, in the format `http://[ip_address]:[port]/predict/[module_name]`.
* **image_path**: the test image path, either a single image file or a directory of images.
* **batch_size**: [**optional**] prediction batch size, default `1`.
* **resize_short**: [**optional**] preprocessing: resize by the short side, default `256`.
* **crop_size**: [**optional**] preprocessing: center-crop size, default `224`.
* **normalize**: [**optional**] preprocessing: whether to `normalize`, default `True`.
* **to_chw**: [**optional**] preprocessing: whether to transpose to `CHW` order, default `True`.

**Note**: If you use `Transformer`-series models, such as `DeiT_***_384` and `ViT_***_384`, pay attention to the model's input data size and pass `--resize_short=384 --crop_size=384`.
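Besides the bundled test script, a request can also be sent with a few lines of Python. The sketch below is illustrative only: the payload layout (an `images` list of base64 strings) follows the common PaddleHub serving convention and is an assumption here; the exact keys expected by `clas_system` are defined in `module.py` and `test_hubserving.py`, so check those files before relying on it.

```python
# Illustrative client sketch; the "images" key is an assumption based on the
# common PaddleHub serving convention. Check module.py / test_hubserving.py
# for the exact payload expected by clas_system.
import base64
import json
import requests

with open("./hubserving/ILSVRC2012_val_00006666.JPEG", "rb") as f:
    img_b64 = base64.b64encode(f.read()).decode("utf8")

resp = requests.post(
    "http://127.0.0.1:8866/predict/clas_system",
    headers={"Content-type": "application/json"},
    data=json.dumps({"images": [img_b64]}),
)
print(resp.json())
```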
**Returned result format**:
The returned result is a list, containing the top-k classification results, the corresponding scores, and the prediction time for the image:

```shell
list: returned results
└─ list: result of the first image
   ├── list: top-k classification results, sorted by score in descending order
   ├── list: scores of the top-k classification results, sorted in descending order
   └─ float: classification time of the image, in seconds
```

**Note**: If you need to add, delete or modify the returned fields, modify the corresponding module; for the complete workflow, see the next section on customizing the service module.
<a name="7"></a>
## 7. Customize the service module

If you need to modify the service logic, do the following:

1. Stop the service
   ```shell
   hub serving stop --port/-p XXXX
   ```
2. Modify the code in the corresponding files, such as `module.py` and `params.py`, according to your needs. After modifying `module.py`, the module must be reinstalled (`hub install hubserving/clas/`) and redeployed. Before deploying, you can quickly test the code to be deployed with `python3.7 hubserving/clas/module.py`.
3. Uninstall the old service package
   ```shell
   hub uninstall clas_system
   ```
4. Install the modified service package
   ```shell
   hub install hubserving/clas/
   ```
5. Restart the service
   ```shell
   hub serving start -m clas_system
   ```

**Note**:
Common parameters can be modified in `PaddleClas/deploy/hubserving/clas/params.py`:
* to switch models, change the model file path parameter:
  ```python
  "inference_model_dir":
  ```
* to change the number of `top-k` results returned by postprocessing:
  ```python
  'topk':
  ```
* to change the label / class-id mapping file used by postprocessing:
  ```python
  'class_id_map_file':
  ```

To avoid unnecessary latency and to allow prediction at a given batch size, the data preprocessing logic (including `resize`, `crop` and other operations) runs on the client side, so the preprocessing-related code needs to be modified in [PaddleClas/deploy/hubserving/test_hubserving.py#L41-L47](./test_hubserving.py#L41-L47) and [PaddleClas/deploy/hubserving/test_hubserving.py#L51-L76](./test_hubserving.py#L51-L76).
@@ -2,82 +2,115 @@ English | [简体中文](readme.md)

# Service deployment based on PaddleHub Serving

PaddleClas supports rapid service deployment through PaddleHub. Image classification deployment is currently supported; please look forward to image recognition deployment.

## Catalogue
- [1. Introduction](#1-introduction)
- [2. Prepare the environment](#2-prepare-the-environment)
- [3. Download the inference model](#3-download-the-inference-model)
- [4. Install the service module](#4-install-the-service-module)
- [5. Start service](#5-start-service)
  - [5.1 Start with command line parameters](#51-start-with-command-line-parameters)
  - [5.2 Start with configuration file](#52-start-with-configuration-file)
- [6. Send prediction requests](#6-send-prediction-requests)
- [7. User-defined service module modification](#7-user-defined-service-module-modification)

<a name="1"></a>
## 1. Introduction

The hubserving service deployment package `clas` contains 3 required files; the directories are as follows:

```shell
deploy/hubserving/clas/
├── __init__.py # Empty file, required
├── config.json # Configuration file, optional, passed in as a parameter when starting the service with a configuration
├── module.py # The main module, required, contains the complete logic of the service
└── params.py # Parameter file, required, including the model path and pre-/post-processing parameters
```
<a name="2"></a>
## 2. Prepare the environment

```shell
# Install paddlehub; version 2.1.0 is recommended
python3.7 -m pip install paddlehub==2.1.0 --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple
```

<a name="3"></a>
## 3. Download the inference model

Before installing the service module, you need to prepare the inference model and put it in the correct path. The default model path is:

* Classification inference model structure file: `PaddleClas/inference/inference.pdmodel`
* Classification inference model weight file: `PaddleClas/inference/inference.pdiparams`

**Notice**:
* Model file paths can be viewed and modified in `PaddleClas/deploy/hubserving/clas/params.py`:
  ```python
  "inference_model_dir": "../inference/"
  ```
* Model files (including `.pdmodel` and `.pdiparams`) must be named `inference`.
* We provide a large number of pre-trained models based on the ImageNet-1k dataset. For the model list and download addresses, see the [Model Library Overview](../../docs/en/algorithm_introduction/ImageNet_models_en.md), or you can use your own trained and converted models.
<a name="4"></a>
## 4. Install the service module

* In the Linux environment, the installation example is as follows:
  ```shell
  cd PaddleClas/deploy
  # Install the service module:
  hub install hubserving/clas/
  ```

* In the Windows environment (where the folder separator is `\`), the installation example is as follows:
  ```shell
  cd PaddleClas\deploy
  # Install the service module:
  hub install hubserving\clas\
  ```
<a name="5"></a>
## 5. Start service

<a name="5.1"></a>
### 5.1 Start with command line parameters

This method only supports prediction using CPU. Start command:

```shell
hub serving start \
--modules clas_system \
--port 8866
```
This completes the deployment of a service API, using the default port number 8866.

**Parameter description**:
| Parameter | Purpose |
| --- | --- |
| --modules/-m | [**required**] PaddleHub Serving pre-installed models, listed in the form of multiple Module==Version key-value pairs<br>*`When no Version is specified, the latest version is selected by default`* |
| --port/-p | [**optional**] Service port, default is 8866 |
| --use_multiprocess | [**optional**] Whether to enable concurrent mode; the default is single-process mode, and concurrent mode is recommended for multi-core CPU machines<br>*`The Windows operating system only supports single-process mode`* |
| --workers | [**optional**] The number of concurrent tasks specified in concurrent mode, default `2*cpu_count-1`, where `cpu_count` is the number of CPU cores |

For more deployment details, see [PaddleHub Serving Model One-Click Service Deployment](https://paddlehub.readthedocs.io/zh_CN/release-v2.1/tutorial/serving.html).
<a name="5.2"></a>
### 5.2 Start with configuration file
This method only supports prediction using CPU or GPU. Start command:
#### Way 2. Start with configuration file(CPU、GPU)
**start command:**
```shell ```shell
hub serving start --config/-c config.json hub serving start -c config.json
``` ```
Wherein, the format of `config.json` is as follows:
Among them, the format of `config.json` is as follows:
```json ```json
{ {
"modules_info": { "modules_info": {
...@@ -96,104 +129,110 @@ Wherein, the format of `config.json` is as follows: ...@@ -96,104 +129,110 @@ Wherein, the format of `config.json` is as follows:
"workers": 2 "workers": 2
} }
``` ```
**Parameter Description**:
* The configurable parameters in `init_args` are consistent with the `_initialize` function interface in `module.py`. Among them:
  - When `use_gpu` is `true`, the service is started on the GPU.
  - When `enable_mkldnn` is `true`, MKL-DNN acceleration is used.
* The configurable parameters in `predict_args` are consistent with the `predict` function interface in `module.py`.
**Notice**:
* When starting the service with the configuration file, the parameter settings in the configuration file are used, and other command line parameters are ignored;
* If you use GPU prediction (i.e., `use_gpu` is set to `true`), you need to set the `CUDA_VISIBLE_DEVICES` environment variable to specify the GPU card number before starting the service, e.g. `export CUDA_VISIBLE_DEVICES=0`;
* **`use_gpu` and `use_multiprocess` cannot be `true` at the same time;**
* **When both `use_gpu` and `enable_mkldnn` are `true`, `enable_mkldnn` is ignored and the GPU is used.**

For example, to start the service with GPU card No. 3:
```shell
cd PaddleClas/deploy
export CUDA_VISIBLE_DEVICES=3
hub serving start -c hubserving/clas/config.json
```
<a name="6"></a>
## 6. Send prediction requests

After configuring the server, you can use the following command to send a prediction request and obtain the prediction result:

```shell
cd PaddleClas/deploy
python3.7 hubserving/test_hubserving.py \
--server_url http://127.0.0.1:8866/predict/clas_system \
--image_file ./hubserving/ILSVRC2012_val_00006666.JPEG \
--batch_size 8
```
**Predicted output**
```log
The result(s): class_ids: [57, 67, 68, 58, 65], label_names: ['garter snake, grass snake', 'diamondback, diamondback rattlesnake, Crotalus adamanteus', 'sidewinder, horned rattlesnake, Crotalus cerastes', 'water snake', 'sea snake'], scores: [0.21915, 0.15631, 0.14794, 0.13177, 0.12285]
The average time of prediction cost: 2.970 s/image
The average time cost: 3.014 s/image
The average top-1 score: 0.110
```
**Script parameter description**:
* **server_url**: Service address, in the format `http://[ip_address]:[port]/predict/[module_name]`.
* **image_file**: The test image path, which can be a single image path or an image directory path.
* **batch_size**: [**optional**] Make predictions in batches of `batch_size`, default is `1`.
* **resize_short**: [**optional**] When preprocessing, resize by the short edge, default is `256`.
* **crop_size**: [**optional**] The size of the center crop during preprocessing, default is `224`.
* **normalize**: [**optional**] Whether to perform `normalize` during preprocessing, default is `True`.
* **to_chw**: [**optional**] Whether to transpose to `CHW` order during preprocessing, default is `True`.

**Note**: If you use `Transformer` series models, such as `DeiT_***_384`, `ViT_***_384`, etc., please pay attention to the input data size of the model; you need to specify `--resize_short=384 --crop_size=384`.
**Return result format description**:
The returned result is a list, containing, for each image, the top-k classification results, the corresponding scores, and the prediction time, as follows:

```
list: the returned results
└── list: the result of the first image
    ├── list: the top-k classification results, sorted in descending order of score
    ├── list: the scores corresponding to the top-k classification results, sorted in descending order of score
    └── float: the prediction time for the image, in seconds
```
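For reference, a minimal sketch of how a client might unpack one element of this structure; the variable names and the dummy response are illustrative only and simply follow the tree shown above:

```python
# Dummy response shaped like the documented structure (for illustration only).
results = [[["garter snake", "water snake"], [0.219, 0.132], 2.970]]

first_image = results[0]
class_names, scores, cost = first_image[0], first_image[1], first_image[2]
for name, score in zip(class_names, scores):
    print(f"{name}: {score:.5f}")
print(f"prediction time: {cost:.3f} s")
```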
**Note**: If you need to add, delete or modify the returned fields, you can modify the corresponding module. For details, refer to the user-defined service module modification in the next section.
<a name="7"></a>
## 7. User defined service module modification

If you need to modify the service logic, the following steps are required:

1. Stop the service
```shell
hub serving stop --port/-p XXXX
```

2. Modify the code in the corresponding files, such as `module.py` and `params.py`, according to actual needs. After modifying `module.py`, you need to reinstall it (`hub install hubserving/clas/`) and redeploy. Before deploying, you can use the `python3.7 hubserving/clas/module.py` command to quickly test the code to be deployed.

3. Uninstall the old service package
```shell
hub uninstall clas_system
```

4. Install the newly modified service package
```shell
hub install hubserving/clas/
```

5. Restart the service
```shell
hub serving start -m clas_system
```
**Notice**:
Common parameters can be modified in `PaddleClas/deploy/hubserving/clas/params.py` (an illustrative example follows this list):
* To replace the model, modify the model file path parameter:
```python
"inference_model_dir":
```
* To change the number of `top-k` results returned during post-processing:
```python
'topk':
```
* To change the mapping file between labels and class ids used during post-processing:
```python
'class_id_map_file':
```
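Purely as an illustration, these keys might be set as follows; the paths and values here are hypothetical and must match your own exported model and label file:

```python
# Hypothetical example values for the parameters above (adjust to your setup).
cfg = {
    "inference_model_dir": "./inference/",  # directory of the exported inference model (example path)
    "topk": 5,  # number of top-k results to return
    "class_id_map_file": "./imagenet1k_label_list.txt",  # label / class-id mapping file (example path)
}
```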
In order to avoid unnecessary delay and to be able to predict with `batch_size`, data preprocessing logic (including `resize`, `crop` and other operations) is completed on the client side, so the related code in [PaddleClas/deploy/hubserving/test_hubserving.py#L41-L47](./test_hubserving.py#L41-L47) and [PaddleClas/deploy/hubserving/test_hubserving.py#L51-L76](./test_hubserving.py#L51-L76) may need to be modified accordingly.
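For reference, a minimal sketch of this kind of client-side preprocessing (short-edge resize, center crop, normalize, HWC to CHW) using OpenCV and NumPy is shown below; it mirrors the documented defaults (`resize_short=256`, `crop_size=224`) and the ImageNet mean/std used elsewhere in this repo, but it is not the exact code in `test_hubserving.py`:

```python
import cv2
import numpy as np

def preprocess(img, resize_short=256, crop_size=224):
    """Short-edge resize, center crop, normalize, HWC -> CHW."""
    # Resize so that the short edge equals `resize_short`, keeping aspect ratio.
    h, w = img.shape[:2]
    ratio = resize_short / min(h, w)
    img = cv2.resize(img, (round(w * ratio), round(h * ratio)))
    # Center crop to crop_size x crop_size.
    h, w = img.shape[:2]
    top, left = (h - crop_size) // 2, (w - crop_size) // 2
    img = img[top:top + crop_size, left:left + crop_size]
    # Scale to [0, 1], normalize, transpose to CHW.
    img = img.astype(np.float32) / 255.0
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    img = (img - mean) / std
    return img.transpose(2, 0, 1)
```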
@@ -92,9 +92,9 @@ PaddleClas provides converted and optimized inference models, which can be used directly as described below
```shell
# Enter the lite_shitu directory
cd $PaddleClas/deploy/lite_shitu
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/lite/ppshitu_lite_models_v1.2.tar
tar -xf ppshitu_lite_models_v1.2.tar
rm -f ppshitu_lite_models_v1.2.tar
```
#### 2.1.2 Using other models
@@ -162,7 +162,7 @@ git clone https://github.com/PaddlePaddle/PaddleDetection.git
```shell
# Enter the PaddleDetection root directory
cd PaddleDetection
# Export the pretrained model as an inference model
python tools/export_model.py -c configs/picodet/application/mainbody_detection/picodet_lcnet_x2_5_640_mainbody.yml -o weights=https://paddledet.bj.bcebos.com/models/picodet_lcnet_x2_5_640_mainbody.pdparams export_post_process=False --output_dir=inference
# Convert the inference model into a Paddle-Lite optimized model
paddle_lite_opt --model_file=inference/picodet_lcnet_x2_5_640_mainbody/model.pdmodel --param_file=inference/picodet_lcnet_x2_5_640_mainbody/model.pdiparams --optimize_out=inference/picodet_lcnet_x2_5_640_mainbody/mainbody_det
# Copy the converted model into the lite_shitu directory
```
@@ -183,24 +183,56 @@ cd deploy/lite_shitu
**注意**`--optimize_out` 参数为优化后模型的保存路径,无需加后缀`.nb``--model_file` 参数为模型结构信息文件的路径,`--param_file` 参数为模型权重信息文件的路径,请注意文件名。 **注意**`--optimize_out` 参数为优化后模型的保存路径,无需加后缀`.nb``--model_file` 参数为模型结构信息文件的路径,`--param_file` 参数为模型权重信息文件的路径,请注意文件名。
### 2.2 Generate a new retrieval library

Since the lite version of the retrieval library uses `faiss` 1.5.3, which is incompatible with newer versions, the index library needs to be regenerated.

#### 2.2.1 Data and environment setup

```shell
# Go to the parent directory
cd ..
# Download the bottled drinks dataset
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/data/drink_dataset_v1.0.tar && tar -xf drink_dataset_v1.0.tar
rm -rf drink_dataset_v1.0.tar
rm -rf drink_dataset_v1.0/index
# Install faiss 1.5.3
pip install faiss-cpu==1.5.3
# Download the general recognition model; you can replace it with your own inference model
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/general_PPLCNet_x2_5_lite_v1.0_infer.tar
tar -xf general_PPLCNet_x2_5_lite_v1.0_infer.tar
rm -rf general_PPLCNet_x2_5_lite_v1.0_infer.tar
```
#### 2.2.2 Generate the new index file

```shell
# Generate the new index library. Be sure to specify the recognition model path correctly and set index_method to Flat; HNSW32 and IVF may have bugs in this version, so use them with caution.
# If you use your own recognition model, change the inference model directory accordingly
python python/build_gallery.py -c configs/inference_drink.yaml -o Global.rec_inference_model_dir=general_PPLCNet_x2_5_lite_v1.0_infer -o IndexProcess.index_method=Flat
# Enter the lite_shitu directory
cd lite_shitu
mv ../drink_dataset_v1.0 .
```
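For intuition, building a `Flat` index with `faiss-cpu==1.5.3` boils down to the sketch below; the feature matrix and file name are placeholders, and the actual gallery construction is handled by `build_gallery.py`:

```python
import faiss
import numpy as np

# Placeholder gallery: 1000 L2-normalized 512-d feature vectors.
feats = np.random.rand(1000, 512).astype("float32")
feats /= np.linalg.norm(feats, axis=1, keepdims=True)

index = faiss.IndexFlatIP(feats.shape[1])  # Flat index with inner-product metric
index.add(feats)                           # add the gallery vectors
faiss.write_index(index, "vector.index")   # serialize to disk (placeholder file name)
```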
### 2.3 Convert the yaml file into a json file

```shell
# To test a single image
python generate_json_config.py --det_model_path ppshitu_lite_models_v1.2/mainbody_PPLCNet_x2_5_640_v1.2_lite.nb --rec_model_path ppshitu_lite_models_v1.2/general_PPLCNet_x2_5_lite_v1.2_infer.nb --img_path images/demo.jpeg
# or
# To test multiple images
python generate_json_config.py --det_model_path ppshitu_lite_models_v1.2/mainbody_PPLCNet_x2_5_640_v1.2_lite.nb --rec_model_path ppshitu_lite_models_v1.2/general_PPLCNet_x2_5_lite_v1.2_infer.nb --img_dir images
# After execution, the shitu_config.json configuration file will be generated under lite_shitu
```
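Conceptually, the conversion just loads the yaml configuration and dumps the fields the C++ demo needs into a json file. A minimal sketch (not the actual `generate_json_config.py`, part of which appears further below) might look like:

```python
import json
import yaml

# Load the inference yaml config and copy a field into the json config
# (key names follow the generate_json_config.py excerpt shown below).
with open("../configs/inference_drink.yaml") as f:
    config_yaml = yaml.safe_load(f)

config_json = {"Global": {"label_list": config_yaml["Global"]["label_list"]}}
with open("shitu_config.json", "w") as f:
    json.dump(config_json, f, indent=4, ensure_ascii=False)
```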
### 2.4 Convert the index dictionary

Since the Python retrieval library dictionary is serialized with `pickle`, it is inconvenient for C++ to read directly, so it needs to be converted.

```shell
# Convert id_map.pkl into id_map.txt
python transform_id_map.py -c ../configs/inference_drink.yaml
```
@@ -208,7 +240,7 @@ python transform_id_map.py -c ../configs/inference_drink.yaml
After a successful conversion, `id_map.txt` will be generated in the `IndexProcess.index_dir` directory.
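The conversion itself is straightforward; a minimal sketch (not the actual `transform_id_map.py`, whose input and output paths come from the yaml config) could be:

```python
import pickle

# Illustrative conversion: load the pickled id_map and write "id label" lines
# that C++ can parse (file names here are placeholders).
with open("id_map.pkl", "rb") as f:
    id_map = pickle.load(f)

with open("id_map.txt", "w") as f:
    for idx, label in id_map.items():
        f.write(f"{idx} {label}\n")
```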
### 2.5 Joint debugging with the phone

Some preparation is needed first.

1. Prepare an arm8 Android phone. If the compiled prediction library is armv7, an arm7 phone is needed instead, and `ARM_ABI=arm7` must be changed in the Makefile.
@@ -308,8 +340,9 @@ chmod 777 pp_shitu
The running result looks like this:

```
images/demo.jpeg:
result0: bbox[344, 98, 527, 593], score: 0.811656, label: 红牛-强化型
result1: bbox[0, 0, 600, 600], score: 0.729664, label: 红牛-强化型
```
## FAQ
@@ -95,7 +95,7 @@ def main():

    config_json["Global"]["det_model_path"] = args.det_model_path
    config_json["Global"]["rec_model_path"] = args.rec_model_path
    config_json["Global"]["rec_label_path"] = args.rec_label_path
    config_json["Global"]["label_list"] = config_yaml["Global"]["label_list"]
    config_json["Global"]["rec_nms_thresold"] = config_yaml["Global"][
        "rec_nms_thresold"]
    config_json["Global"]["max_det_results"] = config_yaml["Global"][
@@ -29,16 +29,16 @@

namespace PPShiTu {

void load_jsonf(std::string jsonfile, Json::Value &jsondata);

// Inference model configuration parser
class ConfigPaser {
public:
  ConfigPaser() {}

  ~ConfigPaser() {}

  bool load_config(const Json::Value &config) {
    // Get model arch : YOLO, SSD, RetinaNet, RCNN, Face
    if (config["Global"].isMember("det_arch")) {
@@ -18,6 +18,7 @@

#include <arm_neon.h>
#include <chrono>
#include <fstream>
#include <include/preprocess_op.h>
#include <iostream>
#include <math.h>
#include <opencv2/opencv.hpp>
@@ -48,10 +49,6 @@ public:

        config_file["Global"]["rec_model_path"].as<std::string>());
    this->predictor = CreatePaddlePredictor<MobileConfig>(config);

    SetPreProcessParam(config_file["RecPreProcess"]["transform_ops"]);
    printf("feature extract model create!\n");
  }
@@ -68,24 +65,29 @@ public:

          this->mean.emplace_back(tmp.as<float>());
        }
        for (auto tmp : item["std"]) {
          this->std.emplace_back(tmp.as<float>());
        }
        this->scale = item["scale"].as<double>();
      }
    }
  }

  void RunRecModel(const cv::Mat &img, double &cost_time,
                   std::vector<float> &feature);
  // void PostProcess(std::vector<float> &feature);
  void FeatureNorm(std::vector<float> &feature);

private:
  std::shared_ptr<PaddlePredictor> predictor;
  // std::vector<std::string> label_list;
  std::vector<float> mean = {0.485f, 0.456f, 0.406f};
  std::vector<float> std = {0.229f, 0.224f, 0.225f};
  double scale = 0.00392157;
  int size = 224;

  // pre-process
  Resize resize_op_;
  NormalizeImage normalize_op_;
  Permute permute_op_;
};

} // namespace PPShiTu
@@ -16,22 +16,22 @@

#include <ctime>
#include <memory>
#include <stdlib.h>
#include <string>
#include <utility>
#include <vector>

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

#include "json/json.h"
#include "paddle_api.h" // NOLINT

#include "include/config_parser.h"
#include "include/picodet_postprocess.h"
#include "include/preprocess_op.h"
#include "include/utils.h"

using namespace paddle::lite_api; // NOLINT
@@ -41,53 +41,51 @@ namespace PPShiTu {

std::vector<int> GenerateColorMap(int num_class);

// Visualization Detection Result
cv::Mat VisualizeResult(const cv::Mat &img,
                        const std::vector<PPShiTu::ObjectResult> &results,
                        const std::vector<std::string> &lables,
                        const std::vector<int> &colormap, const bool is_rbox);

class ObjectDetector {
public:
  explicit ObjectDetector(const Json::Value &config,
                          const std::string &model_dir, int cpu_threads = 1,
                          const int batch_size = 1) {
    config_.load_config(config);
    printf("config created\n");
    preprocessor_.Init(config_.preprocess_info_);
    printf("before object detector\n");
    if (config["Global"]["det_model_path"].as<std::string>().empty()) {
      std::cout << "Please set [det_model_path] in config file" << std::endl;
      exit(-1);
    }
    LoadModel(config["Global"]["det_model_path"].as<std::string>(),
              cpu_threads);
    printf("create object detector\n");
  }

  // Load Paddle inference model
  void LoadModel(std::string model_file, int num_theads);

  // Run predictor
  void Predict(const std::vector<cv::Mat> &imgs, const int warmup = 0,
               const int repeats = 1,
               std::vector<PPShiTu::ObjectResult> *result = nullptr,
               std::vector<int> *bbox_num = nullptr,
               std::vector<double> *times = nullptr);

  // Get Model Label list
  const std::vector<std::string> &GetLabelList() const {
    return config_.label_list_;
  }

private:
  // Preprocess image and copy data to input buffer
  void Preprocess(const cv::Mat &image_mat);
  // Postprocess result
  void Postprocess(const std::vector<cv::Mat> mats,
                   std::vector<PPShiTu::ObjectResult> *result,
                   std::vector<int> bbox_num, bool is_rbox);

  std::shared_ptr<PaddlePredictor> predictor_;
  Preprocessor preprocessor_;

@@ -96,7 +94,6 @@ class ObjectDetector {

  std::vector<int> out_bbox_num_data_;
  float threshold_;
  ConfigPaser config_;
};

} // namespace PPShiTu
@@ -14,25 +14,23 @@

#pragma once

#include <ctime>
#include <memory>
#include <numeric>
#include <string>
#include <utility>
#include <vector>

#include "include/utils.h"

namespace PPShiTu {

void PicoDetPostProcess(std::vector<PPShiTu::ObjectResult> *results,
                        std::vector<const float *> outs,
                        std::vector<int> fpn_stride,
                        std::vector<float> im_shape,
                        std::vector<float> scale_factor,
                        float score_threshold = 0.3, float nms_threshold = 0.5,
                        int num_class = 80, int reg_max = 7);

} // namespace PPShiTu
@@ -21,16 +21,16 @@

#include <utility>
#include <vector>

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

#include "json/json.h"

namespace PPShiTu {

// Object for storing all preprocessed data
class ImageBlob {
public:
  // image width and height
  std::vector<float> im_shape_;
  // Buffer for image data after preprocessing

@@ -45,20 +45,20 @@ class ImageBlob {

// Abstraction of preprocessing operation class
class PreprocessOp {
public:
  virtual void Init(const Json::Value &item) = 0;
  virtual void Run(cv::Mat *im, ImageBlob *data) = 0;
};

class InitInfo : public PreprocessOp {
public:
  virtual void Init(const Json::Value &item) {}
  virtual void Run(cv::Mat *im, ImageBlob *data);
};

class NormalizeImage : public PreprocessOp {
public:
  virtual void Init(const Json::Value &item) {
    mean_.clear();
    scale_.clear();
    for (auto tmp : item["mean"]) {

@@ -70,9 +70,11 @@ class NormalizeImage : public PreprocessOp {

    is_scale_ = item["is_scale"].as<bool>();
  }

  virtual void Run(cv::Mat *im, ImageBlob *data);
  void Run_feature(cv::Mat *im, const std::vector<float> &mean,
                   const std::vector<float> &std, float scale);

private:
  // CHW or HWC
  std::vector<float> mean_;
  std::vector<float> scale_;

@@ -80,14 +82,15 @@ class NormalizeImage : public PreprocessOp {

};

class Permute : public PreprocessOp {
public:
  virtual void Init(const Json::Value &item) {}
  virtual void Run(cv::Mat *im, ImageBlob *data);
  void Run_feature(const cv::Mat *im, float *data);
};

class Resize : public PreprocessOp {
public:
  virtual void Init(const Json::Value &item) {
    interp_ = item["interp"].as<int>();
    // max_size_ = item["target_size"].as<int>();
    keep_ratio_ = item["keep_ratio"].as<bool>();

@@ -98,11 +101,13 @@ class Resize : public PreprocessOp {

  }

  // Compute best resize scale for x-dimension, y-dimension
  std::pair<float, float> GenerateScale(const cv::Mat &im);

  virtual void Run(cv::Mat *im, ImageBlob *data);
  void Run_feature(const cv::Mat &img, cv::Mat &resize_img, int max_size_len,
                   int size = 0);

private:
  int interp_;
  bool keep_ratio_;
  std::vector<int> target_size_;

@@ -111,46 +116,43 @@ class Resize : public PreprocessOp {

// Models with FPN need input shape % stride == 0
class PadStride : public PreprocessOp {
public:
  virtual void Init(const Json::Value &item) {
    stride_ = item["stride"].as<int>();
  }
  virtual void Run(cv::Mat *im, ImageBlob *data);

private:
  int stride_;
};

class TopDownEvalAffine : public PreprocessOp {
public:
  virtual void Init(const Json::Value &item) {
    trainsize_.clear();
    for (auto tmp : item["trainsize"]) {
      trainsize_.emplace_back(tmp.as<int>());
    }
  }
  virtual void Run(cv::Mat *im, ImageBlob *data);

private:
  int interp_ = 1;
  std::vector<int> trainsize_;
};

void CropImg(cv::Mat &img, cv::Mat &crop_img, std::vector<int> &area,
             std::vector<float> &center, std::vector<float> &scale,
             float expandratio = 0.15);

class Preprocessor {
public:
  void Init(const Json::Value &config_node) {
    // initialize image info at first
    ops_["InitInfo"] = std::make_shared<InitInfo>();
    for (const auto &item : config_node) {
      auto op_name = item["type"].as<std::string>();
      ops_[op_name] = CreateOp(op_name);

@@ -158,7 +160,7 @@ class Preprocessor {

    }
  }

  std::shared_ptr<PreprocessOp> CreateOp(const std::string &name) {
    if (name == "DetResize") {
      return std::make_shared<Resize>();
    } else if (name == "DetPermute") {

@@ -176,12 +178,12 @@ class Preprocessor {

    return nullptr;
  }

  void Run(cv::Mat *im, ImageBlob *data);

public:
  static const std::vector<std::string> RUN_ORDER;

private:
  std::unordered_map<std::string, std::shared_ptr<PreprocessOp>> ops_;
};
@@ -38,6 +38,23 @@ struct ObjectResult {

  std::vector<RESULT> rec_result;
};

void nms(std::vector<ObjectResult> &input_boxes, float nms_threshold,
         bool rec_nms = false);

template <typename T>
static inline bool SortScorePairDescend(const std::pair<float, T> &pair1,
                                        const std::pair<float, T> &pair2) {
  return pair1.first > pair2.first;
}

float RectOverlap(const ObjectResult &a, const ObjectResult &b);

inline void
GetMaxScoreIndex(const std::vector<ObjectResult> &det_result,
                 const float threshold,
                 std::vector<std::pair<float, int>> &score_index_vec);

void NMSBoxes(const std::vector<ObjectResult> det_result,
              const float score_threshold, const float nms_threshold,
              std::vector<int> &indices);

} // namespace PPShiTu
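These declarations describe a standard greedy NMS over recognition results (score filter, sort by score, IoU suppression). A small Python sketch of the same scheme, illustrative only and not the C++ implementation:

```python
def rect_overlap(a, b):
    # IoU of two [x1, y1, x2, y2] boxes, analogous to RectOverlap above.
    area_a = (a[2] - a[0] + 1) * (a[3] - a[1] + 1)
    area_b = (b[2] - b[0] + 1) * (b[3] - b[1] + 1)
    iw = max(min(a[2], b[2]) - max(a[0], b[0]) + 1, 0)
    ih = max(min(a[3], b[3]) - max(a[1], b[1]) + 1, 0)
    inter = iw * ih
    return inter / (area_a + area_b - inter)

def nms_boxes(boxes, scores, score_threshold, nms_threshold):
    # Greedy NMS: keep highest-scoring boxes whose IoU with every kept
    # box stays below nms_threshold (analogous to NMSBoxes above).
    order = sorted((i for i, s in enumerate(scores) if s > score_threshold),
                   key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(rect_overlap(boxes[i], boxes[k]) <= nms_threshold for k in keep):
            keep.append(i)
    return keep
```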
@@ -70,4 +70,4 @@ private:

  std::vector<faiss::Index::idx_t> I;
  SearchResult sr;
};

} // namespace PPShiTu
@@ -13,24 +13,29 @@

// limitations under the License.

#include "include/feature_extractor.h"
#include <cmath>
#include <numeric>

namespace PPShiTu {

void FeatureExtract::RunRecModel(const cv::Mat &img, double &cost_time,
                                 std::vector<float> &feature) {
  // Read img
  cv::Mat img_fp;
  this->resize_op_.Run_feature(img, img_fp, this->size, this->size);
  this->normalize_op_.Run_feature(&img_fp, this->mean, this->std, this->scale);
  std::vector<float> input(1 * 3 * img_fp.rows * img_fp.cols, 0.0f);
  this->permute_op_.Run_feature(&img_fp, input.data());

  // Prepare input data from image
  std::unique_ptr<Tensor> input_tensor(std::move(this->predictor->GetInput(0)));
  input_tensor->Resize({1, 3, this->size, this->size});
  auto *data0 = input_tensor->mutable_data<float>();
  // const float *dimg = reinterpret_cast<const float *>(img_fp.data);
  // NeonMeanScale(dimg, data0, img_fp.rows * img_fp.cols);
  for (int i = 0; i < input.size(); ++i) {
    data0[i] = input[i];
  }

  auto start = std::chrono::system_clock::now();
  // Run predictor

@@ -38,7 +43,7 @@ void FeatureExtract::RunRecModel(const cv::Mat &img, double &cost_time,

  // Get output and post process
  std::unique_ptr<const Tensor> output_tensor(
      std::move(this->predictor->GetOutput(0))); // only one output
  auto end = std::chrono::system_clock::now();
  auto duration =
      std::chrono::duration_cast<std::chrono::microseconds>(end - start);

@@ -46,7 +51,7 @@ void FeatureExtract::RunRecModel(const cv::Mat &img, double &cost_time,

                   std::chrono::microseconds::period::num /
                   std::chrono::microseconds::period::den;

  // do postprocess
  int output_size = 1;
  for (auto dim : output_tensor->shape()) {
    output_size *= dim;

@@ -54,63 +59,15 @@ void FeatureExtract::RunRecModel(const cv::Mat &img, double &cost_time,

  feature.resize(output_size);
  output_tensor->CopyToCpu(feature.data());

  // postprocess include sqrt or binarize.
  FeatureNorm(feature);
  return;
}

void FeatureExtract::FeatureNorm(std::vector<float> &feature) {
  float feature_sqrt = std::sqrt(std::inner_product(
      feature.begin(), feature.end(), feature.begin(), 0.0f));
  for (int i = 0; i < feature.size(); ++i)
    feature[i] /= feature_sqrt;
}

} // namespace PPShiTu
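As an aside, `FeatureNorm` above is plain L2 normalization; an equivalent NumPy sketch, for illustration only:

```python
import numpy as np

def feature_norm(feature):
    # Equivalent of FeatureExtract::FeatureNorm: divide by the L2 norm.
    feature = np.asarray(feature, dtype=np.float32)
    return feature / np.sqrt(np.inner(feature, feature))
```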
@@ -27,6 +27,7 @@

#include "include/feature_extractor.h"
#include "include/object_detector.h"
#include "include/preprocess_op.h"
#include "include/utils.h"
#include "include/vector_search.h"
#include "json/json.h"

@@ -158,6 +159,11 @@ int main(int argc, char **argv) {

              << " [image_dir]>" << std::endl;
    return -1;
  }

  float rec_nms_threshold = 0.05;
  if (RT_Config["Global"]["rec_nms_thresold"].isDouble())
    rec_nms_threshold = RT_Config["Global"]["rec_nms_thresold"].as<float>();

  // Load model and create an object detector
  PPShiTu::ObjectDetector det(
      RT_Config, RT_Config["Global"]["det_model_path"].as<std::string>(),

@@ -174,6 +180,7 @@ int main(int argc, char **argv) {

  // for vector search
  std::vector<float> feature;
  std::vector<float> features;
  std::vector<int> indices;
  double rec_time;
  if (!RT_Config["Global"]["infer_imgs"].as<std::string>().empty() ||
      !img_dir.empty()) {

@@ -208,9 +215,9 @@ int main(int argc, char **argv) {

          RT_Config["Global"]["max_det_results"].as<int>(), false, &det);

      // add the whole image for recognition to improve recall
      PPShiTu::ObjectResult result_whole_img = {
          {0, 0, srcimg.cols, srcimg.rows}, 0, 1.0};
      det_result.push_back(result_whole_img);

      // get rec result
      PPShiTu::SearchResult search_result;

@@ -225,10 +232,18 @@ int main(int argc, char **argv) {

      // do vector search
      search_result = searcher.Search(features.data(), det_result.size());
      for (int i = 0; i < det_result.size(); ++i) {
        det_result[i].confidence = search_result.D[search_result.return_k * i];
      }
      NMSBoxes(det_result, searcher.GetThreshold(), rec_nms_threshold,
               indices);
      PrintResult(img_path, det_result, searcher, search_result);

      batch_imgs.clear();
      det_result.clear();
      features.clear();
      feature.clear();
      indices.clear();
    }
  }
  return 0;
@@ -13,9 +13,9 @@

// limitations under the License.
#include <sstream>
// for setprecision
#include <chrono>
#include <iomanip>

#include "include/object_detector.h"

namespace PPShiTu {

@@ -30,10 +30,10 @@ void ObjectDetector::LoadModel(std::string model_file, int num_theads) {

}

// Visualization MaskDetector results
cv::Mat VisualizeResult(const cv::Mat &img,
                        const std::vector<PPShiTu::ObjectResult> &results,
                        const std::vector<std::string> &lables,
                        const std::vector<int> &colormap,
                        const bool is_rbox = false) {
  cv::Mat vis_img = img.clone();
  for (int i = 0; i < results.size(); ++i) {

@@ -75,24 +75,18 @@ cv::Mat VisualizeResult(const cv::Mat &img,

    origin.y = results[i].rect[1];

    // Configure text background
    cv::Rect text_back =
        cv::Rect(results[i].rect[0], results[i].rect[1] - text_size.height,
                 text_size.width, text_size.height);
    // Draw text, and background
    cv::rectangle(vis_img, text_back, roi_color, -1);
    cv::putText(vis_img, text, origin, font_face, font_scale,
                cv::Scalar(255, 255, 255), thickness);
  }
  return vis_img;
}

void ObjectDetector::Preprocess(const cv::Mat &ori_im) {
  // Clone the image : keep the original mat for postprocess
  cv::Mat im = ori_im.clone();
  // cv::cvtColor(im, im, cv::COLOR_BGR2RGB);

@@ -100,7 +94,7 @@ void ObjectDetector::Preprocess(const cv::Mat &ori_im) {

}

void ObjectDetector::Postprocess(const std::vector<cv::Mat> mats,
                                 std::vector<PPShiTu::ObjectResult> *result,
                                 std::vector<int> bbox_num,
                                 bool is_rbox = false) {
  result->clear();

@@ -156,12 +150,11 @@ void ObjectDetector::Postprocess(const std::vector<cv::Mat> mats,

    }
  }
}

void ObjectDetector::Predict(const std::vector<cv::Mat> &imgs, const int warmup,
                             const int repeats,
                             std::vector<PPShiTu::ObjectResult> *result,
                             std::vector<int> *bbox_num,
                             std::vector<double> *times) {
  auto preprocess_start = std::chrono::steady_clock::now();
  int batch_size = imgs.size();

@@ -180,29 +173,29 @@ void ObjectDetector::Predict(const std::vector<cv::Mat> &imgs, const int warmup,

    scale_factor_all[bs_idx * 2 + 1] = inputs_.scale_factor_[1];

    // TODO: reduce cost time
    in_data_all.insert(in_data_all.end(), inputs_.im_data_.begin(),
                       inputs_.im_data_.end());
  }
  auto preprocess_end = std::chrono::steady_clock::now();
  std::vector<const float *> output_data_list_;

  // Prepare input tensor
  auto input_names = predictor_->GetInputNames();
  for (const auto &tensor_name : input_names) {
    auto in_tensor = predictor_->GetInputByName(tensor_name);
    if (tensor_name == "image") {
      int rh = inputs_.in_net_shape_[0];
      int rw = inputs_.in_net_shape_[1];
      in_tensor->Resize({batch_size, 3, rh, rw});
      auto *inptr = in_tensor->mutable_data<float>();
      std::copy_n(in_data_all.data(), in_data_all.size(), inptr);
    } else if (tensor_name == "im_shape") {
      in_tensor->Resize({batch_size, 2});
      auto *inptr = in_tensor->mutable_data<float>();
      std::copy_n(im_shape_all.data(), im_shape_all.size(), inptr);
    } else if (tensor_name == "scale_factor") {
      in_tensor->Resize({batch_size, 2});
      auto *inptr = in_tensor->mutable_data<float>();
      std::copy_n(scale_factor_all.data(), scale_factor_all.size(), inptr);
    }
  }

@@ -216,7 +209,7 @@ void ObjectDetector::Predict(const std::vector<cv::Mat> &imgs, const int warmup,

  if (config_.arch_ == "PicoDet") {
    for (int j = 0; j < output_names.size(); j++) {
      auto output_tensor = predictor_->GetTensor(output_names[j]);
      const float *outptr = output_tensor->data<float>();
      std::vector<int64_t> output_shape = output_tensor->shape();
      output_data_list_.push_back(outptr);
    }

@@ -242,7 +235,7 @@ void ObjectDetector::Predict(const std::vector<cv::Mat> &imgs, const int warmup,

    if (config_.arch_ == "PicoDet") {
      for (int i = 0; i < output_names.size(); i++) {
        auto output_tensor = predictor_->GetTensor(output_names[i]);
        const float *outptr = output_tensor->data<float>();
        std::vector<int64_t> output_shape = output_tensor->shape();
        if (i == 0) {
          num_class = output_shape[2];

@@ -268,16 +261,15 @@ void ObjectDetector::Predict(const std::vector<cv::Mat> &imgs, const int warmup,

      std::cerr << "[WARNING] No object detected." << std::endl;
    }
    output_data_.resize(output_size);
    std::copy_n(output_tensor->mutable_data<float>(), output_size,
                output_data_.data());

    int out_bbox_num_size = 1;
    for (int j = 0; j < out_bbox_num_shape.size(); ++j) {
      out_bbox_num_size *= out_bbox_num_shape[j];
    }
    out_bbox_num_data_.resize(out_bbox_num_size);
    std::copy_n(out_bbox_num->mutable_data<int>(), out_bbox_num_size,
                out_bbox_num_data_.data());
  }
  // Postprocessing result

@@ -285,9 +277,8 @@ void ObjectDetector::Predict(const std::vector<cv::Mat> &imgs, const int warmup,

    result->clear();
    if (config_.arch_ == "PicoDet") {
      PPShiTu::PicoDetPostProcess(
          result, output_data_list_, config_.fpn_stride_, inputs_.im_shape_,
          inputs_.scale_factor_,
          config_.nms_info_["score_threshold"].as<float>(),
          config_.nms_info_["nms_threshold"].as<float>(), num_class, reg_max);
      bbox_num->push_back(result->size());
    } else {
@@ -47,9 +47,9 @@ int activation_function_softmax(const _Tp *src, _Tp *dst, int length) {

}

// PicoDet decode
PPShiTu::ObjectResult disPred2Bbox(const float *&dfl_det, int label,
                                   float score, int x, int y, int stride,
                                   std::vector<float> im_shape, int reg_max) {
  float ct_x = (x + 0.5) * stride;
  float ct_y = (y + 0.5) * stride;
  std::vector<float> dis_pred;
...@@ -20,7 +20,7 @@ ...@@ -20,7 +20,7 @@
namespace PPShiTu { namespace PPShiTu {
void InitInfo::Run(cv::Mat* im, ImageBlob* data) { void InitInfo::Run(cv::Mat *im, ImageBlob *data) {
data->im_shape_ = {static_cast<float>(im->rows), data->im_shape_ = {static_cast<float>(im->rows),
static_cast<float>(im->cols)}; static_cast<float>(im->cols)};
data->scale_factor_ = {1., 1.}; data->scale_factor_ = {1., 1.};
...@@ -28,10 +28,10 @@ void InitInfo::Run(cv::Mat* im, ImageBlob* data) { ...@@ -28,10 +28,10 @@ void InitInfo::Run(cv::Mat* im, ImageBlob* data) {
static_cast<float>(im->cols)}; static_cast<float>(im->cols)};
} }
void NormalizeImage::Run(cv::Mat* im, ImageBlob* data) { void NormalizeImage::Run(cv::Mat *im, ImageBlob *data) {
double e = 1.0; double e = 1.0;
if (is_scale_) { if (is_scale_) {
e *= 1./255.0; e *= 1. / 255.0;
} }
(*im).convertTo(*im, CV_32FC3, e); (*im).convertTo(*im, CV_32FC3, e);
for (int h = 0; h < im->rows; h++) { for (int h = 0; h < im->rows; h++) {
...@@ -46,35 +46,61 @@ void NormalizeImage::Run(cv::Mat* im, ImageBlob* data) { ...@@ -46,35 +46,61 @@ void NormalizeImage::Run(cv::Mat* im, ImageBlob* data) {
} }
} }
void Permute::Run(cv::Mat* im, ImageBlob* data) { void NormalizeImage::Run_feature(cv::Mat *im, const std::vector<float> &mean,
const std::vector<float> &std, float scale) {
(*im).convertTo(*im, CV_32FC3, scale);
for (int h = 0; h < im->rows; h++) {
for (int w = 0; w < im->cols; w++) {
im->at<cv::Vec3f>(h, w)[0] =
(im->at<cv::Vec3f>(h, w)[0] - mean[0]) / std[0];
im->at<cv::Vec3f>(h, w)[1] =
(im->at<cv::Vec3f>(h, w)[1] - mean[1]) / std[1];
im->at<cv::Vec3f>(h, w)[2] =
(im->at<cv::Vec3f>(h, w)[2] - mean[2]) / std[2];
}
}
}
void Permute::Run(cv::Mat *im, ImageBlob *data) {
(*im).convertTo(*im, CV_32FC3); (*im).convertTo(*im, CV_32FC3);
int rh = im->rows; int rh = im->rows;
int rw = im->cols; int rw = im->cols;
int rc = im->channels(); int rc = im->channels();
(data->im_data_).resize(rc * rh * rw); (data->im_data_).resize(rc * rh * rw);
float* base = (data->im_data_).data(); float *base = (data->im_data_).data();
for (int i = 0; i < rc; ++i) { for (int i = 0; i < rc; ++i) {
cv::extractChannel(*im, cv::Mat(rh, rw, CV_32FC1, base + i * rh * rw), i); cv::extractChannel(*im, cv::Mat(rh, rw, CV_32FC1, base + i * rh * rw), i);
} }
} }
void Resize::Run(cv::Mat* im, ImageBlob* data) { void Permute::Run_feature(const cv::Mat *im, float *data) {
int rh = im->rows;
int rw = im->cols;
int rc = im->channels();
for (int i = 0; i < rc; ++i) {
cv::extractChannel(*im, cv::Mat(rh, rw, CV_32FC1, data + i * rh * rw), i);
}
}
void Resize::Run(cv::Mat *im, ImageBlob *data) {
auto resize_scale = GenerateScale(*im); auto resize_scale = GenerateScale(*im);
data->im_shape_ = {static_cast<float>(im->cols * resize_scale.first), data->im_shape_ = {static_cast<float>(im->cols * resize_scale.first),
static_cast<float>(im->rows * resize_scale.second)}; static_cast<float>(im->rows * resize_scale.second)};
data->in_net_shape_ = {static_cast<float>(im->cols * resize_scale.first), data->in_net_shape_ = {static_cast<float>(im->cols * resize_scale.first),
static_cast<float>(im->rows * resize_scale.second)}; static_cast<float>(im->rows * resize_scale.second)};
cv::resize( cv::resize(*im, *im, cv::Size(), resize_scale.first, resize_scale.second,
*im, *im, cv::Size(), resize_scale.first, resize_scale.second, interp_); interp_);
data->im_shape_ = { data->im_shape_ = {
static_cast<float>(im->rows), static_cast<float>(im->cols), static_cast<float>(im->rows),
static_cast<float>(im->cols),
}; };
data->scale_factor_ = { data->scale_factor_ = {
resize_scale.second, resize_scale.first, resize_scale.second,
resize_scale.first,
}; };
} }
std::pair<float, float> Resize::GenerateScale(const cv::Mat& im) { std::pair<float, float> Resize::GenerateScale(const cv::Mat &im) {
std::pair<float, float> resize_scale; std::pair<float, float> resize_scale;
int origin_w = im.cols; int origin_w = im.cols;
int origin_h = im.rows; int origin_h = im.rows;
...@@ -101,7 +127,30 @@ std::pair<float, float> Resize::GenerateScale(const cv::Mat& im) { ...@@ -101,7 +127,30 @@ std::pair<float, float> Resize::GenerateScale(const cv::Mat& im) {
return resize_scale; return resize_scale;
} }
void PadStride::Run(cv::Mat* im, ImageBlob* data) { void Resize::Run_feature(const cv::Mat &img, cv::Mat &resize_img,
int resize_short_size, int size) {
int resize_h = 0;
int resize_w = 0;
if (size > 0) {
resize_h = size;
resize_w = size;
} else {
int w = img.cols;
int h = img.rows;
float ratio = 1.f;
if (h < w) {
ratio = float(resize_short_size) / float(h);
} else {
ratio = float(resize_short_size) / float(w);
}
resize_h = round(float(h) * ratio);
resize_w = round(float(w) * ratio);
}
cv::resize(img, resize_img, cv::Size(resize_w, resize_h));
}
void PadStride::Run(cv::Mat *im, ImageBlob *data) {
  if (stride_ <= 0) {
    return;
  }
@@ -110,42 +159,38 @@ void PadStride::Run(cv::Mat *im, ImageBlob *data) {
  int rw = im->cols;
  int nh = (rh / stride_) * stride_ + (rh % stride_ != 0) * stride_;
  int nw = (rw / stride_) * stride_ + (rw % stride_ != 0) * stride_;
  cv::copyMakeBorder(*im, *im, 0, nh - rh, 0, nw - rw, cv::BORDER_CONSTANT,
                     cv::Scalar(0));
  data->in_net_shape_ = {
      static_cast<float>(im->rows),
      static_cast<float>(im->cols),
  };
}
void TopDownEvalAffine::Run(cv::Mat *im, ImageBlob *data) {
  cv::resize(*im, *im, cv::Size(trainsize_[0], trainsize_[1]), 0, 0, interp_);
  // todo: Simd::ResizeBilinear();
  data->in_net_shape_ = {
      static_cast<float>(trainsize_[1]),
      static_cast<float>(trainsize_[0]),
  };
}
// Preprocessor op running order
const std::vector<std::string> Preprocessor::RUN_ORDER = {
    "InitInfo", "DetTopDownEvalAffine", "DetResize",
    "DetNormalizeImage", "DetPadStride", "DetPermute"};

void Preprocessor::Run(cv::Mat *im, ImageBlob *data) {
  for (const auto &name : RUN_ORDER) {
    if (ops_.find(name) != ops_.end()) {
      ops_[name]->Run(im, data);
    }
  }
}
void CropImg(cv::Mat &img, cv::Mat &crop_img, std::vector<int> &area,
             std::vector<float> &center, std::vector<float> &scale,
             float expandratio) {
  int crop_x1 = std::max(0, area[0]);
  int crop_y1 = std::max(0, area[1]);
...
@@ -54,4 +54,53 @@ void nms(std::vector<ObjectResult> &input_boxes, float nms_threshold,
  }
}

// IoU of two axis-aligned boxes; rect = [x1, y1, x2, y2] with inclusive
// pixel coordinates (hence the +1 in each extent).
// Example: a = [0,0,9,9] and b = [5,5,14,14] cover 100 px^2 each and overlap
// in a 5x5 = 25 px^2 patch, so RectOverlap = 25 / (100 + 100 - 25), about 0.143.
float RectOverlap(const ObjectResult &a, const ObjectResult &b) {
float Aa = (a.rect[2] - a.rect[0] + 1) * (a.rect[3] - a.rect[1] + 1);
float Ab = (b.rect[2] - b.rect[0] + 1) * (b.rect[3] - b.rect[1] + 1);
int iou_w = max(min(a.rect[2], b.rect[2]) - max(a.rect[0], b.rect[0]) + 1, 0);
int iou_h = max(min(a.rect[3], b.rect[3]) - max(a.rect[1], b.rect[1]) + 1, 0);
float Aab = iou_w * iou_h;
return Aab / (Aa + Ab - Aab);
}
inline void
GetMaxScoreIndex(const std::vector<ObjectResult> &det_result,
const float threshold,
std::vector<std::pair<float, int>> &score_index_vec) {
// Generate index score pairs.
for (size_t i = 0; i < det_result.size(); ++i) {
if (det_result[i].confidence > threshold) {
score_index_vec.push_back(std::make_pair(det_result[i].confidence, i));
}
}
// Sort the score pair according to the scores in descending order
std::stable_sort(score_index_vec.begin(), score_index_vec.end(),
SortScorePairDescend<int>);
}
// Greedy NMS: visit candidates in descending score order and keep a box only
// if its IoU with every previously kept box does not exceed nms_threshold.
void NMSBoxes(const std::vector<ObjectResult> det_result,
              const float score_threshold, const float nms_threshold,
              std::vector<int> &indices) {
// Get top_k scores (with corresponding indices).
std::vector<std::pair<float, int>> score_index_vec;
GetMaxScoreIndex(det_result, score_threshold, score_index_vec);
// Do nms
indices.clear();
for (size_t i = 0; i < score_index_vec.size(); ++i) {
const int idx = score_index_vec[i].second;
bool keep = true;
for (int k = 0; k < (int)indices.size() && keep; ++k) {
const int kept_idx = indices[k];
float overlap = RectOverlap(det_result[idx], det_result[kept_idx]);
keep = overlap <= nms_threshold;
}
if (keep)
indices.push_back(idx);
}
}
} // namespace PPShiTu
@@ -64,4 +64,4 @@ const SearchResult &VectorSearch::Search(float *feature, int query_number) {

const std::string &VectorSearch::GetLabel(faiss::Index::idx_t ind) {
  return this->id_map.at(ind);
}
} // namespace PPShiTu
# paddle2onnx: Model Conversion and Prediction

## Contents

- [1. Environment Preparation](#1-environment-preparation)
- [2. Model Conversion](#2-model-conversion)
- [3. ONNX Prediction](#3-onnx-prediction)


## 1. Environment Preparation

You need an environment for Paddle2ONNX model conversion and one for ONNX model prediction.

Paddle2ONNX converts models in the PaddlePaddle inference format to the ONNX format; operator export is currently stable for ONNX Opset 9~11.
For more details, see [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX#paddle2onnx).

- Install Paddle2ONNX

```shell
python3.7 -m pip install paddle2onnx
```

- Install the ONNX inference engine

```shell
python3.7 -m pip install onnxruntime
```

The following takes ResNet50_vd as an example and shows how to convert a PaddlePaddle inference model to an ONNX model and run prediction on the ONNX engine.

## 2. Model Conversion

- Download the ResNet50_vd inference model:

```shell
cd deploy
mkdir models && cd models
wget -nc https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/ResNet50_vd_infer.tar && tar xf ResNet50_vd_infer.tar
cd ..
```

- Convert the model

Use Paddle2ONNX to convert the Paddle static-graph model to the ONNX format:

```shell
paddle2onnx --model_dir=./models/ResNet50_vd_infer/ \
--model_filename=inference.pdmodel \
--params_filename=inference.pdiparams \
--save_file=./models/ResNet50_vd_infer/inference.onnx \
--opset_version=10 \
--enable_onnx_checker=True
```

After conversion, the generated ONNX model `inference.onnx` is saved under `./models/ResNet50_vd_infer/`.

## 3. ONNX Prediction

Run the following command:

```shell
python3.7 python/predict_cls.py \
-c configs/inference_cls.yaml \
-o Global.use_onnx=True \
-o Global.use_gpu=False \
-o Global.inference_model_dir=./models/ResNet50_vd_infer
```
# Paddle2ONNX: Converting To ONNX and Deployment
This section introduces how to convert the ResNet50_vd Paddle inference model to an ONNX model and run inference on the ONNX engine.
## 1. Installation
First, you need to install Paddle2ONNX and onnxruntime. Paddle2ONNX is a toolkit that converts Paddle inference models to ONNX models. Please refer to [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX/blob/develop/README_en.md) for more information.
- Paddle2ONNX Installation
```
python3.7 -m pip install paddle2onnx
```
- onnxruntime Installation
```
python3.7 -m pip install onnxruntime
```
## 2. Converting to ONNX
Download the Paddle Inference Model ResNet50_vd:
```
cd deploy
mkdir models && cd models
wget -nc https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/ResNet50_vd_infer.tar && tar xf ResNet50_vd_infer.tar
cd ..
```
Convert the model to ONNX:
```
paddle2onnx --model_dir=./models/ResNet50_vd_infer/ \
--model_filename=inference.pdmodel \
--params_filename=inference.pdiparams \
--save_file=./models/ResNet50_vd_infer/inference.onnx \
--opset_version=10 \
--enable_onnx_checker=True
```
After running the above command, the converted ONNX model file will be saved in `./models/ResNet50_vd_infer/`.
## 3. Deployment
To run inference with the ONNX model, use the command below.
```
python3.7 python/predict_cls.py \
-c configs/inference_cls.yaml \
-o Global.use_onnx=True \
-o Global.use_gpu=False \
-o Global.inference_model_dir=./models/ResNet50_vd_infer
```
The prediction results:
```
ILSVRC2012_val_00000010.jpeg: class id(s): [153, 204, 229, 332, 155], score(s): [0.69, 0.10, 0.02, 0.01, 0.01], label_name(s): ['Maltese dog, Maltese terrier, Maltese', 'Lhasa, Lhasa apso', 'Old English sheepdog, bobtail', 'Angora, Angora rabbit', 'Shih-Tzu']
```
# Base image:
# registry.baidubce.com/paddlepaddle/paddle:latest-dev-cuda10.1-cudnn7-gcc82

# Build the Serving server:
# the client and app wheels can use the release versions directly, but the
# server adds custom OPs and therefore has to be recompiled.
# The commands below assume ${PWD}=PaddleClas/deploy/paddleserving/
python_name=${1:-'python'}
apt-get update
apt install -y libcurl4-openssl-dev libbz2-dev
wget -nc https://paddle-serving.bj.bcebos.com/others/centos_ssl.tar
tar xf centos_ssl.tar
rm -rf centos_ssl.tar
mv libcrypto.so.1.0.2k /usr/lib/libcrypto.so.1.0.2k
mv libssl.so.1.0.2k /usr/lib/libssl.so.1.0.2k
ln -sf /usr/lib/libcrypto.so.1.0.2k /usr/lib/libcrypto.so.10
ln -sf /usr/lib/libssl.so.1.0.2k /usr/lib/libssl.so.10
ln -sf /usr/lib/libcrypto.so.10 /usr/lib/libcrypto.so
ln -sf /usr/lib/libssl.so.10 /usr/lib/libssl.so
# Install Go dependencies
rm -rf /usr/local/go
wget -qO- https://paddle-ci.cdn.bcebos.com/go1.17.2.linux-amd64.tar.gz | tar -xz -C /usr/local
export GOROOT=/usr/local/go
export GOPATH=/root/gopath
export PATH=$PATH:$GOPATH/bin:$GOROOT/bin
go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct
go install github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway@v1.15.2
go install github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger@v1.15.2
go install github.com/golang/protobuf/protoc-gen-go@v1.4.3
go install google.golang.org/grpc@v1.33.0
go env -w GO111MODULE=auto
# Download the OpenCV library
wget https://paddle-qa.bj.bcebos.com/PaddleServing/opencv3.tar.gz
tar -xvf opencv3.tar.gz
rm -rf opencv3.tar.gz
export OPENCV_DIR=$PWD/opencv3
# clone Serving
git clone https://github.com/PaddlePaddle/Serving.git -b develop --depth=1
cd Serving # PaddleClas/deploy/paddleserving/Serving
export Serving_repo_path=$PWD
git submodule update --init --recursive
${python_name} -m pip install -r python/requirements.txt
# set env
export PYTHON_INCLUDE_DIR=$(${python_name} -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())")
export PYTHON_LIBRARIES=$(${python_name} -c "import distutils.sysconfig as sysconfig; print(sysconfig.get_config_var('LIBDIR'))")
export PYTHON_EXECUTABLE=`which ${python_name}`
export CUDA_PATH='/usr/local/cuda'
export CUDNN_LIBRARY='/usr/local/cuda/lib64/'
export CUDA_CUDART_LIBRARY='/usr/local/cuda/lib64/'
export TENSORRT_LIBRARY_PATH='/usr/local/TensorRT6-cuda10.1-cudnn7/targets/x86_64-linux-gnu/'
# Copy the custom OP code
\cp ../preprocess/general_clas_op.* ${Serving_repo_path}/core/general-server/op
\cp ../preprocess/preprocess_op.* ${Serving_repo_path}/core/predictor/tools/pp_shitu_tools
# Build the server
mkdir server-build-gpu-opencv
cd server-build-gpu-opencv
cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR \
-DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
-DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
-DCUDA_TOOLKIT_ROOT_DIR=${CUDA_PATH} \
-DCUDNN_LIBRARY=${CUDNN_LIBRARY} \
-DCUDA_CUDART_LIBRARY=${CUDA_CUDART_LIBRARY} \
-DTENSORRT_ROOT=${TENSORRT_LIBRARY_PATH} \
-DOPENCV_DIR=${OPENCV_DIR} \
-DWITH_OPENCV=ON \
-DSERVER=ON \
-DWITH_GPU=ON ..
make -j32
${python_name} -m pip install python/dist/paddle*
# export SERVING_BIN
export SERVING_BIN=$PWD/core/general-server/serving
cd ../../
// Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "core/general-server/op/general_clas_op.h"
#include "core/predictor/framework/infer.h"
#include "core/predictor/framework/memory.h"
#include "core/predictor/framework/resource.h"
#include "core/util/include/timer.h"
#include <algorithm>
#include <iostream>
#include <memory>
#include <sstream>
namespace baidu {
namespace paddle_serving {
namespace serving {
using baidu::paddle_serving::Timer;
using baidu::paddle_serving::predictor::MempoolWrapper;
using baidu::paddle_serving::predictor::general_model::Tensor;
using baidu::paddle_serving::predictor::general_model::Response;
using baidu::paddle_serving::predictor::general_model::Request;
using baidu::paddle_serving::predictor::InferManager;
using baidu::paddle_serving::predictor::PaddleGeneralModelConfig;
int GeneralClasOp::inference() {
VLOG(2) << "Going to run inference";
const std::vector<std::string> pre_node_names = pre_names();
if (pre_node_names.size() != 1) {
LOG(ERROR) << "This op(" << op_name()
<< ") can only have one predecessor op, but received "
<< pre_node_names.size();
return -1;
}
const std::string pre_name = pre_node_names[0];
const GeneralBlob *input_blob = get_depend_argument<GeneralBlob>(pre_name);
if (!input_blob) {
LOG(ERROR) << "input_blob is nullptr,error";
return -1;
}
uint64_t log_id = input_blob->GetLogId();
VLOG(2) << "(logid=" << log_id << ") Get precedent op name: " << pre_name;
GeneralBlob *output_blob = mutable_data<GeneralBlob>();
if (!output_blob) {
LOG(ERROR) << "output_blob is nullptr,error";
return -1;
}
output_blob->SetLogId(log_id);
if (!input_blob) {
LOG(ERROR) << "(logid=" << log_id
<< ") Failed mutable depended argument, op:" << pre_name;
return -1;
}
const TensorVector *in = &input_blob->tensor_vector;
TensorVector *out = &output_blob->tensor_vector;
int batch_size = input_blob->_batch_size;
output_blob->_batch_size = batch_size;
VLOG(2) << "(logid=" << log_id << ") infer batch size: " << batch_size;
Timer timeline;
int64_t start = timeline.TimeStampUS();
timeline.Start();
// only support string type
char *total_input_ptr = static_cast<char *>(in->at(0).data.data());
std::string base64str = total_input_ptr;
cv::Mat img = Base2Mat(base64str);
  // BGR -> RGB (cv::imdecode returns BGR, the model expects RGB)
cv::cvtColor(img, img, cv::COLOR_BGR2RGB);
// Resize
cv::Mat resize_img;
resize_op_.Run(img, resize_img, resize_short_size_);
// CenterCrop
crop_op_.Run(resize_img, crop_size_);
// Normalize
normalize_op_.Run(&resize_img, mean_, scale_, is_scale_);
// Permute
std::vector<float> input(1 * 3 * resize_img.rows * resize_img.cols, 0.0f);
permute_op_.Run(&resize_img, input.data());
TensorVector *real_in = new TensorVector();
if (!real_in) {
LOG(ERROR) << "real_in is nullptr,error";
return -1;
}
std::vector<int> input_shape;
int in_num = 0;
void *databuf_data = NULL;
char *databuf_char = NULL;
size_t databuf_size = 0;
input_shape = {1, 3, resize_img.rows, resize_img.cols};
in_num = std::accumulate(input_shape.begin(), input_shape.end(), 1,
std::multiplies<int>());
databuf_size = in_num * sizeof(float);
databuf_data = MempoolWrapper::instance().malloc(databuf_size);
if (!databuf_data) {
LOG(ERROR) << "Malloc failed, size: " << databuf_size;
return -1;
}
memcpy(databuf_data, input.data(), databuf_size);
databuf_char = reinterpret_cast<char *>(databuf_data);
paddle::PaddleBuf paddleBuf(databuf_char, databuf_size);
paddle::PaddleTensor tensor_in;
tensor_in.name = in->at(0).name;
tensor_in.dtype = paddle::PaddleDType::FLOAT32;
tensor_in.shape = {1, 3, resize_img.rows, resize_img.cols};
tensor_in.lod = in->at(0).lod;
tensor_in.data = paddleBuf;
real_in->push_back(tensor_in);
if (InferManager::instance().infer(engine_name().c_str(), real_in, out,
batch_size)) {
LOG(ERROR) << "(logid=" << log_id
<< ") Failed do infer in fluid model: " << engine_name().c_str();
return -1;
}
int64_t end = timeline.TimeStampUS();
CopyBlobInfo(input_blob, output_blob);
AddBlobInfo(output_blob, start);
AddBlobInfo(output_blob, end);
return 0;
}
cv::Mat GeneralClasOp::Base2Mat(std::string &base64_data) {
cv::Mat img;
std::string s_mat;
s_mat = base64Decode(base64_data.data(), base64_data.size());
std::vector<char> base64_img(s_mat.begin(), s_mat.end());
img = cv::imdecode(base64_img, cv::IMREAD_COLOR); // CV_LOAD_IMAGE_COLOR
return img;
}
std::string GeneralClasOp::base64Decode(const char *Data, int DataByte) {
const char DecodeTable[] = {
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
62, // '+'
0, 0, 0,
63, // '/'
52, 53, 54, 55, 56, 57, 58, 59, 60, 61, // '0'-'9'
0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, // 'A'-'Z'
0, 0, 0, 0, 0, 0, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36,
37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, // 'a'-'z'
};
std::string strDecode;
int nValue;
int i = 0;
while (i < DataByte) {
if (*Data != '\r' && *Data != '\n') {
nValue = DecodeTable[*Data++] << 18;
nValue += DecodeTable[*Data++] << 12;
strDecode += (nValue & 0x00FF0000) >> 16;
if (*Data != '=') {
nValue += DecodeTable[*Data++] << 6;
strDecode += (nValue & 0x0000FF00) >> 8;
if (*Data != '=') {
nValue += DecodeTable[*Data++];
strDecode += nValue & 0x000000FF;
}
}
i += 4;
  } else // skip CR / LF characters
{
Data++;
i++;
}
}
return strDecode;
}
DEFINE_OP(GeneralClasOp);
} // namespace serving
} // namespace paddle_serving
} // namespace baidu
// Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#pragma once
#include "core/general-server/general_model_service.pb.h"
#include "core/general-server/op/general_infer_helper.h"
#include "core/predictor/tools/pp_shitu_tools/preprocess_op.h"
#include "paddle_inference_api.h" // NOLINT
#include <string>
#include <vector>
#include "opencv2/core.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/imgproc.hpp"
#include <chrono>
#include <iomanip>
#include <iostream>
#include <ostream>
#include <vector>
#include <cstring>
#include <fstream>
#include <numeric>
namespace baidu {
namespace paddle_serving {
namespace serving {
class GeneralClasOp
: public baidu::paddle_serving::predictor::OpWithChannel<GeneralBlob> {
public:
typedef std::vector<paddle::PaddleTensor> TensorVector;
DECLARE_OP(GeneralClasOp);
int inference();
private:
  // clas preprocess: ImageNet mean/std. scale_ holds the per-channel std
  // divisors; is_scale_ first maps pixel values to [0, 1] (divide by 255).
  std::vector<float> mean_ = {0.485f, 0.456f, 0.406f};
  std::vector<float> scale_ = {0.229f, 0.224f, 0.225f};
  bool is_scale_ = true;
int resize_short_size_ = 256;
int crop_size_ = 224;
PaddleClas::ResizeImg resize_op_;
PaddleClas::Normalize normalize_op_;
PaddleClas::Permute permute_op_;
PaddleClas::CenterCropImg crop_op_;
// read pics
cv::Mat Base2Mat(std::string &base64_data);
std::string base64Decode(const char *Data, int DataByte);
};
} // namespace serving
} // namespace paddle_serving
} // namespace baidu
// Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "opencv2/core.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/imgproc.hpp"
#include "paddle_api.h"
#include "paddle_inference_api.h"
#include <chrono>
#include <iomanip>
#include <iostream>
#include <ostream>
#include <vector>
#include <cstring>
#include <fstream>
#include <math.h>
#include <numeric>
#include "preprocess_op.h"
namespace Feature {
void Permute::Run(const cv::Mat *im, float *data) {
int rh = im->rows;
int rw = im->cols;
int rc = im->channels();
for (int i = 0; i < rc; ++i) {
cv::extractChannel(*im, cv::Mat(rh, rw, CV_32FC1, data + i * rh * rw), i);
}
}
void Normalize::Run(cv::Mat *im, const std::vector<float> &mean,
const std::vector<float> &std, float scale) {
(*im).convertTo(*im, CV_32FC3, scale);
for (int h = 0; h < im->rows; h++) {
for (int w = 0; w < im->cols; w++) {
im->at<cv::Vec3f>(h, w)[0] =
(im->at<cv::Vec3f>(h, w)[0] - mean[0]) / std[0];
im->at<cv::Vec3f>(h, w)[1] =
(im->at<cv::Vec3f>(h, w)[1] - mean[1]) / std[1];
im->at<cv::Vec3f>(h, w)[2] =
(im->at<cv::Vec3f>(h, w)[2] - mean[2]) / std[2];
}
}
}
void CenterCropImg::Run(cv::Mat &img, const int crop_size) {
int resize_w = img.cols;
int resize_h = img.rows;
int w_start = int((resize_w - crop_size) / 2);
int h_start = int((resize_h - crop_size) / 2);
cv::Rect rect(w_start, h_start, crop_size, crop_size);
img = img(rect);
}
void ResizeImg::Run(const cv::Mat &img, cv::Mat &resize_img,
int resize_short_size, int size) {
int resize_h = 0;
int resize_w = 0;
if (size > 0) {
resize_h = size;
resize_w = size;
} else {
int w = img.cols;
int h = img.rows;
float ratio = 1.f;
if (h < w) {
ratio = float(resize_short_size) / float(h);
} else {
ratio = float(resize_short_size) / float(w);
}
resize_h = round(float(h) * ratio);
resize_w = round(float(w) * ratio);
}
cv::resize(img, resize_img, cv::Size(resize_w, resize_h));
}
} // namespace Feature
namespace PaddleClas {
void Permute::Run(const cv::Mat *im, float *data) {
int rh = im->rows;
int rw = im->cols;
int rc = im->channels();
for (int i = 0; i < rc; ++i) {
cv::extractChannel(*im, cv::Mat(rh, rw, CV_32FC1, data + i * rh * rw), i);
}
}
void Normalize::Run(cv::Mat *im, const std::vector<float> &mean,
const std::vector<float> &scale, const bool is_scale) {
double e = 1.0;
if (is_scale) {
e /= 255.0;
}
(*im).convertTo(*im, CV_32FC3, e);
for (int h = 0; h < im->rows; h++) {
for (int w = 0; w < im->cols; w++) {
im->at<cv::Vec3f>(h, w)[0] =
(im->at<cv::Vec3f>(h, w)[0] - mean[0]) / scale[0];
im->at<cv::Vec3f>(h, w)[1] =
(im->at<cv::Vec3f>(h, w)[1] - mean[1]) / scale[1];
im->at<cv::Vec3f>(h, w)[2] =
(im->at<cv::Vec3f>(h, w)[2] - mean[2]) / scale[2];
}
}
}
void CenterCropImg::Run(cv::Mat &img, const int crop_size) {
int resize_w = img.cols;
int resize_h = img.rows;
int w_start = int((resize_w - crop_size) / 2);
int h_start = int((resize_h - crop_size) / 2);
cv::Rect rect(w_start, h_start, crop_size, crop_size);
img = img(rect);
}
void ResizeImg::Run(const cv::Mat &img, cv::Mat &resize_img,
int resize_short_size) {
int w = img.cols;
int h = img.rows;
float ratio = 1.f;
if (h < w) {
ratio = float(resize_short_size) / float(h);
} else {
ratio = float(resize_short_size) / float(w);
}
int resize_h = round(float(h) * ratio);
int resize_w = round(float(w) * ratio);
cv::resize(img, resize_img, cv::Size(resize_w, resize_h));
}
} // namespace PaddleClas
// Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#pragma once
#include "opencv2/core.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/imgproc.hpp"
#include <chrono>
#include <iomanip>
#include <iostream>
#include <ostream>
#include <vector>
#include <cstring>
#include <fstream>
#include <numeric>
namespace Feature {
class Normalize {
public:
virtual void Run(cv::Mat *im, const std::vector<float> &mean,
const std::vector<float> &std, float scale);
};
// RGB -> CHW
class Permute {
public:
virtual void Run(const cv::Mat *im, float *data);
};
class CenterCropImg {
public:
virtual void Run(cv::Mat &im, const int crop_size = 224);
};
class ResizeImg {
public:
virtual void Run(const cv::Mat &img, cv::Mat &resize_img, int max_size_len,
int size = 0);
};
} // namespace Feature
namespace PaddleClas {
class Normalize {
public:
virtual void Run(cv::Mat *im, const std::vector<float> &mean,
const std::vector<float> &scale, const bool is_scale = true);
};
// RGB -> CHW
class Permute {
public:
virtual void Run(const cv::Mat *im, float *data);
};
class CenterCropImg {
public:
virtual void Run(cv::Mat &im, const int crop_size = 224);
};
class ResizeImg {
public:
virtual void Run(const cv::Mat &img, cv::Mat &resize_img, int max_size_len);
};
} // namespace PaddleClas
../../docs/zh_CN/inference_deployment/classification_serving_deploy.md
../../docs/en/inference_deployment/classification_serving_deploy_en.md
feed_var {
name: "x"
alias_name: "x"
is_lod_tensor: false
feed_type: 1
shape: 3
shape: 224
shape: 224
}
feed_var {
name: "boxes"
alias_name: "boxes"
is_lod_tensor: false
feed_type: 1
shape: 6
}
fetch_var {
name: "save_infer_model/scale_0.tmp_1"
alias_name: "features"
is_lod_tensor: false
fetch_type: 1
shape: 512
}
fetch_var {
name: "boxes"
alias_name: "boxes"
is_lod_tensor: false
fetch_type: 1
shape: 6
}
feed_var {
name: "x"
alias_name: "x"
is_lod_tensor: false
feed_type: 1
shape: 3
shape: 224
shape: 224
}
feed_var {
name: "boxes"
alias_name: "boxes"
is_lod_tensor: false
feed_type: 1
shape: 6
}
fetch_var {
name: "save_infer_model/scale_0.tmp_1"
alias_name: "features"
is_lod_tensor: false
fetch_type: 1
shape: 512
}
fetch_var {
name: "boxes"
alias_name: "boxes"
is_lod_tensor: false
fetch_type: 1
shape: 6
}
feed_var {
name: "im_shape"
alias_name: "im_shape"
is_lod_tensor: false
feed_type: 1
shape: 2
}
feed_var {
name: "image"
alias_name: "image"
is_lod_tensor: false
feed_type: 7
shape: -1
shape: -1
shape: 3
}
fetch_var {
name: "save_infer_model/scale_0.tmp_1"
alias_name: "save_infer_model/scale_0.tmp_1"
is_lod_tensor: true
fetch_type: 1
shape: -1
}
fetch_var {
name: "save_infer_model/scale_1.tmp_1"
alias_name: "save_infer_model/scale_1.tmp_1"
is_lod_tensor: false
fetch_type: 2
}
feed_var {
name: "im_shape"
alias_name: "im_shape"
is_lod_tensor: false
feed_type: 1
shape: 2
}
feed_var {
name: "image"
alias_name: "image"
is_lod_tensor: false
feed_type: 7
shape: -1
shape: -1
shape: 3
}
fetch_var {
name: "save_infer_model/scale_0.tmp_1"
alias_name: "save_infer_model/scale_0.tmp_1"
is_lod_tensor: true
fetch_type: 1
shape: -1
}
fetch_var {
name: "save_infer_model/scale_1.tmp_1"
alias_name: "save_infer_model/scale_1.tmp_1"
is_lod_tensor: false
fetch_type: 2
}
../../../docs/zh_CN/inference_deployment/recognition_serving_deploy.md
../../../docs/en/inference_deployment/recognition_serving_deploy_en.md
gpu_id=$1

# PP-ShiTu CPP serving script
if [[ -n "${gpu_id}" ]]; then
    nohup python3.7 -m paddle_serving_server.serve \
    --model ../../models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving ../../models/general_PPLCNet_x2_5_lite_v1.0_serving \
    --op GeneralPicodetOp GeneralFeatureExtractOp \
    --port 9400 --gpu_id="${gpu_id}" > log_PPShiTu.txt 2>&1 &
else
    nohup python3.7 -m paddle_serving_server.serve \
    --model ../../models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving ../../models/general_PPLCNet_x2_5_lite_v1.0_serving \
    --op GeneralPicodetOp GeneralFeatureExtractOp \
    --port 9400 > log_PPShiTu.txt 2>&1 &
fi
@@ -12,7 +12,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.

import numpy as np
from paddle_serving_client import Client
@@ -22,101 +21,14 @@ import faiss
import os
import pickle

rec_nms_thresold = 0.05
rec_score_thres = 0.5
feature_normalize = True
return_k = 1
index_dir = "../../drink_dataset_v1.0/index"


def init_index(index_dir):
    assert os.path.exists(os.path.join(
        index_dir, "vector.index")), "vector.index not found ..."
    assert os.path.exists(os.path.join(
@@ -128,42 +40,11 @@
        id_map = pickle.load(fd)
    return searcher, id_map


# get boxes
def nms_to_rec_results(results, thresh=0.1):
    filtered_results = []
    x1 = np.array([r["bbox"][0] for r in results]).astype("float32")
    y1 = np.array([r["bbox"][1] for r in results]).astype("float32")
    x2 = np.array([r["bbox"][2] for r in results]).astype("float32")
@@ -183,20 +64,58 @@
        h = np.maximum(0.0, yy2 - yy1 + 1)
        inter = w * h
        ovr = inter / (areas[i] + areas[order[1:]] - inter)
        inds = np.where(ovr <= thresh)[0]
        order = order[inds + 1]
        filtered_results.append(results[i])
    return filtered_results


def postprocess(fetch_dict, feature_normalize, det_boxes, searcher, id_map,
                return_k, rec_score_thres, rec_nms_thresold):
    batch_features = fetch_dict["features"]

    # do feature norm
    if feature_normalize:
        feas_norm = np.sqrt(
            np.sum(np.square(batch_features), axis=1, keepdims=True))
        batch_features = np.divide(batch_features, feas_norm)

    scores, docs = searcher.search(batch_features, return_k)

    results = []
    for i in range(scores.shape[0]):
        pred = {}
        if scores[i][0] >= rec_score_thres:
            pred["bbox"] = [int(x) for x in det_boxes[i, 2:]]
            pred["rec_docs"] = id_map[docs[i][0]].split()[1]
            pred["rec_scores"] = scores[i][0]
            results.append(pred)

    # do nms
    results = nms_to_rec_results(results, rec_nms_thresold)
    return results


if __name__ == "__main__":
    client = Client()
    client.load_client_config([
        "../../models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client",
        "../../models/general_PPLCNet_x2_5_lite_v1.0_client"
    ])
    client.connect(['127.0.0.1:9400'])

    im = cv2.imread("../../drink_dataset_v1.0/test_images/001.jpeg")
    im_shape = np.array(im.shape[:2]).reshape(-1)
    fetch_map = client.predict(
        feed={"image": im,
              "im_shape": im_shape},
        fetch=["features", "boxes"],
        batch=False)

    # add retrieval procedure
    det_boxes = fetch_map["boxes"]
    searcher, id_map = init_index(index_dir)
    results = postprocess(fetch_map, feature_normalize, det_boxes, searcher,
                          id_map, return_k, rec_score_thres, rec_nms_thresold)
    print(results)
gpu_id=$1

# ResNet50_vd CPP serving script
# (pass --gpu_id only when a GPU id is given; otherwise serve on CPU)
if [[ -n "${gpu_id}" ]]; then
    nohup python3.7 -m paddle_serving_server.serve \
    --model ./ResNet50_vd_serving \
    --op GeneralClasOp \
    --port 9292 --gpu_id="${gpu_id}" &
else
    nohup python3.7 -m paddle_serving_server.serve \
    --model ./ResNet50_vd_serving \
    --op GeneralClasOp \
    --port 9292 &
fi
@@ -12,16 +12,20 @@
# See the License for the specific language governing permissions and
# limitations under the License.

import base64
import time

from paddle_serving_client import Client


def bytes_to_base64(image: bytes) -> str:
    """encode bytes into base64 string
    """
    return base64.b64encode(image).decode('utf8')


client = Client()
client.load_client_config("./ResNet50_client/serving_client_conf.prototxt")
client.connect(["127.0.0.1:9292"])

label_dict = {}
@@ -31,22 +35,17 @@ with open("imagenet.label") as fin:
        label_dict[label_idx] = line.strip()
        label_idx += 1

image_file = "./daisy.jpg"
for i in range(1):
    start = time.time()
    with open(image_file, 'rb') as img_file:
        image_data = img_file.read()
    image = bytes_to_base64(image_data)
    fetch_dict = client.predict(
        feed={"inputs": image}, fetch=["prediction"], batch=False)
    prob = max(fetch_dict["prediction"][0])
    label = label_dict[fetch_dict["prediction"][0].tolist().index(
        prob)].strip().replace(",", "")
    print("prediction: {}, probability: {}".format(label, prob))
    end = time.time()
    print(end - start)
@@ -53,6 +53,34 @@ class PostProcesser(object):
        return rtn

class ThreshOutput(object):
def __init__(self, threshold, label_0="0", label_1="1"):
self.threshold = threshold
self.label_0 = label_0
self.label_1 = label_1
def __call__(self, x, file_names=None):
y = []
for idx, probs in enumerate(x):
score = probs[1]
if score < self.threshold:
result = {
"class_ids": [0],
"scores": [1 - score],
"label_names": [self.label_0]
}
else:
result = {
"class_ids": [1],
"scores": [score],
"label_names": [self.label_1]
}
if file_names is not None:
result["file_name"] = file_names[idx]
y.append(result)
return y
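# Worked example for ThreshOutput (illustrative numbers, not from the repo):
# with ThreshOutput(threshold=0.5, label_0="neg", label_1="pos"), an input row
# [0.8, 0.2] has score = probs[1] = 0.2 < 0.5, so it yields
#   {"class_ids": [0], "scores": [0.8], "label_names": ["neg"]}
# while [0.1, 0.9] yields
#   {"class_ids": [1], "scores": [0.9], "label_names": ["pos"]}.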
class Topk(object):
    def __init__(self, topk=1, class_id_map_file=None):
        assert isinstance(topk, (int, ))
@@ -159,3 +187,136 @@ class Binarize(object):
        byte[:, i:i + 1] = np.dot(x[:, i * 8:(i + 1) * 8], self.unit)
        return byte

class PersonAttribute(object):
def __init__(self,
threshold=0.5,
glasses_threshold=0.3,
hold_threshold=0.6):
self.threshold = threshold
self.glasses_threshold = glasses_threshold
self.hold_threshold = hold_threshold
def __call__(self, batch_preds, file_names=None):
# postprocess output of predictor
age_list = ['AgeLess18', 'Age18-60', 'AgeOver60']
direct_list = ['Front', 'Side', 'Back']
bag_list = ['HandBag', 'ShoulderBag', 'Backpack']
upper_list = ['UpperStride', 'UpperLogo', 'UpperPlaid', 'UpperSplice']
lower_list = [
'LowerStripe', 'LowerPattern', 'LongCoat', 'Trousers', 'Shorts',
'Skirt&Dress'
]
batch_res = []
for res in batch_preds:
res = res.tolist()
label_res = []
# gender
gender = 'Female' if res[22] > self.threshold else 'Male'
label_res.append(gender)
# age
age = age_list[np.argmax(res[19:22])]
label_res.append(age)
# direction
direction = direct_list[np.argmax(res[23:])]
label_res.append(direction)
# glasses
glasses = 'Glasses: '
if res[1] > self.glasses_threshold:
glasses += 'True'
else:
glasses += 'False'
label_res.append(glasses)
# hat
hat = 'Hat: '
if res[0] > self.threshold:
hat += 'True'
else:
hat += 'False'
label_res.append(hat)
# hold obj
hold_obj = 'HoldObjectsInFront: '
if res[18] > self.hold_threshold:
hold_obj += 'True'
else:
hold_obj += 'False'
label_res.append(hold_obj)
# bag
bag = bag_list[np.argmax(res[15:18])]
bag_score = res[15 + np.argmax(res[15:18])]
bag_label = bag if bag_score > self.threshold else 'No bag'
label_res.append(bag_label)
# upper
upper_res = res[4:8]
upper_label = 'Upper:'
sleeve = 'LongSleeve' if res[3] > res[2] else 'ShortSleeve'
upper_label += ' {}'.format(sleeve)
for i, r in enumerate(upper_res):
if r > self.threshold:
upper_label += ' {}'.format(upper_list[i])
label_res.append(upper_label)
# lower
lower_res = res[8:14]
lower_label = 'Lower: '
has_lower = False
for i, l in enumerate(lower_res):
if l > self.threshold:
lower_label += ' {}'.format(lower_list[i])
has_lower = True
if not has_lower:
lower_label += ' {}'.format(lower_list[np.argmax(lower_res)])
label_res.append(lower_label)
# shoe
shoe = 'Boots' if res[14] > self.threshold else 'No boots'
label_res.append(shoe)
threshold_list = [0.5] * len(res)
threshold_list[1] = self.glasses_threshold
threshold_list[18] = self.hold_threshold
pred_res = (np.array(res) > np.array(threshold_list)
).astype(np.int8).tolist()
batch_res.append({"attributes": label_res, "output": pred_res})
return batch_res
class VehicleAttribute(object):
def __init__(self, color_threshold=0.5, type_threshold=0.5):
self.color_threshold = color_threshold
self.type_threshold = type_threshold
self.color_list = [
"yellow", "orange", "green", "gray", "red", "blue", "white",
"golden", "brown", "black"
]
self.type_list = [
"sedan", "suv", "van", "hatchback", "mpv", "pickup", "bus",
"truck", "estate"
]
def __call__(self, batch_preds, file_names=None):
# postprocess output of predictor
batch_res = []
for res in batch_preds:
res = res.tolist()
label_res = []
color_idx = np.argmax(res[:10])
type_idx = np.argmax(res[10:])
if res[color_idx] >= self.color_threshold:
color_info = f"Color: ({self.color_list[color_idx]}, prob: {res[color_idx]})"
else:
color_info = "Color unknown"
if res[type_idx + 10] >= self.type_threshold:
type_info = f"Type: ({self.type_list[type_idx]}, prob: {res[type_idx + 10]})"
else:
type_info = "Type unknown"
label_res = f"{color_info}, {type_info}"
threshold_list = [self.color_threshold
] * 10 + [self.type_threshold] * 9
pred_res = (np.array(res) > np.array(threshold_list)
).astype(np.int8).tolist()
batch_res.append({"attributes": label_res, "output": pred_res})
return batch_res
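# Illustrative smoke test for the attribute post-processor above; the 19-dim
# vector is fabricated (indices 0-9: color probabilities, 10-18: type
# probabilities, mirroring VehicleAttribute.__init__).
if __name__ == "__main__":
    demo = np.array([[0., 0., 0., 0., .9, 0., 0., 0., 0., 0.,
                      .8, 0., 0., 0., 0., 0., 0., 0., 0.]])
    print(VehicleAttribute()(demo))
    # expected: "Color: (red, prob: 0.9), Type: (sedan, prob: 0.8)"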
@@ -49,10 +49,15 @@ class ClsPredictor(Predictor):
            pid = os.getpid()
            size = config["PreProcess"]["transform_ops"][1]["CropImage"][
                "size"]
            if config["Global"].get("use_int8", False):
                precision = "int8"
            elif config["Global"].get("use_fp16", False):
                precision = "fp16"
            else:
                precision = "fp32"
            self.auto_logger = auto_log.AutoLogger(
                model_name=config["Global"].get("model_name", "cls"),
                model_precision=precision,
                batch_size=config["Global"].get("batch_size", 1),
                data_shape=[3, size, size],
                save_path=config["Global"].get("save_log_path",
@@ -133,12 +138,19 @@ def main(config):
            continue
        batch_results = cls_predictor.predict(batch_imgs)
        for number, result_dict in enumerate(batch_results):
            if "PersonAttribute" in config[
                    "PostProcess"] or "VehicleAttribute" in config[
                        "PostProcess"]:
                filename = batch_names[number]
                print("{}:\t {}".format(filename, result_dict))
            else:
                filename = batch_names[number]
                clas_ids = result_dict["class_ids"]
                scores_str = "[{}]".format(", ".join("{:.2f}".format(
                    r) for r in result_dict["scores"]))
                label_names = result_dict["label_names"]
                print(
                    "{}:\tclass id(s): {}, score(s): {}, label_name(s): {}".
                    format(filename, clas_ids, scores_str, label_names))
        batch_imgs = []
        batch_names = []
...
@@ -128,13 +128,10 @@ class DetPredictor(Predictor):
        results = []
        if reduce(lambda x, y: x * y, np_boxes.shape) < 6:
            print('[WARNING] No object detected.')
            results = np.array([])
        else:
            results = self.parse_det_results(
                np_boxes, self.config["Global"]["threshold"],
                self.config["Global"]["label_list"])
        return results
...
@@ -27,6 +27,7 @@ import cv2
import numpy as np
import importlib
from PIL import Image
from paddle.vision.transforms import ToTensor, Normalize

from python.det_preprocess import DetNormalizeImage, DetPadStride, DetPermute, DetResize
@@ -53,13 +54,14 @@ def create_operators(params):

class UnifiedResize(object):
    def __init__(self, interpolation=None, backend="cv2", return_numpy=True):
        _cv2_interp_from_str = {
            'nearest': cv2.INTER_NEAREST,
            'bilinear': cv2.INTER_LINEAR,
            'area': cv2.INTER_AREA,
            'bicubic': cv2.INTER_CUBIC,
            'lanczos': cv2.INTER_LANCZOS4,
            'random': (cv2.INTER_LINEAR, cv2.INTER_CUBIC)
        }
        _pil_interp_from_str = {
            'nearest': Image.NEAREST,
@@ -67,13 +69,26 @@ class UnifiedResize(object):
            'bicubic': Image.BICUBIC,
            'box': Image.BOX,
            'lanczos': Image.LANCZOS,
            'hamming': Image.HAMMING,
            'random': (Image.BILINEAR, Image.BICUBIC)
        }

        def _cv2_resize(src, size, resample):
            if isinstance(resample, tuple):
                resample = random.choice(resample)
            return cv2.resize(src, size, interpolation=resample)

        def _pil_resize(src, size, resample, return_numpy=True):
            if isinstance(resample, tuple):
                resample = random.choice(resample)
            if isinstance(src, np.ndarray):
                pil_img = Image.fromarray(src)
            else:
                pil_img = src
            pil_img = pil_img.resize(size, resample)
            if return_numpy:
                return np.asarray(pil_img)
            return pil_img

        if backend.lower() == "cv2":
            if isinstance(interpolation, str):
@@ -81,11 +96,12 @@ class UnifiedResize(object):
            # compatible with opencv < version 4.4.0
            elif interpolation is None:
                interpolation = cv2.INTER_LINEAR
            self.resize_func = partial(_cv2_resize, resample=interpolation)
        elif backend.lower() == "pil":
            if isinstance(interpolation, str):
                interpolation = _pil_interp_from_str[interpolation.lower()]
            self.resize_func = partial(
                _pil_resize, resample=interpolation, return_numpy=return_numpy)
        else:
            logger.warning(
                f"The backend of Resize only support \"cv2\" or \"PIL\". \"{backend}\" is unavailable. Use \"cv2\" instead."
@@ -93,6 +109,8 @@ class UnifiedResize(object):
            self.resize_func = cv2.resize

    def __call__(self, src, size):
        if isinstance(size, list):
            size = tuple(size)
        return self.resize_func(src, size)
@@ -137,7 +155,8 @@ class ResizeImage(object):
                 size=None,
                 resize_short=None,
                 interpolation=None,
                 backend="cv2",
                 return_numpy=True):
        if resize_short is not None and resize_short > 0:
            self.resize_short = resize_short
            self.w = None
@@ -151,10 +170,18 @@ class ResizeImage(object):
                "both 'size' and 'resize_short' are None")

        self._resize_func = UnifiedResize(
            interpolation=interpolation,
            backend=backend,
            return_numpy=return_numpy)

    def __call__(self, img):
        if isinstance(img, np.ndarray):
            # numpy input
            img_h, img_w = img.shape[:2]
        else:
            # PIL image input
            img_w, img_h = img.size

        if self.resize_short is not None:
            percent = float(self.resize_short) / min(img_w, img_h)
            w = int(round(img_w * percent))
...
@@ -41,8 +41,11 @@ def main():
                     'inference.pdmodel')) and os.path.exists(
        os.path.join(config["Global"]["save_inference_dir"],
                     'inference.pdiparams'))
    if "Query" in config["DataLoader"]["Eval"]:
        config["DataLoader"]["Eval"] = config["DataLoader"]["Eval"]["Query"]
    config["DataLoader"]["Eval"]["sampler"]["batch_size"] = 1
    config["DataLoader"]["Eval"]["loader"]["num_workers"] = 0

    init_logger()
    device = paddle.set_device("cpu")
    train_dataloader = build_dataloader(config["DataLoader"], "Eval", device,
@@ -67,6 +70,7 @@ def main():
        quantize_model_path=os.path.join(
            config["Global"]["save_inference_dir"], "quant_post_static_model"),
        sample_generator=sample_generator(train_dataloader),
        batch_size=config["DataLoader"]["Eval"]["sampler"]["batch_size"],
        batch_nums=10)
...
(The remaining file diffs in this merge are collapsed and not shown.)