Commit 0e9b347c authored by H HydrogenSulfate

Merge branch 'udpate_docs_remote' into udpate_docs

......@@ -12,3 +12,4 @@ build/
log/
nohup.out
.DS_Store
.idea
include LICENSE.txt
include README.md
include docs/en/whl_en.md
recursive-include deploy/python *.py
recursive-include deploy/configs *.yaml
recursive-include deploy/utils get_image_list.py config.py logger.py predictor.py
recursive-include ppcls/ *.py *.txt
\ No newline at end of file
README_ch.md
\ No newline at end of file
README_en.md
\ No newline at end of file
......@@ -4,106 +4,130 @@
## Introduction
PaddleClas is an image recognition and image classification toolset built by PaddlePaddle for industry and academia, helping users train better vision models and put them into real-world applications.

<div align="center">
<img src="./docs/images/class_simple.gif" width = "600" />
<p>PULC demo images</p>
</div>
&nbsp;
<div align="center">
<img src="./docs/images/recognition.gif" width = "400" />
<p>PP-ShiTu demo images</p>
</div>

## Recent Updates
- 📢 A three-day live course runs at **20:30, June 15-17**: a detailed walkthrough of the ultra light-weight image classification solutions, a breakdown of the model optimization principles and usage for each scenario, followed by end-to-end hands-on industrial cases and live Q&A. Scan the QR code to sign up!
<div align="center">
<img src="https://user-images.githubusercontent.com/45199522/173483779-2332f990-4941-4f8d-baee-69b62035fc31.png" width = "200" height = "200"/>
</div>
- 🔥️ 2022.6.15 Released the [PULC practical ultra light-weight image classification solution](docs/zh_CN/PULC/PULC_train.md): 3 ms inference on CPU, accuracy on par with SwinTransformer, covering nine common tasks in pedestrian, vehicle and OCR scenarios.
- 2022.5.26 [PaddlePaddle industrial practice live course](http://aglc.cn/v-c4FAR), explaining the **ultra light-weight entry/exit management solution for key areas**.
- 2022.5.23 Added the [personnel entry/exit management sample project](https://aistudio.baidu.com/aistudio/projectdetail/4094475), which can be experienced on AI Studio.
- 2022.5.20 Released [PP-HGNet](./docs/zh_CN/models/PP-HGNet.md) and [PP-LCNetv2](./docs/zh_CN/models/PP-LCNetV2.md).
- 2022.4.21 Added the related [code](https://github.com/PaddlePaddle/PaddleClas/pull/1820/files) of the CVPR 2022 oral paper [MixFormer](https://arxiv.org/pdf/2204.02557.pdf).
- [more](./docs/zh_CN/others/update_history.md)
## Features
PaddleClas releases [PP-HGNet](docs/zh_CN/models/PP-HGNet.md), [PP-LCNetv2](docs/zh_CN/models/PP-LCNetV2.md), [PP-LCNet](docs/zh_CN/models/PP-LCNet.md) and the [SSLD semi-supervised knowledge distillation scheme](docs/zh_CN/advanced_tutorials/ssld.md), and supports a wide range of image classification and image recognition algorithms. On top of these, it builds the [PULC ultra light-weight image classification solution](docs/zh_CN/PULC/PULC_quickstart.md) and the [PP-ShiTu image recognition system](./docs/zh_CN/quick_start/quick_start_recognition.md).
![](https://user-images.githubusercontent.com/19523330/173273046-239a42da-c88d-4c2c-94b1-2134557afa49.png)
## Join the Technical Exchange Group
* Scan the WeChat/QQ QR codes below (add the assistant on WeChat and reply "C") to join the PaddleClas WeChat group for faster Q&A and exchanges with developers from all industries. We look forward to having you.
<div align="center">
<img src="https://user-images.githubusercontent.com/48054808/160531099-9811bbe6-cfbb-47d5-8bdb-c2b40684d7dd.png" width="200"/>
<img src="https://user-images.githubusercontent.com/80816848/164383225-e375eb86-716e-41b4-a9e0-4b8a3976c1aa.jpg" width="200"/>
</div>
## Quick Start
Quick experience of the PULC ultra light-weight image classification solution: [Link](docs/zh_CN/PULC/PULC_quickstart.md)

Quick experience of PP-ShiTu image recognition: [Link](./docs/zh_CN/quick_start/quick_start_recognition.md)
## Tutorials
- [Environment Preparation](docs/zh_CN/installation/install_paddleclas.md)
- [PULC: Practical Ultra Light-weight Image Classification Solution](docs/zh_CN/PULC/PULC_train.md)
  - [PULC Quick Start](docs/zh_CN/PULC/PULC_quickstart.md)
  - [PULC Model Zoo](docs/zh_CN/PULC/PULC_model_list.md)
  - [PULC Classification Model of Someone or Nobody](docs/zh_CN/PULC/PULC_person_exists.md)
  - [PULC Recognition Model of Person Attribute](docs/zh_CN/PULC/PULC_person_attribute.md)
  - [PULC Classification Model of Wearing or Unwearing Safety Helmet](docs/zh_CN/PULC/PULC_safety_helmet.md)
  - [PULC Classification Model of Traffic Sign](docs/zh_CN/PULC/PULC_traffic_sign.md)
  - [PULC Recognition Model of Vehicle Attribute](docs/zh_CN/PULC/PULC_vehicle_attribute.md)
  - [PULC Classification Model of Containing or Uncontaining Car](docs/zh_CN/PULC/PULC_car_exists.md)
  - [PULC Classification Model of Text Image Orientation](docs/zh_CN/PULC/PULC_text_image_orientation.md)
  - [PULC Classification Model of Textline Orientation](docs/zh_CN/PULC/PULC_textline_orientation.md)
  - [PULC Classification Model of Language](docs/zh_CN/PULC/PULC_language_classification.md)
  - [Model Training](docs/zh_CN/PULC/PULC_train.md)
  - Inference and Deployment
    - [Inference with the Python Engine](docs/zh_CN/inference_deployment/python_deploy.md#1)
    - [Inference with the C++ Engine](docs/zh_CN/inference_deployment/cpp_deploy.md)
    - [Serving Deployment](docs/zh_CN/inference_deployment/paddle_serving_deploy.md)
    - [Mobile Deployment](docs/zh_CN/inference_deployment/paddle_lite_deploy.md)
    - [Paddle2ONNX Conversion and Inference](deploy/paddle2onnx/readme.md)
  - [Model Compression](deploy/slim/README.md)
- [Introduction to the PP-ShiTu Image Recognition System](#图像识别系统介绍)
  - [Quick Start of Image Recognition](docs/zh_CN/quick_start/quick_start_recognition.md)
  - Module Introduction
    - [Mainbody Detection](./docs/zh_CN/image_recognition_pipeline/mainbody_detection.md)
    - [Feature Extraction Model](./docs/zh_CN/image_recognition_pipeline/feature_extraction.md)
    - [Vector Search](./docs/zh_CN/image_recognition_pipeline/vector_search.md)
    - [Hash Encoding](docs/zh_CN/image_recognition_pipeline/)
  - [Model Training](docs/zh_CN/models_training/recognition.md)
  - Inference and Deployment
    - [Inference with the Python Engine](docs/zh_CN/inference_deployment/python_deploy.md#2)
    - [Inference with the C++ Engine](deploy/cpp_shitu/readme.md)
    - [Serving Deployment](docs/zh_CN/inference_deployment/paddle_serving_deploy.md)
    - [Mobile Deployment](deploy/lite_shitu/README.md)
- PP Series Backbone Models
  - [PP-HGNet](docs/zh_CN/models/PP-HGNet.md)
  - [PP-LCNetv2](docs/zh_CN/models/PP-LCNetV2.md)
  - [PP-LCNet](docs/zh_CN/models/PP-LCNet.md)
- [SSLD Semi-supervised Knowledge Distillation](docs/zh_CN/advanced_tutorials/ssld.md)
- Advanced Algorithms
  - [Backbone Networks and Pre-trained Model Library](docs/zh_CN/algorithm_introduction/ImageNet_models.md)
  - [Metric Learning](docs/zh_CN/algorithm_introduction/metric_learning.md)
  - [Model Compression](docs/zh_CN/algorithm_introduction/model_prune_quantization.md)
  - [Knowledge Distillation](docs/zh_CN/algorithm_introduction/knowledge_distillation.md)
  - [Data Augmentation](docs/zh_CN/advanced_tutorials/DataAugmentation.md)
- [Industrial Application Samples](docs/zh_CN/samples)
- [30-minute Quick Start of Image Classification](docs/zh_CN/quick_start/quick_start_classification_new_user.md)
- FAQ
  - [Selected Image Recognition Questions](docs/zh_CN/faq_series/faq_2021_s2.md)
  - [Selected Image Classification Questions](docs/zh_CN/faq_series/faq_selected_30.md)
  - [Image Classification FAQ, Season 1](docs/zh_CN/faq_series/faq_2020_s1.md)
  - [Image Classification FAQ, Season 2](docs/zh_CN/faq_series/faq_2021_s1.md)
- [Contribution Guide](./docs/zh_CN/advanced_tutorials/how_to_contribute.md)
- [License](#许可证书)
- [Contribution](#贡献代码)
<a name="PULC超轻量图像分类方案"></a>
## PULC: Practical Ultra Light-weight Image Classification Solution
<div align="center">
<img src="https://user-images.githubusercontent.com/19523330/173011854-b10fcd7a-b799-4dfd-a1cf-9504952a3c44.png" width = "800" />
</div>
PULC combines several state-of-the-art techniques, including backbone networks, data augmentation and knowledge distillation, to automatically train light-weight yet highly accurate image classification models.
PaddleClas provides classification models covering nine common tasks in pedestrian, vehicle and OCR scenarios, with about 3 ms inference latency on CPU and accuracy on par with SwinTransformer.
<a name="图像识别系统介绍"></a>
## PP-ShiTu Image Recognition System
<div align="center">
<img src="./docs/images/structure.jpg" width = "800" />
......@@ -111,6 +135,11 @@ PP-ShiTu图像识别快速体验:[点击这里](./docs/zh_CN/quick_start/quick
PP-ShiTu is a practical light-weight general image recognition system, composed of three modules: mainbody detection, feature learning and vector search. The models of each module are optimized with multiple strategies across eight aspects: backbone network selection and tuning, loss function selection, data augmentation, learning-rate schedule, regularization parameter selection, pre-trained model usage, and model pruning and quantization. The resulting system completes recognition against a gallery of 100k+ images in only 0.2 s on CPU. For more details, please refer to the [PP-ShiTu technical report](https://arxiv.org/pdf/2111.00775.pdf).
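Conceptually, a query image flows through the three modules in sequence: detect candidate subjects, embed each crop, then look the embeddings up in the gallery index. A high-level sketch of that flow follows; the component names are placeholders for illustration, not the actual PaddleClas classes.

```python
def pp_shitu_predict(image, detector, extractor, index, score_threshold=0.5):
    """Illustrative PP-ShiTu pipeline: mainbody detection -> feature extraction -> vector search."""
    results = []
    for box in detector.detect(image):                 # 1. mainbody detection
        crop = image[box.y1:box.y2, box.x1:box.x2]
        feature = extractor.extract(crop)              # 2. L2-normalized embedding
        scores, labels = index.search(feature, k=1)    # 3. nearest-neighbor lookup in the gallery
        if scores[0] >= score_threshold:
            results.append((box, labels[0], scores[0]))
    return results
```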
<a name="分类效果展示"></a>
## PULC Demo Images
<div align="center">
<img src="docs/images/classification.gif">
</div>
<a name="识别效果展示"></a>
## PP-ShiTu Image Recognition Demo Images
......
......@@ -4,39 +4,41 @@
## Introduction
PaddleClas is an image classification and image recognition toolset for industry and academia, helping users train better computer vision models and apply them in real scenarios.
<div align="center">
<img src="./docs/images/class_simple_en.gif" width = "600" />

PULC demo images
</div>
&nbsp;

<div align="center">
<img src="./docs/images/recognition.gif" width = "400" />

PP-ShiTu demo images
</div>

**Recent updates**
- 2022.6.15 Release of [**P**ractical **U**ltra **L**ight-weight image **C**lassification solutions](./docs/en/PULC/PULC_quickstart_en.md). PULC models run inference within 3 ms on CPU devices, with accuracy on par with SwinTransformer. We also release 9 practical classification models covering pedestrian, vehicle and OCR scenarios.
- 2022.4.21 Added the related [code](https://github.com/PaddlePaddle/PaddleClas/pull/1820/files) of the CVPR 2022 oral paper [MixFormer](https://arxiv.org/pdf/2204.02557.pdf).
- 2021.09.17 Added the PP-LCNet series models developed by PaddleClas, which show strong competitiveness on Intel CPUs. For an introduction to PP-LCNet, please refer to the [paper](https://arxiv.org/pdf/2109.15099.pdf) or the [PP-LCNet model introduction](docs/en/models/PP-LCNet_en.md). Metrics and pretrained weights are available [here](docs/en/algorithm_introduction/ImageNet_models_en.md).
- 2021.06.29 Added the [Swin-transformer](docs/en/models/SwinTransformer_en.md) series models; the highest top-1 accuracy on the ImageNet-1k dataset reaches 87.2%. Training, evaluation and inference are all supported. Pretrained models can be downloaded [here](docs/en/algorithm_introduction/ImageNet_models_en.md#16).
- 2021.06.16 PaddleClas release/2.2. Added the metric learning and vector search modules, plus product, animation character, vehicle and logo recognition. Added 30 pretrained models of LeViT, Twins, TNT, DLA, HarDNet and RedNet, with accuracy roughly on par with the papers.
- [more](./docs/en/others/update_history_en.md)

## Features
PaddleClas releases PP-HGNet, PP-LCNetv2, PP-LCNet and the **S**imple **S**emi-supervised **L**abel **D**istillation (SSLD) algorithm, and supports plenty of image classification and image recognition algorithms.
Based on the algorithms above, PaddleClas releases the PP-ShiTu image recognition system and the [**P**ractical **U**ltra **L**ight-weight image **C**lassification solutions](docs/en/PULC/PULC_quickstart_en.md).
![](https://user-images.githubusercontent.com/19523330/173539361-68cf7ab1-7e3b-4e5e-b00f-1500719bd2a2.png)
## Welcome to Join the Technical Exchange Group
......@@ -48,41 +50,57 @@ Four sample solutions are provided, including product recognition, vehicle recog
</div>
## Quick Start
Quick experience of the PP-ShiTu image recognition system: [Link](./docs/en/quick_start/quick_start_recognition_en.md)

Quick experience of **P**ractical **U**ltra **L**ight-weight image **C**lassification models: [Link](docs/en/PULC/PULC_quickstart_en.md)
## Tutorials
- [Install Paddle](./docs/en/installation/install_paddle_en.md)
- [Install PaddleClas Environment](./docs/en/installation/install_paddleclas_en.md)
- [Practical Ultra Light-weight image Classification solutions](./docs/en/PULC/PULC_train_en.md)
- [PULC Quick Start](docs/en/PULC/PULC_quickstart_en.md)
- [PULC Model Zoo](docs/en/PULC/PULC_model_list_en.md)
- [PULC Classification Model of Someone or Nobody](docs/en/PULC/PULC_person_exists_en.md)
- [PULC Recognition Model of Person Attribute](docs/en/PULC/PULC_person_attribute_en.md)
- [PULC Classification Model of Wearing or Unwearing Safety Helmet](docs/en/PULC/PULC_safety_helmet_en.md)
- [PULC Classification Model of Traffic Sign](docs/en/PULC/PULC_traffic_sign_en.md)
- [PULC Recognition Model of Vehicle Attribute](docs/en/PULC/PULC_vehicle_attribute_en.md)
- [PULC Classification Model of Containing or Uncontaining Car](docs/en/PULC/PULC_car_exists_en.md)
- [PULC Classification Model of Text Image Orientation](docs/en/PULC/PULC_text_image_orientation_en.md)
- [PULC Classification Model of Textline Orientation](docs/en/PULC/PULC_textline_orientation_en.md)
- [PULC Classification Model of Language](docs/en/PULC/PULC_language_classification_en.md)
- [Quick Start of Recognition](./docs/en/quick_start/quick_start_recognition_en.md)
- [Introduction to Image Recognition Systems](#Introduction_to_Image_Recognition_Systems)
- [Image Recognition Demo images](#Rec_Demo_images)
- [PULC demo images](#Clas_Demo_images)
- Algorithms Introduction
- [Backbone Network and Pre-trained Model Library](./docs/en/algorithm_introduction/ImageNet_models_en.md)
- [Mainbody Detection](./docs/en/image_recognition_pipeline/mainbody_detection_en.md)
- [Feature Learning](./docs/en/image_recognition_pipeline/feature_extraction_en.md)
- [Vector Search](./deploy/vector_search/README.md)
- Models Training/Evaluation
- [Image Classification](./docs/en/tutorials/getting_started_en.md)
- [Feature Learning](./docs/en/tutorials/getting_started_retrieval_en.md)
- Inference Model Prediction
- [Python Inference](./docs/en/inference_deployment/python_deploy_en.md)
- [C++ Classification Inference](./deploy/cpp/readme_en.md), [C++ PP-ShiTu Inference](deploy/cpp_shitu/readme_en.md)
- Model Deploy (only support classification for now, recognition coming soon)
- [Hub Serving Deployment](./deploy/hubserving/readme_en.md)
- [Mobile Deployment](./deploy/lite/readme_en.md)
- [Inference Using whl](./docs/en/inference_deployment/whl_deploy_en.md)
- Advanced Tutorial
- [Knowledge Distillation](./docs/en/advanced_tutorials/distillation/distillation_en.md)
- [Model Quantization](./docs/en/algorithm_introduction/model_prune_quantization_en.md)
- [Data Augmentation](./docs/en/advanced_tutorials/DataAugmentation_en.md)
- [License](#License)
- [Contribution](#Contribution)
<a name="Introduction_to_PULC"></a>
## Introduction to Practical Ultra Light-weight image Classification solutions
<div align="center">
<img src="https://user-images.githubusercontent.com/19523330/173011854-b10fcd7a-b799-4dfd-a1cf-9504952a3c44.png" width = "800" />
</div>
The PULC solution consists of a light-weight PP-LCNet backbone, SSLD pretrained models, an ensemble of data augmentation strategies, and SKL-UGI knowledge distillation.
PULC models run inference within 3 ms on CPU devices, with accuracy comparable to SwinTransformer. We also release nine practical models covering pedestrian, vehicle and OCR scenarios.
<a name="Introduction_to_Image_Recognition_Systems"></a>
## Introduction to Image Recognition Systems
......@@ -97,8 +115,14 @@ Image recognition can be divided into three steps:
For a new, unknown category there is no need to retrain the model: simply prepare images of the new category, extract their features, and update the retrieval database; the category can then be recognized.
<a name="Demo_images"></a>
## Demo images [more](https://github.com/PaddlePaddle/PaddleClas/tree/release/2.2/docs/images/recognition/more_demo_images)
<a name="Clas_Demo_images"></a>
## PULC demo images
<div align="center">
<img src="docs/images/classification_en.gif">
</div>
<a name="Rec_Demo_images"></a>
## Image Recognition Demo images [more](https://github.com/PaddlePaddle/PaddleClas/tree/release/2.2/docs/images/recognition/more_demo_images)
- Product recognition
<div align="center">
<img src="https://user-images.githubusercontent.com/18028216/122769644-51604f80-d2d7-11eb-8290-c53b12a5c1f6.gif" width = "400" />
......
Global:
infer_imgs: "./images/PULC/car_exists/objects365_00001507.jpeg"
inference_model_dir: "./models/car_exists_infer"
batch_size: 1
use_gpu: True
enable_mkldnn: False
cpu_num_threads: 10
enable_benchmark: True
use_fp16: False
ir_optim: True
use_tensorrt: False
gpu_mem: 8000
enable_profile: False
PreProcess:
transform_ops:
- ResizeImage:
resize_short: 256
- CropImage:
size: 224
- NormalizeImage:
scale: 0.00392157
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
channel_num: 3
- ToCHWImage:
PostProcess:
main_indicator: ThreshOutput
ThreshOutput:
threshold: 0.5
label_0: no_car
label_1: contains_car
SavePreLabel:
save_dir: ./pre_label/
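Note that `scale: 0.00392157` is simply 1/255, the uint8-to-[0,1] rescale applied before mean/std normalization. The `ThreshOutput` post-process above turns the binary classifier's softmax output into one of the two labels; below is a minimal sketch of that rule, illustrative only and not the shipped PaddleClas implementation.

```python
# Sketch of the ThreshOutput rule configured above (threshold: 0.5).
# `probs` is assumed to be the softmax output [P(no_car), P(contains_car)].
def thresh_output(probs, threshold=0.5, label_0="no_car", label_1="contains_car"):
    score = probs[1]  # positive-class probability
    if score < threshold:
        return {"class_ids": [0], "scores": [1 - score], "label_names": [label_0]}
    return {"class_ids": [1], "scores": [score], "label_names": [label_1]}

print(thresh_output([0.12, 0.88]))  # -> contains_car with score 0.88
```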
Global:
infer_imgs: "./images/PULC/language_classification/word_35404.png"
inference_model_dir: "./models/language_classification_infer"
batch_size: 1
use_gpu: True
enable_mkldnn: False
cpu_num_threads: 10
enable_benchmark: True
use_fp16: False
ir_optim: True
use_tensorrt: False
gpu_mem: 8000
enable_profile: False
PreProcess:
transform_ops:
- ResizeImage:
size: [160, 80]
- NormalizeImage:
scale: 0.00392157
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
channel_num: 3
- ToCHWImage:
PostProcess:
main_indicator: Topk
Topk:
topk: 2
class_id_map_file: "../ppcls/utils/PULC_label_list/language_classification_label_list.txt"
SavePreLabel:
save_dir: ./pre_label/
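For multi-class heads such as this one, the `Topk` post-process keeps the `topk` highest scores and maps class ids to names through `class_id_map_file`. A rough sketch of that logic, assuming the label list stores one `<id> <name>` pair per line (the helper names are illustrative):

```python
import numpy as np

def load_class_id_map(path):
    # Assumed layout: one "<id> <name>" pair per line.
    with open(path, encoding="utf-8") as f:
        return {int(i): name for i, name in
                (line.strip().split(" ", 1) for line in f if line.strip())}

def topk(scores, k=2, id_map=None):
    # `scores` is a 1-D numpy array of softmax outputs.
    ids = np.argsort(scores)[::-1][:k]  # indices of the k largest scores
    return {
        "class_ids": ids.tolist(),
        "scores": scores[ids].tolist(),
        "label_names": [id_map[int(i)] for i in ids] if id_map else [],
    }
```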
Global:
infer_imgs: "./images/PULC/person_attribute/090004.jpg"
inference_model_dir: "./models/person_attribute_infer"
batch_size: 1
use_gpu: True
enable_mkldnn: True
cpu_num_threads: 10
benchmark: False
use_fp16: False
ir_optim: True
use_tensorrt: False
gpu_mem: 8000
enable_profile: False
PreProcess:
transform_ops:
- ResizeImage:
size: [192, 256]
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
channel_num: 3
- ToCHWImage:
PostProcess:
main_indicator: PersonAttribute
PersonAttribute:
threshold: 0.5 #default threshold
glasses_threshold: 0.3 #threshold only for glasses
hold_threshold: 0.6 #threshold only for hold
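Unlike the single-label heads, `PersonAttribute` thresholds each attribute's sigmoid output independently, with the per-attribute overrides shown above (0.3 for glasses, 0.6 for holding something). A schematic sketch; the attribute names and their order here are made up for illustration:

```python
# Schematic multi-label thresholding; the attribute order is illustrative only.
ATTRS = ["hat", "glasses", "hold_objects_in_front", "boots"]

def person_attribute(probs, threshold=0.5, glasses_threshold=0.3, hold_threshold=0.6):
    overrides = {"glasses": glasses_threshold, "hold_objects_in_front": hold_threshold}
    return {name: p >= overrides.get(name, threshold) for name, p in zip(ATTRS, probs)}

print(person_attribute([0.7, 0.35, 0.55, 0.2]))
# -> {'hat': True, 'glasses': True, 'hold_objects_in_front': False, 'boots': False}
```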
Global:
infer_imgs: "./images/PULC/person/objects365_02035329.jpg"
inference_model_dir: "./models/person_cls_infer"
infer_imgs: "./images/PULC/person_exists/objects365_02035329.jpg"
inference_model_dir: "./models/person_exists_infer"
batch_size: 1
use_gpu: True
enable_mkldnn: False
......@@ -29,7 +29,7 @@ PreProcess:
PostProcess:
main_indicator: ThreshOutput
ThreshOutput:
threshold: 0.5
label_0: nobody
label_1: someone
SavePreLabel:
......
Global:
infer_imgs: "./images/PULC/safety_helmet/safety_helmet_test_1.png"
inference_model_dir: "./models/safety_helmet_infer"
batch_size: 1
use_gpu: True
enable_mkldnn: False
cpu_num_threads: 10
enable_benchmark: True
use_fp16: False
ir_optim: True
use_tensorrt: False
gpu_mem: 8000
enable_profile: False
PreProcess:
transform_ops:
- ResizeImage:
resize_short: 256
- CropImage:
size: 224
- NormalizeImage:
scale: 0.00392157
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
channel_num: 3
- ToCHWImage:
PostProcess:
main_indicator: ThreshOutput
ThreshOutput:
threshold: 0.5
label_0: wearing_helmet
label_1: unwearing_helmet
SavePreLabel:
save_dir: ./pre_label/
Global:
infer_imgs: "./images/PULC/text_image_orientation/img_rot0_demo.jpg"
inference_model_dir: "./models/text_image_orientation_infer"
batch_size: 1
use_gpu: True
enable_mkldnn: False
cpu_num_threads: 10
enable_benchmark: True
use_fp16: False
ir_optim: True
use_tensorrt: False
gpu_mem: 8000
enable_profile: False
PreProcess:
transform_ops:
- ResizeImage:
resize_short: 256
- CropImage:
size: 224
- NormalizeImage:
scale: 0.00392157
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
channel_num: 3
- ToCHWImage:
PostProcess:
main_indicator: Topk
Topk:
topk: 2
class_id_map_file: "../ppcls/utils/PULC_label_list/text_image_orientation_label_list.txt"
SavePreLabel:
save_dir: ./pre_label/
Global:
infer_imgs: "./images/PULC/textline_orientation/textline_orientation_test_0_0.png"
inference_model_dir: "./models/textline_orientation_infer"
batch_size: 1
use_gpu: True
enable_mkldnn: True
cpu_num_threads: 10
enable_benchmark: True
use_fp16: False
ir_optim: True
use_tensorrt: False
gpu_mem: 8000
enable_profile: False
PreProcess:
transform_ops:
- ResizeImage:
size: [160, 80]
- NormalizeImage:
scale: 0.00392157
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
channel_num: 3
- ToCHWImage:
PostProcess:
main_indicator: Topk
Topk:
topk: 1
class_id_map_file: "../ppcls/utils/PULC_label_list/textline_orientation_label_list.txt"
SavePreLabel:
save_dir: ./pre_label/
Global:
infer_imgs: "./images/PULC/traffic_sign/99603_17806.jpg"
inference_model_dir: "./models/traffic_sign_infer"
batch_size: 1
use_gpu: True
enable_mkldnn: True
cpu_num_threads: 10
benchmark: False
use_fp16: False
ir_optim: True
use_tensorrt: False
gpu_mem: 8000
enable_profile: False
PreProcess:
transform_ops:
- ResizeImage:
resize_short: 256
- CropImage:
size: 224
- NormalizeImage:
scale: 0.00392157
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
channel_num: 3
- ToCHWImage:
PostProcess:
main_indicator: Topk
Topk:
topk: 5
class_id_map_file: "../ppcls/utils/PULC_label_list/traffic_sign_label_list.txt"
SavePreLabel:
save_dir: ./pre_label/
Global:
infer_imgs: "./images/PULC/vehicle_attribute/0002_c002_00030670_0.jpg"
inference_model_dir: "./models/vehicle_attribute_infer"
batch_size: 1
use_gpu: True
enable_mkldnn: True
cpu_num_threads: 10
benchmark: False
use_fp16: False
ir_optim: True
use_tensorrt: False
gpu_mem: 8000
enable_profile: False
PreProcess:
transform_ops:
- ResizeImage:
size: [256, 192]
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
channel_num: 3
- ToCHWImage:
PostProcess:
main_indicator: VehicleAttribute
VehicleAttribute:
color_threshold: 0.5
type_threshold: 0.5
Global:
infer_imgs: "./images/Pedestrain_Attr.jpg"
inference_model_dir: "../inference/"
batch_size: 1
use_gpu: True
enable_mkldnn: False
cpu_num_threads: 10
enable_benchmark: True
use_fp16: False
ir_optim: True
use_tensorrt: False
gpu_mem: 8000
enable_profile: False
PreProcess:
transform_ops:
- ResizeImage:
size: [192, 256]
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
channel_num: 3
- ToCHWImage:
PostProcess:
main_indicator: PersonAttribute
PersonAttribute:
threshold: 0.5 #default threshold
glasses_threshold: 0.3 #threshold only for glasses
hold_threshold: 0.6 #threshold only for hold
Global:
infer_imgs: "./images/ILSVRC2012_val_00000010.jpeg"
infer_imgs: "./images/ImageNet/ILSVRC2012_val_00000010.jpeg"
inference_model_dir: "./models"
batch_size: 1
use_gpu: True
......@@ -32,4 +32,4 @@ PostProcess:
topk: 5
class_id_map_file: "../ppcls/utils/imagenet1k_label_list.txt"
SavePreLabel:
save_dir: ./pre_label/
Global:
infer_imgs: "./images/ILSVRC2012_val_00000010.jpeg"
infer_imgs: "./images/ImageNet/ILSVRC2012_val_00000010.jpeg"
inference_model_dir: "./models"
batch_size: 1
use_gpu: True
......@@ -32,4 +32,4 @@ PostProcess:
topk: 5
class_id_map_file: "../ppcls/utils/imagenet1k_label_list.txt"
SavePreLabel:
save_dir: ./pre_label/
......@@ -92,9 +92,9 @@ PaddleClas 提供了转换并优化后的推理模型,可以直接参考下方
```shell
# Enter the lite_shitu directory
cd $PaddleClas/deploy/lite_shitu
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/lite/ppshitu_lite_models_v1.2.tar
tar -xf ppshitu_lite_models_v1.2.tar
rm -f ppshitu_lite_models_v1.2.tar
```
#### 2.1.2 Using other models
......@@ -162,7 +162,7 @@ git clone https://github.com/PaddlePaddle/PaddleDetection.git
# Enter the PaddleDetection root directory
cd PaddleDetection
# Export the pretrained model as an inference model
python tools/export_model.py -c configs/picodet/application/mainbody_detection/picodet_lcnet_x2_5_640_mainbody.yml -o weights=https://paddledet.bj.bcebos.com/models/picodet_lcnet_x2_5_640_mainbody.pdparams export_post_process=False --output_dir=inference
# Convert the inference model into a Paddle-Lite optimized model
paddle_lite_opt --model_file=inference/picodet_lcnet_x2_5_640_mainbody/model.pdmodel --param_file=inference/picodet_lcnet_x2_5_640_mainbody/model.pdiparams --optimize_out=inference/picodet_lcnet_x2_5_640_mainbody/mainbody_det
# Copy the converted model into the lite_shitu directory
......@@ -183,24 +183,56 @@ cd deploy/lite_shitu
**Note**: `--optimize_out` is the save path of the optimized model (do not append the `.nb` suffix); `--model_file` is the path of the model structure file, and `--param_file` is the path of the model weights file. Mind the file names.
### 2.2 Generate a new retrieval gallery

The lite version of the retrieval library uses `faiss` 1.5.3, which is incompatible with newer versions, so the index gallery has to be regenerated.
#### 2.2.1 Data and environment setup
```shell
# Go up to the parent directory
cd ..
# Download the bottled-drinks dataset
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/data/drink_dataset_v1.0.tar && tar -xf drink_dataset_v1.0.tar
rm -rf drink_dataset_v1.0.tar
rm -rf drink_dataset_v1.0/index
# Install faiss 1.5.3
pip install faiss-cpu==1.5.3
# Download the general recognition model; it can be replaced with your own inference model
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/general_PPLCNet_x2_5_lite_v1.0_infer.tar
tar -xf general_PPLCNet_x2_5_lite_v1.0_infer.tar
rm -rf general_PPLCNet_x2_5_lite_v1.0_infer.tar
```
#### 2.2.2 Generate a new index file
```shell
# Build the new index gallery. Set the recognition model path correctly and set index_method to Flat;
# HNSW32 and IVF may be buggy in this faiss version, so use them with caution.
# If you use your own recognition model, change the inference model directory accordingly.
python python/build_gallery.py -c configs/inference_drink.yaml -o Global.rec_inference_model_dir=general_PPLCNet_x2_5_lite_v1.0_infer -o IndexProcess.index_method=Flat
# Enter the lite_shitu directory
cd lite_shitu
mv ../drink_dataset_v1.0 .
```
### 2.3 Convert the yaml file to a json file
```shell
# To test a single image
python generate_json_config.py --det_model_path ppshitu_lite_models_v1.2/mainbody_PPLCNet_x2_5_640_v1.2_lite.nb --rec_model_path ppshitu_lite_models_v1.2/general_PPLCNet_x2_5_lite_v1.2_infer.nb --img_path images/demo.jpeg
# or
# To test multiple images
python generate_json_config.py --det_model_path ppshitu_lite_models_v1.2/mainbody_PPLCNet_x2_5_640_v1.2_lite.nb --rec_model_path ppshitu_lite_models_v1.2/general_PPLCNet_x2_5_lite_v1.2_infer.nb --img_dir images
# After execution, a shitu_config.json config file is generated under lite_shitu
```
### 2.4 Convert the index dictionary

The Python retrieval gallery stores its dictionary serialized with `pickle`, which is inconvenient to read from C++, so it needs to be converted.
```shell
# Convert id_map.pkl to id_map.txt
python transform_id_map.py -c ../configs/inference_drink.yaml
......@@ -208,7 +240,7 @@ python transform_id_map.py -c ../configs/inference_drink.yaml
After a successful conversion, `id_map.txt` is generated under the `IndexProcess.index_dir` directory.
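In essence, `transform_id_map.py` dumps the pickled `{id: label}` dictionary into a line-oriented text file that the C++ code can parse. A minimal sketch of the conversion follows; the paths and exact output layout are assumptions, not taken from the script itself.

```python
import pickle

# Load the pickled id->label dictionary of the gallery ...
with open("drink_dataset_v1.0/index/id_map.pkl", "rb") as f:
    id_map = pickle.load(f)

# ... and write one "id label" pair per line for the C++ reader.
with open("drink_dataset_v1.0/index/id_map.txt", "w", encoding="utf-8") as f:
    for idx, label in id_map.items():
        f.write(f"{idx} {label}\n")
```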
### 2.5 Joint debugging with the phone

Some preparation is required first.

1. Prepare an armv8 Android phone. If the prediction library was compiled for armv7, an armv7 phone is required instead; also change `ARM_ABI = arm7` in the Makefile.
......@@ -308,8 +340,9 @@ chmod 777 pp_shitu
The expected output looks like this:
```
images/demo.jpeg:
result0: bbox[344, 98, 527, 593], score: 0.811656, label: 红牛-强化型
result1: bbox[0, 0, 600, 600], score: 0.729664, label: 红牛-强化型
```
## FAQ
......
......@@ -29,16 +29,16 @@
namespace PPShiTu {
void load_jsonf(std::string jsonfile, Json::Value &jsondata);
// Inference model configuration parser
class ConfigPaser {
public:
ConfigPaser() {}
~ConfigPaser() {}
bool load_config(const Json::Value &config) {
// Get model arch : YOLO, SSD, RetinaNet, RCNN, Face
if (config["Global"].isMember("det_arch")) {
......@@ -89,4 +89,4 @@ class ConfigPaser {
std::vector<int> fpn_stride_;
};
} // namespace PPShiTu
......@@ -18,6 +18,7 @@
#include <arm_neon.h>
#include <chrono>
#include <fstream>
#include <include/preprocess_op.h>
#include <iostream>
#include <math.h>
#include <opencv2/opencv.hpp>
......@@ -48,10 +49,6 @@ public:
config_file["Global"]["rec_model_path"].as<std::string>());
this->predictor = CreatePaddlePredictor<MobileConfig>(config);
if (config_file["Global"]["rec_label_path"].as<std::string>().empty()) {
std::cout << "Please set [rec_label_path] in config file" << std::endl;
exit(-1);
}
SetPreProcessParam(config_file["RecPreProcess"]["transform_ops"]);
printf("feature extract model create!\n");
}
......@@ -68,24 +65,29 @@ public:
this->mean.emplace_back(tmp.as<float>());
}
for (auto tmp : item["std"]) {
this->std.emplace_back(tmp.as<float>());
}
this->scale = item["scale"].as<double>();
}
}
}
void RunRecModel(const cv::Mat &img, double &cost_time,
std::vector<float> &feature);
// void PostProcess(std::vector<float> &feature);
void FeatureNorm(std::vector<float> &feature);
private:
std::shared_ptr<PaddlePredictor> predictor;
// std::vector<std::string> label_list;
std::vector<float> mean = {0.485f, 0.456f, 0.406f};
std::vector<float> std = {0.229f, 0.224f, 0.225f};
double scale = 0.00392157;
int size = 224;
// pre-process
Resize resize_op_;
NormalizeImage normalize_op_;
Permute permute_op_;
};
} // namespace PPShiTu
......@@ -16,24 +16,24 @@
#include <ctime>
#include <memory>
#include <stdlib.h>
#include <string>
#include <utility>
#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include "json/json.h"
#include "paddle_api.h" // NOLINT
#include "paddle_api.h" // NOLINT
#include "include/config_parser.h"
#include "include/picodet_postprocess.h"
#include "include/preprocess_op.h"
#include "include/utils.h"
#include "include/picodet_postprocess.h"
using namespace paddle::lite_api; // NOLINT
using namespace paddle::lite_api; // NOLINT
namespace PPShiTu {
......@@ -41,53 +41,51 @@ namespace PPShiTu {
std::vector<int> GenerateColorMap(int num_class);
// Visualiztion Detection Result
cv::Mat VisualizeResult(const cv::Mat &img,
const std::vector<PPShiTu::ObjectResult> &results,
const std::vector<std::string> &lables,
const std::vector<int> &colormap, const bool is_rbox);
class ObjectDetector {
public:
explicit ObjectDetector(const Json::Value &config,
const std::string &model_dir, int cpu_threads = 1,
const int batch_size = 1) {
config_.load_config(config);
printf("config created\n");
preprocessor_.Init(config_.preprocess_info_);
printf("before object detector\n");
if (config["Global"]["det_model_path"].as<std::string>().empty()) {
std::cout << "Please set [det_model_path] in config file" << std::endl;
exit(-1);
}
LoadModel(config["Global"]["det_model_path"].as<std::string>(),
cpu_threads);
printf("create object detector\n");
}
// Load Paddle inference model
void LoadModel(std::string model_file, int num_theads);
// Run predictor
void Predict(const std::vector<cv::Mat> &imgs, const int warmup = 0,
const int repeats = 1,
std::vector<PPShiTu::ObjectResult> *result = nullptr,
std::vector<int> *bbox_num = nullptr,
std::vector<double> *times = nullptr);
// Get Model Label list
const std::vector<std::string> &GetLabelList() const {
return config_.label_list_;
}
private:
// Preprocess image and copy data to input buffer
void Preprocess(const cv::Mat &image_mat);
// Postprocess result
void Postprocess(const std::vector<cv::Mat> mats,
std::vector<PPShiTu::ObjectResult> *result,
std::vector<int> bbox_num, bool is_rbox);
std::shared_ptr<PaddlePredictor> predictor_;
Preprocessor preprocessor_;
......@@ -96,7 +94,6 @@ class ObjectDetector {
std::vector<int> out_bbox_num_data_;
float threshold_;
ConfigPaser config_;
};
} // namespace PPShiTu
......@@ -14,25 +14,23 @@
#pragma once
#include <ctime>
#include <memory>
#include <numeric>
#include <string>
#include <utility>
#include <vector>
#include "include/utils.h"
namespace PPShiTu {
void PicoDetPostProcess(std::vector<PPShiTu::ObjectResult> *results,
std::vector<const float *> outs,
std::vector<int> fpn_stride,
std::vector<float> im_shape,
std::vector<float> scale_factor,
float score_threshold = 0.3, float nms_threshold = 0.5,
int num_class = 80, int reg_max = 7);
} // namespace PPShiTu
......@@ -21,16 +21,16 @@
#include <utility>
#include <vector>
#include "json/json.h"
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include "json/json.h"
namespace PPShiTu {
// Object for storing all preprocessed data
class ImageBlob {
public:
// image width and height
std::vector<float> im_shape_;
// Buffer for image data after preprocessing
......@@ -45,20 +45,20 @@ class ImageBlob {
// Abstraction of preprocessing opration class
class PreprocessOp {
public:
virtual void Init(const Json::Value &item) = 0;
virtual void Run(cv::Mat *im, ImageBlob *data) = 0;
};
class InitInfo : public PreprocessOp {
public:
virtual void Init(const Json::Value &item) {}
virtual void Run(cv::Mat *im, ImageBlob *data);
};
class NormalizeImage : public PreprocessOp {
public:
virtual void Init(const Json::Value &item) {
mean_.clear();
scale_.clear();
for (auto tmp : item["mean"]) {
......@@ -70,9 +70,11 @@ class NormalizeImage : public PreprocessOp {
is_scale_ = item["is_scale"].as<bool>();
}
virtual void Run(cv::Mat *im, ImageBlob *data);
void Run_feature(cv::Mat *im, const std::vector<float> &mean,
const std::vector<float> &std, float scale);
private:
// CHW or HWC
std::vector<float> mean_;
std::vector<float> scale_;
......@@ -80,14 +82,15 @@ class NormalizeImage : public PreprocessOp {
};
class Permute : public PreprocessOp {
public:
virtual void Init(const Json::Value &item) {}
virtual void Run(cv::Mat *im, ImageBlob *data);
void Run_feature(const cv::Mat *im, float *data);
};
class Resize : public PreprocessOp {
public:
virtual void Init(const Json::Value &item) {
interp_ = item["interp"].as<int>();
// max_size_ = item["target_size"].as<int>();
keep_ratio_ = item["keep_ratio"].as<bool>();
......@@ -98,11 +101,13 @@ class Resize : public PreprocessOp {
}
// Compute best resize scale for x-dimension, y-dimension
std::pair<float, float> GenerateScale(const cv::Mat &im);
virtual void Run(cv::Mat *im, ImageBlob *data);
void Run_feature(const cv::Mat &img, cv::Mat &resize_img, int max_size_len,
int size = 0);
private:
int interp_;
bool keep_ratio_;
std::vector<int> target_size_;
......@@ -111,46 +116,43 @@ class Resize : public PreprocessOp {
// Models with FPN need input shape % stride == 0
class PadStride : public PreprocessOp {
public:
virtual void Init(const Json::Value &item) {
stride_ = item["stride"].as<int>();
}
virtual void Run(cv::Mat *im, ImageBlob *data);
private:
int stride_;
};
class TopDownEvalAffine : public PreprocessOp {
public:
virtual void Init(const Json::Value &item) {
trainsize_.clear();
for (auto tmp : item["trainsize"]) {
trainsize_.emplace_back(tmp.as<int>());
}
}
virtual void Run(cv::Mat *im, ImageBlob *data);
private:
int interp_ = 1;
std::vector<int> trainsize_;
};
void CropImg(cv::Mat &img, cv::Mat &crop_img, std::vector<int> &area,
std::vector<float> &center, std::vector<float> &scale,
float expandratio = 0.15);
class Preprocessor {
public:
void Init(const Json::Value &config_node) {
// initialize image info at first
ops_["InitInfo"] = std::make_shared<InitInfo>();
for (const auto &item : config_node) {
auto op_name = item["type"].as<std::string>();
ops_[op_name] = CreateOp(op_name);
......@@ -158,7 +160,7 @@ class Preprocessor {
}
}
std::shared_ptr<PreprocessOp> CreateOp(const std::string &name) {
if (name == "DetResize") {
return std::make_shared<Resize>();
} else if (name == "DetPermute") {
......@@ -176,13 +178,13 @@ class Preprocessor {
return nullptr;
}
void Run(cv::Mat *im, ImageBlob *data);
public:
static const std::vector<std::string> RUN_ORDER;
private:
std::unordered_map<std::string, std::shared_ptr<PreprocessOp>> ops_;
};
} // namespace PPShiTu
......@@ -38,6 +38,23 @@ struct ObjectResult {
std::vector<RESULT> rec_result;
};
void nms(std::vector<ObjectResult> &input_boxes, float nms_threshold,
bool rec_nms = false);
template <typename T>
static inline bool SortScorePairDescend(const std::pair<float, T> &pair1,
const std::pair<float, T> &pair2) {
return pair1.first > pair2.first;
}
float RectOverlap(const ObjectResult &a, const ObjectResult &b);
inline void
GetMaxScoreIndex(const std::vector<ObjectResult> &det_result,
const float threshold,
std::vector<std::pair<float, int>> &score_index_vec);
void NMSBoxes(const std::vector<ObjectResult> det_result,
const float score_threshold, const float nms_threshold,
std::vector<int> &indices);
} // namespace PPShiTu
......@@ -70,4 +70,4 @@ private:
std::vector<faiss::Index::idx_t> I;
SearchResult sr;
};
} // namespace PPShiTu
......@@ -29,4 +29,4 @@ void load_jsonf(std::string jsonfile, Json::Value &jsondata) {
}
}
} // namespace PPShiTu
......@@ -13,24 +13,29 @@
// limitations under the License.
#include "include/feature_extractor.h"
#include <cmath>
#include <numeric>
namespace PPShiTu {
void FeatureExtract::RunRecModel(const cv::Mat &img, double &cost_time,
std::vector<float> &feature) {
// Read img
cv::Mat img_fp;
this->resize_op_.Run_feature(img, img_fp, this->size, this->size);
this->normalize_op_.Run_feature(&img_fp, this->mean, this->std, this->scale);
std::vector<float> input(1 * 3 * img_fp.rows * img_fp.cols, 0.0f);
this->permute_op_.Run_feature(&img_fp, input.data());
// Prepare input data from image
std::unique_ptr<Tensor> input_tensor(std::move(this->predictor->GetInput(0)));
input_tensor->Resize({1, 3, this->size, this->size});
auto *data0 = input_tensor->mutable_data<float>();
// const float *dimg = reinterpret_cast<const float *>(img_fp.data);
// NeonMeanScale(dimg, data0, img_fp.rows * img_fp.cols);
for (int i = 0; i < input.size(); ++i) {
data0[i] = input[i];
}
auto start = std::chrono::system_clock::now();
// Run predictor
......@@ -38,7 +43,7 @@ void FeatureExtract::RunRecModel(const cv::Mat &img,
// Get output and post process
std::unique_ptr<const Tensor> output_tensor(
std::move(this->predictor->GetOutput(0))); // only one output
auto end = std::chrono::system_clock::now();
auto duration =
std::chrono::duration_cast<std::chrono::microseconds>(end - start);
......@@ -46,7 +51,7 @@ void FeatureExtract::RunRecModel(const cv::Mat &img,
std::chrono::microseconds::period::num /
std::chrono::microseconds::period::den;
// do postprocess
int output_size = 1;
for (auto dim : output_tensor->shape()) {
output_size *= dim;
......@@ -54,63 +59,15 @@ void FeatureExtract::RunRecModel(const cv::Mat &img,
feature.resize(output_size);
output_tensor->CopyToCpu(feature.data());
// postprocess include sqrt or binarize.
FeatureNorm(feature);
return;
}
void FeatureExtract::FeatureNorm(std::vector<float> &feature) {
float feature_sqrt = std::sqrt(std::inner_product(
feature.begin(), feature.end(), feature.begin(), 0.0f));
for (int i = 0; i < feature.size(); ++i)
feature[i] /= feature_sqrt;
}
} // namespace PPShiTu
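`FeatureNorm` above divides the feature vector by its L2 norm, so inner products against the (also normalized) gallery features become cosine similarities. The same operation in NumPy, as a quick reference:

```python
import numpy as np

feature = np.array([3.0, 4.0])
feature /= np.sqrt(np.inner(feature, feature))  # L2 norm is 5.0
print(feature)  # -> [0.6 0.8], unit length
```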
......@@ -27,6 +27,7 @@
#include "include/feature_extractor.h"
#include "include/object_detector.h"
#include "include/preprocess_op.h"
#include "include/utils.h"
#include "include/vector_search.h"
#include "json/json.h"
......@@ -158,6 +159,11 @@ int main(int argc, char **argv) {
<< " [image_dir]>" << std::endl;
return -1;
}
float rec_nms_threshold = 0.05;
if (RT_Config["Global"]["rec_nms_thresold"].isDouble())
rec_nms_threshold = RT_Config["Global"]["rec_nms_thresold"].as<float>();
// Load model and create a object detector
PPShiTu::ObjectDetector det(
RT_Config, RT_Config["Global"]["det_model_path"].as<std::string>(),
......@@ -174,6 +180,7 @@ int main(int argc, char **argv) {
// for vector search
std::vector<float> feature;
std::vector<float> features;
std::vector<int> indeices;
double rec_time;
if (!RT_Config["Global"]["infer_imgs"].as<std::string>().empty() ||
!img_dir.empty()) {
......@@ -208,9 +215,9 @@ int main(int argc, char **argv) {
RT_Config["Global"]["max_det_results"].as<int>(), false, &det);
// add the whole image for recognition to improve recall
PPShiTu::ObjectResult result_whole_img = {
{0, 0, srcimg.cols, srcimg.rows}, 0, 1.0};
det_result.push_back(result_whole_img);
// get rec result
PPShiTu::SearchResult search_result;
......@@ -225,10 +232,18 @@ int main(int argc, char **argv) {
// do vectore search
search_result = searcher.Search(features.data(), det_result.size());
for (int i = 0; i < det_result.size(); ++i) {
det_result[i].confidence = search_result.D[search_result.return_k * i];
}
NMSBoxes(det_result, searcher.GetThreshold(), rec_nms_threshold,
indeices);
PrintResult(img_path, det_result, searcher, search_result);
batch_imgs.clear();
det_result.clear();
features.clear();
feature.clear();
indeices.clear();
}
}
return 0;
......
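The additions to `main` above score every detected box with its best retrieval match and then run NMS over those scores, so overlapping boxes that hit the same gallery item collapse into one result. A simplified sketch of that filtering step (greedy IoU-based NMS; the helpers are illustrative, not the PPShiTu::NMSBoxes implementation):

```python
def iou(a, b):
    # Boxes are (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def rec_nms(boxes, scores, score_threshold, nms_threshold):
    """Keep the highest-scoring boxes, dropping overlaps above nms_threshold IoU."""
    order = sorted((i for i, s in enumerate(scores) if s >= score_threshold),
                   key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= nms_threshold for j in keep):
            keep.append(i)
    return keep
```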
......@@ -13,9 +13,9 @@
// limitations under the License.
#include <sstream>
// for setprecision
#include "include/object_detector.h"
#include <chrono>
#include <iomanip>
#include "include/object_detector.h"
namespace PPShiTu {
......@@ -30,10 +30,10 @@ void ObjectDetector::LoadModel(std::string model_file, int num_theads) {
}
// Visualiztion MaskDetector results
cv::Mat VisualizeResult(const cv::Mat &img,
const std::vector<PPShiTu::ObjectResult> &results,
const std::vector<std::string> &lables,
const std::vector<int> &colormap,
const bool is_rbox = false) {
cv::Mat vis_img = img.clone();
for (int i = 0; i < results.size(); ++i) {
......@@ -75,24 +75,18 @@ cv::Mat VisualizeResult(const cv::Mat& img,
origin.y = results[i].rect[1];
// Configure text background
cv::Rect text_back =
cv::Rect(results[i].rect[0], results[i].rect[1] - text_size.height,
text_size.width, text_size.height);
// Draw text, and background
cv::rectangle(vis_img, text_back, roi_color, -1);
cv::putText(vis_img, text, origin, font_face, font_scale,
cv::Scalar(255, 255, 255), thickness);
}
return vis_img;
}
void ObjectDetector::Preprocess(const cv::Mat &ori_im) {
// Clone the image : keep the original mat for postprocess
cv::Mat im = ori_im.clone();
// cv::cvtColor(im, im, cv::COLOR_BGR2RGB);
......@@ -100,7 +94,7 @@ void ObjectDetector::Preprocess(const cv::Mat& ori_im) {
}
void ObjectDetector::Postprocess(const std::vector<cv::Mat> mats,
std::vector<PPShiTu::ObjectResult> *result,
std::vector<int> bbox_num,
bool is_rbox = false) {
result->clear();
......@@ -156,12 +150,11 @@ void ObjectDetector::Postprocess(const std::vector<cv::Mat> mats,
}
}
void ObjectDetector::Predict(const std::vector<cv::Mat> &imgs, const int warmup,
const int repeats,
std::vector<PPShiTu::ObjectResult> *result,
std::vector<int> *bbox_num,
std::vector<double> *times) {
auto preprocess_start = std::chrono::steady_clock::now();
int batch_size = imgs.size();
......@@ -180,29 +173,29 @@ void ObjectDetector::Predict(const std::vector<cv::Mat>& imgs,
scale_factor_all[bs_idx * 2 + 1] = inputs_.scale_factor_[1];
// TODO: reduce cost time
in_data_all.insert(in_data_all.end(), inputs_.im_data_.begin(),
inputs_.im_data_.end());
}
auto preprocess_end = std::chrono::steady_clock::now();
std::vector<const float *> output_data_list_;
// Prepare input tensor
auto input_names = predictor_->GetInputNames();
for (const auto &tensor_name : input_names) {
auto in_tensor = predictor_->GetInputByName(tensor_name);
if (tensor_name == "image") {
int rh = inputs_.in_net_shape_[0];
int rw = inputs_.in_net_shape_[1];
in_tensor->Resize({batch_size, 3, rh, rw});
auto *inptr = in_tensor->mutable_data<float>();
std::copy_n(in_data_all.data(), in_data_all.size(), inptr);
} else if (tensor_name == "im_shape") {
in_tensor->Resize({batch_size, 2});
auto *inptr = in_tensor->mutable_data<float>();
std::copy_n(im_shape_all.data(), im_shape_all.size(), inptr);
} else if (tensor_name == "scale_factor") {
in_tensor->Resize({batch_size, 2});
auto *inptr = in_tensor->mutable_data<float>();
std::copy_n(scale_factor_all.data(), scale_factor_all.size(), inptr);
}
}
......@@ -216,7 +209,7 @@ void ObjectDetector::Predict(const std::vector<cv::Mat>& imgs,
if (config_.arch_ == "PicoDet") {
for (int j = 0; j < output_names.size(); j++) {
auto output_tensor = predictor_->GetTensor(output_names[j]);
const float *outptr = output_tensor->data<float>();
std::vector<int64_t> output_shape = output_tensor->shape();
output_data_list_.push_back(outptr);
}
......@@ -242,7 +235,7 @@ void ObjectDetector::Predict(const std::vector<cv::Mat>& imgs,
if (config_.arch_ == "PicoDet") {
for (int i = 0; i < output_names.size(); i++) {
auto output_tensor = predictor_->GetTensor(output_names[i]);
const float *outptr = output_tensor->data<float>();
std::vector<int64_t> output_shape = output_tensor->shape();
if (i == 0) {
num_class = output_shape[2];
......@@ -268,16 +261,15 @@ void ObjectDetector::Predict(const std::vector<cv::Mat>& imgs,
std::cerr << "[WARNING] No object detected." << std::endl;
}
output_data_.resize(output_size);
std::copy_n(output_tensor->mutable_data<float>(), output_size,
output_data_.data());
int out_bbox_num_size = 1;
for (int j = 0; j < out_bbox_num_shape.size(); ++j) {
out_bbox_num_size *= out_bbox_num_shape[j];
}
out_bbox_num_data_.resize(out_bbox_num_size);
std::copy_n(out_bbox_num->mutable_data<int>(), out_bbox_num_size,
out_bbox_num_data_.data());
}
// Postprocessing result
......@@ -285,9 +277,8 @@ void ObjectDetector::Predict(const std::vector<cv::Mat>& imgs,
result->clear();
if (config_.arch_ == "PicoDet") {
PPShiTu::PicoDetPostProcess(
result, output_data_list_, config_.fpn_stride_, inputs_.im_shape_,
inputs_.scale_factor_, config_.nms_info_["score_threshold"].as<float>(),
config_.nms_info_["nms_threshold"].as<float>(), num_class, reg_max);
bbox_num->push_back(result->size());
} else {
......@@ -326,4 +317,4 @@ std::vector<int> GenerateColorMap(int num_class) {
return colormap;
}
} // namespace PPShiTu
......@@ -47,9 +47,9 @@ int activation_function_softmax(const _Tp *src, _Tp *dst, int length) {
}
// PicoDet decode
PPShiTu::ObjectResult disPred2Bbox(const float *&dfl_det, int label,
float score, int x, int y, int stride,
std::vector<float> im_shape, int reg_max) {
float ct_x = (x + 0.5) * stride;
float ct_y = (y + 0.5) * stride;
std::vector<float> dis_pred;
......
......@@ -20,7 +20,7 @@
namespace PPShiTu {
void InitInfo::Run(cv::Mat *im, ImageBlob *data) {
data->im_shape_ = {static_cast<float>(im->rows),
static_cast<float>(im->cols)};
data->scale_factor_ = {1., 1.};
......@@ -28,10 +28,10 @@ void InitInfo::Run(cv::Mat* im, ImageBlob* data) {
static_cast<float>(im->cols)};
}
void NormalizeImage::Run(cv::Mat *im, ImageBlob *data) {
double e = 1.0;
if (is_scale_) {
e *= 1. / 255.0;
}
(*im).convertTo(*im, CV_32FC3, e);
for (int h = 0; h < im->rows; h++) {
......@@ -46,35 +46,61 @@ void NormalizeImage::Run(cv::Mat* im, ImageBlob* data) {
}
}
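// In-place normalization for the feature-extraction branch: scale pixel
// values by `scale`, then apply per-channel (x - mean) / std.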
void NormalizeImage::Run_feature(cv::Mat *im, const std::vector<float> &mean,
const std::vector<float> &std, float scale) {
(*im).convertTo(*im, CV_32FC3, scale);
for (int h = 0; h < im->rows; h++) {
for (int w = 0; w < im->cols; w++) {
im->at<cv::Vec3f>(h, w)[0] =
(im->at<cv::Vec3f>(h, w)[0] - mean[0]) / std[0];
im->at<cv::Vec3f>(h, w)[1] =
(im->at<cv::Vec3f>(h, w)[1] - mean[1]) / std[1];
im->at<cv::Vec3f>(h, w)[2] =
(im->at<cv::Vec3f>(h, w)[2] - mean[2]) / std[2];
}
}
}
void Permute::Run(cv::Mat *im, ImageBlob *data) {
(*im).convertTo(*im, CV_32FC3);
int rh = im->rows;
int rw = im->cols;
int rc = im->channels();
(data->im_data_).resize(rc * rh * rw);
float *base = (data->im_data_).data();
for (int i = 0; i < rc; ++i) {
cv::extractChannel(*im, cv::Mat(rh, rw, CV_32FC1, base + i * rh * rw), i);
}
}
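// HWC -> CHW: copy each channel of `im` into a contiguous plane of `data`.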
void Permute::Run_feature(const cv::Mat *im, float *data) {
int rh = im->rows;
int rw = im->cols;
int rc = im->channels();
for (int i = 0; i < rc; ++i) {
cv::extractChannel(*im, cv::Mat(rh, rw, CV_32FC1, data + i * rh * rw), i);
}
}
void Resize::Run(cv::Mat *im, ImageBlob *data) {
auto resize_scale = GenerateScale(*im);
data->im_shape_ = {static_cast<float>(im->cols * resize_scale.first),
static_cast<float>(im->rows * resize_scale.second)};
data->in_net_shape_ = {static_cast<float>(im->cols * resize_scale.first),
static_cast<float>(im->rows * resize_scale.second)};
cv::resize(*im, *im, cv::Size(), resize_scale.first, resize_scale.second,
interp_);
data->im_shape_ = {
static_cast<float>(im->rows),
static_cast<float>(im->cols),
};
data->scale_factor_ = {
resize_scale.second,
resize_scale.first,
};
}
std::pair<float, float> Resize::GenerateScale(const cv::Mat &im) {
std::pair<float, float> resize_scale;
int origin_w = im.cols;
int origin_h = im.rows;
......@@ -101,7 +127,30 @@ std::pair<float, float> Resize::GenerateScale(const cv::Mat& im) {
return resize_scale;
}
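// Resize for the feature-extraction branch: to size x size when size > 0,
// otherwise scale the short side to resize_short_size, keeping aspect ratio.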
void Resize::Run_feature(const cv::Mat &img, cv::Mat &resize_img,
int resize_short_size, int size) {
int resize_h = 0;
int resize_w = 0;
if (size > 0) {
resize_h = size;
resize_w = size;
} else {
int w = img.cols;
int h = img.rows;
float ratio = 1.f;
if (h < w) {
ratio = float(resize_short_size) / float(h);
} else {
ratio = float(resize_short_size) / float(w);
}
resize_h = round(float(h) * ratio);
resize_w = round(float(w) * ratio);
}
cv::resize(img, resize_img, cv::Size(resize_w, resize_h));
}
void PadStride::Run(cv::Mat *im, ImageBlob *data) {
if (stride_ <= 0) {
return;
}
......@@ -110,48 +159,44 @@ void PadStride::Run(cv::Mat* im, ImageBlob* data) {
int rw = im->cols;
int nh = (rh / stride_) * stride_ + (rh % stride_ != 0) * stride_;
int nw = (rw / stride_) * stride_ + (rw % stride_ != 0) * stride_;
cv::copyMakeBorder(*im, *im, 0, nh - rh, 0, nw - rw, cv::BORDER_CONSTANT,
cv::Scalar(0));
data->in_net_shape_ = {
static_cast<float>(im->rows),
static_cast<float>(im->cols),
};
}
void TopDownEvalAffine::Run(cv::Mat *im, ImageBlob *data) {
cv::resize(*im, *im, cv::Size(trainsize_[0], trainsize_[1]), 0, 0, interp_);
// todo: Simd::ResizeBilinear();
data->in_net_shape_ = {
static_cast<float>(trainsize_[1]),
static_cast<float>(trainsize_[0]),
};
}
// Preprocessor op running order
const std::vector<std::string> Preprocessor::RUN_ORDER = {"InitInfo",
"DetTopDownEvalAffine",
"DetResize",
"DetNormalizeImage",
"DetPadStride",
"DetPermute"};
void Preprocessor::Run(cv::Mat* im, ImageBlob* data) {
for (const auto& name : RUN_ORDER) {
const std::vector<std::string> Preprocessor::RUN_ORDER = {
"InitInfo", "DetTopDownEvalAffine", "DetResize",
"DetNormalizeImage", "DetPadStride", "DetPermute"};
void Preprocessor::Run(cv::Mat *im, ImageBlob *data) {
for (const auto &name : RUN_ORDER) {
if (ops_.find(name) != ops_.end()) {
ops_[name]->Run(im, data);
}
}
}
void CropImg(cv::Mat &img, cv::Mat &crop_img, std::vector<int> &area,
std::vector<float> &center, std::vector<float> &scale,
float expandratio) {
int crop_x1 = std::max(0, area[0]);
int crop_y1 = std::max(0, area[1]);
int crop_x2 = std::min(img.cols - 1, area[2]);
int crop_y2 = std::min(img.rows - 1, area[3]);
int center_x = (crop_x1 + crop_x2) / 2.;
int center_y = (crop_y1 + crop_y2) / 2.;
int half_h = (crop_y2 - crop_y1) / 2.;
......@@ -182,4 +227,4 @@ void CropImg(cv::Mat& img,
scale.emplace_back((crop_y2 - crop_y1));
}
} // namespace PPShiTu
......@@ -54,4 +54,53 @@ void nms(std::vector<ObjectResult> &input_boxes, float nms_threshold,
}
}
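// IoU of two axis-aligned boxes stored as [xmin, ymin, xmax, ymax]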
float RectOverlap(const ObjectResult &a, const ObjectResult &b) {
float Aa = (a.rect[2] - a.rect[0] + 1) * (a.rect[3] - a.rect[1] + 1);
float Ab = (b.rect[2] - b.rect[0] + 1) * (b.rect[3] - b.rect[1] + 1);
int iou_w = max(min(a.rect[2], b.rect[2]) - max(a.rect[0], b.rect[0]) + 1, 0);
int iou_h = max(min(a.rect[3], b.rect[3]) - max(a.rect[1], b.rect[1]) + 1, 0);
float Aab = iou_w * iou_h;
return Aab / (Aa + Ab - Aab);
}
inline void
GetMaxScoreIndex(const std::vector<ObjectResult> &det_result,
const float threshold,
std::vector<std::pair<float, int>> &score_index_vec) {
// Generate index score pairs.
for (size_t i = 0; i < det_result.size(); ++i) {
if (det_result[i].confidence > threshold) {
score_index_vec.push_back(std::make_pair(det_result[i].confidence, i));
}
}
// Sort the score pair according to the scores in descending order
std::stable_sort(score_index_vec.begin(), score_index_vec.end(),
SortScorePairDescend<int>);
}
void NMSBoxes(const std::vector<ObjectResult> det_result,
const float score_threshold, const float nms_threshold,
std::vector<int> &indices) {
// Get top_k scores (with corresponding indices).
std::vector<std::pair<float, int>> score_index_vec;
GetMaxScoreIndex(det_result, score_threshold, score_index_vec);
// Do nms
indices.clear();
for (size_t i = 0; i < score_index_vec.size(); ++i) {
const int idx = score_index_vec[i].second;
bool keep = true;
for (int k = 0; k < (int)indices.size() && keep; ++k) {
const int kept_idx = indices[k];
float overlap = RectOverlap(det_result[idx], det_result[kept_idx]);
keep = overlap <= nms_threshold;
}
if (keep)
indices.push_back(idx);
}
}
} // namespace PPShiTu
......@@ -64,4 +64,4 @@ const SearchResult &VectorSearch::Search(float *feature, int query_number) {
const std::string &VectorSearch::GetLabel(faiss::Index::idx_t ind) {
return this->id_map.at(ind);
}
} // namespace PPShiTu
# Paddle2ONNX: Model Conversion and Prediction
This chapter describes how to convert the ResNet50_vd model to an ONNX model and run prediction based on the ONNX engine.
## Contents
- [Paddle2ONNX: Model Conversion and Prediction](#paddle2onnx-model-conversion-and-prediction)
- [1. Environment Preparation](#1-environment-preparation)
- [2. Model Conversion](#2-model-conversion)
- [3. ONNX Prediction](#3-onnx-prediction)
## 1. Environment Preparation
You need to prepare both the Paddle2ONNX model conversion environment and the ONNX prediction environment.
Paddle2ONNX supports converting PaddlePaddle inference models into the ONNX model format; operator export is currently stable for ONNX Opset 9~11.
For more details, please refer to [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX#paddle2onnx).
- Install Paddle2ONNX
```shell
python3.7 -m pip install paddle2onnx
```
- Install the ONNX inference engine
```shell
python3.7 -m pip install onnxruntime
```
The following takes ResNet50_vd as an example to describe how to convert a PaddlePaddle inference model into an ONNX model and run prediction based on the ONNX engine.
## 2. Model Conversion
- Download the ResNet50_vd inference model
```shell
cd deploy
mkdir models && cd models
wget -nc https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/ResNet50_vd_infer.tar && tar xf ResNet50_vd_infer.tar
cd ..
```
- Model conversion
Use Paddle2ONNX to convert the Paddle static-graph model into the ONNX model format:
```shell
paddle2onnx --model_dir=./models/ResNet50_vd_infer/ \
--model_filename=inference.pdmodel \
--params_filename=inference.pdiparams \
--save_file=./models/ResNet50_vd_infer/inference.onnx \
--opset_version=10 \
--enable_onnx_checker=True
```
After conversion, the generated ONNX model `inference.onnx` is saved under `./models/ResNet50_vd_infer/`.
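As an optional sanity check (not part of the original tutorial), the exported file can be loaded and validated with the `onnx` Python package; the path below matches the output location of the command above:
```python
import onnx

# Load the exported model and run the structural checker
model = onnx.load("./models/ResNet50_vd_infer/inference.onnx")
onnx.checker.check_model(model)
print("opset:", model.opset_import[0].version)  # should print 10 for this export
```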
## 3. ONNX Prediction
Run the following command:
```shell
python3.7 python/predict_cls.py \
-c configs/inference_cls.yaml \
-o Global.use_onnx=True \
......
# Paddle2ONNX: Converting To ONNX and Deployment
This section introduces how to convert the Paddle inference model of ResNet50_vd to an ONNX model and deploy it based on the ONNX engine.
## 1. Installation
First, you need to install Paddle2ONNX and onnxruntime. Paddle2ONNX is a toolkit for converting Paddle inference models to ONNX models. Please refer to [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX/blob/develop/README_en.md) for more information.
- Paddle2ONNX Installation
```
python3.7 -m pip install paddle2onnx
```
- ONNX Runtime Installation
```
python3.7 -m pip install onnxruntime
```
## 2. Converting to ONNX
Download the Paddle Inference Model ResNet50_vd:
```
cd deploy
mkdir models && cd models
wget -nc https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/ResNet50_vd_infer.tar && tar xf ResNet50_vd_infer.tar
cd ..
```
Converting to ONNX model:
```
paddle2onnx --model_dir=./models/ResNet50_vd_infer/ \
--model_filename=inference.pdmodel \
--params_filename=inference.pdiparams \
--save_file=./models/ResNet50_vd_infer/inference.onnx \
--opset_version=10 \
--enable_onnx_checker=True
```
After running the above command, the converted ONNX model file will be saved in `./models/ResNet50_vd_infer/`.
## 3. Deployment
To deploy with the ONNX model, run the command shown below.
```
python3.7 python/predict_cls.py \
-c configs/inference_cls.yaml \
-o Global.use_onnx=True \
-o Global.use_gpu=False \
-o Global.inference_model_dir=./models/ResNet50_vd_infer
```
The prediction results:
```
ILSVRC2012_val_00000010.jpeg: class id(s): [153, 204, 229, 332, 155], score(s): [0.69, 0.10, 0.02, 0.01, 0.01], label_name(s): ['Maltese dog, Maltese terrier, Maltese', 'Lhasa, Lhasa apso', 'Old English sheepdog, bobtail', 'Angora, Angora rabbit', 'Shih-Tzu']
```
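For reference, the exported model can also be called directly through the `onnxruntime` Python API, bypassing the PaddleClas scripts. The sketch below assumes the default preprocessing of `configs/inference_cls.yaml` (resize short side to 256, center crop to 224, scale by 1/255, ImageNet mean/std normalization, HWC to CHW); the image path is illustrative:
```python
import cv2
import numpy as np
import onnxruntime as ort

def preprocess(path):
    img = cv2.imread(path)[:, :, ::-1].astype(np.float32)  # BGR -> RGB
    h, w = img.shape[:2]
    s = 256.0 / min(h, w)                                  # resize short side to 256
    img = cv2.resize(img, (int(round(w * s)), int(round(h * s))))
    h, w = img.shape[:2]
    top, left = (h - 224) // 2, (w - 224) // 2             # center crop 224 x 224
    img = img[top:top + 224, left:left + 224] / 255.0
    img = (img - [0.485, 0.456, 0.406]) / [0.229, 0.224, 0.225]
    return img.transpose(2, 0, 1)[None].astype(np.float32)  # HWC -> NCHW

sess = ort.InferenceSession("./models/ResNet50_vd_infer/inference.onnx")
input_name = sess.get_inputs()[0].name
preds = sess.run(None, {input_name: preprocess("./images/ILSVRC2012_val_00000010.jpeg")})[0]
print("class id:", int(np.argmax(preds[0])))
```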
......@@ -30,4 +30,4 @@ op:
client_type: local_predictor
#Fetch result list; use the alias_name of fetch_var in client_config
fetch_list: ["prediction"]
// Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "core/general-server/op/general_clas_op.h"
#include "core/predictor/framework/infer.h"
#include "core/predictor/framework/memory.h"
#include "core/predictor/framework/resource.h"
#include "core/util/include/timer.h"
#include <algorithm>
#include <iostream>
#include <memory>
#include <sstream>
namespace baidu {
namespace paddle_serving {
namespace serving {
using baidu::paddle_serving::Timer;
using baidu::paddle_serving::predictor::MempoolWrapper;
using baidu::paddle_serving::predictor::general_model::Tensor;
using baidu::paddle_serving::predictor::general_model::Response;
using baidu::paddle_serving::predictor::general_model::Request;
using baidu::paddle_serving::predictor::InferManager;
using baidu::paddle_serving::predictor::PaddleGeneralModelConfig;
int GeneralClasOp::inference() {
VLOG(2) << "Going to run inference";
const std::vector<std::string> pre_node_names = pre_names();
if (pre_node_names.size() != 1) {
LOG(ERROR) << "This op(" << op_name()
<< ") can only have one predecessor op, but received "
<< pre_node_names.size();
return -1;
}
const std::string pre_name = pre_node_names[0];
const GeneralBlob *input_blob = get_depend_argument<GeneralBlob>(pre_name);
if (!input_blob) {
LOG(ERROR) << "input_blob is nullptr,error";
return -1;
}
uint64_t log_id = input_blob->GetLogId();
VLOG(2) << "(logid=" << log_id << ") Get precedent op name: " << pre_name;
GeneralBlob *output_blob = mutable_data<GeneralBlob>();
if (!output_blob) {
LOG(ERROR) << "output_blob is nullptr,error";
return -1;
}
output_blob->SetLogId(log_id);
if (!input_blob) {
LOG(ERROR) << "(logid=" << log_id
<< ") Failed mutable depended argument, op:" << pre_name;
return -1;
}
const TensorVector *in = &input_blob->tensor_vector;
TensorVector *out = &output_blob->tensor_vector;
int batch_size = input_blob->_batch_size;
output_blob->_batch_size = batch_size;
VLOG(2) << "(logid=" << log_id << ") infer batch size: " << batch_size;
Timer timeline;
int64_t start = timeline.TimeStampUS();
timeline.Start();
// only support string type
char *total_input_ptr = static_cast<char *>(in->at(0).data.data());
std::string base64str = total_input_ptr;
cv::Mat img = Base2Mat(base64str);
// RGB2BGR
cv::cvtColor(img, img, cv::COLOR_BGR2RGB);
// Resize
cv::Mat resize_img;
resize_op_.Run(img, resize_img, resize_short_size_);
// CenterCrop
crop_op_.Run(resize_img, crop_size_);
// Normalize
normalize_op_.Run(&resize_img, mean_, scale_, is_scale_);
// Permute
std::vector<float> input(1 * 3 * resize_img.rows * resize_img.cols, 0.0f);
permute_op_.Run(&resize_img, input.data());
float maxValue = *max_element(input.begin(), input.end());
float minValue = *min_element(input.begin(), input.end());
TensorVector *real_in = new TensorVector();
if (!real_in) {
LOG(ERROR) << "real_in is nullptr,error";
return -1;
}
std::vector<int> input_shape;
int in_num = 0;
void *databuf_data = NULL;
char *databuf_char = NULL;
size_t databuf_size = 0;
input_shape = {1, 3, resize_img.rows, resize_img.cols};
in_num = std::accumulate(input_shape.begin(), input_shape.end(), 1,
std::multiplies<int>());
databuf_size = in_num * sizeof(float);
databuf_data = MempoolWrapper::instance().malloc(databuf_size);
if (!databuf_data) {
LOG(ERROR) << "Malloc failed, size: " << databuf_size;
return -1;
}
memcpy(databuf_data, input.data(), databuf_size);
databuf_char = reinterpret_cast<char *>(databuf_data);
paddle::PaddleBuf paddleBuf(databuf_char, databuf_size);
paddle::PaddleTensor tensor_in;
tensor_in.name = in->at(0).name;
tensor_in.dtype = paddle::PaddleDType::FLOAT32;
tensor_in.shape = {1, 3, resize_img.rows, resize_img.cols};
tensor_in.lod = in->at(0).lod;
tensor_in.data = paddleBuf;
real_in->push_back(tensor_in);
if (InferManager::instance().infer(engine_name().c_str(), real_in, out,
batch_size)) {
LOG(ERROR) << "(logid=" << log_id
<< ") Failed do infer in fluid model: " << engine_name().c_str();
return -1;
}
int64_t end = timeline.TimeStampUS();
CopyBlobInfo(input_blob, output_blob);
AddBlobInfo(output_blob, start);
AddBlobInfo(output_blob, end);
return 0;
}
cv::Mat GeneralClasOp::Base2Mat(std::string &base64_data) {
cv::Mat img;
std::string s_mat;
s_mat = base64Decode(base64_data.data(), base64_data.size());
std::vector<char> base64_img(s_mat.begin(), s_mat.end());
img = cv::imdecode(base64_img, cv::IMREAD_COLOR); // CV_LOAD_IMAGE_COLOR
return img;
}
std::string GeneralClasOp::base64Decode(const char *Data, int DataByte) {
const char DecodeTable[] = {
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
62, // '+'
0, 0, 0,
63, // '/'
52, 53, 54, 55, 56, 57, 58, 59, 60, 61, // '0'-'9'
0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, // 'A'-'Z'
0, 0, 0, 0, 0, 0, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36,
37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, // 'a'-'z'
};
std::string strDecode;
int nValue;
int i = 0;
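  // Each iteration consumes four base64 characters and emits up to three
  // bytes; '=' padding truncates the output, CR/LF characters are skipped.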
while (i < DataByte) {
if (*Data != '\r' && *Data != '\n') {
nValue = DecodeTable[*Data++] << 18;
nValue += DecodeTable[*Data++] << 12;
strDecode += (nValue & 0x00FF0000) >> 16;
if (*Data != '=') {
nValue += DecodeTable[*Data++] << 6;
strDecode += (nValue & 0x0000FF00) >> 8;
if (*Data != '=') {
nValue += DecodeTable[*Data++];
strDecode += nValue & 0x000000FF;
}
}
i += 4;
} else // skip carriage return and line feed
{
Data++;
i++;
}
}
return strDecode;
}
DEFINE_OP(GeneralClasOp);
} // namespace serving
} // namespace paddle_serving
} // namespace baidu
// Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#pragma once
#include "core/general-server/general_model_service.pb.h"
#include "core/general-server/op/general_infer_helper.h"
#include "core/predictor/tools/pp_shitu_tools/preprocess_op.h"
#include "paddle_inference_api.h" // NOLINT
#include <string>
#include <vector>
#include "opencv2/core.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/imgproc.hpp"
#include <chrono>
#include <iomanip>
#include <iostream>
#include <ostream>
#include <vector>
#include <cstring>
#include <fstream>
#include <numeric>
namespace baidu {
namespace paddle_serving {
namespace serving {
class GeneralClasOp
: public baidu::paddle_serving::predictor::OpWithChannel<GeneralBlob> {
public:
typedef std::vector<paddle::PaddleTensor> TensorVector;
DECLARE_OP(GeneralClasOp);
int inference();
private:
// clas preprocess
std::vector<float> mean_ = {0.485f, 0.456f, 0.406f};
std::vector<float> scale_ = {0.229f, 0.224f, 0.225f};
bool is_scale_ = true;
int resize_short_size_ = 256;
int crop_size_ = 224;
PaddleClas::ResizeImg resize_op_;
PaddleClas::Normalize normalize_op_;
PaddleClas::Permute permute_op_;
PaddleClas::CenterCropImg crop_op_;
// read pics
cv::Mat Base2Mat(std::string &base64_data);
std::string base64Decode(const char *Data, int DataByte);
};
} // namespace serving
} // namespace paddle_serving
} // namespace baidu
// Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "opencv2/core.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/imgproc.hpp"
#include "paddle_api.h"
#include "paddle_inference_api.h"
#include <chrono>
#include <iomanip>
#include <iostream>
#include <ostream>
#include <vector>
#include <cstring>
#include <fstream>
#include <math.h>
#include <numeric>
#include "preprocess_op.h"
namespace Feature {
void Permute::Run(const cv::Mat *im, float *data) {
int rh = im->rows;
int rw = im->cols;
int rc = im->channels();
for (int i = 0; i < rc; ++i) {
cv::extractChannel(*im, cv::Mat(rh, rw, CV_32FC1, data + i * rh * rw), i);
}
}
void Normalize::Run(cv::Mat *im, const std::vector<float> &mean,
const std::vector<float> &std, float scale) {
(*im).convertTo(*im, CV_32FC3, scale);
for (int h = 0; h < im->rows; h++) {
for (int w = 0; w < im->cols; w++) {
im->at<cv::Vec3f>(h, w)[0] =
(im->at<cv::Vec3f>(h, w)[0] - mean[0]) / std[0];
im->at<cv::Vec3f>(h, w)[1] =
(im->at<cv::Vec3f>(h, w)[1] - mean[1]) / std[1];
im->at<cv::Vec3f>(h, w)[2] =
(im->at<cv::Vec3f>(h, w)[2] - mean[2]) / std[2];
}
}
}
void CenterCropImg::Run(cv::Mat &img, const int crop_size) {
int resize_w = img.cols;
int resize_h = img.rows;
int w_start = int((resize_w - crop_size) / 2);
int h_start = int((resize_h - crop_size) / 2);
cv::Rect rect(w_start, h_start, crop_size, crop_size);
img = img(rect);
}
void ResizeImg::Run(const cv::Mat &img, cv::Mat &resize_img,
int resize_short_size, int size) {
int resize_h = 0;
int resize_w = 0;
if (size > 0) {
resize_h = size;
resize_w = size;
} else {
int w = img.cols;
int h = img.rows;
float ratio = 1.f;
if (h < w) {
ratio = float(resize_short_size) / float(h);
} else {
ratio = float(resize_short_size) / float(w);
}
resize_h = round(float(h) * ratio);
resize_w = round(float(w) * ratio);
}
cv::resize(img, resize_img, cv::Size(resize_w, resize_h));
}
} // namespace Feature
namespace PaddleClas {
void Permute::Run(const cv::Mat *im, float *data) {
int rh = im->rows;
int rw = im->cols;
int rc = im->channels();
for (int i = 0; i < rc; ++i) {
cv::extractChannel(*im, cv::Mat(rh, rw, CV_32FC1, data + i * rh * rw), i);
}
}
void Normalize::Run(cv::Mat *im, const std::vector<float> &mean,
const std::vector<float> &scale, const bool is_scale) {
double e = 1.0;
if (is_scale) {
e /= 255.0;
}
(*im).convertTo(*im, CV_32FC3, e);
for (int h = 0; h < im->rows; h++) {
for (int w = 0; w < im->cols; w++) {
im->at<cv::Vec3f>(h, w)[0] =
(im->at<cv::Vec3f>(h, w)[0] - mean[0]) / scale[0];
im->at<cv::Vec3f>(h, w)[1] =
(im->at<cv::Vec3f>(h, w)[1] - mean[1]) / scale[1];
im->at<cv::Vec3f>(h, w)[2] =
(im->at<cv::Vec3f>(h, w)[2] - mean[2]) / scale[2];
}
}
}
void CenterCropImg::Run(cv::Mat &img, const int crop_size) {
int resize_w = img.cols;
int resize_h = img.rows;
int w_start = int((resize_w - crop_size) / 2);
int h_start = int((resize_h - crop_size) / 2);
cv::Rect rect(w_start, h_start, crop_size, crop_size);
img = img(rect);
}
void ResizeImg::Run(const cv::Mat &img, cv::Mat &resize_img,
int resize_short_size) {
int w = img.cols;
int h = img.rows;
float ratio = 1.f;
if (h < w) {
ratio = float(resize_short_size) / float(h);
} else {
ratio = float(resize_short_size) / float(w);
}
int resize_h = round(float(h) * ratio);
int resize_w = round(float(w) * ratio);
cv::resize(img, resize_img, cv::Size(resize_w, resize_h));
}
} // namespace PaddleClas
// Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#pragma once
#include "opencv2/core.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/imgproc.hpp"
#include <chrono>
#include <iomanip>
#include <iostream>
#include <ostream>
#include <vector>
#include <cstring>
#include <fstream>
#include <numeric>
namespace Feature {
class Normalize {
public:
virtual void Run(cv::Mat *im, const std::vector<float> &mean,
const std::vector<float> &std, float scale);
};
// RGB -> CHW
class Permute {
public:
virtual void Run(const cv::Mat *im, float *data);
};
class CenterCropImg {
public:
virtual void Run(cv::Mat &im, const int crop_size = 224);
};
class ResizeImg {
public:
virtual void Run(const cv::Mat &img, cv::Mat &resize_img, int max_size_len,
int size = 0);
};
} // namespace Feature
namespace PaddleClas {
class Normalize {
public:
virtual void Run(cv::Mat *im, const std::vector<float> &mean,
const std::vector<float> &scale, const bool is_scale = true);
};
// RGB -> CHW
class Permute {
public:
virtual void Run(const cv::Mat *im, float *data);
};
class CenterCropImg {
public:
virtual void Run(cv::Mat &im, const int crop_size = 224);
};
class ResizeImg {
public:
virtual void Run(const cv::Mat &img, cv::Mat &resize_img, int max_size_len);
};
} // namespace PaddleClas
feed_var {
name: "x"
alias_name: "x"
is_lod_tensor: false
feed_type: 1
shape: 3
shape: 224
shape: 224
}
feed_var {
name: "boxes"
alias_name: "boxes"
is_lod_tensor: false
feed_type: 1
shape: 6
}
fetch_var {
name: "save_infer_model/scale_0.tmp_1"
alias_name: "features"
is_lod_tensor: false
fetch_type: 1
shape: 512
}
fetch_var {
name: "boxes"
alias_name: "boxes"
is_lod_tensor: false
fetch_type: 1
shape: 6
}
feed_var {
name: "x"
alias_name: "x"
is_lod_tensor: false
feed_type: 1
shape: 3
shape: 224
shape: 224
}
feed_var {
name: "boxes"
alias_name: "boxes"
is_lod_tensor: false
feed_type: 1
shape: 6
}
fetch_var {
name: "save_infer_model/scale_0.tmp_1"
alias_name: "features"
is_lod_tensor: false
fetch_type: 1
shape: 512
}
fetch_var {
name: "boxes"
alias_name: "boxes"
is_lod_tensor: false
fetch_type: 1
shape: 6
}
feed_var {
name: "im_shape"
alias_name: "im_shape"
is_lod_tensor: false
feed_type: 1
shape: 2
}
feed_var {
name: "image"
alias_name: "image"
is_lod_tensor: false
feed_type: 7
shape: -1
shape: -1
shape: 3
}
fetch_var {
name: "save_infer_model/scale_0.tmp_1"
alias_name: "save_infer_model/scale_0.tmp_1"
is_lod_tensor: true
fetch_type: 1
shape: -1
}
fetch_var {
name: "save_infer_model/scale_1.tmp_1"
alias_name: "save_infer_model/scale_1.tmp_1"
is_lod_tensor: false
fetch_type: 2
}
feed_var {
name: "im_shape"
alias_name: "im_shape"
is_lod_tensor: false
feed_type: 1
shape: 2
}
feed_var {
name: "image"
alias_name: "image"
is_lod_tensor: false
feed_type: 7
shape: -1
shape: -1
shape: 3
}
fetch_var {
name: "save_infer_model/scale_0.tmp_1"
alias_name: "save_infer_model/scale_0.tmp_1"
is_lod_tensor: true
fetch_type: 1
shape: -1
}
fetch_var {
name: "save_infer_model/scale_1.tmp_1"
alias_name: "save_infer_model/scale_1.tmp_1"
is_lod_tensor: false
fetch_type: 2
}
......@@ -12,16 +12,20 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import base64
import time
from paddle_serving_client import Client
def bytes_to_base64(image: bytes) -> str:
"""encode bytes into base64 string
"""
return base64.b64encode(image).decode('utf8')
client = Client()
client.load_client_config("./ResNet50_vd_serving/serving_server_conf.prototxt")
client.load_client_config("./ResNet50_client/serving_client_conf.prototxt")
client.connect(["127.0.0.1:9292"])
label_dict = {}
......@@ -31,22 +35,17 @@ with open("imagenet.label") as fin:
label_dict[label_idx] = line.strip()
label_idx += 1
image_file = "./daisy.jpg"
for i in range(1):
start = time.time()
with open(image_file, 'rb') as img_file:
image_data = img_file.read()
image = bytes_to_base64(image_data)
fetch_dict = client.predict(
feed={"inputs": image}, fetch=["prediction"], batch=False)
prob = max(fetch_dict["prediction"][0])
label = label_dict[fetch_dict["prediction"][0].tolist().index(
prob)].strip().replace(",", "")
print("prediction: {}, probability: {}".format(label, prob))
end = time.time()
print(end - start)
......@@ -64,9 +64,17 @@ class ThreshOutput(object):
for idx, probs in enumerate(x):
score = probs[1]
if score < self.threshold:
result = {"class_ids": [0], "scores": [1 - score], "label_names": [self.label_0]}
result = {
"class_ids": [0],
"scores": [1 - score],
"label_names": [self.label_0]
}
else:
result = {"class_ids": [1], "scores": [score], "label_names": [self.label_1]}
result = {
"class_ids": [1],
"scores": [score],
"label_names": [self.label_1]
}
if file_names is not None:
result["file_name"] = file_names[idx]
y.append(result)
......@@ -179,3 +187,136 @@ class Binarize(object):
byte[:, i:i + 1] = np.dot(x[:, i * 8:(i + 1) * 8], self.unit)
return byte
class PersonAttribute(object):
def __init__(self,
threshold=0.5,
glasses_threshold=0.3,
hold_threshold=0.6):
self.threshold = threshold
self.glasses_threshold = glasses_threshold
self.hold_threshold = hold_threshold
def __call__(self, batch_preds, file_names=None):
# postprocess output of predictor
age_list = ['AgeLess18', 'Age18-60', 'AgeOver60']
direct_list = ['Front', 'Side', 'Back']
bag_list = ['HandBag', 'ShoulderBag', 'Backpack']
upper_list = ['UpperStride', 'UpperLogo', 'UpperPlaid', 'UpperSplice']
lower_list = [
'LowerStripe', 'LowerPattern', 'LongCoat', 'Trousers', 'Shorts',
'Skirt&Dress'
]
batch_res = []
for res in batch_preds:
res = res.tolist()
label_res = []
# gender
gender = 'Female' if res[22] > self.threshold else 'Male'
label_res.append(gender)
# age
age = age_list[np.argmax(res[19:22])]
label_res.append(age)
# direction
direction = direct_list[np.argmax(res[23:])]
label_res.append(direction)
# glasses
glasses = 'Glasses: '
if res[1] > self.glasses_threshold:
glasses += 'True'
else:
glasses += 'False'
label_res.append(glasses)
# hat
hat = 'Hat: '
if res[0] > self.threshold:
hat += 'True'
else:
hat += 'False'
label_res.append(hat)
# hold obj
hold_obj = 'HoldObjectsInFront: '
if res[18] > self.hold_threshold:
hold_obj += 'True'
else:
hold_obj += 'False'
label_res.append(hold_obj)
# bag
bag = bag_list[np.argmax(res[15:18])]
bag_score = res[15 + np.argmax(res[15:18])]
bag_label = bag if bag_score > self.threshold else 'No bag'
label_res.append(bag_label)
# upper
upper_res = res[4:8]
upper_label = 'Upper:'
sleeve = 'LongSleeve' if res[3] > res[2] else 'ShortSleeve'
upper_label += ' {}'.format(sleeve)
for i, r in enumerate(upper_res):
if r > self.threshold:
upper_label += ' {}'.format(upper_list[i])
label_res.append(upper_label)
# lower
lower_res = res[8:14]
lower_label = 'Lower: '
has_lower = False
for i, l in enumerate(lower_res):
if l > self.threshold:
lower_label += ' {}'.format(lower_list[i])
has_lower = True
if not has_lower:
lower_label += ' {}'.format(lower_list[np.argmax(lower_res)])
label_res.append(lower_label)
# shoe
shoe = 'Boots' if res[14] > self.threshold else 'No boots'
label_res.append(shoe)
threshold_list = [0.5] * len(res)
threshold_list[1] = self.glasses_threshold
threshold_list[18] = self.hold_threshold
pred_res = (np.array(res) > np.array(threshold_list)
).astype(np.int8).tolist()
batch_res.append({"attributes": label_res, "output": pred_res})
return batch_res
class VehicleAttribute(object):
def __init__(self, color_threshold=0.5, type_threshold=0.5):
self.color_threshold = color_threshold
self.type_threshold = type_threshold
self.color_list = [
"yellow", "orange", "green", "gray", "red", "blue", "white",
"golden", "brown", "black"
]
self.type_list = [
"sedan", "suv", "van", "hatchback", "mpv", "pickup", "bus",
"truck", "estate"
]
def __call__(self, batch_preds, file_names=None):
# postprocess output of predictor
batch_res = []
for res in batch_preds:
res = res.tolist()
label_res = []
color_idx = np.argmax(res[:10])
type_idx = np.argmax(res[10:])
if res[color_idx] >= self.color_threshold:
color_info = f"Color: ({self.color_list[color_idx]}, prob: {res[color_idx]})"
else:
color_info = "Color unknown"
if res[type_idx + 10] >= self.type_threshold:
type_info = f"Type: ({self.type_list[type_idx]}, prob: {res[type_idx + 10]})"
else:
type_info = "Type unknown"
label_res = f"{color_info}, {type_info}"
threshold_list = [self.color_threshold
] * 10 + [self.type_threshold] * 9
pred_res = (np.array(res) > np.array(threshold_list)
).astype(np.int8).tolist()
batch_res.append({"attributes": label_res, "output": pred_res})
return batch_res
......@@ -138,13 +138,20 @@ def main(config):
continue
batch_results = cls_predictor.predict(batch_imgs)
for number, result_dict in enumerate(batch_results):
if "PersonAttribute" in config[
"PostProcess"] or "VehicleAttribute" in config[
"PostProcess"]:
filename = batch_names[number]
print("{}:\t {}".format(filename, result_dict))
else:
filename = batch_names[number]
clas_ids = result_dict["class_ids"]
scores_str = "[{}]".format(", ".join("{:.2f}".format(
r) for r in result_dict["scores"]))
label_names = result_dict["label_names"]
print(
"{}:\tclass id(s): {}, score(s): {}, label_name(s): {}".
format(filename, clas_ids, scores_str, label_names))
batch_imgs = []
batch_names = []
if cls_predictor.benchmark:
......
......@@ -41,8 +41,11 @@ def main():
'inference.pdmodel')) and os.path.exists(
os.path.join(config["Global"]["save_inference_dir"],
'inference.pdiparams'))
if "Query" in config["DataLoader"]["Eval"]:
config["DataLoader"]["Eval"] = config["DataLoader"]["Eval"]["Query"]
config["DataLoader"]["Eval"]["sampler"]["batch_size"] = 1
config["DataLoader"]["Eval"]["loader"]["num_workers"] = 0
init_logger()
device = paddle.set_device("cpu")
train_dataloader = build_dataloader(config["DataLoader"], "Eval", device,
......@@ -67,6 +70,7 @@ def main():
quantize_model_path=os.path.join(
config["Global"]["save_inference_dir"], "quant_post_static_model"),
sample_generator=sample_generator(train_dataloader),
batch_size=config["DataLoader"]["Eval"]["sampler"]["batch_size"],
batch_nums=10)
......
# PULC Classification Model of Language
------
## Catalogue
- [1. Introduction](#1)
- [2. Quick Start](#2)
- [2.1 PaddlePaddle Installation](#2.1)
- [2.2 PaddleClas Installation](#2.2)
- [2.3 Prediction](#2.3)
- [3. Training, Evaluation and Inference](#3)
- [3.1 Installation](#3.1)
- [3.2 Dataset](#3.2)
- [3.2.1 Dataset Introduction](#3.2.1)
- [3.2.2 Getting Dataset](#3.2.2)
- [3.3 Training](#3.3)
- [3.4 Evaluation](#3.4)
- [3.5 Inference](#3.5)
- [4. Model Compression](#4)
- [4.1 SKL-UGI Knowledge Distillation](#4.1)
- [4.1.1 Teacher Model Training](#4.1.1)
- [4.1.2 Knowledge Distillation Training](#4.1.2)
- [5. Hyperparameters Searching](#5)
- [6. Inference Deployment](#6)
- [6.1 Getting Paddle Inference Model](#6.1)
- [6.1.1 Exporting Paddle Inference Model](#6.1.1)
- [6.1.2 Downloading Inference Model](#6.1.2)
- [6.2 Prediction with Python](#6.2)
- [6.2.1 Image Prediction](#6.2.1)
- [6.2.2 Images Prediction](#6.2.2)
- [6.3 Deployment with C++](#6.3)
- [6.4 Deployment as Service](#6.4)
- [6.5 Deployment on Mobile](#6.5)
- [6.6 Converting To ONNX and Deployment](#6.6)
<a name="1"></a>
## 1. Introduction
This case provides a way for users to quickly build a lightweight, high-precision and practical model for classifying the language of text in images, using PaddleClas PULC (Practical Ultra Lightweight image Classification). The model can be widely used in scenarios involving multilingual OCR processing, such as finance and government affairs.
The following table lists the relevant metrics of the models. The first two rows are the results of using SwinTransformer_tiny and MobileNetV3_small_x0_35 as backbones. The third to sixth rows are the results of replacing the backbone with PPLCNet_x1_0 and then progressively adding the SSLD pretrained model, the EDA strategy, and the SKL-UGI knowledge distillation strategy. When the backbone is replaced with PPLCNet_x1_0, the input shape of the model is changed to [192, 48] and the stride of the network is changed to [2, [2, 1], [2, 1], [2, 1]].
| Backbone | Top1-Acc(%) | Latency(ms) | Size(M)| Training Strategy |
| ----------------------- | --------- | -------- | ------- | ---------------------------------------------- |
| SwinTransformer_tiny     | 98.12     | 89.09    | 111     | using ImageNet pretrained model                |
| MobileNetV3_small_x0_35 | 95.92 | 2.98 | 3.7 | using ImageNet pretrained model |
| PPLCNet_x1_0 | 98.35 | 2.58 | 7.1 | using ImageNet pretrained model |
| PPLCNet_x1_0 | 98.7 | 2.58 | 7.1 | using SSLD pretrained model |
| PPLCNet_x1_0 | 99.12 | 2.58 | 7.1 | using SSLD pretrained model + EDA strategy |
| **PPLCNet_x1_0** | **99.26** | **2.58** | **7.1** | using SSLD pretrained model + EDA strategy + SKL-UGI knowledge distillation strategy|
It can be seen that high accuracy is achieved when the backbone is SwinTransformer_tiny, but the speed is slow. Replacing the backbone with the lightweight model MobileNetV3_small_x0_35 greatly improves the speed, but the accuracy drops significantly. Replacing the backbone with the faster PPLCNet_x1_0 and adjusting the input shape and stride of the network yields an accuracy 2.43 percentage points higher than MobileNetV3_small_x0_35, while being more than 20% faster. On top of this, using the SSLD pretrained model improves the accuracy by about 0.35 percentage points without affecting the inference speed; additionally applying the EDA strategy increases the accuracy by another 0.42 percentage points; finally, SKL-UGI knowledge distillation further improves the accuracy by 0.14 percentage points. At this point, the accuracy is higher than that of SwinTransformer_tiny while the speed is much faster. The training method and deployment instructions of PULC will be introduced in detail below.
**Note**:
* The Latency is tested on Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz. The MKLDNN is enabled and the number of threads is 10.
* About PP-LCNet, please refer to [PP-LCNet Introduction](../models/PP-LCNet_en.md) and [PP-LCNet Paper](https://arxiv.org/abs/2109.15099).
<a name="2"></a>
## 2. Quick Start
<a name="2.1"></a>
### 2.1 PaddlePaddle Installation
- Run the following command to install if CUDA9 or CUDA10 is available.
```bash
python3 -m pip install paddlepaddle-gpu -i https://mirror.baidu.com/pypi/simple
```
- Run the following command to install if GPU device is unavailable.
```bash
python3 -m pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple
```
Please refer to [PaddlePaddle Installation](https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/en/install/pip/linux-pip_en.html) for more information about installation, such as other versions.
<a name="2.2"></a>
### 2.2 PaddleClas wheel Installation
Install the PaddleClas wheel with the following command:
```bash
pip3 install paddleclas
```
<a name="2.3"></a>
### 2.3 Prediction
First, please click [here](https://paddleclas.bj.bcebos.com/data/PULC/pulc_demo_imgs.zip) to download and unzip to get the test demo images.
* Prediction with CLI
```bash
paddleclas --model_name=language_classification --infer_imgs=pulc_demo_imgs/language_classification/word_35404.png
```
Results:
```
>>> result
class_ids: [4, 6], scores: [0.88672, 0.01434], label_names: ['japan', 'korean'], filename: pulc_demo_imgs/language_classification/word_35404.png
Predict complete!
```
**Note**: If you want to test other images, you only need to specify the `--infer_imgs` argument; a directory containing images is also supported.
* Prediction in Python
```python
import paddleclas
model = paddleclas.PaddleClas(model_name="language_classification")
result = model.predict(input_data="pulc_demo_imgs/language_classification/word_35404.png")
print(next(result))
```
**Note**: The `result` returned by `model.predict()` is a generator, so you need to call it with `next()` or iterate over it with a `for` loop. Each call runs prediction on a batch of `batch_size` images and returns the results. The default `batch_size` is 1, and you can also set it at instantiation, such as `model = paddleclas.PaddleClas(model_name="language_classification", batch_size=2)`. The result of the demo above:
```
>>> result
[{'class_ids': [4, 6], 'scores': [0.88672, 0.01434], 'label_names': ['japan', 'korean'], 'filename': 'pulc_demo_imgs/language_classification/word_35404.png'}]
```
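For example, to run batched prediction over the whole demo directory (a small sketch based on the note above; passing a directory to `input_data` mirrors the CLI's `--infer_imgs` behavior, and the result keys are those shown in the output above):
```python
import paddleclas

# batch_size=2: each item yielded by the generator holds up to two results
model = paddleclas.PaddleClas(model_name="language_classification", batch_size=2)
results = model.predict(input_data="pulc_demo_imgs/language_classification/")
for batch in results:  # one list of result dicts per batch
    for res in batch:
        print(res["filename"], res["label_names"], res["scores"])
```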
<a name="3"></a>
## 3. Training, Evaluation and Inference
<a name="3.1"></a>
### 3.1 Installation
Please refer to [Installation](../installation/install_paddleclas_en.md) to get the description about installation.
<a name="3.2"></a>
### 3.2 Dataset
<a name="3.2.1"></a>
#### 3.2.1 Dataset Introduction
The models we provide are trained with internal data, which is not open source yet. So it is suggested to construct a dataset based on the open source dataset [Multi-lingual scene text detection and recognition](https://rrc.cvc.uab.es/?ch=15&com=downloads) to experience this case.
Some images from the processed dataset are shown below:
![](../../images/PULC/docs/language_classification_original_data.png)
<a name="3.2.2"></a>
#### 3.2.2 Getting Dataset
The provided models support the classification of 10 languages, as shown in the following list:
`0` : means Arabic
`1` : means chinese_cht
`2` : means cyrillic
`3` : means devanagari
`4` : means Japanese
`5` : means ka
`6` : means Korean
`7` : means ta
`8` : means te
`9` : means Latin
In `Multi-lingual scene text detection and recognition`, only Arabic, Japanese, Korean and Latin data are included. 1600 images of each of the four languages are taken as the training data of this case, 300 images as the evaluation data, and 400 images as the supplementary data used for SKL-UGI knowledge distillation.
Therefore, for the demo dataset in this case, the language categories are as follows:
`0` : means arabic
`4` : means japan
`6` : means korean
`9` : means latin
**Note**: The images used in this task should be text regions cropped from the original images; only the text-line part is used as the image data.
If you want to create your own dataset, you can collect and organize the data of the languages required by your task. You can also directly download the processed dataset.
```
cd path_to_PaddleClas
```
Enter the `dataset/` directory, download and unzip the dataset.
```shell
cd dataset
wget https://paddleclas.bj.bcebos.com/data/PULC/language_classification.tar
tar -xf language_classification.tar
cd ../
```
The contents of the `language_classification` directory:
```
├── img
│ ├── word_1.png
│ ├── word_2.png
...
├── train_list.txt
├── train_list_for_distill.txt
├── test_list.txt
└── label_list.txt
```
`img/` is the directory containing the 9200 images in 4 languages. The `train_list.txt` and `test_list.txt` are the label files of the training data and validation data respectively. `label_list.txt` is the mapping file for the four languages. `train_list_for_distill.txt` is the label list of images used for SKL-UGI knowledge distillation. A short excerpt of the label-file format is shown below.
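Each line of these label files follows the standard PaddleClas single-label format, i.e. an image path and a class id separated by a space (the class ids shown here are illustrative):
```
img/word_1.png 0
img/word_2.png 4
```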
**Note**:
* About the contents format of `train_list.txt` and `val_list.txt`, please refer to [Description about Classification Dataset in PaddleClas](../data_preparation/classification_dataset_en.md).
* About the `train_list_for_distill.txt`, please refer to [Knowledge Distillation Label](../advanced_tutorials/distillation/distillation_en.md).
<a name="3.3"></a>
### 3.3 Training
The details of the training config are in `ppcls/configs/PULC/language_classification/PPLCNet_x1_0.yaml`. The training command is as follows:
```shell
export CUDA_VISIBLE_DEVICES=0,1,2,3
python3 -m paddle.distributed.launch \
--gpus="0,1,2,3" \
tools/train.py \
-c ./ppcls/configs/PULC/language_classification/PPLCNet_x1_0.yaml \
-o Arch.class_num=4
```
**Note**: Because the class number of the demo dataset is 4, the argument `-o Arch.class_num=4` should be specified to change the number of prediction classes of the model to 4.
<a name="3.4"></a>
### 3.4 Evaluation
After training, you can use the following commands to evaluate the model.
```bash
python3 tools/eval.py \
-c ./ppcls/configs/PULC/language_classification/PPLCNet_x1_0.yaml \
-o Global.pretrained_model="output/PPLCNet_x1_0/best_model" \
-o Arch.class_num=4
```
In the above command, the argument `-o Global.pretrained_model="output/PPLCNet_x1_0/best_model"` specifies the path to the best model weights. You can specify another path if needed.
<a name="3.5"></a>
### 3.5 Inference
After training, you can use the trained model for inference. The command is as follows:
```bash
python3 tools/infer.py \
-c ./ppcls/configs/PULC/language_classification/PPLCNet_x1_0.yaml \
-o Global.pretrained_model="output/PPLCNet_x1_0/best_model" \
-o Arch.class_num=4
```
The results:
```
[{'class_ids': [4, 9], 'scores': [0.96809, 0.01001], 'file_name': 'deploy/images/PULC/language_classification/word_35404.png', 'label_names': ['japan', 'latin']}]
```
**Note**:
* In the above command, the argument `-o Global.pretrained_model="output/PPLCNet_x1_0/best_model"` specifies the path to the best model weights. You can specify another path if needed.
* The default test image is `deploy/images/PULC/language_classification/word_35404.png`. You can test another image by specifying the argument `-o Infer.infer_imgs=path_to_test_image`.
* Among the prediction results, `japan` means Japanese and `korean` means Korean.
<a name="4"></a>
## 4. Model Compression
<a name="4.1"></a>
### 4.1 SKL-UGI Knowledge Distillation
SKL-UGI is a simple but effective knowledge distillation algorithm proposed by PaddleClas.
<!-- todo -->
<!-- Please refer to [SKL-UGI](../advanced_tutorials/distillation/distillation_en.md) for more details. -->
<a name="4.1.1"></a>
#### 4.1.1 Teacher Model Training
Train the teacher model with the hyperparameters specified in `ppcls/configs/PULC/language_classification/PPLCNet_x1_0.yaml`. The command is as follows:
```shell
export CUDA_VISIBLE_DEVICES=0,1,2,3
python3 -m paddle.distributed.launch \
--gpus="0,1,2,3" \
tools/train.py \
-c ./ppcls/configs/PULC/language_classification/PPLCNet_x1_0.yaml \
-o Arch.name=ResNet101_vd \
-o Arch.class_num=4
```
The best teacher model weights are saved in `output/ResNet101_vd/best_model.pdparams`.
**Note**: Training the ResNet101_vd model requires more GPU memory. If the memory is not enough, you can reduce the learning rate and batch size in the same proportion.
<a name="4.1.2"></a>
#### 4.1.2 Knowledge Distillation Training
The knowledge distillation training is configured in `ppcls/configs/PULC/language_classification/PPLCNet_x1_0_distillation.yaml`, where the teacher model is `ResNet101_vd` and the student model is `PPLCNet_x1_0`. The command is as follows:
```shell
export CUDA_VISIBLE_DEVICES=0,1,2,3
python3 -m paddle.distributed.launch \
--gpus="0,1,2,3" \
tools/train.py \
-c ./ppcls/configs/PULC/language_classification/PPLCNet_x1_0_distillation.yaml \
-o Arch.models.0.Teacher.pretrained=output/ResNet101_vd/best_model \
-o Arch.class_num=4
```
The best student model weights are saved in `output/DistillationModel/best_model_student.pdparams`.
<a name="5"></a>
## 5. Hyperparameters Searching
The hyperparameters used in [section 3.2](#3.2) and [section 4.1](#4.1) were obtained with the hyperparameter searching strategy in PaddleClas. If you want to get better results on your own dataset, you can refer to [Hyperparameters Searching](PULC_train_en.md#4) to find better hyperparameters.
**Note**: This section is optional. The search process takes a long time, so run it selectively according to your needs. If you do not replace the dataset, you can skip this section.
<a name="6"></a>
## 6. Inference Deployment
<a name="6.1"></a>
### 6.1 Getting Paddle Inference Model
Paddle Inference is the native inference library of PaddlePaddle, which provides high-performance inference for server deployment. Compared with predicting directly from the pretrained model, Paddle Inference can use tools to accelerate prediction and thus achieve better performance. Please refer to [Paddle Inference](https://www.paddlepaddle.org.cn/documentation/docs/zh/guides/infer/inference/inference_cn.html) for more information.
Paddle Inference requires a Paddle inference model for prediction. Two ways to obtain one are described below. If you want to use the model provided by PaddleClas, you can download it directly; see [6.1.2 Downloading Inference Model](#6.1.2).
<a name="6.1.1"></a>
### 6.1.1 Exporting Paddle Inference Model
The command for exporting the Paddle inference model is as follows:
```bash
python3 tools/export_model.py \
-c ./ppcls/configs/PULC/language_classification/PPLCNet_x1_0.yaml \
-o Global.pretrained_model=output/DistillationModel/best_model_student \
-o Global.save_inference_dir=deploy/models/PPLCNet_x1_0_language_classification_infer
```
After running the above command, the inference model files are saved in `deploy/models/PPLCNet_x1_0_language_classification_infer`, as shown below:
```
├── PPLCNet_x1_0_language_classification_infer
│ ├── inference.pdiparams
│ ├── inference.pdiparams.info
│ └── inference.pdmodel
```
**Note**: The best model is from knowledge distillation training. If knowledge distillation training is not used, the best model would be saved in `output/PPLCNet_x1_0/best_model.pdparams`.
<a name="6.1.2"></a>
### 6.1.2 Downloading Inference Model
You can also download the inference model directly:
```
cd deploy/models
# download the inference model and decompression
wget https://paddleclas.bj.bcebos.com/models/PULC/language_classification_infer.tar && tar -xf language_classification_infer.tar
```
After decompression, the `models` directory should look as follows:
```
├── language_classification_infer
│ ├── inference.pdiparams
│ ├── inference.pdiparams.info
│ └── inference.pdmodel
```
<a name="6.2"></a>
### 6.2 Prediction with Python
<a name="6.2.1"></a>
#### 6.2.1 Image Prediction
Return to the `deploy` directory:
```
cd ../
```
Run the following command to classify the language of the image `./images/PULC/language_classification/word_35404.png`.
```shell
# Use the following command to predict with GPU.
python3.7 python/predict_cls.py -c configs/PULC/language_classification/inference_language_classification.yaml
# Use the following command to predict with CPU.
python3.7 python/predict_cls.py -c configs/PULC/language_classification/inference_language_classification.yaml -o Global.use_gpu=False
```
The prediction results:
```
word_35404.png: class id(s): [4, 6], score(s): [0.89, 0.01], label_name(s): ['japan', 'korean']
```
**Note**: Among the prediction results, `japan` means Japanese and `korean` means Korean.
<a name="6.2.2"></a>
#### 6.2.2 Images Prediction
If you want to predict the images in a directory, please specify the directory path via the argument `-o Global.infer_imgs`. The command is as follows.
```shell
# Use the following command to predict with GPU. If want to replace with CPU, you can add argument -o Global.use_gpu=False
python3.7 python/predict_cls.py -c configs/PULC/language_classification/inference_language_classification.yaml -o Global.infer_imgs="./images/PULC/language_classification/"
```
All prediction results will be printed, as shown below.
```
word_17.png: class id(s): [9, 4], score(s): [0.80, 0.09], label_name(s): ['latin', 'japan']
word_20.png: class id(s): [0, 4], score(s): [0.91, 0.02], label_name(s): ['arabic', 'japan']
word_35404.png: class id(s): [4, 6], score(s): [0.89, 0.01], label_name(s): ['japan', 'korean']
```
Among the prediction results above, `japan` means Japanese, `latin` means Latin, `arabic` means Arabic and `korean` means Korean.
<a name="6.3"></a>
### 6.3 Deployment with C++
PaddleClas provides an example of how to deploy with C++. Please refer to [Deployment with C++](../inference_deployment/cpp_deploy_en.md).
<a name="6.4"></a>
### 6.4 Deployment as Service
Paddle Serving is a flexible, high-performance serving framework for machine learning models. It supports multiple protocols such as RESTful, gRPC and bRPC, and provides deployment solutions for a variety of heterogeneous hardware and operating system environments. Please refer to [Paddle Serving](https://github.com/PaddlePaddle/Serving) for more information.
PaddleClas provides an example of how to deploy a model as a service with Paddle Serving. Please refer to [Paddle Serving Deployment](../inference_deployment/paddle_serving_deploy_en.md).
<a name="6.5"></a>
### 6.5 Deployment on Mobile
Paddle-Lite is an open source deep learning framework designed to make it easy to perform inference on mobile, embedded, and IoT devices. Please refer to [Paddle-Lite](https://github.com/PaddlePaddle/Paddle-Lite) for more information.
PaddleClas provides an example of how to deploy on mobile by Paddle-Lite. Please refer to [Paddle-Lite deployment](../inference_deployment/paddle_lite_deploy_en.md).
<a name="6.6"></a>
### 6.6 Converting To ONNX and Deployment
Paddle2ONNX supports converting Paddle Inference models to ONNX models, which can then be deployed on different inference engines such as TensorRT, OpenVINO, MNN/TNN and NCNN. For details about Paddle2ONNX, please refer to [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX).
PaddleClas provides an example of how to convert a Paddle Inference model to an ONNX model with the paddle2onnx toolkit and run prediction with it. You can refer to [paddle2onnx](../../../deploy/paddle2onnx/readme_en.md) for deployment details.
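For reference, the conversion itself is a single CLI call. The following is a minimal sketch, run from the `deploy` directory and assuming the `language_classification_infer` model downloaded in section 6.1; the output file name and opset version are illustrative choices, not values fixed by PaddleClas.
```shell
# Convert the Paddle Inference model to ONNX (a sketch; paths and opset are illustrative).
paddle2onnx --model_dir ./models/language_classification_infer \
            --model_filename inference.pdmodel \
            --params_filename inference.pdiparams \
            --save_file ./models/language_classification.onnx \
            --opset_version 11
```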
# PULC Model Zoo
------
The PULC model zoo is provided here, listing the metrics, storage size, latency and download links of each model. The pretrained models can be used for fine-tuning, and the inference models can be used directly for prediction and deployment.
|Model name| Model Description | Metrics |Storage Size| Latency| Download Address|
| --- | --- | --- | --- | --- | --- |
| person_exists |[Human Exists Classification](PULC_person_exists_en.md)| 96.23 |7.0M|2.58ms|[inference model](https://paddleclas.bj.bcebos.com/models/PULC/inference/person_exists_infer.tar) / [pretrained model](https://paddleclas.bj.bcebos.com/models/PULC/pretrained/person_exists_pretrained.pdparams)|
| person_attribute |[Pedestrian Attribute Classification](PULC_person_attribute_en.md)| 78.59 |7.2M|2.01ms|[inference model](https://paddleclas.bj.bcebos.com/models/PULC/inference/person_attribute_infer.tar) / [pretrained model](https://paddleclas.bj.bcebos.com/models/PULC/pretrained/person_attribute_pretrained.pdparams)|
| safety_helmet |[Classification of Whether Wearing Safety Helmet](PULC_safety_helmet_en.md)| 99.38 |7.1M|2.03ms|[inference model](https://paddleclas.bj.bcebos.com/models/PULC/inference/safety_helmet_infer.tar) / [pretrained model](https://paddleclas.bj.bcebos.com/models/PULC/pretrained/safety_helmet_pretrained.pdparams)|
| traffic_sign |[Traffic Sign Classification](PULC_traffic_sign_en.md)| 98.35 |8.2M|2.10ms|[inference model](https://paddleclas.bj.bcebos.com/models/PULC/inference/traffic_sign_infer.tar) / [pretrained model](https://paddleclas.bj.bcebos.com/models/PULC/pretrained/traffic_sign_pretrained.pdparams)|
| vehicle_attribute |[Vehicle Attribute Classification](PULC_vehicle_attribute_en.md)| 90.81 |7.2M|2.36ms|[inference model](https://paddleclas.bj.bcebos.com/models/PULC/inference/vehicle_attribute_infer.tar) / [pretrained model](https://paddleclas.bj.bcebos.com/models/PULC/pretrained/vehicle_attribute_pretrained.pdparams)|
| car_exists |[Car Exists Classification](PULC_car_exists_en.md) | 95.92 | 7.1M | 2.38ms |[inference model](https://paddleclas.bj.bcebos.com/models/PULC/inference/car_exists_infer.tar) / [pretrained model](https://paddleclas.bj.bcebos.com/models/PULC/pretrained/car_exists_pretrained.pdparams)|
| text_image_orientation |[Text Image Orientation Classification](PULC_text_image_orientation_en.md)| 99.06 | 7.1M | 2.16ms |[inference model](https://paddleclas.bj.bcebos.com/models/PULC/inference/text_image_orientation_infer.tar) / [pretrained model](https://paddleclas.bj.bcebos.com/models/PULC/pretrained/text_image_orientation_pretrained.pdparams)|
| textline_orientation |[Text-line Orientation Classification](PULC_textline_orientation_en.md)| 96.01 |7.0M|2.72ms|[inference model](https://paddleclas.bj.bcebos.com/models/PULC/inference/textline_orientation_infer.tar) / [pretrained model](https://paddleclas.bj.bcebos.com/models/PULC/pretrained/textline_orientation_pretrained.pdparams)|
| language_classification |[Language Classification](PULC_language_classification_en.md)| 99.26 |7.1M|2.58ms|[inference model](https://paddleclas.bj.bcebos.com/models/PULC/inference/language_classification_infer.tar) / [pretrained model](https://paddleclas.bj.bcebos.com/models/PULC/pretrained/language_classification_pretrained.pdparams)|
**Note:**
* The backbone of all the above models is PPLCNet_x1_0. The differences in storage size among some models are caused by the different output sizes of the classification layer. The inference latency is tested on an Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz with the MKLDNN acceleration strategy enabled and 10 threads. Latency may fluctuate slightly across test runs.
* The evaluation metric of person_exists, safety_helmet and car_exists is TprAtFpr. The evaluation metric of person_attribute and vehicle_attribute is ma. The evaluation metric of traffic_sign, text_image_orientation, textline_orientation and language_classification is Top-1 Acc.
# PULC Recognition Model of Person Attribute
------
## Catalogue
- [1. Introduction](#1)
- [2. Quick Start](#2)
- [2.1 PaddlePaddle Installation](#2.1)
- [2.2 PaddleClas Installation](#2.2)
- [2.3 Prediction](#2.3)
- [3. Training, Evaluation and Inference](#3)
- [3.1 Installation](#3.1)
- [3.2 Dataset](#3.2)
- [3.2.1 Dataset Introduction](#3.2.1)
- [3.2.2 Getting Dataset](#3.2.2)
- [3.3 Training](#3.3)
- [3.4 Evaluation](#3.4)
- [3.5 Inference](#3.5)
- [4. Model Compression](#4)
- [4.1 SKL-UGI Knowledge Distillation](#4.1)
- [4.1.1 Teacher Model Training](#4.1.1)
- [4.1.2 Knowledge Distillation Training](#4.1.2)
- [5. Hyperparameters Searching](#5)
- [6. Inference Deployment](#6)
- [6.1 Getting Paddle Inference Model](#6.1)
- [6.1.1 Exporting Paddle Inference Model](#6.1.1)
- [6.1.2 Downloading Inference Model](#6.1.2)
- [6.2 Prediction with Python](#6.2)
- [6.2.1 Image Prediction](#6.2.1)
- [6.2.2 Images Prediction](#6.2.2)
- [6.3 Deployment with C++](#6.3)
- [6.4 Deployment as Service](#6.4)
- [6.5 Deployment on Mobile](#6.5)
- [6.6 Converting To ONNX and Deployment](#6.6)
<a name="1"></a>
## 1. Introduction
This case provides a way for users to quickly build a lightweight, high-precision and practical person attribute classification model using PaddleClas PULC (Practical Ultra Lightweight image Classification). The model can be widely used in pedestrian analysis, pedestrian tracking and similar scenarios.
The following table lists the relevant metrics. The first three rows use Res2Net200_vd_26w_4s, SwinTransformer_tiny and MobileNetV3_small_x0_35 as the backbone for training. In the fourth to seventh rows, the backbone is replaced by PPLCNet_x1_0, with the SSLD pretrained model, the EDA strategy and the SKL-UGI knowledge distillation strategy added in turn.
| Backbone | ma(%) | Latency(ms) | Size(M) | Training Strategy |
|-------|-----------|----------|---------------|---------------|
| Res2Net200_vd_26w_4s | 81.25 | 77.51 | 293 | using ImageNet pretrained |
| SwinTransformer_tiny | 80.17 | 89.51 | 111 | using ImageNet pretrained |
| MobileNetV3_small_x0_35 | 70.79 | 2.90 | 1.7 | using ImageNet pretrained |
| PPLCNet_x1_0 | 76.31 | 2.01 | 7.1 | using ImageNet pretrained |
| PPLCNet_x1_0 | 77.31 | 2.01 | 7.1 | using SSLD pretrained |
| PPLCNet_x1_0 | 77.71 | 2.01 | 7.1 | using SSLD pretrained + EDA strategy|
| <b>PPLCNet_x1_0</b> | <b>78.59</b> | <b>2.01</b> | <b>7.1</b> | using SSLD pretrained + EDA strategy + SKL-UGI knowledge distillation strategy|
It can be seen that the ma metric is high when the backbone is Res2Net200_vd_26w_4s or SwinTransformer_tiny, but so is the latency. Replacing the backbone with the lightweight MobileNetV3_small_x0_35 greatly improves the speed, but the ma metric drops sharply. Replacing the backbone with the faster PPLCNet_x1_0 instead gives an ma metric 5.5 percentage points higher than MobileNetV3_small_x0_35 while being more than 20% faster. On this basis, using the SSLD pretrained model improves the ma metric by about 1 percentage point without affecting the inference speed, adding the EDA strategy raises it by a further 0.4 percentage points, and adding SKL-UGI knowledge distillation improves it by another 0.88 percentage points. At this point, the ma metric of PPLCNet_x1_0 is only 1.58 percentage points below SwinTransformer_tiny, while the speed is more than 44 times faster. The training method and deployment instructions of PULC are introduced in detail below.
**Note**:
* The Latency is tested on Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz. The MKLDNN is enabled and the number of threads is 10.
* About PP-LCNet, please refer to [PP-LCNet Introduction](../models/PP-LCNet_en.md) and [PP-LCNet Paper](https://arxiv.org/abs/2109.15099).
<a name="2"></a>
## 2. Quick Start
<a name="2.1"></a>
### 2.1 PaddlePaddle Installation
- Run the following command to install if CUDA9 or CUDA10 is available.
```bash
python3 -m pip install paddlepaddle-gpu -i https://mirror.baidu.com/pypi/simple
```
- Run the following command to install if no GPU device is available.
```bash
python3 -m pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple
```
Please refer to [PaddlePaddle Installation](https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/en/install/pip/linux-pip_en.html) for more installation information, for example, about other versions.
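After installation, it helps to verify that PaddlePaddle works in your environment. The snippet below is a minimal sanity check using PaddlePaddle's built-in verifier:
```python
import paddle

# Print the installed version and run the built-in installation check,
# which also reports whether a usable GPU is detected.
print(paddle.__version__)
paddle.utils.run_check()
```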
<a name="2.2"></a>
### 2.2 PaddleClas wheel Installation
The command to install the PaddleClas wheel is as below:
```bash
pip3 install paddleclas
```
<a name="2.3"></a>
### 2.3 Prediction
First, please click [here](https://paddleclas.bj.bcebos.com/data/PULC/pulc_demo_imgs.zip) to download and unzip the test demo images.
* Prediction with CLI
```bash
paddleclas --model_name=person_attribute --infer_imgs=pulc_demo_imgs/person_attribute/090004.jpg
```
Results:
```
>>> result
attributes: ['Male', 'Age18-60', 'Back', 'Glasses: False', 'Hat: False', 'HoldObjectsInFront: False', 'Backpack', 'Upper: LongSleeve UpperPlaid', 'Lower: Trousers', 'No boots'], output: [0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1], filename: pulc_demo_imgs/person_attribute/090004.jpg
Predict complete!
```
**Note**: To test other images, you only need to change the `--infer_imgs` argument; a directory containing images is also supported.
* Prediction in Python
```python
import paddleclas
model = paddleclas.PaddleClas(model_name="person_attribute")
result = model.predict(input_data="pulc_demo_imgs/person_attribute/090004.jpg")
print(next(result))
```
**Note**: The `result` returned by `model.predict()` is a generator, so you need to call it with the `next()` function or iterate over it with a `for` loop. Each call predicts a batch of `batch_size` images and returns the results. The default `batch_size` is 1, and you can specify it when instantiating, for example `model = paddleclas.PaddleClas(model_name="person_attribute", batch_size=2)`. The result of the demo above:
```
>>> result
[{'attributes': ['Male', 'Age18-60', 'Back', 'Glasses: False', 'Hat: False', 'HoldObjectsInFront: False', 'Backpack', 'Upper: LongSleeve UpperPlaid', 'Lower: Trousers', 'No boots'], 'output': [0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1], 'filename': 'pulc_demo_imgs/person_attribute/090004.jpg'}]
```
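Since `model.predict()` yields results batch by batch, a `for` loop is the natural way to process a whole directory. The sketch below assumes the demo directory downloaded above and that `input_data` accepts a directory path, as the CLI `--infer_imgs` argument does; `batch_size=2` is only for illustration.
```python
import paddleclas

model = paddleclas.PaddleClas(model_name="person_attribute", batch_size=2)
for batch in model.predict(input_data="pulc_demo_imgs/person_attribute/"):
    for res in batch:
        # Each result is a dict with 'filename', 'attributes' and 'output' keys.
        print(res["filename"], res["attributes"])
```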
<a name="3"></a>
## 3. Training, Evaluation and Inference
<a name="3.1"></a>
### 3.1 Installation
Please refer to [Installation](../installation/install_paddleclas_en.md) for instructions on installing PaddleClas.
<a name="3.2"></a>
### 3.2 Dataset
<a name="3.2.1"></a>
#### 3.2.1 Dataset Introduction
The data used in this case is the [pa100k dataset](https://www.v7labs.com/open-datasets/pa-100k).
<a name="3.2.2"></a>
#### 3.2.2 Getting Dataset
Some images from the processed dataset are shown below:
![](../../images/PULC/docs/person_attribute_data_demo.png)
We converted the data into a multi-label format readable by PaddleClas, which can be downloaded directly.
First, enter the root directory of PaddleClas:
```
cd path_to_PaddleClas
```
Enter the `dataset/` directory, download and unzip the dataset.
```shell
cd dataset
wget https://paddleclas.bj.bcebos.com/data/PULC/pa100k.tar
tar -xf pa100k.tar
cd ../
```
The data under the `pa100k` directory is organized as follows:
```
pa100k
├── train
│   ├── 000001.jpg
│   ├── 000002.jpg
...
├── val
│   ├── 080001.jpg
│   ├── 080002.jpg
...
├── test
│   ├── 090001.jpg
│   ├── 090002.jpg
...
...
├── train_list.txt
├── train_val_list.txt
├── val_list.txt
├── test_list.txt
```
Here `train/`, `val/` and `test/` are the training set, validation set and test set respectively, and `train_list.txt`, `val_list.txt` and `test_list.txt` are their label files. In this example, `test_list.txt` is not used.
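If you want to confirm the label format before training, you can peek at the first few lines of a label file. This is only a sanity-check sketch, run from the PaddleClas root directory; each line pairs an image path with its multi-label annotation.
```python
# Print the first three entries of the training label file.
with open("dataset/pa100k/train_list.txt") as f:
    for _ in range(3):
        print(f.readline().rstrip())
```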
<a name="3.3"></a>
### 3.3 Training
The details of the training config are in `./ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml`. The training command is as follows:
```shell
export CUDA_VISIBLE_DEVICES=0,1,2,3
python3 -m paddle.distributed.launch \
--gpus="0,1,2,3" \
tools/train.py \
-c ./ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml
```
The best metric for the validation set is around `77.71%` (the dataset is small, so the metric generally fluctuates by about 0.3%).
<a name="3.4"></a>
### 3.4 Evaluation
After training, you can use the following commands to evaluate the model.
```bash
python3 tools/eval.py \
-c ./ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml \
-o Global.pretrained_model="output/PPLCNet_x1_0/best_model"
```
In the above command, the argument `-o Global.pretrained_model="output/PPLCNet_x1_0/best_model"` specifies the path of the best model weight file. You can specify another path if needed.
<a name="3.5"></a>
### 3.5 Inference
After training, you can use the trained model for inference. The command is as follows:
```shell
python3 tools/infer.py \
-c ./ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml \
-o Global.pretrained_model=output/PPLCNet_x1_0/best_model
```
The results:
```
[{'attributes': ['Male', 'Age18-60', 'Back', 'Glasses: False', 'Hat: False', 'HoldObjectsInFront: False', 'Backpack', 'Upper: LongSleeve UpperPlaid', 'Lower: Trousers', 'No boots'], 'output': [0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1]}]
```
**Note**:
* In the above command, the argument `-o Global.pretrained_model="output/PPLCNet_x1_0/best_model"` specifies the path of the best model weight file. You can specify another path if needed.
* The default test image is `deploy/images/PULC/person_attribute/090004.jpg`. To test another image, you only need to specify the argument `-o Infer.infer_imgs=path_to_test_image`.
<a name="4"></a>
## 4. Model Compression
<a name="4.1"></a>
### 4.1 SKL-UGI Knowledge Distillation
SKL-UGI is a simple but effective knowledge distillation algorithm proposed by PaddleClas.
<!-- todo -->
<!-- Please refer to [SKL-UGI](../advanced_tutorials/distillation/distillation_en.md) for more details. -->
<a name="4.1.1"></a>
#### 4.1.1 Teacher Model Training
Train the teacher model with the hyperparameters specified in `ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml`. The command is as follows:
```shell
export CUDA_VISIBLE_DEVICES=0,1,2,3
python3 -m paddle.distributed.launch \
--gpus="0,1,2,3" \
tools/train.py \
-c ./ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml \
-o Arch.name=ResNet101_vd
```
The best metric for the validation set is around `80.10%`. The best teacher model weights are saved in `output/ResNet101_vd/best_model.pdparams`.
<a name="4.1.2"></a>
#### 4.1.2 Knowledge Distillation Training
The training strategy is specified in the config file `ppcls/configs/PULC/person_attribute/PPLCNet_x1_0_Distillation.yaml`, where the teacher model is `ResNet101_vd` and the student model is `PPLCNet_x1_0`. The command is as follows:
```shell
export CUDA_VISIBLE_DEVICES=0,1,2,3
python3 -m paddle.distributed.launch \
--gpus="0,1,2,3" \
tools/train.py \
-c ./ppcls/configs/PULC/person_attribute/PPLCNet_x1_0_Distillation.yaml \
-o Arch.models.0.Teacher.pretrained=output/ResNet101_vd/best_model
```
The best metric for the validation set is around `78.5%`. The best student model weights are saved in `output/DistillationModel/best_model_student.pdparams`.
<a name="5"></a>
## 5. Hyperparameters Searching
The hyperparameters used in [section 3.2](#3.2) and [section 4.1](#4.1) were obtained through the `Hyperparameters Searching` process in PaddleClas. If you want to get better results on your own dataset, you can refer to [Hyperparameters Searching](PULC_train_en.md#4) to obtain better hyperparameters.
**Note**: This section is optional. Since the search process takes a long time, you can run it selectively according to your needs. If you do not replace the dataset, you can skip this section.
<a name="6"></a>
## 6. Inference Deployment
<a name="6.1"></a>
### 6.1 Getting Paddle Inference Model
Paddle Inference is the native inference library of PaddlePaddle, which provides high-performance inference for server deployment. Compared with predicting directly with the pretrained model, Paddle Inference can use tools to accelerate prediction and achieve better performance. Please refer to [Paddle Inference](https://www.paddlepaddle.org.cn/documentation/docs/zh/guides/infer/inference/inference_cn.html) for more information.
Paddle Inference needs a Paddle Inference model for prediction. Two ways to get such a model are described below. If you want to use the model provided by PaddleClas, you can download it directly; see [Downloading Inference Model](#6.1.2).
<a name="6.1.1"></a>
#### 6.1.1 Exporting Paddle Inference Model
The command to export the Paddle Inference model is as follows:
```bash
python3 tools/export_model.py \
-c ./ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml \
-o Global.pretrained_model=output/DistillationModel/best_model_student \
-o Global.save_inference_dir=deploy/models/PPLCNet_x1_0_person_attribute_infer
```
After running the above command, the inference model files are saved in `deploy/models/PPLCNet_x1_0_person_attribute_infer`, as shown below:
```
├── PPLCNet_x1_0_person_attribute_infer
│ ├── inference.pdiparams
│ ├── inference.pdiparams.info
│ └── inference.pdmodel
```
**Note**: The best model here comes from knowledge distillation training. If knowledge distillation training is not used, the best model is saved in `output/PPLCNet_x1_0/best_model.pdparams`.
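Before moving on to deployment, you can check that the exported model loads with the Paddle Inference API. The following is a minimal load check on CPU, assuming the export command above was run from the PaddleClas root directory.
```python
from paddle.inference import Config, create_predictor

# Minimal load check for the exported inference model (CPU only).
model_dir = "deploy/models/PPLCNet_x1_0_person_attribute_infer"
config = Config(model_dir + "/inference.pdmodel",
                model_dir + "/inference.pdiparams")
config.disable_gpu()
predictor = create_predictor(config)
print(predictor.get_input_names())
```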
<a name="6.1.2"></a>
#### 6.1.2 Downloading Inference Model
You can also download the inference model directly:
```
cd deploy/models
# download the inference model and decompression
wget https://paddleclas.bj.bcebos.com/models/PULC/person_attribute_infer.tar && tar -xf person_attribute_infer.tar
```
After decompression, the `models` directory should contain the following files.
```
├── person_attribute_infer
│ ├── inference.pdiparams
│ ├── inference.pdiparams.info
│ └── inference.pdmodel
```
<a name="6.2"></a>
### 6.2 Prediction with Python
<a name="6.2.1"></a>
#### 6.2.1 Image Prediction
Return to the `deploy` directory:
```
cd ../
```
Run the following command to recognize the attributes of the person in the image `./images/PULC/person_attribute/090004.jpg`.
```shell
# Use the following command to predict with GPU.
python3.7 python/predict_cls.py -c configs/PULC/person_attribute/inference_person_attribute.yaml -o Global.use_gpu=True
# Use the following command to predict with CPU.
python3.7 python/predict_cls.py -c configs/PULC/person_attribute/inference_person_attribute.yaml -o Global.use_gpu=False
```
The prediction results:
```
090004.jpg: {'attributes': ['Male', 'Age18-60', 'Back', 'Glasses: False', 'Hat: False', 'HoldObjectsInFront: False', 'Backpack', 'Upper: LongSleeve UpperPlaid', 'Lower: Trousers', 'No boots'], 'output': [0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1]}
```
<a name="6.2.2"></a>
#### 6.2.2 Images Prediction
If you want to predict all images in a directory, specify the directory path through the argument `-o Global.infer_imgs`. The command is as follows.
```shell
# Use the following command to predict with GPU. To predict with CPU instead, add the argument -o Global.use_gpu=False
python3.7 python/predict_cls.py -c configs/PULC/person_attribute/inference_person_attribute.yaml -o Global.infer_imgs="./images/PULC/person_attribute/"
```
All prediction results will be printed, as shown below.
```
090004.jpg: {'attributes': ['Male', 'Age18-60', 'Back', 'Glasses: False', 'Hat: False', 'HoldObjectsInFront: False', 'Backpack', 'Upper: LongSleeve UpperPlaid', 'Lower: Trousers', 'No boots'], 'output': [0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1]}
090007.jpg: {'attributes': ['Female', 'Age18-60', 'Side', 'Glasses: False', 'Hat: False', 'HoldObjectsInFront: False', 'No bag', 'Upper: ShortSleeve', 'Lower: Skirt&Dress', 'No boots'], 'output': [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0]}
```
Each line above shows the predicted attribute list and the corresponding binary output vector for one image.
<a name="6.3"></a>
### 6.3 Deployment with C++
PaddleClas provides an example about how to deploy with C++. Please refer to [Deployment with C++](../inference_deployment/cpp_deploy_en.md).
<a name="6.4"></a>
### 6.4 Deployment as Service
Paddle Serving is a flexible, high-performance serving framework for machine learning models. It supports multiple protocols, such as RESTful, gRPC and bRPC, and provides deployment solutions for a variety of heterogeneous hardware and operating system environments. Please refer to [Paddle Serving](https://github.com/PaddlePaddle/Serving) for more information.
PaddleClas provides an example of how to deploy as a service with Paddle Serving. Please refer to [Paddle Serving Deployment](../inference_deployment/paddle_serving_deploy_en.md).
<a name="6.5"></a>
### 6.5 Deployment on Mobile
Paddle-Lite is an open-source deep learning framework designed to make it easy to perform inference on mobile, embedded, and IoT devices. Please refer to [Paddle-Lite](https://github.com/PaddlePaddle/Paddle-Lite) for more information.
PaddleClas provides an example of how to deploy on mobile with Paddle-Lite. Please refer to [Paddle-Lite deployment](../inference_deployment/paddle_lite_deploy_en.md).
<a name="6.6"></a>
### 6.6 Converting To ONNX and Deployment
Paddle2ONNX supports converting Paddle Inference models to ONNX models, which can then be deployed on different inference engines such as TensorRT, OpenVINO, MNN/TNN and NCNN. For details about Paddle2ONNX, please refer to [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX).
PaddleClas provides an example of how to convert a Paddle Inference model to an ONNX model with the paddle2onnx toolkit and run prediction with it. You can refer to [paddle2onnx](../../../deploy/paddle2onnx/readme_en.md) for deployment details.
# PULC Quick Start
------
This document introduces prediction with the PULC series models based on the PaddleClas wheel.
## Catalogue
- [1. Installation](#1)
  - [1.1 PaddlePaddle Installation](#1.1)
  - [1.2 PaddleClas wheel Installation](#1.2)
- [2. Quick Start](#2)
  - [2.1 Prediction with Command Line](#2.1)
  - [2.2 Prediction with Python](#2.2)
- [2.3 Supported Model List](#2.3)
- [3. Summary](#3)
<a name="1"></a>
## 1. Installation
<a name="1.1"></a>
### 1.1 PaddlePaddle Installation
- Run the following command to install if CUDA9 or CUDA10 is available.
```bash
python3 -m pip install paddlepaddle-gpu -i https://mirror.baidu.com/pypi/simple
```
- Run the following command to install if GPU device is unavailable.
```bash
python3 -m pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple
```
Please refer to [PaddlePaddle Installation](https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/en/install/pip/linux-pip_en.html) for more installation information, for example, about other versions.
<a name="1.2"></a>
### 1.2 PaddleClas wheel Installation
```bash
pip3 install paddleclas
```
<a name="2"></a>
## 2. Quick Start
PaddleClas provides a series of test cases, which contain demos of different scenarios involving people, cars, OCR, etc. Click [here](https://paddleclas.bj.bcebos.com/data/PULC/pulc_demo_imgs.zip) to download the data.
<a name="2.1"></a>
### 2.1 Prediction with Command Line
First, enter the directory containing the downloaded demo images:
```
cd /path/to/pulc_demo_imgs
```
The prediction command:
```bash
paddleclas --model_name=person_exists --infer_imgs=pulc_demo_imgs/person_exists/objects365_01780782.jpg
```
Result:
```
>>> result
class_ids: [0], scores: [0.9955421453341842], label_names: ['nobody'], filename: pulc_demo_imgs/person_exists/objects365_01780782.jpg
Predict complete!
```
`nobody` means there is no one in the image and `someone` means there is at least one person in the image. Therefore, the prediction result indicates that there is no one in the image.
**Note**: The `--infer_imgs` argument specifies the image(s) to be predicted; you can also pass a directory containing images, as shown below. To use another model, specify the `--model_name` argument. Please refer to [2.3 Supported Model List](#2.3) for the supported model list.
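For example, a sketch of predicting every image in the demo directory (the path assumes the demo data downloaded above):
```bash
paddleclas --model_name=person_exists --infer_imgs=pulc_demo_imgs/person_exists/
```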
<a name="2.2"></a>
### 2.2 Prediction with Python
You can also use PaddleClas in Python:
```python
import paddleclas
model = paddleclas.PaddleClas(model_name="person_exists")
result = model.predict(input_data="pulc_demo_imgs/person_exists/objects365_01780782.jpg")
print(next(result))
```
The printed result information:
```
>>> result
[{'class_ids': [0], 'scores': [0.9955421453341842], 'label_names': ['nobody'], 'filename': 'pulc_demo_imgs/person_exists/objects365_01780782.jpg'}]
```
**Note**: `model.predict()` returns a generator, so it must be consumed with `next()` or a `for` loop. Each call predicts a batch of `batch_size` images, which defaults to 1. You can specify the `batch_size` and `model_name` arguments when instantiating the PaddleClas object, for example `model = paddleclas.PaddleClas(model_name="person_exists", batch_size=2)`; a sketch is shown below. Please refer to [2.3 Supported Model List](#2.3) for the supported model list.
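As a sketch of consuming the generator over a whole directory, the loop below collects the images predicted to contain a person. The directory path assumes the demo data downloaded in section 2, and `batch_size=2` is only for illustration.
```python
import paddleclas

model = paddleclas.PaddleClas(model_name="person_exists", batch_size=2)
someone_images = []
for batch in model.predict(input_data="pulc_demo_imgs/person_exists/"):
    for res in batch:
        # 'label_names' holds the predicted label, e.g. ['someone'] or ['nobody'].
        if res["label_names"][0] == "someone":
            someone_images.append(res["filename"])
print(someone_images)
```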
<a name="2.3"></a>
### 2.3 Supported Model List
The names of the PULC series models are as follows:
| Name | Intro |
| --- | --- |
| person_exists | Human Exists Classification |
| person_attribute | Pedestrian Attribute Classification |
| safety_helmet | Classification of Whether Wearing Safety Helmet |
| traffic_sign | Traffic Sign Classification |
| vehicle_attribute | Vehicle Attribute Classification |
| car_exists | Car Exists Classification |
| text_image_orientation | Text Image Orientation Classification |
| textline_orientation | Text-line Orientation Classification |
| language_classification | Language Classification |
<a name="3"></a>
## 3. Summary
The PULC series models have been verified to be effective in different scenarios involving people, vehicles, OCR, etc. The ultra-lightweight models achieve accuracy close to SwinTransformer while being more than 40 times faster. PULC also covers the whole pipeline of dataset acquisition, model training, model compression and deployment. For more information about each scenario, please refer to [Human Exists Classification](PULC_person_exists_en.md), [Pedestrian Attribute Classification](PULC_person_attribute_en.md), [Classification of Whether Wearing Safety Helmet](PULC_safety_helmet_en.md), [Traffic Sign Classification](PULC_traffic_sign_en.md), [Vehicle Attribute Classification](PULC_vehicle_attribute_en.md), [Car Exists Classification](PULC_car_exists_en.md), [Text Image Orientation Classification](PULC_text_image_orientation_en.md), [Text-line Orientation Classification](PULC_textline_orientation_en.md) and [Language Classification](PULC_language_classification_en.md).
......@@ -541,9 +541,9 @@ The accuracy and speed indicators of MobileViT series models are shown in the fo
| Model | Top-1 Acc | Top-5 Acc | time(ms)<br>bs=1 | time(ms)<br>bs=4 | time(ms)<br/>bs=8 | FLOPs(M) | Params(M) | Pretrained Model Download Address | Inference Model Download Address |
| ---------- | --------- | --------- | ---------------- | ---------------- | -------- | --------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
| MobileViT_XXS | 0.6867 | 0.8878 | - | - | - | 1849.35 | 5.59 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileViT_XXS_pretrained.pdparams) | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/MobileViT_XXS_infer.tar) |
| MobileViT_XXS | 0.6867 | 0.8878 | - | - | - | 337.24 | 1.28 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileViT_XXS_pretrained.pdparams) | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/MobileViT_XXS_infer.tar) |
| MobileViT_XS | 0.7454 | 0.9227 | - | - | - | 930.75 | 2.33 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileViT_XS_pretrained.pdparams) | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/MobileViT_XS_infer.tar) |
| MobileViT_S | 0.7814 | 0.9413 | - | - | - | 337.24 | 1.28 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileViT_S_pretrained.pdparams) | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/MobileViT_S_infer.tar) |
| MobileViT_S | 0.7814 | 0.9413 | - | - | - | 1849.35 | 5.59 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileViT_S_pretrained.pdparams) | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/MobileViT_S_infer.tar) |
<a name="26"></a>
......
......@@ -18,6 +18,6 @@ MobileViT is a lightweight visual Transformer network that can be used as a gene
| Models | Top1 | Top5 | Reference<br>top1 | Reference<br>top5 | FLOPs<br>(M) | Params<br>(M) |
|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| MobileViT_XXS | 0.6867 | 0.8878 | 0.690 | - | 1849.35 | 5.59 |
| MobileViT_XXS | 0.6867 | 0.8878 | 0.690 | - | 337.24 | 1.28 |
| MobileViT_XS | 0.7454 | 0.9227 | 0.747 | - | 930.75 | 2.33 |
| MobileViT_S | 0.7814 | 0.9413 | 0.783 | - | 337.24 | 1.28 |
| MobileViT_S | 0.7814 | 0.9413 | 0.783 | - | 1849.35 | 5.59 |
docs/images/PP-HGNet/PP-HGNet-block.png (image updated: 104.2 KB → 405.7 KB)