Unverified · Commit 8deaf352 · Authored by: YixinKristy · Committed by: GitHub

Merge branch 'PaddlePaddle:release/2.4' into release/2.4

@@ -17,3 +17,4 @@ EvalDataset:
 TestDataset:
   !ImageFolder
     anno_path: annotations/instances_val2017.json
+    dataset_dir: dataset/coco
@@ -17,3 +17,4 @@ EvalDataset:
 TestDataset:
   !ImageFolder
     anno_path: annotations/instances_val2017.json
+    dataset_dir: dataset/coco
@@ -17,3 +17,4 @@ EvalDataset:
 TestDataset:
   !ImageFolder
     anno_path: trainval_split/s2anet_trainval_paddle_coco.json
+    dataset_dir: dataset/DOTA_1024_s2anet/
@@ -17,6 +17,7 @@ EvalDataset:
 TestDataset:
   !ImageFolder
+    dataset_dir: dataset/mot/MOT17
     anno_path: annotations/val_half.json
@@ -17,6 +17,7 @@ EvalDataset:
 TestDataset:
   !ImageFolder
+    dataset_dir: dataset/mot/MOT17
     anno_path: annotations/val_half.json
Simplified Chinese | [English](README_en.md)

# PP-PicoDet

![](../../docs/images/picedet_demo.jpeg)

## News

- Released a new series of PP-PicoDet models: **(2022.03.20)**
  - (1) Introduced TAL and Task-aligned Head and optimized structures such as PAN, greatly improving accuracy;
  - (2) Optimized CPU-side inference speed and greatly improved training speed;
  - (3) The exported model now includes post-processing in the network and directly outputs box results, so no secondary development is needed and migration cost is lower.

## Legacy Model

- For details, please refer to the [PicoDet 2021.10 release](./legacy_model/)

## Introduction

PaddleDetection proposes a new family of lightweight models, `PP-PicoDet`, which achieve outstanding performance on mobile devices and set a new SOTA for lightweight detection models. For technical details, please refer to our [arXiv report](https://arxiv.org/abs/2111.00902).

PP-PicoDet has the following features:

- 🌟 Higher mAP: the first to surpass **30+** `mAP(0.5:0.95)` within 1M parameters (at 416 input size).
- 🚀 Faster speed: up to 150FPS for network prediction on ARM CPU.
- 😊 Deploy friendly: supports inference libraries such as PaddleLite/MNN/NCNN/OpenVINO, supports export to ONNX, and provides C++/Python/Android demos.
- 😍 Advanced algorithms: innovations on existing SOTA algorithms, including ESNet, CSP-PAN, SimOTA, etc.

<div align="center">
  <img src="../../docs/images/picodet_map.png" width='600'/>
</div>
## Benchmark

| Model | Input size | mAP<sup>val<br>0.5:0.95 | mAP<sup>val<br>0.5 | Params<br><sup>(M) | FLOPS<br><sup>(G) | Latency<sup><small>[CPU](#latency)</small><sup><br><sup>(ms) | Latency<sup><small>[Lite](#latency)</small><sup><br><sup>(ms) | Download | Config |
| :-------- | :--------: | :---------------------: | :----------------: | :----------------: | :---------------: | :-----------------------------: | :-----------------------------: | :----------------------------------------: | :--------------------------------------- |
| PicoDet-XS | 320*320 | 23.5 | 36.1 | 0.70 | 0.67 | 10.9ms | 7.81ms | [model](https://paddledet.bj.bcebos.com/models/picodet_xs_320_coco_lcnet.pdparams) &#124; [log](https://paddledet.bj.bcebos.com/logs/train_picodet_xs_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet/picodet_xs_320_coco_lcnet.yml) |
| PicoDet-XS | 416*416 | 26.2 | 39.3 | 0.70 | 1.13 | 15.4ms | 12.38ms | [model](https://paddledet.bj.bcebos.com/models/picodet_xs_416_coco_lcnet.pdparams) &#124; [log](https://paddledet.bj.bcebos.com/logs/train_picodet_xs_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet/picodet_xs_416_coco_lcnet.yml) |
| PicoDet-S | 320*320 | 29.1 | 43.4 | 1.18 | 0.97 | 12.6ms | 9.56ms | [model](https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams) &#124; [log](https://paddledet.bj.bcebos.com/logs/train_picodet_s_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet/picodet_s_320_coco_lcnet.yml) |
| PicoDet-S | 416*416 | 32.5 | 47.6 | 1.18 | 1.65 | 17.2ms | 15.20ms | [model](https://paddledet.bj.bcebos.com/models/picodet_s_416_coco_lcnet.pdparams) &#124; [log](https://paddledet.bj.bcebos.com/logs/train_picodet_s_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet/picodet_s_416_coco_lcnet.yml) |
| PicoDet-M | 320*320 | 34.4 | 50.0 | 3.46 | 2.57 | 14.5ms | 17.68ms | [model](https://paddledet.bj.bcebos.com/models/picodet_m_320_coco_lcnet.pdparams) &#124; [log](https://paddledet.bj.bcebos.com/logs/train_picodet_m_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet/picodet_m_320_coco_lcnet.yml) |
| PicoDet-M | 416*416 | 37.5 | 53.4 | 3.46 | 4.34 | 19.5ms | 28.39ms | [model](https://paddledet.bj.bcebos.com/models/picodet_m_416_coco_lcnet.pdparams) &#124; [log](https://paddledet.bj.bcebos.com/logs/train_picodet_m_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet/picodet_m_416_coco_lcnet.yml) |
| PicoDet-L | 320*320 | 36.1 | 52.0 | 5.80 | 4.20 | 18.3ms | 25.21ms | [model](https://paddledet.bj.bcebos.com/models/picodet_l_320_coco_lcnet.pdparams) &#124; [log](https://paddledet.bj.bcebos.com/logs/train_picodet_l_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet/picodet_l_320_coco_lcnet.yml) |
| PicoDet-L | 416*416 | 39.4 | 55.7 | 5.80 | 7.10 | 22.1ms | 42.23ms | [model](https://paddledet.bj.bcebos.com/models/picodet_l_416_coco_lcnet.pdparams) &#124; [log](https://paddledet.bj.bcebos.com/logs/train_picodet_l_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet/picodet_l_416_coco_lcnet.yml) |
| PicoDet-L | 640*640 | 42.6 | 59.2 | 5.80 | 16.81 | 43.1ms | 108.1ms | [model](https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams) &#124; [log](https://paddledet.bj.bcebos.com/logs/train_picodet_l_640_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet/picodet_l_640_coco_lcnet.yml) |

<details open>
<summary><b>Table Notes:</b></summary>

- <a name="latency">Latency:</a> All models are tested on an Intel Xeon Gold 6148 CPU (MKLDNN, 10 threads) and a `Snapdragon 865 (4xA77+4xA55)` ARM CPU (4 threads, FP16 inference). Rows marked `CPU` are measured with Paddle Inference; rows marked `Lite` are measured with [Paddle Lite](https://github.com/PaddlePaddle/Paddle-Lite).
- PicoDet is trained on COCO train2017 and validated on COCO val2017. Training uses 4 GPUs, and all pretrained models in the table above are obtained with the released default configs.
- Benchmark test: when benchmarking speed, post-processing is not included in the exported model; set `-o export.benchmark=True` or manually modify [runtime.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/configs/runtime.yml#L12).

</details>

#### Benchmark of Other Models

| Model | Input size | mAP<sup>val<br>0.5:0.95 | mAP<sup>val<br>0.5 | Params<br><sup>(M) | FLOPS<br><sup>(G) | Latency<sup><small>[NCNN](#latency)</small><sup><br><sup>(ms) |
| :-------- | :--------: | :---------------------: | :----------------: | :----------------: | :---------------: | :-----------------------------: |
| YOLOv3-Tiny | 416*416 | 16.6 | 33.1 | 8.86 | 5.62 | 25.42 |
| YOLOv4-Tiny | 416*416 | 21.7 | 40.2 | 6.06 | 6.96 | 23.69 |

@@ -68,38 +71,39 @@

| YOLOv5n | 640*640 | 28.4 | 46.0 | 1.9 | 4.5 | 40.35 |
| YOLOv5s | 640*640 | 37.2 | 56.0 | 7.2 | 16.5 | 78.05 |

- The ARM benchmark script comes from [MobileDetBenchmark](https://github.com/JiweiMaster/MobileDetBenchmark)
## Quick Start

<details open>
<summary>Requirements:</summary>

- PaddlePaddle == 2.2.2

</details>

<details>
<summary>Installation</summary>

- [Installation guide](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/docs/tutorials/INSTALL.md)
- [Prepare dataset](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/docs/tutorials/PrepareDataSet_en.md)

</details>

<details>
<summary>Training and Evaluation</summary>

- Training on a single GPU:

```shell
# training on single-GPU
export CUDA_VISIBLE_DEVICES=0
python tools/train.py -c configs/picodet/picodet_s_320_coco_lcnet.yml --eval
```

**Note:** If GPU memory runs out during training, reduce `batch_size` in `TrainReader` and reduce `base_lr` in `LearningRate` proportionally. The released configs were all trained on 4 GPUs; if the number of GPUs changes to 1, `base_lr` needs to be reduced by a factor of 4 (see the sketch after the multi-GPU block below).

- Training on multiple GPUs:

```shell
# ... @@ -108,31 +112,31 @@
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
python -m paddle.distributed.launch --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/picodet/picodet_s_320_coco_lcnet.yml --eval
```
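To make the note above concrete, here is a minimal sketch of the linear learning-rate scaling rule it implies; the function name and the example `base_lr` value are illustrative, not taken from a released config:

```python
# Hedged sketch of the linear LR scaling rule: the released configs
# assume 4 GPUs, so base_lr scales with the GPU count.
def scaled_base_lr(base_lr_4gpu, num_gpus, ref_num_gpus=4):
    return base_lr_4gpu * num_gpus / ref_num_gpus

# Illustrative value only: a 4-GPU base_lr of 0.32 becomes 0.08 on 1 GPU.
print(scaled_base_lr(0.32, 1))  # 0.08
```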
- Evaluation:

```shell
python tools/eval.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
              -o weights=https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams
```

- Inference:

```shell
python tools/infer.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
              -o weights=https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams
```

For details, please refer to the [quick start guide](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/docs/tutorials/GETTING_STARTED.md).

</details>

## Deployment

### Export and Convert Model

<details>
<summary>1. Export model (click to expand)</summary>

```shell
cd PaddleDetection
# ... @@ -141,18 +145,21 @@
python tools/export_model.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
              --output_dir=inference_model
```

- If post-processing is not needed, specify `-o export.benchmark=True` (if `-o` already appears, drop the duplicate `-o` here) or manually modify the corresponding field in [runtime.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/configs/runtime.yml).
- If NMS is not needed, specify `-o export.nms=False` or manually modify the corresponding field in [runtime.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/configs/runtime.yml).

</details>

<details>
<summary>2. Convert to Paddle Lite (click to expand)</summary>

- Install Paddle Lite >= 2.10:

```shell
pip install paddlelite
```

- Convert the model to Paddle Lite format:

```shell
# FP32
# ... @@ -164,16 +171,16 @@
paddle_lite_opt --model_dir=inference_model/picodet_s_320_coco_lcnet --valid_tar...
```
</details>

<details>
<summary>3. Convert to ONNX (click to expand)</summary>

- Install [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX) >= 0.7 and ONNX > 1.10.1; for details, please refer to the [tutorial on exporting ONNX models](../../deploy/EXPORT_ONNX_MODEL.md)

```shell
pip install onnx
pip install paddle2onnx==0.9.2
```

- Convert the model:

```shell
paddle2onnx --model_dir output_inference/picodet_s_320_coco_lcnet/ \
# ... @@ -183,22 +190,22 @@
            --save_file picodet_s_320_coco.onnx
```

- Simplify the ONNX model: use `onnx-simplifier` to simplify the ONNX model.

  - Install onnx-simplifier >= 0.3.6:

  ```shell
  pip install onnx-simplifier
  ```

  - Simplify the ONNX model:

  ```shell
  python -m onnxsim picodet_s_320_coco.onnx picodet_s_processed.onnx
  ```

</details>

- Models for deployment

| Model | Input size | ONNX | Paddle Lite(fp32) | Paddle Lite(fp16) |
| :-------- | :--------: | :---------------------: | :----------------: | :----------------: |
| PicoDet-S | 320*320 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_320_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_320.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_320_fp16.tar) |
| PicoDet-S | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_416_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_416.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_416_fp16.tar) |

@@ -212,31 +219,28 @@

| PicoDet-LCNet 1.5x | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_lcnet_1_5x_416_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_lcnet_1_5x.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_lcnet_1_5x_fp16.tar) |

### Deploy

- PaddleInference demo [Python](../../deploy/python) & [C++](../../deploy/cpp)
- [PaddleLite C++ demo](../../deploy/lite)
- [Android demo (Paddle Lite)](https://github.com/PaddlePaddle/Paddle-Lite-Demo/tree/release/2.4/object_detection/android/app/cxx/picodet_detection_demo)

Android demo visualization:

<div align="center">
  <img src="../../docs/images/picodet_android_demo1.jpg" height="500px" ><img src="../../docs/images/picodet_android_demo2.jpg" height="500px" ><img src="../../docs/images/picodet_android_demo3.jpg" height="500px" ><img src="../../docs/images/picodet_android_demo4.jpg" height="500px" >
</div>
## Quantization

<details open>
<summary>Requirements:</summary>

- PaddlePaddle >= 2.2.2
- PaddleSlim >= 2.2.1

**Install:**

```shell
pip install paddleslim==2.2.1
# ... @@ -245,61 +249,61 @@
```

</details>

<details>
<summary>Quantization-aware training (click to expand)</summary>

Start quantization-aware training:

```shell
python tools/train.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
          --slim_config configs/slim/quant/picodet_s_quant.yml --eval
```

- For more details, please refer to the [slim documentation](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/slim)

</details>

<details>
<summary>Post-training quantization (click to expand)</summary>

Calibrate and export the quantized model:

```shell
python tools/post_quant.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
          --slim_config configs/slim/post_quant/picodet_s_ptq.yml
```

- Note: the accuracy of post-training quantized models is currently abnormal; this issue is being resolved.

</details>

## Unstructured Pruning

<details open>
<summary>Tutorial:</summary>

Please refer to the [unstructured pruning documentation](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet/pruner/README.md) for training and deployment details.

</details>

## Application

- **Pedestrian detection:** for the `PicoDet-S-Pedestrian` model zoo, please refer to [PP-TinyPose](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/keypoint/tiny_pose#%E8%A1%8C%E4%BA%BA%E6%A3%80%E6%B5%8B%E6%A8%A1%E5%9E%8B)
- **Mainbody detection:** for the `PicoDet-L-Mainbody` model zoo, please refer to the [mainbody detection documentation](./application/mainbody_detection/README.md)
## FAQ

<details>
<summary>Out of memory error</summary>

Please reduce the `batch_size` of `TrainReader` in the config.

</details>

<details>
<summary>How to do transfer learning</summary>

Please reset the `pretrain_weights` field in the config, e.g. to continue training on your own data with a model trained on COCO:

```yaml
pretrain_weights: https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams
```

@@ -307,17 +311,17 @@

</details>

<details>
<summary>The `transpose` operator is time-consuming on some hardware</summary>

Please use the `PicoDet-LCNet` model, which has fewer `transpose` operators.

</details>

<details>
<summary>How to count model parameters</summary>

You can insert the code below at [trainer.py#L141](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/ppdet/engine/trainer.py#L141) to count learnable parameters; the snippet is truncated in this diff view, and a hedged completion follows the block.

```python
params = sum([
# ... @@ -329,8 +333,8 @@
print('params: ', params)
```
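A minimal self-contained sketch of what such a counter can look like; the assumptions are that it runs where `self.model` is a `paddle.nn.Layer` (as in `Trainer.__init__`), and the `_mean`/`_variance` name filter to skip BatchNorm running statistics is illustrative rather than confirmed by this diff:

```python
import numpy as np

# Hedged sketch: count learnable parameters of a paddle.nn.Layer,
# skipping BatchNorm running statistics, which are not trained.
def count_params(model):
    return sum(
        np.prod(p.shape)
        for n, p in model.named_parameters()
        if '_mean' not in n and '_variance' not in n)

# Inside Trainer.__init__ this would be used as:
#   print('params: ', count_params(self.model))
```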
</details>

## Cite PP-PicoDet

If you use PP-PicoDet in your research, please cite our technical report with the following BibTeX entry:

```
@misc{yu2021pppicodet,
      title={PP-PicoDet: A Better Real-Time Object Detector on Mobile Devices},
...
```
English | [简体中文](README.md)

# PP-PicoDet

![](../../docs/images/picedet_demo.jpeg)

## News

- Released a new series of PP-PicoDet models: **(2022.03.20)**
  - (1) Used TAL/Task-aligned Head and an optimized PAN, which greatly improved accuracy;
  - (2) Optimized CPU prediction speed, and greatly improved training speed;
  - (3) The exported model includes post-processing and prediction directly outputs the result without secondary development, so the migration cost is lower.

### Legacy Model

- Please refer to: [PicoDet 2021.10](./legacy_model/)

## Introduction

We developed a series of lightweight models, named `PP-PicoDet`. Because of their excellent performance, our models are very suitable for deployment on mobile or CPU. For more details, please refer to our [report on arXiv](https://arxiv.org/abs/2111.00902).

- 🌟 Higher mAP: the **first** object detector that surpasses mAP(0.5:0.95) **30+** within 1M parameters when the input size is 416.
- 🚀 Faster latency: 150FPS on mobile ARM CPU.
- 😊 Deploy friendly: supports PaddleLite/MNN/NCNN/OpenVINO and provides C++/Python/Android implementations.
- 😍 Advanced algorithms: we innovate on existing SOTA algorithms, such as ESNet, CSP-PAN, SimOTA with VFL, etc.

<div align="center">
  <img src="../../docs/images/picodet_map.png" width='600'/>
</div>
## Benchmark

| Model | Input size | mAP<sup>val<br>0.5:0.95 | mAP<sup>val<br>0.5 | Params<br><sup>(M) | FLOPS<br><sup>(G) | Latency<sup><small>[CPU](#latency)</small><sup><br><sup>(ms) | Latency<sup><small>[Lite](#latency)</small><sup><br><sup>(ms) | Download | Config |
| :-------- | :--------: | :---------------------: | :----------------: | :----------------: | :---------------: | :-----------------------------: | :-----------------------------: | :----------------------------------------: | :--------------------------------------- |
| PicoDet-XS | 320*320 | 23.5 | 36.1 | 0.70 | 0.67 | 10.9ms | 7.81ms | [model](https://paddledet.bj.bcebos.com/models/picodet_xs_320_coco_lcnet.pdparams) &#124; [log](https://paddledet.bj.bcebos.com/logs/train_picodet_xs_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet/picodet_xs_320_coco_lcnet.yml) |
| PicoDet-XS | 416*416 | 26.2 | 39.3 | 0.70 | 1.13 | 15.4ms | 12.38ms | [model](https://paddledet.bj.bcebos.com/models/picodet_xs_416_coco_lcnet.pdparams) &#124; [log](https://paddledet.bj.bcebos.com/logs/train_picodet_xs_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet/picodet_xs_416_coco_lcnet.yml) |
| PicoDet-S | 320*320 | 29.1 | 43.4 | 1.18 | 0.97 | 12.6ms | 9.56ms | [model](https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams) &#124; [log](https://paddledet.bj.bcebos.com/logs/train_picodet_s_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet/picodet_s_320_coco_lcnet.yml) |
| PicoDet-S | 416*416 | 32.5 | 47.6 | 1.18 | 1.65 | 17.2ms | 15.20ms | [model](https://paddledet.bj.bcebos.com/models/picodet_s_416_coco_lcnet.pdparams) &#124; [log](https://paddledet.bj.bcebos.com/logs/train_picodet_s_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet/picodet_s_416_coco_lcnet.yml) |
| PicoDet-M | 320*320 | 34.4 | 50.0 | 3.46 | 2.57 | 14.5ms | 17.68ms | [model](https://paddledet.bj.bcebos.com/models/picodet_m_320_coco_lcnet.pdparams) &#124; [log](https://paddledet.bj.bcebos.com/logs/train_picodet_m_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet/picodet_m_320_coco_lcnet.yml) |
| PicoDet-M | 416*416 | 37.5 | 53.4 | 3.46 | 4.34 | 19.5ms | 28.39ms | [model](https://paddledet.bj.bcebos.com/models/picodet_m_416_coco_lcnet.pdparams) &#124; [log](https://paddledet.bj.bcebos.com/logs/train_picodet_m_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet/picodet_m_416_coco_lcnet.yml) |
| PicoDet-L | 320*320 | 36.1 | 52.0 | 5.80 | 4.20 | 18.3ms | 25.21ms | [model](https://paddledet.bj.bcebos.com/models/picodet_l_320_coco_lcnet.pdparams) &#124; [log](https://paddledet.bj.bcebos.com/logs/train_picodet_l_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet/picodet_l_320_coco_lcnet.yml) |
| PicoDet-L | 416*416 | 39.4 | 55.7 | 5.80 | 7.10 | 22.1ms | 42.23ms | [model](https://paddledet.bj.bcebos.com/models/picodet_l_416_coco_lcnet.pdparams) &#124; [log](https://paddledet.bj.bcebos.com/logs/train_picodet_l_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet/picodet_l_416_coco_lcnet.yml) |
| PicoDet-L | 640*640 | 42.6 | 59.2 | 5.80 | 16.81 | 43.1ms | 108.1ms | [model](https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams) &#124; [log](https://paddledet.bj.bcebos.com/logs/train_picodet_l_640_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet/picodet_l_640_coco_lcnet.yml) |

<details open>
<summary><b>Table Notes:</b></summary>

- <a name="latency">Latency:</a> All our models are tested on an `Intel-Xeon-Gold-6148` CPU with MKLDNN (10 threads) and a `Qualcomm Snapdragon 865 (4xA77+4xA55)` with 4 threads (armv8, FP16). In the table above, CPU latency is tested with Paddle Inference and mobile latency (`Lite`) with [Paddle-Lite](https://github.com/PaddlePaddle/Paddle-Lite).
- PicoDet is trained on the COCO train2017 dataset and evaluated on COCO val2017. PicoDet used 4 GPUs for training, and all checkpoints are trained with default settings and hyperparameters.
- Benchmark test: when testing the speed benchmark, post-processing is not included in the exported model; you need to set `-o export.benchmark=True` or manually modify [runtime.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/configs/runtime.yml#L12).

</details>
#### Benchmark of Other Models

| Model | Input size | mAP<sup>val<br>0.5:0.95 | mAP<sup>val<br>0.5 | Params<br><sup>(M) | FLOPS<br><sup>(G) | Latency<sup><small>[NCNN](#latency)</small><sup><br><sup>(ms) |
| :-------- | :--------: | :---------------------: | :----------------: | :----------------: | :---------------: | :-----------------------------: |
| YOLOv3-Tiny | 416*416 | 16.6 | 33.1 | 8.86 | 5.62 | 25.42 |
| YOLOv4-Tiny | 416*416 | 21.7 | 40.2 | 6.06 | 6.96 | 23.69 |

@@ -71,39 +68,38 @@

| YOLOv5n | 640*640 | 28.4 | 46.0 | 1.9 | 4.5 | 40.35 |
| YOLOv5s | 640*640 | 37.2 | 56.0 | 7.2 | 16.5 | 78.05 |

- Mobile latency is tested with [MobileDetBenchmark](https://github.com/JiweiMaster/MobileDetBenchmark).
## Quick Start

<details open>
<summary>Requirements:</summary>

- PaddlePaddle >= 2.2.2

</details>

<details>
<summary>Installation</summary>

- [Installation guide](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/docs/tutorials/INSTALL.md)
- [Prepare dataset](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/docs/tutorials/PrepareDataSet_en.md)

</details>
<details>
<summary>Training and Evaluation</summary>

- Training model on single-GPU:

```shell
# training on single-GPU
export CUDA_VISIBLE_DEVICES=0
python tools/train.py -c configs/picodet/picodet_s_320_coco_lcnet.yml --eval
```

**Note:** If the GPU runs out of memory during training, reduce the `batch_size` in `TrainReader` and reduce the `base_lr` in `LearningRate` proportionally. The configs we published are all trained with 4 GPUs; if the number of GPUs is changed to 1, the `base_lr` needs to be reduced by a factor of 4.

- Training model on multi-GPU:

```shell
# ... @@ -112,31 +108,31 @@
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
python -m paddle.distributed.launch --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/picodet/picodet_s_320_coco_lcnet.yml --eval
```
- Evaluation:

```shell
python tools/eval.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
              -o weights=https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams
```

- Infer:

```shell
python tools/infer.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
              -o weights=https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams
```

For details, please refer to the [quick start guide](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/docs/tutorials/GETTING_STARTED.md).

</details>
## Deployment

### Export and Convert Model

<details>
<summary>1. Export model (click to expand)</summary>

```shell
cd PaddleDetection
# ... @@ -145,18 +141,22 @@
python tools/export_model.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
              --output_dir=inference_model
```

- If no post-processing is required, please specify `-o export.benchmark=True` (if `-o` has already appeared, drop the duplicate `-o` here) or manually modify the corresponding fields in [runtime.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/configs/runtime.yml).
- If no NMS is required, please specify `-o export.nms=False` or manually modify the corresponding fields in [runtime.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/configs/runtime.yml).
</details>

<details>
<summary>2. Convert to PaddleLite (click to expand)</summary>

- Install Paddlelite >= 2.10:

```shell
pip install paddlelite
```

- Convert model:

```shell
# FP32
# ... @@ -168,16 +168,16 @@
paddle_lite_opt --model_dir=inference_model/picodet_s_320_coco_lcnet --valid_tar...
```

</details>

<details>
<summary>3. Convert to ONNX (click to expand)</summary>

- Install [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX) >= 0.7 and ONNX > 1.10.1; for details, please refer to the [Tutorial on Exporting ONNX Models](../../deploy/EXPORT_ONNX_MODEL.md)

```shell
pip install onnx
pip install paddle2onnx==0.9.2
```

- Convert model:

```shell
paddle2onnx --model_dir output_inference/picodet_s_320_coco_lcnet/ \
# ... @@ -187,22 +187,22 @@
            --save_file picodet_s_320_coco.onnx
```

- Simplify ONNX model: use `onnx-simplifier` to simplify the ONNX model.

  - Install onnx-simplifier >= 0.3.6:

  ```shell
  pip install onnx-simplifier
  ```

  - Simplify the ONNX model:

  ```shell
  python -m onnxsim picodet_s_320_coco.onnx picodet_s_processed.onnx
  ```

</details>
- Deploy models

| Model | Input size | ONNX | Paddle Lite(fp32) | Paddle Lite(fp16) |
| :-------- | :--------: | :---------------------: | :----------------: | :----------------: |
| PicoDet-S | 320*320 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_320_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_320.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_320_fp16.tar) |
| PicoDet-S | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_416_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_416.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_416_fp16.tar) |

@@ -216,31 +216,28 @@

| PicoDet-LCNet 1.5x | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_lcnet_1_5x_416_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_lcnet_1_5x.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_lcnet_1_5x_fp16.tar) |
### Deploy

- PaddleInference demo [Python](../../deploy/python) & [C++](../../deploy/cpp)
- [PaddleLite C++ demo](../../deploy/lite)
- [Android demo(Paddle Lite)](https://github.com/PaddlePaddle/Paddle-Lite-Demo/tree/release/2.4/object_detection/android/app/cxx/picodet_detection_demo)

Android demo visualization:

<div align="center">
  <img src="../../docs/images/picodet_android_demo1.jpg" height="500px" ><img src="../../docs/images/picodet_android_demo2.jpg" height="500px" ><img src="../../docs/images/picodet_android_demo3.jpg" height="500px" ><img src="../../docs/images/picodet_android_demo4.jpg" height="500px" >
</div>
## Quantization

<details open>
<summary>Requirements:</summary>

- PaddlePaddle >= 2.2.2
- PaddleSlim >= 2.2.1

**Install:**

```shell
pip install paddleslim==2.2.1
# ... @@ -249,61 +246,61 @@
```

</details>
<details>
<summary>Quant aware (click to expand)</summary>

Configure the quant config and start training:

```shell
python tools/train.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
          --slim_config configs/slim/quant/picodet_s_quant.yml --eval
```

- For more details, please refer to the [slim documentation](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/slim)

</details>

<details>
<summary>Post quant (click to expand)</summary>

Configure the post-quant config and start calibrating the model:

```shell
python tools/post_quant.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
          --slim_config configs/slim/post_quant/picodet_s_ptq.yml
```

- Note: the accuracy of post quant is currently abnormal; this problem is being solved.

</details>
## Unstructured Pruning

<details open>
<summary>Tutorial:</summary>

Please refer to this [documentation](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet/pruner/README.md) for details such as requirements, training, and deployment.

</details>

## Application

- **Pedestrian detection:** for the `PicoDet-S-Pedestrian` model zoo, please refer to [PP-TinyPose](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/keypoint/tiny_pose#%E8%A1%8C%E4%BA%BA%E6%A3%80%E6%B5%8B%E6%A8%A1%E5%9E%8B)
- **Mainbody detection:** for the `PicoDet-L-Mainbody` model zoo, please refer to [mainbody detection](./application/mainbody_detection/README.md)
## FAQ

<details>
<summary>Out of memory error</summary>

Please reduce the `batch_size` of `TrainReader` in the config.

</details>

<details>
<summary>How to transfer learning</summary>

Please reset `pretrain_weights` in the config to a model trained on COCO, for example:

```yaml
pretrain_weights: https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams
```

@@ -311,17 +308,17 @@

</details>

<details>
<summary>The `transpose` operator is time-consuming on some hardware</summary>

Please use the `PicoDet-LCNet` model, which has fewer `transpose` operators.

</details>
<details>
<summary>How to count model parameters</summary>

You can insert the code below at [trainer.py#L141](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/ppdet/engine/trainer.py#L141) to count learnable parameters.

```python
params = sum([
# ... @@ -333,8 +330,8 @@
print('params: ', params)
```

</details>
## Cite PP-PicoDet

If you use PicoDet in your research, please cite our work with the following BibTeX entry:

```
@misc{yu2021pppicodet,
      title={PP-PicoDet: A Better Real-Time Object Detector on Mobile Devices},
...
```
@@ -35,6 +35,9 @@ class Result(object):
             return self.res_dict[name]
         return None
 
+    def clear(self, name):
+        self.res_dict[name].clear()
+
 
 class DataCollector(object):
     """
@@ -80,7 +83,6 @@ class DataCollector(object):
             ids = int(mot_item[0])
             if ids not in self.collector:
                 self.collector[ids] = copy.deepcopy(self.mots)
             self.collector[ids]["frames"].append(frameid)
             self.collector[ids]["rects"].append([mot_item[2:]])
             if attr_res:
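A minimal sketch of the per-ID accumulation pattern visible in the `DataCollector` hunk above: each new track ID gets a fresh deep copy of an empty template, then per-frame data is appended to that ID's lists. The template and the free function are simplified stand-ins for the real class, which stores more fields:

```python
import copy

# Hedged, simplified sketch of DataCollector's accumulation logic.
mots_template = {"frames": [], "rects": []}
collector = {}

def append(frameid, mot_item):
    ids = int(mot_item[0])                       # track ID is the first field
    if ids not in collector:
        collector[ids] = copy.deepcopy(mots_template)
    collector[ids]["frames"].append(frameid)
    collector[ids]["rects"].append([mot_item[2:]])  # keep the box coordinates

append(0, [7, 0.9, 10, 20, 50, 80])  # track 7 seen in frame 0
append(1, [7, 0.8, 12, 22, 52, 82])  # same track seen again in frame 1
```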
...@@ -297,10 +297,9 @@ def distill_idfeat(mot_res):
    feature_new = feature_list

    # subsample re-ID features: with more than 20 frames available, keep every other frame
-    if len(qualities_new) > 200:
-        skipf = 20
-    else:
-        skipf = max(10, len(qualities_new) // 10)
+    skipf = 1
+    if len(qualities_new) > 20:
+        skipf = 2
    quality_skip = np.array(qualities_new[::skipf])
    feature_skip = np.array(feature_new[::skipf])
......
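The effect of this change: the old policy thinned long tracks aggressively (stride 20 beyond 200 frames, stride of at least 10 otherwise), while the new one keeps every frame for short tracks and every second frame beyond 20. A quick comparison for 250 available frames:

```python
frames = list(range(250))

skipf_old = 20 if len(frames) > 200 else max(10, len(frames) // 10)  # old policy
skipf_new = 2 if len(frames) > 20 else 1                             # new policy

print(len(frames[::skipf_old]), len(frames[::skipf_new]))  # 13 125
```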
...@@ -587,7 +587,7 @@ class PipePredictor(object):
            if self.cfg['visual']:
                self.action_visual_helper.update(action_res)

-        if self.with_mtmct:
+        if self.with_mtmct and frame_id % 10 == 0:
            crop_input, img_qualities, rects = self.reid_predictor.crop_image_with_mot(
                frame, mot_res)
            if frame_id > self.warmup_frame:
...@@ -603,6 +603,8 @@
                "rects": rects
            }
            self.pipeline_res.update(reid_res_dict, 'reid')
+        else:
+            self.pipeline_res.clear('reid')

        self.collector.append(frame_id, self.pipeline_res)
......
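With this gating, cross-camera re-ID features are extracted on every 10th frame only, and on the remaining frames the stale `'reid'` entry is emptied via the new `Result.clear()` before `DataCollector.append` runs, so old features are never re-collected. The cadence, concretely:

```python
reid_frames = [f for f in range(30) if f % 10 == 0]
print(reid_frames)  # [0, 10, 20] -> re-ID runs at one tenth of the frame rate
```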
...@@ -231,7 +231,7 @@ class Detector(object):
            self.det_times.preprocess_time_s.end()

            # model prediction
-            result = self.predict(repeats=repeats)  # warmup
+            result = self.predict(repeats=50)  # warmup
            self.det_times.inference_time_s.start()
            result = self.predict(repeats=repeats)
            self.det_times.inference_time_s.end(repeats=repeats)
...@@ -296,7 +296,7 @@ class Detector(object):
        if not os.path.exists(self.output_dir):
            os.makedirs(self.output_dir)
        out_path = os.path.join(self.output_dir, video_out_name)
-        fourcc = cv2.VideoWriter_fourcc(* 'mp4v')
+        fourcc = cv2.VideoWriter_fourcc(*'mp4v')
        writer = cv2.VideoWriter(out_path, fourcc, fps, (width, height))
        index = 1
        while (1):
...@@ -790,7 +790,7 @@ def main():
        if FLAGS.image_dir is None and FLAGS.image_file is not None:
            assert FLAGS.batch_size == 1, "batch_size should be 1, when image_file is not None"
        img_list = get_test_images(FLAGS.image_dir, FLAGS.image_file)
-        detector.predict_image(img_list, FLAGS.run_benchmark, repeats=10)
+        detector.predict_image(img_list, FLAGS.run_benchmark, repeats=100)
        if not FLAGS.run_benchmark:
            detector.det_times.info(average=True)
        else:
......
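Both edits touch the benchmark path: warmup is now a fixed 50 iterations (previously it reused `repeats`), and the image benchmark times 100 repeats instead of 10. Warming up before timing excludes one-off costs such as memory allocation and kernel tuning; a generic sketch of the pattern (not this codebase's API):

```python
import time

def benchmark(fn, warmup=50, repeats=100):
    for _ in range(warmup):      # untimed: first runs pay allocation/JIT costs
        fn()
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - start) / repeats  # mean latency in seconds

print(benchmark(lambda: sum(range(10000))))
```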
...@@ -39,6 +39,11 @@ def get_categories(metric_type, anno_file=None, arch=None):
        if arch == 'keypoint_arch':
            return (None, {'id': 'keypoint'})

+    if anno_file is None or (not os.path.isfile(anno_file)):
+        logger.warning("anno_file '{}' is not set or does not exist, "
+                       "please recheck TrainDataset/EvalDataset/TestDataset.anno_path, "
+                       "otherwise default categories will be used by metric_type.".format(anno_file))

    if metric_type.lower() == 'coco' or metric_type.lower(
    ) == 'rbox' or metric_type.lower() == 'snipercoco':
        if anno_file and os.path.isfile(anno_file):
...@@ -55,8 +60,9 @@ def get_categories(metric_type, anno_file=None, arch=None):
        # anno file not exist, load default categories of COCO17
        else:
            if metric_type.lower() == 'rbox':
+                logger.warning("metric_type: {}, load default categories of DOTA.".format(metric_type))
                return _dota_category()
+            logger.warning("metric_type: {}, load default categories of COCO.".format(metric_type))
            return _coco17_category()

    elif metric_type.lower() == 'voc':
...@@ -77,6 +83,7 @@ def get_categories(metric_type, anno_file=None, arch=None):
        # anno file not exist, load default categories of
        # VOC all 20 categories
        else:
+            logger.warning("metric_type: {}, load default categories of VOC.".format(metric_type))
            return _vocall_category()

    elif metric_type.lower() == 'oid':
...@@ -104,6 +111,7 @@ def get_categories(metric_type, anno_file=None, arch=None):
            return clsid2catid, catid2name
        # anno file not exist, load default category 'pedestrian'.
        else:
+            logger.warning("metric_type: {}, load default categories of pedestrian MOT.".format(metric_type))
            return _mot_category(category='pedestrian')

    elif metric_type.lower() in ['kitti', 'bdd100kmot']:
...@@ -122,6 +130,7 @@ def get_categories(metric_type, anno_file=None, arch=None):
            return clsid2catid, catid2name
        # anno file not exist, load default categories of visdrone all 10 categories
        else:
+            logger.warning("metric_type: {}, load default categories of VisDrone.".format(metric_type))
            return _visdrone_category()

    else:
......
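Taken together, the added warnings document the fallback contract of `get_categories`: with no usable annotation file, a default label list is chosen purely by `metric_type`. A condensed, runnable sketch of that mapping (the label-list builders are replaced by strings here, and the MOT branch condition is illustrative):

```python
def fallback_categories(metric_type):
    m = metric_type.lower()
    if m == 'rbox':
        return 'DOTA default categories'
    if m in ('coco', 'snipercoco'):
        return 'COCO17 default categories (80 classes)'
    if m == 'voc':
        return 'VOC default categories (20 classes)'
    if 'mot' in m:                        # condition illustrative
        return "single default category 'pedestrian'"
    if m == 'visdrone':
        return 'VisDrone default categories (10 classes)'
    raise ValueError('unknown metric_type: ' + metric_type)

print(fallback_categories('rbox'))
```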
...@@ -26,8 +26,6 @@ from motmetrics.math_util import quiet_divide
import numpy as np
import pandas as pd
-import paddle
-import paddle.nn.functional as F
from .metrics import Metric
import motmetrics as mm
import openpyxl
...@@ -311,7 +309,9 @@ class MCMOTEvaluator(object):
        self.gt_filename = os.path.join(self.data_root, '../',
                                        'sequences',
                                        '{}.txt'.format(self.seq_name))
+        if not os.path.exists(self.gt_filename):
+            logger.warning("gt_filename '{}' of MCMOTEvaluator does not exist, so the MOTA will be -inf.".format(self.gt_filename))

    def reset_accumulator(self):
        import motmetrics as mm
        mm.lap.default_solver = 'lap'
......
...@@ -22,8 +22,7 @@ import sys
import math
from collections import defaultdict
import numpy as np
-import paddle
-import paddle.nn.functional as F
from ppdet.modeling.bbox_utils import bbox_iou_np_expand
from .map_utils import ap_per_class
from .metrics import Metric
...@@ -36,8 +35,10 @@ __all__ = ['MOTEvaluator', 'MOTMetric', 'JDEDetMetric', 'KITTIMOTMetric']

def read_mot_results(filename, is_gt=False, is_ignore=False):
-    valid_labels = {1}
-    ignore_labels = {2, 7, 8, 12}  # only in motchallenge datasets like 'MOT16'
+    valid_label = [1]
+    ignore_labels = [2, 7, 8, 12]  # only in motchallenge datasets like 'MOT16'
+    logger.info("In the MOT16/17 dataset the valid label of ground truth is '{}'; "
+                "in other datasets it should be '0' for single-class MOT.".format(valid_label[0]))
    results_dict = dict()
    if os.path.isfile(filename):
        with open(filename, 'r') as f:
...@@ -50,12 +51,10 @@ def read_mot_results(filename, is_gt=False, is_ignore=False):
                    continue
                results_dict.setdefault(fid, list())

-                box_size = float(linelist[4]) * float(linelist[5])
-
                if is_gt:
                    label = int(float(linelist[7]))
                    mark = int(float(linelist[6]))
-                    if mark == 0 or label not in valid_labels:
+                    if mark == 0 or label not in valid_label:
                        continue
                    score = 1
                elif is_ignore:
...@@ -118,6 +117,8 @@ class MOTEvaluator(object):
        assert self.data_type == 'mot'
        gt_filename = os.path.join(self.data_root, self.seq_name, 'gt',
                                   'gt.txt')
+        if not os.path.exists(gt_filename):
+            logger.warning("gt_filename '{}' of MOTEvaluator does not exist, so the MOTA will be -inf.".format(gt_filename))
        self.gt_frame_dict = read_mot_results(gt_filename, is_gt=True)
        self.gt_ignore_frame_dict = read_mot_results(
            gt_filename, is_ignore=True)
......
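For context on `valid_label`/`ignore_labels`: per the MOTChallenge annotation format, each ground-truth line is `frame, id, x, y, w, h, mark, class, visibility`; a row is kept only when `mark != 0` and the class is 1 (pedestrian), while classes 2, 7, 8 and 12 (person on vehicle, static person, distractor, reflection) form the ignore set. A worked line:

```python
line = "1,3,912,484,97,109,1,1,0.9"     # frame 1, track 3, class 1 = pedestrian
linelist = line.split(',')
fid = int(linelist[0])
mark = int(float(linelist[6]))
label = int(float(linelist[7]))
keep = (mark != 0) and (label in [1])   # True: counted as valid ground truth
print(fid, keep)
```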
...@@ -22,22 +22,23 @@ class BaseArch(nn.Layer):
        self.fuse_norm = False

    def load_meanstd(self, cfg_transform):
-        self.scale = 1.
-        self.mean = paddle.to_tensor([0.485, 0.456, 0.406]).reshape(
-            (1, 3, 1, 1))
-        self.std = paddle.to_tensor([0.229, 0.224, 0.225]).reshape((1, 3, 1, 1))
+        scale = 1.
+        mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
+        std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
        for item in cfg_transform:
            if 'NormalizeImage' in item:
-                self.mean = paddle.to_tensor(item['NormalizeImage'][
-                    'mean']).reshape((1, 3, 1, 1))
-                self.std = paddle.to_tensor(item['NormalizeImage'][
-                    'std']).reshape((1, 3, 1, 1))
+                mean = np.array(
+                    item['NormalizeImage']['mean'], dtype=np.float32)
+                std = np.array(item['NormalizeImage']['std'], dtype=np.float32)
                if item['NormalizeImage'].get('is_scale', True):
-                    self.scale = 1. / 255.
+                    scale = 1. / 255.
                break
        if self.data_format == 'NHWC':
-            self.mean = self.mean.reshape(1, 1, 1, 3)
-            self.std = self.std.reshape(1, 1, 1, 3)
+            self.scale = paddle.to_tensor(scale / std).reshape((1, 1, 1, 3))
+            self.bias = paddle.to_tensor(-mean / std).reshape((1, 1, 1, 3))
+        else:
+            self.scale = paddle.to_tensor(scale / std).reshape((1, 3, 1, 1))
+            self.bias = paddle.to_tensor(-mean / std).reshape((1, 3, 1, 1))

    def forward(self, inputs):
        if self.data_format == 'NHWC':
...@@ -46,7 +47,7 @@ class BaseArch(nn.Layer):
        if self.fuse_norm:
            image = inputs['image']
-            self.inputs['image'] = (image * self.scale - self.mean) / self.std
+            self.inputs['image'] = image * self.scale + self.bias
            self.inputs['im_shape'] = inputs['im_shape']
            self.inputs['scale_factor'] = inputs['scale_factor']
        else:
...@@ -66,8 +67,7 @@ class BaseArch(nn.Layer):
        outs = []
        for inp in inputs_list:
            if self.fuse_norm:
-                self.inputs['image'] = (
-                    inp['image'] * self.scale - self.mean) / self.std
+                self.inputs['image'] = inp['image'] * self.scale + self.bias
                self.inputs['im_shape'] = inp['im_shape']
                self.inputs['scale_factor'] = inp['scale_factor']
            else:
...@@ -75,7 +75,7 @@ class BaseArch(nn.Layer):
            outs.append(self.get_pred())

        # multi-scale test
-        if len(outs)>1:
+        if len(outs) > 1:
            out = self.merge_multi_scale_predictions(outs)
        else:
            out = outs[0]
...@@ -92,7 +92,9 @@ class BaseArch(nn.Layer):
            keep_top_k = self.bbox_post_process.nms.keep_top_k
            nms_threshold = self.bbox_post_process.nms.nms_threshold
        else:
-            raise Exception("Multi scale test only supports CascadeRCNN, FasterRCNN and MaskRCNN for now")
+            raise Exception(
+                "Multi scale test only supports CascadeRCNN, FasterRCNN and MaskRCNN for now"
+            )

        final_boxes = []
        all_scale_outs = paddle.concat([o['bbox'] for o in outs]).numpy()
...@@ -101,9 +103,11 @@ class BaseArch(nn.Layer):
            if np.count_nonzero(idxs) == 0:
                continue
            r = nms(all_scale_outs[idxs, 1:], nms_threshold)
-            final_boxes.append(np.concatenate([np.full((r.shape[0], 1), c), r], 1))
+            final_boxes.append(
+                np.concatenate([np.full((r.shape[0], 1), c), r], 1))
        out = np.concatenate(final_boxes)
-        out = np.concatenate(sorted(out, key=lambda e: e[1])[-keep_top_k:]).reshape((-1, 6))
+        out = np.concatenate(sorted(
+            out, key=lambda e: e[1])[-keep_top_k:]).reshape((-1, 6))
        out = {
            'bbox': paddle.to_tensor(out),
            'bbox_num': paddle.to_tensor(np.array([out.shape[0], ]))
......
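The `load_meanstd` rewrite folds normalization into one fused multiply-add: since `(x * s - mean) / std == x * (s / std) - mean / std`, the per-channel tensors `scale = s / std` and `bias = -mean / std` are built once, and `forward` reduces to `image * self.scale + self.bias`. A quick NumPy check of the equivalence, using the ImageNet defaults from the code above:

```python
import numpy as np

x = np.random.rand(1, 3, 4, 4).astype(np.float32) * 255.0   # fake NCHW image
s = 1.0 / 255.0
mean = np.array([0.485, 0.456, 0.406], np.float32).reshape(1, 3, 1, 1)
std = np.array([0.229, 0.224, 0.225], np.float32).reshape(1, 3, 1, 1)

old = (x * s - mean) / std              # original two-op form
new = x * (s / std) + (-mean / std)     # fused multiply-add form
assert np.allclose(old, new, atol=1e-5)
```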