- Released a new series of PP-PicoDet models: **(2022.03.20)**
- (1) Introduced TAL and a Task-aligned Head and optimized structures such as PAN, which greatly improves accuracy;
- (2) Optimized CPU inference speed; training speed is also greatly improved;
- (3) The exported model now includes post-processing in the network and directly outputs box results, so no secondary development is needed and the migration cost is lower.
### Legacy Model

- Please refer to: [PicoDet 2021.10](./legacy_model/)
## Introduction
We developed a series of lightweight models, named `PP-PicoDet`. Because of their excellent performance, these models are very suitable for deployment on mobile devices or CPUs. For more details, please refer to our [report on arXiv](https://arxiv.org/abs/2111.00902).

PP-PicoDet has the following features:

- 🌟 Higher mAP: the **first** object detector that surpasses **30+** mAP(0.5:0.95) with fewer than 1M parameters when the input size is 416.
- <a name="latency">Latency:</a> All our models are tested on an `Intel Xeon Gold 6148` CPU with MKLDNN and 10 threads, and on a `Qualcomm Snapdragon 865 (4xA77+4xA55)` with 4 threads, ARMv8 and FP16. In the table above, CPU latency is measured with Paddle Inference, and mobile latency (`Lite`) is measured with [Paddle-Lite](https://github.com/PaddlePaddle/Paddle-Lite).
- PicoDet is trained on the COCO train2017 dataset and evaluated on COCO val2017. Training uses 4 GPUs, and all checkpoints are trained with the default settings and hyperparameters.
- Benchmark test: when benchmarking speed, post-processing is not included in the exported model; set `-o export.benchmark=True` or manually modify [runtime.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/runtime.yml#L12).
- If the GPU runs out of memory during training, reduce `batch_size` in `TrainReader` and reduce `base_lr` in `LearningRate` proportionally. The published configs are all trained with 4 GPUs; if you train with a single GPU, `base_lr` needs to be reduced by a factor of 4.
- If no post-processing is required, specify `-o export.benchmark=True` (if `-o` has already appeared, drop the extra `-o`) or manually modify the corresponding fields in [runtime.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/configs/runtime.yml).
- If no NMS is required, specify `-o export.nms=False` or manually modify the corresponding fields in [runtime.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/configs/runtime.yml). See the example export command after this list.
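For illustration only, a benchmark-oriented export might look like the sketch below; the config file and weights path are placeholders rather than values taken from this document, so substitute the model you actually use:

```shell
# Export an inference model without post-processing for speed benchmarking.
# The config name and weights path are illustrative placeholders.
python tools/export_model.py \
    -c configs/picodet/picodet_s_320_coco_lcnet.yml \
    -o weights=output/picodet_s_320_coco_lcnet/model_final.pdparams \
       export.benchmark=True \
    --output_dir=output_inference
```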
<details>
<summary>3. Convert to ONNX (click to expand)</summary>
- Install [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX) >= 0.7 and ONNX > 1.10.1. For details, please refer to [Tutorials of Export ONNX Model](../../deploy/EXPORT_ONNX_MODEL.md).
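As a rough sketch of the conversion step (the inference-model directory and output file name below are placeholders, not values taken from this document):

```shell
# Install the conversion tools with the versions noted above.
pip install "paddle2onnx>=0.7" "onnx>1.10.1"

# Convert an exported inference model to ONNX; paths are illustrative placeholders.
paddle2onnx --model_dir output_inference/picodet_s_320_coco_lcnet \
            --model_filename model.pdmodel \
            --params_filename model.pdiparams \
            --opset_version 11 \
            --save_file picodet_s_320_coco_lcnet.onnx
```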
- Note: the accuracy of post-training (offline) quantized models is currently abnormal; this problem is being resolved.
</details>

<details>
<summary>2. Convert to PaddleLite (click to expand)</summary>
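A minimal sketch of the conversion, assuming the model has already been exported as a Paddle inference model; the paths and output name are placeholders:

```shell
# Install Paddle-Lite, which provides the paddle_lite_opt tool.
pip install paddlelite

# Convert the exported inference model to a Paddle-Lite .nb file for ARM devices.
# Paths and the output name are illustrative placeholders.
paddle_lite_opt --model_file=output_inference/picodet_s_320_coco_lcnet/model.pdmodel \
                --param_file=output_inference/picodet_s_320_coco_lcnet/model.pdiparams \
                --optimize_out_type=naive_buffer \
                --optimize_out=picodet_s_320_coco_lcnet \
                --valid_targets=arm
```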
</details>

## Unstructured Pruning
<details open>
<summary>Tutorial:</summary>
Please refer to this [documentation](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/pruner/README.md) for details such as requirements, training, and deployment.
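As a hedged sketch of how pruning-based training is usually launched in PaddleDetection, a slim config is passed on top of the base detection config; the pruner config path below is hypothetical, so take the real one from the documentation linked above:

```shell
# Train PicoDet with an unstructured-pruning slim config.
# Both config paths are illustrative; the pruner config name is hypothetical.
python tools/train.py \
    -c configs/picodet/picodet_s_320_coco_lcnet.yml \
    --slim_config configs/picodet/pruner/<your_pruner_config>.yml \
    --eval
```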
- **Pedestrian detection:** for the model zoo of `PicoDet-S-Pedestrian`, please refer to [PP-TinyPose](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/keypoint/tiny_pose#%E8%A1%8C%E4%BA%BA%E6%A3%80%E6%B5%8B%E6%A8%A1%E5%9E%8B).
</details>

<details>
<summary>The `transpose` operator is time-consuming on some hardware</summary>
Please use the `PicoDet-LCNet` model, which has fewer `transpose` operators.
</details>
<details>
<summary>How to count model parameters.</summary>
You can insert the code below at [this point](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/ppdet/engine/trainer.py#L141) in `trainer.py` to count the learnable parameters:
```python
params = sum([
    p.numel() for n, p in self.model.named_parameters()
    if all([x not in n for x in ['_mean', '_variance']])  # skip BatchNorm running statistics
])
print('params: ', params)
```
</details>
## Cite PP-PicoDet
If you use PicoDet in your research, please cite our work by using the following BibTeX entry:
```
@misc{yu2021pppicodet,
title={PP-PicoDet: A Better Real-Time Object Detector on Mobile Devices},