English | [简体中文](README.md)

# PP-PicoDet

![](../../docs/images/picedet_demo.jpeg)

## News

- Released a new series of PP-PicoDet models: **(2022.03.20)**
  - (1) Adopted the TAL/ETA Head and an optimized PAN, which greatly improves accuracy;
  - (2) Optimized CPU inference speed and greatly accelerated training;
  - (3) The exported model includes post-processing, so inference directly outputs results without secondary development, lowering the migration cost.

### Legacy Model

- Please refer to: [PicoDet 2021.10](./legacy_model/)

## Introduction

We have developed a series of lightweight models named `PP-PicoDet`. Owing to their excellent performance, they are well suited for deployment on mobile devices and CPUs. For more details, please refer to our [report on arXiv](https://arxiv.org/abs/2111.00902).

- 🌟 Higher mAP: the **first** object detectors to surpass **30+** mAP(0.5:0.95) within 1M parameters at an input size of 416.
- 🚀 Faster latency: 150FPS on mobile ARM CPU.
- 😊 Deploy friendly: supports PaddleLite/MNN/NCNN/OpenVINO and provides C++/Python/Android implementations.
- 😍 Advanced algorithms: uses state-of-the-art algorithms and offers innovations such as ESNet, CSP-PAN, and SimOTA with VFL.


<div align="center">
  <img src="../../docs/images/picodet_map.png" width='600'/>
</div>

## Benchmark

| Model     | Input size | mAP<sup>val<br>0.5:0.95 | mAP<sup>val<br>0.5 | Params<br><sup>(M) | FLOPS<br><sup>(G) | Latency<sup><small>[CPU](#latency)</small><sup><br><sup>(ms) | Latency<sup><small>[Lite](#latency)</small><sup><br><sup>(ms) |  Weight  | Config | Inference Model |
| :-------- | :--------: | :---------------------: | :----------------: | :----------------: | :---------------: | :-----------------------------: | :-----------------------------: | :----------------------------------------: | :--------------------------------------- | :--------------------------------------- |
| PicoDet-XS |  320*320   |          23.5           |        36.1       |        0.70        |       0.67        |              3.9ms              |            7.81ms             | [model](https://paddledet.bj.bcebos.com/models/picodet_xs_320_coco_lcnet.pdparams) &#124; [log](https://paddledet.bj.bcebos.com/logs/train_picodet_xs_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_xs_320_coco_lcnet.yml) | [w/ postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_xs_320_coco_lcnet.tar) &#124; [w/o postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_xs_320_coco_lcnet_non_postprocess.tar) |
| PicoDet-XS |  416*416   |          26.2           |        39.3        |        0.70        |       1.13        |              6.1ms             |            12.38ms             | [model](https://paddledet.bj.bcebos.com/models/picodet_xs_416_coco_lcnet.pdparams) &#124; [log](https://paddledet.bj.bcebos.com/logs/train_picodet_xs_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_xs_416_coco_lcnet.yml) | [w/ postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_xs_416_coco_lcnet.tar) &#124; [w/o postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_xs_416_coco_lcnet_non_postprocess.tar) |
| PicoDet-S |  320*320   |          29.1           |        43.4        |        1.18       |       0.97       |             4.8ms              |            9.56ms             | [model](https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams) &#124; [log](https://paddledet.bj.bcebos.com/logs/train_picodet_s_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_s_320_coco_lcnet.yml) | [w/ postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_s_320_coco_lcnet.tar) &#124; [w/o postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_s_320_coco_lcnet_non_postprocess.tar) |
| PicoDet-S |  416*416   |          32.5           |        47.6        |        1.18        |       1.65       |              6.6ms              |            15.20ms             | [model](https://paddledet.bj.bcebos.com/models/picodet_s_416_coco_lcnet.pdparams) &#124; [log](https://paddledet.bj.bcebos.com/logs/train_picodet_s_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_s_416_coco_lcnet.yml) | [w/ postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_s_416_coco_lcnet.tar) &#124; [w/o postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_s_416_coco_lcnet_non_postprocess.tar) |
| PicoDet-M |  320*320   |          34.4           |        50.0        |        3.46        |       2.57       |             8.2ms              |            17.68ms             | [model](https://paddledet.bj.bcebos.com/models/picodet_m_320_coco_lcnet.pdparams) &#124; [log](https://paddledet.bj.bcebos.com/logs/train_picodet_m_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_m_320_coco_lcnet.yml) | [w/ postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_m_320_coco_lcnet.tar) &#124; [w/o postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_m_320_coco_lcnet_non_postprocess.tar) |
| PicoDet-M |  416*416   |          37.5           |        53.4       |        3.46        |       4.34        |              12.7ms              |            28.39ms            | [model](https://paddledet.bj.bcebos.com/models/picodet_m_416_coco_lcnet.pdparams) &#124; [log](https://paddledet.bj.bcebos.com/logs/train_picodet_m_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_m_416_coco_lcnet.yml) | [w/ postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_m_416_coco_lcnet.tar) &#124; [w/o postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_m_416_coco_lcnet_non_postprocess.tar) |
| PicoDet-L |  320*320   |          36.1           |        52.0        |        5.80       |       4.20        |              11.5ms             |            25.21ms           | [model](https://paddledet.bj.bcebos.com/models/picodet_l_320_coco_lcnet.pdparams) &#124; [log](https://paddledet.bj.bcebos.com/logs/train_picodet_l_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_l_320_coco_lcnet.yml) | [w/ postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_l_320_coco_lcnet.tar) &#124; [w/o postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_l_320_coco_lcnet_non_postprocess.tar) |
| PicoDet-L |  416*416   |          39.4           |        55.7        |        5.80        |       7.10       |              20.7ms              |            42.23ms            | [model](https://paddledet.bj.bcebos.com/models/picodet_l_416_coco_lcnet.pdparams) &#124; [log](https://paddledet.bj.bcebos.com/logs/train_picodet_l_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_l_416_coco_lcnet.yml) | [w/ postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_l_416_coco_lcnet.tar) &#124; [w/o postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_l_416_coco_lcnet_non_postprocess.tar) |
| PicoDet-L |  640*640   |          42.6           |        59.2        |        5.80        |       16.81        |              62.5ms              |            108.1ms          | [model](https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams) &#124; [log](https://paddledet.bj.bcebos.com/logs/train_picodet_l_640_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_l_640_coco_lcnet.yml) | [w/ postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_l_640_coco_lcnet.tar) &#124; [w/o postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_l_640_coco_lcnet_non_postprocess.tar) |

<details open>
<summary><b>Table Notes:</b></summary>

- <a name="latency">Latency:</a> All models are tested on an `Intel Core i7 10750H` CPU (MKLDNN, 12 threads) and a `Qualcomm Snapdragon 865 (4xA77+4xA55)` mobile SoC (4 threads, arm8, FP16). In the table above, CPU latency is measured with Paddle-Inference, and mobile latency (`Lite`) with [Paddle-Lite](https://github.com/PaddlePaddle/Paddle-Lite).
- PicoDet is trained on the COCO train2017 dataset and evaluated on COCO val2017. All checkpoints are trained on 4 GPUs with the default settings and hyperparameters.
- Benchmark test: when benchmarking speed, post-processing is not included in the exported model; set `-o export.benchmark=True` or manually modify the corresponding field in [runtime.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/runtime.yml#L12), as sketched below.
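
  A hedged export sketch for speed benchmarking (the model choice, weights URL, and output directory mirror the export example later in this README and are illustrative):

  ```shell
  # export a benchmark model with post-processing stripped
  python tools/export_model.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
                -o weights=https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams \
                export.benchmark=True --output_dir=output_inference
  ```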

</details>

#### Benchmark of Other Models

| Model     | Input size | mAP<sup>val<br>0.5:0.95 | mAP<sup>val<br>0.5 | Params<br><sup>(M) | FLOPS<br><sup>(G) | Latency<sup><small>[NCNN](#latency)</small><sup><br><sup>(ms) |
| :-------- | :--------: | :---------------------: | :----------------: | :----------------: | :---------------: | :-----------------------------: |
| YOLOv3-Tiny |  416*416   |          16.6           |        33.1      |        8.86        |       5.62        |             25.42               |
| YOLOv4-Tiny |  416*416   |          21.7           |        40.2        |        6.06           |       6.96           |             23.69               |
| PP-YOLO-Tiny |  320*320       |          20.6         |        -              |   1.08             |    0.58             |    6.75                           |  
| PP-YOLO-Tiny |  416*416   |          22.7          |    -               |    1.08               |    1.02             |    10.48                          |  
| Nanodet-M |  320*320      |          20.6            |    -               |    0.95               |    0.72             |    8.71                           |  
| Nanodet-M |  416*416   |          23.5             |    -               |    0.95               |    1.2              |  13.35                          |
| Nanodet-M 1.5x |  416*416   |          26.8        |    -                  | 2.08               |    2.42             |    15.83                          |
| YOLOX-Nano     |  416*416   |          25.8          |    -               |    0.91               |    1.08             |    19.23                          |
| YOLOX-Tiny     |  416*416   |          32.8          |    -               |    5.06               |    6.45             |    32.77                          |
| YOLOv5n |  640*640       |          28.4             |    46.0            |    1.9                |    4.5              |    40.35                          |
| YOLOv5s |  640*640       |          37.2             |    56.0            |    7.2                |    16.5             |    78.05                          |

- Mobile latency is tested with [MobileDetBenchmark](https://github.com/JiweiMaster/MobileDetBenchmark).

## Quick Start

<details open>
<summary>Requirements:</summary>

- PaddlePaddle >= 2.2.2

</details>

<details>
<summary>Installation</summary>

- [Installation guide](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/INSTALL.md)
- [Prepare dataset](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/PrepareDataSet_en.md)

</details>

<details>
<summary>Training and Evaluation</summary>

- Training on a single GPU:

```shell
# training on single-GPU
export CUDA_VISIBLE_DEVICES=0
python tools/train.py -c configs/picodet/picodet_s_320_coco_lcnet.yml --eval
```
If the GPU runs out of memory during training, reduce the `batch_size` in `TrainReader` and reduce the `base_lr` in `LearningRate` proportionally. Note that the published configs are all trained with 4 GPUs; if the number of GPUs is changed to 1, `base_lr` needs to be reduced by a factor of 4, as sketched below.
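
A single-GPU sketch using command-line overrides (the `batch_size` and `base_lr` values below are placeholders — read the defaults from the config and scale them as described above):

```shell
# single-GPU training with proportionally reduced LR; values are illustrative only
export CUDA_VISIBLE_DEVICES=0
python tools/train.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
              -o TrainReader.batch_size=32 LearningRate.base_lr=0.08 --eval
```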

- Training on multiple GPUs:


```shell
# training on multi-GPU
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
python -m paddle.distributed.launch --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/picodet/picodet_s_320_coco_lcnet.yml --eval
```

- Evaluation:

```shell
python tools/eval.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
              -o weights=https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams
```

- Infer:

```shell
python tools/infer.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
              -o weights=https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams
```
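
To run inference on a specific image, pass `--infer_img` (the demo image below ships with PaddleDetection; substitute your own path):

```shell
python tools/infer.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
              -o weights=https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams \
              --infer_img=demo/000000014439.jpg
```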

For more details, refer to the [Quick start guide](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md).

</details>


## Deployment

### Export and Convert Model

<details open>
<summary>1. Export model</summary>

```shell
cd PaddleDetection
python tools/export_model.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
              -o weights=https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams \
              --output_dir=output_inference
```

- If no post-processing is required, specify `-o export.benchmark=True` (if `-o` already appears in the command, append the field to it instead of repeating `-o`) or manually modify the corresponding fields in [runtime.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/runtime.yml).
- If no NMS is required, specify `-o export.nms=False` or manually modify the corresponding fields in [runtime.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/runtime.yml). Many ONNX deployment scenarios only support a single input and a fixed-shape output, so it is recommended not to export NMS when exporting to ONNX; see the sketch below.
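
  A hedged example of an ONNX-friendly export with NMS stripped (this assumes the `export.nms` field in your version of runtime.yml — verify the field name before relying on it):

  ```shell
  python tools/export_model.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
                -o weights=https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams \
                export.nms=False --output_dir=output_inference
  ```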


</details>

<details>
<summary>2. Convert to PaddleLite (click to expand)</summary>

- Install Paddle-Lite >= 2.10:

```shell
pip install paddlelite
```

- Convert model:

```shell
# FP32
paddle_lite_opt --model_dir=output_inference/picodet_s_320_coco_lcnet --valid_targets=arm --optimize_out=picodet_s_320_coco_fp32
# FP16
paddle_lite_opt --model_dir=output_inference/picodet_s_320_coco_lcnet --valid_targets=arm --optimize_out=picodet_s_320_coco_fp16 --enable_fp16=true
```

</details>

<details>
<summary>3. Convert to ONNX (click to expand)</summary>

- Install [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX) >= 0.7 and ONNX > 1.10.1. For details, please refer to the [Tutorials of Export ONNX Model](../../deploy/EXPORT_ONNX_MODEL.md)

```shell
pip install onnx
pip install paddle2onnx==0.9.2
```

- Convert model:

```shell
paddle2onnx --model_dir output_inference/picodet_s_320_coco_lcnet/ \
            --model_filename model.pdmodel  \
            --params_filename model.pdiparams \
            --opset_version 11 \
            --save_file picodet_s_320_coco.onnx
```

- Simplify the ONNX model using onnx-simplifier:

  - Install onnx-simplifier >= 0.3.6:
  ```shell
  pip install onnx-simplifier
  ```
  - Simplify the ONNX model:
  ```shell
  python -m onnxsim picodet_s_320_coco.onnx picodet_s_processed.onnx
  ```

  If the model includes postprocessing, specify `dynamic-input-shape` when simplifying the model:
  ```shell
  python -m onnxsim picodet_s_320_coco.onnx picodet_s_processed.onnx --dynamic-input-shape --input-shape image:1,3,320,320
  ```

</details>

- Deployment models:

| Model     | Input size | ONNX  | Paddle Lite(fp32) | Paddle Lite(fp16) |
| :-------- | :--------: | :---------------------: | :----------------: | :----------------: |
| PicoDet-XS |  320*320   | [( w/ postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_xs_320_lcnet_postprocessed.onnx) &#124; [( w/o postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_xs_320_coco_lcnet.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_xs_320_coco_lcnet.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_xs_320_coco_lcnet_fp16.tar) |
| PicoDet-XS |  416*416   | [( w/ postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_xs_416_lcnet_postprocessed.onnx) &#124; [( w/o postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_xs_416_coco_lcnet.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_xs_416_coco_lcnet.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_xs_416_coco_lcnet_fp16.tar) |
| PicoDet-S |  320*320   | [( w/ postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_320_lcnet_postprocessed.onnx) &#124; [( w/o postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_320_coco_lcnet.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_320_coco_lcnet.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_320_coco_lcnet_fp16.tar) |
| PicoDet-S |  416*416   |  [( w/ postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_416_lcnet_postprocessed.onnx) &#124; [( w/o postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_416_coco_lcnet.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_416_coco_lcnet.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_416_coco_lcnet_fp16.tar) |
| PicoDet-M |  320*320   | [( w/ postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_m_320_lcnet_postprocessed.onnx) &#124; [( w/o postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_m_320_coco_lcnet.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_m_320_coco_lcnet.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_m_320_coco_lcnet_fp16.tar) |
| PicoDet-M |  416*416   | [( w/ postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_m_416_lcnet_postprocessed.onnx) &#124; [( w/o postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_m_416_coco_lcnet.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_m_416_coco_lcnet.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_m_416_coco_lcnet_fp16.tar) |
| PicoDet-L |  320*320   | [( w/ postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_320_lcnet_postprocessed.onnx) &#124; [( w/o postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_320_coco_lcnet.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_320_coco_lcnet.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_320_coco_lcnet_fp16.tar) |
| PicoDet-L |  416*416   | [( w/ postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_416_lcnet_postprocessed.onnx) &#124; [( w/o postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_416_coco_lcnet.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_416_coco_lcnet.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_416_coco_lcnet_fp16.tar) |
| PicoDet-L |  640*640   | [( w/ postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_640_lcnet_postprocessed.onnx) &#124; [( w/o postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_640_coco_lcnet.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_640_coco_lcnet.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_640_coco_lcnet_fp16.tar) |
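
After downloading or converting an ONNX model, its input/output signature can be sanity-checked from the shell (assumes `onnxruntime` is pip-installed; the file name follows the conversion step above and is illustrative):

```shell
pip install onnxruntime
python -c "
import onnxruntime as ort
sess = ort.InferenceSession('picodet_s_320_coco.onnx')
print('inputs: ', [(i.name, i.shape) for i in sess.get_inputs()])
print('outputs:', [(o.name, o.shape) for o in sess.get_outputs()])
"
```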


### Deploy

| Infer Engine     | Python | C++  | Predict With Postprocess |
| :-------- | :--------: | :---------------------: | :----------------: |
| OpenVINO | [Python](../../deploy/third_engine/demo_openvino/python) | [C++](../../deploy/third_engine/demo_openvino)(postprocess coming soon) |  ✔︎ |
| Paddle Lite |  -    |  [C++](../../deploy/lite) | ✔︎ |
| Android Demo |  -  |  [Paddle Lite](https://github.com/PaddlePaddle/Paddle-Lite-Demo/tree/develop/object_detection/android/app/cxx/picodet_detection_demo) | ✔︎ |
| PaddleInference | [Python](../../deploy/python) |  [C++](../../deploy/cpp) | ✔︎ |
| ONNXRuntime  | [Python](../../deploy/third_engine/demo_onnxruntime) | Coming soon | ✔︎ |
| NCNN |  Coming soon  | [C++](../../deploy/third_engine/demo_ncnn) | ✘ |
| MNN  | Coming soon | [C++](../../deploy/third_engine/demo_mnn) |  ✘ |


Android demo visualization:
<div align="center">
  <img src="../../docs/images/picodet_android_demo1.jpg" height="500px" ><img src="../../docs/images/picodet_android_demo2.jpg" height="500px" ><img src="../../docs/images/picodet_android_demo3.jpg" height="500px" ><img src="../../docs/images/picodet_android_demo4.jpg" height="500px" >
</div>


## Quantization

<details open>
<summary>Requirements:</summary>

- PaddlePaddle >= 2.2.2
- PaddleSlim >= 2.2.2

**Install:**

```shell
pip install paddleslim==2.2.2
```

</details>

<details open>
<summary>Quant aware</summary>

Configure the quant config and start training:

```shell
python tools/train.py -c configs/picodet/picodet_s_416_coco_lcnet.yml \
          --slim_config configs/slim/quant/picodet_s_416_lcnet_quant.yml --eval
```
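
After quantization-aware training converges, the quantized model can be exported with the same export tool by passing the matching `--slim_config` (a sketch — the `weights` path depends on where your run saved checkpoints):

```shell
python tools/export_model.py -c configs/picodet/picodet_s_416_coco_lcnet.yml \
              --slim_config configs/slim/quant/picodet_s_416_lcnet_quant.yml \
              -o weights=output/picodet_s_416_lcnet_quant/model_final --output_dir=output_inference
```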

- For more details, refer to the [slim documentation](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/slim)

</details>

- Quant Aware Model ZOO:

| Quant Model     | Input size | mAP<sup>val<br>0.5:0.95  | Configs | Weight | Inference Model | Paddle Lite(INT8) |
| :-------- | :--------: | :--------------------: | :-------: | :----------------: | :----------------: | :----------------: |
| PicoDet-S |  416*416   |  31.5  | [config](./picodet_s_416_coco_lcnet.yml) &#124; [slim config](../slim/quant/picodet_s_416_lcnet_quant.yml)  | [model](https://paddledet.bj.bcebos.com/models/picodet_s_416_coco_lcnet_quant.pdparams)  | [w/ postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_s_416_coco_lcnet_quant.tar) &#124; [w/o postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_s_416_coco_lcnet_quant_non_postprocess.tar) |  [w/ postprocess](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_416_coco_lcnet_quant.nb) &#124; [w/o postprocess](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_416_coco_lcnet_quant_non_postprocess.nb) |

## Unstructured Pruning

<details open>
<summary>Tutorial:</summary>

Please refer to this [documentation](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/legacy_model/pruner/README.md) for details such as requirements, training, and deployment.

</details>

## Application

- **Pedestrian detection:** for the `PicoDet-S-Pedestrian` model zoo, please refer to [PP-TinyPose](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/keypoint/tiny_pose#%E8%A1%8C%E4%BA%BA%E6%A3%80%E6%B5%8B%E6%A8%A1%E5%9E%8B)

- **Mainbody detection:** for the `PicoDet-L-Mainbody` model zoo, please refer to [mainbody detection](./legacy_model/application/mainbody_detection/README.md)

## FAQ

<details>
<summary>Out of memory error.</summary>

Please reduce the `batch_size` of `TrainReader` in the config, and reduce `base_lr` proportionally.

</details>

<details>
<summary>How to transfer learning.</summary>

Set `pretrain_weights` in the config to a model trained on COCO, for example:
```yaml
pretrain_weights: https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams
```
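
Equivalently, override it from the command line (the config file here is a placeholder for your customized config):

```shell
python tools/train.py -c configs/picodet/picodet_l_640_coco_lcnet.yml \
              -o pretrain_weights=https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams --eval
```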

</details>

<details>
<summary>The transpose operator is time-consuming on some hardware.</summary>

Please use the `PicoDet-LCNet` model, which has fewer `transpose` operators.

</details>


<details>
<summary>How to count model parameters.</summary>

You can insert the code below at [this point in trainer.py](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/ppdet/engine/trainer.py#L141) to count learnable parameters.

```python
params = sum([
    p.numel() for n, p in self.model.named_parameters()
    if all([x not in n for x in ['_mean', '_variance']])
]) # exclude BatchNorm running status
print('params: ', params)
```

</details>

## Cite PP-PicoDet
If you use PicoDet in your research, please cite our work by using the following BibTeX entry:
```
@misc{yu2021pppicodet,
      title={PP-PicoDet: A Better Real-Time Object Detector on Mobile Devices},
      author={Guanghua Yu and Qinyao Chang and Wenyu Lv and Chang Xu and Cheng Cui and Wei Ji and Qingqing Dang and Kaipeng Deng and Guanzhong Wang and Yuning Du and Baohua Lai and Qiwen Liu and Xiaoguang Hu and Dianhai Yu and Yanjun Ma},
      year={2021},
      eprint={2111.00902},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

```