From fb504439660da6d103d59f4ed572d9ba6d9714e1 Mon Sep 17 00:00:00 2001
From: Guanghua Yu <742925032@qq.com>
Date: Wed, 30 Mar 2022 16:04:23 +0800
Subject: [PATCH] update picodet readme (#5517)
---
configs/picodet/README.md | 170 ++++++++---------
.../picodet/{README_cn.md => README_en.md} | 171 +++++++++---------
2 files changed, 171 insertions(+), 170 deletions(-)
rename configs/picodet/{README_cn.md => README_en.md} (65%)
diff --git a/configs/picodet/README.md b/configs/picodet/README.md
index a226cc9a9..dfa15b5b2 100644
--- a/configs/picodet/README.md
+++ b/configs/picodet/README.md
@@ -1,60 +1,63 @@
-English | [简体中文](README_cn.md)
+简体中文 | [English](README_en.md)
# PP-PicoDet
![](../../docs/images/picedet_demo.jpeg)
-## News
+## 最新动态
-- Released a new series of PP-PicoDet models: **(2022.03.20)**
- - (1) It was used TAL/Task-aligned-Head and optimized PAN, which greatly improved the accuracy;
- - (2) Moreover optimized CPU prediction speed, and the training speed is greatly improved;
- - (3) The export model includes post-processing, and the prediction directly outputs the result, without secondary development, and the migration cost is lower.
+- 发布全新系列PP-PicoDet模型:**(2022.03.20)**
+ - (1)引入TAL及Task-aligned Head,优化PAN等结构,精度大幅提升;
+ - (2)优化CPU端预测速度,同时训练速度大幅提升;
+ - (3)导出模型将后处理包含在网络中,预测直接输出box结果,无需二次开发,迁移成本更低。
-### Legacy Model
+## 历史版本模型
-- Please refer to: [PicoDet 2021.10版本](./legacy_model/)
+- 详情请参考:[PicoDet 2021.10版本](./legacy_model/)
-## Introduction
+## 简介
-We developed a series of lightweight models, named `PP-PicoDet`. Because of the excellent performance, our models are very suitable for deployment on mobile or CPU. For more details, please refer to our [report on arXiv](https://arxiv.org/abs/2111.00902).
+PaddleDetection中提出了全新的轻量级系列模型`PP-PicoDet`,在移动端具有卓越的性能,成为全新SOTA轻量级模型。详细的技术细节可以参考我们的[arXiv技术报告](https://arxiv.org/abs/2111.00902)。
-- 🌟 Higher mAP: the **first** object detectors that surpass mAP(0.5:0.95) **30+** within 1M parameters when the input size is 416.
-- 🚀 Faster latency: 150FPS on mobile ARM CPU.
-- 😊 Deploy friendly: support PaddleLite/MNN/NCNN/OpenVINO and provide C++/Python/Android implementation.
-- 😍 Advanced algorithm: use the most advanced algorithms and offer innovation, such as ESNet, CSP-PAN, SimOTA with VFL, etc.
+PP-PicoDet模型有如下特点:
+
+- 🌟 更高的mAP: 第一个在1M参数量之内`mAP(0.5:0.95)`超越**30+**的目标检测器(输入416像素时)。
+- 🚀 更快的预测速度: 网络预测在ARM CPU下可达150FPS。
+- 😊 部署友好: 支持PaddleLite/MNN/NCNN/OpenVINO等预测库,支持转出ONNX,提供了C++/Python/Android的demo。
+- 😍 先进的算法: 我们在现有SOTA算法中进行了创新, 包括:ESNet, CSP-PAN, SimOTA等等。
-## Benchmark
+## 基线
-| Model | Input size | mAP<sup>val</sup><br>0.5:0.95 | mAP<sup>val</sup><br>0.5 | Params<br>(M) | FLOPS<br>(G) | Latency[CPU](#latency)<br>(ms) | Latency[Lite](#latency)<br>(ms) | Download | Config |
+| 模型 | 输入尺寸 | mAP<sup>val</sup><br>0.5:0.95 | mAP<sup>val</sup><br>0.5 | 参数量<br>(M) | FLOPS<br>(G) | 预测时延[CPU](#latency)<br>(ms) | 预测时延[Lite](#latency)<br>(ms) | 下载 | 配置文件 |
| :-------- | :--------: | :---------------------: | :----------------: | :----------------: | :---------------: | :-----------------------------: | :-----------------------------: | :----------------------------------------: | :--------------------------------------- |
| PicoDet-XS | 320*320 | 23.5 | 36.1 | 0.70 | 0.67 | 10.9ms | 7.81ms | [model](https://paddledet.bj.bcebos.com/models/picodet_xs_320_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_xs_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_xs_320_coco_lcnet.yml) |
| PicoDet-XS | 416*416 | 26.2 | 39.3 | 0.70 | 1.13 | 15.4ms | 12.38ms | [model](https://paddledet.bj.bcebos.com/models/picodet_xs_416_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_xs_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_xs_416_coco_lcnet.yml) |
| PicoDet-S | 320*320 | 29.1 | 43.4 | 1.18 | 0.97 | 12.6ms | 9.56ms | [model](https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_s_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_s_320_coco_lcnet.yml) |
-| PicoDet-S | 416*416 | 32.5 | 47.6 | 1.18 | 1.65 | 17.2ms | 15.20 | [model](https://paddledet.bj.bcebos.com/models/picodet_s_416_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_s_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_s_416_coco_lcnet.yml) |
+| PicoDet-S | 416*416 | 32.5 | 47.6 | 1.18 | 1.65 | 17.2ms | 15.20ms | [model](https://paddledet.bj.bcebos.com/models/picodet_s_416_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_s_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_s_416_coco_lcnet.yml) |
| PicoDet-M | 320*320 | 34.4 | 50.0 | 3.46 | 2.57 | 14.5ms | 17.68ms | [model](https://paddledet.bj.bcebos.com/models/picodet_m_320_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_m_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_m_320_coco_lcnet.yml) |
| PicoDet-M | 416*416 | 37.5 | 53.4 | 3.46 | 4.34 | 19.5ms | 28.39ms | [model](https://paddledet.bj.bcebos.com/models/picodet_m_416_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_m_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_m_416_coco_lcnet.yml) |
| PicoDet-L | 320*320 | 36.1 | 52.0 | 5.80 | 4.20 | 18.3ms | 25.21ms | [model](https://paddledet.bj.bcebos.com/models/picodet_l_320_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_l_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_l_320_coco_lcnet.yml) |
| PicoDet-L | 416*416 | 39.4 | 55.7 | 5.80 | 7.10 | 22.1ms | 42.23ms | [model](https://paddledet.bj.bcebos.com/models/picodet_l_416_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_l_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_l_416_coco_lcnet.yml) |
-| PicoDet-L | 640*640 | 42.3 | 59.2 | 5.80 | 16.81 | 43.1ms | 108.1ms | [model](https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_l_640_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_l_640_coco_lcnet.yml) |
+| PicoDet-L | 640*640 | 42.6 | 59.2 | 5.80 | 16.81 | 43.1ms | 108.1ms | [model](https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_l_640_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_l_640_coco_lcnet.yml) |
+
-Table Notes:
+注意事项:
-- Latency: All our models test on `Intel-Xeon-Gold-6148` CPU with MKLDNN by 10 threads and `Qualcomm Snapdragon 865(4xA77+4xA55)` with 4 threads by arm8 and with FP16. In the above table, test CPU latency on Paddle-Inference and testing Mobile latency with `Lite`->[Paddle-Lite](https://github.com/PaddlePaddle/Paddle-Lite).
-- PicoDet is trained on COCO train2017 dataset and evaluated on COCO val2017. And PicoDet used 4 GPUs for training and all checkpoints are trained with default settings and hyperparameters.
-- Benchmark test: When testing the speed benchmark, the post-processing is not included in the exported model, you need to set `-o export.benchmark=True` or manually modify [runtime.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/runtime.yml#L12).
+- 时延测试: 我们所有的模型都在英特尔至强6148的CPU(MKLDNN 10线程)和`骁龙865(4xA77+4xA55)`的ARM CPU上测试(4线程,FP16预测)。上面表格中标有`CPU`的是使用Paddle Inference库测试,标有`Lite`的是使用[Paddle Lite](https://github.com/PaddlePaddle/Paddle-Lite)进行测试。
+- PicoDet在COCO train2017上训练,并且在COCO val2017上进行验证。使用4卡GPU训练,并且上表所有的预训练模型都是通过发布的默认配置训练得到。
+- Benchmark测试:测试速度benchmark性能时,导出模型后处理不包含在网络中,需要设置`-o export.benchmark=True` 或手动修改[runtime.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/runtime.yml#L12)。
-#### Benchmark of Other Models
+#### 其他模型的基线
-| Model | Input size | mAP<sup>val</sup><br>0.5:0.95 | mAP<sup>val</sup><br>0.5 | Params<br>(M) | FLOPS<br>(G) | Latency[NCNN](#latency)<br>(ms) |
+| 模型 | 输入尺寸 | mAP<sup>val</sup><br>0.5:0.95 | mAP<sup>val</sup><br>0.5 | 参数量<br>(M) | FLOPS<br>(G) | 预测时延[NCNN](#latency)<br>(ms) |
| :-------- | :--------: | :---------------------: | :----------------: | :----------------: | :---------------: | :-----------------------------: |
| YOLOv3-Tiny | 416*416 | 16.6 | 33.1 | 8.86 | 5.62 | 25.42 |
| YOLOv4-Tiny | 416*416 | 21.7 | 40.2 | 6.06 | 6.96 | 23.69 |
@@ -68,38 +71,39 @@ We developed a series of lightweight models, named `PP-PicoDet`. Because of the
| YOLOv5n | 640*640 | 28.4 | 46.0 | 1.9 | 4.5 | 40.35 |
| YOLOv5s | 640*640 | 37.2 | 56.0 | 7.2 | 16.5 | 78.05 |
-- Testing Mobile latency with code: [MobileDetBenchmark](https://github.com/JiweiMaster/MobileDetBenchmark).
+- ARM测试的benchmark脚本来自: [MobileDetBenchmark](https://github.com/JiweiMaster/MobileDetBenchmark)。
-## Quick Start
+## 快速开始
-Requirements:
+依赖包:
-- PaddlePaddle >= 2.2.1
+- PaddlePaddle == 2.2.2
-Installation
+安装
-- [Installation guide](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/INSTALL.md)
-- [Prepare dataset](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/PrepareDataSet_en.md)
+- [安装指导文档](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/INSTALL.md)
+- [准备数据文档](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/PrepareDataSet_en.md)
-Training and Evaluation
+训练&评估
-- Training model on single-GPU:
+- 单卡GPU上训练:
```shell
# training on single-GPU
export CUDA_VISIBLE_DEVICES=0
python tools/train.py -c configs/picodet/picodet_s_320_coco_lcnet.yml --eval
```
-If the GPU is out of memory during training, reduce the batch_size in TrainReader, and reduce the base_lr in LearningRate proportionally.
-- Training model on multi-GPU:
+**注意:** 如果训练时显存不足(out of memory),请将TrainReader中的batch_size调小,同时将LearningRate中的base_lr等比例减小。我们发布的config均由4卡训练得到,如果将GPU卡数改为1,则base_lr需要减小4倍。
+
+- 多卡GPU上训练:
```shell
@@ -108,31 +112,31 @@ export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
python -m paddle.distributed.launch --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/picodet/picodet_s_320_coco_lcnet.yml --eval
```
-- Evaluation:
+- 评估:
```shell
python tools/eval.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
-o weights=https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams
```
-- Infer:
+- 测试:
```shell
python tools/infer.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
-o weights=https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams
```
-Detail also can refer to [Quick start guide](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md).
+详情请参考[快速开始文档](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md)。
-## Deployment
+## 部署
-### Export and Convert Model
+### 导出及转换模型
-1. Export model (click to expand)
+1. 导出模型 (点击展开)
```shell
cd PaddleDetection
@@ -141,18 +145,21 @@ python tools/export_model.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
--output_dir=inference_model
```
+- 如无需导出后处理,请指定:`-o export.benchmark=True`(如果-o已出现过,此处删掉-o)或者手动修改[runtime.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/runtime.yml) 中相应字段。
+- 如无需导出NMS,请指定:`-o export.nms=False`或者手动修改[runtime.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/runtime.yml) 中相应字段。
+
-2. Convert to PaddleLite (click to expand)
+2. 转换模型至Paddle Lite (点击展开)
-- Install Paddlelite>=2.10:
+- 安装Paddlelite>=2.10:
```shell
pip install paddlelite
```
-- Convert model:
+- 转换模型至Paddle Lite格式:
```shell
# FP32
@@ -164,16 +171,16 @@ paddle_lite_opt --model_dir=inference_model/picodet_s_320_coco_lcnet --valid_tar
-3. Convert to ONNX (click to expand)
+3. 转换模型至ONNX (点击展开)
-- Install [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX) >= 0.7 and ONNX > 1.10.1, for details, please refer to [Tutorials of Export ONNX Model](../../deploy/EXPORT_ONNX_MODEL.md)
+- 安装[Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX) >= 0.7 并且 ONNX > 1.10.1, 细节请参考[导出ONNX模型教程](../../deploy/EXPORT_ONNX_MODEL.md)
```shell
pip install onnx
-pip install paddle2onnx
+pip install paddle2onnx==0.9.2
```
-- Convert model:
+- 转换模型:
```shell
paddle2onnx --model_dir output_inference/picodet_s_320_coco_lcnet/ \
@@ -183,22 +190,22 @@ paddle2onnx --model_dir output_inference/picodet_s_320_coco_lcnet/ \
--save_file picodet_s_320_coco.onnx
```
-- Simplify ONNX model: use onnx-simplifier to simplify onnx model.
+- 简化ONNX模型: 使用`onnx-simplifier`库来简化ONNX模型。
- - Install onnx-simplifier >= 0.3.6:
+ - 安装 onnx-simplifier >= 0.3.6:
```shell
pip install onnx-simplifier
```
- - simplify onnx model:
+ - 简化ONNX模型:
```shell
python -m onnxsim picodet_s_320_coco.onnx picodet_s_processed.onnx
```
-- Deploy models
+- 部署用的模型
-| Model | Input size | ONNX | Paddle Lite(fp32) | Paddle Lite(fp16) |
+| 模型 | 输入尺寸 | ONNX | Paddle Lite(fp32) | Paddle Lite(fp16) |
| :-------- | :--------: | :---------------------: | :----------------: | :----------------: |
| PicoDet-S | 320*320 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_320_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_320.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_320_fp16.tar) |
| PicoDet-S | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_416_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_416.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_416_fp16.tar) |
@@ -212,31 +219,28 @@ paddle2onnx --model_dir output_inference/picodet_s_320_coco_lcnet/ \
| PicoDet-LCNet 1.5x | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_lcnet_1_5x_416_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_lcnet_1_5x.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_lcnet_1_5x_fp16.tar) |
-### Deploy
+### 部署
- PaddleInference demo [Python](../../deploy/python) & [C++](../../deploy/cpp)
- [PaddleLite C++ demo](../../deploy/lite)
-- [NCNN C++/Python demo](../../deploy/third_engine/demo_ncnn)
-- [MNN C++/Python demo](../../deploy/third_engine/demo_mnn)
-- [OpenVINO C++ demo](../../deploy/third_engine/demo_openvino)
- [Android demo(Paddle Lite)](https://github.com/PaddlePaddle/Paddle-Lite-Demo/tree/develop/object_detection/android/app/cxx/picodet_detection_demo)
-Android demo visualization:
+Android demo可视化:
-## Quantization
+## 量化
-Requirements:
+依赖包:
- PaddlePaddle >= 2.2.2
- PaddleSlim >= 2.2.1
-**Install:**
+**安装:**
```shell
pip install paddleslim==2.2.1
@@ -245,61 +249,61 @@ pip install paddleslim==2.2.1
-Quant aware (click to expand)
+量化训练 (点击展开)
-Configure the quant config and start training:
+开始量化训练:
```shell
python tools/train.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
--slim_config configs/slim/quant/picodet_s_quant.yml --eval
```
-- More detail can refer to [slim document](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/slim)
+- 更多细节请参考[slim文档](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/slim)
-Post quant (click to expand)
+离线量化 (点击展开)
-Configure the post quant config and start calibrate model:
+校准及导出量化模型:
```shell
python tools/post_quant.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
--slim_config configs/slim/post_quant/picodet_s_ptq.yml
```
-- Notes: Now the accuracy of post quant is abnormal and this problem is being solved.
+- 注意:离线量化模型的精度问题正在解决中。
-## Unstructured Pruning
+## 非结构化剪枝
-Toturial:
+教程:
-Please refer this [documentation](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/pruner/README.md) for details such as requirements, training and deployment.
+训练及部署细节请参考[非结构化剪枝文档](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/pruner/README.md)。
-## Application
+## 应用
-- **Pedestrian detection:** model zoo of `PicoDet-S-Pedestrian` please refer to [PP-TinyPose](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/keypoint/tiny_pose#%E8%A1%8C%E4%BA%BA%E6%A3%80%E6%B5%8B%E6%A8%A1%E5%9E%8B)
+- **行人检测:** `PicoDet-S-Pedestrian`行人检测模型请参考[PP-TinyPose](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/keypoint/tiny_pose#%E8%A1%8C%E4%BA%BA%E6%A3%80%E6%B5%8B%E6%A8%A1%E5%9E%8B)
-- **Mainbody detection:** model zoo of `PicoDet-L-Mainbody` please refer to [mainbody detection](./application/mainbody_detection/README.md)
+- **主体检测:** `PicoDet-L-Mainbody`主体检测模型请参考[主体检测文档](./application/mainbody_detection/README.md)
## FAQ
-Out of memory error.
+显存爆炸(Out of memory error)
-Please reduce the `batch_size` of `TrainReader` in config.
+请减小配置文件中`TrainReader`的`batch_size`。
-How to transfer learning.
+如何迁移学习
-Please reset `pretrain_weights` in config, which trained on coco. Such as:
+请重新设置配置文件中的`pretrain_weights`字段,比如利用COCO上训好的模型在自己的数据上继续训练:
```yaml
pretrain_weights: https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams
```
@@ -307,17 +311,17 @@ pretrain_weights: https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcne
-The transpose operator is time-consuming on some hardware.
+`transpose`算子在某些硬件上耗时严重
-Please use `PicoDet-LCNet` model, which has fewer `transpose` operators.
+请使用`PicoDet-LCNet`模型,该模型的`transpose`算子较少。
-How to count model parameters.
+如何计算模型参数量。
-You can insert below code at [here](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/ppdet/engine/trainer.py#L141) to count learnable parameters.
+可以将以下代码插入:[trainer.py](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/ppdet/engine/trainer.py#L141) 来计算参数量。
```python
params = sum([
@@ -329,8 +333,8 @@ print('params: ', params)
-## Cite PP-PicoDet
-If you use PicoDet in your research, please cite our work by using the following BibTeX entry:
+## 引用PP-PicoDet
+如果需要在你的研究中使用PP-PicoDet,请通过以下方式引用我们的技术报告:
```
@misc{yu2021pppicodet,
title={PP-PicoDet: A Better Real-Time Object Detector on Mobile Devices},
diff --git a/configs/picodet/README_cn.md b/configs/picodet/README_en.md
similarity index 65%
rename from configs/picodet/README_cn.md
rename to configs/picodet/README_en.md
index 7131200a2..dfadb3abf 100644
--- a/configs/picodet/README_cn.md
+++ b/configs/picodet/README_en.md
@@ -1,63 +1,60 @@
-简体中文 | [English](README.md)
+English | [简体中文](README.md)
# PP-PicoDet
![](../../docs/images/picedet_demo.jpeg)
-## 最新动态
+## News
-- 发布全新系列PP-PicoDet模型:**(2022.03.20)**
- - (1)引入TAL及Task-aligned Head,优化PAN等结构,精度大幅提升;
- - (2)优化CPU端预测速度,同时训练速度大幅提升;
- - (3)导出模型将后处理包含在网络中,预测直接输出box结果,无需二次开发,迁移成本更低。
+- Released a new series of PP-PicoDet models: **(2022.03.20)**
+ - (1) Introduced TAL/Task-aligned Head and an optimized PAN, which greatly improve accuracy;
+ - (2) Optimized CPU inference speed, while training speed is also greatly improved;
+ - (3) The exported model includes post-processing, so prediction directly outputs the final results; no secondary development is needed and the migration cost is lower.
-## 历史版本模型
+### Legacy Model
-- 详情请参考:[PicoDet 2021.10版本](./legacy_model/)
+- Please refer to: [PicoDet 2021.10](./legacy_model/)
-## 简介
+## Introduction
-PaddleDetection中提出了全新的轻量级系列模型`PP-PicoDet`,在移动端具有卓越的性能,成为全新SOTA轻量级模型。详细的技术细节可以参考我们的[arXiv技术报告](https://arxiv.org/abs/2111.00902)。
+We developed a series of lightweight models named `PP-PicoDet`. Thanks to their excellent performance, these models are well suited to deployment on mobile devices and CPUs. For more details, please refer to our [report on arXiv](https://arxiv.org/abs/2111.00902).
-PP-PicoDet模型有如下特点:
-
-- 🌟 更高的mAP: 第一个在1M参数量之内`mAP(0.5:0.95)`超越**30+**(输入416像素时)。
-- 🚀 更快的预测速度: 网络预测在ARM CPU下可达150FPS。
-- 😊 部署友好: 支持PaddleLite/MNN/NCNN/OpenVINO等预测库,支持转出ONNX,提供了C++/Python/Android的demo。
-- 😍 先进的算法: 我们在现有SOTA算法中进行了创新, 包括:ESNet, CSP-PAN, SimOTA等等。
+- 🌟 Higher mAP: the **first** object detector to surpass mAP(0.5:0.95) of **30+** within 1M parameters at an input size of 416.
+- 🚀 Faster inference: up to 150 FPS on a mobile ARM CPU.
+- 😊 Deployment friendly: supports PaddleLite/MNN/NCNN/OpenVINO and provides C++/Python/Android implementations.
+- 😍 Advanced algorithms: builds on state-of-the-art methods with our own innovations, such as ESNet, CSP-PAN, and SimOTA with VFL.
-## 基线
+## Benchmark
-| 模型 | 输入尺寸 | mAP<sup>val</sup><br>0.5:0.95 | mAP<sup>val</sup><br>0.5 | 参数量<br>(M) | FLOPS<br>(G) | 预测时延[NCNN](#latency)<br>(ms) | 预测时延[Lite](#latency)<br>(ms) | 下载 | 配置文件 |
+| Model | Input size | mAP<sup>val</sup><br>0.5:0.95 | mAP<sup>val</sup><br>0.5 | Params<br>(M) | FLOPS<br>(G) | Latency[CPU](#latency)<br>(ms) | Latency[Lite](#latency)<br>(ms) | Download | Config |
| :-------- | :--------: | :---------------------: | :----------------: | :----------------: | :---------------: | :-----------------------------: | :-----------------------------: | :----------------------------------------: | :--------------------------------------- |
| PicoDet-XS | 320*320 | 23.5 | 36.1 | 0.70 | 0.67 | 10.9ms | 7.81ms | [model](https://paddledet.bj.bcebos.com/models/picodet_xs_320_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_xs_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_xs_320_coco_lcnet.yml) |
| PicoDet-XS | 416*416 | 26.2 | 39.3 | 0.70 | 1.13 | 15.4ms | 12.38ms | [model](https://paddledet.bj.bcebos.com/models/picodet_xs_416_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_xs_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_xs_416_coco_lcnet.yml) |
| PicoDet-S | 320*320 | 29.1 | 43.4 | 1.18 | 0.97 | 12.6ms | 9.56ms | [model](https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_s_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_s_320_coco_lcnet.yml) |
-| PicoDet-S | 416*416 | 32.5 | 47.6 | 1.18 | 1.65 | 17.2ms | 15.20 | [model](https://paddledet.bj.bcebos.com/models/picodet_s_416_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_s_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_s_416_coco_lcnet.yml) |
+| PicoDet-S | 416*416 | 32.5 | 47.6 | 1.18 | 1.65 | 17.2ms | 15.20ms | [model](https://paddledet.bj.bcebos.com/models/picodet_s_416_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_s_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_s_416_coco_lcnet.yml) |
| PicoDet-M | 320*320 | 34.4 | 50.0 | 3.46 | 2.57 | 14.5ms | 17.68ms | [model](https://paddledet.bj.bcebos.com/models/picodet_m_320_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_m_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_m_320_coco_lcnet.yml) |
| PicoDet-M | 416*416 | 37.5 | 53.4 | 3.46 | 4.34 | 19.5ms | 28.39ms | [model](https://paddledet.bj.bcebos.com/models/picodet_m_416_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_m_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_m_416_coco_lcnet.yml) |
| PicoDet-L | 320*320 | 36.1 | 52.0 | 5.80 | 4.20 | 18.3ms | 25.21ms | [model](https://paddledet.bj.bcebos.com/models/picodet_l_320_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_l_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_l_320_coco_lcnet.yml) |
| PicoDet-L | 416*416 | 39.4 | 55.7 | 5.80 | 7.10 | 22.1ms | 42.23ms | [model](https://paddledet.bj.bcebos.com/models/picodet_l_416_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_l_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_l_416_coco_lcnet.yml) |
-| PicoDet-L | 640*640 | 42.3 | 59.2 | 5.80 | 16.81 | 43.1ms | 108.1ms | [model](https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_l_640_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_l_640_coco_lcnet.yml) |
-
+| PicoDet-L | 640*640 | 42.6 | 59.2 | 5.80 | 16.81 | 43.1ms | 108.1ms | [model](https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_l_640_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_l_640_coco_lcnet.yml) |
-注意事项:
+Table Notes:
-- 时延测试: 我们所有的模型都在英特尔至强6148的CPU(MKLDNN 10线程)和`骁龙865(4xA77+4xA55)`的ARM CPU上测试(4线程,FP16预测)。上面表格中标有`CPU`的是使用Paddle Inference库测试,标有`Lite`的是使用[Paddle Lite](https://github.com/PaddlePaddle/Paddle-Lite)进行测试。
-- PicoDet在COCO train2017上训练,并且在COCO val2017上进行验证。使用4卡GPU训练,并且上表所有的预训练模型都是通过发布的默认配置训练得到。
-- Benchmark测试:测试速度benchmark性能时,导出模型后处理不包含在网络中,需要设置`-o export.benchmark=True` 或手动修改[runtime.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/runtime.yml#L12)。
+- Latency: all models are tested on an `Intel Xeon Gold 6148` CPU (MKLDNN, 10 threads) and on a `Qualcomm Snapdragon 865 (4xA77+4xA55)` ARM CPU (armv8, 4 threads, FP16). In the table above, CPU latency is measured with Paddle Inference, and mobile latency (`Lite`) with [Paddle-Lite](https://github.com/PaddlePaddle/Paddle-Lite).
+- PicoDet is trained on the COCO train2017 dataset and evaluated on COCO val2017. All models are trained on 4 GPUs, and every released checkpoint uses the default settings and hyperparameters.
+- Benchmark test: when benchmarking speed, post-processing is not included in the exported model; set `-o export.benchmark=True` or manually modify [runtime.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/runtime.yml#L12).
-#### 其他模型的基线
+#### Benchmark of Other Models
-| 模型 | 输入尺寸 | mAP<sup>val</sup><br>0.5:0.95 | mAP<sup>val</sup><br>0.5 | 参数量<br>(M) | FLOPS<br>(G) | 预测时延[NCNN](#latency)<br>(ms) |
+| Model | Input size | mAP<sup>val</sup><br>0.5:0.95 | mAP<sup>val</sup><br>0.5 | Params<br>(M) | FLOPS<br>(G) | Latency[NCNN](#latency)<br>(ms) |
| :-------- | :--------: | :---------------------: | :----------------: | :----------------: | :---------------: | :-----------------------------: |
| YOLOv3-Tiny | 416*416 | 16.6 | 33.1 | 8.86 | 5.62 | 25.42 |
| YOLOv4-Tiny | 416*416 | 21.7 | 40.2 | 6.06 | 6.96 | 23.69 |
@@ -71,39 +68,38 @@ PP-PicoDet模型有如下特点:
| YOLOv5n | 640*640 | 28.4 | 46.0 | 1.9 | 4.5 | 40.35 |
| YOLOv5s | 640*640 | 37.2 | 56.0 | 7.2 | 16.5 | 78.05 |
-- ARM测试的benchmark脚本来自: [MobileDetBenchmark](https://github.com/JiweiMaster/MobileDetBenchmark)。
+- Mobile latency is tested with [MobileDetBenchmark](https://github.com/JiweiMaster/MobileDetBenchmark).
-## 快速开始
+## Quick Start
-依赖包:
+Requirements:
-- PaddlePaddle == 2.2.2
+- PaddlePaddle >= 2.2.2
-安装
+Installation
-- [安装指导文档](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/INSTALL.md)
-- [准备数据文档](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/PrepareDataSet_en.md)
+- [Installation guide](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/INSTALL.md)
+- [Prepare dataset](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/PrepareDataSet_en.md)
-训练&评估
+Training and Evaluation
-- 单卡GPU上训练:
+- Training model on single-GPU:
```shell
# training on single-GPU
export CUDA_VISIBLE_DEVICES=0
python tools/train.py -c configs/picodet/picodet_s_320_coco_lcnet.yml --eval
```
+**Note:** If the GPU runs out of memory during training, reduce `batch_size` in `TrainReader` and reduce `base_lr` in `LearningRate` proportionally. The released configs are all trained on 4 GPUs; if you switch to a single GPU, reduce `base_lr` by a factor of 4.
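
As a concrete illustration, the override below is a hedged sketch only: the default `batch_size`/`base_lr` values are assumptions (check the config for the real numbers), and the nested `-o` keys follow PaddleDetection's usual config-override syntax.

```shell
# Hedged sketch: suppose the released 4-GPU config uses TrainReader.batch_size=64 (per card)
# and LearningRate.base_lr=0.32 (assumed values, not taken from this guide).
# On 1 GPU with the batch size halved to 32, scale base_lr by (1/4) * (1/2) = 1/8 -> 0.04.
export CUDA_VISIBLE_DEVICES=0
python tools/train.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
    -o TrainReader.batch_size=32 LearningRate.base_lr=0.04 \
    --eval
```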
-如果训练时显存out memory,将TrainReader中batch_size调小,同时LearningRate中base_lr等比例减小。
-
-- 多卡GPU上训练:
+- Training model on multi-GPU:
```shell
@@ -112,31 +108,31 @@ export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
python -m paddle.distributed.launch --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/picodet/picodet_s_320_coco_lcnet.yml --eval
```
-- 评估:
+- Evaluation:
```shell
python tools/eval.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
-o weights=https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams
```
-- 测试:
+- Infer:
```shell
python tools/infer.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
-o weights=https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams
```
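
If helpful, here is a hedged example of running inference on a single image and saving the visualization; `--infer_img` and `--output_dir` are standard `tools/infer.py` options, while the demo image path is an assumption.

```shell
# Hedged sketch: visualize detections for one image (the image path is illustrative).
python tools/infer.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
    -o weights=https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams \
    --infer_img=demo/000000014439.jpg \
    --output_dir=infer_output
```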
-详情请参考[快速开始文档](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md).
+For more details, please refer to the [Quick start guide](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md).
-## 部署
+## Deployment
-### 导出及转换模型
+### Export and Convert Model
-1. 导出模型 (点击展开)
+1. Export model (click to expand)
```shell
cd PaddleDetection
@@ -145,18 +141,22 @@ python tools/export_model.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
--output_dir=inference_model
```
+- If post-processing is not required, specify `-o export.benchmark=True` (if `-o` already appears in the command, omit the duplicate `-o`) or manually modify the corresponding fields in [runtime.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/runtime.yml).
+- If NMS is not required, specify `-o export.nms=False` or manually modify the corresponding fields in [runtime.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/runtime.yml); a combined sketch follows below.
+
+
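A hedged sketch combining both options in a single export command; the weights URL mirrors the earlier commands and is an assumption here rather than part of this guide.

```shell
# Hedged sketch: export a benchmark-style model with neither post-processing nor NMS.
python tools/export_model.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
    -o weights=https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams \
       export.benchmark=True export.nms=False \
    --output_dir=inference_model
```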
-2. 转换模型至Paddle Lite (点击展开)
+2. Convert to PaddleLite (click to expand)
-- 安装Paddlelite>=2.10:
+- Install Paddlelite>=2.10:
```shell
pip install paddlelite
```
-- 转换模型至Paddle Lite格式:
+- Convert model:
```shell
# FP32
@@ -168,16 +168,16 @@ paddle_lite_opt --model_dir=inference_model/picodet_s_320_coco_lcnet --valid_tar
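
For reference, a hedged sketch of the FP32/FP16 conversions; it follows the `paddle_lite_opt` invocation shown above, but the `--enable_fp16` switch and the output names are assumptions.

```shell
# Hedged sketch: convert the exported inference model for ARM targets (output names are illustrative).
paddle_lite_opt --model_dir=inference_model/picodet_s_320_coco_lcnet \
    --valid_targets=arm --optimize_out=picodet_s_320_coco_fp32
paddle_lite_opt --model_dir=inference_model/picodet_s_320_coco_lcnet \
    --valid_targets=arm --enable_fp16=true --optimize_out=picodet_s_320_coco_fp16
```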
-3. 转换模型至ONNX (点击展开)
+3. Convert to ONNX (click to expand)
-- 安装[Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX) >= 0.7 并且 ONNX > 1.10.1, 细节请参考[导出ONNX模型教程](../../deploy/EXPORT_ONNX_MODEL.md)
+- Install [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX) >= 0.7 and ONNX > 1.10.1. For details, please refer to the [tutorial on exporting ONNX models](../../deploy/EXPORT_ONNX_MODEL.md)
```shell
pip install onnx
-pip install paddle2onnx
+pip install paddle2onnx==0.9.2
```
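
Optionally, a quick hedged check that the installed packages meet the version constraints above (assuming a pip-based environment):

```shell
# Print the installed versions of onnx and paddle2onnx for comparison with the requirements.
pip show onnx paddle2onnx
```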
-- 转换模型:
+- Convert model:
```shell
paddle2onnx --model_dir output_inference/picodet_s_320_coco_lcnet/ \
@@ -187,22 +187,22 @@ paddle2onnx --model_dir output_inference/picodet_s_320_coco_lcnet/ \
--save_file picodet_s_320_coco.onnx
```
-- 简化ONNX模型: 使用`onnx-simplifier`库来简化ONNX模型。
+- Simplify the ONNX model: use `onnx-simplifier` to simplify the exported ONNX model.
- - 安装 onnx-simplifier >= 0.3.6:
+ - Install onnx-simplifier >= 0.3.6:
```shell
pip install onnx-simplifier
```
- - 简化ONNX模型:
+ - Simplify the ONNX model:
```shell
python -m onnxsim picodet_s_320_coco.onnx picodet_s_processed.onnx
```
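
Optionally, a hedged sanity check that the simplified file is still a valid ONNX graph, using the standard `onnx` checker:

```shell
# Load the simplified model and run the ONNX checker; prints "ok" if the graph is valid.
python -c "import onnx; onnx.checker.check_model(onnx.load('picodet_s_processed.onnx')); print('ok')"
```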
-- 部署用的模型
+- Models for deployment
-| 模型 | 输入尺寸 | ONNX | Paddle Lite(fp32) | Paddle Lite(fp16) |
+| Model | Input size | ONNX | Paddle Lite(fp32) | Paddle Lite(fp16) |
| :-------- | :--------: | :---------------------: | :----------------: | :----------------: |
| PicoDet-S | 320*320 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_320_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_320.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_320_fp16.tar) |
| PicoDet-S | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_416_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_416.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_416_fp16.tar) |
@@ -216,31 +216,28 @@ paddle2onnx --model_dir output_inference/picodet_s_320_coco_lcnet/ \
| PicoDet-LCNet 1.5x | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_lcnet_1_5x_416_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_lcnet_1_5x.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_lcnet_1_5x_fp16.tar) |
-### 部署
+### Deploy
- PaddleInference demo [Python](../../deploy/python) & [C++](../../deploy/cpp)
- [PaddleLite C++ demo](../../deploy/lite)
-- [NCNN C++/Python demo](../../deploy/third_engine/demo_ncnn)
-- [MNN C++/Python demo](../../deploy/third_engine/demo_mnn)
-- [OpenVINO C++ demo](../../deploy/third_engine/demo_openvino)
- [Android demo(Paddle Lite)](https://github.com/PaddlePaddle/Paddle-Lite-Demo/tree/develop/object_detection/android/app/cxx/picodet_detection_demo)
-Android demo可视化:
+Android demo visualization:
-## 量化
+## Quantization
-依赖包:
+Requirements:
- PaddlePaddle >= 2.2.2
- PaddleSlim >= 2.2.1
-**安装:**
+**Install:**
```shell
pip install paddleslim==2.2.1
@@ -249,61 +246,61 @@ pip install paddleslim==2.2.1
-量化训练 (点击展开)
+Quantization-aware training (click to expand)
-开始量化训练:
+Configure the quantization config and start training:
```shell
python tools/train.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
--slim_config configs/slim/quant/picodet_s_quant.yml --eval
```
-- 更多细节请参考[slim文档](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/slim)
+- For more details, please refer to the [slim documentation](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/slim)
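
If useful, a hedged sketch of exporting the quantization-aware model once training finishes; the weights path assumes PaddleDetection's default `output/<config_name>/model_final` layout and is not taken from this guide.

```shell
# Hedged sketch: export the quant-aware model with the same slim config used for training.
python tools/export_model.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
    --slim_config configs/slim/quant/picodet_s_quant.yml \
    -o weights=output/picodet_s_quant/model_final \
    --output_dir=inference_model
```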
-离线量化 (点击展开)
+Post-training quantization (click to expand)
-校准及导出量化模型:
+Configure the post-training quantization config and start calibrating the model:
```shell
python tools/post_quant.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
--slim_config configs/slim/post_quant/picodet_s_ptq.yml
```
-- 注意: 离线量化模型精度问题正在解决中.
+- Note: the accuracy of post-training quantization is currently abnormal; this issue is being worked on.
-## 非结构化剪枝
+## Unstructured Pruning
-教程:
+Tutorial:
-训练及部署细节请参考[非结构化剪枝文档](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/pruner/README.md)。
+Please refer to this [documentation](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/pruner/README.md) for details such as requirements, training, and deployment.
-## 应用
+## Application
-- **行人检测:** `PicoDet-S-Pedestrian`行人检测模型请参考[PP-TinyPose](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/keypoint/tiny_pose#%E8%A1%8C%E4%BA%BA%E6%A3%80%E6%B5%8B%E6%A8%A1%E5%9E%8B)
+- **Pedestrian detection:** for the `PicoDet-S-Pedestrian` model zoo, please refer to [PP-TinyPose](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/keypoint/tiny_pose#%E8%A1%8C%E4%BA%BA%E6%A3%80%E6%B5%8B%E6%A8%A1%E5%9E%8B)
-- **主体检测:** `PicoDet-L-Mainbody`主体检测模型请参考[主体检测文档](./application/mainbody_detection/README.md)
+- **Mainbody detection:** for the `PicoDet-L-Mainbody` model zoo, please refer to the [mainbody detection documentation](./application/mainbody_detection/README.md)
## FAQ
-显存爆炸(Out of memory error)
+Out of memory error.
-请减小配置文件中`TrainReader`的`batch_size`。
+Please reduce the `batch_size` of `TrainReader` in the config.
-如何迁移学习
+How to perform transfer learning.
-请重新设置配置文件中的`pretrain_weights`字段,比如利用COCO上训好的模型在自己的数据上继续训练:
+Please reset `pretrain_weights` in the config to a checkpoint trained on COCO, for example:
```yaml
pretrain_weights: https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams
```
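
Alternatively, a hedged sketch of overriding `pretrain_weights` on the command line instead of editing the config; it assumes the matching PicoDet-L 640 config and that the config's dataset paths already point to your own data.

```shell
# Hedged sketch: fine-tune from the COCO-pretrained PicoDet-L 640 weights on a custom dataset.
python tools/train.py -c configs/picodet/picodet_l_640_coco_lcnet.yml \
    -o pretrain_weights=https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams \
    --eval
```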
@@ -311,17 +308,17 @@ pretrain_weights: https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcne
-`transpose`算子在某些硬件上耗时验证
+The transpose operator is time-consuming on some hardware.
-请使用`PicoDet-LCNet`模型,`transpose`较少。
+Please use `PicoDet-LCNet` model, which has fewer `transpose` operators.
-如何计算模型参数量。
+How to count model parameters.
-可以将以下代码插入:[trainer.py](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/ppdet/engine/trainer.py#L141) 来计算参数量。
+You can insert the code below at [this line of trainer.py](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/ppdet/engine/trainer.py#L141) to count the learnable parameters.
```python
params = sum([
@@ -333,8 +330,8 @@ print('params: ', params)
-## 引用PP-PicoDet
-如果需要在你的研究中使用PP-PicoDet,请通过一下方式引用我们的技术报告:
+## Cite PP-PicoDet
+If you use PicoDet in your research, please cite our work by using the following BibTeX entry:
```
@misc{yu2021pppicodet,
title={PP-PicoDet: A Better Real-Time Object Detector on Mobile Devices},
--
GitLab