Unverified commit a6219af3, authored by Feng Ni, committed by GitHub

[doc] update PaddleYOLO (#7373)

Parent 26a5d267
...@@ -64,7 +64,7 @@ PaddleDetection warmly welcomes you to join the open-source development of the PaddlePaddle community
- Released the pedestrian analysis tool [PP-Human v2](./deploy/pipeline), adding four new behavior-recognition capabilities (fighting, phone calling, smoking, intrusion), upgrading the underlying algorithms, covering the three core capabilities of pedestrian detection, tracking, and attribute recognition, providing step-by-step full-pipeline development and model-optimization guides, and supporting online video-stream input
- First release of [PP-Vehicle](./deploy/pipeline), providing four major functions: license-plate recognition, vehicle attribute analysis (color, type), traffic-flow statistics, and violation detection; compatible with image, online video-stream, and video input, with complete documentation and tutorials for secondary development
- 💡 Cutting-edge algorithms:
  - Released [PaddleYOLO](https://github.com/PaddlePaddle/PaddleYOLO), a code base that fully covers classic and latest models of the [YOLO family](https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/docs/MODEL_ZOO_cn.md): YOLOv3, PP-YOLOE (a real-time high-precision object detection model developed by Baidu PaddlePaddle), and cutting-edge detection algorithms such as YOLOv4, YOLOv5, YOLOX, YOLOv6, and YOLOv7
  - Added a high-precision detection model based on the [ViT](configs/vitdet) backbone, reaching 55.7% mAP on the COCO dataset; added the [OC-SORT](configs/mot/ocsort) multi-object tracking model; added the [ConvNeXt](configs/convnext) backbone
- 📋 Industrial applications: added [Smart Fitness](https://aistudio.baidu.com/aistudio/projectdetail/4385813), [Fighting Recognition](https://aistudio.baidu.com/aistudio/projectdetail/4086987?channelType=0&channel=0), [Visitor Analysis](https://aistudio.baidu.com/aistudio/projectdetail/4230123?channelType=0&channel=0), and vehicle structuring examples
...@@ -298,7 +298,7 @@ PaddleDetection warmly welcomes you to join the open-source development of the PaddlePaddle community
- `Cascade-Faster-RCNN` denotes `Cascade-Faster-RCNN-ResNet50vd-DCN`, which PaddleDetection has optimized to reach 47.8% mAP on COCO at an inference speed of 20 FPS
- `PP-YOLOE` is a further optimization of `PP-YOLO v2`; the L version reaches 51.6% mAP on COCO at 78.1 FPS on Tesla V100
- `PP-YOLOE+` is a further optimization of `PP-YOLOE`; the L version reaches 53.3% mAP on COCO at 78.1 FPS on Tesla V100
- [`YOLOX`](configs/yolox) and [`YOLOv5`](https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/configs/yolov5) are both reproduced on top of PaddleDetection; the `YOLOv5` code lives in [`PaddleYOLO`](https://github.com/PaddlePaddle/PaddleYOLO), see [PaddleYOLO_MODEL](docs/feature_models/PaddleYOLO_MODEL.md)
- All models in the figure can be obtained from the [Model Zoo](#模型库)
</details>
...@@ -347,11 +347,11 @@ PaddleDetection warmly welcomes you to join the open-source development of the PaddlePaddle community
| Model | COCO mAP | V100 TensorRT FP16 Speed (FPS) | Config | Download |
|:-----|:--------:|:------------------------------:|:------:|:--------:|
| [YOLOX-l](configs/yolox) | 50.1 | 107.5 | [Link](configs/yolox/yolox_l_300e_coco.yml) | [Download](https://paddledet.bj.bcebos.com/models/yolox_l_300e_coco.pdparams) |
| [YOLOv5-l](https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/configs/yolov5) | 48.6 | 136.0 | [Link](https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/configs/yolov5/yolov5_l_300e_coco.yml) | [Download](https://paddledet.bj.bcebos.com/models/yolov5_l_300e_coco.pdparams) |
| [YOLOv7-l](https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/configs/yolov7) | 51.0 | 135.0 | [Link](https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/configs/yolov7/yolov7_l_300e_coco.yml) | [Download](https://paddledet.bj.bcebos.com/models/yolov7_l_300e_coco.pdparams) |
**Note:**

- The `YOLOv5` and `YOLOv7` code lives in [`PaddleYOLO`](https://github.com/PaddlePaddle/PaddleYOLO); both are algorithms reproduced on top of `PaddleDetection`, see [PaddleYOLO_MODEL](docs/feature_models/PaddleYOLO_MODEL.md)

#### Other general-purpose detection models [doc](docs/MODEL_ZOO_cn.md)
...
...@@ -47,7 +47,7 @@
- 💡 Cutting-edge algorithms:
  - Released [PaddleYOLO](https://github.com/PaddlePaddle/PaddleYOLO), which covers classic and latest models of the [YOLO family](https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/docs/MODEL_ZOO_en.md): YOLOv3, PP-YOLOE (a real-time high-precision object detection model developed by Baidu PaddlePaddle), and cutting-edge detection algorithms such as YOLOv4, YOLOv5, YOLOX, YOLOv6, and YOLOv7
  - Newly added a high-precision detection model based on the [ViT](configs/vitdet) backbone, with 55.7% mAP on the COCO dataset; newly added the multi-object tracking model [OC-SORT](configs/mot/ocsort); newly added the [ConvNeXt](configs/convnext) backbone.
- 📋 Industrial applications: Newly added [Smart Fitness](https://aistudio.baidu.com/aistudio/projectdetail/4385813), [Fighting Recognition](https://aistudio.baidu.com/aistudio/projectdetail/4086987?channelType=0&channel=0), and [Visitor Analysis](https://aistudio.baidu.com/aistudio/projectdetail/4230123?channelType=0&channel=0).
...@@ -323,12 +323,13 @@ The comparison between COCO mAP and FPS on Qualcomm Snapdragon 865 processor of
| PicoDet-M | 34.4 | 17.68 | [Link](configs/picodet/picodet_m_320_coco_lcnet.yml) | [Download](https://paddledet.bj.bcebos.com/models/picodet_m_320_coco_lcnet.pdparams) |
| PicoDet-L | 36.1 | 25.21 | [Link](configs/picodet/picodet_l_320_coco_lcnet.yml) | [Download](https://paddledet.bj.bcebos.com/models/picodet_l_320_coco_lcnet.pdparams) |
#### [Frontier detection algorithms](docs/feature_models/PaddleYOLO_MODEL.md)

| Model | COCO Accuracy (mAP) | V100 TensorRT FP16 Speed (FPS) | Configuration | Download |
|:-----|:-------------------:|:------------------------------:|:-------------:|:--------:|
| [YOLOX-l](configs/yolox) | 50.1 | 107.5 | [Link](configs/yolox/yolox_l_300e_coco.yml) | [Download](https://paddledet.bj.bcebos.com/models/yolox_l_300e_coco.pdparams) |
| [YOLOv5-l](https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/configs/yolov5) | 48.6 | 136.0 | [Link](https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/configs/yolov5/yolov5_l_300e_coco.yml) | [Download](https://paddledet.bj.bcebos.com/models/yolov5_l_300e_coco.pdparams) |
| [YOLOv7-l](https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/configs/yolov7) | 51.0 | 135.0 | [Link](https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/configs/yolov7/yolov7_l_300e_coco.yml) | [Download](https://paddledet.bj.bcebos.com/models/yolov7_l_300e_coco.pdparams) |
#### Other general-purpose models [doc](docs/MODEL_ZOO_en.md)
...
# Model Zoo and Baselines

# Contents

- [Basic Settings](#Basic-Settings)
  - [Test Environment](#Test-Environment)
  - [General Settings](#General-Settings)
  - [Training Strategy](#Training-Strategy)
  - [ImageNet Pretrained Models](#ImageNet-Pretrained-Models)
- [Baselines](#Baselines)
  - [Object Detection](#Object-Detection)
  - [Instance Segmentation](#Instance-Segmentation)
  - [PaddleYOLO](#PaddleYOLO)
  - [Face Detection](#Face-Detection)
  - [Rotated Object Detection](#Rotated-Object-Detection)
  - [Keypoint Detection](#Keypoint-Detection)
  - [Multi-Object Tracking](#Multi-Object-Tracking)

# Basic Settings
## Test Environment

- Python 3.7
...@@ -11,6 +28,7 @@
## General Settings

- All models are trained and tested on the COCO17 dataset.
- The code for [YOLOv5](https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/configs/yolov5), [YOLOv6](https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/configs/yolov6), and [YOLOv7](https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/configs/yolov7) lives in [PaddleYOLO](https://github.com/PaddlePaddle/PaddleYOLO); note that **PaddleYOLO is released under the GPL 3.0 license**.
- Unless otherwise specified, all ResNet backbones use the [ResNet-B](https://arxiv.org/pdf/1812.01187) variant.
- **Inference time (FPS)**: Inference time is measured on a single Tesla V100 GPU by running `tools/eval.py` over the full validation set and is reported in FPS (images per second). cuDNN 7.5 is used; the measurement includes data loading, the network forward pass, and post-processing, with a batch size of 1 (see the sketch below).
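
The following is a minimal sketch of how such an evaluation run can be launched, assuming the standard `-c`/`-o weights=` flags of `tools/eval.py`. The YOLOX-l config and weights from this repository's tables are used purely as an illustrative choice; any other model-zoo entry can be substituted.

```python
import subprocess

# Illustrative choice (assumption): the YOLOX-l config/weights listed in this repo's model zoo.
CONFIG = "configs/yolox/yolox_l_300e_coco.yml"
WEIGHTS = "https://paddledet.bj.bcebos.com/models/yolox_l_300e_coco.pdparams"

# Run COCO evaluation on a single GPU with tools/eval.py, as described above.
# Data loading, the forward pass, and post-processing are all part of the timed run (batch size 1).
subprocess.run(
    ["python", "tools/eval.py", "-c", CONFIG, "-o", f"weights={WEIGHTS}"],
    check=True,
)
```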
...@@ -18,32 +36,46 @@
- We adopt the same training schedules as [Detectron](https://github.com/facebookresearch/Detectron/blob/master/MODEL_ZOO.md#training-schedules).
- The 1x schedule means: with a total batch size of 8, the initial learning rate is 0.01 and is divided by 10 after epoch 8 and epoch 11, for 12 training epochs in total.
- The 2x schedule doubles the 1x schedule: both the total number of epochs and the epochs at which the learning rate is adjusted are twice those of 1x (see the sketch below).
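
As a quick illustration of the schedules above, the plain-Python sketch below prints the learning rate used in each epoch of the 1x and 2x settings (the milestone epochs are taken from the description above).

```python
def piecewise_lr(epoch, base_lr=0.01, milestones=(8, 11), gamma=0.1):
    """Learning rate at a given epoch: base_lr is multiplied by gamma after each milestone."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# 1x schedule: 12 epochs, learning-rate drops after epoch 8 and epoch 11.
print([piecewise_lr(e) for e in range(12)])
# 2x schedule: total length and milestone epochs are doubled (24 epochs, drops at 16 and 22).
print([piecewise_lr(e, milestones=(16, 22)) for e in range(24)])
```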
## ImageNet Pretrained Models

Paddle provides backbone pretrained models based on ImageNet. All pretrained models are trained on the standard ImageNet-1k dataset; ResNet, MobileNet, and other high-accuracy pretrained models are obtained with a cosine learning-rate schedule or SSLD knowledge distillation. Model details are available at [PaddleClas](https://github.com/PaddlePaddle/PaddleClas).

# Baselines

## Object Detection
### Faster R-CNN

Please refer to [Faster R-CNN](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/faster_rcnn/)

### YOLOv3

Please refer to [YOLOv3](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/yolov3/)

### PP-YOLOE/PP-YOLOE+

Please refer to [PP-YOLOE](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyoloe/)

### PP-YOLO/PP-YOLOv2

Please refer to [PP-YOLO](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyolo/)

### PicoDet

Please refer to [PicoDet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet)

### RetinaNet

Please refer to [RetinaNet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/retinanet/)

### Cascade R-CNN

Please refer to [Cascade R-CNN](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/cascade_rcnn)

### SSD/SSDLite

Please refer to [SSD](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ssd/)
...@@ -51,15 +83,11 @@ Paddle provides backbone pretrained models based on ImageNet. All pretrained models
Please refer to [FCOS](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/fcos/)

### CenterNet

Please refer to [CenterNet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/centernet/)

### TTFNet/PAFNet

Please refer to [TTFNet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ttfnet/)
...@@ -79,17 +107,37 @@ Paddle provides backbone pretrained models based on ImageNet. All pretrained models
Please refer to [Res2Net](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/res2net/)

### ConvNeXt

Please refer to [ConvNeXt](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/convnext/)

### GFL

Please refer to [GFL](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/gfl)

### TOOD

Please refer to [TOOD](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/tood)

### PSS-DET (RCNN-Enhance)

Please refer to [PSS-DET](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/rcnn_enhance)

### DETR

Please refer to [DETR](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/detr)

### Deformable DETR

Please refer to [Deformable DETR](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/deformable_detr)

### Sparse R-CNN

Please refer to [Sparse R-CNN](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/sparse_rcnn)

### Vision Transformer

Please refer to [Vision Transformer](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/vitdet)
### YOLOX
...@@ -99,53 +147,98 @@ Paddle provides backbone pretrained models based on ImageNet. All pretrained models
Please refer to [YOLOF](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/yolof)

## Instance Segmentation

### Mask R-CNN

Please refer to [Mask R-CNN](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mask_rcnn/)

### Cascade R-CNN

Please refer to [Cascade R-CNN](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/cascade_rcnn)

### SOLOv2

Please refer to [SOLOv2](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/solov2/)

## [PaddleYOLO](https://github.com/PaddlePaddle/PaddleYOLO)

Please refer to the [PaddleYOLO Model Zoo](https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/docs/MODEL_ZOO_cn.md)

### YOLOv5

Please refer to [YOLOv5](https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/configs/yolov5)

### YOLOv6

Please refer to [YOLOv6](https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/configs/yolov6)

### YOLOv7

Please refer to [YOLOv7](https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/configs/yolov7)

### RTMDet

Please refer to [RTMDet](https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/configs/rtmdet)

## Face Detection

Please refer to the [Face Detection Model Zoo](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/face_detection)

### BlazeFace

Please refer to [BlazeFace](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/face_detection/)
## Rotated Object Detection

Please refer to the [Rotated Object Detection Model Zoo](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/rotate)

### PP-YOLOE-R

Please refer to [PP-YOLOE-R](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/rotate/ppyoloe_r)

### FCOSR

Please refer to [FCOSR](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/rotate/fcosr)

### S2ANet

Please refer to [S2ANet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/rotate/s2anet)

## Keypoint Detection

Please refer to the [Keypoint Detection Model Zoo](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/keypoint)

### PP-TinyPose

Please refer to [PP-TinyPose](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/keypoint/tiny_pose)

### HRNet

Please refer to [HRNet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/keypoint/hrnet)

### Lite-HRNet

Please refer to [Lite-HRNet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/keypoint/lite_hrnet)

### HigherHRNet

Please refer to [HigherHRNet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/keypoint/higherhrnet)

## Multi-Object Tracking

Please refer to the [Multi-Object Tracking Model Zoo](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot)

### DeepSORT

Please refer to [DeepSORT](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/deepsort)

### ByteTrack

Please refer to [ByteTrack](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/bytetrack)
...@@ -153,3 +246,11 @@ Paddle provides backbone pretrained models based on ImageNet. All pretrained models
### OC-SORT

Please refer to [OC-SORT](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/ocsort)

### FairMOT/MC-FairMOT

Please refer to [FairMOT](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/fairmot)

### JDE

Please refer to [JDE](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/jde)
# Model Zoo and Baselines

# Contents

- [Basic Settings](#Basic-Settings)
  - [Test Environment](#Test-Environment)
  - [General Settings](#General-Settings)
  - [Training Strategy](#Training-Strategy)
  - [ImageNet Pretrained Models](#ImageNet-Pretrained-Models)
- [Baselines](#Baselines)
  - [Object Detection](#Object-Detection)
  - [Instance Segmentation](#Instance-Segmentation)
  - [PaddleYOLO](#PaddleYOLO)
  - [Face Detection](#Face-Detection)
  - [Rotated Object Detection](#Rotated-Object-Detection)
  - [KeyPoint Detection](#KeyPoint-Detection)
  - [Multi-Object Tracking](#Multi-Object-Tracking)
# Basic Settings
## Test Environment
...@@ -11,6 +28,7 @@
## General Settings

- All models were trained and tested on the COCO17 dataset.
- The code for [YOLOv5](https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/configs/yolov5), [YOLOv6](https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/configs/yolov6), and [YOLOv7](https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/configs/yolov7) can be found in [PaddleYOLO](https://github.com/PaddlePaddle/PaddleYOLO). Note that **PaddleYOLO is licensed under GPL 3.0**.
- Unless otherwise specified, all ResNet backbones use the [ResNet-B](https://arxiv.org/pdf/1812.01187) structure.
- **Inference time (FPS)**: Inference time was measured on a single Tesla V100 GPU by running `tools/eval.py` over the full validation set and is reported in FPS (images per second). cuDNN 7.5 is used; the measurement includes data loading, the network forward pass, and post-processing, with a batch size of 1 (see the sketch below).
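
As a concrete illustration of this setup, the minimal sketch below launches such an evaluation run, assuming the standard `-c`/`-o weights=` flags of `tools/eval.py`. The PicoDet-L config and weights listed in this repository are used only as an assumed example; any model-zoo entry can be swapped in.

```python
import subprocess

# Assumed example entry: the PicoDet-L config/weights from the model zoo tables.
CONFIG = "configs/picodet/picodet_l_320_coco_lcnet.yml"
WEIGHTS = "https://paddledet.bj.bcebos.com/models/picodet_l_320_coco_lcnet.pdparams"

# Evaluate on COCO with tools/eval.py on a single GPU, as described above;
# data loading, the forward pass, and post-processing are all included in the timing (batch size 1).
subprocess.run(
    ["python", "tools/eval.py", "-c", CONFIG, "-o", f"weights={WEIGHTS}"],
    check=True,
)
```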
...@@ -18,132 +36,208 @@
- We adopt the same training schedules as [Detectron](https://github.com/facebookresearch/Detectron/blob/master/MODEL_ZOO.md#training-schedules).
- The 1x schedule means: with a total batch size of 8, the initial learning rate is 0.01 and is divided by 10 after epoch 8 and epoch 11, for 12 training epochs in total.
- The 2x schedule doubles the 1x schedule: both the total number of epochs and the epochs at which the learning rate drops are twice those of 1x (see the sketch below).
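
The sketch below (plain Python; the milestone epochs follow the description above) prints the learning rate used at every epoch of the 1x and 2x schedules.

```python
def lr_at_epoch(epoch, base_lr=0.01, milestones=(8, 11), gamma=0.1):
    # The learning rate is multiplied by gamma (0.1) once for every milestone already passed.
    return base_lr * gamma ** sum(epoch >= m for m in milestones)

# 1x: 12 epochs with drops after epoch 8 and 11; 2x: 24 epochs with drops after epoch 16 and 22.
for name, milestones, epochs in [("1x", (8, 11), 12), ("2x", (16, 22), 24)]:
    print(name, [lr_at_epoch(e, milestones=milestones) for e in range(epochs)])
```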
## ImageNet Pretrained Models

Paddle provides backbone pretrained models based on ImageNet. All pretrained models are trained on the standard ImageNet-1k dataset; ResNet, MobileNet, and other high-accuracy pretrained models are obtained with a cosine learning-rate schedule or SSLD knowledge distillation. Model details are available at [PaddleClas](https://github.com/PaddlePaddle/PaddleClas).

# Baselines
## Object Detection
### Faster R-CNN

Please refer to [Faster R-CNN](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/faster_rcnn/)

### YOLOv3

Please refer to [YOLOv3](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/yolov3/)

### PP-YOLOE/PP-YOLOE+

Please refer to [PP-YOLOE](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyoloe/)

### PP-YOLO/PP-YOLOv2

Please refer to [PP-YOLO](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyolo/)

### PicoDet

Please refer to [PicoDet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet)

### RetinaNet

Please refer to [RetinaNet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/retinanet/)

### Cascade R-CNN

Please refer to [Cascade R-CNN](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/cascade_rcnn)

### SSD/SSDLite

Please refer to [SSD](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ssd/)
### FCOS
Please refer to [FCOS](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/fcos/)
### CenterNet
Please refer to [CenterNet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/centernet/)
### TTFNet/PAFNet
Please refer to [TTFNet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ttfnet/)
### Group Normalization

Please refer to [Group Normalization](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/gn/)

### Deformable ConvNets v2

Please refer to [Deformable ConvNets v2](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/dcn/)

### HRNets

Please refer to [HRNets](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/hrnet/)

### Res2Net

Please refer to [Res2Net](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/res2net/)
### ConvNeXt
Please refer to [ConvNeXt](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/convnext/)
### GFL

Please refer to [GFL](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/gfl)

### TOOD

Please refer to [TOOD](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/tood)

### PSS-DET (RCNN-Enhance)

Please refer to [PSS-DET](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/rcnn_enhance)
### DETR
Please refer to [DETR](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/detr)
### Deformable DETR
Please refer to [Deformable DETR](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/deformable_detr)
### Sparse R-CNN
Please refer to [Sparse R-CNN](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/sparse_rcnn)
### Vision Transformer
Please refer to [Vision Transformer](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/vitdet)
### YOLOX

Please refer to [YOLOX](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/yolox)

### YOLOF

Please refer to [YOLOF](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/yolof)
## Instance Segmentation
### Mask R-CNN
Please refer to [Mask R-CNN](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mask_rcnn/)
### Cascade R-CNN
Please refer to [Cascade R-CNN](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/cascade_rcnn)
### SOLOv2
Please refer to [SOLOv2](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/solov2/)
## [PaddleYOLO](https://github.com/PaddlePaddle/PaddleYOLO)
Please refer to [Model Zoo for PaddleYOLO](https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/docs/MODEL_ZOO_en.md)
### YOLOv5

Please refer to [YOLOv5](https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/configs/yolov5)

### YOLOv6

Please refer to [YOLOv6](https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/configs/yolov6)

### YOLOv7

Please refer to [YOLOv7](https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/configs/yolov7)
### RTMDet
Please refer to [RTMDet](https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/configs/rtmdet)
## Face Detection
Please refer to [Model Zoo for Face Detection](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/face_detection)
### BlazeFace
Please refer to [BlazeFace](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/face_detection/)
## Rotated Object Detection

Please refer to [Model Zoo for Rotated Object Detection](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/rotate)
### PP-YOLOE-R
Please refer to [PP-YOLOE-R](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/rotate/ppyoloe_r)
### FCOSR
Please refer to [FCOSR](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/rotate/fcosr)
### S2ANet
Please refer to [S2ANet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/rotate/s2anet)
## KeyPoint Detection
Please refer to [Model Zoo for KeyPoint Detection](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/keypoint)
### PP-TinyPose

Please refer to [PP-TinyPose](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/keypoint/tiny_pose)
### HRNet

Please refer to [HRNet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/keypoint/hrnet)

### Lite-HRNet
Please refer to [Lite-HRNet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/keypoint/lite_hrnet)
### HigherHRNet
Please refer to [HigherHRNet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/keypoint/higherhrnet)
## Multi-Object Tracking
Please refer to [Model Zoo for Multi-Object Tracking](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot)
### DeepSORT

Please refer to [DeepSORT](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/deepsort)

### ByteTrack

Please refer to [ByteTrack](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/bytetrack)
...@@ -151,3 +245,11 @@ Please refer to [ByteTrack](https://github.com/PaddlePaddle/PaddleDetection/tree
### OC-SORT

Please refer to [OC-SORT](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/ocsort)
### FairMOT/MC-FairMOT
Please refer to [FairMOT](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/fairmot)
### JDE
Please refer to [JDE](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/jde)