Unverified Commit b345d7eb, authored by: W wangguanzhong, committed by: GitHub

[cherry-pick] add release 2.5 doc (#6762)

* update release note, test=document_fix

* update README, test=document_fix

* update link, test=document_fix

* add en doc, test=document_fix

* add ocsort, test=document_fix
Parent f52c63bf
@@ -24,20 +24,18 @@
## <img src="https://user-images.githubusercontent.com/48054808/157793354-6e7f381a-0aa6-4bb7-845c-9acf2ecc05c3.png" width="20"/> 产品动态
- 🔥 **2022.8.26:PaddleDetection发布[release/2.5版本](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5)**
  - 🗳 特色模型:
    - 发布[PP-YOLOE+](configs/ppyoloe),最高精度提升2.4% mAP,达到54.9% mAP,模型训练收敛速度提升3.75倍,端到端预测速度最高提升2.3倍;多个下游任务泛化性提升
    - 发布[PicoDet-NPU](configs/picodet)模型,支持模型全量化部署;新增[PicoDet](configs/picodet)版面分析模型
    - 发布[PP-TinyPose增强版](./configs/keypoint/tiny_pose/),在健身、舞蹈等场景精度提升9.1% AP,支持侧身、卧躺、跳跃、高抬腿等非常规动作
  - 🔮 场景能力:
    - 发布行人分析工具[PP-Human v2](./deploy/pipeline),新增打架、打电话、抽烟、闯入四大行为识别,底层算法性能升级,覆盖行人检测、跟踪、属性三类核心算法能力,提供保姆级全流程开发及模型优化策略,支持在线视频流输入
    - 首次发布[PP-Vehicle](./deploy/pipeline),提供车牌识别、车辆属性分析(颜色、车型)、车流量统计以及违章检测四大功能,兼容图片、在线视频流、视频输入,提供完善的二次开发文档教程
  - 💡 前沿算法:
    - 全面覆盖的[YOLO家族](https://github.com/nemonameless/PaddleDetection_YOLOSeries)经典与最新模型:包括YOLOv3、百度飞桨自研的实时高精度目标检测模型PP-YOLOE,以及前沿检测算法YOLOv4、YOLOv5、YOLOX、MT-YOLOv6及YOLOv7
    - 新增基于[ViT](configs/vitdet)骨干网络高精度检测模型,COCO数据集精度达到55.7% mAP;新增[OC-SORT](configs/mot/ocsort)多目标跟踪模型;新增[ConvNeXt](configs/convnext)骨干网络
  - 📋 产业范例:新增[智能健身](https://aistudio.baidu.com/aistudio/projectdetail/4385813)、[打架识别](https://aistudio.baidu.com/aistudio/projectdetail/4086987?channelType=0&channel=0)、[来客分析](https://aistudio.baidu.com/aistudio/projectdetail/4230123?channelType=0&channel=0)、车辆结构化范例

- 2022.3.24:PaddleDetection发布[release/2.4版本](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4)
  - 发布高精度云边一体SOTA目标检测模型[PP-YOLOE](configs/ppyoloe),提供s/m/l/x版本,l版本COCO test2017数据集精度51.6%,V100预测速度78.1 FPS,支持混合精度训练,训练较PP-YOLOv2加速33%,全系列多尺度模型,满足不同硬件算力需求,可适配服务器、边缘端GPU及其他服务器端AI加速卡。
@@ -63,7 +61,7 @@
- **高性能**: 基于飞桨的高性能内核,模型训练速度及显存占用优势明显。支持FP16训练, 支持多机训练。

<div align="center">
  <img src="https://user-images.githubusercontent.com/22989727/186703085-8740e135-d61f-41df-9a29-30273285baa7.png" width="800"/>
</div>

## <img title="" src="https://user-images.githubusercontent.com/48054808/157800467-2a9946ad-30d1-49a9-b9db-ba33413d9c90.png" alt="" width="20"> 技术交流
@@ -112,6 +110,7 @@
<li>PP-YOLOv1/v2</li>
<li>PP-YOLO-Tiny</li>
<li>PP-YOLOE</li>
<li>PP-YOLOE+</li>
<li>YOLOX</li>
<li>SSD</li>
<li>CenterNet</li>
@@ -141,6 +140,7 @@
<li>FairMOT</li>
<li>DeepSORT</li>
<li>ByteTrack</li>
<li>OC-SORT</li>
</ul></details>
<details><summary><b>KeyPoint-Detection</b></summary>
<ul>
@@ -172,6 +172,8 @@
<li>LCNet</li>
<li>ESNet</li>
<li>Swin-Transformer</li>
<li>ConvNeXt</li>
<li>Vision Transformer</li>
</ul></details>
</td>
<td>
@@ -290,14 +292,14 @@
<details>
<summary><b> 1. 通用检测</b></summary>

#### [PP-YOLOE+](./configs/ppyoloe)系列 推荐场景:Nvidia V100, T4等云端GPU和Jetson系列等边缘端设备

| 模型名称 | COCO精度(mAP) | V100 TensorRT FP16速度(FPS) | 配置文件 | 模型下载 |
|:---------- |:-----------:|:-------------------------:|:-----------------------------------------------------:|:------------------------------------------------------------------------------------:|
| PP-YOLOE+_s | 43.9 | 333.3 | [链接](configs/ppyoloe/ppyoloe_plus_crn_s_80e_coco.yml) | [下载地址](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_s_80e_coco.pdparams) |
| PP-YOLOE+_m | 50.0 | 208.3 | [链接](configs/ppyoloe/ppyoloe_plus_crn_m_80e_coco.yml) | [下载地址](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_m_80e_coco.pdparams) |
| PP-YOLOE+_l | 53.3 | 149.2 | [链接](configs/ppyoloe/ppyoloe_plus_crn_l_80e_coco.yml) | [下载地址](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_l_80e_coco.pdparams) |
| PP-YOLOE+_x | 54.9 | 95.2 | [链接](configs/ppyoloe/ppyoloe_plus_crn_x_80e_coco.yml) | [下载地址](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_x_80e_coco.pdparams) |

#### [PP-PicoDet](./configs/picodet)系列 推荐场景:ARM CPU(RK3399, 树莓派等) 和NPU(比特大陆,晶晨等)移动端芯片和x86 CPU设备
@@ -354,6 +356,7 @@
| ByteTrack | SDE多目标跟踪算法 仅包含检测模型 | 云边端 | MOT-17 half val: 77.3 | [链接](configs/mot/bytetrack/detector/yolox_x_24e_800x1440_mix_det.yml) | [下载地址](https://paddledet.bj.bcebos.com/models/mot/deepsort/yolox_x_24e_800x1440_mix_det.pdparams) |
| JDE | JDE多目标跟踪算法 多任务联合学习方法 | 云边端 | MOT-16 test: 64.6 | [链接](configs/mot/jde/jde_darknet53_30e_1088x608.yml) | [下载地址](https://paddledet.bj.bcebos.com/models/mot/jde_darknet53_30e_1088x608.pdparams) |
| FairMOT | JDE多目标跟踪算法 多任务联合学习方法 | 云边端 | MOT-16 test: 75.0 | [链接](configs/mot/fairmot/fairmot_dla34_30e_1088x608.yml) | [下载地址](https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608.pdparams) |
| OC-SORT | SDE多目标跟踪算法 仅包含检测模型 | 云边端 | MOT-17 half val: 75.5 | [链接](configs/mot/ocsort/ocsort_yolox.yml) | - |
#### 其他多目标跟踪模型 [文档链接](configs/mot)
......
@@ -23,21 +23,25 @@
## <img src="https://user-images.githubusercontent.com/48054808/157793354-6e7f381a-0aa6-4bb7-845c-9acf2ecc05c3.png" width="20"/> Product Update
- 🔥 **2022.8.26: PaddleDetection releases [release/2.5 version](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5)**
  - 🗳 Model features:
    - Release [PP-YOLOE+](configs/ppyoloe): accuracy increased by up to 2.4% mAP, reaching 54.9% mAP; 3.75 times faster model training convergence and up to 2.3 times faster end-to-end inference speed; improved generalization on multiple downstream tasks
    - Release the [PicoDet-NPU](configs/picodet) model, which supports full-quantization deployment; add a [PicoDet](configs/picodet) layout analysis model
    - Release [PP-TinyPose Plus](./configs/keypoint/tiny_pose/): 9.1% AP accuracy improvement in physical exercise, dance, and other scenarios, with support for unconventional movements such as turning to one side, lying down, jumping, and high leg lifts
  - 🔮 Functions in different scenarios:
    - Release the pedestrian analysis tool [PP-Human v2](./deploy/pipeline). It introduces four new behavior recognition capabilities: fighting, telephoning, smoking, and trespassing. The underlying algorithm performance is upgraded, covering three core algorithm capabilities: pedestrian detection, tracking, and attribute recognition. It provides end-to-end development and model optimization strategies for beginners and supports online video streaming input
    - First release of [PP-Vehicle](./deploy/pipeline), which offers four major functions: license plate recognition, vehicle attribute analysis (color, model), traffic flow statistics, and violation detection. It is compatible with image, online video streaming, and video input, and comes with a comprehensive set of tutorials for customization
  - 💡 Cutting-edge algorithms:
    - Comprehensive coverage of classic and latest models of the [YOLO family](https://github.com/nemonameless/PaddleDetection_YOLOSeries): YOLOv3, PP-YOLOE (a real-time high-precision object detection model developed by Baidu PaddlePaddle), and cutting-edge detection algorithms such as YOLOv4, YOLOv5, YOLOX, MT-YOLOv6, and YOLOv7
    - Newly add a high-precision detection model based on the [ViT](configs/vitdet) backbone network, reaching 55.7% mAP on the COCO dataset; newly add the [OC-SORT](configs/mot/ocsort) multi-object tracking model; newly add the [ConvNeXt](configs/convnext) backbone network
  - 📋 Industrial applications: newly add [Smart Fitness](https://aistudio.baidu.com/aistudio/projectdetail/4385813), [Fighting Recognition](https://aistudio.baidu.com/aistudio/projectdetail/4086987?channelType=0&channel=0), [Visitor Analysis](https://aistudio.baidu.com/aistudio/projectdetail/4230123?channelType=0&channel=0), and vehicle structuring analysis examples

- 2022.3.24: PaddleDetection released [release/2.4 version](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4)
  - Release the high-performance SOTA object detection model [PP-YOLOE](configs/ppyoloe). It integrates cloud and edge devices and provides s/m/l/x versions. In particular, Version L reaches 51.4% accuracy on the COCO test 2017 dataset with an inference speed of 78.1 FPS on a single Tesla V100. It supports mixed precision training and trains 33% faster than PP-YOLOv2. Its full range of multi-scale models can meet different hardware computing requirements and is adaptable to servers, edge-device GPUs, and other AI accelerator cards on servers.
@@ -65,7 +69,7 @@
- **High Performance**: Due to the high performance core, PaddlePaddle has clear advantages in training speed and memory occupation. It also supports FP16 training and multi-machine training.

<div align="center">
  <img src="https://user-images.githubusercontent.com/22989727/186703085-8740e135-d61f-41df-9a29-30273285baa7.png" alt="newstructure" width="800"/>
</div>

## <img title="" src="https://user-images.githubusercontent.com/48054808/157800467-2a9946ad-30d1-49a9-b9db-ba33413d9c90.png" alt="" width="20"> Exchanges
@@ -111,6 +115,7 @@
<li>PP-YOLOv1/v2</li>
<li>PP-YOLO-Tiny</li>
<li>PP-YOLOE</li>
<li>PP-YOLOE+</li>
<li>YOLOX</li>
<li>SSD</li>
<li>CenterNet</li>
@@ -140,6 +145,7 @@
<li>FairMOT</li>
<li>DeepSORT</li>
<li>ByteTrack</li>
<li>OC-SORT</li>
</ul></details>
<details><summary><b>KeyPoint-Detection</b></summary>
<ul>
@@ -171,6 +177,8 @@
<li>LCNet</li>
<li>ESNet</li>
<li>Swin-Transformer</li>
<li>ConvNeXt</li>
<li>Vision Transformer</li>
</ul></details>
</td>
<td>
@@ -258,11 +266,10 @@ The comparison between COCO mAP and FPS on Tesla V100 of representative models o

**Clarification:**

- `ViT` stands for `ViT-Cascade-Faster-RCNN`, which has the highest mAP on COCO at 55.7%
- `Cascade-Faster-RCNN` stands for `Cascade-Faster-RCNN-ResNet50vd-DCN`, which has been optimized in PaddleDetection to reach 20 FPS inference speed at 47.8% COCO mAP
- `PP-YOLOE` is an optimized version of `PP-YOLO v2`. It reaches 51.4% mAP on the COCO dataset with an inference speed of 78.1 FPS on Tesla V100
- `PP-YOLOE+` is an optimized version of `PP-YOLOE`. It reaches 53.3% mAP on the COCO dataset with an inference speed of 78.1 FPS on Tesla V100
- The models in the figure are available in the [model library](#模型库)

</details>
@@ -292,10 +299,10 @@ The comparison between COCO mAP and FPS on Qualcomm Snapdragon 865 processor of

| Model | COCO Accuracy(mAP) | V100 TensorRT FP16 Speed(FPS) | Configuration | Download |
|:---------- |:------------------:|:-----------------------------:|:-------------------------------------------------------:|:----------------------------------------------------------------------------------------:|
| PP-YOLOE+_s | 43.9 | 333.3 | [Link](configs/ppyoloe/ppyoloe_plus_crn_s_80e_coco.yml) | [Download](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_s_80e_coco.pdparams) |
| PP-YOLOE+_m | 50.0 | 208.3 | [Link](configs/ppyoloe/ppyoloe_plus_crn_m_80e_coco.yml) | [Download](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_m_80e_coco.pdparams) |
| PP-YOLOE+_l | 53.3 | 149.2 | [Link](configs/ppyoloe/ppyoloe_plus_crn_l_80e_coco.yml) | [Download](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_l_80e_coco.pdparams) |
| PP-YOLOE+_x | 54.9 | 95.2 | [Link](configs/ppyoloe/ppyoloe_plus_crn_x_80e_coco.yml) | [Download](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_x_80e_coco.pdparams) |

#### PP-PicoDet series Recommended scenarios: Mobile chips and x86 CPU devices, such as ARM CPU(RK3399, Raspberry Pi) and NPU(BITMAIN)
@@ -351,6 +358,7 @@ The comparison between COCO mAP and FPS on Qualcomm Snapdragon 865 processor of

| ByteTrack | SDE Multi-object tracking algorithm with detection model only | Edge-Cloud end | MOT-17 half val: 77.3 | [Link](configs/mot/bytetrack/detector/yolox_x_24e_800x1440_mix_det.yml) | [Download](https://paddledet.bj.bcebos.com/models/mot/deepsort/yolox_x_24e_800x1440_mix_det.pdparams) |
| JDE | JDE multi-object tracking algorithm multi-task learning | Edge-Cloud end | MOT-16 test: 64.6 | [Link](configs/mot/jde/jde_darknet53_30e_1088x608.yml) | [Download](https://paddledet.bj.bcebos.com/models/mot/jde_darknet53_30e_1088x608.pdparams) |
| FairMOT | JDE multi-object tracking algorithm multi-task learning | Edge-Cloud end | MOT-16 test: 75.0 | [Link](configs/mot/fairmot/fairmot_dla34_30e_1088x608.yml) | [Download](https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608.pdparams) |
| OC-SORT | SDE multi-object tracking algorithm with detection model only | Edge-Cloud end | MOT-17 half val: 75.5 | [Link](configs/mot/ocsort/ocsort_yolox.yml) | - |

#### Other multi-object tracking models [docs](configs/mot)
......
@@ -4,6 +4,63 @@
## 最新版本信息
### 2.5(08.26/2022)
- 特色模型
- PP-YOLOE+:
- 发布PP-YOLOE+模型,COCO test2017数据集精度提升0.7%-2.4% mAP,模型训练收敛速度提升3.75倍,端到端预测速度提升1.73-2.3倍
- 发布智慧农业,夜间安防检测,工业质检场景预训练模型,精度提升1.3%-8.1% mAP
- 支持分布式训练、在线量化、serving部署等10大高性能训练部署能力,新增C++/Python Serving、TRT原生推理、ONNX Runtime等5+部署demo教程
- PP-PicoDet:
- 发布PicoDet-NPU模型,支持模型全量化部署
- 新增PicoDet版面分析模型,基于FGD蒸馏算法精度提升0.5% mAP
- PP-TinyPose
- 发布PP-TinyPose增强版,在健身、舞蹈等场景的业务数据集上端到端AP提升9.1%
- 覆盖侧身、卧躺、跳跃、高抬腿等非常规动作
- 新增滤波稳定模块,关键点稳定性显著增强
- 场景能力
- PP-Human v2
- 发布PP-Human v2,支持四大产业特色功能:多方案行为识别案例库、人体属性识别、人流检测与轨迹留存以及高精度跨镜跟踪
- 底层算法能力升级,行人检测精度提升1.5% mAP;行人跟踪精度提升10.2% MOTA,轻量级模型速度提升34%;属性识别精度提升0.6% ma,轻量级模型速度提升62.5%
- 提供全流程教程,覆盖数据采集标注,模型训练优化和预测部署,及pipeline中后处理代码修改
- 新增在线视频流输入支持
- 易用性提升,一行代码执行功能,执行流程判断、模型下载背后自动完成。
- PP-Vehicle
- 全新发布PP-Vehicle,支持四大交通场景核心功能:车牌识别、属性识别、车流量统计、违章检测
- 车牌识别支持基于PP-OCR v3的轻量级车牌识别模型
- 车辆属性识别支持基于PP-LCNet多标签分类模型
- 兼容图片、视频、在线视频流等各类数据输入格式
- 易用性提升,一行代码执行功能,执行流程判断、模型下载背后自动完成。
- 前沿算法
- YOLO家族全系列模型
- 发布YOLO家族全系列模型,覆盖前沿检测算法YOLOv5、MT-YOLOv6及YOLOv7
- 基于ConvNeXt骨干网络,YOLO各算法训练周期缩短5-8倍,精度普遍提升1%-5% mAP;使用模型压缩策略实现精度无损的同时速度提升30%以上
- 新增基于ViT骨干网络高精度检测模型,COCO数据集精度达到55.7% mAP
- 新增OC-SORT多目标跟踪模型
- 新增ConvNeXt骨干网络
- 产业实践范例教程
- 基于PP-TinyPose增强版的智能健身动作识别
- 基于PP-Human的打架识别
- 基于PP-Human的营业厅来客分析
- 基于PP-Vehicle的车辆结构化分析
- 基于PP-YOLOE+的PCB电路板缺陷检测
- 框架能力
- 功能新增
- 新增自动压缩工具支持并提供demo,PP-YOLOE l版本精度损失0.3% mAP,V100速度提升13%
- 新增PaddleServing python/C++和ONNXRuntime部署demo
- 新增PP-YOLOE 端到端TensorRT部署demo
- 新增FGD蒸馏算法,RetinaNet精度提升3.3%
- 新增分布式训练文档
- 功能完善/Bug修复
- 修复Windows c++部署编译问题
- 修复VOC格式数据预测时保存结果问题
- 修复FairMOT c++部署检测框输出
- 旋转框检测模型S2ANet支持batch size>1部署
### 2.4(03.24/2022)
- PP-YOLOE:
......
@@ -4,6 +4,68 @@ English | [简体中文](./CHANGELOG.md)
## Last Version Information
### 2.5(08.26/2022)
- Featured model
- PP-YOLOE+:
- Released the PP-YOLOE+ model, with a 0.7%-2.4% mAP improvement on COCO test2017, 3.75 times faster model training convergence, and 1.73-2.3 times faster end-to-end inference speed
- Released pre-trained models for smart agriculture, night security detection, and industrial quality inspection with 1.3%-8.1% mAP accuracy improvement
- Supports 10 high-performance training and deployment capabilities, including distributed training, online quantization, and serving deployment; more than five new deployment demos are provided, such as C++/Python Serving, native TensorRT inference, and ONNX Runtime
- PP-PicoDet:
- Release the PicoDet-NPU model to support full quantization of model deployment
- Add PicoDet layout analysis model with 0.5% mAP accuracy improvement due to FGD distillation algorithm
- PP-TinyPose
- Release PP-TinyPose Plus with 9.1% end-to-end AP improvement for business data sets such as physical exercises, dance, and other scenarios
- Covers unconventional movements such as turning to one side, lying down, jumping, high lift
- Add stabilization module (via filter) to significantly improve the stability at key points
- Functions in different scenarios
- PP-Human v2
- Release PP-Human v2, which supports four industrial features: behavioral recognition case zoo for multiple solutions, human attribute recognition, human traffic detection and trajectory retention, as well as high precision multi-camera tracking
- Upgraded underlying algorithm capabilities: 1.5% mAP improvement in pedestrian detection accuracy; 10.2% MOTA improvement in pedestrian tracking accuracy, 34% speed improvement in the lightweight model; 0.6% ma improvement in attribute recognition accuracy, 62.5% speed improvement in the lightweight model
- Provides comprehensive tutorials covering data collection and annotation, model training optimization and prediction deployment, and post-processing code modification in the pipeline
- Supports online video streaming input
- Become more user-friendly with a one-line code execution function that automates the process determination and model download
- PP-Vehicle
- Launch PP-Vehicle, which supports four core functions for traffic application: license plate recognition, attribute recognition, traffic flow statistics, and violation detection
- License plate recognition supports a lightweight model based on PP-OCR v3
- Vehicle attribute recognition supports a multi-label classification model based on PP-LCNet
- Compatible with various data input formats such as pictures, videos and online video streaming
- Become more user-friendly with a one-line code execution function that automates the process determination and model download
- Cutting-edge algorithms
- YOLO Family
- Release the full range of YOLO family models covering the cutting-edge detection algorithms YOLOv5, MT-YOLOv6 and YOLOv7
- Based on the ConvNeXt backbone network, training periods of the YOLO algorithms are reduced by 5-8 times, with accuracy generally improving by 1%-5% mAP; thanks to the model compression strategy, speed increases by over 30% with no loss of precision
- Newly add high precision detection model based on [ViT](configs/vitdet) backbone network, with a 55.7% mAP accuracy on the COCO dataset
- Newly add multi-object tracking model [OC-SORT](configs/mot/ocsort)
- Newly add [ConvNeXt](configs/convnext) backbone network.
- Industrial application
- Intelligent physical exercise recognition based on PP-TinyPose Plus
- Fighting recognition based on PP-Human
- Business hall visitor analysis based on PP-Human
- Vehicle structuring analysis based on PP-Vehicle
- PCB board defect detection based on PP-YOLOE+
- Framework capabilities
- New functions
- Release auto-compression tools and demos, with only 0.3% mAP accuracy loss for the PP-YOLOE-l model and a 13% speed increase on V100
- Release PaddleServing python/C++ and ONNXRuntime deployment demos
- Release PP-YOLOE end-to-end TensorRT deployment demo
- Release the FGD distillation algorithm, improving RetinaNet accuracy by 3.3%
- Release distributed training documentation
- Improvement and fixes
- Fix compilation problem with Windows C++ deployment
- Fix the issue of saving prediction results for VOC-format data
- Fix the detection box output of FairMOT C++ deployment
- The rotated box detection model S2ANet now supports deployment with batch size > 1
### 2.4(03.24/2022)
- PP-YOLOE:
......
@@ -18,12 +18,12 @@ STGCN是一个基于骨骼点坐标序列进行预测的模型。在PaddleVideo
| N | 不定 | 数据集序列个数 |
| C | 2 | 关键点坐标维度,即(x, y) |
| T | 50 | 动作序列的时序维度(即持续帧数)|
| V | 17 | 每个人物关键点的个数,这里我们使用了`COCO`数据集的定义,具体可见[这里](../../../tutorials/PrepareKeypointDataSet_cn.md#COCO%E6%95%B0%E6%8D%AE%E9%9B%86) |
| M | 1 | 人物个数,这里我们每个动作序列只针对单人预测 |
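
下面给出一个构造上述输入形状的示意代码(假设使用 numpy,数值为随机占位数据,仅用于说明各维度含义,并非真实标注数据):

```python
import numpy as np

# 构造一个符合 (N, C, T, V, M) 约定的骨骼点序列示例
# N=1 个序列,C=2 维坐标(x, y),T=50 帧,V=17 个关键点,M=1 个人
N, C, T, V, M = 1, 2, 50, 17, 1
fake_skeleton = np.random.rand(N, C, T, V, M).astype(np.float32)

print(fake_skeleton.shape)  # (1, 2, 50, 17, 1)

# 保存为 .npy 文件,便于后续作为训练/预测数据使用
np.save('example_skeleton.npy', fake_skeleton)
```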
### 获取序列的骨骼点坐标

对于一个待标注的序列(这里序列指一个动作片段,可以是视频或有顺序的图片集合),可以通过模型预测或人工标注的方式获取骨骼点(也称为关键点)坐标。
- 模型预测:可以直接选用[PaddleDetection KeyPoint模型系列](../../../../configs/keypoint)模型库中的模型,并根据`3、训练与测试 - 部署预测 - 检测+keypoint top-down模型联合部署`中的步骤获取目标序列的17个关键点坐标。
- 人工标注:若对关键点的数量或是定义有其他需求,也可以直接人工标注各个关键点的坐标位置,注意对于被遮挡或较难标注的点,仍需要标注一个大致坐标,否则后续网络学习过程会受到影响。
@@ -46,7 +46,7 @@ python deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inferenc
# if your data is video
python deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/mot_ppyoloe_l_36e_pipeline/ --keypoint_model_dir=output_inference/dark_hrnet_w32_256x192 --video_file={your video file path} --device=GPU --save_res=True
```
这样我们会得到一个`det_keypoint_unite_image_results.json`的检测结果文件。内容的具体含义请见[这里](../../../../deploy/python/det_keypoint_unite_infer.py#L108)
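
若需快速查看该结果文件的内容结构,可参考下面的示意代码(仅假设文件为标准 JSON 格式,具体字段含义以上述链接中的保存逻辑为准):

```python
import json

# 加载检测+关键点联合预测保存的结果文件
with open('det_keypoint_unite_image_results.json', 'r') as f:
    results = json.load(f)

# 打印顶层结构,便于确认字段组织方式
print(type(results))
if isinstance(results, dict):
    print(list(results.keys()))
elif isinstance(results, list):
    print(len(results), results[:1])
```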
### 统一序列的时序长度
@@ -202,4 +202,4 @@ INFERENCE:
基于人体骨骼点的行为识别方案中,模型输出的分类结果即代表了该人物在一定时间段内行为类型。对应分类的类型最终即视为当前阶段的行为。因此在完成自定义模型的训练及部署的基础上,使用模型输出作为最终结果,修改可视化的显示结果即可。

#### 修改可视化输出

目前基于ID的行为识别,是根据行为识别的结果及预定义的类别名称进行展示的。详细逻辑请见[此处](../../../../deploy/pipeline/pipeline.py#L1024-L1043)。如果自定义的行为需要修改为其他的展示名称,请对应修改此处,以正确输出对应结果。
@@ -17,12 +17,12 @@ STGCN is a model based on the sequence of skeleton point coordinates. In PaddleV
| N | Not Fixed | The number of sequences in the dataset |
| C | 2 | Keypoint coordinate, i.e. (x, y) |
| T | 50 | The temporal dimension of the action sequence (i.e. the number of continuous frames)|
| V | 17 | The number of keypoints of each person, here we use the definition of the `COCO` dataset, see [here](../../../tutorials/PrepareKeypointDataSet_en.md#description-for-coco-datasetkeypoint) |
| M | 1 | The number of persons, here we only predict a single person for each action sequence |
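
As a rough illustration of the expected input shape, here is a minimal sketch using numpy with random placeholder values (not real annotation data):

```python
import numpy as np

# Build a skeleton sequence following the (N, C, T, V, M) convention above:
# N=1 sequence, C=2 coordinates (x, y), T=50 frames, V=17 keypoints, M=1 person
N, C, T, V, M = 1, 2, 50, 17, 1
fake_skeleton = np.random.rand(N, C, T, V, M).astype(np.float32)

print(fake_skeleton.shape)  # (1, 2, 50, 17, 1)

# Save as .npy so it can later be used as training/prediction data
np.save('example_skeleton.npy', fake_skeleton)
```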
### Get The Skeleton Point Coordinates of The Sequence

For a sequence to be labeled (here a sequence refers to an action segment, which can be a video or an ordered collection of pictures), the coordinates of skeleton points (also known as keypoints) can be obtained through model prediction or manual annotation.
- Model prediction: you can directly select a model from the [PaddleDetection KeyPoint Models](../../../../configs/keypoint/README_en.md) and follow the steps in `3, training and testing - Deployment Prediction - Detect + keypoint top-down model joint deployment` to get the 17 keypoint coordinates of the target sequence.

When using the model to predict and obtain the coordinates, you can refer to the following steps; note that these operations are performed in PaddleDetection.
@@ -43,7 +43,7 @@ python deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inferenc
# if your data is video
python deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/mot_ppyoloe_l_36e_pipeline/ --keypoint_model_dir=output_inference/dark_hrnet_w32_256x192 --video_file={your video file path} --device=GPU --save_res=True
```
We can get a detection result file named `det_keypoint_unite_image_results.json`. The detail of content can be seen at [here](../../../../deploy/python/det_keypoint_unite_infer.py#L108).
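
To quickly inspect the structure of that result file, a minimal sketch (assuming only that the file is standard JSON; refer to the linked code for the exact field meanings):

```python
import json

# Load the result file produced by the joint detection + keypoint inference
with open('det_keypoint_unite_image_results.json', 'r') as f:
    results = json.load(f)

# Print the top-level structure to see how the fields are organized
print(type(results))
if isinstance(results, dict):
    print(list(results.keys()))
elif isinstance(results, list):
    print(len(results), results[:1])
```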
### Uniform Sequence Length
@@ -197,4 +197,4 @@ INFERENCE:
In the skeleton-based action recognition, the classification result of the model represents the behavior type of the character in a certain period of time. The type of the corresponding classification is regarded as the action of the current period. Therefore, on the basis of completing the training and deployment of the custom model, the model output is directly used as the final result, and the displayed result of the visualization should be modified.

#### Modify Visual Output

At present, ID-based action recognition is displayed based on the results of action recognition and predefined category names. For the detail, please refer to [here](../../../../deploy/pipeline/pipeline.py#L1024-L1043). If the custom action needs to be modified to another display name, please modify it accordingly to output the corresponding result.
@@ -45,10 +45,10 @@ python tools/train.py -c configs/keypoint/hrnet/hrnet_w32_256x192.yml -o pretrai
2. 关键点数据增强

在关键点模型训练中增加遮挡的数据增强,参考[PP-TinyPose](../../../configs/keypoint/tiny_pose/tinypose_256x192.yml#L100),有助于提升模型在这类场景下的表现。

### 对视频预测进行平滑处理

关键点模型是在图片级别的基础上进行训练和预测的,对于视频类型的输入也是将视频拆分为帧进行预测。帧与帧之间虽然内容大多相似,但微小的差异仍然可能导致模型的输出发生较大的变化,表现为虽然预测的坐标大体正确,但视觉效果上有较大的抖动问题。通过添加滤波平滑处理,将每一帧预测的结果与历史结果综合考虑,得到最终的输出结果,可以有效提升视频上的表现。该部分内容可参考[滤波平滑处理](../../../deploy/python/det_keypoint_unite_infer.py#L206)。
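
作为参考,下面给出一个简化的指数滑动平均(EMA)平滑示意,仅用于说明"将当前帧结果与历史结果综合考虑"的基本思路,并非仓库中实际使用的滤波实现,实际实现请以上述链接中的代码为准:

```python
import numpy as np

class SimpleKeypointSmoother:
    """对逐帧关键点结果做指数滑动平均的简化示例。"""

    def __init__(self, alpha=0.5):
        # alpha 越小,历史结果权重越大,输出越平滑但响应越慢
        self.alpha = alpha
        self.prev = None

    def smooth(self, keypoints):
        # keypoints: 当前帧关键点坐标,形状为 (V, 2)
        keypoints = np.asarray(keypoints, dtype=np.float32)
        if self.prev is None:
            self.prev = keypoints
        else:
            self.prev = self.alpha * keypoints + (1 - self.alpha) * self.prev
        return self.prev

# 用法示意:逐帧调用 smooth(),用返回值替换原始预测结果进行可视化
smoother = SimpleKeypointSmoother(alpha=0.4)
frame_kpts = np.random.rand(17, 2)  # 假设 17 个关键点
smoothed = smoother.smooth(frame_kpts)
```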
## 新增或修改关键点点位定义
@@ -236,7 +236,7 @@ python3 tools/eval.py -c configs/keypoint/hrnet/hrnet_w32_256x192.yml
注意:由于测试依赖pycocotools工具,其默认为`COCO`数据集的17点,如果修改后的模型并非预测17点,直接使用评估命令会报错。
需要修改以下内容以获得正确的评估结果:
- [sigma列表](../../../ppdet/modeling/keypoint_utils.py#L219),表示每个关键点的范围方差,越大则容忍度越高,其长度与预测点数一致。根据实际关键点可信区域设置,区域精确的一般0.25-0.5,例如眼睛;区域范围大的一般0.5-1.0,例如肩膀;若不确定建议0.75(示意示例见下)。
- [pycocotools sigma列表](https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocotools/cocoeval.py#L523),含义及内容同上,取值与sigma列表一致。
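
下面是一个自定义 sigma 列表的示意(假设模型预测 5 个关键点,数值仅为举例,需根据各关键点的实际可信区域调整):

```python
# 每个元素对应一个关键点,顺序与标注/预测的点位顺序一致
# 区域精确的点(如眼睛)取 0.25-0.5,区域范围大的点(如肩膀)取 0.5-1.0,不确定时取 0.75
custom_sigmas = [0.26, 0.26, 0.35, 0.79, 0.79]
assert len(custom_sigmas) == 5  # 长度必须与预测关键点个数一致
```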
### 模型导出及预测
......
@@ -54,13 +54,13 @@ Refer to [Target Detection Task Secondary Development](./detection.md) to impro
2. Keypoint data augmentation

Add augmentation of occluded data in keypoint model training to improve model performance in such scenarios; please refer to [PP-TinyPose](../../../configs/keypoint/tiny_pose/).

### Smooth video prediction

The keypoint model is trained and predicted on the basis of image, and video input is also predicted by splitting the video into frames. Although the content is mostly similar between frames, small differences may still lead to large changes in the output of the model. As a result, although the predicted coordinates are roughly correct, there may be jitters in the visual effect.
By adding a smoothing filter process, the performance of the video output can be effectively improved by combining the predicted results of each frame and the historical results. For this part, please see [Filter Smoothing](../../../deploy/python/det_keypoint_unite_infer.py#L206).
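
For reference, here is a minimal sketch of the idea as a simple exponential moving average over per-frame keypoints; it only illustrates combining the current frame with historical results and is not the filter actually used in the linked code:

```python
import numpy as np

class SimpleKeypointSmoother:
    """Exponential moving average over per-frame keypoint predictions."""

    def __init__(self, alpha=0.5):
        # smaller alpha -> more weight on history -> smoother but slower to react
        self.alpha = alpha
        self.prev = None

    def smooth(self, keypoints):
        # keypoints: current-frame keypoint coordinates, shape (V, 2)
        keypoints = np.asarray(keypoints, dtype=np.float32)
        if self.prev is None:
            self.prev = keypoints
        else:
            self.prev = self.alpha * keypoints + (1 - self.alpha) * self.prev
        return self.prev

# Usage: call smooth() frame by frame and visualize the returned coordinates
smoother = SimpleKeypointSmoother(alpha=0.4)
smoothed = smoother.smooth(np.random.rand(17, 2))  # 17 keypoints assumed
```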
## Add or modify keypoint definition
......
docs/images/fps_map.png (image updated: 352.9 KB → 337.9 KB)
@@ -14,7 +14,7 @@ For general information about PaddleDetection, please see [README.md](https://gi
- OS 64 bit
- Python 3(3.5.1+/3.6/3.7/3.8/3.9),64 bit
- pip/pip3(9.0.1+), 64 bit
- CUDA >= 10.2
- cuDNN >= 7.6
@@ -23,6 +23,7 @@ Dependency of PaddleDetection and PaddlePaddle:

| PaddleDetection version | PaddlePaddle version | tips |
| :----------------: | :---------------: | :-------: |
| develop | >= 2.2.2 | Dygraph mode is set as default |
| release/2.5 | >= 2.2.2 | Dygraph mode is set as default |
| release/2.4 | >= 2.2.2 | Dygraph mode is set as default |
| release/2.3 | >= 2.2.0rc | Dygraph mode is set as default |
| release/2.2 | >= 2.1.2 | Dygraph mode is set as default |
@@ -40,11 +41,11 @@ Dependency of PaddleDetection and PaddlePaddle:

```
# CUDA10.2
python -m pip install paddlepaddle-gpu==2.2.2 -i https://mirror.baidu.com/pypi/simple

# CPU
python -m pip install paddlepaddle==2.2.2 -i https://mirror.baidu.com/pypi/simple
```

- To quickly install PaddlePaddle for other CUDA versions or environments, please refer to the [PaddlePaddle Quick Installation document](https://www.paddlepaddle.org.cn/install/quick)
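
After installation, a quick way to verify that PaddlePaddle works in your environment is the built-in check below (a minimal sketch; `paddle.utils.run_check()` also reports whether the GPU build is functional):

```python
import paddle

# Print the installed version; it should be >= 2.2.2 for release/2.5
print(paddle.__version__)

# Built-in installation self-check (also checks the GPU, if available)
paddle.utils.run_check()
```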
......
@@ -11,7 +11,7 @@
- OS 64位操作系统
- Python 3(3.5.1+/3.6/3.7/3.8/3.9),64位版本
- pip/pip3(9.0.1+),64位版本
- CUDA >= 10.2
- cuDNN >= 7.6

PaddleDetection 依赖 PaddlePaddle 版本关系:
@@ -19,6 +19,7 @@ PaddleDetection 依赖 PaddlePaddle 版本关系:

| PaddleDetection版本 | PaddlePaddle版本 | 备注 |
| :------------------: | :---------------: | :-------: |
| develop | >= 2.2.2 | 默认使用动态图模式 |
| release/2.5 | >= 2.2.2 | 默认使用动态图模式 |
| release/2.4 | >= 2.2.2 | 默认使用动态图模式 |
| release/2.3 | >= 2.2.0rc | 默认使用动态图模式 |
| release/2.2 | >= 2.1.2 | 默认使用动态图模式 |
@@ -34,11 +35,11 @@ PaddleDetection 依赖 PaddlePaddle 版本关系:

### 1. 安装PaddlePaddle

```
# CUDA10.2
python -m pip install paddlepaddle-gpu==2.2.2 -i https://mirror.baidu.com/pypi/simple

# CPU
python -m pip install paddlepaddle==2.2.2 -i https://mirror.baidu.com/pypi/simple
```

- 更多CUDA版本或环境快速安装,请参考[PaddlePaddle快速安装文档](https://www.paddlepaddle.org.cn/install/quick)
- 更多安装方式例如conda或源码编译安装方法,请参考[PaddlePaddle安装文档](https://www.paddlepaddle.org.cn/documentation/docs/zh/install/index_cn.html)
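
安装完成后,可以用下面的方式快速验证 PaddlePaddle 是否可用(示意代码,`paddle.utils.run_check()` 会自动检查安装是否成功,包括 GPU 环境):

```python
import paddle

# 打印版本号,release/2.5 要求 >= 2.2.2
print(paddle.__version__)

# 内置安装自检(如有 GPU 也会一并检查)
paddle.utils.run_check()
```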
......
@@ -549,7 +549,7 @@ class MultiClassNMS(object):
        if background_label > -1:
            kwargs.update({'background_label': background_label})
        kwargs.pop('trt')
        # TODO(wangxinxin08): paddle version should be develop or 2.3 and above to run nms on tensorrt
        # note: develop builds of paddle report version 0.0.0, hence the major == 0 check below
        if self.trt and (int(paddle.version.major) == 0 or
                         (int(paddle.version.major) >= 2 and
                          int(paddle.version.minor) >= 3)):
......