Unverified commit 2fb498ad, authored by Z zhiboniu, committed by GitHub

cherry-pick of pipeline docs (#7772)

* update pipeline docs; vehicle illegal;rtsp;jetson

* update docs

* bold text
Parent a9cd917e
@@ -14,7 +14,8 @@
## 📣 Recent Updates
- 🔥🔥🔥 **2023.02.15: PP-YOLOE-PLUS-Tiny, a compact model dedicated to Jetson deployment, is released and achieves real-time prediction of 4 video streams on the AGX platform; PP-Vehicle adds violation analysis for retrograde driving and lane-line pressing**
- **2022.8.20: PP-Vehicle is first released with four major functions: license plate recognition, vehicle attribute analysis (color, type), traffic flow statistics, and violation detection, plus complete documentation and tutorials to support efficient secondary development and model optimization**
- **2022.7.13: PP-Human v2 is released, adding four behavior recognitions (fighting, phone calling, smoking, intrusion), upgrading the underlying algorithms, covering the three core capabilities of pedestrian detection, tracking, and attributes, and providing step-by-step full-process development and model optimization strategies**
- 2022.4.18: Added a PP-Human end-to-end practical tutorial covering training, deployment, and action-type extension; see the AIStudio project [link](https://aistudio.baidu.com/aistudio/projectdetail/3842982)
- 2022.4.10: Added PP-Human examples to empower refined community management; AIStudio quick-start tutorial [link](https://aistudio.baidu.com/aistudio/projectdetail/3679564)
@@ -28,7 +29,7 @@
| --------------------- | --------------------- | --------------------- |
| **Multi-camera tracking (ReID)** | Strong performance: specially optimized for difficult cases such as occlusion, incompleteness, and blur, achieving mAP 98.8 at 1.5 ms/person | <img title="" src="https://user-images.githubusercontent.com/48054808/173037607-0a5deadc-076e-4dcc-bd96-d54eea205f1f.png" alt="" width="191"> |
| **Attribute analysis** | Compatible with multiple data formats: supports image, video, and online video stream input<br><br>High performance: trained on a mix of open-source datasets and real enterprise data, achieving mAP 95.4 at 2 ms/person<br><br>Supports 26 attributes: 26 high-frequency attributes such as gender, age, glasses, tops, shoes, hat, and backpack | <img title="" src="https://user-images.githubusercontent.com/48054808/173036043-68b90df7-e95e-4ada-96ae-20f52bc98d7c.png" alt="" width="191">|
| **Behavior recognition (falling, fighting, smoking, phone calling, intrusion)** | Rich functionality: supports recognition of five high-frequency abnormal behaviors: falling, fighting, smoking, phone calling, and intrusion<br><br>Robust: no restrictions on lighting, viewpoint, or background<br><br>High performance: much lower compute cost than video-recognition approaches, supports fast local and service-based deployment<br><br>Fast training: a high-accuracy behavior recognition model can be produced in only 15 minutes |<img title="" src="https://user-images.githubusercontent.com/48054808/173034825-623e4f78-22a5-4f14-9b83-dc47aa868478.gif" alt="" width="191"> |
| **Visitor counting**<br>**Trajectory recording** | Simple and easy to use: a single parameter enables visitor counting and trajectory recording | <img title="" src="https://user-images.githubusercontent.com/22989727/174736440-87cd5169-c939-48f8-90a1-0495a1fcb2b1.gif" alt="" width="191"> |
### PP-Vehicle
@@ -39,6 +40,8 @@
| **Vehicle attribute analysis** | Supports recognition of multiple vehicle types and colors <br/><br/> Uses the stronger backbones PP-HGNet and PP-LCNet for high accuracy and speed. Recognition accuracy: 90.81 | <img title="" src="https://user-images.githubusercontent.com/48054808/185044490-00edd930-1885-4e79-b3d4-3a39a77dea93.gif" alt="" width="207"> |
| **Violation detection** | Simple and easy to use: illegal parking detection with a single command and a user-defined region <br/><br/> Good detection and tracking quality, with license plate recognition of illegally parked vehicles | <img title="" src="https://user-images.githubusercontent.com/48054808/185028419-58ae0af8-a035-42e7-9583-25f5e4ce0169.png" alt="" width="209"> |
| **Traffic flow counting** | Simple and easy to use: enabled with a single command and user-defined entry/exit positions <br/><br/> Can display target tracking trajectories, with high counting accuracy | <img title="" src="https://user-images.githubusercontent.com/48054808/185028798-9e07379f-7486-4266-9d27-3aec943593e0.gif" alt="" width="200"> |
| **Violation analysis: vehicle retrograde** | Simple and easy to use: enabled with a single command <br/><br/> Lane line segmentation uses the high-precision model PP-LiteSeg | <img title="" src="https://raw.githubusercontent.com/LokeZhou/PaddleDetection/develop/deploy/pipeline/docs/images/vehicle_retrograde.gif" alt="" width="200"> |
| **Violation analysis: lane-line pressing** | Simple and easy to use: enabled with a single command <br/><br/> Lane line segmentation uses the high-precision model PP-LiteSeg | <img title="" src="https://raw.githubusercontent.com/LokeZhou/PaddleDetection/develop/deploy/pipeline/docs/images/vehicle_press.gif" alt="" width="200"> |
## 🗳 Model Zoo
@@ -51,8 +54,10 @@
|:---------:|:---------:|:---------:|:---------:|
| Pedestrian detection (high precision) | 25.1ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 182M |
| Pedestrian detection (lightweight) | 16.2ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) | 27M |
| Pedestrian detection (super lightweight) | 10ms (Jetson AGX) | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/pphuman/ppyoloe_plus_crn_t_auxhead_320_60e_pphuman.tar.gz) | 17M |
| Pedestrian tracking (high precision) | 31.8ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 182M |
| Pedestrian tracking (lightweight) | 21.0ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) | 27M |
| Pedestrian tracking (super lightweight) | 13.2ms (Jetson AGX) | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/pphuman/ppyoloe_plus_crn_t_auxhead_320_60e_pphuman.tar.gz) | 17M |
| Multi-camera tracking (REID) | 1.5ms per person | [REID](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) | REID: 92M |
| Attribute recognition (high precision) | 8.5ms per person | [Object detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)<br> [Attribute recognition](https://bj.bcebos.com/v1/paddledet/models/pipeline/strongbaseline_r50_30e_pa100k.zip) | Object detection: 182M<br>Attribute recognition: 86M |
| Attribute recognition (lightweight) | 7.1ms per person | [Object detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)<br> [Attribute recognition](https://bj.bcebos.com/v1/paddledet/models/pipeline/strongbaseline_r50_30e_pa100k.zip) | Object detection: 182M<br>Attribute recognition: 86M |
@@ -76,10 +81,13 @@
| :---------: | :-------: | :------: |:------: |
| Vehicle detection (high precision) | 25.7ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip) | 182M |
| Vehicle detection (lightweight) | 13.2ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_ppvehicle.zip) | 27M |
| Vehicle detection (super lightweight) | 10ms (Jetson AGX) | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/ppvehicle/ppyoloe_plus_crn_t_auxhead_320_60e_ppvehicle.tar.gz) | 17M |
| Vehicle tracking (high precision) | 40ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip) | 182M |
| Vehicle tracking (lightweight) | 25ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_ppvehicle.zip) | 27M |
| Vehicle tracking (super lightweight) | 13.2ms (Jetson AGX) | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/ppvehicle/ppyoloe_plus_crn_t_auxhead_320_60e_ppvehicle.tar.gz) | 17M |
| License plate recognition | 4.68ms | [Plate detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/ch_PP-OCRv3_det_infer.tar.gz) <br> [Plate character recognition](https://bj.bcebos.com/v1/paddledet/models/pipeline/ch_PP-OCRv3_rec_infer.tar.gz) | Plate detection: 3.9M <br> Plate character recognition: 12M |
| Vehicle attributes | 7.31ms | [Vehicle attributes](https://bj.bcebos.com/v1/paddledet/models/pipeline/vehicle_attribute_model.zip) | 7.2M |
| Lane line detection | 47ms | [Lane line model](https://bj.bcebos.com/v1/paddledet/models/pipeline/pp_lite_stdc2_bdd100k.zip) | 47M |
Click a model in the solution table to download it, then unzip it into the `./output_inference` directory.
@@ -147,6 +155,10 @@
- [Quick start](docs/tutorials/ppvehicle_press.md)
- [Secondary development tutorial]
#### Vehicle Retrograde
- [Quick start](docs/tutorials/ppvehicle_retrograde.md)
- [Secondary development tutorial]
@@ -12,7 +12,8 @@
## 📣 Updates
- 🔥🔥🔥 **2023.02.15: PP-YOLOE-PLUS-Tiny was launched for Jetson deployment, achieving 20 fps while four rtsp streams run at the same time; PP-Vehicle added violation analysis for retrograde driving and lane-line pressing.**
- 🔥 **2022.8.20: PP-Vehicle was first launched with four major toolkits for vehicle analysis, together with detailed documentation for users to train on their own data and optimize models.**
- 🔥 2022.7.13: PP-Human v2 launched with a full upgrade of four industrial features: behavior analysis, attribute recognition, visitor traffic statistics and ReID. It provides strong core algorithms for pedestrian detection, tracking and attribute analysis, with a simple and detailed development process and model optimization strategy.
- 2022.4.18: Added PP-Human practical tutorials, including training, deployment, and action expansion. For details of the AIStudio project please see [Link](https://aistudio.baidu.com/aistudio/projectdetail/3842982)
@@ -41,7 +42,8 @@
| **Vehicle Attributes** | Identify 10 vehicle colors and 9 models <br/><br/> More powerful backbones: PP-HGNet/PP-LCNet, with higher accuracy and faster speed <br/><br/> Model accuracy: 90.81 <br/><br/> | <img title="" src="https://user-images.githubusercontent.com/48054808/185044490-00edd930-1885-4e79-b3d4-3a39a77dea93.gif" alt="" width="207"> |
| **Illegal Parking** | Easy to use with a one-line command, with a user-defined illegal-parking area <br/><br/> Get the license plate of the illegally parked vehicle <br/><br/> | <img title="" src="https://user-images.githubusercontent.com/48054808/185028419-58ae0af8-a035-42e7-9583-25f5e4ce0169.png" alt="" width="209"> |
| **In-out Counting** | Easy to use with a one-line command, with a user-defined in-out line <br/><br/> Target trajectories are visualized with high tracking performance | <img title="" src="https://user-images.githubusercontent.com/48054808/185028798-9e07379f-7486-4266-9d27-3aec943593e0.gif" alt="" width="200"> |
| **Vehicle Retrograde** | Easy to use with a one-line command <br/><br/> High-precision segmentation model PP-LiteSeg | <img title="" src="https://raw.githubusercontent.com/LokeZhou/PaddleDetection/develop/deploy/pipeline/docs/images/vehicle_retrograde.gif" alt="" width="200"> |
| **Vehicle Press Line** | Easy to use with a one-line command <br/><br/> High-precision segmentation model PP-LiteSeg | <img title="" src="https://raw.githubusercontent.com/LokeZhou/PaddleDetection/develop/deploy/pipeline/docs/images/vehicle_press.gif" alt="" width="200"> |
## 🗳 Model Zoo
@@ -52,8 +54,10 @@
|:--------------------------------------:|:--------------------:|:---------------------------------:|:----------------:|
| Pedestrian detection (high precision) | 25.1ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 182M |
| Pedestrian detection (lightweight) | 16.2ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) | 27M |
| Pedestrian detection (super lightweight) | 10ms (Jetson AGX) | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/pphuman/ppyoloe_plus_crn_t_auxhead_320_60e_pphuman.tar.gz) | 17M |
| Pedestrian tracking (high precision) | 31.8ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 182M |
| Pedestrian tracking (lightweight) | 21.0ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) | 27M |
| Pedestrian tracking (super lightweight) | 13.2ms (Jetson AGX) | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/pphuman/ppyoloe_plus_crn_t_auxhead_320_60e_pphuman.tar.gz) | 17M |
| MTMCT (REID) | Single person 1.5ms | [REID](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) | REID: 92M |
| Attribute recognition (high precision) | Single person 8.5ms | [Object detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)<br> [Attribute recognition](https://bj.bcebos.com/v1/paddledet/models/pipeline/strongbaseline_r50_30e_pa100k.zip) | Object detection: 182M<br>Attribute recognition: 86M |
| Attribute recognition (lightweight) | Single person 7.1ms | [Object detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)<br> [Attribute recognition](https://bj.bcebos.com/v1/paddledet/models/pipeline/strongbaseline_r50_30e_pa100k.zip) | Object detection: 182M<br>Attribute recognition: 86M |
@@ -72,10 +76,13 @@
|:--------------------------------------:|:--------------------:|:---------------------------------:|:----------------:|
| Vehicle detection (high precision) | 25.7ms | [object detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip) | 182M |
| Vehicle detection (lightweight) | 13.2ms | [object detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_ppvehicle.zip) | 27M |
| Vehicle detection (super lightweight) | 10ms (Jetson AGX) | [object detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/ppvehicle/ppyoloe_plus_crn_t_auxhead_320_60e_ppvehicle.tar.gz) | 17M |
| Vehicle tracking (high precision) | 40ms | [multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip) | 182M |
| Vehicle tracking (lightweight) | 25ms | [multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) | 27M |
| Vehicle tracking (super lightweight) | 13.2ms (Jetson AGX) | [multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/ppvehicle/ppyoloe_plus_crn_t_auxhead_320_60e_ppvehicle.tar.gz) | 17M |
| Plate Recognition | 4.68ms | [plate detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/ch_PP-OCRv3_det_infer.tar.gz)<br>[plate recognition](https://bj.bcebos.com/v1/paddledet/models/pipeline/ch_PP-OCRv3_rec_infer.tar.gz) | Plate detection: 3.9M<br>Plate recognition: 12M |
| Vehicle attribute | 7.31ms | [attribute recognition](https://bj.bcebos.com/v1/paddledet/models/pipeline/vehicle_attribute_model.zip) | 7.2M |
| Lane line segmentation | 47ms | [Lane line segmentation](https://bj.bcebos.com/v1/paddledet/models/pipeline/pp_lite_stdc2_bdd100k.zip) | 47M |
</details>
@@ -145,6 +152,10 @@ Click to download the model, then unzip and save it in the `./output_inference`
- [A quick start](docs/tutorials/ppvehicle_press_en.md)
- [Customized development tutorials]
#### Vehicle Retrograde
- [A quick start](docs/tutorials/ppvehicle_retrograde_en.md)
- [Customized development tutorials]
@@ -8,6 +8,8 @@
- [Model Download](#模型下载)
- [Configuration File Description](#配置文件说明)
- [Inference and Deployment](#预测部署)
- [Online Video Streams](#在线视频流)
- [Jetson Deployment Notes](#Jetson部署说明)
- [Parameter Description](#参数说明)
- [Solution Introduction](#方案介绍)
- [Pedestrian Detection](#行人检测)
@@ -49,8 +51,10 @@ PP-Human provides pretrained models for object detection, attribute recognition, behavior recognition, and ReID
| :---------: | :-------: | :------: |:------: |
| Pedestrian detection (high precision) | 25.1ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 182M |
| Pedestrian detection (lightweight) | 16.2ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) | 27M |
| Pedestrian detection (super lightweight) | 10ms (Jetson AGX) | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/pphuman/ppyoloe_plus_crn_t_auxhead_320_60e_pphuman.tar.gz) | 17M |
| Pedestrian tracking (high precision) | 31.8ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 182M |
| Pedestrian tracking (lightweight) | 21.0ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) | 27M |
| Pedestrian tracking (super lightweight) | 13.2ms (Jetson AGX) | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/pphuman/ppyoloe_plus_crn_t_auxhead_320_60e_pphuman.tar.gz) | 17M |
| Multi-camera tracking (REID) | 1.5ms per person | [REID](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) | REID: 92M |
| Attribute recognition (high precision) | 8.5ms per person | [Object detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)<br> [Attribute recognition](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_small_person_attribute_954_infer.zip) | Object detection: 182M<br>Attribute recognition: 86M |
| Attribute recognition (lightweight) | 7.1ms per person | [Object detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)<br> [Attribute recognition](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPLCNet_x1_0_person_attribute_945_infer.zip) | Object detection: 182M<br>Attribute recognition: 86M |
@@ -126,7 +130,10 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pph
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml -o SKELETON_ACTION.enable=True --video_file=test_video.mp4 --device=gpu
```
### Online Video Streams
Online video stream decoding is based on OpenCV's capture function and supports the RTSP and RTMP formats.
- RTSP pull-stream prediction
To use RTSP pull streams, pass the `--rtsp RTSP [RTSP ...]` argument to specify one or more RTSP video streams, separating multiple addresses with spaces (alternatively, replace the video path after `video_file` directly with an RTSP stream address). For example:
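A minimal sketch of a single-stream invocation; the RTSP address below is a placeholder, not a real camera, and must be replaced with a reachable stream:

```bash
# Pull one RTSP stream and run the PP-Human pipeline on GPU.
# rtsp://192.168.1.10:554/stream1 is a placeholder address.
python deploy/pipeline/pipeline.py \
    --config deploy/pipeline/config/infer_cfg_pphuman.yml \
    --rtsp rtsp://192.168.1.10:554/stream1 \
    --device=gpu
```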
@@ -147,19 +154,30 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infe
```
Note:
1. The RTSP push-stream service is based on [rtsp-simple-server](https://github.com/aler9/rtsp-simple-server); if you want to use the push feature, start that service first.
   It is simple to use (taking Linux as an example): 1) download the release package for your platform; 2) unzip it and run `./rtsp-simple-server` on the command line; once the service is up, it can receive video streams.
2. If the model cannot keep up, the pushed RTSP stream will stutter noticeably. We recommend the ppyoloe_s or ppyoloe-plus-tiny version of the tracking model, that is, replace the tracking model mot_ppyoloe_l_36e_pipeline.zip with mot_ppyoloe_s_36e_pipeline.zip in the configuration, as sketched below.
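One way to apply this swap without editing the YAML is the `-o` override shown earlier for SKELETON_ACTION; the key path `MOT.model_dir`, the push address, and the directory name are assumptions (the lightweight tracker is assumed to be unzipped under `./output_inference`):

```bash
# Assumed key path MOT.model_dir; point it at the unzipped lightweight tracker.
# --pushurl assumes rtsp-simple-server is listening locally on its default port 8554.
python deploy/pipeline/pipeline.py \
    --config deploy/pipeline/config/infer_cfg_pphuman.yml \
    -o MOT.model_dir=output_inference/mot_ppyoloe_s_36e_pipeline/ \
    --video_file=test_video.mp4 \
    --pushurl rtsp://127.0.0.1:8554 \
    --device=gpu
```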
### Jetson Deployment Notes
Since the Jetson platform has much less compute than a server, we suggest the following:
1. Choose lightweight model versions. We now provide the lightweight [PP-YOLOE-Plus Tiny model](../../../../configs/pphuman/README.md), which achieves real-time tracking of 4 video streams at 20 fps on Jetson AGX.
2. If further speedup is needed, enable the tracking frame-skip feature; 2 or 3 is recommended: `skip_frame_num: 3`. This feature is currently disabled by default.
The above changes can be made directly in the configuration file (recommended) or on the command line (the fields are long, so this is not recommended); a command-line sketch follows.
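A minimal sketch of the command-line form; the nested key name `MOT.skip_frame_num` is an assumption about where the option lives in the YAML, so editing the configuration file remains the recommended route:

```bash
# Assumed key path; skip 2 frames between tracker updates to trade a little accuracy for speed.
python deploy/pipeline/pipeline.py \
    --config deploy/pipeline/config/infer_cfg_pphuman.yml \
    -o MOT.skip_frame_num=2 \
    --video_file=test_video.mp4 \
    --device=gpu
```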
The speed of the PP-YOLOE-Plus Tiny model on the AGX platform with different features enabled is as follows (with 3 people tracked; taking attributes as an example, the total per-frame cost is tracking 13.3 + 5.2*3 ≈ 29 ms):

| Feature | Time per frame (ms) | Frame rate (fps) |
|:----------|:----------|:----------|
| Tracking | 13 | 77 |
| Attribute recognition | 29 | 34 |
| Fall detection | 64.5 | 15.5 |
| Smoking detection | 68.8 | 14.5 |
| Phone-call detection | 22.5 | 44.5 |
| Fight detection | 3.98 | 251 |
### Parameter Description
@@ -8,6 +8,8 @@ English | [简体中文](PPHuman_QUICK_STARTED.md)
- [Model Download](#Model-Download)
- [Configuration](#Configuration)
- [Inference Deployment](#Inference-Deployment)
- [rtsp_stream](#rtsp_stream)
- [Nvidia_Jetson](#Nvidia_Jetson)
- [Parameters](#Parameters)
- [Solutions](#Solutions)
- [Pedestrian Detection](#Pedestrian-Detection)
@@ -49,8 +51,10 @@ PP-Human provides object detection, attribute recognition, behaviour recognition
|:--------------------------------------:|:--------------------:|:---------------------------------:|:----------------:|
| Pedestrian Detection (high precision) | 25.1ms | [Multi-Object Tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 182M |
| Pedestrian Detection (lightweight) | 16.2ms | [Multi-Object Tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) | 27M |
| Pedestrian Detection (super lightweight) | 10ms (Jetson AGX) | [Multi-Object Tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/pphuman/ppyoloe_plus_crn_t_auxhead_320_60e_pphuman.tar.gz) | 17M |
| Pedestrian Tracking (high precision) | 31.8ms | [Multi-Object Tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 182M |
| Pedestrian Tracking (lightweight) | 21.0ms | [Multi-Object Tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) | 27M |
| Pedestrian Tracking (super lightweight) | 13.2ms (Jetson AGX) | [Multi-Object Tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/pphuman/ppyoloe_plus_crn_t_auxhead_320_60e_pphuman.tar.gz) | 17M |
| MTMCT (REID) | Single Person 1.5ms | [REID](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) | REID: 92M |
| Attribute Recognition (high precision) | Single person 8.5ms | [Object Detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)<br> [Attribute Recognition](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_small_person_attribute_954_infer.zip) | Object Detection: 182M<br>Attribute Recognition: 86M |
| Attribute Recognition (lightweight) | Single person 7.1ms | [Object Detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)<br> [Attribute Recognition](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPLCNet_x1_0_person_attribute_945_infer.zip) | Object Detection: 182M<br>Attribute Recognition: 86M |
@@ -126,7 +130,10 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pph
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml -o SKELETON_ACTION.enable=True --video_file=test_video.mp4 --device=gpu
```
### rtsp_stream
Online stream decoding is based on OpenCV's capture function and normally supports RTSP and RTMP.
- rtsp pull stream
For RTSP pull streams, use the `--rtsp RTSP [RTSP ...]` parameter to specify one or more RTSP streams. Separate multiple addresses with a space (or replace the video address after `video_file` directly with the RTSP stream address). For example:
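A sketch with two placeholder stream addresses (replace them with reachable sources); passing several addresses after `--rtsp` predicts them in parallel:

```bash
# Two placeholder RTSP sources predicted at the same time.
python deploy/pipeline/pipeline.py \
    --config deploy/pipeline/config/infer_cfg_pphuman.yml \
    --rtsp rtsp://192.168.1.10:554/stream1 rtsp://192.168.1.11:554/stream2 \
    --device=gpu
```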
@@ -148,17 +155,29 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infe
```
Note:
1. The rtsp push stream is based on [rtsp-simple-server](https://github.com/aler9/rtsp-simple-server); please start this service first if you use the push feature.
   It is very easy to use: 1) download the [release package](https://github.com/aler9/rtsp-simple-server/releases) that matches your platform; 2) run `./rtsp-simple-server`, which then works as an RTSP server.
2. The output visualization will freeze frequently if the model takes too much time, so we suggest using a faster tracking model such as ppyoloe_s or ppyoloe_plus_tiny; this simply means replacing mot_ppyoloe_l_36e_pipeline.zip with mot_ppyoloe_s_36e_pipeline.zip in the model config YAML file. A push-stream sketch follows.
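A push-stream sketch, assuming rtsp-simple-server is already running locally on its default port 8554 (the address and port are assumptions; `--pushurl` is described in the Parameters table below):

```bash
# Start the RTSP server first (see note 1), then push the visualized output to it.
./rtsp-simple-server &
python deploy/pipeline/pipeline.py \
    --config deploy/pipeline/config/infer_cfg_pphuman.yml \
    --video_file=test_video.mp4 \
    --pushurl rtsp://127.0.0.1:8554 \
    --device=gpu
```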
### Nvidia_Jetson
Due to the large gap in computing power between the Jetson platform and a server, we suggest:
1. Choose a lightweight model. We provide a new model named [PP-YOLOE-Plus Tiny](../../../../configs/pphuman/README.md), which achieves 20 fps with four rtsp streams running together on Jetson AGX.
2. For further speedup, you can enable frame skipping for tracking; we recommend 2 or 3: `skip_frame_num: 3`.
PP-YOLOE-Plus Tiny speed test data on AGX (three people in the video; for the attribute case, the total per-frame cost is tracking 13.3 + 5.2*3 ≈ 29 ms):

| Module | Time cost per frame (ms) | Speed (fps) |
|:----------|:----------|:----------|
| Tracking | 13 | 77 |
| Attribute | 29 | 34 |
| Falldown | 64.5 | 15.5 |
| Smoking | 68.8 | 14.5 |
| Calling | 22.5 | 44.5 |
| Fighting | 3.98 | 251 |

The configuration file can be modified directly (recommended) or from the command line (not recommended due to its long fields); a sketch follows.
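A Jetson-oriented sketch, assuming the PP-YOLOE-Plus Tiny package listed above has been unzipped into `./output_inference` and that `MOT.model_dir` is the relevant config key (both the key path and the directory name are assumptions):

```bash
# Lightweight tracker on one placeholder RTSP stream; skip_frame_num can be raised in the same YAML if needed.
python deploy/pipeline/pipeline.py \
    --config deploy/pipeline/config/infer_cfg_pphuman.yml \
    -o MOT.model_dir=output_inference/ppyoloe_plus_crn_t_auxhead_320_60e_pphuman/ \
    --rtsp rtsp://192.168.1.10:554/stream1 \
    --device=gpu
```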
### Parameters
@@ -172,8 +191,7 @@ With this recommended configuration, it is possible to achieve higher speeds on
| --rtsp | Option | rtsp video stream address, supports one or more simultaneous streams input |
| --camera_id | Option | The camera ID for prediction, default is -1 ( for no camera prediction, can be set to 0 - (number of cameras - 1) ), press `q` in the visualization interface during the prediction process to output the prediction result to: output/output.mp4 |
| --device | Option | Running device, options include `CPU/GPU/XPU`, and the default is `CPU`. |
| --pushurl | Option | Push the output video to an rtsp stream, normally starting with `rtsp://`; this takes priority over saving the video locally: when set, the pipeline will not save a local visualized video. The default is "", which means the feature is disabled. |
| --output_dir | Option | The root directory for the visualization results, and the default is output/ |
| --run_mode | Option | For GPU, the default is paddle, with (paddle/trt_fp32/trt_fp16/trt_int8) as optional |
| --enable_mkldnn | Option | Whether to enable MKLDNN acceleration in CPU prediction, the default is False |
@@ -192,14 +210,14 @@ The overall solution for PP-Human v2 is shown in the graph below:
### Pedestrian detection
- Take PP-YOLOE L as the object detection model
- For detailed documentation, please refer to [PP-YOLOE](../../../../configs/ppyoloe/) and [Multiple-Object-Tracking](pphuman_mot_en.md)
### Pedestrian tracking
- Pedestrian tracking via the SDE solution
- Adopt PP-YOLOE L (high precision) and S (lightweight) as detection models
- Adopt the OC-SORT solution for the tracking module
- Refer to [OC-SORT](../../../../configs/mot/ocsort) and [Multi-Object Tracking](pphuman_mot_en.md) for details
### Multi-camera & multi-pedestrian tracking
@@ -8,6 +8,8 @@
- [Model Download](#模型下载)
- [Configuration File Description](#配置文件说明)
- [Inference and Deployment](#预测部署)
- [Online Video Streams](#在线视频流)
- [Jetson Deployment Notes](#Jetson部署说明)
- [Parameter Description](#参数说明)
- [Solution Introduction](#方案介绍)
- [Vehicle Detection](#车辆检测)
@@ -50,11 +52,13 @@ PP-Vehicle provides pretrained models for object detection, attribute recognition, behavior recognition, and ReID
| :---------: | :-------: | :------: |:------: |
| Vehicle detection (high precision) | 25.7ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip) | 182M |
| Vehicle detection (lightweight) | 13.2ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_ppvehicle.zip) | 27M |
| Vehicle detection (super lightweight) | 10ms (Jetson AGX) | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/ppvehicle/ppyoloe_plus_crn_t_auxhead_320_60e_ppvehicle.tar.gz) | 17M |
| Vehicle tracking (high precision) | 40ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip) | 182M |
| Vehicle tracking (lightweight) | 25ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_ppvehicle.zip) | 27M |
| Vehicle tracking (super lightweight) | 13.2ms (Jetson AGX) | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/ppvehicle/ppyoloe_plus_crn_t_auxhead_320_60e_ppvehicle.tar.gz) | 17M |
| License plate recognition | 4.68ms | [Plate detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/ch_PP-OCRv3_det_infer.tar.gz) <br> [Plate character recognition](https://bj.bcebos.com/v1/paddledet/models/pipeline/ch_PP-OCRv3_rec_infer.tar.gz) | Plate detection: 3.9M <br> Plate character recognition: 12M |
| Vehicle attributes | 7.31ms | [Vehicle attributes](https://bj.bcebos.com/v1/paddledet/models/pipeline/vehicle_attribute_model.zip) | 7.2M |
| Lane line detection | 47ms | [Lane line model](https://bj.bcebos.com/v1/paddledet/models/pipeline/pp_lite_stdc2_bdd100k.zip) | 47M |
After downloading the models, unzip them into the `./output_inference` folder.
@@ -131,7 +135,10 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infe
```
### Online Video Streams
Online video stream decoding is based on OpenCV's capture function and supports the RTSP and RTMP formats.
- RTSP pull-stream prediction
To use RTSP pull streams, pass the `--rtsp RTSP [RTSP ...]` argument to specify one or more RTSP video streams, separating multiple addresses with spaces (alternatively, replace the video path after `video_file` directly with an RTSP stream address). For example:
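A minimal vehicle-pipeline sketch with a placeholder stream address; `infer_cfg_ppvehicle.yml` is assumed to be the vehicle counterpart of the PP-Human config used above:

```bash
# Pull one placeholder RTSP stream and run the PP-Vehicle pipeline on GPU.
python deploy/pipeline/pipeline.py \
    --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
    --rtsp rtsp://192.168.1.20:554/gate_cam \
    --device=gpu
```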
@@ -152,18 +159,25 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infe
```
Note:
1. The RTSP push-stream service is based on [rtsp-simple-server](https://github.com/aler9/rtsp-simple-server); if you want to use the push feature, start that service first.
   It is simple to use (taking Linux as an example): 1) download the release package for your platform; 2) unzip it and run `./rtsp-simple-server` on the command line; once the service is up, it can receive video streams.
2. If the model cannot keep up, the pushed RTSP stream will stutter noticeably. We recommend the ppyoloe_s version of the tracking model, that is, replace the tracking model mot_ppyoloe_l_36e_pipeline.zip with mot_ppyoloe_s_36e_pipeline.zip in the configuration.
### Jetson Deployment Notes
Since the Jetson platform has much less compute than a server, we suggest the following:
1. Choose lightweight model versions. We now provide the lightweight [PP-YOLOE-Plus Tiny model](../../../../configs/ppvehicle/README.md), which achieves real-time tracking of 4 video streams at 20 fps on Jetson AGX.
2. If further speedup is needed, enable the tracking frame-skip feature; 2 or 3 is recommended: `skip_frame_num: 3`. This feature is currently disabled by default.
The above changes can be made directly in the configuration file (recommended) or on the command line (the fields are long, so this is not recommended).
The speed of the PP-YOLOE-Plus Tiny model on the AGX platform with different features enabled is as follows (one vehicle tracked in the test video):

| Feature | Time per frame (ms) | Frame rate (fps) |
|:----------|:----------|:----------|
| Tracking | 13 | 77 |
| Attribute recognition | 20.2 | 49.4 |
| License plate recognition | - | - |
### Parameter Description
@@ -195,7 +209,7 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infe
The overall PP-Vehicle solution is shown in the figure below:
<div width="1000" align="center">
  <img src="https://user-images.githubusercontent.com/31800336/218659932-31f4298c-042d-436d-9845-18879f5d31e3.png"/>
</div>
@@ -220,3 +234,11 @@ The overall PP-Vehicle solution is shown in the figure below:
### Illegal Parking Detection
- Vehicle tracking uses the high-precision PP-YOLOE L model. Illegal parking is judged from the vehicle's tracking trajectory and the user-specified no-parking region; if a violation is found, the license plate of the offending vehicle is displayed.
- For details, see [Illegal Parking Detection](ppvehicle_illegal_parking.md); a command sketch is shown below.
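A hedged sketch of the one-line command: the example config name `infer_cfg_illegal_parking.yml`, the `--draw_center_traj`, `--illegal_parking_time`, and `--region_type` flags, and the video path are assumptions, while the `--region_polygon` coordinates match the example used elsewhere in these docs:

```bash
# Flag a vehicle as illegally parked if it stays inside the custom polygon longer than 5 seconds.
python deploy/pipeline/pipeline.py \
    --config deploy/pipeline/config/examples/infer_cfg_illegal_parking.yml \
    --video_file=test_video.mp4 \
    --device=gpu \
    --draw_center_traj \
    --illegal_parking_time=5 \
    --region_type=custom \
    --region_polygon 600 300 1300 300 1300 800 600 800
```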
### Violation Analysis: Retrograde Driving
- Retrograde analysis uses the high-precision segmentation model PP-Seg to segment and fit the lane lines, then combines them with the vehicle trajectory to determine whether the driving direction is consistent with the road direction.
- For details, see [Violation Analysis: Retrograde Driving](ppvehicle_retrograde.md)
### Violation Analysis: Lane-Line Pressing
- Lane-line pressing analysis uses the high-precision segmentation model PP-Seg to segment and fit the lane lines, then checks whether the vehicle region overlaps a solid-line region to decide whether the line is being pressed.
- For details, see [Violation Analysis: Lane-Line Pressing](ppvehicle_press.md)
@@ -8,6 +8,8 @@ English | [简体中文](PPVehicle_QUICK_STARTED.md)
- [Model Download](#Model-Download)
- [Configuration](#Configuration)
- [Inference Deployment](#Inference-Deployment)
- [rtsp_stream](#rtsp_stream)
- [Nvidia_Jetson](#Nvidia_Jetson)
- [Parameters](#Parameters)
- [Solutions](#Solutions)
- [Vehicle Detection](#Vehicle-Detection)
@@ -49,10 +51,13 @@ PP-Vehicle provides object detection, attribute recognition, behaviour recognition
|:---------------------------------:|:----------:|:------------------------------:|:--------------------:|
| Vehicle Detection (high precision) | 25.7ms | [Multi-Object Tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip) | 182M |
| Vehicle Detection (lightweight) | 13.2ms | [Multi-Object Tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_ppvehicle.zip) | 27M |
| Vehicle Detection (super lightweight) | 10ms (Jetson AGX) | [Object detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/ppvehicle/ppyoloe_plus_crn_t_auxhead_320_60e_ppvehicle.tar.gz) | 17M |
| Vehicle Tracking (high precision) | 40ms | [Multi-Object Tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip) | 182M |
| Vehicle Tracking (lightweight) | 25ms | [Multi-Object Tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_ppvehicle.zip) | 27M |
| Vehicle Tracking (super lightweight) | 13.2ms (Jetson AGX) | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/ppvehicle/ppyoloe_plus_crn_t_auxhead_320_60e_ppvehicle.tar.gz) | 17M |
| License plate recognition | 4.68ms | [License plate detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/ch_PP-OCRv3_det_infer.tar.gz) <br> [License plate character recognition](https://bj.bcebos.com/v1/paddledet/models/pipeline/ch_PP-OCRv3_rec_infer.tar.gz) | License plate detection: 3.9M <br> License plate character recognition: 12M |
| Vehicle Attribute Recognition | 7.31ms | [Vehicle Attribute](https://bj.bcebos.com/v1/paddledet/models/pipeline/vehicle_attribute_model.zip) | 7.2M |
| Lane Line Segmentation | 47ms | [Lane Line Segmentation](https://bj.bcebos.com/v1/paddledet/models/pipeline/pp_lite_stdc2_bdd100k.zip) | 47M |
Download the model and unzip it into the `./output_inference` folder.
@@ -60,7 +65,7 @@ In the configuration file, the model path defaults to the download path of the m
**Notes:**
- The accuracy of the detection/tracking model is obtained on the joint dataset PPVehicle (an integration of the public datasets BDD100K-MOT and UA-DETRAC). For more details, please refer to [PP-Vehicle](../../../../configs/ppvehicle)
- Inference speed is obtained at T4 with TensorRT FP16 enabled, which includes data pre-processing, model inference and post-processing.
## Configuration
@@ -129,7 +134,10 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infe
--region_polygon 600 300 1300 300 1300 800 600 800
```
### rtsp_stream
Online stream decoding is based on OpenCV's capture function and normally supports RTSP and RTMP.
- rtsp pull stream
For RTSP pull streams, use the `--rtsp RTSP [RTSP ...]` parameter to specify one or more RTSP streams. Separate multiple addresses with a space (or replace the video address after `video_file` directly with the RTSP stream address). For example:
@@ -151,16 +159,24 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infe
```
Note:
1. The rtsp push stream is based on [rtsp-simple-server](https://github.com/aler9/rtsp-simple-server); please start this service first if you use the push feature.
   It is very easy to use: 1) download the [release package](https://github.com/aler9/rtsp-simple-server/releases) that matches your platform; 2) run `./rtsp-simple-server`, which then works as an RTSP server.
2. The output visualization will freeze frequently if the model takes too much time, so we suggest using a faster tracking model such as ppyoloe_s; this simply means replacing mot_ppyoloe_l_36e_pipeline.zip with mot_ppyoloe_s_36e_pipeline.zip in the model config YAML file.
### Nvidia_Jetson
Due to the large gap in computing power between the Jetson platform and a server, we suggest:
1. Choose a lightweight model. We provide a new model named [PP-YOLOE-Plus Tiny](../../../../configs/ppvehicle/README.md), which achieves 20 fps with four rtsp streams running together on Jetson AGX.
2. For further speedup, you can enable frame skipping for tracking; we recommend 2 or 3: `skip_frame_num: 3` (see the sketch after this list).
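A four-stream Jetson sketch, assuming the super-lightweight package above is unzipped into `./output_inference`, that `MOT.model_dir` and `MOT.skip_frame_num` are the relevant config keys, and that the `-o` override accepts several KEY=VALUE pairs (the key paths, directory name, and stream addresses are all assumptions):

```bash
# Four placeholder RTSP sources with the tiny tracker and frame skipping enabled.
python deploy/pipeline/pipeline.py \
    --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
    -o MOT.model_dir=output_inference/ppyoloe_plus_crn_t_auxhead_320_60e_ppvehicle/ MOT.skip_frame_num=2 \
    --rtsp rtsp://192.168.1.21:554/cam1 rtsp://192.168.1.22:554/cam2 rtsp://192.168.1.23:554/cam3 rtsp://192.168.1.24:554/cam4 \
    --device=gpu
```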
PP-YOLOE-Plus Tiny module speed test data on AGX (a single car in the test video):
| Module | Time cost per frame (ms) | Speed (FPS) |
|:----------|:----------|:----------|
| Tracking | 13 | 77 |
| Attribute | 20.2 | 49.4 |
| Plate | - | - |
With this recommended configuration, it is possible to achieve higher speeds even on the TX2 platform; tested with the attribute case, it reaches up to 20fps. The configuration file can be modified directly (recommended) or overridden from the command line (not recommended, because the fields are long).
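A rough sketch of such a run with TensorRT FP16 enabled; the config path and video file name are placeholders, not values taken from this document.

```
# run the PP-Vehicle pipeline with TensorRT FP16 on a Jetson device
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
                                   --video_file=test_video.mp4 \
                                   --device=gpu \
                                   --run_mode=trt_fp16
```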
### Parameters
| Parameter | Mandatory/Option | Description |
|:----------|:-----------------|:------------|
| --rtsp | Option | rtsp video stream address; supports one or more simultaneous stream inputs |
| --camera_id | Option | The camera ID for prediction. The default is -1 (no camera prediction); it can be set to 0 - (number of cameras - 1). Press `q` in the visualization interface during prediction to write the prediction result to output/output.mp4 |
| --device | Option | Running device, options include `CPU/GPU/XPU`, and the default is `CPU`. |
| --pushurl | Option | Push the output video to an rtsp stream, normally starting with `rtsp://`. This takes priority over local video saving: when set, the pipeline will not save a local visualization video. The default is "", which means pushing is disabled. |
| --output_dir | Option | The root directory for the visualization results, and the default is output/ |
| --run_mode | Option | For GPU, the default is paddle, with (paddle/trt_fp32/trt_fp16/trt_int8) as optional |
| --enable_mkldnn | Option | Whether to enable MKLDNN acceleration in CPU prediction, the default is False |
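For illustration, a minimal sketch combining several of the parameters above; the config path is an assumption, and camera 0 must exist on the machine.

```
# predict from local camera 0 on GPU and save the visualized result under output/
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
                                   --camera_id=0 \
                                   --device=gpu \
                                   --output_dir=output/
```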
The overall solution for PP-Vehicle v2 is shown in the graph below:

<div width="1000" align="center">
  <img src="https://user-images.githubusercontent.com/31800336/218659932-31f4298c-042d-436d-9845-18879f5d31e3.png"/>
</div>
### Vehicle detection

- Take PP-YOLOE L as the object detection model
- For detailed documentation, please refer to [PP-YOLOE](../../../../configs/ppyoloe/) and [Multiple-Object-Tracking](ppvehicle_mot_en.md)
### Vehicle tracking

- Vehicle tracking with the SDE solution
- Adopt PP-YOLOE L (high precision) and S (lightweight) as the detection models
- Adopt the OC-SORT solution for the tracking module
- Refer to [OC-SORT](../../../../configs/mot/ocsort) and [Multi-Object Tracking](ppvehicle_mot_en.md) for details

### Attribute Recognition
#### Illegal Parking Detection
- Use vehicle tracking model (high precision) PP-YOLOE L to determine whether the parking is illegal based on the vehicle's trajectory and the designated illegal parking area. If it is illegal parking, display the illegal parking plate number.
- For details, please refer to [Illegal Parking Detection](ppvehicle_illegal_parking_en.md)
#### Vehicle Press Line
- Use the segmentation model PP-LiteSeg to extract lane lines from the frame, and combine them with the vehicle detection box to judge whether the vehicle is pressing the lane lines.
- For details, please refer to [Vehicle Press Line](ppvehicle_press_en.md)
#### Vehicle Retrograde
- Use the segmentation model PP-LiteSeg to extract lane lines from the frame, and combine them with the vehicle's driving route to identify vehicles driving against the traffic direction.
- For details, please refer to [Vehicle Retrograde](ppvehicle_retrograde_en.md)
Vehicle detection and tracking are widely used in traffic monitoring and autonomous driving.
| Task | Model | Accuracy | Inference Speed (ms) | Download |
|:-----|:------|:---------|:---------------------|:---------|
| Vehicle Detection/Tracking | PP-YOLOE-l | mAP: 63.9<br>MOTA: 50.1 | Detection: 25.1ms<br>Tracking: 31.8ms | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip) |
| Vehicle Detection/Tracking | PP-YOLOE-s | mAP: 61.3<br>MOTA: 46.8 | Detection: 16.2ms<br>Tracking: 21.0ms | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_ppvehicle.zip) |
1. The detection/tracking model uses the PPVehicle dataset (which integrates BDD100K-MOT and UA-DETRAC). The dataset merges car, truck, bus, and van from BDD100K-MOT and car, bus, and van from UA-DETRAC into one class, `vehicle` (1). The detection accuracy mAP was tested on the test set of PPVehicle, and the tracking accuracy MOTA was obtained on the test set of BDD100K-MOT (`car, truck, bus, van` combined into one class `vehicle`). For more details about the training procedure, please refer to [ppvehicle](../../../../configs/ppvehicle).
2. Inference speed is obtained at T4 with TensorRT FP16 enabled, which includes data pre-processing, model inference and post-processing.
## How To Use
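As a hedged sketch of a typical tracking run (the config path and the `-o MOT.enable=True` override below are assumptions, not taken from this document):

```
# run vehicle tracking on a local video on GPU
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
                                   -o MOT.enable=True \
                                   --video_file=test_video.mp4 \
                                   --device=gpu
```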