Unverified · Commit 40d2dff1 · authored by zhiboniu, committed by GitHub

ppvehicle tutorials template (#6578)

* ppvehicle tutorials template

* rename pphuman docs

* update quickstart
Parent befeaeb5
@@ -423,9 +423,9 @@
- Customization tutorials
- [Object detection](docs/advanced_tutorials/customization/detection.md)
- [Keypoint detection](docs/advanced_tutorials/customization/keypoint_detection.md)
- [Multi-object tracking](docs/advanced_tutorials/customization/mot.md)
- [Action recognition](docs/advanced_tutorials/customization/action.md)
- [Attribute recognition](docs/advanced_tutorials/customization/attribute.md)
- [Multi-object tracking](docs/advanced_tutorials/customization/pphuman_mot.md)
- [Action recognition](docs/advanced_tutorials/customization/pphuman_action.md)
- [Attribute recognition](docs/advanced_tutorials/customization/pphuman_attribute.md)
### Courses
......
@@ -418,9 +418,9 @@ The comparison between COCO mAP and FPS on Qualcomm Snapdragon 865 processor of
- Customization
- [Object detection](docs/advanced_tutorials/customization/detection.md)
- [Keypoint detection](docs/advanced_tutorials/customization/keypoint_detection.md)
- [Multiple object tracking](docs/advanced_tutorials/customization/mot.md)
- [Action recognition](docs/advanced_tutorials/customization/action.md)
- [Attribute recognition](docs/advanced_tutorials/customization/attribute.md)
- [Multiple object tracking](docs/advanced_tutorials/customization/pphuman_mot.md)
- [Action recognition](docs/advanced_tutorials/customization/pphuman_action.md)
- [Attribute recognition](docs/advanced_tutorials/customization/pphuman_attribute.md)
### Courses
......
@@ -26,7 +26,7 @@
- The **mix_mot_ch** dataset is a joint dataset composed of MOT17 and CrowdHuman; **mix_det** is a joint dataset composed of MOT17, CrowdHuman, Cityscapes, and ETHZ. For the dataset format and directory layout, refer to [this link](https://github.com/ifzhang/ByteTrack#data-preparation), and place the data under the `dataset/mot/` directory. For accuracy validation, the **MOT17-half val** dataset can be used throughout.
- OC_SORT training means training a standalone detector on the MOT dataset; at inference time the tracker is assembled to evaluate MOT metrics, and the standalone detection model can also be evaluated on detection metrics (see the command sketch below).
- For export and deployment, OC_SORT exports the detection model separately and then runs it with the assembled tracker; refer to [PP-Tracking](../../../deploy/pptracking/python).
- OC_SORT is the primary tracking solution of pipeline analysis projects such as PP-Human and PP-Vehicle; for usage, refer to [Pipeline](../../../deploy/pipeline) and [MOT](../../../deploy/pipeline/docs/tutorials/mot.md).
- OC_SORT is the primary tracking solution of pipeline analysis projects such as PP-Human and PP-Vehicle; for usage, refer to [Pipeline](../../../deploy/pipeline) and [MOT](../../../deploy/pipeline/docs/tutorials/pphuman_mot.md).
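A sketch of the train/eval commands implied above; the config name `ocsort_yolox.yml` is an assumption and should be checked against the files actually present under `configs/mot/ocsort`:

```bash
# Train the standalone detector on the mixed MOT dataset (assumed config name)
python tools/train.py -c configs/mot/ocsort/ocsort_yolox.yml

# Evaluate MOT metrics on MOT17-half val with the tracker assembled at inference time
python tools/eval_mot.py -c configs/mot/ocsort/ocsort_yolox.yml
```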
## Quick Start
......
@@ -17,7 +17,7 @@ The PaddleDetection team provides a PP-YOLOE-based pedestrian detection model for
# PP-YOLOE Cigarette Detection Model
The PP-YOLOE-based cigarette detection model is one link in PP-Human's detection-based action recognition solution; for how to use it for smoking recognition in PP-Human, refer to the [PP-Human action recognition module](../../deploy/pipeline/docs/tutorials/action.md). The model detects a single class: cigarette. Owing to data-source restrictions, the training data cannot be released publicly at this time. To improve detection quality, the model uses weights trained on the small-object dataset VisDrone (see [visdrone](../visdrone)) as its pretrained model.
The PP-YOLOE-based cigarette detection model is one link in PP-Human's detection-based action recognition solution; for how to use it for smoking recognition in PP-Human, refer to the [PP-Human action recognition module](../../deploy/pipeline/docs/tutorials/pphuman_action.md). The model detects a single class: cigarette. Owing to data-source restrictions, the training data cannot be released publicly at this time. To improve detection quality, the model uses weights trained on the small-object dataset VisDrone (see [visdrone](../visdrone)) as its pretrained model.
| Model | Dataset | mAP<sup>val<br>0.5:0.95 | mAP<sup>val<br>0.5 | Download | Config |
|:---------|:-------:|:------:|:------:| :----: | :------:|
......
@@ -70,19 +70,19 @@ PP-Human supports image, single-camera video, and multi-camera video inputs; the functions
## 📚 Documentation Tutorials
### [Quick Start](docs/tutorials/QUICK_STARTED.md)
### [Quick Start](docs/tutorials/PPHuman_QUICK_STARTED.md)
### Pedestrian Attribute/Feature Recognition
* [Quick start](docs/tutorials/attribute.md)
* [Customized development tutorial](../../docs/advanced_tutorials/customization/attribute.md)
* [Quick start](docs/tutorials/pphuman_attribute.md)
* [Customized development tutorial](../../docs/advanced_tutorials/customization/pphuman_attribute.md)
* Data preparation
* Model optimization
* Adding new attributes
### Action Recognition
* [Quick start](docs/tutorials/action.md)
* [Quick start](docs/tutorials/pphuman_action.md)
* Falling detection
* Fighting detection
* [Customized development tutorial](../../docs/advanced_tutorials/customization/action_recognotion/README.md)
@@ -93,17 +93,17 @@ PP-Human supports image, single-camera video, and multi-camera video inputs; the functions
### Multi-Camera Tracking (ReID)
* [Quick start](docs/tutorials/mtmct.md)
* [Customized development tutorial](../../docs/advanced_tutorials/customization/mtmct.md)
* [Quick start](docs/tutorials/pphuman_mtmct.md)
* [Customized development tutorial](../../docs/advanced_tutorials/customization/pphuman_mtmct.md)
* Data preparation
* Model optimization
### Pedestrian Tracking, Visitor Traffic Statistics, and Trace Records
* [Quick start](docs/tutorials/mot.md)
* [Quick start](docs/tutorials/pphuman_mot.md)
* Pedestrian tracking
* Visitor traffic statistics and trace records
* Regional intrusion detection and counting
* [Customized development tutorial](../../docs/advanced_tutorials/customization/mot.md)
* [Customized development tutorial](../../docs/advanced_tutorials/customization/pphuman_mot.md)
* Data preparation
* Model optimization
@@ -74,19 +74,19 @@ Click to download the model, then unzip and save it in the `./output_inference`
## 📚 Doc Tutorials
### [A Quick Start](docs/tutorials/QUICK_STARTED.md)
### [A Quick Start](docs/tutorials/PPHuman_QUICK_STARTED.md)
### Pedestrian attribute/feature recognition
* [A quick start](docs/tutorials/attribute.md)
* [Customized development tutorials](../../docs/advanced_tutorials/customization/attribute.md)
* [A quick start](docs/tutorials/pphuman_attribute.md)
* [Customized development tutorials](../../docs/advanced_tutorials/customization/pphuman_attribute.md)
* Data Preparation
* Model Optimization
* New Attributes
### Behavior detection
* [A quick start](docs/tutorials/action.md)
* [A quick start](docs/tutorials/pphuman_action.md)
* Falling detection
* Fighting detection
* [Customized development tutorials](../../docs/advanced_tutorials/customization/action_recognotion/README.md)
@@ -97,17 +97,17 @@ Click to download the model, then unzip and save it in the `./output_inference`
### ReID
* [A quick start](docs/tutorials/mtmct.md)
* [Customized development tutorials](../../docs/advanced_tutorials/customization/mtmct.md)
* [A quick start](docs/tutorials/pphuman_mtmct.md)
* [Customized development tutorials](../../docs/advanced_tutorials/customization/pphuman_mtmct.md)
* Data Preparation
* Model Optimization
### Pedestrian tracking, visitor traffic statistics, trace records
* [A quick start](docs/tutorials/mot.md)
* [A quick start](docs/tutorials/pphuman_mot.md)
* Pedestrian tracking
* Visitor traffic statistics
* Regional intrusion diagnosis and counting
* [Customized development tutorials](../../docs/advanced_tutorials/customization/mot.md)
* [Customized development tutorials](../../docs/advanced_tutorials/customization/pphuman_mot.md)
* Data Preparation
* Model Optimization
@@ -159,24 +159,24 @@ The overall PP-Human v2 solution is shown in the figure below:
### Pedestrian Detection
- PP-YOLOE L is used as the object detection model
- For details, refer to [PP-YOLOE](../../../../configs/ppyoloe/) and the [detection and tracking doc](mot.md)
- For details, refer to [PP-YOLOE](../../../../configs/ppyoloe/) and the [detection and tracking doc](pphuman_mot.md)
### Pedestrian Tracking
- Pedestrian tracking is implemented with the SDE scheme
- The detection model uses PP-YOLOE L (high accuracy) or S (lightweight)
- The tracking module uses the OC-SORT scheme
- For details, refer to [OC-SORT](../../../../configs/mot/ocsort) and the [detection and tracking doc](mot.md)
- For details, refer to [OC-SORT](../../../../configs/mot/ocsort) and the [detection and tracking doc](pphuman_mot.md)
### Multi-Camera Pedestrian Tracking
- PP-YOLOE + OC-SORT produce single-camera multi-object tracking trajectories
- ReID (the StrongBaseline network) extracts features from the detection results of each frame
- Trajectory features from multiple cameras are matched to obtain cross-camera tracking results
- For details, refer to [multi-camera tracking](mtmct.md)
- For details, refer to [multi-camera tracking](pphuman_mtmct.md)
### Attribute Recognition
- PP-YOLOE + OC-SORT track the human body
- StrongBaseline (a multi-label classification model) recognizes the attributes, mainly including age, gender, hat, glasses, top/bottom clothing style, backpack, etc.
- For details, refer to [attribute recognition](attribute.md)
- For details, refer to [attribute recognition](pphuman_attribute.md)
### Action Recognition
- Four action recognition solutions are provided
@@ -184,4 +184,4 @@ The overall PP-Human v2 solution is shown in the figure below:
- 2. Action recognition based on image classification, e.g. phone-call recognition
- 3. Action recognition based on detection, e.g. smoking recognition
- 4. Action recognition based on video classification, e.g. fighting recognition
- For details, refer to [action recognition](action.md)
- For details, refer to [action recognition](pphuman_action.md)
# Quick Start
## Contents
- [Environment Preparation](#environment-preparation)
- [Model Download](#model-download)
- [Configuration](#configuration)
- [Inference and Deployment](#inference-and-deployment)
- [Parameter Description](#parameter-description)
- [Solution Introduction](#solution-introduction)
- [Vehicle Detection](#vehicle-detection)
- [Vehicle Tracking](#vehicle-tracking)
- [License Plate Recognition](#license-plate-recognition)
- [Attribute Recognition](#attribute-recognition)
## Environment Preparation
Requirements: PaddleDetection version >= release/2.4, or the develop branch.
Install PaddlePaddle and PaddleDetection:
```
# PaddlePaddle CUDA10.1
python -m pip install paddlepaddle-gpu==2.2.2.post101 -f https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html

# PaddlePaddle CPU
python -m pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple

# Clone the PaddleDetection repository
cd <path/to/clone/PaddleDetection>
git clone https://github.com/PaddlePaddle/PaddleDetection.git

# Install other dependencies
cd PaddleDetection
pip install -r requirements.txt
```
1. For detailed installation instructions, refer to the [installation doc](../../../../docs/tutorials/INSTALL_cn.md); a quick sanity check of the finished environment is sketched below.
2. If TensorRT inference acceleration is needed (as used for speed tests), install a Paddle build with TensorRT. You can download it from the [Paddle inference packages](https://paddleinference.paddlepaddle.org.cn/v2.2/user_guides/download_lib.html#python), or prepare the Paddle environment with Docker or by compiling from source following the [guide](https://www.paddlepaddle.org.cn/inference/master/optimize/paddle_trt.html).
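The two checks below are common PaddlePaddle/PaddleDetection self-tests, offered here as a suggestion rather than part of the original steps:

```bash
# Check that PaddlePaddle is installed correctly (also reports GPU availability)
python -c "import paddle; paddle.utils.run_check()"

# Run PaddleDetection's architecture tests to confirm the repo dependencies
python ppdet/modeling/tests/test_architectures.py
```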
## Model Download
## Configuration
The PP-Vehicle configuration file is `deploy/pipeline/config/infer_cfg_ppvehicle.yml`; it stores the model paths, and different functions require different task types to be set.
The mapping between functions and task types is as follows:
| Input Type | Function | Task Types | Config Items |
|-------|-------|----------|-----|
| Image | Attribute recognition | Object detection, attribute recognition | DET ATTR |
| Single-camera video | Attribute recognition | Multi-object tracking, attribute recognition | MOT ATTR |
| Single-camera video | License plate recognition | Multi-object tracking, license plate recognition | MOT VEHICLEPLATE |
For example, attribute recognition on video input involves both the multi-object tracking and the attribute recognition task types.
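A minimal sketch of the corresponding entries in `infer_cfg_ppvehicle.yml`, with illustrative model-directory names; the exact field names and defaults should be checked against the shipped configuration file:

```yaml
MOT:                      # base tracking capability required by video-input tasks
  model_dir: output_inference/mot_ppyoloe_l_36e_ppvehicle/   # illustrative path
  enable: True

ATTR:                     # vehicle attribute recognition
  model_dir: output_inference/vehicle_attribute_model/       # illustrative path
  enable: True
```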
**Notes:**
- To enable a given function, set its enable option to True in the configuration file; its basemode type then activates the base-capability models it depends on in code, such as the tracking model.
- To change only a model path, either add `--model_dir det=ppyoloe/` to the command line or edit the corresponding model path in the configuration file; see the parameter description below for details.
## Inference and Deployment
```
# Vehicle detection: specify the config file and a test image
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml --image_file=test_image.jpg --device=gpu [--run_mode trt_fp16]

# Vehicle tracking: specify the config file and a test video; set enable to True under the MOT section of deploy/pipeline/config/infer_cfg_ppvehicle.yml
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16]

# Vehicle tracking: specify the config file, model path, and a test video; set enable to True under the MOT section of deploy/pipeline/config/infer_cfg_ppvehicle.yml
# Model paths given on the command line take precedence over the configuration file
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml --video_file=test_video.mp4 --device=gpu --model_dir det=ppyoloe/ [--run_mode trt_fp16]

# Vehicle attribute recognition: specify the config file and a test video; set enable to True under the ATTR section of deploy/pipeline/config/infer_cfg_ppvehicle.yml
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16]
```
### Parameter Description
| Parameter | Required | Description |
|-------|-------|----------|
| --config | Yes | Path to the configuration file |
| --model_dir | Option | Model paths per task, taking precedence over the configuration file, e.g. `--model_dir det=better_det/ attr=better_attr/` |
| --image_file | Option | Image to predict |
| --image_dir | Option | Directory of images to predict |
| --video_file | Option | Video to predict |
| --camera_id | Option | Camera ID for prediction; default -1 (no camera; can be set to 0 through number of cameras - 1). During prediction, press `q` in the visualization window to quit and write the result to output/output.mp4 |
| --device | Option | Runtime device, one of `CPU/GPU/XPU`; default `CPU` |
| --output_dir | Option | Root directory for visualized results; default output/ |
| --run_mode | Option | Inference mode when using GPU; default paddle, options: paddle/trt_fp32/trt_fp16/trt_int8 |
| --enable_mkldnn | Option | Whether to enable MKLDNN acceleration for CPU inference; default False |
| --cpu_threads | Option | Number of CPU threads; default 1 |
| --trt_calib_mode | Option | Whether TensorRT uses calibration; default False. Set True when using TensorRT int8, and False when using a model quantized by PaddleSlim |
| --do_entrance_counting | Option | Whether to count entrance/exit traffic; default False |
| --draw_center_traj | Option | Whether to draw tracking trajectories; default False |
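Combining several of the options above, an illustrative full command that tracks vehicles, draws trajectories, counts entrance/exit traffic, and runs with TensorRT FP16 (`test_video.mp4` is a placeholder file name):

```bash
python deploy/pipeline/pipeline.py \
    --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
    --video_file=test_video.mp4 \
    --device=gpu \
    --run_mode=trt_fp16 \
    --draw_center_traj \
    --do_entrance_counting \
    --output_dir=output/
```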
## Solution Introduction
The overall PP-Vehicle v2 solution is shown in the figure below:
<div width="1000" align="center">
<img src="../../../../docs/images/ppvehicle.png"/>
</div>
### Vehicle Detection
- PP-YOLOE L is used as the object detection model
- For details, refer to [PP-YOLOE](../../../../configs/ppyoloe/) and the [detection and tracking doc](ppvehicle_mot.md)
### Vehicle Tracking
- Vehicle tracking is implemented with the SDE scheme
- The detection model uses PP-YOLOE L (high accuracy) or S (lightweight)
- The tracking module uses the OC-SORT scheme
- For details, refer to [OC-SORT](../../../../configs/mot/ocsort) and the [detection and tracking doc](ppvehicle_mot.md)
### Attribute Recognition
- PP-YOLOE + OC-SORT track the vehicles
- For details, refer to [attribute recognition](ppvehicle_attribute.md)
### License Plate Recognition
- PaddleOCR's featured models ch_PP-OCRv3_det + ch_PP-OCRv3_rec recognize the license plate number (a config sketch follows below)
- For details, refer to [license plate recognition](ppvehicle_plate.md)
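A sketch of enabling plate recognition, assuming the config item is spelled `VEHICLEPLATE` as in the task-type table above and that it takes separate OCR detection/recognition model paths; verify the exact keys against `infer_cfg_ppvehicle.yml`:

```yaml
MOT:
  enable: True            # plate recognition runs on tracked vehicles

VEHICLEPLATE:             # assumed key name, from the task-type table
  det_model_dir: output_inference/ch_PP-OCRv3_det_infer/   # illustrative OCR det path
  rec_model_dir: output_inference/ch_PP-OCRv3_rec_infer/   # illustrative OCR rec path
  enable: True
```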
[English](action_en.md) | Simplified Chinese
[English](pphuman_action_en.md) | Simplified Chinese
# PP-Human Action Recognition Module
@@ -68,7 +68,7 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pph
--device=gpu \
--model_dir kpt=./dark_hrnet_w32_256x192 action=./STGCN
```
4. For a full description of the launch-command parameters, refer to [Parameter Description](./QUICK_STARTED.md)
4. For a full description of the launch-command parameters, refer to [Parameter Description](./PPHuman_QUICK_STARTED.md)
### Solution Description
@@ -127,7 +127,7 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pph
--video_file=test_video.mp4 \
--device=gpu
```
4. For a full description of the launch-command parameters, refer to [Parameter Description](./QUICK_STARTED.md)
4. For a full description of the launch-command parameters, refer to [Parameter Description](./PPHuman_QUICK_STARTED.md)
### Solution Description
1. Object detection and multi-object tracking obtain the pedestrian detection boxes and tracking IDs from the video input. The detection model is PP-YOLOE (see [PP-YOLOE](../../../../configs/ppyoloe/README_cn.md)); the tracking solution is OC-SORT (see [OC-SORT](../../../../configs/mot/ocsort)).
@@ -183,7 +183,7 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pph
--video_file=test_video.mp4 \
--device=gpu
```
4. For a full description of the launch-command parameters, refer to [Parameter Description](./QUICK_STARTED.md)
4. For a full description of the launch-command parameters, refer to [Parameter Description](./PPHuman_QUICK_STARTED.md)
### Solution Description
1. Object detection and multi-object tracking obtain the pedestrian detection boxes and tracking IDs from the video input. The detection model is PP-YOLOE (see [PP-YOLOE](../../../../configs/ppyoloe/README_cn.md)); the tracking solution is OC-SORT (see [OC-SORT](../../../../configs/mot/ocsort)).
@@ -202,7 +202,7 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pph
<img src="https://user-images.githubusercontent.com/22989727/178769370-03ab1965-cfd1-401b-9902-82620a06e43c.gif" width='1000'/>
</div>
For usage details, refer to `5. Regional intrusion detection and counting` in the [PP-Human detection and tracking module](mot.md).
For usage details, refer to `5. Regional intrusion detection and counting` in the [PP-Human detection and tracking module](pphuman_mot.md).
### Solution Description
1. Multi-object tracking obtains the pedestrian detection boxes and tracking IDs from the video input. The detection model is PP-YOLOE (see [PP-YOLOE](../../../../configs/ppyoloe/README_cn.md)); the tracking solution is OC-SORT (see [OC-SORT](../../../../configs/mot/ocsort)).
@@ -255,7 +255,7 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pph
--video_file=test_video.mp4 \
--device=gpu
```
4. For a full description of the launch-command parameters, refer to [Parameter Description](./QUICK_STARTED.md)
4. For a full description of the launch-command parameters, refer to [Parameter Description](./PPHuman_QUICK_STARTED.md)
### Solution Description
......
English | [简体中文](action.md)
English | [简体中文](pphuman_action.md)
# Action Recognition Module of PP-Human
@@ -81,7 +81,7 @@ SKELETON_ACTION: # Config for skeleton-based action recognition model
--device=gpu \
--model_dir kpt=./dark_hrnet_w32_256x192 action=./STGCN
```
4. For detailed parameter description, please refer to [Parameter Description](./QUICK_STARTED.md)
4. For detailed parameter description, please refer to [Parameter Description](./PPHuman_QUICK_STARTED.md)
### Introduction to the Solution
@@ -133,7 +133,7 @@ ID_BASED_CLSACTION: # config for classification-based action recognition model
--video_file=test_video.mp4 \
--device=gpu
```
4. For detailed parameter description, please refer to [Parameter Description](./QUICK_STARTED.md)
4. For detailed parameter description, please refer to [Parameter Description](./PPHuman_QUICK_STARTED.md)
### Introduction to the Solution
1. Get the pedestrian detection box and the tracking ID number of the video input through object detection and multi-object tracking. The adopted model is PP-YOLOE, and for details, please refer to [PP-YOLOE](../../../configs/ppyoloe).
@@ -180,7 +180,7 @@ ID_BASED_DETACTION: # Config for detection-based action recognition model
--video_file=test_video.mp4 \
--device=gpu
```
4. For detailed parameter description, please refer to [Parameter Description](./QUICK_STARTED.md)
4. For detailed parameter description, please refer to [Parameter Description](./PPHuman_QUICK_STARTED.md)
### Introduction to the Solution
1. Get the pedestrian detection box and the tracking ID number of the video input through object detection and multi-object tracking. The adopted model is PP-YOLOE, and for details, please refer to [PP-YOLOE](../../../../configs/ppyoloe).
@@ -238,7 +238,7 @@ VIDEO_ACTION: # Config for video-classification-based action recognition model
--video_file=test_video.mp4 \
--device=gpu
```
5. For detailed parameter description, please refer to [Parameter Description](./QUICK_STARTED.md).
5. For detailed parameter description, please refer to [Parameter Description](./PPHuman_QUICK_STARTED.md).
The result is shown as follows:
......
[English](attribute_en.md) | Simplified Chinese
[English](pphuman_attribute_en.md) | Simplified Chinese
# PP-Human Attribute Recognition Module
@@ -15,7 +15,7 @@
1. The detection/tracking model accuracy is obtained by training and testing on a fusion of [MOT17](https://motchallenge.net/), [CrowdHuman](http://www.crowdhuman.org/), [HIEVE](http://humaninevents.org/), and some business data.
2. The pedestrian attribute analysis accuracy is obtained by training and testing on a fusion of [PA100k](https://github.com/xh-liu/HydraPlus-Net#pa-100k-dataset), [RAPv2](http://www.rapdataset.com/rapv2.html), [PETA](http://mmlab.ie.cuhk.edu.hk/projects/PETA.html), and some business data.
3. The inference speed is measured on a V100 with TensorRT FP16 and covers model prediction only.
4. The attribute model depends on the tracking model's results; download the tracking model from the [tracking model page](./mot.md), choosing the high-accuracy or lightweight version as needed.
4. The attribute model depends on the tracking model's results; download the tracking model from the [tracking model page](./pphuman_mot.md), choosing the high-accuracy or lightweight version as needed.
5. After downloading, unzip the models and place them under the PaddleDetection/output_inference/ directory (a layout sketch follows below).
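A sketch of the expected layout after step 5; the tracking-model directory name matches the download used elsewhere in these docs, while the attribute-model directory name is hypothetical:

```
PaddleDetection/output_inference/
├── mot_ppyoloe_l_36e_pipeline/    # tracking model (see the tracking model page)
└── strongbaseline_attr/           # hypothetical attribute-model directory
```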
## Usage
@@ -31,7 +31,7 @@ ATTR: #模
enable: False  # whether to enable this function
```
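For orientation, the surrounding ATTR block typically looks like the sketch below; fields other than `enable` are assumptions to be checked against `infer_cfg_pphuman.yml`:

```yaml
ATTR:                                               # attribute recognition module
  model_dir: output_inference/strongbaseline_attr/  # hypothetical model path
  batch_size: 8                                     # assumed default
  enable: False                                     # whether to enable this function
```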
2. For image input, the launch command is as follows (for more command-parameter details, refer to [Quick Start - Parameter Description](./QUICK_STARTED.md#41-参数说明)).
2. For image input, the launch command is as follows (for more command-parameter details, refer to [Quick Start - Parameter Description](./PPHuman_QUICK_STARTED.md#41-参数说明)).
```bash
# single image
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
......
English | [简体中文](attribute.md)
English | [简体中文](pphuman_attribute.md)
# Attribute Recognition Modules of PP-Human
@@ -12,7 +12,7 @@ Pedestrian attribute recognition has been widely used in the intelligent communi
1. The precision of pedestrian attribute analysis is obtained by training and testing on a dataset consisting of [PA100k](https://github.com/xh-liu/HydraPlus-Net#pa-100k-dataset), [RAPv2](http://www.rapdataset.com/rapv2.html), [PETA](http://mmlab.ie.cuhk.edu.hk/projects/PETA.html), and some business data.
2. The inference speed is measured on a V100 using TensorRT FP16.
3. The attribute model is based on the tracking results; please download the tracking model from the [MOT page](./mot_en.md). Both the high-precision and the faster model are available.
3. The attribute model is based on the tracking results; please download the tracking model from the [MOT page](./pphuman_mot_en.md). Both the high-precision and the faster model are available.
4. Unzip the downloaded model and place it in the `PaddleDetection/output_inference/` directory.
## Instruction
@@ -28,7 +28,7 @@ ATTR: #modul
enable: False #whether to enable this model
```
2. When inputting the image, run the command as follows (please refer to [QUICK_STARTED-Parameters](./QUICK_STARTED.md#41-参数说明) for more details):
2. When inputting the image, run the command as follows (please refer to [QUICK_STARTED-Parameters](./PPHuman_QUICK_STARTED.md#41-参数说明) for more details):
```bash
# single image
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
......
[English](mot_en.md) | Simplified Chinese
[English](pphuman_mot_en.md) | Simplified Chinese
# PP-Human Detection and Tracking Module
......
English | [简体中文](mot.md)
English | [简体中文](pphuman_mot.md)
# Detection and Tracking Module of PP-Human
......
[English](mtmct_en.md) | Simplified Chinese
[English](pphuman_mtmct_en.md) | Simplified Chinese
# PP-Human Multi-Camera Tracking Module
@@ -7,7 +7,7 @@ The main goal of the PP-Human multi-camera tracking module is to provide a simple, efficient multi-cam
## Usage
1. Download the [pedestrian tracking model](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) and the [ReID model](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip), unzip them into the `./output_inference` directory, and update the model paths in the configuration file. For simplicity, you can also keep the default configuration and let the models download automatically. For the MOT model, refer to the [MOT description](./mot.md).
1. Download the [pedestrian tracking model](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) and the [ReID model](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip), unzip them into the `./output_inference` directory, and update the model paths in the configuration file. For simplicity, you can also keep the default configuration and let the models download automatically. For the MOT model, refer to the [MOT description](./pphuman_mot.md).
2. In multi-camera tracking mode, the input videos must all be placed in the same directory, and enable=True must be set under the REID option in infer_cfg_pphuman.yml. The command is:
```bash
......
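A hedged sketch of the typical launch command, assuming the pipeline accepts a `--video_dir` argument pointing at the directory of input videos (`mtmct_dir/` is a placeholder):

```bash
python deploy/pipeline/pipeline.py \
    --config deploy/pipeline/config/infer_cfg_pphuman.yml \
    --video_dir=mtmct_dir/ \
    --device=gpu
```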
English | [简体中文](mtmct.md)
English | [简体中文](pphuman_mtmct.md)
# Multi-Target Multi-Camera Tracking Module of PP-Human
@@ -7,7 +7,7 @@ The MTMCT module of PP-Human aims to provide a multi-target multi-camera pipleli
## How to Use
1. Download the [REID model](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) and unzip it to `./output_inference`. For the MOT model, please refer to the [MOT description](./mot.md).
1. Download the [REID model](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) and unzip it to `./output_inference`. For the MOT model, please refer to the [MOT description](./pphuman_mot.md).
2. In MTMCT mode, the input videos are required to be put in the same directory. Set the REID `enable: True` in infer_cfg_pphuman.yml. The command line is:
```bash
......
# PP-Vehicle Attribute Recognition Module
【Application introduction】
【Model download】
## Usage
【Configuration description】
【Commands】
【Demo】
## Solution Description
【Implementation and highlights】
## References
# PP-Vehicle Tracking Module
【Application introduction】
【Model download】
## Usage
【Configuration description】
【Commands】
【Demo】
## Solution Description
【Implementation and highlights】
## References
# PP-Vehicle License Plate Recognition Module
【Application introduction】
【Model download】
## Usage
【Configuration description】
【Commands】
【Demo】
## Solution Description
【Implementation and highlights】
## References
@@ -50,5 +50,5 @@
1. [Action recognition based on detection with human ID](./idbased_det.md)
2. [Action recognition based on classification with human ID](./idbased_clas.md)
3. [Action recognition based on human skeleton keypoints](./skeletonbased_rec.md)
4. [Action recognition based on tracking with human ID](../mot.md)
4. [Action recognition based on tracking with human ID](../pphuman_mot.md)
5. [Action recognition based on video classification](./videobased_rec.md)
@@ -51,5 +51,5 @@ The following are detailed descriptions of the five major categories of solutions
1. [action recognition based on detection with human id.](./idbased_det_en.md)
2. [action recognition based on classification with human id.](./idbased_clas_en.md)
3. [action recognition based on skeleton.](./skeletonbased_rec_en.md)
4. [action recognition based on tracking with human id](../mot_en.md)
4. [action recognition based on tracking with human id](../pphuman_mot_en.md)
5. [action recognition based on video classification](./videobased_rec_en.md)
@@ -53,7 +53,7 @@ data/
## Model Optimization
### Detection-Tracking Model Optimization
The performance of classification-based action recognition depends on the upstream detection and tracking results: if pedestrian positions cannot be detected accurately in the actual scene, or it is hard to assign person IDs correctly across frames, the action recognition stage will be limited. If you encounter these problems in practice, refer to [secondary development of the detection task](../detection.md) and [secondary development of the multi-object tracking task](../mot.md) to optimize the detection/tracking models.
The performance of classification-based action recognition depends on the upstream detection and tracking results: if pedestrian positions cannot be detected accurately in the actual scene, or it is hard to assign person IDs correctly across frames, the action recognition stage will be limited. If you encounter these problems in practice, refer to [secondary development of the detection task](../detection.md) and [secondary development of the multi-object tracking task](../pphuman_mot.md) to optimize the detection/tracking models.
### Half-Body Prediction
......
@@ -54,7 +54,7 @@ data/
## Model Optimization
### Detection-Tracking Model Optimization
The performance of action recognition based on classification with human id depends on the pre-order detection and tracking models. If the pedestrian location cannot be accurately detected in the actual scene, or it is difficult to correctly assign the person ID between different frames, the performance of the action recognition part will be limited. If you encounter the above problems in actual use, please refer to [Secondary Development of Detection Task](../detection_en.md) and [Secondary Development of Multi-target Tracking Task](../mot_en.md) for detection/track model optimization.
The performance of action recognition based on classification with human id depends on the pre-order detection and tracking models. If the pedestrian location cannot be accurately detected in the actual scene, or it is difficult to correctly assign the person ID between different frames, the performance of the action recognition part will be limited. If you encounter the above problems in actual use, please refer to [Secondary Development of Detection Task](../detection_en.md) and [Secondary Development of Multi-target Tracking Task](../pphuman_mot_en.md) for detection/track model optimization.
### Half-Body Prediction
......
@@ -15,7 +15,7 @@
## Model Optimization
### Detection-Tracking Model Optimization
The performance of detection-based action recognition depends on the upstream detection and tracking results: if pedestrian positions cannot be detected accurately in the actual scene, or it is hard to assign person IDs correctly across frames, the action recognition stage will be limited. If you encounter these problems in practice, refer to [secondary development of the detection task](../detection.md) and [secondary development of the multi-object tracking task](../mot.md) to optimize the detection/tracking models.
The performance of detection-based action recognition depends on the upstream detection and tracking results: if pedestrian positions cannot be detected accurately in the actual scene, or it is hard to assign person IDs correctly across frames, the action recognition stage will be limited. If you encounter these problems in practice, refer to [secondary development of the detection task](../detection.md) and [secondary development of the multi-object tracking task](../pphuman_mot.md) to optimize the detection/tracking models.
### Larger Resolution
......
@@ -14,7 +14,7 @@ The model of action recognition based on detection with human id directly recogn
## Model Optimization
### Detection-Tracking Model Optimization
The performance of action recognition based on detection with human id depends on the pre-order detection and tracking models. If the pedestrian location cannot be accurately detected in the actual scene, or it is difficult to correctly assign the person ID between different frames, the performance of the action recognition part will be limited. If you encounter the above problems in actual use, please refer to [Secondary Development of Detection Task](../detection_en.md) and [Secondary Development of Multi-target Tracking Task](../mot_en.md) for detection/track model optimization.
The performance of action recognition based on detection with human id depends on the pre-order detection and tracking models. If the pedestrian location cannot be accurately detected in the actual scene, or it is difficult to correctly assign the person ID between different frames, the performance of the action recognition part will be limited. If you encounter the above problems in actual use, please refer to [Secondary Development of Detection Task](../detection_en.md) and [Secondary Development of Multi-target Tracking Task](../pphuman_mot_en.md) for detection/track model optimization.
### Larger resolution
......
@@ -83,7 +83,7 @@ python prepare_dataset.py
## Model Optimization
### Detection-Tracking Model Optimization
The performance of skeleton-based action recognition depends on the upstream detection and tracking results: if pedestrian positions cannot be detected accurately in the actual scene, or it is hard to assign person IDs correctly across frames, the action recognition stage will be limited. If you encounter these problems in practice, refer to [secondary development of the detection task](../detection.md) and [secondary development of the multi-object tracking task](../mot.md) to optimize the detection/tracking models.
The performance of skeleton-based action recognition depends on the upstream detection and tracking results: if pedestrian positions cannot be detected accurately in the actual scene, or it is hard to assign person IDs correctly across frames, the action recognition stage will be limited. If you encounter these problems in practice, refer to [secondary development of the detection task](../detection.md) and [secondary development of the multi-object tracking task](../pphuman_mot.md) to optimize the detection/tracking models.
### Keypoint Model Optimization
Skeleton keypoints are the core feature of this solution, so keypoint localization quality for pedestrians also determines the overall action recognition performance. If the recognized keypoint coordinates are clearly wrong in the actual scene and the skeleton image composed of those keypoints no longer allows the specific action to be distinguished, refer to [secondary development of the keypoint detection task](../keypoint_detection.md) to optimize the keypoint model.
......
@@ -77,7 +77,7 @@ Now, we have available training data (`.npy`) and corresponding annotation files
## Model Optimization
### detection-tracking model optimization
The performance of action recognition based on skeleton depends on the pre-order detection and tracking models. If the pedestrian location cannot be accurately detected in the actual scene, or it is difficult to correctly assign the person ID between different frames, the performance of the action recognition part will be limited. If you encounter the above problems in actual use, please refer to [Secondary Development of Detection Task](../detection_en.md) and [Secondary Development of Multi-target Tracking Task](../mot_en.md) for detection/track model optimization.
The performance of action recognition based on skeleton depends on the pre-order detection and tracking models. If the pedestrian location cannot be accurately detected in the actual scene, or it is difficult to correctly assign the person ID between different frames, the performance of the action recognition part will be limited. If you encounter the above problems in actual use, please refer to [Secondary Development of Detection Task](../detection_en.md) and [Secondary Development of Multi-target Tracking Task](../pphuman_mot_en.md) for detection/track model optimization.
### keypoint model optimization
As the core feature of the scheme, the skeleton point positioning performance also determines the overall effect of action recognition. If there are obvious errors in the recognized keypoint coordinates in the actual scene, it is difficult to distinguish the specific actions from the skeleton image composed of the keypoints.
......
@@ -21,7 +21,7 @@
### Detection-Tracking Model Optimization
In PaddleDetection, keypoint detection supports both the Top-Down and the Bottom-Up solution. Top-Down first detects the subject and then its local keypoints; it is more accurate, but its speed drops as the number of detected subjects grows. Bottom-Up first detects the keypoints and then groups them onto the corresponding body parts; it is fast and independent of the number of subjects, but less accurate. For details of the two solutions and the corresponding models, refer to the [keypoint detection model series](../../../configs/keypoint/README.md).
When using the Top-Down solution, model performance depends on the upstream detection and tracking results: if pedestrian positions cannot be detected accurately in the actual scene, keypoint detection will be limited. If you encounter this problem in practice, refer to [secondary development of the detection task](./detection.md) and [secondary development of the multi-object tracking task](./mot.md) to optimize the detection/tracking models.
When using the Top-Down solution, model performance depends on the upstream detection and tracking results: if pedestrian positions cannot be detected accurately in the actual scene, keypoint detection will be limited. If you encounter this problem in practice, refer to [secondary development of the detection task](./detection.md) and [secondary development of the multi-object tracking task](./pphuman_mot.md) to optimize the detection/tracking models.
### Iterating with Scenario-Matched Data
The released keypoint detection models are mainly iterated on open-source datasets such as `COCO` / `AI Challenger`, which may lack surveillance scenes similar to the actual task (viewpoint, lighting, etc.) and sports scenes (with many unusual poses). Training with data that better matches the actual task scenario helps improve model performance.
......