diff --git a/README_cn.md b/README_cn.md index 4db9ecae956202ef7949fd7161f4e7f8a7241ca3..1b828414fc35f348ada797cafd07a49232245291 100644 --- a/README_cn.md +++ b/README_cn.md @@ -423,9 +423,9 @@ - 二次开发教程 - [目标检测](docs/advanced_tutorials/customization/detection.md) - [关键点检测](docs/advanced_tutorials/customization/keypoint_detection.md) - - [多目标跟踪](docs/advanced_tutorials/customization/mot.md) - - [行为识别](docs/advanced_tutorials/customization/action.md) - - [属性识别](docs/advanced_tutorials/customization/attribute.md) + - [多目标跟踪](docs/advanced_tutorials/customization/pphuman_mot.md) + - [行为识别](docs/advanced_tutorials/customization/pphuman_action.md) + - [属性识别](docs/advanced_tutorials/customization/pphuman_attribute.md) ### 课程专栏 diff --git a/README_en.md b/README_en.md index 30d2c5849f0b292901198b3f6d46ef8832e15183..c7f88eff36c5517af5afe42767810ea9489222c3 100644 --- a/README_en.md +++ b/README_en.md @@ -418,9 +418,9 @@ The comparison between COCO mAP and FPS on Qualcomm Snapdragon 865 processor of - Custumization - [Object detection](docs/advanced_tutorials/customization/detection.md) - [Keypoint detection](docs/advanced_tutorials/customization/keypoint_detection.md) - - [Multiple object tracking](docs/advanced_tutorials/customization/mot.md) - - [Action recognition](docs/advanced_tutorials/customization/action.md) - - [Attribute recognition](docs/advanced_tutorials/customization/attribute.md) + - [Multiple object tracking](docs/advanced_tutorials/customization/pphuman_mot.md) + - [Action recognition](docs/advanced_tutorials/customization/pphuman_action.md) + - [Attribute recognition](docs/advanced_tutorials/customization/pphuman_attribute.md) ### Courses diff --git a/configs/mot/ocsort/README.md b/configs/mot/ocsort/README.md index 6dfcf9fb76410b85f8fb9447961a6c881c838183..1d2d6144a2b4a0360854c1fbd8557d9158ac3608 100644 --- a/configs/mot/ocsort/README.md +++ b/configs/mot/ocsort/README.md @@ -26,7 +26,7 @@ - **mix_mot_ch**数据集,是MOT17、CrowdHuman组成的联合数据集,**mix_det**是MOT17、CrowdHuman、Cityscapes、ETHZ组成的联合数据集,数据集整理的格式和目录可以参考[此链接](https://github.com/ifzhang/ByteTrack#data-preparation),最终放置于`dataset/mot/`目录下。为了验证精度可以都用**MOT17-half val**数据集去评估。 - OC_SORT的训练是单独的检测器训练MOT数据集,推理是组装跟踪器去评估MOT指标,单独的检测模型也可以评估检测指标。 - OC_SORT的导出部署,是单独导出检测模型,再组装跟踪器运行的,参照[PP-Tracking](../../../deploy/pptracking/python)。 - - OC_SORT是PP-Human和PP-Vehicle等Pipeline分析项目跟踪方向的主要方案,具体使用参照[Pipeline](../../../deploy/pipeline)和[MOT](../../../deploy/pipeline/docs/tutorials/mot.md)。 + - OC_SORT是PP-Human和PP-Vehicle等Pipeline分析项目跟踪方向的主要方案,具体使用参照[Pipeline](../../../deploy/pipeline)和[MOT](../../../deploy/pipeline/docs/tutorials/pphuman_mot.md)。 ## 快速开始 diff --git a/configs/pphuman/README.md b/configs/pphuman/README.md index b7745eaf5b443a8de59c17770bd64ce4c3687740..e583668ab7b722a4def52e579eaa121a980ea354 100644 --- a/configs/pphuman/README.md +++ b/configs/pphuman/README.md @@ -17,7 +17,7 @@ PaddleDetection团队提供了针对行人的基于PP-YOLOE的检测模型,用 # PP-YOLOE 香烟检测模型 -基于PP-YOLOE模型的香烟检测模型,是实现PP-Human中的基于检测的行为识别方案的一环,如何在PP-Human中使用该模型进行吸烟行为识别,可参考[PP-Human行为识别模块](../../deploy/pipeline/docs/tutorials/action.md)。该模型检测类别仅包含香烟一类。由于数据来源限制,目前暂无法直接公开训练数据。该模型使用了小目标数据集VisDrone上的权重(参照[visdrone](../visdrone))作为预训练模型,以提升检测效果。 +基于PP-YOLOE模型的香烟检测模型,是实现PP-Human中的基于检测的行为识别方案的一环,如何在PP-Human中使用该模型进行吸烟行为识别,可参考[PP-Human行为识别模块](../../deploy/pipeline/docs/tutorials/pphuman_action.md)。该模型检测类别仅包含香烟一类。由于数据来源限制,目前暂无法直接公开训练数据。该模型使用了小目标数据集VisDrone上的权重(参照[visdrone](../visdrone))作为预训练模型,以提升检测效果。 | 模型 | 数据集 | mAPval
0.5:0.95 | mAPval
0.5 | 下载 | 配置文件 | |:---------|:-------:|:------:|:------:| :----: | :------:| diff --git a/deploy/pipeline/README.md b/deploy/pipeline/README.md index 943e82e68c9483736f75216ce244fefb739188d6..29d86d125875b2afbe1b95aac004e90ee8803a56 100644 --- a/deploy/pipeline/README.md +++ b/deploy/pipeline/README.md @@ -70,19 +70,19 @@ PP-Human支持图片/单镜头视频/多镜头视频多种输入方式,功能 ## 📚 文档教程 -### [快速开始](docs/tutorials/QUICK_STARTED.md) +### [快速开始](docs/tutorials/PPHuman_QUICK_STARTED.md) ### 行人属性/特征识别 -* [快速开始](docs/tutorials/attribute.md) -* [二次开发教程](../../docs/advanced_tutorials/customization/attribute.md) +* [快速开始](docs/tutorials/pphuman_attribute.md) +* [二次开发教程](../../docs/advanced_tutorials/customization/pphuman_attribute.md) * 数据准备 * 模型优化 * 新增属性 ### 行为识别 -* [快速开始](docs/tutorials/action.md) +* [快速开始](docs/tutorials/pphuman_action.md) * 摔倒检测 * 打架识别 * [二次开发教程](../../docs/advanced_tutorials/customization/action_recognotion/README.md) @@ -93,17 +93,17 @@ PP-Human支持图片/单镜头视频/多镜头视频多种输入方式,功能 ### 跨镜跟踪ReID -* [快速开始](docs/tutorials/mtmct.md) -* [二次开发教程](../../docs/advanced_tutorials/customization/mtmct.md) +* [快速开始](docs/tutorials/pphuman_mtmct.md) +* [二次开发教程](../../docs/advanced_tutorials/customization/pphuman_mtmct.md) * 数据准备 * 模型优化 ### 行人跟踪、人流量计数与轨迹记录 -* [快速开始](docs/tutorials/mot.md) +* [快速开始](docs/tutorials/pphuman_mot.md) * 行人跟踪 * 人流量计数与轨迹记录 * 区域闯入判断和计数 -* [二次开发教程](../../docs/advanced_tutorials/customization/mot.md) +* [二次开发教程](../../docs/advanced_tutorials/customization/pphuman_mot.md) * 数据准备 * 模型优化 diff --git a/deploy/pipeline/README_en.md b/deploy/pipeline/README_en.md index e74abfacd0e4b206cef2de9c01135d2e1de3ea55..227d08ec7b1467d48c365629373b09c196c32528 100644 --- a/deploy/pipeline/README_en.md +++ b/deploy/pipeline/README_en.md @@ -74,19 +74,19 @@ Click to download the model, then unzip and save it in the `. /output_inference` ## 📚 Doc Tutorials -### [A Quick Start](docs/tutorials/QUICK_STARTED.md) +### [A Quick Start](docs/tutorials/PPHuman_QUICK_STARTED.md) ### Pedestrian attribute/feature recognition -* [A quick start](docs/tutorials/attribute.md) -* [Customized development tutorials](../../docs/advanced_tutorials/customization/attribute.md) +* [A quick start](docs/tutorials/pphuman_attribute.md) +* [Customized development tutorials](../../docs/advanced_tutorials/customization/pphuman_attribute.md) * Data Preparation * Model Optimization * New Attributes ### Behavior detection -* [A quick start](docs/tutorials/action.md) +* [A quick start](docs/tutorials/pphuman_action.md) * Falling detection * Fighting detection * [Customized development tutorials](../../docs/advanced_tutorials/customization/action_recognotion/README.md) @@ -97,17 +97,17 @@ Click to download the model, then unzip and save it in the `. 
/output_inference` ### ReID -* [A quick start](docs/tutorials/mtmct.md) -* [Customized development tutorials](../../docs/advanced_tutorials/customization/mtmct.md) +* [A quick start](docs/tutorials/pphuman_mtmct.md) +* [Customized development tutorials](../../docs/advanced_tutorials/customization/pphuman_mtmct.md) * Data Preparation * Model Optimization ### Pedestrian tracking, visitor traffic statistics, trace records -* [A quick start](docs/tutorials/mot.md) +* [A quick start](docs/tutorials/pphuman_mot.md) * Pedestrian tracking, * Visitor traffic statistics * Regional intrusion diagnosis and counting -* [Customized development tutorials](../../docs/advanced_tutorials/customization/mot.md) +* [Customized development tutorials](../../docs/advanced_tutorials/customization/pphuman_mot.md) * Data Preparation * Model Optimization diff --git a/deploy/pipeline/docs/tutorials/QUICK_STARTED.md b/deploy/pipeline/docs/tutorials/PPHuman_QUICK_STARTED.md similarity index 97% rename from deploy/pipeline/docs/tutorials/QUICK_STARTED.md rename to deploy/pipeline/docs/tutorials/PPHuman_QUICK_STARTED.md index 398c44e50b56b8e3180716b4187128122c39abc1..245db50b6a9d4d58066281fe7c70c69b92f89e80 100644 --- a/deploy/pipeline/docs/tutorials/QUICK_STARTED.md +++ b/deploy/pipeline/docs/tutorials/PPHuman_QUICK_STARTED.md @@ -159,24 +159,24 @@ PP-Human v2整体方案如下图所示: ### 行人检测 - 采用PP-YOLOE L 作为目标检测模型 -- 详细文档参考[PP-YOLOE](../../../../configs/ppyoloe/)和[检测跟踪文档](mot.md) +- 详细文档参考[PP-YOLOE](../../../../configs/ppyoloe/)和[检测跟踪文档](pphuman_mot.md) ### 行人跟踪 - 采用SDE方案完成行人跟踪 - 检测模型使用PP-YOLOE L(高精度)和S(轻量级) - 跟踪模块采用OC-SORT方案 -- 详细文档参考[OC-SORT](../../../../configs/mot/ocsort)和[检测跟踪文档](mot.md) +- 详细文档参考[OC-SORT](../../../../configs/mot/ocsort)和[检测跟踪文档](pphuman_mot.md) ### 跨镜行人跟踪 - 使用PP-YOLOE + OC-SORT得到单镜头多目标跟踪轨迹 - 使用ReID(StrongBaseline网络)对每一帧的检测结果提取特征 - 多镜头轨迹特征进行匹配,得到跨镜头跟踪结果 -- 详细文档参考[跨镜跟踪](mtmct.md) +- 详细文档参考[跨镜跟踪](pphuman_mtmct.md) ### 属性识别 - 使用PP-YOLOE + OC-SORT跟踪人体 - 使用StrongBaseline(多分类模型)完成识别属性,主要属性包括年龄、性别、帽子、眼睛、上衣下衣款式、背包等 -- 详细文档参考[属性识别](attribute.md) +- 详细文档参考[属性识别](pphuman_attribute.md) ### 行为识别: - 提供四种行为识别方案 @@ -184,4 +184,4 @@ PP-Human v2整体方案如下图所示: - 2. 基于图像分类的行为识别,例如打电话识别 - 3. 基于检测的行为识别,例如吸烟识别 - 4. 基于视频分类的行为识别,例如打架识别 -- 详细文档参考[行为识别](action.md) +- 详细文档参考[行为识别](pphuman_action.md) diff --git a/deploy/pipeline/docs/tutorials/PPVehicle_QUICK_STARTED.md b/deploy/pipeline/docs/tutorials/PPVehicle_QUICK_STARTED.md new file mode 100644 index 0000000000000000000000000000000000000000..2e4ff20bcb8b7cec16675bd415ebc9e2e3d83f76 --- /dev/null +++ b/deploy/pipeline/docs/tutorials/PPVehicle_QUICK_STARTED.md @@ -0,0 +1,131 @@ +# 快速开始 + +## 目录 + +- [环境准备](#环境准备) +- [模型下载](#模型下载) +- [配置文件说明](#配置文件说明) +- [预测部署](#预测部署) + - [参数说明](#参数说明) +- [方案介绍](#方案介绍) + - [车辆检测](#车辆检测) + - [车辆跟踪](#车辆跟踪) + - [车牌识别](#车牌识别) + - [属性识别](#属性识别) + + +## 环境准备 + +环境要求: PaddleDetection版本 >= release/2.4 或 develop版本 + +PaddlePaddle和PaddleDetection安装 + +``` +# PaddlePaddle CUDA10.1 +python -m pip install paddlepaddle-gpu==2.2.2.post101 -f https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html + +# PaddlePaddle CPU +python -m pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple + +# 克隆PaddleDetection仓库 +cd +git clone https://github.com/PaddlePaddle/PaddleDetection.git + +# 安装其他依赖 +cd PaddleDetection +pip install -r requirements.txt +``` + +1. 详细安装文档参考[文档](../../../../docs/tutorials/INSTALL_cn.md) +2. 
如果需要TensorRT推理加速(测速方式),请安装带TensorRT的Paddle版本。您可以从[Paddle安装包](https://paddleinference.paddlepaddle.org.cn/v2.2/user_guides/download_lib.html#python)下载安装,或者按照[指导文档](https://www.paddlepaddle.org.cn/inference/master/optimize/paddle_trt.html)使用docker或自编译方式准备Paddle环境。
+
+## 模型下载
+
+
+## 配置文件说明
+
+PP-Vehicle相关配置位于```deploy/pipeline/config/infer_cfg_ppvehicle.yml```中,用于存放各模型路径。完成不同功能需要设置不同的任务类型。
+
+功能及任务类型对应表单如下:
+
+| 输入类型 | 功能 | 任务类型 | 配置项 |
+|-------|-------|----------|-----|
+| 图片 | 属性识别 | 目标检测 属性识别 | DET ATTR |
+| 单镜头视频 | 属性识别 | 多目标跟踪 属性识别 | MOT ATTR |
+| 单镜头视频 | 车牌识别 | 多目标跟踪 车牌识别 | MOT VEHICLEPLATE |
+
+例如基于视频输入的属性识别,任务类型包含多目标跟踪和属性识别,配置结构示意如下(字段名参考上表配置项及PP-Human同类配置,具体键名和模型路径以```infer_cfg_ppvehicle.yml```实际内容为准):
+
+```
+MOT:
+  model_dir: output_inference/mot_ppyoloe_l_36e_ppvehicle/  # 跟踪模型路径(示例)
+  enable: True                                              # 开启多目标跟踪
+ATTR:
+  model_dir: output_inference/vehicle_attribute_model/      # 属性识别模型路径(示例)
+  enable: True                                              # 开启属性识别
+```
+
+**注意:**
+
+- 如果用户需要实现不同任务,可以将配置文件中对应功能的enable选项设置为True,其basemode类型会在代码中开启所依赖的基础能力模型,比如跟踪模型。
+- 如果用户仅需要修改模型文件路径,可以在命令行中加入 `--model_dir det=ppyoloe/`,也可以手动修改配置文件中的相应模型路径,详细说明参考下方参数说明文档。
+
+
+## 预测部署
+
+```
+# 车辆检测,指定配置文件路径和测试图片
+python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml --image_file=test_image.jpg --device=gpu [--run_mode trt_fp16]
+
+# 车辆跟踪,指定配置文件路径和测试视频,在配置文件```deploy/pipeline/config/infer_cfg_ppvehicle.yml```中的MOT部分enable设置为```True```
+python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16]
+
+# 车辆跟踪,指定配置文件路径、模型路径和测试视频,在配置文件```deploy/pipeline/config/infer_cfg_ppvehicle.yml```中的MOT部分enable设置为```True```
+# 命令行中指定的模型路径优先级高于配置文件
+python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml --video_file=test_video.mp4 --device=gpu --model_dir det=ppyoloe/ [--run_mode trt_fp16]
+
+# 车辆属性识别,指定配置文件路径和测试视频,在配置文件```deploy/pipeline/config/infer_cfg_ppvehicle.yml```中的ATTR部分enable设置为```True```
+python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16]
+
+```
+
+### 参数说明
+
+| 参数 | 是否必须 | 含义 |
+|-------|-------|----------|
+| --config | Yes | 配置文件路径 |
+| --model_dir | Option | 各任务模型路径,优先级高于配置文件,例如`--model_dir det=better_det/ attr=better_attr/` |
+| --image_file | Option | 需要预测的图片 |
+| --image_dir | Option | 要预测的图片文件夹路径 |
+| --video_file | Option | 需要预测的视频 |
+| --camera_id | Option | 用来预测的摄像头ID,默认为-1(表示不使用摄像头预测,可设置为:0 - (摄像头数目-1)),预测过程中在可视化界面按`q`退出,并将预测结果输出到:output/output.mp4 |
+| --device | Option | 运行时的设备,可选择`CPU/GPU/XPU`,默认为`CPU` |
+| --output_dir | Option | 可视化结果保存的根目录,默认为output/ |
+| --run_mode | Option | 使用GPU时默认为paddle,可选(paddle/trt_fp32/trt_fp16/trt_int8) |
+| --enable_mkldnn | Option | CPU预测中是否开启MKLDNN加速,默认为False |
+| --cpu_threads | Option | 设置cpu线程数,默认为1 |
+| --trt_calib_mode | Option | TensorRT是否使用校准功能,默认为False。使用TensorRT的int8功能时需设置为True,使用PaddleSlim量化后的模型时需要设置为False |
+| --do_entrance_counting | Option | 是否统计出入口流量,默认为False |
+| --draw_center_traj | Option | 是否绘制跟踪轨迹,默认为False |
+
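+上述预测部署命令未单独列出车牌识别,其启动方式与属性识别类似。下面给出一条示例命令(假设已在配置文件中将车牌识别(VEHICLEPLATE)对应的enable选项设置为True,具体配置项名称以```infer_cfg_ppvehicle.yml```实际内容为准):
+
+```
+# 车牌识别,指定配置文件路径和测试视频
+python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16]
+```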
+
+## 方案介绍
+
+PP-Vehicle v2整体方案如下图所示:
+
+<!-- 图:PP-Vehicle v2 整体方案示意图 -->
+
+### 车辆检测
+- 采用PP-YOLOE L 作为目标检测模型
+- 详细文档参考[PP-YOLOE](../../../../configs/ppyoloe/)和[检测跟踪文档](ppvehicle_mot.md)
+
+### 车辆跟踪
+- 采用SDE方案完成车辆跟踪
+- 检测模型使用PP-YOLOE L(高精度)和S(轻量级)
+- 跟踪模块采用OC-SORT方案
+- 详细文档参考[OC-SORT](../../../../configs/mot/ocsort)和[检测跟踪文档](ppvehicle_mot.md)
+
+### 属性识别
+- 使用PP-YOLOE + OC-SORT跟踪车辆
+- 详细文档参考[属性识别](ppvehicle_attribute.md)
+
+### 车牌识别
+- 使用PaddleOCR特色模型ch_PP-OCRv3_det+ch_PP-OCRv3_rec模型,识别车牌号码
+- 详细文档参考[车牌识别](ppvehicle_plate.md)
diff --git a/deploy/pipeline/docs/tutorials/action.md b/deploy/pipeline/docs/tutorials/pphuman_action.md
similarity index 98%
rename from deploy/pipeline/docs/tutorials/action.md
rename to deploy/pipeline/docs/tutorials/pphuman_action.md
index 4e0f94d3cdfde722b97723e545a9565f7946185d..53b9f5f388622b66480020baff52bdadc22a4149 100644
--- a/deploy/pipeline/docs/tutorials/action.md
+++ b/deploy/pipeline/docs/tutorials/pphuman_action.md
@@ -1,4 +1,4 @@
-[English](action_en.md) | 简体中文
+[English](pphuman_action_en.md) | 简体中文

 # PP-Human行为识别模块

@@ -68,7 +68,7 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pph
     --device=gpu \
     --model_dir kpt=./dark_hrnet_w32_256x192 action=./STGCN
 ```
-4. 启动命令中的完整参数说明,请参考[参数说明](./QUICK_STARTED.md)。
+4. 启动命令中的完整参数说明,请参考[参数说明](./PPHuman_QUICK_STARTED.md)。

 ### 方案说明

@@ -127,7 +127,7 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pph
     --video_file=test_video.mp4 \
     --device=gpu
 ```
-4. 启动命令中的完整参数说明,请参考[参数说明](./QUICK_STARTED.md)。
+4. 启动命令中的完整参数说明,请参考[参数说明](./PPHuman_QUICK_STARTED.md)。

 ### 方案说明
 1. 使用目标检测与多目标跟踪获取视频输入中的行人检测框及跟踪ID序号,模型方案为PP-YOLOE,详细文档参考[PP-YOLOE](../../../../configs/ppyoloe/README_cn.md),跟踪方案为OC-SORT,详细文档参考[OC-SORT](../../../../configs/mot/ocsort)。
@@ -183,7 +183,7 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pph
     --video_file=test_video.mp4 \
     --device=gpu
 ```
-4. 启动命令中的完整参数说明,请参考[参数说明](./QUICK_STARTED.md)。
+4. 启动命令中的完整参数说明,请参考[参数说明](./PPHuman_QUICK_STARTED.md)。

 ### 方案说明
 1. 使用目标检测与多目标跟踪获取视频输入中的行人检测框及跟踪ID序号,模型方案为PP-YOLOE,详细文档参考[PP-YOLOE](../../../../configs/ppyoloe/README_cn.md),跟踪方案为OC-SORT,详细文档参考[OC-SORT](../../../../configs/mot/ocsort)。
@@ -202,7 +202,7 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pph

-具体使用请参照[PP-Human检测跟踪模块](mot.md)的`5. 区域闯入判断和计数`。
+具体使用请参照[PP-Human检测跟踪模块](pphuman_mot.md)的`5. 区域闯入判断和计数`。

 ### 方案说明
 1. 使用多目标跟踪获取视频输入中的行人检测框及跟踪ID序号,模型方案为PP-YOLOE,详细文档参考[PP-YOLOE](../../../../configs/ppyoloe/README_cn.md),跟踪方案为OC-SORT,详细文档参考[OC-SORT](../../../../configs/mot/ocsort)。
@@ -255,7 +255,7 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pph
     --video_file=test_video.mp4 \
     --device=gpu
 ```
-4. 启动命令中的完整参数说明,请参考[参数说明](./QUICK_STARTED.md)。
+4. 启动命令中的完整参数说明,请参考[参数说明](./PPHuman_QUICK_STARTED.md)。

 ### 方案说明
diff --git a/deploy/pipeline/docs/tutorials/action_en.md b/deploy/pipeline/docs/tutorials/pphuman_action_en.md
similarity index 98%
rename from deploy/pipeline/docs/tutorials/action_en.md
rename to deploy/pipeline/docs/tutorials/pphuman_action_en.md
index 51ab1cf41a5b5c07dbcdaef253f3f789376cd52e..159e474897e023f90ae8087155030439ea36a442 100644
--- a/deploy/pipeline/docs/tutorials/action_en.md
+++ b/deploy/pipeline/docs/tutorials/pphuman_action_en.md
@@ -1,4 +1,4 @@
-English | [简体中文](action.md)
+English | [简体中文](pphuman_action.md)

 # Action Recognition Module of PP-Human

@@ -81,7 +81,7 @@ SKELETON_ACTION: # Config for skeleton-based action recognition model
     --device=gpu \
     --model_dir kpt=./dark_hrnet_w32_256x192 action=./STGCN
 ```
-4. 
For detailed parameter description, please refer to [Parameter Description](./QUICK_STARTED.md) +4. For detailed parameter description, please refer to [Parameter Description](./PPHuman_QUICK_STARTED.md) ### Introduction to the Solution @@ -133,7 +133,7 @@ ID_BASED_CLSACTION: # config for classfication-based action recognition model --video_file=test_video.mp4 \ --device=gpu ``` -4. For detailed parameter description, please refer to [Parameter Description](./QUICK_STARTED.md) +4. For detailed parameter description, please refer to [Parameter Description](./PPHuman_QUICK_STARTED.md) ### Introduction to the Solution 1. Get the pedestrian detection box and the tracking ID number of the video input through object detection and multi-object tracking. The adopted model is PP-YOLOE, and for details, please refer to [PP-YOLOE](../../../configs/ppyoloe). @@ -180,7 +180,7 @@ ID_BASED_DETACTION: # Config for detection-based action recognition model --video_file=test_video.mp4 \ --device=gpu ``` -4. For detailed parameter description, please refer to [Parameter Description](./QUICK_STARTED.md) +4. For detailed parameter description, please refer to [Parameter Description](./PPHuman_QUICK_STARTED.md) ### Introduction to the Solution 1. Get the pedestrian detection box and the tracking ID number of the video input through object detection and multi-object tracking. The adopted model is PP-YOLOE, and for details, please refer to [PP-YOLOE](../../../../configs/ppyoloe). @@ -238,7 +238,7 @@ VIDEO_ACTION: # Config for detection-based action recognition model --video_file=test_video.mp4 \ --device=gpu ``` -5. For detailed parameter description, please refer to [Parameter Description](./QUICK_STARTED.md). +5. For detailed parameter description, please refer to [Parameter Description](./PPHuman_QUICK_STARTED.md). The result is shown as follow: diff --git a/deploy/pipeline/docs/tutorials/attribute.md b/deploy/pipeline/docs/tutorials/pphuman_attribute.md similarity index 96% rename from deploy/pipeline/docs/tutorials/attribute.md rename to deploy/pipeline/docs/tutorials/pphuman_attribute.md index 430bd96794226a6a42c60a2f4337e6b40f1d3692..9055225fedf2b2dda13833289dfeb2ee54773cc2 100644 --- a/deploy/pipeline/docs/tutorials/attribute.md +++ b/deploy/pipeline/docs/tutorials/pphuman_attribute.md @@ -1,4 +1,4 @@ -[English](attribute_en.md) | 简体中文 +[English](pphuman_attribute_en.md) | 简体中文 # PP-Human属性识别模块 @@ -15,7 +15,7 @@ 1. 检测/跟踪模型精度为[MOT17](https://motchallenge.net/),[CrowdHuman](http://www.crowdhuman.org/),[HIEVE](http://humaninevents.org/)和部分业务数据融合训练测试得到。 2. 行人属性分析精度为[PA100k](https://github.com/xh-liu/HydraPlus-Net#pa-100k-dataset),[RAPv2](http://www.rapdataset.com/rapv2.html),[PETA](http://mmlab.ie.cuhk.edu.hk/projects/PETA.html)和部分业务数据融合训练测试得到 3. 预测速度为V100 机器上使用TensorRT FP16时的速度, 该处测速速度为模型预测速度 -4. 属性模型应用依赖跟踪模型结果,请在[跟踪模型页面](./mot.md)下载跟踪模型,依自身需求选择高精或轻量级下载。 +4. 属性模型应用依赖跟踪模型结果,请在[跟踪模型页面](./pphuman_mot.md)下载跟踪模型,依自身需求选择高精或轻量级下载。 5. 模型下载后解压放置在PaddleDetection/output_inference/目录下。 ## 使用方法 @@ -31,7 +31,7 @@ ATTR: #模 enable: False #功能是否开启 ``` -2. 图片输入时,启动命令如下(更多命令参数说明,请参考[快速开始-参数说明](./QUICK_STARTED.md#41-参数说明))。 +2. 
图片输入时,启动命令如下(更多命令参数说明,请参考[快速开始-参数说明](./PPHuman_QUICK_STARTED.md#41-参数说明))。 ```python #单张图片 python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ diff --git a/deploy/pipeline/docs/tutorials/attribute_en.md b/deploy/pipeline/docs/tutorials/pphuman_attribute_en.md similarity index 95% rename from deploy/pipeline/docs/tutorials/attribute_en.md rename to deploy/pipeline/docs/tutorials/pphuman_attribute_en.md index 3f264a7161c4d05208ef7805d91174adc9cd35ea..2e055971a42522983dcf6fdb94d88d2d3915cd98 100644 --- a/deploy/pipeline/docs/tutorials/attribute_en.md +++ b/deploy/pipeline/docs/tutorials/pphuman_attribute_en.md @@ -1,4 +1,4 @@ -English | [简体中文](attribute.md) +English | [简体中文](pphuman_attribute.md) # Attribute Recognition Modules of PP-Human @@ -12,7 +12,7 @@ Pedestrian attribute recognition has been widely used in the intelligent communi 1. The precision of pedestiran attribute analysis is obtained by training and testing on the dataset consist of [PA100k](https://github.com/xh-liu/HydraPlus-Net#pa-100k-dataset),[RAPv2](http://www.rapdataset.com/rapv2.html),[PETA](http://mmlab.ie.cuhk.edu.hk/projects/PETA.html) and some business data. 2. The inference speed is V100, the speed of using TensorRT FP16. -3. This model of Attribute is based on the result of tracking, please download tracking model in the [Page of Mot](./mot_en.md). The High precision and Faster model are both available. +3. This model of Attribute is based on the result of tracking, please download tracking model in the [Page of Mot](./pphuman_mot_en.md). The High precision and Faster model are both available. 4. You should place the model unziped in the directory of `PaddleDetection/output_inference/`. ## Instruction @@ -28,7 +28,7 @@ ATTR: #modul enable: False #whether to enable this model ``` -2. When inputting the image, run the command as follows (please refer to [QUICK_STARTED-Parameters](./QUICK_STARTED.md#41-参数说明) for more details): +2. 
When inputting the image, run the command as follows (please refer to [QUICK_STARTED-Parameters](./PPHuman_QUICK_STARTED.md#41-参数说明) for more details): ```python #single image python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ diff --git a/deploy/pipeline/docs/tutorials/mot.md b/deploy/pipeline/docs/tutorials/pphuman_mot.md similarity index 99% rename from deploy/pipeline/docs/tutorials/mot.md rename to deploy/pipeline/docs/tutorials/pphuman_mot.md index a29898f4327cbcca7834a097c4654fa000ac279e..cc26da04549f1f74041d939a52b8bc36e4e399b2 100644 --- a/deploy/pipeline/docs/tutorials/mot.md +++ b/deploy/pipeline/docs/tutorials/pphuman_mot.md @@ -1,4 +1,4 @@ -[English](mot_en.md) | 简体中文 +[English](pphuman_mot_en.md) | 简体中文 # PP-Human检测跟踪模块 diff --git a/deploy/pipeline/docs/tutorials/mot_en.md b/deploy/pipeline/docs/tutorials/pphuman_mot_en.md similarity index 99% rename from deploy/pipeline/docs/tutorials/mot_en.md rename to deploy/pipeline/docs/tutorials/pphuman_mot_en.md index fe447579c4fc83cefc220016abea69c23362d81a..9944a71833d70dd9d810a37dd5e6e5a1130c3d6b 100644 --- a/deploy/pipeline/docs/tutorials/mot_en.md +++ b/deploy/pipeline/docs/tutorials/pphuman_mot_en.md @@ -1,4 +1,4 @@ -English | [简体中文](mot.md) +English | [简体中文](pphuman_mot.md) # Detection and Tracking Module of PP-Human diff --git a/deploy/pipeline/docs/tutorials/mtmct.md b/deploy/pipeline/docs/tutorials/pphuman_mtmct.md similarity index 96% rename from deploy/pipeline/docs/tutorials/mtmct.md rename to deploy/pipeline/docs/tutorials/pphuman_mtmct.md index d0835719b92aefded004ff95b594d18e77b4a9a4..894c18ee0d5541f082f1f70b0250549519768020 100644 --- a/deploy/pipeline/docs/tutorials/mtmct.md +++ b/deploy/pipeline/docs/tutorials/pphuman_mtmct.md @@ -1,4 +1,4 @@ -[English](mtmct_en.md) | 简体中文 +[English](pphuman_mtmct_en.md) | 简体中文 # PP-Human跨镜头跟踪模块 @@ -7,7 +7,7 @@ PP-Human跨镜头跟踪模块主要目的在于提供一套简洁、高效的跨 ## 使用方法 -1. 下载模型 [行人跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)和[REID模型](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) 并解压到```./output_inference```路径下,修改配置文件中模型路径。也可简单起见直接用默认配置,自动下载模型。 MOT模型请参考[mot说明](./mot.md)文件下载。 +1. 下载模型 [行人跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)和[REID模型](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) 并解压到```./output_inference```路径下,修改配置文件中模型路径。也可简单起见直接用默认配置,自动下载模型。 MOT模型请参考[mot说明](./pphuman_mot.md)文件下载。 2. 跨镜头跟踪模式下,要求输入的多个视频放在同一目录下,同时开启infer_cfg_pphuman.yml 中的REID选择中的enable=True, 命令如下: ```python diff --git a/deploy/pipeline/docs/tutorials/mtmct_en.md b/deploy/pipeline/docs/tutorials/pphuman_mtmct_en.md similarity index 96% rename from deploy/pipeline/docs/tutorials/mtmct_en.md rename to deploy/pipeline/docs/tutorials/pphuman_mtmct_en.md index 756edec367b60c872bf8881e7731fff5f9c77d1f..0321d2a52d511a00b0c095a3878c3a959e646292 100644 --- a/deploy/pipeline/docs/tutorials/mtmct_en.md +++ b/deploy/pipeline/docs/tutorials/pphuman_mtmct_en.md @@ -1,4 +1,4 @@ -English | [简体中文](mtmct.md) +English | [简体中文](pphuman_mtmct.md) # Multi-Target Multi-Camera Tracking Module of PP-Human @@ -7,7 +7,7 @@ The MTMCT module of PP-Human aims to provide a multi-target multi-camera pipleli ## How to Use -1. Download [REID model](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) and unzip it to ```./output_inference```. For the MOT model, please refer to [mot description](./mot.md). +1. 
Download [REID model](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) and unzip it to ```./output_inference```. For the MOT model, please refer to [mot description](./pphuman_mot.md). 2. In the MTMCT mode, input videos are required to be put in the same directory. set the REID "enable: True" in the infer_cfg_pphuman.yml. The command line is: ```python diff --git a/deploy/pipeline/docs/tutorials/ppvehicle_attribute.md b/deploy/pipeline/docs/tutorials/ppvehicle_attribute.md new file mode 100644 index 0000000000000000000000000000000000000000..3f3740c587076571f393e5fc7ad47173504dfa9e --- /dev/null +++ b/deploy/pipeline/docs/tutorials/ppvehicle_attribute.md @@ -0,0 +1,20 @@ + +# PP-Vehicle属性识别模块 + +【应用介绍】 + +【模型下载】 + +## 使用方法 + +【配置项说明】 + +【使用命令】 + +【效果展示】 + +## 方案说明 + +【实现方案及特色】 + +## 参考文献 diff --git a/deploy/pipeline/docs/tutorials/ppvehicle_mot.md b/deploy/pipeline/docs/tutorials/ppvehicle_mot.md new file mode 100644 index 0000000000000000000000000000000000000000..7022481526137770c14ace7440bbea4e99511edb --- /dev/null +++ b/deploy/pipeline/docs/tutorials/ppvehicle_mot.md @@ -0,0 +1,20 @@ + +# PP-Vehicle车辆跟踪模块 + +【应用介绍】 + +【模型下载】 + +## 使用方法 + +【配置项说明】 + +【使用命令】 + +【效果展示】 + +## 方案说明 + +【实现方案及特色】 + +## 参考文献 diff --git a/deploy/pipeline/docs/tutorials/ppvehicle_plate.md b/deploy/pipeline/docs/tutorials/ppvehicle_plate.md new file mode 100644 index 0000000000000000000000000000000000000000..9f3ea6fcbc29a90fa83259aab61acaddc79f703f --- /dev/null +++ b/deploy/pipeline/docs/tutorials/ppvehicle_plate.md @@ -0,0 +1,20 @@ + +# PP-Vehicle车牌识别模块 + +【应用介绍】 + +【模型下载】 + +## 使用方法 + +【配置项说明】 + +【使用命令】 + +【效果展示】 + +## 方案说明 + +【实现方案及特色】 + +## 参考文献 diff --git a/docs/advanced_tutorials/customization/action_recognotion/README.md b/docs/advanced_tutorials/customization/action_recognotion/README.md index 3bc0d8bfe6a0e82c2ffbbc0c3687647aa195e4dc..d9adf2184592e37c343d743e3ce93c8a4dccb493 100644 --- a/docs/advanced_tutorials/customization/action_recognotion/README.md +++ b/docs/advanced_tutorials/customization/action_recognotion/README.md @@ -50,5 +50,5 @@ 1. [基于人体id检测的行为识别](./idbased_det.md) 2. [基于人体id分类的行为识别](./idbased_clas.md) 3. [基于人体骨骼点的行为识别](./skeletonbased_rec.md) -4. [基于人体id跟踪的行为识别](../mot.md) +4. [基于人体id跟踪的行为识别](../pphuman_mot.md) 5. [基于视频分类的行为识别](./videobased_rec.md) diff --git a/docs/advanced_tutorials/customization/action_recognotion/README_en.md b/docs/advanced_tutorials/customization/action_recognotion/README_en.md index da62d9c1f315b45f6c4e24385e612e7455802c79..d04d426b7076abdd38a7317117f5daab6eeff0ad 100644 --- a/docs/advanced_tutorials/customization/action_recognotion/README_en.md +++ b/docs/advanced_tutorials/customization/action_recognotion/README_en.md @@ -51,5 +51,5 @@ The following are detailed description for the five major categories of solution 1. [action recognition based on detection with human id.](./idbased_det_en.md) 2. [action recognition based on classification with human id.](./idbased_clas_en.md) 3. [action recognition based on skelenton.](./skeletonbased_rec_en.md) -4. [action recognition based on tracking with human id](../mot_en.md) +4. [action recognition based on tracking with human id](../pphuman_mot_en.md) 5. 
[action recognition based on video classification](./videobased_rec_en.md) diff --git a/docs/advanced_tutorials/customization/action_recognotion/idbased_clas.md b/docs/advanced_tutorials/customization/action_recognotion/idbased_clas.md index 6eac407f6cd58eb9d06108403a1b1779b3c190a4..87c583b3b18d99e692f00c330ed37ccf1a13aced 100644 --- a/docs/advanced_tutorials/customization/action_recognotion/idbased_clas.md +++ b/docs/advanced_tutorials/customization/action_recognotion/idbased_clas.md @@ -53,7 +53,7 @@ data/ ## 模型优化 ### 检测-跟踪模型优化 -基于分类的行为识别模型效果依赖于前序的检测和跟踪效果,如果实际场景中不能准确检测到行人位置,或是难以正确在不同帧之间正确分配人物ID,都会使行为识别部分表现受限。如果在实际使用中遇到了上述问题,请参考[目标检测任务二次开发](../detection.md)以及[多目标跟踪任务二次开发](../mot.md)对检测/跟踪模型进行优化。 +基于分类的行为识别模型效果依赖于前序的检测和跟踪效果,如果实际场景中不能准确检测到行人位置,或是难以正确在不同帧之间正确分配人物ID,都会使行为识别部分表现受限。如果在实际使用中遇到了上述问题,请参考[目标检测任务二次开发](../detection.md)以及[多目标跟踪任务二次开发](../pphuman_mot.md)对检测/跟踪模型进行优化。 ### 半身图预测 diff --git a/docs/advanced_tutorials/customization/action_recognotion/idbased_clas_en.md b/docs/advanced_tutorials/customization/action_recognotion/idbased_clas_en.md index 950f17b0acce2fb1a51c91eaf52877089a18ff1f..a38ba06e2b2d8144bf9021d293b0edbaab5997e1 100644 --- a/docs/advanced_tutorials/customization/action_recognotion/idbased_clas_en.md +++ b/docs/advanced_tutorials/customization/action_recognotion/idbased_clas_en.md @@ -54,7 +54,7 @@ data/ ## Model Optimization ### Detection-Tracking Model Optimization -The performance of action recognition based on classification with human id depends on the pre-order detection and tracking models. If the pedestrian location cannot be accurately detected in the actual scene, or it is difficult to correctly assign the person ID between different frames, the performance of the action recognition part will be limited. If you encounter the above problems in actual use, please refer to [Secondary Development of Detection Task](../detection_en.md) and [Secondary Development of Multi-target Tracking Task](../mot_en.md) for detection/track model optimization. +The performance of action recognition based on classification with human id depends on the pre-order detection and tracking models. If the pedestrian location cannot be accurately detected in the actual scene, or it is difficult to correctly assign the person ID between different frames, the performance of the action recognition part will be limited. If you encounter the above problems in actual use, please refer to [Secondary Development of Detection Task](../detection_en.md) and [Secondary Development of Multi-target Tracking Task](../pphuman_mot_en.md) for detection/track model optimization. 
### Half-Body Prediction diff --git a/docs/advanced_tutorials/customization/action_recognotion/idbased_det.md b/docs/advanced_tutorials/customization/action_recognotion/idbased_det.md index 88111242bcd652b89d26d430824d6e7cf565ba2d..c46bfc695690073a46a6ae47c49456b921ea9efe 100644 --- a/docs/advanced_tutorials/customization/action_recognotion/idbased_det.md +++ b/docs/advanced_tutorials/customization/action_recognotion/idbased_det.md @@ -15,7 +15,7 @@ ## 模型优化 ### 检测-跟踪模型优化 -基于检测的行为识别模型效果依赖于前序的检测和跟踪效果,如果实际场景中不能准确检测到行人位置,或是难以正确在不同帧之间正确分配人物ID,都会使行为识别部分表现受限。如果在实际使用中遇到了上述问题,请参考[目标检测任务二次开发](../detection.md)以及[多目标跟踪任务二次开发](../mot.md)对检测/跟踪模型进行优化。 +基于检测的行为识别模型效果依赖于前序的检测和跟踪效果,如果实际场景中不能准确检测到行人位置,或是难以正确在不同帧之间正确分配人物ID,都会使行为识别部分表现受限。如果在实际使用中遇到了上述问题,请参考[目标检测任务二次开发](../detection.md)以及[多目标跟踪任务二次开发](../pphuman_mot.md)对检测/跟踪模型进行优化。 ### 更大的分辨率 diff --git a/docs/advanced_tutorials/customization/action_recognotion/idbased_det_en.md b/docs/advanced_tutorials/customization/action_recognotion/idbased_det_en.md index f3e48ac35a3bfbef2e19e8133c9c22e9d67f97bd..0c3cb1e9aeb9798fc3910ae7899fd8fd5f252d4d 100644 --- a/docs/advanced_tutorials/customization/action_recognotion/idbased_det_en.md +++ b/docs/advanced_tutorials/customization/action_recognotion/idbased_det_en.md @@ -14,7 +14,7 @@ The model of action recognition based on detection with human id directly recogn ## Model Optimization ### Detection-Tracking Model Optimization -The performance of action recognition based on detection with human id depends on the pre-order detection and tracking models. If the pedestrian location cannot be accurately detected in the actual scene, or it is difficult to correctly assign the person ID between different frames, the performance of the action recognition part will be limited. If you encounter the above problems in actual use, please refer to [Secondary Development of Detection Task](../detection_en.md) and [Secondary Development of Multi-target Tracking Task](../mot_en.md) for detection/track model optimization. +The performance of action recognition based on detection with human id depends on the pre-order detection and tracking models. If the pedestrian location cannot be accurately detected in the actual scene, or it is difficult to correctly assign the person ID between different frames, the performance of the action recognition part will be limited. If you encounter the above problems in actual use, please refer to [Secondary Development of Detection Task](../detection_en.md) and [Secondary Development of Multi-target Tracking Task](../pphuman_mot_en.md) for detection/track model optimization. 
### Larger resolution diff --git a/docs/advanced_tutorials/customization/action_recognotion/skeletonbased_rec.md b/docs/advanced_tutorials/customization/action_recognotion/skeletonbased_rec.md index 67ad877e12e33e68ea32609f8b1a805c9023688b..1aeddf32694e9d5ac63f7623071fa50af3c71aec 100644 --- a/docs/advanced_tutorials/customization/action_recognotion/skeletonbased_rec.md +++ b/docs/advanced_tutorials/customization/action_recognotion/skeletonbased_rec.md @@ -83,7 +83,7 @@ python prepare_dataset.py ## 模型优化 ### 检测-跟踪模型优化 -基于骨骼点的行为识别模型效果依赖于前序的检测和跟踪效果,如果实际场景中不能准确检测到行人位置,或是难以正确在不同帧之间正确分配人物ID,都会使行为识别部分表现受限。如果在实际使用中遇到了上述问题,请参考[目标检测任务二次开发](../detection.md)以及[多目标跟踪任务二次开发](../mot.md)对检测/跟踪模型进行优化。 +基于骨骼点的行为识别模型效果依赖于前序的检测和跟踪效果,如果实际场景中不能准确检测到行人位置,或是难以正确在不同帧之间正确分配人物ID,都会使行为识别部分表现受限。如果在实际使用中遇到了上述问题,请参考[目标检测任务二次开发](../detection.md)以及[多目标跟踪任务二次开发](../pphuman_mot.md)对检测/跟踪模型进行优化。 ### 关键点模型优化 骨骼点作为该方案的核心特征,对行人的骨骼点定位效果也决定了行为识别的整体效果。若发现在实际场景中对关键点坐标的识别结果有明显错误,从关键点组成的骨架图像看,已经难以辨别具体动作,可以参考[关键点检测任务二次开发](../keypoint_detection.md)对关键点模型进行优化。 diff --git a/docs/advanced_tutorials/customization/action_recognotion/skeletonbased_rec_en.md b/docs/advanced_tutorials/customization/action_recognotion/skeletonbased_rec_en.md index 9c9d2e3a75eb4476344588eeca95aeb2f9086346..86ebc45321893c459b68b1c052e9f83ffc27b5d6 100644 --- a/docs/advanced_tutorials/customization/action_recognotion/skeletonbased_rec_en.md +++ b/docs/advanced_tutorials/customization/action_recognotion/skeletonbased_rec_en.md @@ -77,7 +77,7 @@ Now, we have available training data (`.npy`) and corresponding annotation files ## Model Optimization ### detection-tracking model optimization -The performance of action recognition based on skelenton depends on the pre-order detection and tracking models. If the pedestrian location cannot be accurately detected in the actual scene, or it is difficult to correctly assign the person ID between different frames, the performance of the action recognition part will be limited. If you encounter the above problems in actual use, please refer to [Secondary Development of Detection Task](../detection_en.md) and [Secondary Development of Multi-target Tracking Task](../mot_en.md) for detection/track model optimization. +The performance of action recognition based on skelenton depends on the pre-order detection and tracking models. If the pedestrian location cannot be accurately detected in the actual scene, or it is difficult to correctly assign the person ID between different frames, the performance of the action recognition part will be limited. If you encounter the above problems in actual use, please refer to [Secondary Development of Detection Task](../detection_en.md) and [Secondary Development of Multi-target Tracking Task](../pphuman_mot_en.md) for detection/track model optimization. ### keypoint model optimization As the core feature of the scheme, the skeleton point positioning performance also determines the overall effect of action recognition. If there are obvious errors in the recognition results of the keypoint coordinates of in the actual scene, it is difficult to distinguish the specific actions from the skeleton image composed of the keypoint. 
diff --git a/docs/advanced_tutorials/customization/keypoint_detection.md b/docs/advanced_tutorials/customization/keypoint_detection.md index 26db128c73e323ee2d6bb66c45e5cbbdcf355f0e..12285d98841083b8d22c0c5d08ae99d20125a10e 100644 --- a/docs/advanced_tutorials/customization/keypoint_detection.md +++ b/docs/advanced_tutorials/customization/keypoint_detection.md @@ -21,7 +21,7 @@ ### 检测-跟踪模型优化 在PaddleDetection中,关键点检测能力支持Top-Down、Bottom-Up两套方案,Top-Down先检测主体,再检测局部关键点,优点是精度较高,缺点是速度会随着检测对象的个数增加,Bottom-Up先检测关键点再组合到对应的部位上,优点是速度快,与检测对象个数无关,缺点是精度较低。关于两种方案的详情及对应模型,可参考[关键点检测系列模型](../../../configs/keypoint/README.md) -当使用Top-Down方案时,模型效果依赖于前序的检测和跟踪效果,如果实际场景中不能准确检测到行人位置,会使关键点检测部分表现受限。如果在实际使用中遇到了上述问题,请参考[目标检测任务二次开发](./detection.md)以及[多目标跟踪任务二次开发](./mot.md)对检测/跟踪模型进行优化。 +当使用Top-Down方案时,模型效果依赖于前序的检测和跟踪效果,如果实际场景中不能准确检测到行人位置,会使关键点检测部分表现受限。如果在实际使用中遇到了上述问题,请参考[目标检测任务二次开发](./detection.md)以及[多目标跟踪任务二次开发](./pphuman_mot.md)对检测/跟踪模型进行优化。 ### 使用符合场景的数据迭代 目前发布的关键点检测算法模型主要在`COCO`/ `AI Challenger`等开源数据集上迭代,这部分数据集中可能缺少与实际任务较为相似的监控场景(视角、光照等因素)、体育场景(存在较多非常规的姿态)。使用更符合实际任务场景的数据进行训练,有助于提升模型效果。 diff --git a/docs/advanced_tutorials/customization/attribute.md b/docs/advanced_tutorials/customization/pphuman_attribute.md similarity index 100% rename from docs/advanced_tutorials/customization/attribute.md rename to docs/advanced_tutorials/customization/pphuman_attribute.md diff --git a/docs/advanced_tutorials/customization/mot.md b/docs/advanced_tutorials/customization/pphuman_mot.md similarity index 100% rename from docs/advanced_tutorials/customization/mot.md rename to docs/advanced_tutorials/customization/pphuman_mot.md diff --git a/docs/advanced_tutorials/customization/mtmct.md b/docs/advanced_tutorials/customization/pphuman_mtmct.md similarity index 100% rename from docs/advanced_tutorials/customization/mtmct.md rename to docs/advanced_tutorials/customization/pphuman_mtmct.md
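Since this patch both renames the tutorial and customization docs and rewrites the links that point at them, one quick follow-up is to confirm that no Markdown file still references a pre-rename path. A minimal sketch of such a check, assuming a git checkout with this patch applied; the path patterns below are illustrative and may not cover every renamed file:

```bash
# List Markdown files that still reference the old (pre-rename) doc paths;
# any hit is a link this patch may have missed.
git grep -nE 'tutorials/(QUICK_STARTED|action|attribute|mot|mtmct)(_en)?\.md' -- '*.md'
git grep -nE 'customization/(attribute|mot|mtmct)(_en)?\.md' -- '*.md'
```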