diff --git a/README_cn.md b/README_cn.md index b1e41647b0efb84a1d36ff361547f6580f370202..dce469e18a5cf96d2b1e8b04b9ad3d0c85be9387 100644 --- a/README_cn.md +++ b/README_cn.md @@ -27,7 +27,7 @@ - 发布高精度云边一体SOTA目标检测模型[PP-YOLOE](configs/ppyoloe),提供s/m/l/x版本,l版本COCO test2017数据集精度51.6%,V100预测速度78.1 FPS,支持混合精度训练,训练较PP-YOLOv2加速33%,全系列多尺度模型,满足不同硬件算力需求,可适配服务器、边缘端GPU及其他服务器端AI加速卡。 - 发布边缘端和CPU端超轻量SOTA目标检测模型[PP-PicoDet增强版](configs/picodet),精度提升2%左右,CPU预测速度提升63%,新增参数量0.7M的PicoDet-XS模型,提供模型稀疏化和量化功能,便于模型加速,各类硬件无需单独开发后处理模块,降低部署门槛。 - - 发布实时行人分析工具[PP-Human](deploy/pphuman),支持行人跟踪、人流量统计、人体属性识别与摔倒检测四大能力,基于真实场景数据特殊优化,精准识别各类摔倒姿势,适应不同环境背景、光线及摄像角度。 + - 发布实时行人分析工具[PP-Human](deploy/pipeline),支持行人跟踪、人流量统计、人体属性识别与摔倒检测四大能力,基于真实场景数据特殊优化,精准识别各类摔倒姿势,适应不同环境背景、光线及摄像角度。 - 新增[YOLOX](configs/yolox)目标检测模型,支持nano/tiny/s/m/l/x版本,x版本COCO val2017数据集精度51.8%。 - 2021.11.03: PaddleDetection发布[release/2.3版本](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.3) @@ -367,7 +367,7 @@ **点击“ ✅ ”即可下载对应模型** -详细信息参考[文档](deploy/pphuman) +详细信息参考[文档](deploy/pipeline) diff --git a/README_en.md b/README_en.md index 7ec9de272d68690b67d3010fdf3c1ca5938e8308..820ed56b97a55a7b749ecd0bebf24b5cf7d4f9e5 100644 --- a/README_en.md +++ b/README_en.md @@ -22,7 +22,7 @@ English | [简体中文](README_cn.md) - Release GPU SOTA object detection series models [PP-YOLOE](configs/ppyoloe), supporting s/m/l/x versions, with PP-YOLOE-l achieving 51.6% mAP on the COCO test dataset and 78.1 FPS on an Nvidia V100; AMP training is supported and training is 33% faster than PP-YOLOv2. - Release enhanced models of [PP-PicoDet](configs/picodet), including the PP-PicoDet-XS model with 0.7M parameters; mAP improved by ~2% on COCO, CPU inference speed accelerated by 63%, and post-processing is integrated into the network to streamline the deployment pipeline. - - Release real-time human analysis tool [PP-Human](deploy/pphuman), which is based on data from real-life situations, supporting pedestrian detection, attribute recognition, human tracking, multi-camera tracking, human statistics and action recognition. + - Release real-time human analysis tool [PP-Human](deploy/pipeline), which is based on data from real-life situations, supporting pedestrian detection, attribute recognition, human tracking, multi-camera tracking, human statistics and action recognition. - Release [YOLOX](configs/yolox), supporting nano/tiny/s/m/l/x versions, with YOLOX-x achieving 51.8% mAP on the COCO val dataset. - 2021.11.03: Release [release/2.3](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.3) version. Release mobile object detection model ⚡[PP-PicoDet](configs/picodet), mobile keypoint detection model ⚡[PP-TinyPose](configs/keypoint/tiny_pose), and real-time tracking system [PP-Tracking](deploy/pptracking). Release object detection models, including [Swin-Transformer](configs/faster_rcnn), [TOOD](configs/tood), [GFL](configs/gfl), release [Sniper](configs/sniper) tiny object detection models and the optimized [PP-YOLO-EB](configs/ppyolo) model for EdgeBoard. Release mobile keypoint detection model [Lite HRNet](configs/keypoint).
@@ -335,7 +335,7 @@ The relationship between COCO mAP and FPS on Qualcomm Snapdragon 865 of represen - [Pedestrian detection](configs/pedestrian/README.md) - [Vehicle detection](configs/vehicle/README.md) - Scenario Solution - - [Real-Time Human Analysis Tool PP-Human](deploy/pphuman) + - [Real-Time Human Analysis Tool PP-Human](deploy/pipeline) - Competition Solution - [Objects365 2019 Challenge champion model](static/docs/featured_model/champion_model/CACascadeRCNN_en.md) - [Best single model of Open Images 2019-Object Detection](static/docs/featured_model/champion_model/OIDV5_BASELINE_MODEL_en.md) diff --git a/configs/keypoint/README.md b/configs/keypoint/README.md index 03fa3b9790ad1655bee4257bb23e89ca26908387..9e4bac733f3ecbc65654b1727cbe89ef541f6d7d 100644 --- a/configs/keypoint/README.md +++ b/configs/keypoint/README.md @@ -82,7 +82,7 @@ MPII数据集 场景模型 | 模型 | 方案 | 输入尺寸 | 精度 | 预测速度 |模型权重 | 部署模型 | 说明| | :---- | ---|----- | :--------: | :--------: | :------------: |:------------: |:-------------------: | -| HRNet-w32 + DarkPose | Top-Down|256x192 | AP: 87.1 (业务数据集)| 单人2.9ms |[下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.pdparams) |[下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) | 针对摔倒场景特别优化,该模型应用于[PP-Human](../../deploy/pphuman/README.md) | +| HRNet-w32 + DarkPose | Top-Down|256x192 | AP: 87.1 (业务数据集)| 单人2.9ms |[下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.pdparams) |[下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) | 针对摔倒场景特别优化,该模型应用于[PP-Human](../../deploy/pipeline/README.md) | 我们同时推出了基于LiteHRNet(Top-Down)针对移动端设备优化的实时关键点检测模型[PP-TinyPose](./tiny_pose/README.md), 欢迎体验。 diff --git a/configs/keypoint/README_en.md b/configs/keypoint/README_en.md index 205a7d71844750336ed63a6913dbd5d2ec949973..915c8160ab2df6004bb2bed7c1460c66091ecd1f 100644 --- a/configs/keypoint/README_en.md +++ b/configs/keypoint/README_en.md @@ -93,7 +93,7 @@ MPII Dataset Model for Scenes | Model | Strategy | Input Size | Precision | Inference Speed |Model Weights | Model Inference and Deployment | Description| | :---- | ---|----- | :--------: | :-------: |:------------: |:------------: |:-------------------: | -| HRNet-w32 + DarkPose | Top-Down|256x192 | AP: 87.1 (on internal dataset)| 2.9ms per person |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.pdparams) |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) | Especially optimized for fall scenarios, the model is applied to [PP-Human](../../deploy/pphuman/README_en.md) | +| HRNet-w32 + DarkPose | Top-Down|256x192 | AP: 87.1 (on internal dataset)| 2.9ms per person |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.pdparams) |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) | Especially optimized for fall scenarios, the model is applied to [PP-Human](../../deploy/pipeline/README_en.md) | We also release [PP-TinyPose](./tiny_pose/README_en.md), a real-time keypoint detection model optimized for mobile devices. You are welcome to try it out.
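Note: everything from here down is the mechanical half of the change — `git mv` renames from `deploy/pphuman` to `deploy/pipeline` plus path updates inside the moved files. After a sweep like this, a repo-wide scan helps confirm no stale references survive; the sketch below is an illustrative helper only (the `find_stale_refs` name and the file-type filter are assumptions, not part of this patch):

```python
# Illustrative helper: scan the working tree for leftover references to
# the old path after the rename. Not part of this patch.
import pathlib

OLD_PATH = "deploy/pphuman"

def find_stale_refs(root: str = ".") -> None:
    for path in pathlib.Path(root).rglob("*"):
        # Limit the scan to the text file types this rename touches.
        if not (path.is_file() and path.suffix in {".md", ".yml", ".py", ".txt"}):
            continue
        try:
            text = path.read_text(encoding="utf-8")
        except UnicodeDecodeError:
            continue  # skip binary files such as the .gif assets renamed below
        for lineno, line in enumerate(text.splitlines(), 1):
            if OLD_PATH in line:
                print(f"{path}:{lineno}: {line.strip()}")

if __name__ == "__main__":
    find_stale_refs()
```

Running it from the repo root after applying this patch should print nothing if the rename is complete.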
diff --git a/deploy/pphuman/README.md b/deploy/pipeline/README.md similarity index 100% rename from deploy/pphuman/README.md rename to deploy/pipeline/README.md diff --git a/deploy/pphuman/__init__.py b/deploy/pipeline/__init__.py similarity index 100% rename from deploy/pphuman/__init__.py rename to deploy/pipeline/__init__.py diff --git a/deploy/pphuman/config/infer_cfg_pphuman.yml b/deploy/pipeline/config/infer_cfg_pphuman.yml similarity index 95% rename from deploy/pphuman/config/infer_cfg_pphuman.yml rename to deploy/pipeline/config/infer_cfg_pphuman.yml index ed5fb453dc3a19bd19bf7ed0803d99cecc1f7e39..60d986d8534373c0622b11b187c839962d0c1eef 100644 --- a/deploy/pphuman/config/infer_cfg_pphuman.yml +++ b/deploy/pipeline/config/infer_cfg_pphuman.yml @@ -10,7 +10,7 @@ DET: MOT: model_dir: output_inference/mot_ppyoloe_l_36e_pipeline/ - tracker_config: deploy/pphuman/config/tracker_config.yml + tracker_config: deploy/pipeline/config/tracker_config.yml batch_size: 1 basemode: "idbased" enable: False diff --git a/deploy/pphuman/config/infer_cfg_ppvehicle.yml b/deploy/pipeline/config/infer_cfg_ppvehicle.yml similarity index 86% rename from deploy/pphuman/config/infer_cfg_ppvehicle.yml rename to deploy/pipeline/config/infer_cfg_ppvehicle.yml index 0177a8d22426693bcb80df3f5d0bdc88ddb67600..170266510518aca845ac3012e56c6be81b21772d 100644 --- a/deploy/pphuman/config/infer_cfg_ppvehicle.yml +++ b/deploy/pipeline/config/infer_cfg_ppvehicle.yml @@ -8,7 +8,7 @@ DET: MOT: model_dir: output_inference/mot_ppyoloe_l_36e_ppvehicle/ - tracker_config: deploy/pphuman/config/tracker_config.yml + tracker_config: deploy/pipeline/config/tracker_config.yml batch_size: 1 basemode: "idbased" enable: False @@ -20,7 +20,7 @@ VEHICLE_PLATE: rec_model_dir: output_inference/ch_PP-OCRv3_rec_infer/ rec_image_shape: [3, 48, 320] rec_batch_num: 6 - word_dict_path: deploy/pphuman/ppvehicle/rec_word_dict.txt + word_dict_path: deploy/pipeline/ppvehicle/rec_word_dict.txt basemode: "idbased" enable: False diff --git a/deploy/pphuman/config/tracker_config.yml b/deploy/pipeline/config/tracker_config.yml similarity index 100% rename from deploy/pphuman/config/tracker_config.yml rename to deploy/pipeline/config/tracker_config.yml diff --git a/deploy/pphuman/datacollector.py b/deploy/pipeline/datacollector.py similarity index 97% rename from deploy/pphuman/datacollector.py rename to deploy/pipeline/datacollector.py index 50037b95f514468f788e340a24892a9d38d9d573..fa12fc318022b686c6c73595983b880e29625b66 100644 --- a/deploy/pphuman/datacollector.py +++ b/deploy/pipeline/datacollector.py @@ -46,7 +46,7 @@ class Result(object): class DataCollector(object): """ - DataCollector of pphuman Pipeline, collect results in every frames and assign it to each track ids. + DataCollector of Pipeline, collects results in every frame and assigns them to each track id. mainly used in mtmct.
data struct: diff --git a/deploy/pphuman/docs/images/action.gif b/deploy/pipeline/docs/images/action.gif similarity index 100% rename from deploy/pphuman/docs/images/action.gif rename to deploy/pipeline/docs/images/action.gif diff --git a/deploy/pphuman/docs/images/attribute.gif b/deploy/pipeline/docs/images/attribute.gif similarity index 100% rename from deploy/pphuman/docs/images/attribute.gif rename to deploy/pipeline/docs/images/attribute.gif diff --git a/deploy/pphuman/docs/images/c1.gif b/deploy/pipeline/docs/images/c1.gif similarity index 100% rename from deploy/pphuman/docs/images/c1.gif rename to deploy/pipeline/docs/images/c1.gif diff --git a/deploy/pphuman/docs/images/c2.gif b/deploy/pipeline/docs/images/c2.gif similarity index 100% rename from deploy/pphuman/docs/images/c2.gif rename to deploy/pipeline/docs/images/c2.gif diff --git a/deploy/pphuman/docs/images/fight_demo.gif b/deploy/pipeline/docs/images/fight_demo.gif similarity index 100% rename from deploy/pphuman/docs/images/fight_demo.gif rename to deploy/pipeline/docs/images/fight_demo.gif diff --git a/deploy/pphuman/docs/images/mot.gif b/deploy/pipeline/docs/images/mot.gif similarity index 100% rename from deploy/pphuman/docs/images/mot.gif rename to deploy/pipeline/docs/images/mot.gif diff --git a/deploy/pphuman/docs/tutorials/QUICK_STARTED.md b/deploy/pipeline/docs/tutorials/QUICK_STARTED.md similarity index 82% rename from deploy/pphuman/docs/tutorials/QUICK_STARTED.md rename to deploy/pipeline/docs/tutorials/QUICK_STARTED.md index 32818c3f2b3d942f74335583167bb2e3e4848164..2aceb01d8c541133e9e48b43d316a1f8cfc71bf4 100644 --- a/deploy/pphuman/docs/tutorials/QUICK_STARTED.md +++ b/deploy/pipeline/docs/tutorials/QUICK_STARTED.md @@ -50,7 +50,7 @@ PP-Human提供了目标检测、属性识别、行为识别、ReID预训练模 ## 三、配置文件说明 -PP-Human相关配置位于```deploy/pphuman/config/infer_cfg_pphuman.yml```中,存放模型路径,完成不同功能需要设置不同的任务类型 +PP-Human相关配置位于```deploy/pipeline/config/infer_cfg_pphuman.yml```中,存放模型路径,完成不同功能需要设置不同的任务类型 功能及任务类型对应表单如下: @@ -69,7 +69,7 @@ visual: True MOT: model_dir: output_inference/mot_ppyoloe_l_36e_pipeline/ - tracker_config: deploy/pphuman/config/tracker_config.yml + tracker_config: deploy/pipeline/config/tracker_config.yml batch_size: 1 basemode: "idbased" enable: False @@ -91,23 +91,23 @@ ATTR: ``` # 行人检测,指定配置文件路径和测试图片 -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml --image_file=test_image.jpg --device=gpu [--run_mode trt_fp16] +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --image_file=test_image.jpg --device=gpu [--run_mode trt_fp16] -# 行人跟踪,指定配置文件路径和测试视频,在配置文件中```deploy/pphuman/config/infer_cfg_pphuman.yml```中的MOT部分enable设置为```True``` -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16] +# 行人跟踪,指定配置文件路径和测试视频,在配置文件中```deploy/pipeline/config/infer_cfg_pphuman.yml```中的MOT部分enable设置为```True``` +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16] -# 行人跟踪,指定配置文件路径,模型路径和测试视频,在配置文件中```deploy/pphuman/config/infer_cfg_pphuman.yml```中的MOT部分enable设置为```True``` +# 行人跟踪,指定配置文件路径,模型路径和测试视频,在配置文件中```deploy/pipeline/config/infer_cfg_pphuman.yml```中的MOT部分enable设置为```True``` # 命令行中指定的模型路径优先级高于配置文件 -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml --video_file=test_video.mp4 --device=gpu --model_dir det=ppyoloe/ [--run_mode trt_fp16] +python 
deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --video_file=test_video.mp4 --device=gpu --model_dir det=ppyoloe/ [--run_mode trt_fp16] -# 行人属性识别,指定配置文件路径和测试视频,在配置文件中```deploy/pphuman/config/infer_cfg_pphuman.yml```中的ATTR部分enable设置为```True``` -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16] +# 行人属性识别,指定配置文件路径和测试视频,在配置文件中```deploy/pipeline/config/infer_cfg_pphuman.yml```中的ATTR部分enable设置为```True``` +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16] -# 行为识别,指定配置文件路径和测试视频,在配置文件中```deploy/pphuman/config/infer_cfg_pphuman.yml```中的SKELETON_ACTION部分enable设置为```True``` -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16] +# 行为识别,指定配置文件路径和测试视频,在配置文件中```deploy/pipeline/config/infer_cfg_pphuman.yml```中的SKELETON_ACTION部分enable设置为```True``` +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16] -# 行人跨境跟踪,指定配置文件路径和测试视频列表文件夹,在配置文件中```deploy/pphuman/config/infer_cfg_pphuman.yml```中的REID部分enable设置为```True``` -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml --video_dir=mtmct_dir/ --device=gpu [--run_mode trt_fp16] +# 行人跨镜头跟踪,指定配置文件路径和测试视频列表文件夹,在配置文件中```deploy/pipeline/config/infer_cfg_pphuman.yml```中的REID部分enable设置为```True``` +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --video_dir=mtmct_dir/ --device=gpu [--run_mode trt_fp16] ``` ### 4.1 参数说明 diff --git a/deploy/pphuman/docs/tutorials/action.md b/deploy/pipeline/docs/tutorials/action.md similarity index 93% rename from deploy/pphuman/docs/tutorials/action.md rename to deploy/pipeline/docs/tutorials/action.md index 8c403784c6a9789b21e5d4a7383e78382f5d6b4b..8b096a85e6b19e2c532915321624007c39ff2977 100644 --- a/deploy/pphuman/docs/tutorials/action.md +++ b/deploy/pipeline/docs/tutorials/action.md @@ -45,16 +45,16 @@ SKELETON_ACTION: 1. 从上表链接中下载模型并解压到```./output_inference```路径下。 2. 目前行为识别模块仅支持视频输入,设置infer_cfg_pphuman.yml中`SKELETON_ACTION`的enable: True, 然后启动命令如下: ```python -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \ +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ --video_file=test_video.mp4 \ --device=gpu \ ``` 3. 若修改模型路径,有以下两种方式: - - ```./deploy/pphuman/config/infer_cfg_pphuman.yml```下可以配置不同模型路径,关键点模型和摔倒行为识别模型分别对应`KPT`和`SKELETON_ACTION`字段,修改对应字段下的路径为实际期望的路径即可。 + - ```./deploy/pipeline/config/infer_cfg_pphuman.yml```下可以配置不同模型路径,关键点模型和摔倒行为识别模型分别对应`KPT`和`SKELETON_ACTION`字段,修改对应字段下的路径为实际期望的路径即可。 - 命令行中增加`--model_dir`修改模型路径: ```python -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \ +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ --video_file=test_video.mp4 \ --device=gpu \ --model_dir kpt=./dark_hrnet_w32_256x192 action=./STGCN @@ -93,10 +93,10 @@ python deploy/pphuman/pipeline.py --config deploy/pphum ### 使用方法 1. 从上表链接中下载预测部署模型并解压到`./output_inference`路径下; 2. 修改解压后`ppTSM`文件夹中的文件名称为`model.pdiparams、model.pdiparams.info和model.pdmodel`; -3. 修改配置文件`deploy/pphuman/config/infer_cfg_pphuman.yml`中`VIDEO_ACTION`下的`enable`为`True`; +3.
修改配置文件`deploy/pipeline/config/infer_cfg_pphuman.yml`中`VIDEO_ACTION`下的`enable`为`True`; 4. 输入视频,启动命令如下: ``` -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \ +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ --video_file=test_video.mp4 \ --device=gpu ``` diff --git a/deploy/pphuman/docs/tutorials/action_en.md b/deploy/pipeline/docs/tutorials/action_en.md similarity index 93% rename from deploy/pphuman/docs/tutorials/action_en.md rename to deploy/pipeline/docs/tutorials/action_en.md index 0d059a3675c179ae954092250e55482c36d02a34..e534ec7cdbce562753d866ea354d66d765c4469a 100644 --- a/deploy/pphuman/docs/tutorials/action_en.md +++ b/deploy/pipeline/docs/tutorials/action_en.md @@ -55,19 +55,19 @@ SKELETON_ACTION: - Now the only available input is the video input in the action recognition module. Set the "enable: True" in SKELETON_ACTION of infer_cfg_pphuman.yml. Then run the command: ```python - python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \ + python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ --video_file=test_video.mp4 \ --device=gpu ``` - There are two ways to modify the model path: - - In ```./deploy/pphuman/config/infer_cfg_pphuman.yml```, you can configurate different model paths,which is proper only if you match keypoint models and action recognition models with the fields of `KPT` and `SKELETON_ACTION` respectively, and modify the corresponding path of each field into the expected path. + - In ```./deploy/pipeline/config/infer_cfg_pphuman.yml```, you can configure different model paths, which applies only if you match the keypoint and action recognition models with the fields of `KPT` and `SKELETON_ACTION` respectively, and modify the corresponding path of each field into the expected path. - Add `--model_dir` in the command line to revise the model path: ```python - python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \ + python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ --video_file=test_video.mp4 \ --device=gpu \ --model_dir kpt=./dark_hrnet_w32_256x192 action=./STGCN diff --git a/deploy/pphuman/docs/tutorials/attribute.md b/deploy/pipeline/docs/tutorials/attribute.md similarity index 90% rename from deploy/pphuman/docs/tutorials/attribute.md rename to deploy/pipeline/docs/tutorials/attribute.md index 8fe4ac3497b45259a24420d3a67a1030a464e3eb..067a017e7dbdf65ae21eba83dbe4918daef94a0a 100644 --- a/deploy/pphuman/docs/tutorials/attribute.md +++ b/deploy/pipeline/docs/tutorials/attribute.md @@ -18,22 +18,22 @@ 1. 从上表链接中下载模型并解压到```./output_inference```路径下,并且设置infer_cfg_pphuman.yml中`ATTR`的enable: True 2. 图片输入时,启动命令如下 ```python -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \ +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ --image_file=test_image.jpg \ --device=gpu \ ``` 3. 视频输入时,启动命令如下 ```python -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \ +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ --video_file=test_video.mp4 \ --device=gpu \ ``` 4.
若修改模型路径,有以下两种方式: - - ```./deploy/pphuman/config/infer_cfg_pphuman.yml```下可以配置不同模型路径,属性识别模型修改ATTR字段下配置 + - ```./deploy/pipeline/config/infer_cfg_pphuman.yml```下可以配置不同模型路径,属性识别模型修改ATTR字段下配置 - **(推荐)**命令行中增加`--model_dir`修改模型路径: ```python -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \ +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ --video_file=test_video.mp4 \ --device=gpu \ --model_dir det=ppyoloe/ diff --git a/deploy/pphuman/docs/tutorials/attribute_en.md b/deploy/pipeline/docs/tutorials/attribute_en.md similarity index 90% rename from deploy/pphuman/docs/tutorials/attribute_en.md rename to deploy/pipeline/docs/tutorials/attribute_en.md index 00b0e32d41799f174e1f589114aaf9bc6af2cecd..3d8fab989482ffed72f0d510c7ce21bc485506ba 100644 --- a/deploy/pphuman/docs/tutorials/attribute_en.md +++ b/deploy/pipeline/docs/tutorials/attribute_en.md @@ -18,22 +18,22 @@ Pedestrian attribute recognition has been widely used in the intelligent community 1. Download the model from the link in the above table, unzip it to ```./output_inference```, and set the "enable: True" in ATTR of infer_cfg_pphuman.yml 2. When inputting the image, run the command as follows: ```python -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \ +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ --image_file=test_image.jpg \ --device=gpu \ ``` 3. When inputting the video, run the command as follows: ```python -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \ +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ --video_file=test_video.mp4 \ --device=gpu \ ``` 4. If you want to change the model path, there are two methods: - - In ```./deploy/pphuman/config/infer_cfg_pphuman.yml``` you can configurate different model paths. In attribute recognition models, you can modify the configuration in the field of ATTR. + - In ```./deploy/pipeline/config/infer_cfg_pphuman.yml``` you can configure different model paths. For attribute recognition models, modify the configuration in the field of ATTR. - Add `--model_dir` in the command line to change the model path: ```python -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \ +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ --video_file=test_video.mp4 \ --device=gpu \ --model_dir det=ppyoloe/ diff --git a/deploy/pphuman/docs/tutorials/mot.md b/deploy/pipeline/docs/tutorials/mot.md similarity index 88% rename from deploy/pphuman/docs/tutorials/mot.md rename to deploy/pipeline/docs/tutorials/mot.md index 04d05616ed4481025e3d7e6d45442f23d420162b..3e64fb6e1370564c5ca8b9a81c16e00c688dc761 100644 --- a/deploy/pphuman/docs/tutorials/mot.md +++ b/deploy/pipeline/docs/tutorials/mot.md @@ -17,22 +17,22 @@ 1. 从上表链接中下载模型并解压到```./output_inference```路径下 2. 图片输入时,是纯检测任务,启动命令如下 ```python -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \ +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ --image_file=test_image.jpg \ --device=gpu ``` 3.
视频输入时,是跟踪任务,注意首先设置infer_cfg_pphuman.yml中的MOT配置的enable=True,然后启动命令如下 ```python -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \ +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ --video_file=test_video.mp4 \ --device=gpu ``` 4. 若修改模型路径,有以下两种方式: - - ```./deploy/pphuman/config/infer_cfg_pphuman.yml```下可以配置不同模型路径,检测和跟踪模型分别对应`DET`和`MOT`字段,修改对应字段下的路径为实际期望的路径即可。 + - ```./deploy/pipeline/config/infer_cfg_pphuman.yml```下可以配置不同模型路径,检测和跟踪模型分别对应`DET`和`MOT`字段,修改对应字段下的路径为实际期望的路径即可。 - 命令行中增加`--model_dir`修改模型路径: ```python -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \ +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ --video_file=test_video.mp4 \ --device=gpu \ --do_entrance_counting \ diff --git a/deploy/pphuman/docs/tutorials/mot_en.md b/deploy/pipeline/docs/tutorials/mot_en.md similarity index 87% rename from deploy/pphuman/docs/tutorials/mot_en.md rename to deploy/pipeline/docs/tutorials/mot_en.md index 86a64991ecc88585a7aba6fc070e968cef457bf3..05ad1f7fd84d2a0fed2b2aaa2464ae39bd682035 100644 --- a/deploy/pphuman/docs/tutorials/mot_en.md +++ b/deploy/pipeline/docs/tutorials/mot_en.md @@ -17,23 +17,23 @@ Pedestrian detection and tracking is widely used in the intelligent community, i 1. Download models from the links of the above table and unzip them to ```./output_inference```. 2. When using an image as input, it's a detection task; the start command is as follows: ```python -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \ +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ --image_file=test_image.jpg \ --device=gpu ``` 3. When using a video as input, it's a tracking task; first set the "enable: True" in MOT of infer_cfg_pphuman.yml, and then the start command is as follows: ```python -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \ +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ --video_file=test_video.mp4 \ --device=gpu ``` 4. There are two ways to modify the model path: - - In `./deploy/pphuman/config/infer_cfg_pphuman.yml`, you can configurate different model paths,which is proper only if you match keypoint models and action recognition models with the fields of `DET` and `MOT` respectively, and modify the corresponding path of each field into the expected path. + - In `./deploy/pipeline/config/infer_cfg_pphuman.yml`, you can configure different model paths, which applies only if you match the detection and tracking models with the fields of `DET` and `MOT` respectively, and modify the corresponding path of each field into the expected path.
- Add `--model_dir` in the command line to revise the model path: ```python -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \ +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ --video_file=test_video.mp4 \ --device=gpu \ --model_dir det=ppyoloe/ diff --git a/deploy/pphuman/docs/tutorials/mtmct.md b/deploy/pipeline/docs/tutorials/mtmct.md similarity index 90% rename from deploy/pphuman/docs/tutorials/mtmct.md rename to deploy/pipeline/docs/tutorials/mtmct.md index 16a9fda8e0cb2d03bd548674ec5402e32fd0416a..54e407ab873771d0b393496a0c02b700954e90df 100644 --- a/deploy/pphuman/docs/tutorials/mtmct.md +++ b/deploy/pipeline/docs/tutorials/mtmct.md @@ -11,14 +11,14 @@ PP-Human跨镜头跟踪模块主要目的在于提供一套简洁、高效的跨 2. 跨镜头跟踪模式下,要求输入的多个视频放在同一目录下,同时开启infer_cfg_pphuman.yml 中的REID选项中的enable=True, 命令如下: ```python -python3 deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml --video_dir=[your_video_file_directory] --device=gpu +python3 deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --video_dir=[your_video_file_directory] --device=gpu ``` -3. 相关配置在`./deploy/pphuman/config/infer_cfg_pphuman.yml`文件中修改: +3. 相关配置在`./deploy/pipeline/config/infer_cfg_pphuman.yml`文件中修改: ```python -python3 deploy/pphuman/pipeline.py - --config deploy/pphuman/config/infer_cfg_pphuman.yml +python3 deploy/pipeline/pipeline.py + --config deploy/pipeline/config/infer_cfg_pphuman.yml --video_dir=[your_video_file_directory] --device=gpu --model_dir reid=reid_best/ diff --git a/deploy/pphuman/docs/tutorials/mtmct_en.md b/deploy/pipeline/docs/tutorials/mtmct_en.md similarity index 90% rename from deploy/pphuman/docs/tutorials/mtmct_en.md rename to deploy/pipeline/docs/tutorials/mtmct_en.md index 117481a21a8653ef0cc87641d06c39733455079b..756edec367b60c872bf8881e7731fff5f9c77d1f 100644 --- a/deploy/pphuman/docs/tutorials/mtmct_en.md +++ b/deploy/pipeline/docs/tutorials/mtmct_en.md @@ -11,14 +11,14 @@ The MTMCT module of PP-Human aims to provide a multi-target multi-camera pipeline 2. In the MTMCT mode, input videos are required to be put in the same directory. Set the REID "enable: True" in the infer_cfg_pphuman.yml. The command line is: ```python -python3 deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml --video_dir=[your_video_file_directory] --device=gpu +python3 deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --video_dir=[your_video_file_directory] --device=gpu ``` -3. Configuration can be modified in `./deploy/pphuman/config/infer_cfg_pphuman.yml`. +3. Configuration can be modified in `./deploy/pipeline/config/infer_cfg_pphuman.yml`.
```python -python3 deploy/pphuman/pipeline.py - --config deploy/pphuman/config/infer_cfg_pphuman.yml +python3 deploy/pipeline/pipeline.py + --config deploy/pipeline/config/infer_cfg_pphuman.yml --video_dir=[your_video_file_directory] --device=gpu --model_dir reid=reid_best/ diff --git a/deploy/pphuman/pipe_utils.py b/deploy/pipeline/pipe_utils.py similarity index 100% rename from deploy/pphuman/pipe_utils.py rename to deploy/pipeline/pipe_utils.py diff --git a/deploy/pphuman/pipeline.py b/deploy/pipeline/pipeline.py similarity index 99% rename from deploy/pphuman/pipeline.py rename to deploy/pipeline/pipeline.py index c87a7c6ba2ab5cfa5fa721a5107cc41e643b6981..886d1e3355494eb3a377ce3dcb9446380b2ad2de 100644 --- a/deploy/pphuman/pipeline.py +++ b/deploy/pipeline/pipeline.py @@ -15,34 +15,25 @@ import os import yaml import glob -from collections import defaultdict - import cv2 import numpy as np import math import paddle import sys import copy -from collections import Sequence -from reid import ReID +from collections import Sequence, defaultdict from datacollector import DataCollector, Result -from mtmct import mtmct_process # add deploy path of PaddleDetection to sys.path parent_path = os.path.abspath(os.path.join(__file__, *(['..'] * 2))) sys.path.insert(0, parent_path) +from pipe_utils import argsparser, print_arguments, merge_cfg, PipeTimer +from pipe_utils import get_test_images, crop_image_with_det, crop_image_with_mot, parse_mot_res, parse_mot_keypoint + from python.infer import Detector, DetectorPicoDet -from python.attr_infer import AttrDetector from python.keypoint_infer import KeyPointDetector from python.keypoint_postprocess import translate_to_ori_images - -from python.video_action_infer import VideoActionRecognizer -from python.action_infer import SkeletonActionRecognizer, DetActionRecognizer, ClsActionRecognizer -from python.action_utils import KeyPointBuff, ActionVisualHelper - -from pipe_utils import argsparser, print_arguments, merge_cfg, PipeTimer -from pipe_utils import get_test_images, crop_image_with_det, crop_image_with_mot, parse_mot_res, parse_mot_keypoint from python.preprocess import decode_image, ShortSizeScale from python.visualize import visualize_box_mask, visualize_attr, visualize_pose, visualize_action, visualize_vehicleplate @@ -50,6 +41,13 @@ from pptracking.python.mot_sde_infer import SDE_Detector from pptracking.python.mot.visualize import plot_tracking_dict from pptracking.python.mot.utils import flow_statistic +from pphuman.attr_infer import AttrDetector +from pphuman.video_action_infer import VideoActionRecognizer +from pphuman.action_infer import SkeletonActionRecognizer, DetActionRecognizer, ClsActionRecognizer +from pphuman.action_utils import KeyPointBuff, ActionVisualHelper +from pphuman.reid import ReID +from pphuman.mtmct import mtmct_process + from ppvehicle.vehicle_plate import PlateRecognizer from ppvehicle.vehicle_attr import VehicleAttr diff --git a/deploy/python/action_infer.py b/deploy/pipeline/pphuman/action_infer.py similarity index 99% rename from deploy/python/action_infer.py rename to deploy/pipeline/pphuman/action_infer.py index 4c61f06c265b48112b3c9e3da3eb105688546851..563d6f7f5db910a23288d7b683df6c88f5dd2150 100644 --- a/deploy/python/action_infer.py +++ b/deploy/pipeline/pphuman/action_infer.py @@ -28,9 +28,9 @@ parent_path = os.path.abspath(os.path.join(__file__, *(['..'] * 2))) sys.path.insert(0, parent_path) from paddle.inference import Config, create_predictor -from utils import argsparser, Timer, get_current_memory_mb -from
benchmark_utils import PaddleInferBenchmark -from infer import Detector, print_arguments +from python.utils import argsparser, Timer, get_current_memory_mb +from python.benchmark_utils import PaddleInferBenchmark +from python.infer import Detector, print_arguments from attr_infer import AttrDetector diff --git a/deploy/python/action_utils.py b/deploy/pipeline/pphuman/action_utils.py similarity index 100% rename from deploy/python/action_utils.py rename to deploy/pipeline/pphuman/action_utils.py diff --git a/deploy/python/attr_infer.py b/deploy/pipeline/pphuman/attr_infer.py similarity index 97% rename from deploy/python/attr_infer.py rename to deploy/pipeline/pphuman/attr_infer.py index ba034639a959a89df9ed49b7256b316ab541773f..f783d5e5c857a374c8bfb241eb5bebd18bf9cd84 100644 --- a/deploy/python/attr_infer.py +++ b/deploy/pipeline/pphuman/attr_infer.py @@ -29,11 +29,11 @@ import sys parent_path = os.path.abspath(os.path.join(__file__, *(['..']))) sys.path.insert(0, parent_path) -from benchmark_utils import PaddleInferBenchmark -from preprocess import preprocess, Resize, NormalizeImage, Permute, PadStride, LetterBoxResize, WarpAffine -from visualize import visualize_attr -from utils import argsparser, Timer, get_current_memory_mb -from infer import Detector, get_test_images, print_arguments, load_predictor +from python.benchmark_utils import PaddleInferBenchmark +from python.preprocess import preprocess, Resize, NormalizeImage, Permute, PadStride, LetterBoxResize, WarpAffine +from python.visualize import visualize_attr +from python.utils import argsparser, Timer, get_current_memory_mb +from python.infer import Detector, get_test_images, print_arguments, load_predictor from PIL import Image, ImageDraw, ImageFont diff --git a/deploy/pphuman/mtmct.py b/deploy/pipeline/pphuman/mtmct.py similarity index 99% rename from deploy/pphuman/mtmct.py rename to deploy/pipeline/pphuman/mtmct.py index 7c4d571155e932dc7bb4cf77b10ab5cb6c0be66c..721008f638b543c9f276c0e86877ecbe32eb2c46 100644 --- a/deploy/pphuman/mtmct.py +++ b/deploy/pipeline/pphuman/mtmct.py @@ -12,7 +12,6 @@ # See the License for the specific language governing permissions and # limitations under the License. 
-import motmetrics as mm from pptracking.python.mot.visualize import plot_tracking import os import re diff --git a/deploy/pphuman/reid.py b/deploy/pipeline/pphuman/reid.py similarity index 100% rename from deploy/pphuman/reid.py rename to deploy/pipeline/pphuman/reid.py diff --git a/deploy/python/video_action_infer.py b/deploy/pipeline/pphuman/video_action_infer.py similarity index 96% rename from deploy/python/video_action_infer.py rename to deploy/pipeline/pphuman/video_action_infer.py index 865f47d41ac708f667edfb5a022a0315d69392bd..3a10c033343063b24722f07958c6daa394fc9e0e 100644 --- a/deploy/python/video_action_infer.py +++ b/deploy/pipeline/pphuman/video_action_infer.py @@ -29,9 +29,9 @@ parent_path = os.path.abspath(os.path.join(__file__, *(['..'] * 2))) sys.path.insert(0, parent_path) from paddle.inference import Config, create_predictor -from utils import argsparser, Timer, get_current_memory_mb -from benchmark_utils import PaddleInferBenchmark -from infer import Detector, print_arguments +from python.utils import argsparser, Timer, get_current_memory_mb +from python.benchmark_utils import PaddleInferBenchmark +from python.infer import Detector, print_arguments from video_action_preprocess import VideoDecoder, Sampler, Scale, CenterCrop, Normalization, Image2Array @@ -96,8 +96,8 @@ class VideoActionRecognizer(object): self.recognize_times = Timer() - model_file_path = os.path.join(model_dir, "model.pdmodel") - params_file_path = os.path.join(model_dir, "model.pdiparams") + model_file_path = os.path.join(model_dir, "ppTSM.pdmodel") + params_file_path = os.path.join(model_dir, "ppTSM.pdiparams") self.config = Config(model_file_path, params_file_path) if device == "GPU" or device == "gpu": diff --git a/deploy/python/video_action_preprocess.py b/deploy/pipeline/pphuman/video_action_preprocess.py similarity index 100% rename from deploy/python/video_action_preprocess.py rename to deploy/pipeline/pphuman/video_action_preprocess.py diff --git a/deploy/pphuman/ppvehicle/rec_word_dict.txt b/deploy/pipeline/ppvehicle/rec_word_dict.txt similarity index 100% rename from deploy/pphuman/ppvehicle/rec_word_dict.txt rename to deploy/pipeline/ppvehicle/rec_word_dict.txt diff --git a/deploy/pphuman/ppvehicle/vehicle_attr.py b/deploy/pipeline/ppvehicle/vehicle_attr.py similarity index 96% rename from deploy/pphuman/ppvehicle/vehicle_attr.py rename to deploy/pipeline/ppvehicle/vehicle_attr.py index e937fade73ed375e3c713937236644eedbfde003..82db40b3189571452b36a870049b5c17bdaf554d 100644 --- a/deploy/pphuman/ppvehicle/vehicle_attr.py +++ b/deploy/pipeline/ppvehicle/vehicle_attr.py @@ -28,10 +28,10 @@ parent_path = os.path.abspath(os.path.join(__file__, *(['..'] * 3))) sys.path.insert(0, parent_path) from paddle.inference import Config, create_predictor -from utils import argsparser, Timer, get_current_memory_mb -from benchmark_utils import PaddleInferBenchmark +from python.utils import argsparser, Timer, get_current_memory_mb +from python.benchmark_utils import PaddleInferBenchmark from python.infer import Detector, print_arguments -from python.attr_infer import AttrDetector +from pipeline.pphuman.attr_infer import AttrDetector class VehicleAttr(AttrDetector): diff --git a/deploy/pphuman/ppvehicle/vehicle_plate.py b/deploy/pipeline/ppvehicle/vehicle_plate.py similarity index 89% rename from deploy/pphuman/ppvehicle/vehicle_plate.py rename to deploy/pipeline/ppvehicle/vehicle_plate.py index 5e0c5af95958fbd9dea3996b58df2762ed452bf8..42e5ba4b0a1cb1bfcd7fb1422cf8b700d904294c 100644 --- 
a/deploy/pphuman/ppvehicle/vehicle_plate.py +++ b/deploy/pipeline/ppvehicle/vehicle_plate.py @@ -29,9 +29,9 @@ sys.path.insert(0, parent_path) from python.infer import get_test_images from python.preprocess import preprocess, NormalizeImage, Permute, Resize_Mult32 -from pphuman.ppvehicle.vehicle_plateutils import create_predictor, get_infer_gpuid, get_rotate_crop_image, draw_boxes -from pphuman.ppvehicle.vehicleplate_postprocess import build_post_process -from pphuman.pipe_utils import merge_cfg, print_arguments, argsparser +from pipeline.ppvehicle.vehicle_plateutils import create_predictor, get_infer_gpuid, get_rotate_crop_image, draw_boxes +from pipeline.ppvehicle.vehicleplate_postprocess import build_post_process +from pipeline.pipe_utils import merge_cfg, print_arguments, argsparser class PlateDetector(object): @@ -62,18 +62,17 @@ class PlateDetector(object): self.predictor, self.input_tensor, self.output_tensors, self.config = create_predictor( args, cfg, 'det') - def preprocess(self, image_list): + def preprocess(self, im_path): preprocess_ops = [] for op_type, new_op_info in self.pre_process_list.items(): preprocess_ops.append(eval(op_type)(**new_op_info)) input_im_lst = [] input_im_info_lst = [] - for im_path in image_list: - im, im_info = preprocess(im_path, preprocess_ops) - input_im_lst.append(im) - input_im_info_lst.append(im_info['im_shape'] / - im_info['scale_factor']) + + im, im_info = preprocess(im_path, preprocess_ops) + input_im_lst.append(im) + input_im_info_lst.append(im_info['im_shape'] / im_info['scale_factor']) return np.stack(input_im_lst, axis=0), input_im_info_lst @@ -119,27 +118,29 @@ class PlateDetector(object): def predict_image(self, img_list): st = time.time() - img, shape_list = self.preprocess(img_list) - if img is None: - return None, 0 - - self.input_tensor.copy_from_cpu(img) - self.predictor.run() - outputs = [] - for output_tensor in self.output_tensors: - output = output_tensor.copy_to_cpu() - outputs.append(output) - - preds = {} - preds['maps'] = outputs[0] - - #self.predictor.try_shrink_memory() - post_result = self.postprocess_op(preds, shape_list) - dt_batch_boxes = [] - for idx in range(len(post_result)): - org_shape = img_list[idx].shape - dt_boxes = post_result[idx]['points'] + dt_batch_boxes = []  # must be initialized before the per-image loop; the unchanged lines below append to it + for image in img_list: + img, shape_list = self.preprocess(image) + if img is None: + return None, 0 + + self.input_tensor.copy_from_cpu(img) + self.predictor.run() + outputs = [] + for output_tensor in self.output_tensors: + output = output_tensor.copy_to_cpu() + outputs.append(output) + + preds = {} + preds['maps'] = outputs[0] + + #self.predictor.try_shrink_memory() + post_result = self.postprocess_op(preds, shape_list) + + org_shape = image.shape + dt_boxes = post_result[0]['points'] dt_boxes = self.filter_tag_det_res(dt_boxes, org_shape) dt_batch_boxes.append(dt_boxes) diff --git a/deploy/pphuman/ppvehicle/vehicle_plateutils.py b/deploy/pipeline/ppvehicle/vehicle_plateutils.py similarity index 100% rename from deploy/pphuman/ppvehicle/vehicle_plateutils.py rename to deploy/pipeline/ppvehicle/vehicle_plateutils.py diff --git a/deploy/pphuman/ppvehicle/vehicleplate_postprocess.py b/deploy/pipeline/ppvehicle/vehicleplate_postprocess.py similarity index 100% rename from deploy/pphuman/ppvehicle/vehicleplate_postprocess.py rename to deploy/pipeline/ppvehicle/vehicleplate_postprocess.py diff --git a/deploy/pphuman/tools/clip_video.py b/deploy/pipeline/tools/clip_video.py similarity index 100% rename from
deploy/pphuman/tools/clip_video.py rename to deploy/pipeline/tools/clip_video.py diff --git a/deploy/pphuman/tools/split_fight_train_test_dataset.py b/deploy/pipeline/tools/split_fight_train_test_dataset.py similarity index 100% rename from deploy/pphuman/tools/split_fight_train_test_dataset.py rename to deploy/pipeline/tools/split_fight_train_test_dataset.py diff --git a/deploy/python/README.md b/deploy/python/README.md index cb32ad468b58ff1d0cf0f9cb5f625822c8959393..c24aa7be6dd32bb0762d269afd9f59b39d5ab9a8 100644 --- a/deploy/python/README.md +++ b/deploy/python/README.md @@ -52,7 +52,7 @@ python deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inferenc ``` **注意:** - 关键点检测模型导出和预测具体可参照[keypoint](../../configs/keypoint/README.md),可分别在各个模型的文档中查找具体用法; - - 此目录下的关键点检测部署为基础前向功能,更多关键点检测功能可使用PP-Human项目,参照[pphuman](../pphuman/README.md); + - 此目录下的关键点检测部署为基础前向功能,更多关键点检测功能可使用PP-Human项目,参照[pipeline](../pipeline/README.md); ### 2.3 多目标跟踪 @@ -70,7 +70,7 @@ python deploy/python/mot_keypoint_unite_infer.py --mot_model_dir=output_inferenc **注意:** - 多目标跟踪模型导出和预测具体可参照[mot](../../configs/mot/README.md),可分别在各个模型的文档中查找具体用法; - - 此目录下的跟踪部署为基础前向功能以及联合关键点部署,更多跟踪功能可使用PP-Human项目,参照[pphuman](../pphuman/README.md),或PP-Tracking项目(绘制轨迹、出入口流量计数),参照[pptracking](../pptracking/README.md); + - 此目录下的跟踪部署为基础前向功能以及联合关键点部署,更多跟踪功能可使用PP-Human项目,参照[pipeline](../pipeline/README.md),或PP-Tracking项目(绘制轨迹、出入口流量计数),参照[pptracking](../pptracking/README.md); 参数说明如下: diff --git a/deploy/python/visualize.py b/deploy/python/visualize.py index 1d50d68c8f3e4a5ac12ef68d5efa73d8bb1d17fb..8f0da233df29ddc617cef99fe415b9ec7ed39efc 100644 --- a/deploy/python/visualize.py +++ b/deploy/python/visualize.py @@ -428,7 +428,7 @@ def visualize_vehicleplate(im, results, boxes=None): if text == "": continue text_w = int(box[2]) - text_h = int(box[5]) + text_h = int(box[5] + box[3]) text_loc = (text_w, text_h) cv2.putText( im, diff --git a/docs/advanced_tutorials/customization/action.md b/docs/advanced_tutorials/customization/action.md index c7fbe723bf71fc182341f4d65c5b9de9272aa404..a6d70ff10519253502f26fd04d7652b79c3e3c03 100644 --- a/docs/advanced_tutorials/customization/action.md +++ b/docs/advanced_tutorials/customization/action.md @@ -55,12 +55,12 @@ python split_fight_train_test_dataset.py "rawframes" 2 0.8 参数说明:“rawframes”为视频帧存放的文件夹;2表示目录结构为两级,第二级表示每个行为对应的子文件夹;0.8表示训练集比例。 -其中`split_fight_train_test_dataset.py`文件在`deploy/pphuman/tools`路径下。 +其中`split_fight_train_test_dataset.py`文件在`deploy/pipeline/tools`路径下。 执行完命令后会最终生成fight_train_list.txt和fight_val_list.txt两个文件。打架的标签为1,非打架的标签为0。 #### 视频裁剪 -对于未裁剪的视频,需要先进行裁剪才能用于模型训练,`deploy/pphuman/tools/clip_video.py`中给出了视频裁剪的函数`cut_video`,输入为视频路径,裁剪的起始帧和结束帧以及裁剪后的视频保存路径。 +对于未裁剪的视频,需要先进行裁剪才能用于模型训练,`deploy/pipeline/tools/clip_video.py`中给出了视频裁剪的函数`cut_video`,输入为视频路径,裁剪的起始帧和结束帧以及裁剪后的视频保存路径。 ## 模型优化
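Per the tutorial text patched above, `cut_video` in `deploy/pipeline/tools/clip_video.py` takes a video path, the start and end frames, and the save path for the clipped video. A minimal usage sketch under that assumption (the argument order and the `sys.path` tweak are illustrative, not verified against the file):

```python
# Minimal usage sketch for the cut_video helper described above.
# Assumed argument order (video path, start frame, end frame, output
# path) follows the tutorial's description; treat it as illustrative.
import sys

sys.path.insert(0, "deploy/pipeline/tools")  # make clip_video importable from the repo root
from clip_video import cut_video

# Keep frames 120-360 of the raw recording as a single training clip.
cut_video("raw_fight_video.mp4", 120, 360, "clips/fight_0001.mp4")
```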