Commit ac41cb08 authored by: Z zhiboniu, committed by: zhiboniu

relocate pipeline files

Parent bccee30e
......@@ -27,7 +27,7 @@
- Release the high-precision, cloud-edge unified SOTA object detection model [PP-YOLOE](configs/ppyoloe) in s/m/l/x versions. The l version reaches 51.6% mAP on COCO test2017 with 78.1 FPS on a V100, supports mixed-precision training (33% faster than PP-YOLOv2), and the full multi-scale series covers different compute budgets, fitting servers, edge GPUs and other server-side AI accelerator cards.
- Release the ultra-lightweight SOTA object detection model for edge and CPU devices, the [PP-PicoDet enhanced version](configs/picodet): accuracy improved by about 2%, CPU inference 63% faster, a new 0.7M-parameter PicoDet-XS model, plus model sparsification and quantization for easier acceleration; no per-hardware post-processing module is needed, lowering the deployment barrier.
- Release the real-time human analysis tool [PP-Human](deploy/pphuman), supporting pedestrian tracking, visitor-flow statistics, human attribute recognition and falling detection. Specially optimized on real-world data, it accurately recognizes various falling postures and adapts to different backgrounds, lighting conditions and camera angles.
- Release the real-time human analysis tool [PP-Human](deploy/pipeline), supporting pedestrian tracking, visitor-flow statistics, human attribute recognition and falling detection. Specially optimized on real-world data, it accurately recognizes various falling postures and adapts to different backgrounds, lighting conditions and camera angles.
- Add the [YOLOX](configs/yolox) object detection model in nano/tiny/s/m/l/x versions; the x version reaches 51.8% mAP on COCO val2017.
- 2021.11.03: PaddleDetection releases the [release/2.3](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.3) version
......@@ -367,7 +367,7 @@
**Click "✅" to download the corresponding model**
See the [documentation](deploy/pphuman) for details
See the [documentation](deploy/pipeline) for details
</details>
......
......@@ -22,7 +22,7 @@ English | [简体中文](README_cn.md)
- Release the GPU SOTA object detection series [PP-YOLOE](configs/ppyoloe) in s/m/l/x versions; PP-YOLOE-l achieves 51.6% mAP on the COCO test dataset and 78.1 FPS on an Nvidia V100, supports AMP training, and trains 33% faster than PP-YOLOv2.
- Release enhanced [PP-PicoDet](configs/picodet) models, including the PP-PicoDet-XS model with 0.7M parameters; mAP improved by ~2% on COCO, CPU inference accelerated by 63%, and post-processing integrated into the network to simplify the deployment pipeline.
- Release the real-time human analysis tool [PP-Human](deploy/pphuman), built on data from real-life scenarios and supporting pedestrian detection, attribute recognition, human tracking, multi-camera tracking, human statistics and action recognition.
- Release the real-time human analysis tool [PP-Human](deploy/pipeline), built on data from real-life scenarios and supporting pedestrian detection, attribute recognition, human tracking, multi-camera tracking, human statistics and action recognition.
- Release [YOLOX](configs/yolox) in nano/tiny/s/m/l/x versions; YOLOX-x achieves 51.8% mAP on the COCO val dataset.
- 2021.11.03: Release [release/2.3](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.3) version. Release the mobile object detection model ⚡[PP-PicoDet](configs/picodet), the mobile keypoint detection model ⚡[PP-TinyPose](configs/keypoint/tiny_pose), and the real-time tracking system [PP-Tracking](deploy/pptracking). Release object detection models including [Swin-Transformer](configs/faster_rcnn), [TOOD](configs/tood) and [GFL](configs/gfl), release [Sniper](configs/sniper) tiny object detection models and the optimized [PP-YOLO-EB](configs/ppyolo) model for EdgeBoard. Release the mobile keypoint detection model [Lite HRNet](configs/keypoint).
......@@ -335,7 +335,7 @@ The relationship between COCO mAP and FPS on Qualcomm Snapdragon 865 of represen
- [Pedestrian detection](configs/pedestrian/README.md)
- [Vehicle detection](configs/vehicle/README.md)
- Scenario Solution
- [Real-Time Human Analysis Tool PP-Human](deploy/pphuman)
- [Real-Time Human Analysis Tool PP-Human](deploy/pipeline)
- Competition Solution
- [Objects365 2019 Challenge champion model](static/docs/featured_model/champion_model/CACascadeRCNN_en.md)
- [Best single model of Open Images 2019-Object Detection](static/docs/featured_model/champion_model/OIDV5_BASELINE_MODEL_en.md)
......
......@@ -82,7 +82,7 @@ MPII Dataset
Scenario Models
| Model | Strategy | Input Size | Precision | Inference Speed | Model Weights | Deployment Model | Description |
| :---- | ---|----- | :--------: | :--------: | :------------: |:------------: |:-------------------: |
| HRNet-w32 + DarkPose | Top-Down | 256x192 | AP: 87.1 (internal dataset) | 2.9ms per person | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.pdparams) | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) | Specially optimized for fall detection scenarios; this model is used in [PP-Human](../../deploy/pphuman/README.md) |
| HRNet-w32 + DarkPose | Top-Down | 256x192 | AP: 87.1 (internal dataset) | 2.9ms per person | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.pdparams) | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) | Specially optimized for fall detection scenarios; this model is used in [PP-Human](../../deploy/pipeline/README.md) |
We also release [PP-TinyPose](./tiny_pose/README.md), a real-time keypoint detection model based on LiteHRNet (Top-Down) and optimized for mobile devices. You are welcome to try it out.
......
......@@ -93,7 +93,7 @@ MPII Dataset
Scenario Models
| Model | Strategy | Input Size | Precision | Inference Speed |Model Weights | Model Inference and Deployment | description|
| :---- | ---|----- | :--------: | :-------: |:------------: |:------------: |:-------------------: |
| HRNet-w32 + DarkPose | Top-Down|256x192 | AP: 87.1 (on internal dataset)| 2.9ms per person |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.pdparams) |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) | Especially optimized for fall scenarios, the model is applied to [PP-Human](../../deploy/pphuman/README_en.md) |
| HRNet-w32 + DarkPose | Top-Down|256x192 | AP: 87.1 (on internal dataset)| 2.9ms per person |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.pdparams) |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) | Especially optimized for fall scenarios, the model is applied to [PP-Human](../../deploy/pipeline/README_en.md) |
We also release [PP-TinyPose](./tiny_pose/README_en.md), a real-time keypoint detection model optimized for mobile devices. You are welcome to try it out.
......
......@@ -10,7 +10,7 @@ DET:
MOT:
model_dir: output_inference/mot_ppyoloe_l_36e_pipeline/
tracker_config: deploy/pphuman/config/tracker_config.yml
tracker_config: deploy/pipeline/config/tracker_config.yml
batch_size: 1
basemode: "idbased"
enable: False
......
......@@ -8,7 +8,7 @@ DET:
MOT:
model_dir: output_inference/mot_ppyoloe_l_36e_ppvehicle/
tracker_config: deploy/pphuman/config/tracker_config.yml
tracker_config: deploy/pipeline/config/tracker_config.yml
batch_size: 1
basemode: "idbased"
enable: False
......@@ -20,7 +20,7 @@ VEHICLE_PLATE:
rec_model_dir: output_inference/ch_PP-OCRv3_rec_infer/
rec_image_shape: [3, 48, 320]
rec_batch_num: 6
word_dict_path: deploy/pphuman/ppvehicle/rec_word_dict.txt
word_dict_path: deploy/pipeline/ppvehicle/rec_word_dict.txt
basemode: "idbased"
enable: False
......
......@@ -46,7 +46,7 @@ class Result(object):
class DataCollector(object):
"""
DataCollector of the pphuman pipeline: collects the results of every frame and assigns them to each track id.
DataCollector of the pipeline: collects the results of every frame and assigns them to each track id.
mainly used in mtmct.
data struct:
......
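For orientation, here is a minimal sketch of the collection pattern the docstring describes: per-frame results accumulated under each track id for later MTMCT post-processing. The class name, field names and `append` signature below are illustrative assumptions, not the actual DataCollector API.

```python
from collections import defaultdict

# Illustrative sketch only (hypothetical names): gather per-frame results by track id,
# roughly the bookkeeping a DataCollector performs before MTMCT post-processing.
class TrackResultCollector(object):
    def __init__(self):
        # track_id -> lists of per-frame data
        self.collector = defaultdict(lambda: {"frames": [], "features": [], "attrs": []})

    def append(self, frame_id, track_id, feature=None, attrs=None):
        entry = self.collector[track_id]
        entry["frames"].append(frame_id)
        if feature is not None:
            entry["features"].append(feature)
        if attrs is not None:
            entry["attrs"].append(attrs)

    def get_res(self):
        return dict(self.collector)
```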
......@@ -50,7 +50,7 @@ PP-Human provides pretrained models for object detection, attribute recognition, action recognition and ReID
## 3. Configuration File Description
The PP-Human configuration lives in ```deploy/pphuman/config/infer_cfg_pphuman.yml```; it stores the model paths, and different task types need to be set to enable different functions.
The PP-Human configuration lives in ```deploy/pipeline/config/infer_cfg_pphuman.yml```; it stores the model paths, and different task types need to be set to enable different functions.
The mapping between functions and task types is as follows:
......@@ -69,7 +69,7 @@ visual: True
MOT:
model_dir: output_inference/mot_ppyoloe_l_36e_pipeline/
tracker_config: deploy/pphuman/config/tracker_config.yml
tracker_config: deploy/pipeline/config/tracker_config.yml
batch_size: 1
basemode: "idbased"
enable: False
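Each functional module in infer_cfg_pphuman.yml carries an `enable` switch like the MOT block above, and the steps below ask you to flip it before running the pipeline. A minimal sketch of doing that from a script instead of editing the file by hand, assuming the standard PyYAML package (the pipeline itself just reads the file, so manual editing works equally well):

```python
import yaml

# Toggle a module's enable flag in the PP-Human config before launching pipeline.py.
cfg_path = "deploy/pipeline/config/infer_cfg_pphuman.yml"
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

cfg["MOT"]["enable"] = True      # pedestrian tracking
# cfg["ATTR"]["enable"] = True   # attribute recognition, etc.

with open(cfg_path, "w") as f:
    yaml.safe_dump(cfg, f, sort_keys=False)
```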
......@@ -91,23 +91,23 @@ ATTR:
```
# Pedestrian detection: specify the config file path and a test image
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml --image_file=test_image.jpg --device=gpu [--run_mode trt_fp16]
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --image_file=test_image.jpg --device=gpu [--run_mode trt_fp16]
# Pedestrian tracking: specify the config file path and a test video; set enable to ```True``` in the MOT section of ```deploy/pphuman/config/infer_cfg_pphuman.yml```
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16]
# Pedestrian tracking: specify the config file path and a test video; set enable to ```True``` in the MOT section of ```deploy/pipeline/config/infer_cfg_pphuman.yml```
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16]
# Pedestrian tracking: specify the config file path, model path and a test video; set enable to ```True``` in the MOT section of ```deploy/pphuman/config/infer_cfg_pphuman.yml```
# Pedestrian tracking: specify the config file path, model path and a test video; set enable to ```True``` in the MOT section of ```deploy/pipeline/config/infer_cfg_pphuman.yml```
# A model path given on the command line takes precedence over the one in the config file
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml --video_file=test_video.mp4 --device=gpu --model_dir det=ppyoloe/ [--run_mode trt_fp16]
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --video_file=test_video.mp4 --device=gpu --model_dir det=ppyoloe/ [--run_mode trt_fp16]
# Pedestrian attribute recognition: specify the config file path and a test video; set enable to ```True``` in the ATTR section of ```deploy/pphuman/config/infer_cfg_pphuman.yml```
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16]
# Pedestrian attribute recognition: specify the config file path and a test video; set enable to ```True``` in the ATTR section of ```deploy/pipeline/config/infer_cfg_pphuman.yml```
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16]
# Action recognition: specify the config file path and a test video; set enable to ```True``` in the SKELETON_ACTION section of ```deploy/pphuman/config/infer_cfg_pphuman.yml```
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16]
# Action recognition: specify the config file path and a test video; set enable to ```True``` in the SKELETON_ACTION section of ```deploy/pipeline/config/infer_cfg_pphuman.yml```
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16]
# Cross-camera pedestrian tracking: specify the config file path and a directory of test videos; set enable to ```True``` in the REID section of ```deploy/pphuman/config/infer_cfg_pphuman.yml```
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml --video_dir=mtmct_dir/ --device=gpu [--run_mode trt_fp16]
# Cross-camera pedestrian tracking: specify the config file path and a directory of test videos; set enable to ```True``` in the REID section of ```deploy/pipeline/config/infer_cfg_pphuman.yml```
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --video_dir=mtmct_dir/ --device=gpu [--run_mode trt_fp16]
```
### 4.1 Parameter Description
......
......@@ -45,16 +45,16 @@ SKELETON_ACTION:
1. Download the models from the links in the table above and extract them to ```./output_inference```.
2. Currently the action recognition module only supports video input. Set enable: True under `SKELETON_ACTION` in infer_cfg_pphuman.yml, then run the following command:
```python
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu \
```
3. There are two ways to modify the model paths:
- Different model paths can be configured in ```./deploy/pphuman/config/infer_cfg_pphuman.yml```; the keypoint model and the fall recognition model correspond to the `KPT` and `SKELETON_ACTION` fields respectively, so change the path under the corresponding field to the desired one.
- Different model paths can be configured in ```./deploy/pipeline/config/infer_cfg_pphuman.yml```; the keypoint model and the fall recognition model correspond to the `KPT` and `SKELETON_ACTION` fields respectively, so change the path under the corresponding field to the desired one.
- Add `--model_dir` on the command line to override the model paths:
```python
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu \
--model_dir kpt=./dark_hrnet_w32_256x192 action=./STGCN
......@@ -93,10 +93,10 @@ python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphum
### Usage
1. Download the inference and deployment models from the links in the table above and extract them to `./output_inference`;
2. Rename the files in the extracted `ppTSM` folder to `model.pdiparams`, `model.pdiparams.info` and `model.pdmodel`;
3. In the config file `deploy/pphuman/config/infer_cfg_pphuman.yml`, set `enable` under `VIDEO_ACTION` to `True`;
3. In the config file `deploy/pipeline/config/infer_cfg_pphuman.yml`, set `enable` under `VIDEO_ACTION` to `True`;
4. For a video input, run the following command:
```
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu
```
......
......@@ -55,19 +55,19 @@ SKELETON_ACTION:
- Currently the only supported input for the action recognition module is video. Set "enable: True" in the SKELETON_ACTION section of infer_cfg_pphuman.yml, then run the command:
```python
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu
```
- There are two ways to modify the model path:
- In ```./deploy/pphuman/config/infer_cfg_pphuman.yml```, you can configure different model paths: the keypoint model and the action recognition model correspond to the `KPT` and `SKELETON_ACTION` fields respectively; change the path under each field to the desired path.
- In ```./deploy/pipeline/config/infer_cfg_pphuman.yml```, you can configure different model paths: the keypoint model and the action recognition model correspond to the `KPT` and `SKELETON_ACTION` fields respectively; change the path under each field to the desired path.
- Add `--model_dir` in the command line to revise the model path:
```python
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu \
--model_dir kpt=./dark_hrnet_w32_256x192 action=./STGCN
......
......@@ -18,22 +18,22 @@
1. Download the models from the links in the table above, extract them to ```./output_inference```, and set enable: True under `ATTR` in infer_cfg_pphuman.yml
2. For an image input, run the following command:
```python
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
--image_file=test_image.jpg \
--device=gpu \
```
3. For a video input, run the following command:
```python
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu \
```
4. There are two ways to modify the model path:
- Different model paths can be configured in ```./deploy/pphuman/config/infer_cfg_pphuman.yml```; for the attribute recognition model, modify the configuration under the ATTR field
- Different model paths can be configured in ```./deploy/pipeline/config/infer_cfg_pphuman.yml```; for the attribute recognition model, modify the configuration under the ATTR field
- **(Recommended)** Add `--model_dir` on the command line to override the model path:
```python
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu \
--model_dir det=ppyoloe/
......
......@@ -18,22 +18,22 @@ Pedestrian attribute recognition has been widely used in the intelligent communi
1. Download the model from the link in the table above, unzip it to ```./output_inference```, and set "enable: True" in the ATTR section of infer_cfg_pphuman.yml
2. When inputting the image, run the command as follows:
```python
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
--image_file=test_image.jpg \
--device=gpu \
```
3. When inputting the video, run the command as follows:
```python
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu \
```
4. If you want to change the model path, there are two methods:
- In ```./deploy/pphuman/config/infer_cfg_pphuman.yml``` you can configure different model paths; for the attribute recognition model, modify the configuration under the ATTR field.
- In ```./deploy/pipeline/config/infer_cfg_pphuman.yml``` you can configure different model paths; for the attribute recognition model, modify the configuration under the ATTR field.
- Add `--model_dir` in the command line to change the model path:
```python
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu \
--model_dir det=ppyoloe/
......
......@@ -17,22 +17,22 @@
1. Download the models from the links in the table above and extract them to ```./output_inference```
2. For an image input, it is a pure detection task; run the following command:
```python
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
--image_file=test_image.jpg \
--device=gpu
```
3. For a video input, it is a tracking task. Note that enable must first be set to True in the MOT section of infer_cfg_pphuman.yml; then run the following command:
```python
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu
```
4. There are two ways to modify the model paths:
- Different model paths can be configured in ```./deploy/pphuman/config/infer_cfg_pphuman.yml```; the detection and tracking models correspond to the `DET` and `MOT` fields respectively, so change the path under the corresponding field to the desired one.
- Different model paths can be configured in ```./deploy/pipeline/config/infer_cfg_pphuman.yml```; the detection and tracking models correspond to the `DET` and `MOT` fields respectively, so change the path under the corresponding field to the desired one.
- Add `--model_dir` on the command line to override the model paths:
```python
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu \
--do_entrance_counting \
......
......@@ -17,23 +17,23 @@ Pedestrian detection and tracking is widely used in the intelligent community, i
1. Download models from the links in the table above and unzip them to ```./output_inference```.
2. When using an image as input, it is a detection task; the start command is as follows:
```python
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
--image_file=test_image.jpg \
--device=gpu
```
3. When using a video as input, it is a tracking task. First set "enable: True" in the MOT section of infer_cfg_pphuman.yml, then the start command is as follows:
```python
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu
```
4. There are two ways to modify the model path:
- In `./deploy/pphuman/config/infer_cfg_pphuman.yml`, you can configure different model paths: the detection and tracking models correspond to the `DET` and `MOT` fields respectively; change the path under each field to the desired path.
- In `./deploy/pipeline/config/infer_cfg_pphuman.yml`, you can configure different model paths: the detection and tracking models correspond to the `DET` and `MOT` fields respectively; change the path under each field to the desired path.
- Add `--model_dir` in the command line to revise the model path:
```python
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu \
--model_dir det=ppyoloe/
......
......@@ -11,14 +11,14 @@ The main purpose of the PP-Human cross-camera tracking module is to provide a simple and efficient cross-camera
2. In cross-camera tracking mode, the input videos must be placed in the same directory, and enable=True must be set in the REID section of infer_cfg_pphuman.yml. The command is as follows:
```python
python3 deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml --video_dir=[your_video_file_directory] --device=gpu
python3 deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --video_dir=[your_video_file_directory] --device=gpu
```
3. The related configuration is modified in the `./deploy/pphuman/config/infer_cfg_pphuman.yml` file:
3. The related configuration is modified in the `./deploy/pipeline/config/infer_cfg_pphuman.yml` file:
```python
python3 deploy/pphuman/pipeline.py
--config deploy/pphuman/config/infer_cfg_pphuman.yml
python3 deploy/pipeline/pipeline.py
--config deploy/pipeline/config/infer_cfg_pphuman.yml
--video_dir=[your_video_file_directory]
--device=gpu
--model_dir reid=reid_best/
......
......@@ -11,14 +11,14 @@ The MTMCT module of PP-Human aims to provide a multi-target multi-camera pipleli
2. In MTMCT mode, the input videos must be placed in the same directory. Set "enable: True" in the REID section of infer_cfg_pphuman.yml. The command line is:
```python
python3 deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml --video_dir=[your_video_file_directory] --device=gpu
python3 deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --video_dir=[your_video_file_directory] --device=gpu
```
3. Configuration can be modified in `./deploy/pphuman/config/infer_cfg_pphuman.yml`.
3. Configuration can be modified in `./deploy/pipeline/config/infer_cfg_pphuman.yml`.
```python
python3 deploy/pphuman/pipeline.py
--config deploy/pphuman/config/infer_cfg_pphuman.yml
python3 deploy/pipeline/pipeline.py
--config deploy/pipeline/config/infer_cfg_pphuman.yml
--video_dir=[your_video_file_directory]
--device=gpu
--model_dir reid=reid_best/
......
......@@ -15,34 +15,25 @@
import os
import yaml
import glob
from collections import defaultdict
import cv2
import numpy as np
import math
import paddle
import sys
import copy
from collections import Sequence
from reid import ReID
from collections import Sequence, defaultdict
from datacollector import DataCollector, Result
from mtmct import mtmct_process
# add deploy path of PaddleDetection to sys.path
parent_path = os.path.abspath(os.path.join(__file__, *(['..'] * 2)))
sys.path.insert(0, parent_path)
from pipe_utils import argsparser, print_arguments, merge_cfg, PipeTimer
from pipe_utils import get_test_images, crop_image_with_det, crop_image_with_mot, parse_mot_res, parse_mot_keypoint
from python.infer import Detector, DetectorPicoDet
from python.attr_infer import AttrDetector
from python.keypoint_infer import KeyPointDetector
from python.keypoint_postprocess import translate_to_ori_images
from python.video_action_infer import VideoActionRecognizer
from python.action_infer import SkeletonActionRecognizer, DetActionRecognizer, ClsActionRecognizer
from python.action_utils import KeyPointBuff, ActionVisualHelper
from pipe_utils import argsparser, print_arguments, merge_cfg, PipeTimer
from pipe_utils import get_test_images, crop_image_with_det, crop_image_with_mot, parse_mot_res, parse_mot_keypoint
from python.preprocess import decode_image, ShortSizeScale
from python.visualize import visualize_box_mask, visualize_attr, visualize_pose, visualize_action, visualize_vehicleplate
......@@ -50,6 +41,13 @@ from pptracking.python.mot_sde_infer import SDE_Detector
from pptracking.python.mot.visualize import plot_tracking_dict
from pptracking.python.mot.utils import flow_statistic
from pphuman.attr_infer import AttrDetector
from pphuman.video_action_infer import VideoActionRecognizer
from pphuman.action_infer import SkeletonActionRecognizer, DetActionRecognizer, ClsActionRecognizer
from pphuman.action_utils import KeyPointBuff, ActionVisualHelper
from pphuman.reid import ReID
from pphuman.mtmct import mtmct_process
from ppvehicle.vehicle_plate import PlateRecognizer
from ppvehicle.vehicle_attr import VehicleAttr
......
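Since this commit moves the pipeline sources from deploy/pphuman to deploy/pipeline, an external script that wants to reuse these modules needs both deploy/ and deploy/pipeline/ on sys.path, mirroring the sys.path.insert call at the top of pipeline.py. A rough sketch, assuming it is run from the PaddleDetection repository root; the package layout beyond what this diff shows is an assumption:

```python
import os
import sys

# Make deploy/ (python.*, pptracking.*) and deploy/pipeline/ (pphuman.*, ppvehicle.*) importable.
repo_root = os.path.abspath(".")
sys.path.insert(0, os.path.join(repo_root, "deploy"))
sys.path.insert(0, os.path.join(repo_root, "deploy", "pipeline"))

from pphuman.attr_infer import AttrDetector        # noqa: E402
from ppvehicle.vehicle_attr import VehicleAttr     # noqa: E402
```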
......@@ -28,9 +28,9 @@ parent_path = os.path.abspath(os.path.join(__file__, *(['..'] * 2)))
sys.path.insert(0, parent_path)
from paddle.inference import Config, create_predictor
from utils import argsparser, Timer, get_current_memory_mb
from benchmark_utils import PaddleInferBenchmark
from infer import Detector, print_arguments
from python.utils import argsparser, Timer, get_current_memory_mb
from python.benchmark_utils import PaddleInferBenchmark
from python.infer import Detector, print_arguments
from attr_infer import AttrDetector
......
......@@ -29,11 +29,11 @@ import sys
parent_path = os.path.abspath(os.path.join(__file__, *(['..'])))
sys.path.insert(0, parent_path)
from benchmark_utils import PaddleInferBenchmark
from preprocess import preprocess, Resize, NormalizeImage, Permute, PadStride, LetterBoxResize, WarpAffine
from visualize import visualize_attr
from utils import argsparser, Timer, get_current_memory_mb
from infer import Detector, get_test_images, print_arguments, load_predictor
from python.benchmark_utils import PaddleInferBenchmark
from python.preprocess import preprocess, Resize, NormalizeImage, Permute, PadStride, LetterBoxResize, WarpAffine
from python.visualize import visualize_attr
from python.utils import argsparser, Timer, get_current_memory_mb
from python.infer import Detector, get_test_images, print_arguments, load_predictor
from PIL import Image, ImageDraw, ImageFont
......
......@@ -12,7 +12,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import motmetrics as mm
from pptracking.python.mot.visualize import plot_tracking
import os
import re
......
......@@ -29,9 +29,9 @@ parent_path = os.path.abspath(os.path.join(__file__, *(['..'] * 2)))
sys.path.insert(0, parent_path)
from paddle.inference import Config, create_predictor
from utils import argsparser, Timer, get_current_memory_mb
from benchmark_utils import PaddleInferBenchmark
from infer import Detector, print_arguments
from python.utils import argsparser, Timer, get_current_memory_mb
from python.benchmark_utils import PaddleInferBenchmark
from python.infer import Detector, print_arguments
from video_action_preprocess import VideoDecoder, Sampler, Scale, CenterCrop, Normalization, Image2Array
......@@ -96,8 +96,8 @@ class VideoActionRecognizer(object):
self.recognize_times = Timer()
model_file_path = os.path.join(model_dir, "model.pdmodel")
params_file_path = os.path.join(model_dir, "model.pdiparams")
model_file_path = os.path.join(model_dir, "ppTSM.pdmodel")
params_file_path = os.path.join(model_dir, "ppTSM.pdiparams")
self.config = Config(model_file_path, params_file_path)
if device == "GPU" or device == "gpu":
......
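The renamed model files above feed straight into a Paddle Inference `Config`. For reference, a minimal sketch of what the surrounding constructor does with these two paths; the model directory, GPU memory-pool size and device id are placeholder values:

```python
import os
from paddle.inference import Config, create_predictor

model_dir = "output_inference/ppTSM"   # assumed location of the exported PP-TSM model
model_file_path = os.path.join(model_dir, "ppTSM.pdmodel")
params_file_path = os.path.join(model_dir, "ppTSM.pdiparams")

config = Config(model_file_path, params_file_path)
config.enable_use_gpu(200, 0)          # initial GPU memory pool (MB), device id
predictor = create_predictor(config)
```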
......@@ -28,10 +28,10 @@ parent_path = os.path.abspath(os.path.join(__file__, *(['..'] * 3)))
sys.path.insert(0, parent_path)
from paddle.inference import Config, create_predictor
from utils import argsparser, Timer, get_current_memory_mb
from benchmark_utils import PaddleInferBenchmark
from python.utils import argsparser, Timer, get_current_memory_mb
from python.benchmark_utils import PaddleInferBenchmark
from python.infer import Detector, print_arguments
from python.attr_infer import AttrDetector
from pipeline.pphuman.attr_infer import AttrDetector
class VehicleAttr(AttrDetector):
......
......@@ -29,9 +29,9 @@ sys.path.insert(0, parent_path)
from python.infer import get_test_images
from python.preprocess import preprocess, NormalizeImage, Permute, Resize_Mult32
from pphuman.ppvehicle.vehicle_plateutils import create_predictor, get_infer_gpuid, get_rotate_crop_image, draw_boxes
from pphuman.ppvehicle.vehicleplate_postprocess import build_post_process
from pphuman.pipe_utils import merge_cfg, print_arguments, argsparser
from pipeline.ppvehicle.vehicle_plateutils import create_predictor, get_infer_gpuid, get_rotate_crop_image, draw_boxes
from pipeline.ppvehicle.vehicleplate_postprocess import build_post_process
from pipeline.pipe_utils import merge_cfg, print_arguments, argsparser
class PlateDetector(object):
......@@ -62,18 +62,17 @@ class PlateDetector(object):
self.predictor, self.input_tensor, self.output_tensors, self.config = create_predictor(
args, cfg, 'det')
def preprocess(self, image_list):
def preprocess(self, im_path):
preprocess_ops = []
for op_type, new_op_info in self.pre_process_list.items():
preprocess_ops.append(eval(op_type)(**new_op_info))
input_im_lst = []
input_im_info_lst = []
for im_path in image_list:
im, im_info = preprocess(im_path, preprocess_ops)
input_im_lst.append(im)
input_im_info_lst.append(im_info['im_shape'] /
im_info['scale_factor'])
im, im_info = preprocess(im_path, preprocess_ops)
input_im_lst.append(im)
input_im_info_lst.append(im_info['im_shape'] / im_info['scale_factor'])
return np.stack(input_im_lst, axis=0), input_im_info_lst
......@@ -119,27 +118,28 @@ class PlateDetector(object):
def predict_image(self, img_list):
st = time.time()
img, shape_list = self.preprocess(img_list)
if img is None:
return None, 0
self.input_tensor.copy_from_cpu(img)
self.predictor.run()
outputs = []
for output_tensor in self.output_tensors:
output = output_tensor.copy_to_cpu()
outputs.append(output)
preds = {}
preds['maps'] = outputs[0]
#self.predictor.try_shrink_memory()
post_result = self.postprocess_op(preds, shape_list)
dt_batch_boxes = []
for idx in range(len(post_result)):
org_shape = img_list[idx].shape
dt_boxes = post_result[idx]['points']
for image in img_list:
img, shape_list = self.preprocess(image)
if img is None:
return None, 0
self.input_tensor.copy_from_cpu(img)
self.predictor.run()
outputs = []
for output_tensor in self.output_tensors:
output = output_tensor.copy_to_cpu()
outputs.append(output)
preds = {}
preds['maps'] = outputs[0]
#self.predictor.try_shrink_memory()
post_result = self.postprocess_op(preds, shape_list)
# print("post_result length:{}".format(len(post_result)))
org_shape = image.shape
dt_boxes = post_result[0]['points']
dt_boxes = self.filter_tag_det_res(dt_boxes, org_shape)
dt_batch_boxes.append(dt_boxes)
......
......@@ -52,7 +52,7 @@ python deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inferenc
```
**Note:**
- For keypoint model export and inference, see [keypoint](../../configs/keypoint/README.md); detailed usage can be found in each model's documentation;
- The keypoint deployment in this directory provides only the basic forward pass; for more keypoint features, use the PP-Human project, see [pphuman](../pphuman/README.md)
- The keypoint deployment in this directory provides only the basic forward pass; for more keypoint features, use the PP-Human project, see [pipeline](../pipeline/README.md)
### 2.3 Multi-Object Tracking
......@@ -70,7 +70,7 @@ python deploy/python/mot_keypoint_unite_infer.py --mot_model_dir=output_inferenc
**Note:**
- For multi-object tracking model export and inference, see [mot](../../configs/mot/README.md); detailed usage can be found in each model's documentation;
- The tracking deployment in this directory provides the basic forward pass and joint keypoint deployment; for more tracking features, use the PP-Human project, see [pphuman](../pphuman/README.md), or the PP-Tracking project (trajectory drawing, entrance/exit flow counting), see [pptracking](../pptracking/README.md)
- The tracking deployment in this directory provides the basic forward pass and joint keypoint deployment; for more tracking features, use the PP-Human project, see [pipeline](../pipeline/README.md), or the PP-Tracking project (trajectory drawing, entrance/exit flow counting), see [pptracking](../pptracking/README.md)
The parameters are described as follows:
......
......@@ -428,7 +428,7 @@ def visualize_vehicleplate(im, results, boxes=None):
if text == "":
continue
text_w = int(box[2])
text_h = int(box[5])
text_h = int(box[5] + box[3])
text_loc = (text_w, text_h)
cv2.putText(
im,
......
......@@ -55,12 +55,12 @@ python split_fight_train_test_dataset.py "rawframes" 2 0.8
Parameter description: "rawframes" is the folder that stores the video frames; 2 means the directory structure has two levels, the second level being one subfolder per action class; 0.8 is the training-set ratio.
The `split_fight_train_test_dataset.py` script is located under `deploy/pphuman/tools`.
The `split_fight_train_test_dataset.py` script is located under `deploy/pipeline/tools`.
After the command finishes, two files are generated: fight_train_list.txt and fight_val_list.txt. The label for fighting is 1 and for non-fighting is 0.
#### Video Clipping
Untrimmed videos must be clipped before they can be used for training. `deploy/pphuman/tools/clip_video.py` provides the clipping function `cut_video`, whose inputs are the video path, the start and end frames of the clip, and the save path of the clipped video.
Untrimmed videos must be clipped before they can be used for training. `deploy/pipeline/tools/clip_video.py` provides the clipping function `cut_video`, whose inputs are the video path, the start and end frames of the clip, and the save path of the clipped video; a usage sketch follows below.
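A hedged usage sketch of `cut_video` based on the description above; the argument order (video path, start frame, end frame, output path) is inferred from the prose, so check the actual signature in clip_video.py before relying on it:

```python
# Hypothetical usage; verify the real signature in deploy/pipeline/tools/clip_video.py.
from clip_video import cut_video

cut_video(
    "raw_videos/fight_0001.mp4",            # path of the source video
    110,                                    # start frame of the clip
    250,                                    # end frame of the clip
    "clipped_videos/fight_0001_clip.mp4",   # save path of the clipped video
)
```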
## Model Optimization
......