Unverified Commit 5019c3bc authored by: F Feng Ni, committed by: GitHub

fix pphuman cfg doc (#6288)

* fix pphuman cfg doc, test=document_fix

* fix doc, test=document_fix

* remove en readme doc, test=document_fix
Parent 47d40bf5
......@@ -26,13 +26,17 @@ PP-Human supports image/single-camera video/multi-camera video inputs; its functions
## 🗳 Model Zoo
| Task | Scenario | Precision | Inference Speed (ms) | Model Weights | Inference & Deployment Model |
|:-----:|:------------:|:---------- |:--------:|:----------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------:|
| Object Detection (high-precision) | Image input | mAP: 56.6 | 28.0ms |[Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.pdparams) |[Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) |
| Object Detection (lightweight) | Image input | mAP: 53.2 | 22.1ms |[Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.pdparams) |[Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) |
| Object Tracking (high-precision) | Video input | MOTA: 79.5 | 33.1ms |[Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.pdparams) |[Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) |
| Object Tracking (lightweight) | Video input | MOTA: 69.1 | 27.2ms |[Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.pdparams) |[Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) |
| Attribute Recognition | Image/video input, attribute recognition | mA: 94.86 | 2ms per person | - |[Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/strongbaseline_r50_30e_pa100k.zip) |
| Keypoint Detection | Video input, action recognition | AP: 87.1 | 2.9ms per person |[Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.pdparams) |[Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) |
| Action Recognition | Video input, action recognition | Accuracy: 96.43 | 2.7ms per person | - |[Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) |
| ReID | Video input, cross-camera tracking | mAP: 98.8 | 1.5ms per person | - |[Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) |
Download the inference and deployment models and unzip them into a new `./output_inference` directory
## 📚 Documentation Tutorials
......
English | [简体中文](README.md)
# PP-Human: A Real-Time Pedestrian Analysis Tool
PP-Human is the first open-source real-time pedestrian analysis tool built on the PaddlePaddle deep learning framework. Versatile and easy to deploy, it has been used in a variety of scenarios. PP-Human
offers multiple input options, including image, single-camera video, and multi-camera video, and covers multi-object tracking, attribute recognition, and action recognition. It can be applied to intelligent transportation, smart communities, industrial patrols, and more. It supports server-side deployment and TensorRT acceleration, achieving real-time analysis on a T4 server.
For intelligent community management powered by PP-Human, please refer to this [AI Studio project](https://aistudio.baidu.com/aistudio/projectdetail/3679564) for a quick-start tutorial.
For a full-pipeline tutorial on PP-Human covering training, deployment, and adding new actions, please refer to this [AI Studio project](https://aistudio.baidu.com/aistudio/projectdetail/3842982).
## I. Environment Preparation
Requirement: PaddleDetection version >= release/2.4 or develop
Install PaddlePaddle and PaddleDetection:
```
# PaddlePaddle CUDA10.1
python -m pip install paddlepaddle-gpu==2.2.2.post101 -f https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html
# PaddlePaddle CPU
python -m pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple
# Clone the PaddleDetection repository
cd <path/to/clone/PaddleDetection>
git clone https://github.com/PaddlePaddle/PaddleDetection.git
# Install other dependencies
cd PaddleDetection
pip install -r requirements.txt
```
1. For details of the installation, please refer to this [document](../../docs/tutorials/INSTALL.md)
2. Please install `Paddle-TensorRT` if you want to speed up inference with TensorRT. You can download the whl package from the [Paddle whl list](https://paddleinference.paddlepaddle.org.cn/v2.2/user_guides/download_lib.html#python), or prepare the environment yourself following the [Install Guide](https://www.paddlepaddle.org.cn/inference/master/optimize/paddle_trt.html).
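For example, once you have picked a wheel that matches your CUDA/cuDNN/TensorRT/Python versions, installation is a plain `pip install`; the filename below is only a placeholder, not a real package name:
```
# Hypothetical wheel filename -- substitute the one you actually downloaded
pip install paddlepaddle_gpu-2.2.2.post101-cp38-cp38-linux_x86_64.whl
```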
## II. Quick Start
### 1. Model Download
To give users access to models for different scenarios, PP-Human provides pre-trained models for object detection, attribute recognition, falling recognition, and ReID.
| Task | Scenario | Precision | Inference Speed (ms) | Model Weights | Model for Inference and Deployment |
| :---------: |:---------: |:--------------- | :-------: | :------: | :------: |
| Object Detection(high-precision) | Image/Video Input | mAP: 56.6 | 28.0ms |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.pdparams) |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) |
| Object Detection(light-weight) | Image/Video Input | mAP: 53.2 | 22.1ms |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.pdparams) |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) |
| Object Tracking(high-precision) | Image/Video Input | MOTA: 79.5 | 33.1ms |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.pdparams) |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) |
| Object Tracking(light-weight) | Image/Video Input | MOTA: 69.1 | 27.2ms |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.pdparams) |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) |
| Attribute Recognition | Image/Video Input Attribute Recognition | mA: 94.86 | 2ms per person | - |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/strongbaseline_r50_30e_pa100k.zip) |
| Keypoint Detection | Video Input Falling Recognition | AP: 87.1 | 2.9ms per person | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.pdparams) |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) |
| Falling Recognition | Video Input Falling Recognition | Accuracy: 96.43 | 2.7ms per person | - |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) |
| ReID | Multi-Target Multi-Camera Tracking | mAP: 98.8 | 1.5ms per person | - |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) |
Then, unzip the downloaded models to the folder `./output_inference` (see the example after the notes below).
**Note:**
- The model precision comes from training on a fusion of open-source and enterprise datasets.
- The precision of the ReID model is evaluated on Market1501.
- The inference speed is tested on a T4 GPU with TensorRT FP16, and covers the whole pipeline of preprocessing, inference, and postprocessing.
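For example, the high-precision detection/tracking model can be fetched and unpacked like this (URL taken from the table above):
```
mkdir -p output_inference && cd output_inference
# Download the inference model and unzip it in place
wget https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip
unzip mot_ppyoloe_l_36e_pipeline.zip
```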
### 2. Preparation of Configuration Files
Configuration files of PP-Human are stored in ```deploy/pphuman/config/infer_cfg.yml```. Different functions require different task types, so you need to set the task type beforehand.
Their correspondence is as follows:
| Input | Function | Task Type | Config |
|-------|-------|----------|-----|
| Image | Attribute Recognition | Object Detection Attribute Recognition | DET ATTR |
| Single-Camera Video | Attribute Recognition | Multi-Object Tracking Attribute Recognition | MOT ATTR |
| Single-Camera Video | Behavior Recognition | Multi-Object Tracking Keypoint Detection Falling Recognition | MOT KPT SKELETON_ACTION |
For example, attribute recognition with video input involves the multi-object tracking and attribute recognition task types, and its config is:
```
crop_thresh: 0.5
attr_thresh: 0.5
visual: True
MOT:
model_dir: output_inference/mot_ppyoloe_l_36e_pipeline/
tracker_config: deploy/pphuman/config/tracker_config.yml
batch_size: 1
ATTR:
model_dir: output_inference/strongbaseline_r50_30e_pa100k/
batch_size: 8
```
**Note:**
- For different tasks, users should set "enable" to "True" in the corresponding sections of the infer_cfg.yml file.
- If you only need to change the model path, you can add `--model_dir det=ppyoloe/` on the command line instead of editing the config file. For details, please refer to the docs below.
### 3. Inference and Deployment
```
# Pedestrian detection. Specify the config file path and test images
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --image_file=test_image.jpg --device=gpu [--run_mode trt_fp16]
# Pedestrian tracking. Specify the config file path and test videos
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16]
# Pedestrian tracking. Specify the config file path, the model path and test videos
# The model path specified on the command line takes precedence over the config file
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --video_file=test_video.mp4 --device=gpu --model_dir det=ppyoloe/ [--run_mode trt_fp16]
# Attribute recognition. Specify the config file path and test videos, and set the "enable" to "True" in ATTR of infer_cfg.yml
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16]
# Action Recognition. Specify the config file path and test videos, and set the "enable" to "True" in corresponding action configs of infer_cfg.yml
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16]
# Pedestrian Multi-Target Multi-Camera tracking. Specify the config file path and the directory of test videos, and set the "enable" to "True" in REID in infer_cfg.yml
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --video_dir=mtmct_dir/ --device=gpu [--run_mode trt_fp16]
```
For other usage, please refer to the [sub-task docs](./docs).
### 3.1 Description of Parameters
| Parameter | Required | Description |
|-------|-------|----------|
| --config | Yes | Config file path |
| --model_dir | Optional | Model paths for specific tasks in PP-Human, with higher priority than the config file. For example, `--model_dir det=better_det/ attr=better_attr/` |
| --image_file | Optional | Image to be predicted |
| --image_dir | Optional | Path of a folder of images to be predicted |
| --video_file | Optional | Video to be predicted |
| --camera_id | Optional | ID of the camera to run inference on; -1 by default (no camera input; can be set to 0 through number of cameras - 1). During inference, press `q` in the visualization window to exit and save the result to output/output.mp4 |
| --device | Optional | Device to run on: `CPU/GPU/XPU`; `CPU` by default |
| --output_dir | Optional | Root directory for visualization results; output/ by default |
| --run_mode | Optional | Run mode when using GPU; `paddle` by default (options: paddle/trt_fp32/trt_fp16/trt_int8) |
| --enable_mkldnn | Optional | Whether to enable MKLDNN acceleration for CPU inference; False by default |
| --cpu_threads | Optional | Number of CPU threads; 1 by default |
| --trt_calib_mode | Optional | Whether TensorRT runs calibration; False by default. Set to True when using TensorRT int8, and to False when using a model quantized by PaddleSlim |
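As an illustration, these flags compose freely; a sketch of a camera-input run with TensorRT FP16 and a custom output directory (the flag values here are examples only):
```
# Run the pipeline on camera 0, with TensorRT FP16 and a custom output directory
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml \
                                  --camera_id=0 \
                                  --device=gpu \
                                  --run_mode=trt_fp16 \
                                  --output_dir=my_output/
```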
## III. Introduction to the Solution
The overall solution of PP-Human is as follows:
<div width="1000" align="center">
<img src="https://user-images.githubusercontent.com/48054808/160078395-e7b8f2db-1d1c-439a-91f4-2692fac25511.png"/>
</div>
### 1. Object Detection
- Use PP-YOLOE L as the model of object detection
- For details, please refer to [PP-YOLOE](../../configs/ppyoloe/) and [Detection and Tracking](docs/mot_en.md)
### 2. Multi-Object Tracking
- Conduct multi-object tracking with the SDE solution
- Use PP-YOLOE L as the detection model
- Use ByteTrack for tracking
- For details, refer to [ByteTrack](configs/mot/bytetrack) and [Detection and Tracking](docs/mot_en.md)
### 3. Multi-Camera Tracking
- Use PP-YOLOE + ByteTrack to obtain single-camera multi-object tracks
- Use ReID (centroid network) to extract features from the detection results of each frame
- Match the track features across cameras to get the cross-camera tracking result (see the sketch after this list)
- For details, please refer to [Multi-Camera Tracking](docs/mtmct_en.md)
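As a mental model only, the matching step can be sketched as greedy assignment on the cosine similarity of averaged per-track ReID features; this is an illustrative sketch, not the actual PP-Human implementation, and all names in it are made up:
```python
import numpy as np

def match_tracks(feats_a, feats_b, sim_thresh=0.5):
    """Greedily match tracks between two cameras.

    feats_a / feats_b: {track_id: (N, D) array of per-frame ReID features}.
    Returns a list of (id_a, id_b, similarity) matches.
    """
    def mean_norm(f):
        m = f.mean(axis=0)                      # average features over frames
        return m / (np.linalg.norm(m) + 1e-12)  # L2-normalize for cosine similarity

    emb_a = {i: mean_norm(f) for i, f in feats_a.items()}
    emb_b = {j: mean_norm(f) for j, f in feats_b.items()}
    # All candidate pairs, best similarity first
    pairs = sorted(((float(emb_a[i] @ emb_b[j]), i, j)
                    for i in emb_a for j in emb_b), reverse=True)
    matches, used_a, used_b = [], set(), set()
    for sim, i, j in pairs:
        if sim >= sim_thresh and i not in used_a and j not in used_b:
            matches.append((i, j, sim))
            used_a.add(i)
            used_b.add(j)
    return matches
```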
### 4. Attribute Recognition
- Use PP-YOLOE + ByteTrack to track pedestrians
- Use StrongBaseline (a multi-label classification model) for attribute recognition; the main attributes include age, gender, hat, glasses, clothing style, and backpack
- For details, please refer to [Attribute Recognition](docs/attribute_en.md)
### 5. Falling Recognition
- Use PP-YOLOE + ByteTrack to track pedestrians
- Use HRNet for keypoint detection to obtain the 17 keypoints of each human body
- Based on how the same person's keypoints change over 50 frames, use ST-GCN to judge whether the action within those 50 frames is a fall (see the sketch below)
- For details, please refer to [Falling Recognition](docs/action_en.md)
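To make the 50-frame window concrete, here is a minimal sketch (an assumption for illustration, not PP-Human's actual code) of how per-track keypoint sequences could be buffered and handed to an ST-GCN classifier; `stgcn_predict` is a hypothetical callable:
```python
from collections import defaultdict, deque

WINDOW_SIZE = 50  # frames of keypoints fed to ST-GCN per prediction

# one fixed-length keypoint buffer per tracked person
kpt_buffers = defaultdict(lambda: deque(maxlen=WINDOW_SIZE))

def update_and_classify(track_id, keypoints, stgcn_predict):
    """Append this frame's 17 keypoints for one tracked person; once the
    buffer holds 50 frames, classify the sequence (e.g. "fall" / "normal")."""
    kpt_buffers[track_id].append(keypoints)  # keypoints: (17, 2) coordinates
    if len(kpt_buffers[track_id]) == WINDOW_SIZE:
        return stgcn_predict(list(kpt_buffers[track_id]))
    return None
```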
......@@ -31,8 +31,10 @@ PP-Human provides pretrained models for object detection, attribute recognition, action recognition, and ReID
| Task | Scenario | Precision | Inference Speed (ms) | Model Weights | Inference & Deployment Model |
| :---------: |:---------: |:--------------- | :-------: | :------: | :------: |
| Object Detection (high-precision) | Image input | mAP: 56.6 | 28.0ms |[Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.pdparams) |[Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) |
| Object Detection (lightweight) | Image input | mAP: 53.2 | 22.1ms |[Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.pdparams) |[Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) |
| Object Tracking (high-precision) | Video input | MOTA: 79.5 | 33.1ms |[Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.pdparams) |[Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) |
| Object Tracking (lightweight) | Video input | MOTA: 69.1 | 27.2ms |[Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.pdparams) |[Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) |
| Attribute Recognition | Image/video input, attribute recognition | mA: 94.86 | 2ms per person | - |[Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/strongbaseline_r50_30e_pa100k.zip) |
| Keypoint Detection | Video input, action recognition | AP: 87.1 | 2.9ms per person |[Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.pdparams) |[Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) |
| Action Recognition | Video input, action recognition | Accuracy: 96.43 | 2.7ms per person | - |[Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) |
......@@ -48,7 +50,7 @@ PP-Human provides pretrained models for object detection, attribute recognition, action recognition, and ReID
## III. Configuration Files
PP-Human configuration lives in ```deploy/pphuman/config/infer_cfg_pphuman.yml```, which stores the model paths; different functions require setting different task types
The mapping between functions and task types is as follows:
......@@ -70,6 +72,7 @@ MOT:
tracker_config: deploy/pphuman/config/tracker_config.yml
batch_size: 1
basemode: "idbased"
enable: False
ATTR:
model_dir: output_inference/strongbaseline_r50_30e_pa100k/
......@@ -81,30 +84,30 @@ ATTR:
**Note:**
- To run a given task, set the corresponding `enable` option to True in the config file; its `basemode` type makes the code turn on the base-capability models it depends on, such as the tracking model.
- If you only need to change a model path, you can add `--model_dir det=ppyoloe/` on the command line, or manually edit the corresponding model path in the config file; see the parameter description below for details. A sketch of such a config edit follows.
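For instance, to run attribute recognition on a video, a minimal sketch of the config edit looks like the following; only the two `enable` flags change, and the exact field set should follow the file itself:
```
MOT:
  model_dir: output_inference/mot_ppyoloe_l_36e_pipeline/
  tracker_config: deploy/pphuman/config/tracker_config.yml
  batch_size: 1
  basemode: "idbased"
  enable: True   # video-input tasks rely on tracking as the base capability

ATTR:
  model_dir: output_inference/strongbaseline_r50_30e_pa100k/
  batch_size: 8
  enable: True   # turn on attribute recognition
```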
### IV. Inference and Deployment
```
# Pedestrian detection: specify the config file path and the test image
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml --image_file=test_image.jpg --device=gpu [--run_mode trt_fp16]
# Pedestrian tracking: specify the config file path and the test video, and set enable to ```True``` in the MOT section of ```deploy/pphuman/config/infer_cfg_pphuman.yml```
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16]
# Pedestrian tracking: specify the config file path, the model path, and the test video, and set enable to ```True``` in the MOT section of ```deploy/pphuman/config/infer_cfg_pphuman.yml```
# The model path specified on the command line takes precedence over the config file
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml --video_file=test_video.mp4 --device=gpu --model_dir det=ppyoloe/ [--run_mode trt_fp16]
# Pedestrian attribute recognition: specify the config file path and the test video, and set enable to ```True``` in the ATTR section of ```deploy/pphuman/config/infer_cfg_pphuman.yml```
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16]
# Action recognition: specify the config file path and the test video, and set enable to ```True``` in the SKELETON_ACTION section of ```deploy/pphuman/config/infer_cfg_pphuman.yml```
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16]
# Pedestrian multi-target multi-camera (cross-camera) tracking: specify the config file path and the directory of test videos, and set enable to ```True``` in the REID section of ```deploy/pphuman/config/infer_cfg_pphuman.yml```
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml --video_dir=mtmct_dir/ --device=gpu [--run_mode trt_fp16]
```
### 4.1 Parameter Description
......
......@@ -26,7 +26,7 @@
4. Inference speed is measured on an NVIDIA T4 with TensorRT FP16, and covers the full pipeline of data preprocessing, model inference, and postprocessing.
## Configuration
The parameters related to action recognition in the [config file](../config/infer_cfg_pphuman.yml) are as follows:
```
SKELETON_ACTION:
model_dir: output_inference/STGCN # path of the model
......@@ -40,18 +40,18 @@ SKELETON_ACTION:
## How to Use
1. Download the models from the links in the table above and unzip them to ```./output_inference```.
2. Currently the action recognition module only supports video input. Set `enable: True` under `SKELETON_ACTION` in infer_cfg_pphuman.yml, then run:
```
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu \
```
3. To change the model path, there are two ways:
- Different model paths can be configured in ```./deploy/pphuman/config/infer_cfg_pphuman.yml```; the keypoint model and the falling recognition model correspond to the `KPT` and `SKELETON_ACTION` fields respectively. Change the paths under those fields to the desired ones.
- Add `--model_dir` on the command line to change the model path:
```
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu \
--model_dir kpt=./dark_hrnet_w32_256x192 action=./STGCN
......
......@@ -32,7 +32,7 @@ Note:
## Description of Configuration
Parameters related to action recognition in the [config file](../config/infer_cfg_pphuman.yml) are as follows:
```
SKELETON_ACTION:
......@@ -52,22 +52,22 @@ SKELETON_ACTION:
- Download models from the links of the above table and unzip them to ```./output_inference```.
- Currently the action recognition module only supports video input. Set "enable: True" in the SKELETON_ACTION section of infer_cfg_pphuman.yml, and then run the command:
```
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu
```
- There are two ways to modify the model path:
- In ```./deploy/pphuman/config/infer_cfg_pphuman.yml```, you can configure different model paths; the keypoint model and the action recognition model correspond to the `KPT` and `SKELETON_ACTION` fields respectively. Modify the path under each field to the expected one.
- Add `--model_dir` in the command line to revise the model path:
```
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu \
--model_dir kpt=./dark_hrnet_w32_256x192 action=./STGCN
......
......@@ -15,30 +15,27 @@
## How to Use
1. Download the models from the links in the table above and unzip them to ```./output_inference```, then set `enable: True` under `ATTR` in infer_cfg_pphuman.yml.
2. For image input, run:
```
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
--image_file=test_image.jpg \
--device=gpu
```
3. For video input, run:
```
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu
```
4. To change the model path, there are two ways:
- Different model paths can be configured in ```./deploy/pphuman/config/infer_cfg_pphuman.yml```; for the attribute recognition model, modify the configuration under the ATTR field.
- **(Recommended)** Add `--model_dir` on the command line to change the model path:
```
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu \
--model_dir det=ppyoloe/
```
......
......@@ -15,30 +15,27 @@ Pedestrian attribute recognition has been widely used in the intelligent communi
## Instruction
1. Download the model from the link in the table above, unzip it to ```./output_inference```, and set "enable: True" in the ATTR section of infer_cfg_pphuman.yml.
2. When the input is an image, run the command as follows:
```
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
--image_file=test_image.jpg \
--device=gpu
```
3. When the input is a video, run the command as follows:
```
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu
```
4. If you want to change the model path, there are two methods:
- In ```./deploy/pphuman/config/infer_cfg_pphuman.yml```, you can configure different model paths; for the attribute recognition model, modify the configuration under the ATTR field.
- Add `--model_dir` in the command line to change the model path:
```
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu \
--model_dir det=ppyoloe/
```
......
......@@ -15,24 +15,24 @@
## How to Use
1. Download the models from the links in the table above and unzip them to ```./output_inference```.
2. For image input, it is a pure detection task; run:
```
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
--image_file=test_image.jpg \
--device=gpu
```
3. For video input, it is a tracking task. Note that you must first set `enable: True` in the MOT section of infer_cfg_pphuman.yml, then run:
```
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu
```
4. To change the model path, there are two ways:
- Different model paths can be configured in ```./deploy/pphuman/config/infer_cfg_pphuman.yml```; the detection and tracking models correspond to the `DET` and `MOT` fields respectively. Change the paths under those fields to the desired ones.
- Add `--model_dir` on the command line to change the model path:
```
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu \
--do_entrance_counting \
......
......@@ -15,25 +15,25 @@ Pedestrian detection and tracking is widely used in the intelligent community, i
## How to Use
1. Download models from the links in the table above and unzip them to ```./output_inference```.
2. When the input is an image, it is a detection task; the start command is as follows:
```
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
--image_file=test_image.jpg \
--device=gpu
```
3. When the input is a video, it is a tracking task. First set "enable: True" in the MOT section of infer_cfg_pphuman.yml, and then the start command is as follows:
```
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu
```
4. There are two ways to modify the model path:
- In `./deploy/pphuman/config/infer_cfg_pphuman.yml`, you can configure different model paths; the detection and tracking models correspond to the `DET` and `MOT` fields respectively. Modify the path under each field to the expected path.
- Add `--model_dir` in the command line to revise the model path:
```
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu \
--model_dir det=ppyoloe/
......
......@@ -9,16 +9,16 @@ The PP-Human cross-camera tracking module aims to provide a simple and efficient cross-camera
1. Download the [REID model](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) and unzip it to ```./output_inference```; for the MOT model, please refer to the [mot doc](./mot.md) to download it.
2. In cross-camera tracking mode, the input videos must be placed in the same directory, and enable=True must be set in the REID section of infer_cfg_pphuman.yml. The command is:
```
python3 deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml --video_dir=[your_video_file_directory] --device=gpu
```
3. Related configuration can be modified in the `./deploy/pphuman/config/infer_cfg_pphuman.yml` file:
```
python3 deploy/pphuman/pipeline.py
--config deploy/pphuman/config/infer_cfg_pphuman.yml
--video_dir=[your_video_file_directory]
--device=gpu
--model_dir reid=reid_best/
......
......@@ -9,16 +9,16 @@ The MTMCT module of PP-Human aims to provide a multi-target multi-camera pipeline
1. Download [REID model](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) and unzip it to ```./output_inference```. For the MOT model, please refer to [mot description](./mot.md).
2. In the MTMCT mode, input videos are required to be put in the same directory. Set "enable: True" in the REID section of infer_cfg_pphuman.yml. The command line is:
```
python3 deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg_pphuman.yml --video_dir=[your_video_file_directory] --device=gpu
```
3. Configuration can be modified in `./deploy/pphuman/config/infer_cfg_pphuman.yml`:
```
python3 deploy/pphuman/pipeline.py
--config deploy/pphuman/config/infer_cfg_pphuman.yml
--video_dir=[your_video_file_directory]
--device=gpu
--model_dir reid=reid_best/
......
......@@ -869,6 +869,7 @@ class PipePredictor(object):
online_scores[0] = scores
online_ids[0] = ids
if mot_res is not None:
    image = plot_tracking_dict(
        image,
        num_classes,
......