| Single-Camera Video | Attribute Recognition | Multi-Object Tracking, Attribute Recognition | MOT ATTR |
| Single-Camera Video | Behavior Recognition | Multi-Object Tracking, Keypoint Detection, Falling Recognition | MOT KPT FALLING |
| Single-Camera Video | Behavior Recognition | Multi-Object Tracking, Keypoint Detection, Falling Recognition | MOT KPT SKELETON_ACTION |
For example, attribute recognition with video input involves the task types multi-object tracking and attribute recognition, so its config is:
...
...
@@ -89,7 +89,7 @@ ATTR:
**Note:**
- For different tasks, users can add `--enable_attr=True` or `--enable_falling=True` on the command line, with no need to edit the config file.
- For different tasks, users should set `enable: True` in the corresponding section of the infer_cfg.yml file.
- If you only need to change the model path, add `--model_dir det=ppyoloe/` on the command line instead of editing the config file, as sketched below. For details, please refer to the docs below.
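A rough sketch of both options (the entry script path `deploy/pphuman/pipeline.py` and the video file name are assumptions; check your local repository layout):

```
# Enable attribute recognition from the command line; no config edit needed
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml \
    --video_file=test_video.mp4 --device=gpu --enable_attr=True

# Override only the detection model path; everything else still comes from the config
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml \
    --video_file=test_video.mp4 --model_dir det=ppyoloe/
```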
# Pedestrian multi-target multi-camera tracking. Specify the config file path and the directory of test videos, and set `enable: True` under REID in infer_cfg.yml
| --camera_id | Option | ID of the camera used for inference; the default is -1 (no camera input), and valid IDs range from 0 to (number of cameras - 1). During inference, press `q` in the visualization window to exit and save the result to output/output.mp4 |
| --enable_attr | Option | Whether to enable attribute recognition |
| --enable_falling | Option | Whether to enable falling recognition |
| --device | Option | Device for inference; the options are `CPU/GPU/XPU`, and the default is `CPU` |
| --output_dir | Option | Root directory for saving the visualization results; the default is output/ |
| --run_mode | Option | Run mode when using GPU; the default is paddle, and the options are paddle/trt_fp32/trt_fp16/trt_int8 |
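For example, a camera-input run with TensorRT FP16 might be launched as below (a sketch; the entry script path is an assumption, as above):

```
# Camera 0 as input, GPU inference with TensorRT FP16, results written under output/
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml \
    --camera_id=0 --device=gpu --run_mode=trt_fp16 --output_dir=output/
```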
Parameters related to action recognition in the [config file](../config/infer_cfg.yml) are as follows:
```
FALLING:
SKELETON_ACTION:
  model_dir: output_inference/STGCN # Path of the model
  batch_size: 1 # Inference batch size; currently only 1 is supported
  max_frames: 50 # Number of frames in an action segment. When the time-ordered keypoint sequence of a pedestrian ID reaches this length, the action recognition model predicts the action type. Keeping this the same as the training setting gives the best results.
  display_frames: 80 # Number of frames to display the result. When a falling action is detected, the label stays on that ID for this many frames.
  coord_size: [384, 512] # Keypoint coordinates are rescaled to this unified size; best kept the same as the training setting
  basemode: "skeletonbased" # Input modality of the model, indicating that the skeleton (keypoint) model is required
  enable: False # Whether to enable this function
```
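With `enable: True` set in the block above, a minimal sketch of a run (the entry script path and video file name are assumptions):

```
# Skeleton-based action recognition on a video, with the models configured in infer_cfg.yml
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml \
    --video_file=test_video.mp4 --device=gpu
```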
...
...
@@ -50,18 +52,17 @@ FALLING:
- Download the models from the links in the table above and unzip them to ```./output_inference```.
- Currently the action recognition module only supports video input. The start command is:
- Currently the action recognition module only supports video input. Set `enable: True` under SKELETON_ACTION in infer_cfg.yml, then run the command:
- In ```./deploy/pphuman/config/infer_cfg.yml```, you can configure different model paths. Keypoint models and action recognition models correspond to the `KPT` and `FALLING` fields respectively; modify the path under each field to point to the desired model.
- In ```./deploy/pphuman/config/infer_cfg.yml```, you can configure different model paths. Keypoint models and action recognition models correspond to the `KPT` and `SKELETON_ACTION` fields respectively; modify the path under each field to point to the desired model.
- Alternatively, add `--model_dir` on the command line to override the model paths, as sketched below:
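A sketch of such an override (the `kpt=` and `skeleton_action=` key names are assumptions mirroring the config fields, by analogy with `det=` above; the model folder names are placeholders):

```
# Point the keypoint and action recognition fields at custom model directories
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml \
    --video_file=test_video.mp4 \
    --model_dir kpt=./output_inference/dark_hrnet_w32_256x192 skeleton_action=./output_inference/STGCN
```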
@@ -9,7 +9,7 @@ The MTMCT module of PP-Human aims to provide a multi-target multi-camera pipeline
1. Download [REID model](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) and unzip it to ```./output_inference```. For the MOT model, please refer to [mot description](./mot.md).
2. In the MTMCT mode, all input videos must be placed in the same directory. The command line is:
2. In the MTMCT mode, all input videos must be placed in the same directory. Set `enable: True` under REID in infer_cfg.yml. The command line is:
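A minimal sketch (the `--video_dir` flag for the directory of input videos is an assumption; verify it against your version's CLI):

```
# Run MTMCT over all videos placed in mtmct_videos/
python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml \
    --video_dir=mtmct_videos/ --device=gpu
```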