## How to Use
1. Download the `Pedestrian Detection/Tracking`, `Keypoint Detection` and `Falling Recognition` models from the links in the Model Zoo and unzip them to ```./output_inference```. The models are downloaded automatically by default; if you download them manually, you need to set `model_dir` to the model storage path.
2. Currently, the action recognition module only supports video input. Set `enable: True` for the `SKELETON_ACTION` field in `infer_cfg_pphuman.yml` (a config sketch follows this list), and then run the command:
...
--device=gpu
```
3. There are two ways to modify the model path:
    - In ```./deploy/pipeline/config/infer_cfg_pphuman.yml```, you can configure different model paths. The keypoint model and the skeleton-based action recognition model correspond to the `KPT` and `SKELETON_ACTION` fields respectively; modify the `model_dir` of each field to the expected path.
    - Add `--model_dir` in the command line to revise the model path (a hedged example follows this list):
...
4. For detailed parameter description, please refer to [Parameter Description](./QUICK_STARTED.md).
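
For step 2, a minimal sketch of the relevant `infer_cfg_pphuman.yml` entry. Only the two fields mentioned above are shown, and the `STGCN` directory name is an assumption; point `model_dir` at wherever you actually unzipped the model:

```yaml
SKELETON_ACTION:                      # skeleton-based action recognition
  model_dir: output_inference/STGCN   # assumed path to the unzipped falling model
  enable: True                        # turn the module on
```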
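
For the command-line route in step 3, a hedged example. `--model_dir` comes from this document, but the per-module key names (`kpt=`, `skeleton_action=`) and the directory names are assumptions; verify them against your version of the pipeline:

```bash
# illustrative only: override the keypoint and skeleton-action model paths
python deploy/pipeline/pipeline.py \
    --config deploy/pipeline/config/infer_cfg_pphuman.yml \
    --video_file=test_video.mp4 \
    --device=gpu \
    --model_dir kpt=./output_inference/dark_hrnet_w32_256x192 skeleton_action=./output_inference/STGCN
```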
### Introduction to the Solution
...
```
- The falling action recognition model uses [ST-GCN](https://arxiv.org/abs/1801.07455), and employs the [PaddleVideo](https://github.com/PaddlePaddle/PaddleVideo/blob/develop/docs/zh-CN/model_zoo/recognition/stgcn.md) toolkit to complete model training.
<div align="center"><img src="../images/calling.gif" width='1000'/><center>Data source and copyright owner: Skyinfor
Technology. Thanks for the provision of actual scenario data, which are only
...
### How to Use
1. Download the `Pedestrian Detection/Tracking` and `Calling Recognition` models from the links in the Model Zoo and unzip them to ```./output_inference```. The models are downloaded automatically by default; if you download them manually, you need to set `model_dir` to the model storage path.
2. Currently, the action recognition module only supports video input. Set `enable: True` for the `ID_BASED_CLSACTION` field in `infer_cfg_pphuman.yml` (a config sketch follows this list).
3. Run this command:
```bash
...
--video_file=test_video.mp4 \
--device=gpu
```
4. For detailed parameter description, please refer to [Parameter Description](./QUICK_STARTED.md).
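
The config entry for step 2 follows the same pattern as the other modules; a sketch, with an assumed model directory name:

```yaml
ID_BASED_CLSACTION:                                           # classification-based action recognition
  model_dir: output_inference/PPHGNet_tiny_calling_halfbody   # assumed name; use your unzipped path
  enable: True
```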
### Introduction to the Solution
1. Get the pedestrian detection box and the tracking ID number of the video input through object detection and multi-object tracking. The adopted model is PP-YOLOE, and for details, please refer to [PP-YOLOE](../../../configs/ppyoloe).
...
### How to Use
1. Download the `Pedestrian Detection/Tracking` and `Smoking Recognition` models from the links in the Model Zoo and unzip them to ```./output_inference```. The models are downloaded automatically by default; if you download them manually, you need to set `model_dir` to the model storage path.
2. Currently, the action recognition module only supports video input. Set `enable: True` for the `ID_BASED_DETACTION` field in `infer_cfg_pphuman.yml` (a config sketch follows this list).
...
--video_file=test_video.mp4 \
--device=gpu
```
4. For detailed parameter description, please refer to [Parameter Description](./QUICK_STARTED.md).
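
Again, step 2 is the same one-line switch; a sketch with an assumed model directory name:

```yaml
ID_BASED_DETACTION:                                              # detection-based action recognition
  model_dir: output_inference/ppyoloe_crn_s_80e_smoking_visdrone # assumed name; use your unzipped path
  enable: True
```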
### Introduction to the Solution
1. Get the pedestrian detection box and the tracking ID number of the video input through object detection and multi-object tracking. The adopted model is PP-YOLOE, and for details, please refer to [PP-YOLOE](../../../../configs/ppyoloe).
...
### How to Use
1. Download the `Fighting Detection` model from the links in the above table and unzip it to ```./output_inference```. The model is downloaded automatically by default; if you download it manually, you need to set `model_dir` to the model storage path.
2. Rename the files in the `ppTSM` folder to `model.pdiparams`, `model.pdiparams.info` and `model.pdmodel` (a shell sketch follows this list);
3. Currently, the action recognition module only supports video input. Set `enable: True` for the `VIDEO_ACTION` field in `infer_cfg_pphuman.yml`.
4. For detailed parameter description, please refer to [Parameter Description](./QUICK_STARTED.md).
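
Step 2's renaming as a shell sketch. The source file names below are placeholders, since they depend on how the archive was exported; rename whatever three files the `ppTSM` folder actually contains:

```bash
cd ./output_inference/ppTSM
# placeholder source names: substitute the real file names from the unzipped archive
mv ppTSM.pdiparams      model.pdiparams
mv ppTSM.pdiparams.info model.pdiparams.info
mv ppTSM.pdmodel        model.pdmodel
```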
The result is shown as follows:
...
The pretrained models are provided and can be used directly, including pedestrian detection/tracking, keypoint detection, and smoking, calling and fighting recognition. If users need to train a custom action model or optimize model performance, please refer to the links below.
```bash
mkdir {root of PaddleVideo}/applications/PPHuman/datasets/annotations
mv det_keypoint_unite_image_results.json {root of PaddleVideo}/applications/PPHuman/datasets/annotations/det_keypoint_unite_image_results_{video_id}_{camera_id}.json
cd {root of PaddleVideo}/applications/PPHuman/datasets/
```