-4. For detailed parameter description, please refer to [Parameter Description](./QUICK_STARTED.md)
+4. For detailed parameter description, please refer to [Parameter Description](./PPHuman_QUICK_STARTED.md)
 ### Introduction to the Solution
...
@@ -133,7 +133,7 @@ ID_BASED_CLSACTION: # config for classification-based action recognition model
 --video_file=test_video.mp4 \
 --device=gpu
 ```
-4. For detailed parameter description, please refer to [Parameter Description](./QUICK_STARTED.md)
+4. For detailed parameter description, please refer to [Parameter Description](./PPHuman_QUICK_STARTED.md)
 ### Introduction to the Solution
 1. Obtain the pedestrian detection boxes and tracking IDs from the input video through object detection and multi-object tracking. The adopted model is PP-YOLOE; for details, please refer to [PP-YOLOE](../../../configs/ppyoloe).
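The flags shown in these hunks are the tail of the full run command. A minimal sketch of it follows, assuming the standard PP-Human entry point `deploy/pipeline/pipeline.py` and the default config path (both should be verified against the quick-start doc); the same shape applies to the detection-based and video-based variants below:

```bash
# Sketch: run the PP-Human pipeline on a video with GPU inference.
# The entry-point script and config path are assumptions taken from the
# usual PaddleDetection layout; check the quick-start doc for the
# authoritative paths.
python deploy/pipeline/pipeline.py \
    --config deploy/pipeline/config/infer_cfg_pphuman.yml \
    --video_file=test_video.mp4 \
    --device=gpu
```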
...
@@ -180,7 +180,7 @@ ID_BASED_DETACTION: # Config for detection-based action recognition model
 --video_file=test_video.mp4 \
 --device=gpu
 ```
-4. For detailed parameter description, please refer to [Parameter Description](./QUICK_STARTED.md)
+4. For detailed parameter description, please refer to [Parameter Description](./PPHuman_QUICK_STARTED.md)
 ### Introduction to the Solution
 1. Obtain the pedestrian detection boxes and tracking IDs from the input video through object detection and multi-object tracking. The adopted model is PP-YOLOE; for details, please refer to [PP-YOLOE](../../../../configs/ppyoloe).
...
@@ -238,7 +238,7 @@ VIDEO_ACTION: # Config for video-based action recognition model
 --video_file=test_video.mp4 \
 --device=gpu
 ```
-5. For detailed parameter description, please refer to [Parameter Description](./QUICK_STARTED.md).
+5. For detailed parameter description, please refer to [Parameter Description](./PPHuman_QUICK_STARTED.md).
...
@@ -12,7 +12,7 @@ Pedestrian attribute recognition has been widely used in the intelligent communi
 1. The precision of pedestrian attribute analysis is obtained by training and testing on a dataset consisting of [PA100k](https://github.com/xh-liu/HydraPlus-Net#pa-100k-dataset), [RAPv2](http://www.rapdataset.com/rapv2.html), [PETA](http://mmlab.ie.cuhk.edu.hk/projects/PETA.html), and some business data.
 2. The inference speed is measured on a V100 GPU using TensorRT FP16.
-3. This model of Attribute is based on the result of tracking, please download tracking model in the [Page of Mot](./mot_en.md). The High precision and Faster model are both available.
+3. This attribute model is based on the tracking results, so please download a tracking model from the [MOT page](./pphuman_mot_en.md). Both the high-precision and the faster model are available.
 4. You should place the unzipped model in the directory `PaddleDetection/output_inference/`.
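As a sketch of that layout, downloading and unpacking a model archive might look like the following; the URL is a placeholder, so take the real link from the model download table in the attribute doc:

```bash
# Sketch: fetch a model archive and unpack it under output_inference/.
# MODEL_URL is a placeholder; copy the actual link from the model zoo table.
MODEL_URL="https://example.com/path/to/attribute_model.zip"
mkdir -p PaddleDetection/output_inference
wget -O attribute_model.zip "$MODEL_URL"
unzip attribute_model.zip -d PaddleDetection/output_inference/
```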
 ## Instruction
...
@@ -28,7 +28,7 @@ ATTR: #modul
 enable: False #whether to enable this model
 ```
-2. When inputting the image, run the command as follows (please refer to [QUICK_STARTED-Parameters](./QUICK_STARTED.md#41-参数说明) for more details):
+2. When the input is an image, run the command as follows (please refer to [QUICK_STARTED-Parameters](./PPHuman_QUICK_STARTED.md#41-参数说明) for more details):
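(A minimal sketch of that command; the entry point, config path, and image name are illustrative.)

```bash
# Sketch: run attribute recognition on a single image.
# Entry point, config path, and image name are assumptions; ATTR must be
# enabled (enable: True) in infer_cfg_pphuman.yml first.
python deploy/pipeline/pipeline.py \
    --config deploy/pipeline/config/infer_cfg_pphuman.yml \
    --image_file=test_image.jpg \
    --device=gpu
```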
 # Multi-Target Multi-Camera Tracking Module of PP-Human
...
@@ -7,7 +7,7 @@ The MTMCT module of PP-Human aims to provide a multi-target multi-camera pipleli
 ## How to Use
-1. Download [REID model](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) and unzip it to ```./output_inference```. For the MOT model, please refer to [mot description](./mot.md).
+1. Download the [REID model](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) and unzip it to `./output_inference`. For the MOT model, please refer to the [MOT description](./pphuman_mot.md).
 2. In MTMCT mode, all input videos must be placed in the same directory. Set `enable: True` under `REID` in `infer_cfg_pphuman.yml`. The command line takes the following shape:
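(A minimal sketch; the entry point, config path, and directory name `mtmct_dir/` are illustrative.)

```bash
# Sketch: run multi-target multi-camera tracking over a directory of videos.
# Entry point, config path, and directory name are assumptions; REID must be
# enabled (enable: True) in infer_cfg_pphuman.yml as described above.
python deploy/pipeline/pipeline.py \
    --config deploy/pipeline/config/infer_cfg_pphuman.yml \
    --video_dir=mtmct_dir/ \
    --device=gpu
```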
-The performance of action recognition based on classification with human id depends on the pre-order detection and tracking models. If the pedestrian location cannot be accurately detected in the actual scene, or it is difficult to correctly assign the person ID between different frames, the performance of the action recognition part will be limited. If you encounter the above problems in actual use, please refer to [Secondary Development of Detection Task](../detection_en.md) and [Secondary Development of Multi-target Tracking Task](../mot_en.md) for detection/track model optimization.
+The performance of action recognition based on classification with human IDs depends on the upstream detection and tracking models. If pedestrian locations cannot be accurately detected in the actual scene, or person IDs are hard to assign correctly across frames, the action recognition stage will be limited accordingly. If you encounter these problems in actual use, please refer to [Secondary Development of Detection Task](../detection_en.md) and [Secondary Development of Multi-target Tracking Task](../pphuman_mot_en.md) for detection/tracking model optimization.
...
@@ -14,7 +14,7 @@ The model of action recognition based on detection with human id directly recogn
 ## Model Optimization
 ### Detection-Tracking Model Optimization
-The performance of action recognition based on detection with human id depends on the pre-order detection and tracking models. If the pedestrian location cannot be accurately detected in the actual scene, or it is difficult to correctly assign the person ID between different frames, the performance of the action recognition part will be limited. If you encounter the above problems in actual use, please refer to [Secondary Development of Detection Task](../detection_en.md) and [Secondary Development of Multi-target Tracking Task](../mot_en.md) for detection/track model optimization.
+The performance of action recognition based on detection with human IDs depends on the upstream detection and tracking models. If pedestrian locations cannot be accurately detected in the actual scene, or person IDs are hard to assign correctly across frames, the action recognition stage will be limited accordingly. If you encounter these problems in actual use, please refer to [Secondary Development of Detection Task](../detection_en.md) and [Secondary Development of Multi-target Tracking Task](../pphuman_mot_en.md) for detection/tracking model optimization.
...
@@ -77,7 +77,7 @@ Now, we have available training data (`.npy`) and corresponding annotation files
 ## Model Optimization
 ### Detection-Tracking Model Optimization
-The performance of action recognition based on skelenton depends on the pre-order detection and tracking models. If the pedestrian location cannot be accurately detected in the actual scene, or it is difficult to correctly assign the person ID between different frames, the performance of the action recognition part will be limited. If you encounter the above problems in actual use, please refer to [Secondary Development of Detection Task](../detection_en.md) and [Secondary Development of Multi-target Tracking Task](../mot_en.md) for detection/track model optimization.
+The performance of skeleton-based action recognition depends on the upstream detection and tracking models. If pedestrian locations cannot be accurately detected in the actual scene, or person IDs are hard to assign correctly across frames, the action recognition stage will be limited accordingly. If you encounter these problems in actual use, please refer to [Secondary Development of Detection Task](../detection_en.md) and [Secondary Development of Multi-target Tracking Task](../pphuman_mot_en.md) for detection/tracking model optimization.
 ### Keypoint Model Optimization
 As the core feature of this solution, keypoint localization performance also determines the overall effect of action recognition. If the recognized keypoint coordinates contain obvious errors in the actual scene, it is difficult to distinguish specific actions from the skeleton image composed of those keypoints.