- 🔥🔥🔥 **PP-YOLOE-PLUS-Tiny was launched for Jetson deployment, achieving 20 fps with four rtsp streams running at the same time; PP-Vehicle gained vehicle retrograde and lane-line pressing detection.**
- 🔥 **2022.8.20: PP-Vehicle was first launched with four major toolboxes for vehicle analysis, along with detailed documentation for training on your own data and optimizing the models.**
- 🔥 2022.7.13: PP-Human v2 launched with a full upgrade of four industrial features: behavior analysis, attribute recognition, visitor traffic statistics and ReID. It provides a strong core algorithm for pedestrian detection, tracking and attribute analysis with a simple and detailed development process and model optimization strategy.
- 2022.4.18: Added PP-Human practical tutorials covering training, deployment, and action expansion. For details on the AIStudio project, please see [Link](https://aistudio.baidu.com/aistudio/projectdetail/3842982)
...
...
| **Vehicle Attributes** | Identify 10 vehicle colors and 9 models <br/><br/> More powerful backbones: PP-HGNet/PP-LCNet, with higher accuracy and faster speed <br/><br/> Model accuracy: 90.81 <br/><br/> | <img title="" src="https://user-images.githubusercontent.com/48054808/185044490-00edd930-1885-4e79-b3d4-3a39a77dea93.gif" alt="" width="207"> |
| **Illegal Parking** | Easy to use with a one-line command, and define the illegal area by yourself <br/><br/> Get the license plate of the illegally parked vehicle <br/><br/> | <img title="" src="https://user-images.githubusercontent.com/48054808/185028419-58ae0af8-a035-42e7-9583-25f5e4ce0169.png" alt="" width="209"> |
| **In-Out Counting** | Easy to use with a one-line command, and define the in-out line by yourself <br/><br/> Target route visualization with high tracking performance | <img title="" src="https://user-images.githubusercontent.com/48054808/185028798-9e07379f-7486-4266-9d27-3aec943593e0.gif" alt="" width="200"> |
| **Vehicle Retrograde** | Easy to use with a one-line command <br/><br/> High-precision segmentation model PP-LiteSeg | <img title="" src="https://raw.githubusercontent.com/LokeZhou/PaddleDetection/develop/deploy/pipeline/docs/images/vehicle_retrograde.gif" alt="" width="200"> |
| **Vehicle Press Line** | Easy to use with a one-line command <br/><br/> High-precision segmentation model PP-LiteSeg | <img title="" src="https://raw.githubusercontent.com/LokeZhou/PaddleDetection/develop/deploy/pipeline/docs/images/vehicle_press.gif" alt="" width="200"> |
Online stream decoding is based on the OpenCV capture function and normally supports rtsp and rtmp.
- rtsp pull stream
For an rtsp pull stream, use the `--rtsp RTSP [RTSP ...]` parameter to specify one or more rtsp streams. Separate multiple addresses with a space (or simply replace the video address after `--video_file` with the rtsp stream address). See the example below.
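A minimal sketch (the entry point `deploy/pipeline/pipeline.py` and the `infer_cfg_pphuman.yml` config path follow the usual pipeline layout; adjust them to your setup):

```bash
# pull a single rtsp stream
python deploy/pipeline/pipeline.py \
    --config deploy/pipeline/config/infer_cfg_pphuman.yml \
    --rtsp rtsp://[YOUR_RTSP_SITE] \
    --device=gpu

# pull multiple rtsp streams, separated by spaces
python deploy/pipeline/pipeline.py \
    --config deploy/pipeline/config/infer_cfg_pphuman.yml \
    --rtsp rtsp://[YOUR_RTSP_SITE1] rtsp://[YOUR_RTSP_SITE2] \
    --device=gpu
```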
1. The rtsp push stream is based on [rtsp-simple-server](https://github.com/aler9/rtsp-simple-server); please start this server first.
It's very easy to use: 1) download the [release package](https://github.com/aler9/rtsp-simple-server/releases) that matches your platform; 2) run the command `./rtsp-simple-server`, which works as an rtsp server (see the example after this list).
2. The output visualization will freeze frequently if the model costs too much time; we suggest using a faster tracking model such as ppyoloe_s or ppyoloe_plus_tiny. Simply replace mot_ppyoloe_l_36e_pipeline.zip with mot_ppyoloe_s_36e_pipeline.zip in the model config yaml file.
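A sketch of the push-stream flow, assuming rtsp-simple-server listens on its default rtsp port 8554 (check the server's startup log for the actual port):

```bash
# 1) start the rtsp server downloaded above
./rtsp-simple-server &

# 2) run the pipeline and push the visualized output to the server;
#    while --pushurl is set, no local visualized video is saved
python deploy/pipeline/pipeline.py \
    --config deploy/pipeline/config/infer_cfg_pphuman.yml \
    --video_file=test_video.mp4 \
    --device=gpu \
    --pushurl rtsp://[YOUR_SERVER_IP]:8554
```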
### Jetson Deployment
Since the Jetson platform has far less computing power than a server, we suggest:
1. Choose a lightweight model. We provide a new model, [PP-YOLOE-Plus Tiny](../../../../configs/pphuman/README.md), which achieves 20 fps on Jetson AGX with four rtsp streams running together.
2. For further speedup, enable frame skipping for tracking; we recommend 2 or 3: `skip_frame_num: 3` (see the sketch after this list).
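A minimal sketch combining both suggestions, assuming the yaml field names follow the snippet above (verify them against your copy of the config file):

```bash
# In deploy/pipeline/config/infer_cfg_pphuman.yml, point tracking at a
# lightweight model and enable frame skipping (field names as assumed above):
#
#   MOT:
#     model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip
#     skip_frame_num: 3    # 2 or 3 is recommended
#
# then run the pipeline as usual:
python deploy/pipeline/pipeline.py \
    --config deploy/pipeline/config/infer_cfg_pphuman.yml \
    --video_file=test_video.mp4 \
    --device=gpu
```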
PP-YOLOE-Plus Tiny module speed test data on AGX (three people in the video; taking attribute as an example, the whole time cost per frame is 13.3 + 5.2 × 3 ≈ 29 ms):
| Module | Time cost per frame (ms) | Speed (fps) |
|:----------|:----------|:----------|
| Tracking | 13 | 77 |
| Attribute | 29 | 34 |
| Falldown | 64.5 | 15.5 |
| Smoking | 68.8 | 14.5 |
| Calling | 22.5 | 44.5 |
| Fighting | 3.98 | 251 |
With this recommended configuration, higher speeds can be achieved even on the TX2 platform; the attribute case has been tested at up to 20 fps. The configuration file can be modified directly (recommended) or overridden from the command line (not recommended, since the fields are long).
### Parameters
...
...
| --rtsp | Option | rtsp video stream address; supports one or more simultaneous stream inputs |
| --camera_id | Option | The camera ID for prediction, default is -1 ( for no camera prediction, can be set to 0 - (number of cameras - 1) ), press `q` in the visualization interface during the prediction process to output the prediction result to: output/output.mp4 |
| --device | Option | Running device, options include `CPU/GPU/XPU`, and the default is `CPU`. |
| --pushurl | Option | Push the output video to an rtsp stream, normally starting with `rtsp://`; this takes priority over saving a local video: while it is set, the pipeline will not save a local visualized video. The default is "", which disables pushing. |
| --output_dir | Option | The root directory for the visualization results, and the default is output/ |
| --run_mode | Option | For GPU, the default is paddle; options are paddle/trt_fp32/trt_fp16/trt_int8 |
| --enable_mkldnn | Option | Whether to enable MKLDNN acceleration in CPU prediction, the default is False |
...
...
The overall solution for PP-Human v2 is shown in the graph below:
### Pedestrian detection
- Take PP-YOLOE L as the object detection model
- For detailed documentation, please refer to [PP-YOLOE](../../../../configs/ppyoloe/) and [Multiple-Object-Tracking](pphuman_mot_en.md)
### Pedestrian tracking
- Pedestrian tracking by SDE solution
- Adopt PP-YOLOE L (high precision) and S (lightweight) for detection models
- Adopt the OC-SORT solution for the tracking module
- Refer to [OC-SORT](../../../../configs/mot/ocsort) and [Multi-Object Tracking](pphuman_mot_en.md) for details
| Lane line Segmentation | 47ms | [Lane line Segmentation](https://bj.bcebos.com/v1/paddledet/models/pipeline/pp_lite_stdc2_bdd100k.zip) | 47M |
Download the model and unzip it into the `./output_inference` folder.
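For example (wget/unzip shown as a sketch; any download tool works):

```bash
# fetch the lane line segmentation model from the table above
mkdir -p output_inference && cd output_inference
wget https://bj.bcebos.com/v1/paddledet/models/pipeline/pp_lite_stdc2_bdd100k.zip
unzip pp_lite_stdc2_bdd100k.zip
```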
...
...
**Notes:**
- The accuracy of detection tracking model is obtained from the joint dataset PPVehicle (integration of the public dataset BDD100K-MOT and UA-DETRAC). For more details, please refer to [PP-Vehicle](../../../../configs/ppvehicle)
- Inference speed is obtained at T4 with TensorRT FP16 enabled, which includes data pre-processing, model inference and post-processing.
Online stream decoding is based on the OpenCV capture function and normally supports rtsp and rtmp.
- rtsp pull stream
For an rtsp pull stream, use the `--rtsp RTSP [RTSP ...]` parameter to specify one or more rtsp streams. Separate multiple addresses with a space (or simply replace the video address after `--video_file` with the rtsp stream address). A combined example follows the list below.
1. The rtsp push stream is based on [rtsp-simple-server](https://github.com/aler9/rtsp-simple-server); please start this server first.
It's very easy to use: 1) download the [release package](https://github.com/aler9/rtsp-simple-server/releases) that matches your platform; 2) run the command `./rtsp-simple-server`, which works as an rtsp server.
2. The output visualization will freeze frequently if the model costs too much time; we suggest using a faster tracking model such as ppyoloe_s. Simply replace mot_ppyoloe_l_36e_pipeline.zip with mot_ppyoloe_s_36e_pipeline.zip in the model config yaml file.
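The same flow for PP-Vehicle, as a hedged sketch (the `infer_cfg_ppvehicle.yml` config path is assumed from the usual pipeline layout):

```bash
# pull one or more rtsp streams
python deploy/pipeline/pipeline.py \
    --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
    --rtsp rtsp://[YOUR_RTSP_SITE] \
    --device=gpu

# push the visualized result to a running rtsp-simple-server instead of
# saving a local video
python deploy/pipeline/pipeline.py \
    --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
    --video_file=test_video.mp4 \
    --device=gpu \
    --pushurl rtsp://[YOUR_SERVER_IP]:8554
```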
### Jetson Deployment
Since the Jetson platform has far less computing power than a server, we suggest:
1. Choose a lightweight model. We provide a new model, [PP-YOLOE-Plus Tiny](../../../../configs/ppvehicle/README.md), which achieves 20 fps on Jetson AGX with four rtsp streams running together.
2. For further speedup, enable frame skipping for tracking; we recommend 2 or 3: `skip_frame_num: 3` (a command-line sketch follows this list).
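As a hedged sketch of the command-line route (the `-o key=value` override style and the `MOT.skip_frame_num` field name are assumptions; editing the yaml file directly, as above, is the recommended path):

```bash
# override the tracking frame skip without editing the yaml file;
# -o and the MOT.skip_frame_num key are assumptions, verify in your version
python deploy/pipeline/pipeline.py \
    --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
    --video_file=test_video.mp4 \
    --device=gpu \
    -o MOT.skip_frame_num=2
```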
PP-YOLOE-Plus Tiny module speed test data on AGX (a single car in the test video):
| Module | Time cost per frame (ms) | Speed (fps) |
|:----------|:----------|:----------|
| Tracking | 13 | 77 |
| Attribute | 20.2 | 49.4 |
| Plate | - | - |
With this recommended configuration, higher speeds can be achieved even on the TX2 platform; the attribute case has been tested at up to 20 fps. The configuration file can be modified directly (recommended) or overridden from the command line (not recommended, since the fields are long).
### Parameters
...
...
| --rtsp | Option | rtsp video stream address; supports one or more simultaneous stream inputs |
| --camera_id | Option | The camera ID for prediction, default is -1 ( for no camera prediction, can be set to 0 - (number of cameras - 1) ), press `q` in the visualization interface during the prediction process to output the prediction result to: output/output.mp4 |
| --device | Option | Running device, options include `CPU/GPU/XPU`, and the default is `CPU`. |
| --pushurl | Option | Push the output video to an rtsp stream, normally starting with `rtsp://`; this takes priority over saving a local video: while it is set, the pipeline will not save a local visualized video. The default is "", which disables pushing. |
| --output_dir | Option | The root directory for the visualization results, and the default is output/ |
| --run_mode | Option | For GPU, the default is paddle; options are paddle/trt_fp32/trt_fp16/trt_int8 |
| --enable_mkldnn | Option | Whether to enable MKLDNN acceleration in CPU prediction, the default is False |
...
...
The overall solution for PP-Vehicle v2 is shown in the graph below:
### Vehicle detection
- Take PP-YOLOE L as the object detection model
- For detailed documentation, please refer to [PP-YOLOE](../../../../configs/ppyoloe/) and [Multiple-Object-Tracking](ppvehicle_mot_en.md)
### Vehicle tracking
- Vehicle tracking by SDE solution
- Adopt PP-YOLOE L (high precision) and S (lightweight) for detection models
- Adopt the OC-SORT solution for the tracking module
- Refer to [OC-SORT](../../../../configs/mot/ocsort) and [Multi-Object Tracking](ppvehicle_mot_en.md) for details
### Attribute Recognition
...
...
- Use the high-precision vehicle tracking model PP-YOLOE L and determine whether parking is illegal based on the vehicle's trajectory and the designated illegal parking area; if so, display the plate number of the illegally parked vehicle.
- For details, please refer to [Illegal Parking Detection](ppvehicle_illegal_parking_en.md)
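A hedged sketch of an illegal-parking run; `--illegal_parking_time` (seconds before a stopped vehicle counts as illegally parked), `--region_type` and `--region_polygon` (the user-defined illegal area) are assumed parameter names, so check the linked document for the exact flags:

```bash
# define the illegal area as a polygon (x y pairs) and flag vehicles that
# stay inside it longer than 5 seconds; flag names are assumptions
python deploy/pipeline/pipeline.py \
    --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
    --video_file=test_video.mp4 \
    --device=gpu \
    --illegal_parking_time=5 \
    --region_type=custom \
    --region_polygon 600 300 1300 300 1300 800 600 800
```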
#### Vehicle Press Line
- Use the segmentation model PP-LiteSeg to get the lane lines in the frame, and combine them with the vehicle detection box to judge whether the car is pressing on the lines.
- For details, please refer to [Vehicle Press Line](ppvehicle_press_en.md)
#### Vehicle Retrograde
- Use the segmentation model PP-LiteSeg to get the lane lines in the frame, and combine them with the vehicle's route to find vehicles driving against traffic.
- For details, please refer to [Vehicle Retrograde](ppvehicle_retrograde_en.md)
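Both features are presumably switched on via their sections in the pipeline config; `VEHICLE_PRESSING` and `VEHICLE_RETROGRADE` are assumed key names (confirm them in the linked documents), as is the `-o` override syntax:

```bash
# enable lane-line pressing and retrograde analysis; key names and the
# multi-pair -o syntax are assumptions, verify against your config file
python deploy/pipeline/pipeline.py \
    --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
    --video_file=test_video.mp4 \
    --device=gpu \
    -o VEHICLE_PRESSING.enable=True VEHICLE_RETROGRADE.enable=True
```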
1. The detection/tracking model uses the PPVehicle dataset (which integrates BDD100K-MOT and UA-DETRAC). The dataset merges car, truck, bus and van from BDD100K-MOT and car, bus and van from UA-DETRAC into one class, `vehicle`. The detection accuracy mAP was tested on the test set of PPVehicle, and the tracking accuracy MOTA was obtained on the test set of BDD100K-MOT (`car, truck, bus, van` combined into one class `vehicle`). For more details about the training procedure, please refer to [ppvehicle](../../../../configs/ppvehicle).
2. Inference speed is obtained at T4 with TensorRT FP16 enabled, which includes data pre-processing, model inference and post-processing.
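To reproduce the TensorRT FP16 setting used for the speed numbers above, the `--run_mode` parameter from the table is the relevant knob; a minimal sketch:

```bash
# GPU inference with TensorRT FP16, matching how the speeds were measured
python deploy/pipeline/pipeline.py \
    --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
    --video_file=test_video.mp4 \
    --device=gpu \
    --run_mode=trt_fp16
```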