diff --git a/deploy/pipeline/docs/tutorials/pphuman_action.md b/deploy/pipeline/docs/tutorials/pphuman_action.md
index ee49282eb8ca0ab221dfa3bb6400da51de395716..67f09fdd0c243d7bc563ae4ff13158562d0123e7 100644
--- a/deploy/pipeline/docs/tutorials/pphuman_action.md
+++ b/deploy/pipeline/docs/tutorials/pphuman_action.md
@@ -52,21 +52,23 @@ SKELETON_ACTION: # Config for skeleton-based action recognition model
 ### How to Use
 1. Download the `Pedestrian Detection/Tracking`, `Keypoint Detection`, and `Falling Recognition` inference deployment models from the `Model Zoo` and unzip them to ```./output_inference```. Models are downloaded automatically by default; if you download them manually, you need to change the model directory fields to the actual storage paths.
 2. The action recognition module currently supports video input only. Depending on which action recognition scheme you want to enable, set `enable: True` under `SKELETON_ACTION` in infer_cfg_pphuman.yml, then launch with the command below (a note on choosing the output folder follows this list):
-```python
-python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
+   ```bash
+   python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
                                       --video_file=test_video.mp4 \
                                       --device=gpu \
-```
+   ```
+
 3. There are two ways to modify the model path:
    - Configure model paths in ```./deploy/pipeline/config/infer_cfg_pphuman.yml```: the keypoint model and the falling recognition model correspond to the `KPT` and `SKELETON_ACTION` fields respectively; change the path under each field to the path you actually use.
    - Add `-o KPT.model_dir=xxx SKELETON_ACTION.model_dir=xxx` right after --config on the command line:
-```python
-python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
-   -o KPT.model_dir=./dark_hrnet_w32_256x192 SKELETON_ACTION.model_dir=./STGCN\
+   ```bash
+   python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
+                                      -o KPT.model_dir=./dark_hrnet_w32_256x192 SKELETON_ACTION.model_dir=./STGCN \
                                       --video_file=test_video.mp4 \
                                       --device=gpu
-```
+   ```
+
 4. For a full description of the launch parameters, please refer to [Parameter Description](./PPHuman_QUICK_STARTED.md).
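+
+   If you want the visualized result saved somewhere other than the default output folder, the quick-start parameter list also documents an output directory option. A minimal sketch, assuming your version exposes `--output_dir` as described there:
+
+   ```bash
+   # Sketch: write the annotated result video to a custom folder.
+   # --output_dir is taken from the quick-start parameter list; verify with --help.
+   python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
+                                      --video_file=test_video.mp4 \
+                                      --device=gpu \
+                                      --output_dir=output_falling
+   ```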
diff --git a/deploy/pipeline/docs/tutorials/pphuman_action_en.md b/deploy/pipeline/docs/tutorials/pphuman_action_en.md
index 3eab1d447736c26a354ca8527e391fe9fc1500ac..c7eacf4d8e5878d251734787ed64cab73170194f 100644
--- a/deploy/pipeline/docs/tutorials/pphuman_action_en.md
+++ b/deploy/pipeline/docs/tutorials/pphuman_action_en.md
@@ -63,20 +63,21 @@ SKELETON_ACTION: # Config for skeleton-based action recognition model

 2. The action recognition module currently supports video input only. Set `enable: True` under `SKELETON_ACTION` in infer_cfg_pphuman.yml, and then run the command:

-   ```python
-   python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
+   ```bash
+   python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
                                       --video_file=test_video.mp4 \
                                       --device=gpu
-   ```
+   ```

 3. There are two ways to modify the model path:

   - In ```./deploy/pipeline/config/infer_cfg_pphuman.yml```, you can configure different model paths: the keypoint model and the action recognition model correspond to the `KPT` and `SKELETON_ACTION` fields respectively; modify the path under each field to the expected path.
   - Add `-o KPT.model_dir=xxx SKELETON_ACTION.model_dir=xxx` on the command line after --config to change the model path:

-   ```python
-python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
-   -o KPT.model_dir=./dark_hrnet_w32_256x192 SKELETON_ACTION.model_dir=./STGCN\
+
+   ```bash
+   python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
+                                      -o KPT.model_dir=./dark_hrnet_w32_256x192 SKELETON_ACTION.model_dir=./STGCN \
    --video_file=test_video.mp4 \
    --device=gpu
    ```
@@ -172,7 +173,7 @@ ID_BASED_DETACTION: # Config for detection-based action recognition model

 2. The action recognition module currently supports video input only. Set `enable: True` under `ID_BASED_DETACTION` in infer_cfg_pphuman.yml.

 3. Run this command:
-   ```python
+   ```bash
    python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
                                       --video_file=test_video.mp4 \
                                       --device=gpu
@@ -229,7 +230,7 @@ VIDEO_ACTION: # Config for video-based action recognition model

 3. The action recognition module currently supports video input only. Set `enable: True` under `VIDEO_ACTION` in infer_cfg_pphuman.yml.

 4. Run this command:
-   ```python
+   ```bash
    python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
                                       --video_file=test_video.mp4 \
                                       --device=gpu
diff --git a/deploy/pipeline/docs/tutorials/ppvehicle_attribute.md b/deploy/pipeline/docs/tutorials/ppvehicle_attribute.md
index 46da107f2d30dec357b458da591f66371af1476a..46a4d131c54317d19ad2a4971dff4907ae8b87d5 100644
--- a/deploy/pipeline/docs/tutorials/ppvehicle_attribute.md
+++ b/deploy/pipeline/docs/tutorials/ppvehicle_attribute.md
@@ -1,3 +1,4 @@
+[English](ppvehicle_attribute_en.md) | 简体中文

 # Attribute Recognition Module of PP-Vehicle

@@ -46,7 +47,7 @@

 ### Description of Configuration

-Parameters related to attributes in the config file are as follows:
+Parameters related to attributes in the [config file](../../config/infer_cfg_ppvehicle.yml) are as follows:
 ```
 VEHICLE_ATTR:
   model_dir: output_inference/vehicle_attribute_infer/ # Path of the vehicle attribute model
@@ -91,13 +92,13 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppv

 5. There are two ways to modify the model path:
   - Method 1: Configure model paths in `./deploy/pipeline/config/infer_cfg_ppvehicle.yml`; for attribute recognition, modify the configuration under the `VEHICLE_ATTR` field.
-   - Method 2: Add --model_dir on the command line to modify the model path:
+   - Method 2: Add `-o` directly on the command line to override the default model path in the config file:

    ```bash
    python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
                                       --video_file=test_video.mp4 \
                                       --device=gpu \
-                                      --model_dir vehicle_attr=output_inference/vehicle_attribute_infer
+                                      -o VEHICLE_ATTR.model_dir=output_inference/vehicle_attribute_infer
    ```

 The test result is as follows:

@@ -107,7 +108,7 @@

 ## Solution

-The vehicle attribute model uses the Practical Ultra Lightweight image Classification (PULC) solution from [PaddleClas](https://github.com/PaddlePaddle/PaddleClas). For details on data preparation, training, and testing of this model, please refer to [PULC Vehicle Attribute Recognition](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/PULC/PULC_vehicle_attribute.md).
+The vehicle attribute recognition model uses the Practical Ultra Lightweight image Classification (PULC) solution from [PaddleClas](https://github.com/PaddlePaddle/PaddleClas). For details on data preparation, training, and testing of this model, please refer to [PULC Vehicle Attribute Recognition](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/PULC/PULC_vehicle_attribute.md).

 The vehicle attribute recognition model adopts the lightweight, high-accuracy PPLCNet, and further applies the following optimizations on top of it:
diff --git a/deploy/pipeline/docs/tutorials/ppvehicle_attribute_en.md b/deploy/pipeline/docs/tutorials/ppvehicle_attribute_en.md
new file mode 100644
index 0000000000000000000000000000000000000000..6429abf9246ee4172865c5a035cca4443106c984
--- /dev/null
+++ b/deploy/pipeline/docs/tutorials/ppvehicle_attribute_en.md
@@ -0,0 +1,121 @@
+English | [简体中文](ppvehicle_attribute.md)
+
+# Attribute Recognition Module of PP-Vehicle
+
+Vehicle attribute recognition is widely used in smart cities, smart transportation, and other scenarios. PP-Vehicle integrates a vehicle attribute recognition module that can identify vehicle color and model.
+
+| Task | Algorithm | Precision | Inference Speed | Download |
+|-----------|------|-----------|----------|---------------------|
+| Vehicle Detection/Tracking | PP-YOLOE | - | - | [Inference and Deployment Model](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip) |
+| Vehicle Attribute Recognition | PPLCNet | 90.81 | 2.36 ms | [Inference and Deployment Model](https://bj.bcebos.com/v1/paddledet/models/pipeline/vehicle_attribute_model.zip) |
+
+
+Note:
+1. The inference speed of the attribute model is measured on an Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz, with MKLDNN acceleration enabled and 10 threads.
+2. For an introduction to PP-LCNet, please refer to the [PP-LCNet Series](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/en/models/PP-LCNet_en.md). The related paper is available at [PP-LCNet paper](https://arxiv.org/pdf/2109.15099.pdf).
+3. The vehicle attribute recognition model is trained and tested on the [VeRi dataset](https://www.v7labs.com/open-datasets/veri-dataset).
+
+
+- The provided pre-trained model supports 10 colors and 9 models, the same as the VeRi dataset. The details are as follows:
+
+```yaml
+# Vehicle Colors
+- "yellow"
+- "orange"
+- "green"
+- "gray"
+- "red"
+- "blue"
+- "white"
+- "golden"
+- "brown"
+- "black"
+
+# Vehicle Models
+- "sedan"
+- "suv"
+- "van"
+- "hatchback"
+- "mpv"
+- "pickup"
+- "bus"
+- "truck"
+- "estate"
+```
+
+## Instructions
+
+### Description of Configuration
+
+Parameters related to vehicle attribute recognition in the [config file](../../config/infer_cfg_ppvehicle.yml) are as follows:
+
+```yaml
+VEHICLE_ATTR:
+  model_dir: output_inference/vehicle_attribute_infer/ # Path of the model
+  batch_size: 8         # The size of the inference batch
+  color_threshold: 0.5  # Color threshold. The confidence must reach this threshold to output a specific color; otherwise the result is 'Unknown'.
+  type_threshold: 0.5   # Vehicle model threshold. The confidence must reach this threshold to output a specific model; otherwise the result is 'Unknown'.
+  enable: False         # Whether to enable this function
+```
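+
+Step 5 below shows that `-o` can override `VEHICLE_ATTR.model_dir` on the command line. The same syntax may extend to the other fields above; a minimal sketch, assuming (unverified here) that `-o` parses boolean and float fields the same way it parses model paths:
+
+```bash
+# Sketch: enable the module and raise the color threshold without editing the yml.
+# VEHICLE_ATTR.enable / VEHICLE_ATTR.color_threshold as -o keys are assumptions to verify.
+python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
+                                   -o VEHICLE_ATTR.enable=True VEHICLE_ATTR.color_threshold=0.6 \
+                                   --video_file=test_video.mp4 \
+                                   --device=gpu
+```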
+### How to Use
+
+1. Download the `Vehicle Detection/Tracking` and `Vehicle Attribute Recognition` models from the links in `Model Zoo` and unzip them to ```./output_inference```. The models are downloaded automatically by default; if you download them manually, you need to set `model_dir` to the actual storage path to use this function.
+
+2. Set `enable: True` under `VEHICLE_ATTR` in infer_cfg_ppvehicle.yml.
+
+3. For image input, run the commands below. (For descriptions of more parameters, please refer to [QUICK_STARTED - Parameter Description](./PPVehicle_QUICK_STARTED.md).)
+
+```bash
+# For a single image
+python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
+                                   --image_file=test_image.jpg \
+                                   --device=gpu
+
+# For a folder containing one or more images
+python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
+                                   --image_dir=images/ \
+                                   --device=gpu
+```
+
+4. For video input, run the commands below.
+
+```bash
+# For a single video
+python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
+                                   --video_file=test_video.mp4 \
+                                   --device=gpu
+
+# For a folder containing one or more videos
+python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
+                                   --video_dir=test_videos/ \
+                                   --device=gpu
+```
+
+5. There are two ways to modify the model path:
+
+   - Method 1: Set the path of each model in `./deploy/pipeline/config/infer_cfg_ppvehicle.yml`; for vehicle attribute recognition, modify the path under the `VEHICLE_ATTR` field.
+   - Method 2: Add `-o` directly on the command line to override the default model path in the configuration file:
+
+```bash
+python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
+                                   --video_file=test_video.mp4 \
+                                   --device=gpu \
+                                   -o VEHICLE_ATTR.model_dir=output_inference/vehicle_attribute_infer
+```
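+
+Note 1 in the model table above benchmarks the attribute model on CPU with MKLDNN and 10 threads. To try a similar CPU setup, a sketch, assuming your build exposes the `--enable_mkldnn` and `--cpu_threads` options from the shared deploy argument parser:
+
+```bash
+# Sketch: CPU inference roughly matching the benchmark note (MKLDNN + 10 threads).
+# Both flags are assumptions here; check `python deploy/pipeline/pipeline.py --help`.
+python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
+                                   --video_file=test_video.mp4 \
+                                   --device=cpu \
+                                   --enable_mkldnn=True \
+                                   --cpu_threads=10
+```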
+
+The result is shown as follows:
+