Unverified · Commit 76b581d5 authored by J JYChen, committed by GitHub

add en doc for vehicle attr (#6638)

* add en doc for vehicle attr

* fix some grammar problems

* fix grammar

* fix command line error in doc
Parent 3ef53438
@@ -52,21 +52,23 @@ SKELETON_ACTION: # Config for skeleton-based action recognition model
### How to Use
1. Download the three inference deployment models `Pedestrian Detection/Tracking`, `Keypoint Recognition`, and `Falling Action Recognition` from the `Model Zoo` and unzip them to ```./output_inference```. The models are downloaded automatically by default; if you download them manually, you need to change the model directory settings to the paths where the models are stored.
2. Currently the action recognition module only supports video input. Depending on which action recognition scheme you want to enable, set `enable: True` under `SKELETON_ACTION` in infer_cfg_pphuman.yml (see the configuration sketch after this list), and then run the following command:
```bash
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
    --video_file=test_video.mp4 \
    --device=gpu
```
3. There are two ways to modify the model path:
- Configure different model paths in ```./deploy/pipeline/config/infer_cfg_pphuman.yml```: the keypoint model and the falling action recognition model correspond to the `KPT` and `SKELETON_ACTION` fields respectively; change the path under the corresponding field to the desired one.
- Add `-o KPT.model_dir=xxx SKELETON_ACTION.model_dir=xxx` right after --config in the command line to modify the model paths:
```bash
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
    -o KPT.model_dir=./dark_hrnet_w32_256x192 SKELETON_ACTION.model_dir=./STGCN \
    --video_file=test_video.mp4 \
    --device=gpu
```
4. For a full description of the parameters in the launch command, please refer to [Parameter Description](./PPHuman_QUICK_STARTED.md).
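To make step 2 concrete, here is a minimal sketch of the relevant fragment of infer_cfg_pphuman.yml (the field names come from the config snippet above; other keys are elided, and the model path is only an illustrative placeholder):

```yaml
SKELETON_ACTION:        # skeleton-based action recognition
  model_dir: ./STGCN    # placeholder; point this at your unzipped model directory
  enable: True          # set to True to turn the module on
```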
@@ -63,20 +63,21 @@ SKELETON_ACTION: # Config for skeleton-based action recognition model
2. Currently the action recognition module only supports video input. Set `enable: True` under `SKELETON_ACTION` in infer_cfg_pphuman.yml, and then run the command:
```bash
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
    --video_file=test_video.mp4 \
    --device=gpu
```
3. There are two ways to modify the model path:
- In ```./deploy/pipeline/config/infer_cfg_pphuman.yml```, you can configure different model paths: the keypoint model and the skeleton-based action recognition model correspond to the `KPT` and `SKELETON_ACTION` fields respectively; change the path under the corresponding field to the expected one.
- Add `-o KPT.model_dir=xxx SKELETON_ACTION.model_dir=xxx` right after --config in the command line to change the model paths:
```bash
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
    -o KPT.model_dir=./dark_hrnet_w32_256x192 SKELETON_ACTION.model_dir=./STGCN \
    --video_file=test_video.mp4 \
    --device=gpu
```
@@ -172,7 +173,7 @@ ID_BASED_DETACTION: # Config for detection-based action recognition model
2. Currently the action recognition module only supports video input. Set `enable: True` under `ID_BASED_DETACTION` in infer_cfg_pphuman.yml.
3. Run this command:
```bash
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu
@@ -229,7 +230,7 @@ VIDEO_ACTION: # Config for video-based action recognition model
3. Currently the action recognition module only supports video input. Set `enable: True` under `VIDEO_ACTION` in infer_cfg_pphuman.yml.
4. Run this command:
```bash
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu
[English](ppvehicle_attribute_en.md) | Simplified Chinese
# Attribute Recognition Module of PP-Vehicle
@@ -46,7 +47,7 @@
### Description of Configuration
Attribute-related parameters in the [config file](../../config/infer_cfg_ppvehicle.yml) are as follows:
```yaml
VEHICLE_ATTR:
  model_dir: output_inference/vehicle_attribute_infer/ # Path of the vehicle attribute model
@@ -91,13 +92,13 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppv
5. There are two ways to modify the model path:
- Method 1: Configure different model paths in `./deploy/pipeline/config/infer_cfg_ppvehicle.yml`; for the attribute recognition model, modify the configuration under the `VEHICLE_ATTR` field.
- Method 2: Directly add `-o` in the command line to override the default model path in the configuration file (an additional override sketch follows the list):
```bash
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
--video_file=test_video.mp4 \
--device=gpu \
-o VEHICLE_ATTR.model_dir=output_inference/vehicle_attribute_infer
```
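As a further sketch, assuming every key under `VEHICLE_ATTR` can be overridden through `-o` in the same way as `model_dir`, the inference batch size could also be adjusted directly from the command line:

```bash
# Hypothetical combined override: model path plus batch size
# (VEHICLE_ATTR.batch_size is assumed to be overridable like model_dir).
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
    --video_file=test_video.mp4 \
    --device=gpu \
    -o VEHICLE_ATTR.model_dir=output_inference/vehicle_attribute_infer VEHICLE_ATTR.batch_size=1
```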
The test result is as follows:
@@ -107,7 +108,7 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppv
</div>
## Introduction to the Solution
The vehicle attribute recognition model adopts PULC (Practical Ultra Lightweight image Classification) from [PaddleClas](https://github.com/PaddlePaddle/PaddleClas). For details on data preparation, training, and testing of this model, please refer to [PULC Recognition Model of Vehicle Attribute](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/PULC/PULC_vehicle_attribute.md).
The vehicle attribute recognition model adopts the lightweight and high-precision PPLCNet. On top of this model, the following optimizations were further applied:
English | [简体中文](ppvehicle_attribute.md)
# Attribute Recognition Module of PP-Vehicle
Vehicle attribute recognition is widely used in smart cities, smart transportation and other scenarios. In PP-Vehicle, a vehicle attribute recognition module is integrated, which can identify vehicle color and model.
| Task | Algorithm | Precision | Inference Speed | Download |
|-----------|------|-----------|----------|---------------------|
| Vehicle Detection/Tracking | PP-YOLOE | - | - | [Inference and Deployment Model](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip) |
| Vehicle Attribute Recognition | PPLCNet | 90.81 | 2.36 ms | [Inference and Deployment Model](https://bj.bcebos.com/v1/paddledet/models/pipeline/vehicle_attribute_model.zip) |
Note:
1. The inference speed of the attribute model was measured on an Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz, with the MKLDNN acceleration strategy enabled and 10 threads.
2. For an introduction to the model, please refer to the [PP-LCNet Series](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/en/models/PP-LCNet_en.md); details are also available in the PP-LCNet paper.
3. The vehicle attribute recognition model is trained and evaluated on the [VeRi dataset](https://www.v7labs.com/open-datasets/veri-dataset).
- The provided pre-trained model supports 10 colors and 9 vehicle models, the same as in the VeRi dataset. The details are as follows:
```yaml
# Vehicle Colors
- "yellow"
- "orange"
- "green"
- "gray"
- "red"
- "blue"
- "white"
- "golden"
- "brown"
- "black"
# Vehicle Models
- "sedan"
- "suv"
- "van"
- "hatchback"
- "mpv"
- "pickup"
- "bus"
- "truck"
- "estate"
```
## Instructions
### Description of Configuration
Parameters related to vehicle attribute recognition in the [config file](../../config/infer_cfg_ppvehicle.yml) are as follows:
```yaml
VEHICLE_ATTR:
  model_dir: output_inference/vehicle_attribute_infer/ # Path of the model
  batch_size: 8        # Size of the inference batch
  color_threshold: 0.5 # Color threshold. The confidence must reach this threshold to determine the specific attribute; otherwise the result is 'Unknown'.
  type_threshold: 0.5  # Vehicle model threshold. The confidence must reach this threshold to determine the specific attribute; otherwise the result is 'Unknown'.
  enable: False        # Whether to enable this function
```
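To illustrate how the two thresholds behave, below is a minimal, self-contained sketch of the thresholding logic described above. The exact output layout of the deployed model (a 19-dim score vector split as 10 colors + 9 vehicle models) is an assumption made for illustration, and `decode_attribute` is a hypothetical helper rather than part of the pipeline API:

```python
import numpy as np

# Label sets from the list above: 10 colors followed by 9 vehicle models.
COLORS = ["yellow", "orange", "green", "gray", "red",
          "blue", "white", "golden", "brown", "black"]
TYPES = ["sedan", "suv", "van", "hatchback", "mpv",
         "pickup", "bus", "truck", "estate"]

def decode_attribute(scores, labels, threshold):
    """Return the top-scoring label if its confidence reaches the
    threshold; otherwise fall back to 'Unknown'."""
    idx = int(np.argmax(scores))
    return labels[idx] if scores[idx] >= threshold else "Unknown"

# Assumed 19-dim score vector: first 10 entries are color confidences,
# last 9 are vehicle-model confidences.
scores = np.array([0.02, 0.01, 0.03, 0.05, 0.01, 0.7, 0.1, 0.02, 0.03, 0.03,
                   0.1, 0.2, 0.05, 0.05, 0.1, 0.1, 0.1, 0.2, 0.1])
print(decode_attribute(scores[:10], COLORS, 0.5))  # -> 'blue'    (0.7 >= color_threshold)
print(decode_attribute(scores[10:], TYPES, 0.5))   # -> 'Unknown' (max 0.2 < type_threshold)
```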
### How to Use
1. Download the `Vehicle Detection/Tracking` and `Vehicle Attribute Recognition` models from the links in `Model Zoo` and unzip them to ```./output_inference```. The models are downloaded automatically by default; if you download them manually, you need to modify `model_dir` to the model storage path to use this function.
2. Set `enable: True` under `VEHICLE_ATTR` in infer_cfg_ppvehicle.yml.
3. For image input, run the following commands. (For a description of more parameters, please refer to [QUICK_STARTED - Parameter_Description](./PPVehicle_QUICK_STARTED.md).)
```bash
# For single image
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
--image_file=test_image.jpg \
--device=gpu
# For a folder containing one or more images
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
--image_dir=images/ \
--device=gpu
```
4. For video input, run the following commands.
```bash
# For single video
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
--video_file=test_video.mp4 \
--device=gpu
# For a folder containing one or more videos
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
--video_dir=test_videos/ \
--device=gpu
```
5. There are two ways to modify the model path:
- Method 1: Set the path of each model in `./deploy/pipeline/config/infer_cfg_ppvehicle.yml`. For vehicle attribute recognition, modify the path under the `VEHICLE_ATTR` field.
- Method 2: Directly add `-o` in the command line to override the default model path in the configuration file (see also the combined sketch below):
```bash
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
--video_file=test_video.mp4 \
--device=gpu \
-o VEHICLE_ATTR.model_dir=output_inference/vehicle_attribute_infer
```
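Since `-o` accepts multiple `KEY.subkey=value` pairs (as the skeleton-action example earlier shows), Method 2 can, as a sketch, be combined with enabling the module in one run. Whether `enable` is settable through `-o` is an assumption here; editing the YAML file as in step 2 is the documented route:

```bash
# Hypothetical combined override: turn the module on and point it at a
# custom model directory without editing infer_cfg_ppvehicle.yml.
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
    --video_file=test_video.mp4 \
    --device=gpu \
    -o VEHICLE_ATTR.enable=True VEHICLE_ATTR.model_dir=output_inference/vehicle_attribute_infer
```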
The result is shown as follows:
<div width="1000" align="center">
<img src="../images/vehicle_attribute.gif"/>
</div>
### Introduction to the Solution
The vehicle attribute recognition model adopts PULC, Practical Ultra Lightweight image Classification from [PaddleClas](https://github.com/PaddlePaddle/PaddleClas). For details on data preparation, training, and testing of the model, please refer to [PULC Recognition Model of Vehicle Attribute](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/en/PULC/PULC_vehicle_attribute_en.md).
The vehicle attribute recognition model adopts the lightweight and high-precision PPLCNet. On top of PPLCNet, the model is further optimized via:
- Using an SSLD pre-trained model improved accuracy by about 0.5 percentage points without changing the inference speed.
- Integrating the EDA data augmentation strategy improved accuracy by a further 0.52 percentage points.
- Using SKL-UGI knowledge distillation improved accuracy by another 0.23 percentage points.