Unverified commit 9e5f22ae authored by zhiboniu, committed by GitHub

attribute docs update (#6381)

* attribute docs update

* fix mtmct

* update docs
Parent 5931880c
...@@ -20,7 +20,7 @@ KPT:
  batch_size: 8
ATTR:
  model_dir: output_inference/PPLCNet_x1_0_person_attribute_945_infer/
  batch_size: 8
  basemode: "idbased"
  enable: False
......
...@@ -6,21 +6,40 @@

| Task | Algorithm | Precision | Inference Speed (ms) | Download Link |
|:---------------------|:---------:|:------:|:------:| :---------------------------------------------------------------------------------: |
| High-precision pedestrian attribute model | PP-HGNet_small | mA: 95.4 | 1.54ms per person | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_small_person_attribute_954_infer.tar) |
| Lightweight pedestrian attribute model | PP-LCNet_x1_0 | mA: 94.5 | 0.54ms per person | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPLCNet_x1_0_person_attribute_945_infer.tar) |
| Balanced pedestrian attribute model | PP-HGNet_tiny | mA: 95.2 | 1.14ms per person | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_person_attribute_952_infer.tar) |

1. The pedestrian attribute analysis precision is obtained by training and testing on a combination of [PA100k](https://github.com/xh-liu/HydraPlus-Net#pa-100k-dataset), [RAPv2](http://www.rapdataset.com/rapv2.html), [PETA](http://mmlab.ie.cuhk.edu.hk/projects/PETA.html) and some business data.
2. The inference speed is measured on a V100 GPU with TensorRT FP16; the reported time is the model inference time only.
3. The attribute model runs on the results of the tracking model. Please download a tracking model from the [tracking model page](./mot.md) and choose the high-precision or lightweight version according to your needs.
4. After downloading, unzip the model and place it under the PaddleDetection/output_inference/ directory.
## Usage

1. Download a model from the links in the table above and unzip it to ```PaddleDetection/output_inference```, then set `enable: True` under `ATTR` in ```deploy/pipeline/config/infer_cfg_pphuman.yml```.

Description of the configuration items in `infer_cfg_pphuman.yml`:
```
ATTR:                                                                   #module name
  model_dir: output_inference/PPLCNet_x1_0_person_attribute_945_infer/  #model path
  batch_size: 8                                                         #maximum batch size for inference
  basemode: "idbased"                                                   #pipeline routing type; 'idbased' means the module runs on tracking results
  enable: False                                                         #whether to enable the module
```
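
If you would rather toggle this switch from a script than edit the file by hand, a minimal sketch using PyYAML (assumed to be installed; this helper is not part of the repository's tooling) could look like this:

```python
import yaml  # PyYAML, assumed installed

CFG = "deploy/pipeline/config/infer_cfg_pphuman.yml"

# Load the pipeline config, enable the attribute module, and write it back.
with open(CFG) as f:
    cfg = yaml.safe_load(f)

cfg["ATTR"]["enable"] = True  # switch attribute recognition on
cfg["ATTR"]["model_dir"] = "output_inference/PPLCNet_x1_0_person_attribute_945_infer/"

# Note: PyYAML does not preserve the comments in the original file.
with open(CFG, "w") as f:
    yaml.safe_dump(cfg, f, sort_keys=False)
```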
2. For image input, run the following command:
```shell
#single image
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
                                   --image_file=test_image.jpg \
                                   --device=gpu

#image directory
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
                                   --image_dir=images/ \
                                   --device=gpu
```
3. For video input, run the following command:

...@@ -28,15 +47,16 @@
```shell
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
                                   --video_file=test_video.mp4 \
                                   --device=gpu
```
4. To change the model path, there are two options:
   - Option 1: configure different model paths in ```./deploy/pipeline/config/infer_cfg_pphuman.yml```; for the attribute recognition model, modify the settings under the ATTR field.
   - Option 2: add `--model_dir` on the command line to override the model path:
```shell
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
                                   --video_file=test_video.mp4 \
                                   --device=gpu \
                                   --model_dir attr=output_inference/PPLCNet_x1_0_person_attribute_945_infer/
```

The test result is shown below:
......
...@@ -6,21 +6,40 @@ Pedestrian attribute recognition has been widely used in the intelligent communi

| Task | Algorithm | Precision | Inference Speed (ms) | Download Link |
|:---------------------|:---------:|:------:|:------:| :---------------------------------------------------------------------------------: |
| High-Precision Model | PP-HGNet_small | mA: 95.4 | per person 1.54ms | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_small_person_attribute_954_infer.tar) |
| Fast Model | PP-LCNet_x1_0 | mA: 94.5 | per person 0.54ms | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPLCNet_x1_0_person_attribute_945_infer.tar) |
| Balanced Model | PP-HGNet_tiny | mA: 95.2 | per person 1.14ms | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_person_attribute_952_infer.tar) |

1. The precision of pedestrian attribute analysis is obtained by training and testing on a dataset consisting of [PA100k](https://github.com/xh-liu/HydraPlus-Net#pa-100k-dataset), [RAPv2](http://www.rapdataset.com/rapv2.html), [PETA](http://mmlab.ie.cuhk.edu.hk/projects/PETA.html) and some business data.
2. The inference speed is measured on a V100 GPU with TensorRT FP16, and covers model inference only.
3. The attribute model runs on the results of the tracking model. Please download a tracking model from the [MOT page](./mot_en.md); both the high-precision and the faster model are available.
4. Unzip the downloaded model and place it in the `PaddleDetection/output_inference/` directory.
## Instruction

1. Download the model from the links in the table above, unzip it to ```PaddleDetection/output_inference```, and set "enable: True" under ATTR in ```deploy/pipeline/config/infer_cfg_pphuman.yml```.

The meaning of the configuration items in `infer_cfg_pphuman.yml`:
```
ATTR:                                                                   #module name
  model_dir: output_inference/PPLCNet_x1_0_person_attribute_945_infer/  #model path
  batch_size: 8                                                         #maximum batch size for inference
  basemode: "idbased"                                                   #pipeline routing type; 'idbased' means the module runs on tracking results
  enable: False                                                         #whether to enable the module
```
2. When inputting an image, run the command as follows:
```shell
#single image
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
                                   --image_file=test_image.jpg \
                                   --device=gpu

#image directory
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
                                   --image_dir=images/ \
                                   --device=gpu
```
3. When inputting a video, run the command as follows:

...@@ -30,13 +49,13 @@
```shell
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
                                   --video_file=test_video.mp4 \
                                   --device=gpu
```
4. If you want to change the model path, there are two methods:
   - Method 1: configure different model paths in ```./deploy/pipeline/config/infer_cfg_pphuman.yml```; for attribute recognition, modify the configuration under the ATTR field.
   - Method 2: add `--model_dir` on the command line to change the model path:
```shell
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
                                   --video_file=test_video.mp4 \
                                   --device=gpu \
                                   --model_dir attr=output_inference/PPLCNet_x1_0_person_attribute_945_infer/
```

The test result is shown below:
......
...@@ -444,6 +444,17 @@ class PipePredictor(object):
                    trt_max_shape, trt_opt_shape, trt_calib_mode, cpu_threads,
                    enable_mkldnn, color_threshold, type_threshold)

        if self.with_mtmct:
            reid_cfg = self.cfg['REID']
            model_dir = reid_cfg['model_dir']
            batch_size = reid_cfg['batch_size']
            basemode = reid_cfg['basemode']
            self.modebase[basemode] = True
            self.reid_predictor = ReID(
                model_dir, device, run_mode, batch_size, trt_min_shape,
                trt_max_shape, trt_opt_shape, trt_calib_mode, cpu_threads,
                enable_mkldnn)

        if self.with_mot or self.modebase["idbased"] or self.modebase[
                "skeletonbased"]:
            mot_cfg = self.cfg['MOT']
...@@ -493,15 +504,6 @@ class PipePredictor(object):
                cpu_threads=cpu_threads,
                enable_mkldnn=enable_mkldnn)

    def set_file_name(self, path):
        if path is not None:
            self.file_name = os.path.split(path)[-1]
......
...@@ -329,6 +329,9 @@ def res2dict(multi_res):

def mtmct_process(multi_res, captures, mtmct_vis=True, output_dir="output"):
    cid_tid_dict = res2dict(multi_res)
    if len(cid_tid_dict) == 0:
        print("no tracking result found, mtmct will be skipped.")
        return
    map_tid = sub_cluster(cid_tid_dict)
    if not os.path.exists(output_dir):
......
...@@ -70,36 +70,61 @@ train.txt lists all training image names (paths relative to the root path)
00001.jpg	0,0,1,0,....
```

Note: 1) the image name and the label values are separated by a Tab [\t]; 2) the label values are separated by commas [,]. This format must be followed exactly, otherwise parsing will fail.
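
To make the format concrete, here is a minimal parsing sketch; `parse_annotation_line` is a hypothetical helper, not part of the released training code:

```python
def parse_annotation_line(line):
    """Parse one train.txt line of the form '<image>\t<v1,v2,...,v26>'."""
    image_path, label_str = line.rstrip("\n").split("\t")  # Tab between image and labels
    labels = [int(v) for v in label_str.split(",")]        # commas between label values
    assert all(v in (0, 1) for v in labels), "labels must be 0/1"
    return image_path, labels

# Example: 26 comma-separated 0/1 values after the Tab
image, labels = parse_annotation_line("00001.jpg\t" + ",".join(["0", "1"] * 13))
assert len(labels) == 26
```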
### Modify the configuration and start training

The training for this task is integrated into the [PaddleClas](https://github.com/PaddlePaddle/PaddleClas) suite. First run the following command to download the training code:

```shell
git clone https://github.com/PaddlePaddle/PaddleClas
```

The configuration items to modify in the configuration file `PaddleClas/ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml` are as follows:

```
DataLoader:
  Train:
    dataset:
      name: MultiLabelDataset
      image_root: "dataset/pa100k/"                    #root path of the training images
      cls_label_path: "dataset/pa100k/train_list.txt"  #location of the training list file
      label_ratio: True
      transform_ops:
  Eval:
    dataset:
      name: MultiLabelDataset
      image_root: "dataset/pa100k/"                    #root path of the evaluation images
      cls_label_path: "dataset/pa100k/val_list.txt"    #location of the evaluation list file
      label_ratio: True
      transform_ops:
```
Note:

1. The image_root path joined with an image's relative path from train.txt gives the full path where that image is stored.
2. If you change the number of attributes, you also need to modify the number of attribute classes in the configuration:

```
# model architecture
Arch:
  name: "PPLCNet_x1_0"
  pretrained: True
  use_ssld: True
  class_num: 26  #number of attribute classes
```
Then run the following command to start training:

```shell
#multi-GPU training
export CUDA_VISIBLE_DEVICES=0,1,2,3
python3 -m paddle.distributed.launch \
        --gpus="0,1,2,3" \
        tools/train.py \
        -c ./ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml

#single-GPU training
python3 tools/train.py \
        -c ./ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml
```
### Model export

...@@ -113,23 +138,34 @@ python3 tools/export_model.py \
        -o Global.save_inference_dir=deploy/models/PPLCNet_x1_0_person_attribute_infer
```

After exporting the model, copy the `infer_cfg.yml` file from the PP-Human deployment model [PPLCNet_x1_0](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPLCNet_x1_0_person_attribute_945_infer.tar) into the exported model folder `PPLCNet_x1_0_person_attribute_infer`.

To use the model, set the new model path in the PP-Human configuration file `./deploy/pipeline/config/infer_cfg_pphuman.yml`:

```
ATTR:
  model_dir: [YOUR_DEPLOY_MODEL_DIR]/PPLCNet_x1_0_person_attribute_infer/  #path of the newly exported model
  enable: True                                                             #enable the module
```

This completes the setup for recognizing the new attribute categories.
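
The copy step can also be scripted. A minimal sketch, assuming the downloaded model folder sits in the working directory and the export used the path above:

```python
import shutil

# Copy the inference config shipped with the downloaded PP-Human deploy model
# into the freshly exported model folder so the pipeline can load it.
src = "PPLCNet_x1_0_person_attribute_945_infer/infer_cfg.yml"
dst = "deploy/models/PPLCNet_x1_0_person_attribute_infer/infer_cfg.yml"
shutil.copyfile(src, dst)
```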
## Adding or removing attributes

The annotation and training process above uses 26 attributes as an example. If you need to add or remove attributes, you need to:

1) add or remove the corresponding attribute class information during annotation;
2) modify the number and names of the attributes used in train.txt accordingly;
3) modify the training configuration, e.g. the number of attribute classes in the `PaddleClas/ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml` file; see the section "Modify the configuration and start training" above for details.

Example of adding attributes:

1. When annotating the data, append the new attribute values after the existing 26 positions;
2. Add the new attribute values to the label lists in train.txt as well;
3. Note that the mapping between an attribute and its position in the train.txt label list must stay fixed. For example, if positions 1-3 represent age, then positions 1-3 must represent age for all images.

<div width="500" align="center">
  <img src="../../images/add_attribute.png"/>
</div>

Removing attributes works the same way. For example, if the age attribute is not needed, the values at positions [1, 2, 3] can be dropped: simply delete the 1st-3rd values from the 26 numbers annotated in train.txt, and these three attribute values no longer need to be annotated when labeling data.
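
A hypothetical one-off script for this removal, assuming the list file is named train.txt (positions 1-3 correspond to 0-based indices 0-2):

```python
# Drop the age attribute (positions 1-3, i.e. 0-based indices 0-2)
# from every label list in train.txt, writing a new list file.
DROP = {0, 1, 2}

with open("train.txt") as fin, open("train_no_age.txt", "w") as fout:
    for line in fin:
        image_path, label_str = line.rstrip("\n").split("\t")
        kept = [v for i, v in enumerate(label_str.split(",")) if i not in DROP]
        fout.write(image_path + "\t" + ",".join(kept) + "\n")
```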