Unverified · Commit 3a233e83 authored by Z zhiboniu, committed by GitHub

[cherry-pick]update pphuman & ppvehicle train configs (#6741) (#6767)

* update pphuman model of keypoint\attribute\action recognition

* update jetson docs

* mv yolov3 pedestrian to pphuman

* update ppvehicle&human det

* update links

* add vehicle plate & attribute

* add pphuman json & ppvehicle json
Parent 38f4a27f
......@@ -16,7 +16,7 @@ English | [简体中文](README_cn.md)
**Notes:**
- The above models are trained on the **MOT17-half train** set, which can be downloaded from this [link](https://dataset.bj.bcebos.com/mot/MOT17.zip).
- **MOT17-half train** is composed of the images and labels of the first half of the frames of each video in the MOT17 train set (7 sequences in total). The **MOT17-half val** set, composed of the second half of the frames of each video, is used for evaluation. The annotations can be downloaded from this [link](https://paddledet.bj.bcebos.com/data/mot/mot17half/annotations.zip); download and unzip them into the `dataset/mot/MOT17/images/` folder.
- YOLOv3 is trained with the same pedestrian dataset as `configs/pedestrian/pedestrian_yolov3_darknet.yml`, which is not open yet.
- YOLOv3 is trained with the same pedestrian dataset as `configs/pphuman/pedestrian_yolov3/pedestrian_yolov3_darknet.yml`, which is not open yet.
- For pedestrian tracking, please use a pedestrian detector combined with a pedestrian ReID model. For vehicle tracking, please use a vehicle detector combined with a vehicle ReID model.
- High-quality detection boxes are required for DeepSORT tracking, so post-processing settings such as the NMS threshold of these models differ from those in pure detection tasks.
......
......@@ -17,7 +17,7 @@
**Notes:**
- The above models can all be trained on the **MOT17-half train** set, which can be downloaded from this [link](https://dataset.bj.bcebos.com/mot/MOT17.zip).
- **MOT17-half train** is composed of the images and labels of the first half of the frames of each video in the MOT17 train sequences (7 in total). For evaluation, use the **MOT17-half val** set, composed of the second half of the frames of each video. The annotations can be downloaded from this [link](https://paddledet.bj.bcebos.com/data/mot/mot17half/annotations.zip); unzip them into the `dataset/mot/MOT17/images/` folder.
- YOLOv3 is trained on the same pedestrian dataset as `configs/pedestrian/pedestrian_yolov3_darknet.yml`, which is not open yet.
- YOLOv3 is trained on the same pedestrian dataset as `configs/pphuman/pedestrian_yolov3/pedestrian_yolov3_darknet.yml`, which is not open yet.
- For pedestrian tracking, use a pedestrian detector combined with a pedestrian ReID model. For vehicle tracking, use a vehicle detector combined with a vehicle ReID model.
- DeepSORT tracking requires high-quality detection boxes, so post-processing settings such as the NMS threshold of these models differ from those in pure detection tasks.
......
......@@ -2,19 +2,24 @@
# PP-YOLOE Human Detection Model
The PaddleDetection team provides PP-YOLOE-based detection models for pedestrians, which users can download and use.
The PaddleDetection team provides PP-YOLOE-based detection models for pedestrians, which users can download and use. The models used in PP-Human are trained on a business dataset; we also provide CrowdHuman training configs so that the models can be trained on open-source data.
The CrowdHuman dataset reorganized into COCO format can be downloaded from this [link](https://bj.bcebos.com/v1/paddledet/data/crowdhuman.zip); it contains a single detection class, `pedestrian(1)`. The original dataset can be downloaded from this [link](http://www.crowdhuman.org/download.html).
| Model | Dataset | mAP<sup>val<br>0.5:0.95 | mAP<sup>val<br>0.5 | Download | Config |
|:---------|:-------:|:------:|:------:| :----: | :------:|
|PP-YOLOE-s| CrowdHuman | 42.5 | 77.9 | [Download](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_36e_crowdhuman.pdparams) | [Config](./ppyoloe_crn_s_36e_crowdhuman.yml) |
|PP-YOLOE-l| CrowdHuman | 48.0 | 81.9 | [Download](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_36e_crowdhuman.pdparams) | [Config](./ppyoloe_crn_l_36e_crowdhuman.yml) |
|PP-YOLOE-s| Business dataset | 53.2 | - | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) | [Config](./ppyoloe_crn_s_36e_pphuman.yml) |
|PP-YOLOE-l| Business dataset | 57.8 | - | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | [Config](./ppyoloe_crn_l_36e_pphuman.yml) |
**Notes:**
- PP-YOLOE models are trained on 8 GPUs with mixed precision. If the **number of GPUs** or the **batch size** changes, adjust the learning rate according to **lr<sub>new</sub> = lr<sub>default</sub> * (batch_size<sub>new</sub> * GPU_number<sub>new</sub>) / (batch_size<sub>default</sub> * GPU_number<sub>default</sub>)** (see the sketch after these notes).
- For detailed usage, refer to [ppyoloe](../ppyoloe#getting-start).
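A minimal sketch of this linear-scaling rule in Python (the helper name and defaults are illustrative; the defaults assume the 8-GPU setup above with the per-GPU batch size of 8 used by the pphuman configs in this commit):

```python
def scale_lr(lr_default, batch_size_new, gpu_num_new,
             batch_size_default=8, gpu_num_default=8):
    """Scale the base learning rate with the global batch size."""
    return lr_default * (batch_size_new * gpu_num_new) / (
        batch_size_default * gpu_num_default)

# Example: base_lr 0.001 from an 8-GPU config, retrained on 4 GPUs:
print(scale_lr(0.001, batch_size_new=8, gpu_num_new=4))  # -> 0.0005
```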
# YOLOv3 Human Detection Model
Please refer to the [Human_YOLOv3 page](./pedestrian_yolov3/README_cn.md).
# PP-YOLOE Cigarette Detection Model
The PP-YOLOE-based cigarette detection model is one component of the detection-based action recognition solution in PP-Human. For how to use it for smoking recognition in PP-Human, refer to the [PP-Human Action Recognition module](../../deploy/pipeline/docs/tutorials/pphuman_action.md). The model detects only a single class, cigarette. Due to restrictions on the data source, the training data cannot be released publicly for now. To improve detection, the model uses weights pretrained on the small-object dataset VisDrone (see [visdrone](../visdrone)).
......@@ -23,6 +28,47 @@ The PaddleDetection team provides PP-YOLOE-based detection models for pedestrians, …
|:---------|:-------:|:------:|:------:| :----: | :------:|
| PP-YOLOE-s | Cigarette business dataset | 39.7 | 79.5 | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/ppyoloe_crn_s_80e_smoking_visdrone.pdparams) | [Config](./ppyoloe_crn_s_80e_smoking_visdrone.yml) |
# PP-HGNet Phone-Calling Recognition Model
Phone-calling action recognition is implemented with the PP-HGNet model; for details, refer to the [PP-Human Action Recognition module](../../deploy/pipeline/docs/tutorials/pphuman_action.md). The model is trained with the [PaddleClas](https://github.com/PaddlePaddle/PaddleClas/blob/develop/docs/zh_CN/models/PP-HGNet.md#3.3) suite. The inference model can be downloaded here:
| Model | Dataset | Acc | Download | Config |
|:---------|:-------:|:------:| :----: | :------:|
| PP-HGNet | Business dataset | 86.85 | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_calling_halfbody.zip) | - |
# HRNet Human Keypoint Model
The human keypoint model works together with the ST-GCN model to form the [skeleton-based action recognition](../../deploy/pipeline/docs/tutorials/pphuman_action.md) solution. The keypoint model is HRNet; for details, see the keypoint page [KeyPoint](../keypoint/README.md). The trained model can be downloaded here:
| Model | Dataset | AP<sup>val<br>0.5:0.95 | Download | Config |
|:---------|:-------:|:------:| :----: | :------:|
| HRNet | Business dataset | 87.1 | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.pdparams) | [Config](./hrnet_w32_256x192.yml) |
# ST-GCN Skeleton-Based Action Recognition Model
The human keypoint model works together with the [ST-GCN](https://arxiv.org/abs/1801.07455) model to form the [skeleton-based action recognition](../../deploy/pipeline/docs/tutorials/pphuman_action.md) solution.
The ST-GCN model is trained with [PaddleVideo](https://github.com/PaddlePaddle/PaddleVideo/blob/develop/applications/PPHuman).
The inference model can be downloaded here:
| Model | Dataset | AP<sup>val<br>0.5:0.95 | Download | Config |
|:---------|:-------:|:------:| :----: | :------:|
| ST-GCN | Business dataset | 87.1 | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) | [Config](https://github.com/PaddlePaddle/PaddleVideo/blob/develop/applications/PPHuman/configs/stgcn_pphuman.yaml) |
# PP-TSM Video Classification Model
The [video-classification-based action recognition](../../deploy/pipeline/docs/tutorials/pphuman_action.md) solution is built on the `PP-TSM` model.
The PP-TSM model is trained with [PaddleVideo](https://github.com/PaddlePaddle/PaddleVideo/tree/develop/applications/FightRecognition).
The inference model can be downloaded here:
| Model | Dataset | Acc | Download | Config |
|:---------|:-------:|:------:| :----: | :------:|
| PP-TSM | Combined open-source datasets | 89.06 | [Download](https://videotag.bj.bcebos.com/PaddleVideo-release2.3/ppTSM_fight.zip) | [Config](https://github.com/PaddlePaddle/PaddleVideo/tree/develop/applications/FightRecognition/pptsm_fight_frames_dense.yaml) |
# PP-HGNet / PP-LCNet Attribute Recognition Models
Pedestrian attribute recognition is implemented with the PP-HGNet and PP-LCNet models; for details, refer to the [PP-Human Attribute Recognition module](../../deploy/pipeline/docs/tutorials/pphuman_attribute.md). The models are trained with the [PaddleClas](https://github.com/PaddlePaddle/PaddleClas/blob/develop/docs/zh_CN/models/PP-LCNet.md) suite. The inference models can be downloaded here:
| Model | Dataset | mA | Download | Config |
|:---------|:-------:|:------:| :----: | :------:|
| PP-HGNet_small | Business dataset | 95.4 | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_small_person_attribute_954_infer.zip) | - |
| PP-LCNet | Business dataset | 94.5 | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPLCNet_x1_0_person_attribute_945_infer.zip) | [Config](https://github.com/PaddlePaddle/PaddleClas/blob/develop/ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml) |
## Citation
```
......
use_gpu: true
log_iter: 5
save_dir: output
snapshot_epoch: 10
weights: output/hrnet_w32_256x192/model_final
epoch: 210
num_joints: &num_joints 17
pixel_std: &pixel_std 200
metric: KeyPointTopDownCOCOEval
num_classes: 1
train_height: &train_height 256
train_width: &train_width 192
trainsize: &trainsize [*train_width, *train_height]
hmsize: &hmsize [48, 64]
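# left/right keypoint index pairs that are swapped when an image is horizontally flipped (COCO 17-keypoint order)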
flip_perm: &flip_perm [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14], [15, 16]]
#####model
architecture: TopDownHRNet
pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/Trunc_HRNet_W32_C_pretrained.pdparams
TopDownHRNet:
  backbone: HRNet
  post_process: HRNetPostProcess
  flip_perm: *flip_perm
  num_joints: *num_joints
  width: &width 32
  loss: KeyPointMSELoss

HRNet:
  width: *width
  freeze_at: -1
  freeze_norm: false
  return_idx: [0]

KeyPointMSELoss:
  use_target_weight: true

#####optimizer
LearningRate:
  base_lr: 0.0005
  schedulers:
  - !PiecewiseDecay
    milestones: [170, 200]
    gamma: 0.1
  - !LinearWarmup
    start_factor: 0.001
    steps: 1000

OptimizerBuilder:
  optimizer:
    type: Adam
  regularizer:
    factor: 0.0
    type: L2

#####data
TrainDataset:
  !KeypointTopDownCocoDataset
    image_dir: train2017
    anno_path: annotations/person_keypoints_train2017.json
    dataset_dir: dataset/coco
    num_joints: *num_joints
    trainsize: *trainsize
    pixel_std: *pixel_std
    use_gt_bbox: True

EvalDataset:
  !KeypointTopDownCocoDataset
    image_dir: val2017
    anno_path: annotations/person_keypoints_val2017.json
    dataset_dir: dataset/coco
    bbox_file: bbox.json
    num_joints: *num_joints
    trainsize: *trainsize
    pixel_std: *pixel_std
    use_gt_bbox: True
    image_thre: 0.0

TestDataset:
  !ImageFolder
    anno_path: dataset/coco/keypoint_imagelist.txt
worker_num: 2
global_mean: &global_mean [0.485, 0.456, 0.406]
global_std: &global_std [0.229, 0.224, 0.225]
TrainReader:
  sample_transforms:
    - RandomFlipHalfBodyTransform:
        scale: 0.5
        rot: 40
        num_joints_half_body: 8
        prob_half_body: 0.3
        pixel_std: *pixel_std
        trainsize: *trainsize
        upper_body_ids: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
        flip_pairs: *flip_perm
    - TopDownAffine:
        trainsize: *trainsize
    - ToHeatmapsTopDown:
        hmsize: *hmsize
        sigma: 2
  batch_transforms:
    - NormalizeImage:
        mean: *global_mean
        std: *global_std
        is_scale: true
    - Permute: {}
  batch_size: 64
  shuffle: true
  drop_last: false

EvalReader:
  sample_transforms:
    - TopDownAffine:
        trainsize: *trainsize
  batch_transforms:
    - NormalizeImage:
        mean: *global_mean
        std: *global_std
        is_scale: true
    - Permute: {}
  batch_size: 16

TestReader:
  inputs_def:
    image_shape: [3, *train_height, *train_width]
  sample_transforms:
    - Decode: {}
    - TopDownEvalAffine:
        trainsize: *trainsize
    - NormalizeImage:
        mean: *global_mean
        std: *global_std
        is_scale: true
    - Permute: {}
  batch_size: 1
  fuse_normalize: false # whether to fuse the normalize layer into the model when exporting it
......@@ -5,7 +5,7 @@ We provide some models implemented by PaddlePaddle to detect objects in specific
| Task | Algorithm | Box AP | Download | Configs |
|:---------------------|:---------:|:------:| :-------------------------------------------------------------------------------------: |:------:|
| Pedestrian Detection | YOLOv3 | 51.8 | [model](https://paddledet.bj.bcebos.com/models/pedestrian_yolov3_darknet.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/pedestrian/pedestrian_yolov3_darknet.yml) |
| Pedestrian Detection | YOLOv3 | 51.8 | [model](https://paddledet.bj.bcebos.com/models/pedestrian_yolov3_darknet.pdparams) | [config](./pedestrian_yolov3_darknet.yml) |
## Pedestrian Detection
......@@ -36,15 +36,15 @@ Users can employ the model to conduct the inference:
```
export CUDA_VISIBLE_DEVICES=0
python -u tools/infer.py -c configs/pedestrian/pedestrian_yolov3_darknet.yml \
python -u tools/infer.py -c configs/pphuman/pedestrian_yolov3/pedestrian_yolov3_darknet.yml \
-o weights=https://paddledet.bj.bcebos.com/models/pedestrian_yolov3_darknet.pdparams \
--infer_dir configs/pedestrian/demo \
--infer_dir configs/pphuman/pedestrian_yolov3/demo \
--draw_threshold 0.3 \
--output_dir configs/pedestrian/demo/output
--output_dir configs/pphuman/pedestrian_yolov3/demo/output
```
Some inference results are visualized below:
![](../../docs/images/PedestrianDetection_001.png)
![](../../../docs/images/PedestrianDetection_001.png)
![](../../docs/images/PedestrianDetection_004.png)
![](../../../docs/images/PedestrianDetection_004.png)
......@@ -5,7 +5,7 @@
| Task | Algorithm | Box AP | Download | Config |
|:---------------------|:---------:|:------:| :---------------------------------------------------------------------------------: | :------:|
| Pedestrian Detection | YOLOv3 | 51.8 | [Download](https://paddledet.bj.bcebos.com/models/pedestrian_yolov3_darknet.pdparams) | [Config](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/pedestrian/pedestrian_yolov3_darknet.yml) |
| Pedestrian Detection | YOLOv3 | 51.8 | [Download](https://paddledet.bj.bcebos.com/models/pedestrian_yolov3_darknet.pdparams) | [Config](./pedestrian_yolov3_darknet.yml) |
## Pedestrian Detection
......@@ -37,15 +37,15 @@ The AP at IoU=.5-.95 is 0.518.
```
export CUDA_VISIBLE_DEVICES=0
python -u tools/infer.py -c configs/pedestrian/pedestrian_yolov3_darknet.yml \
python -u tools/infer.py -c configs/pphuman/pedestrian_yolov3/pedestrian_yolov3_darknet.yml \
-o weights=https://paddledet.bj.bcebos.com/models/pedestrian_yolov3_darknet.pdparams \
--infer_dir configs/pedestrian/demo \
--infer_dir configs/pphuman/pedestrian_yolov3/demo \
--draw_threshold 0.3 \
--output_dir configs/pedestrian/demo/output
--output_dir configs/pphuman/pedestrian_yolov3/demo/output
```
Some inference results are visualized below:
![](../../docs/images/PedestrianDetection_001.png)
![](../../../docs/images/PedestrianDetection_001.png)
![](../../docs/images/PedestrianDetection_004.png)
![](../../../docs/images/PedestrianDetection_004.png)
......@@ -26,4 +26,4 @@ EvalDataset:
TestDataset:
!ImageFolder
anno_path: configs/pedestrian/pedestrian.json
anno_path: configs/pphuman/pedestrian_yolov3/pedestrian.json
_BASE_: [
  '../datasets/coco_detection.yml',
  '../runtime.yml',
  '../ppyoloe/_base_/optimizer_300e.yml',
  '../ppyoloe/_base_/ppyoloe_crn.yml',
  '../ppyoloe/_base_/ppyoloe_reader.yml',
]
log_iter: 100
snapshot_epoch: 4
weights: output/ppyoloe_crn_l_36e_pphuman/model_final
pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams
depth_mult: 1.0
width_mult: 1.0
num_classes: 1
TrainDataset:
  !COCODataSet
    image_dir: ""
    anno_path: annotations/train.json
    dataset_dir: dataset/pphuman
    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']

EvalDataset:
  !COCODataSet
    image_dir: ""
    anno_path: annotations/val.json
    dataset_dir: dataset/pphuman

TestDataset:
  !ImageFolder
    anno_path: annotations/val.json
    dataset_dir: dataset/pphuman
TrainReader:
  batch_size: 8
epoch: 36
LearningRate:
  base_lr: 0.001
  schedulers:
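  # decay horizon (43 ≈ 36 * 1.2) extends past the 36 training epochs, presumably so the LR does not fully decay by the end of training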
  - !CosineDecay
    max_epochs: 43
  - !LinearWarmup
    start_factor: 0.
    epochs: 1
PPYOLOEHead:
  static_assigner_epoch: -1
  nms:
    name: MultiClassNMS
    nms_top_k: 1000
    keep_top_k: 100
    score_threshold: 0.01
    nms_threshold: 0.6
_BASE_: [
  '../datasets/coco_detection.yml',
  '../runtime.yml',
  '../ppyoloe/_base_/optimizer_300e.yml',
  '../ppyoloe/_base_/ppyoloe_crn.yml',
  '../ppyoloe/_base_/ppyoloe_reader.yml',
]
log_iter: 100
snapshot_epoch: 4
weights: output/ppyoloe_crn_s_36e_pphuman/model_final
pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_300e_coco.pdparams
depth_mult: 0.33
width_mult: 0.50
num_classes: 1
TrainDataset:
  !COCODataSet
    image_dir: ""
    anno_path: annotations/train.json
    dataset_dir: dataset/pphuman
    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']

EvalDataset:
  !COCODataSet
    image_dir: ""
    anno_path: annotations/val.json
    dataset_dir: dataset/pphuman

TestDataset:
  !ImageFolder
    anno_path: annotations/val.json
    dataset_dir: dataset/pphuman
TrainReader:
  batch_size: 8
epoch: 36
LearningRate:
  base_lr: 0.001
  schedulers:
  - !CosineDecay
    max_epochs: 43
  - !LinearWarmup
    start_factor: 0.
    epochs: 1
PPYOLOEHead:
  static_assigner_epoch: -1
  nms:
    name: MultiClassNMS
    nms_top_k: 1000
    keep_top_k: 100
    score_threshold: 0.01
    nms_threshold: 0.6
Simplified Chinese | [English](README.md)
# PP-YOLOE Vehicle Detection Model
## PP-YOLOE Vehicle Detection Model
The PaddleDetection team provides PP-YOLOE-based detection models for autonomous-driving scenarios, which users can download and use. They cover five datasets (BDD100K-DET, BDD100K-MOT, UA-DETRAC, PPVehicle9cls, PPVehicle); the first three are public datasets, and the last two are merged datasets.
- BDD100K-DET contains 10 classes: `pedestrian(1), rider(2), car(3), truck(4), bus(5), train(6), motorcycle(7), bicycle(8), traffic light(9), traffic sign(10)`
......@@ -35,6 +35,28 @@ Each line of label_list.txt records one class, as shown below:
vehicle
```
## YOLOv3 Vehicle Detection Model
Please refer to the [Vehicle_YOLOv3 page](./vehicle_yolov3/README_cn.md).
## PP-OCRv3 License Plate Recognition Model
License plate recognition uses Paddle's in-house ultra-lightweight models PP-OCRv3_det and PP-OCRv3_rec, fine-tuned on the [CCPD dataset](https://github.com/detectRecog/CCPD) (CCPD2019 + CCPD2020 license plate datasets). Training is done with [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.6/applications/%E8%BD%BB%E9%87%8F%E7%BA%A7%E8%BD%A6%E7%89%8C%E8%AF%86%E5%88%AB.md); the inference models can be downloaded here:
| Model | Dataset | Accuracy | Download | Config |
|:---------|:-------:|:------:| :----: | :------:|
| PP-OCRv3_det | Combined CCPD dataset | hmean: 0.979 | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/ch_PP-OCRv3_det_infer.tar.gz) | [Config](https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/configs/det/ch_PP-OCRv3/ch_PP-OCRv3_det_cml.yml) |
| PP-OCRv3_rec | Combined CCPD dataset | acc: 0.773 | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/ch_PP-OCRv3_rec_infer.tar.gz) | [Config](https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/configs/rec/PP-OCRv3/ch_PP-OCRv3_rec_distillation.yml) |
## PP-LCNet Vehicle Attribute Model
Vehicle attribute recognition uses Paddle's in-house ultra-lightweight PP-LCNet model, trained on the [VeRi dataset](https://www.v7labs.com/open-datasets/veri-dataset). Training is done with [PaddleClas](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/en/PULC/PULC_vehicle_attribute_en.md); the inference model can be downloaded here:
| Model | Dataset | Accuracy | Download | Config |
|:---------|:-------:|:------:| :----: | :------:|
| PP-LCNet_x1_0 | VeRi dataset | 90.81 | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/vehicle_attribute_model.zip) | [Config](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/ppcls/configs/PULC/vehicle_attribute/PPLCNet_x1_0.yaml) |
## Citation
```
@InProceedings{bdd100k,
......
......@@ -5,7 +5,7 @@ We provide some models implemented by PaddlePaddle to detect objects in specific
| Task | Algorithm | Box AP | Download | Configs |
|:---------------------|:---------:|:------:| :-------------------------------------------------------------------------------------: |:------:|
| Vehicle Detection | YOLOv3 | 54.5 | [model](https://paddledet.bj.bcebos.com/models/vehicle_yolov3_darknet.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/vehicle/vehicle_yolov3_darknet.yml) |
| Vehicle Detection | YOLOv3 | 54.5 | [model](https://paddledet.bj.bcebos.com/models/vehicle_yolov3_darknet.pdparams) | [config](./vehicle_yolov3_darknet.yml) |
## Vehicle Detection
......@@ -39,15 +39,15 @@ Users can employ the model to conduct the inference:
```
export CUDA_VISIBLE_DEVICES=0
python -u tools/infer.py -c configs/vehicle/vehicle_yolov3_darknet.yml \
python -u tools/infer.py -c configs/ppvehicle/vehicle_yolov3/vehicle_yolov3_darknet.yml \
-o weights=https://paddledet.bj.bcebos.com/models/vehicle_yolov3_darknet.pdparams \
--infer_dir configs/vehicle/demo \
--infer_dir configs/ppvehicle/vehicle_yolov3/demo \
--draw_threshold 0.2 \
--output_dir configs/vehicle/demo/output
--output_dir configs/ppvehicle/vehicle_yolov3/demo/output
```
Some inference results are visualized below:
![](../../docs/images/VehicleDetection_001.jpeg)
![](../../../docs/images/VehicleDetection_001.jpeg)
![](../../docs/images/VehicleDetection_005.png)
![](../../../docs/images/VehicleDetection_005.png)
......@@ -5,7 +5,7 @@
| Task | Algorithm | Box AP | Download | Config |
|:---------------------|:---------:|:------:| :---------------------------------------------------------------------------------: | :------:|
| Vehicle Detection | YOLOv3 | 54.5 | [Download](https://paddledet.bj.bcebos.com/models/vehicle_yolov3_darknet.pdparams) | [Config](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/vehicle/vehicle_yolov3_darknet.yml) |
| Vehicle Detection | YOLOv3 | 54.5 | [Download](https://paddledet.bj.bcebos.com/models/vehicle_yolov3_darknet.pdparams) | [Config](./vehicle_yolov3_darknet.yml) |
## Vehicle Detection
......@@ -40,15 +40,15 @@ The AP at IoU=.5 is 0.764.
```
export CUDA_VISIBLE_DEVICES=0
python -u tools/infer.py -c configs/vehicle/vehicle_yolov3_darknet.yml \
python -u tools/infer.py -c configs/ppvehicle/vehicle_yolov3/vehicle_yolov3_darknet.yml \
-o weights=https://paddledet.bj.bcebos.com/models/vehicle_yolov3_darknet.pdparams \
--infer_dir configs/vehicle/demo \
--infer_dir configs/ppvehicle/vehicle_yolov3/demo \
--draw_threshold 0.2 \
--output_dir configs/vehicle/demo/output
--output_dir configs/ppvehicle/vehicle_yolov3/demo/output
```
Some inference results are visualized below:
![](../../docs/images/VehicleDetection_001.jpeg)
![](../../../docs/images/VehicleDetection_001.jpeg)
![](../../docs/images/VehicleDetection_005.png)
![](../../../docs/images/VehicleDetection_005.png)
......@@ -39,4 +39,4 @@ EvalDataset:
TestDataset:
!ImageFolder
anno_path: configs/vehicle/vehicle.json
anno_path: configs/ppvehicle/vehicle_yolov3/vehicle.json
......@@ -129,6 +129,18 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pph
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_human_attr.yml -o visual=False --video_file=rtsp://[YOUR_RTSP_SITE] --device=gpu
```
### Deployment on Jetson
Since the compute power of the Jetson platform falls well short of a server's, we recommend the following:
1. Choose lightweight model versions, especially for the tracking model; we recommend `ppyoloe_s: https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip`
2. Enable frame skipping for tracking; a value of 2 or 3 is recommended: `skip_frame_num: 3`
With this recommended configuration, a fairly high frame rate can be reached on the TX2 platform; in our tests the attribute pipeline runs at 20 fps.
You can either edit the config file directly (recommended) or override the fields on the command line (not recommended, since the fields are long); a sketch of the relevant config block follows.
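A minimal sketch of where these two settings might live, assuming the tracking block of the pipeline config (e.g. `infer_cfg_pphuman.yml`) is laid out as below; apart from `skip_frame_num` and the model URL recommended above, the field names are illustrative:

```
MOT:
  model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip  # lightweight tracking model for Jetson
  tracker_config: deploy/pipeline/config/tracker_config.yml
  batch_size: 1
  skip_frame_num: 3  # skip-frame tracking; 2 or 3 recommended on Jetson
  enable: True
```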
### Parameter Description
| Parameter | Required | Meaning |
......@@ -147,6 +159,9 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infe
| --trt_calib_mode | Option | Whether TensorRT uses calibration; default False. Set it to True when using TensorRT int8, and to False when using a model quantized by PaddleSlim |
| --do_entrance_counting | Option | Whether to count entrance/exit traffic; default False |
| --draw_center_traj | Option | Whether to draw tracking trajectories; default False |
| --region_type | Option | 'horizontal' (default) or 'vertical': the direction for traffic counting; 'custom': define a break-in region |
| --region_polygon | Option | Coordinates of the vertices of the break-in region polygon; no default |
| --do_break_in_counting | Option | Perform break-in detection for the region (see the example below) |
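Putting the region flags together, a sketch of a break-in counting run (the video file and polygon coordinates are placeholders, and the polygon is assumed to be given as a flat list of x y vertex coordinates):

```
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
                                   --video_file=test_video.mp4 \
                                   --device=gpu \
                                   --do_break_in_counting \
                                   --region_type=custom \
                                   --region_polygon 200 200 400 200 400 400 200 400
```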
## Solution Overview
......
......@@ -135,6 +135,18 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infe
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_vehicle_attr.yml -o visual=False --video_file=rtsp://[YOUR_RTSP_SITE] --device=gpu
```
### Deployment on Jetson
Since the compute power of the Jetson platform falls well short of a server's, we recommend the following:
1. Choose lightweight model versions, especially for the tracking model; we recommend `ppyoloe_s: https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip`
2. Enable frame skipping for tracking; a value of 2 or 3 is recommended: `skip_frame_num: 3`
With this recommended configuration, a fairly high frame rate can be reached on the TX2 platform; in our tests the attribute pipeline runs at 20 fps.
You can either edit the config file directly (recommended) or override the fields on the command line (not recommended, since the fields are long).
### Parameter Description
| Parameter | Required | Meaning |
......@@ -153,6 +165,9 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infe
| --trt_calib_mode | Option | Whether TensorRT uses calibration; default False. Set it to True when using TensorRT int8, and to False when using a model quantized by PaddleSlim |
| --do_entrance_counting | Option | Whether to count entrance/exit traffic; default False |
| --draw_center_traj | Option | Whether to draw tracking trajectories; default False |
| --region_type | Option | 'horizontal' (default) or 'vertical': the direction for traffic counting; 'custom': define a no-parking region |
| --region_polygon | Option | Coordinates of the vertices of the no-parking region polygon; no default |
| --illegal_parking_time | Option | Time threshold for illegal parking, in seconds (s); -1 (default) disables the check (see the example below) |
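Putting the region flags together, a sketch of an illegal-parking check (the config path, video file, and polygon coordinates are placeholders; the polygon is assumed to be given as a flat list of x y vertex coordinates):

```
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
                                   --video_file=test_video.mp4 \
                                   --device=gpu \
                                   --draw_center_traj \
                                   --illegal_parking_time=5 \
                                   --region_type=custom \
                                   --region_polygon 100 300 300 300 300 500 100 500
```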
## Solution Overview
......
......@@ -298,6 +298,7 @@ class PlateRecognizer(object):
'甘': 'GS-',
'青': 'QH-',
'宁': 'NX-',
'闽': 'FJ-',
'·': ' '
}
for _char in text:
......