Unverified commit 57aad25e, authored by zhiboniu, committed by GitHub

update ppvehicle_plate docs (#6625)

* add ppvehicle_plate

* replace cn codes

* update docs

* update mot model
Parent 0e1f2a68
@@ -14,8 +14,8 @@ MOT:
VEHICLE_PLATE:
det_model_dir: output_inference/ch_PP-OCRv3_det_infer/
det_limit_side_len: 480
det_limit_type: "max"
det_limit_side_len: 736
det_limit_type: "min"
rec_model_dir: output_inference/ch_PP-OCRv3_rec_infer/
rec_image_shape: [3, 48, 320]
rec_batch_num: 6
# PP-Vehicle License Plate Recognition Module
【Application Introduction】

License plate recognition is widely used in vehicle scenarios as a means of vehicle identification, for example at automatic entry/exit barrier gates. PP-Vehicle provides vehicle tracking together with license plate recognition, and the models are available for download:

【Model Downloads】
| Task | Algorithm | Accuracy | Inference Speed (ms) | Model Download |
|:---------------------|:---------:|:------:|:------:| :---------------------------------------------------------------------------------: |
| Vehicle detection/tracking | PP-YOLOE-l | mAP: 63.9 | - |[Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip) |
| Plate detection | ch_PP-OCRv3_det | hmean: 0.979 | - | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/ch_PP-OCRv3_det_infer.tar.gz) |
| Plate recognition | ch_PP-OCRv3_rec | acc: 0.773 | - | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/ch_PP-OCRv3_rec_infer.tar.gz) |
1. The tracking model is trained on the PPVehicle dataset (which integrates BDD100K-MOT and UA-DETRAC), where the car, truck, bus, and van classes of BDD100K-MOT and the car, bus, and van classes of UA-DETRAC are merged into a single class, vehicle (1).
2. The plate detection and recognition models are obtained by fine-tuning PP-OCRv3 on a mixed license plate dataset built from CCPD2019 and CCPD2020.
## Usage

【Configuration】
1. Download the models from the links in the table above and extract them to the ```PaddleDetection/output_inference``` directory, then update the model paths in the configuration file; by default, the models can also be downloaded automatically. Set `enable: True` under `VEHICLE_PLATE` in ```deploy/pipeline/config/infer_cfg_ppvehicle.yml```.

Configuration items in `infer_cfg_ppvehicle.yml`:
```yaml
VEHICLE_PLATE:                                                   # module name
  det_model_dir: output_inference/ch_PP-OCRv3_det_infer/         # plate detection model directory
  det_limit_side_len: 480                                        # input size limit (single side) for the detection model
  det_limit_type: "max"                                          # which side the limit applies to; "max" means the longer side
  rec_model_dir: output_inference/ch_PP-OCRv3_rec_infer/         # plate recognition model directory
  rec_image_shape: [3, 48, 320]                                  # input shape of the recognition model
  rec_batch_num: 6                                               # plate recognition batch size
  word_dict_path: deploy/pipeline/ppvehicle/rec_word_dict.txt    # OCR character dictionary
  basemode: "idbased"                                            # pipeline type; 'idbased' means it builds on the tracking model
  enable: False                                                  # whether the module is enabled
```
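For orientation, here is a minimal Python sketch of how such a section might be consumed once the YAML has been parsed into a dict. `plate_module_args` is a hypothetical helper written for this illustration, not part of the actual pipeline code; its defaults mirror the sample config above.

```python
def plate_module_args(cfg):
    """Return None when VEHICLE_PLATE is disabled, else the arguments
    its two OCR sub-models need (defaults mirror the sample config above)."""
    section = cfg.get("VEHICLE_PLATE", {})
    if not section.get("enable", False):
        return None
    return {
        "det_model_dir": section["det_model_dir"],
        "det_limit_side_len": section.get("det_limit_side_len", 480),
        "det_limit_type": section.get("det_limit_type", "max"),
        "rec_model_dir": section["rec_model_dir"],
        "rec_image_shape": section.get("rec_image_shape", [3, 48, 320]),
        "rec_batch_num": section.get("rec_batch_num", 6),
    }


cfg = {
    "VEHICLE_PLATE": {
        "enable": True,
        "det_model_dir": "output_inference/ch_PP-OCRv3_det_infer/",
        "rec_model_dir": "output_inference/ch_PP-OCRv3_rec_infer/",
    }
}
print(plate_module_args(cfg)["rec_batch_num"])  # 6
```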
【Commands】
2. For image input, run the following commands (for more details on command-line arguments, see [Quick Start - Argument Description](./PPVehicle_QUICK_STARTED.md#41-参数说明)).
```bash
# single image
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
                                   --image_file=test_image.jpg \
                                   --device=gpu

# image directory
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
                                   --image_dir=images/ \
                                   --device=gpu
```
3. For video input, run the following commands.
```bash
# single video file
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
                                   --video_file=test_video.mp4 \
                                   --device=gpu

# video directory
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
                                   --video_dir=test_videos/ \
                                   --device=gpu
```
4. To change the model paths, there are two options:
   - Option 1: edit the model paths in ```./deploy/pipeline/config/infer_cfg_ppvehicle.yml```; for the plate recognition module, modify the entries under the `VEHICLE_PLATE` field.
   - Option 2: append `-o VEHICLE_PLATE.det_model_dir=[YOUR_DETMODEL_PATH] VEHICLE_PLATE.rec_model_dir=[YOUR_RECMODEL_PATH]` after the --config option on the command line.
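The `-o SECTION.key=value` override style of Option 2 can be emulated with a small helper. This is a sketch of the idea, not PaddleDetection's actual option parser:

```python
def apply_overrides(cfg, opts):
    """Apply '-o SECTION.key=value' style overrides to a nested config dict."""
    for opt in opts:
        key, _, value = opt.partition("=")
        node = cfg
        *parents, leaf = key.split(".")
        for part in parents:
            node = node.setdefault(part, {})  # descend, creating sections as needed
        node[leaf] = value
    return cfg


cfg = {"VEHICLE_PLATE": {"det_model_dir": "output_inference/ch_PP-OCRv3_det_infer/"}}
apply_overrides(cfg, ["VEHICLE_PLATE.det_model_dir=my_det_model/",
                      "VEHICLE_PLATE.rec_model_dir=my_rec_model/"])
print(cfg["VEHICLE_PLATE"]["det_model_dir"])  # my_det_model/
```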
【Demo】
The test result is shown below:

<div width="1000" align="center">
  <img src="../images/ppvehicleplate.jpg"/>
</div>
## Solution

【Approach and Features】
1. Object detection / multi-object tracking obtains the vehicle bounding boxes from the image or video input. The model is PP-YOLOE; see [PP-YOLOE](../../../configs/ppyoloe/README_cn.md) for details.
2. Each vehicle is cropped from the input image using its bounding-box coordinates.
3. A plate detection model locates the license plate within each vehicle crop, and the plate region is cropped in the same way. The model is PP-OCRv3_det, fine-tuned on the CCPD dataset for the license plate scenario.
4. A character recognition model reads the characters on the plate. The model is PP-OCRv3_rec, fine-tuned on the CCPD dataset for the license plate scenario.
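Steps 2-3 above boil down to coordinate arithmetic: crop each vehicle by its box, detect the plate inside the crop, then shift the plate box back into full-image coordinates. A minimal sketch with hypothetical helpers (not the pipeline's actual code):

```python
def clamp_box(box, img_w, img_h):
    """Clamp an [x1, y1, x2, y2] box to the image bounds and round to ints."""
    x1, y1, x2, y2 = box
    return [max(0, int(x1)), max(0, int(y1)), min(img_w, int(x2)), min(img_h, int(y2))]


def plate_box_to_image(vehicle_box, plate_box_in_crop):
    """Map a plate box found inside a vehicle crop back to full-image coords."""
    vx1, vy1 = vehicle_box[0], vehicle_box[1]
    px1, py1, px2, py2 = plate_box_in_crop
    return [vx1 + px1, vy1 + py1, vx1 + px2, vy1 + py2]


# a vehicle at (100, 50)-(300, 200); its plate found at (20, 80)-(120, 110) in the crop
print(plate_box_to_image([100, 50, 300, 200], [20, 80, 120, 110]))  # [120, 130, 220, 160]
```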
**Performance optimizations:**
1. Frame skipping: license plate detection runs once every 10 frames instead of on every frame, saving compute.
2. Plate result stabilization: to smooth out single-frame fluctuations, all historical recognition results for the same track id are pooled by voting, yielding the most likely plate for that id.
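Both optimizations can be sketched in a few lines. This is an illustration of the strategy described above (the 10-frame interval comes from the text; the plate strings are made up), not the pipeline's actual implementation:

```python
from collections import Counter, defaultdict

PLATE_DETECT_INTERVAL = 10  # run plate OCR once every 10 frames (skip-frame strategy)


class PlateStabilizer:
    """Vote over the historical OCR results of each track id."""

    def __init__(self):
        self.history = defaultdict(Counter)

    def should_detect(self, frame_id):
        # frame skipping: only every PLATE_DETECT_INTERVAL-th frame triggers OCR
        return frame_id % PLATE_DETECT_INTERVAL == 0

    def update(self, track_id, plate_text):
        if plate_text:
            self.history[track_id][plate_text] += 1

    def best_plate(self, track_id):
        # majority vote over everything seen so far for this id
        votes = self.history[track_id]
        return votes.most_common(1)[0][0] if votes else ""


stab = PlateStabilizer()
for text in ["ZJ-A12345", "ZJ-A12845", "ZJ-A12345"]:  # one noisy OCR result
    stab.update(7, text)
print(stab.best_plate(7))  # ZJ-A12345
```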
## References
1. The PaddleDetection detection model [PP-YOLOE](../../../../configs/ppyoloe)
2. The Paddle text recognition model library [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)
@@ -364,6 +364,36 @@ class PipePredictor(object):
# auto download inference model
model_dir_dict = get_model_dir(self.cfg)
if self.with_vehicleplate:
vehicleplate_cfg = self.cfg['VEHICLE_PLATE']
self.vehicleplate_detector = PlateRecognizer(args, vehicleplate_cfg)
basemode = self.basemode['VEHICLE_PLATE']
self.modebase[basemode] = True
if self.with_human_attr:
attr_cfg = self.cfg['ATTR']
model_dir = model_dir_dict['ATTR']
batch_size = attr_cfg['batch_size']
basemode = self.basemode['ATTR']
self.modebase[basemode] = True
self.attr_predictor = AttrDetector(
model_dir, device, run_mode, batch_size, trt_min_shape,
trt_max_shape, trt_opt_shape, trt_calib_mode, cpu_threads,
enable_mkldnn)
if self.with_vehicle_attr:
vehicleattr_cfg = self.cfg['VEHICLE_ATTR']
model_dir = model_dir_dict['VEHICLE_ATTR']
batch_size = vehicleattr_cfg['batch_size']
color_threshold = vehicleattr_cfg['color_threshold']
type_threshold = vehicleattr_cfg['type_threshold']
basemode = self.basemode['VEHICLE_ATTR']
self.modebase[basemode] = True
self.vehicle_attr_predictor = VehicleAttr(
model_dir, device, run_mode, batch_size, trt_min_shape,
trt_max_shape, trt_opt_shape, trt_calib_mode, cpu_threads,
enable_mkldnn, color_threshold, type_threshold)
if not is_video:
det_cfg = self.cfg['DET']
model_dir = model_dir_dict['DET']
@@ -372,41 +402,8 @@ class PipePredictor(object):
model_dir, device, run_mode, batch_size, trt_min_shape,
trt_max_shape, trt_opt_shape, trt_calib_mode, cpu_threads,
enable_mkldnn)
if self.with_human_attr:
attr_cfg = self.cfg['ATTR']
model_dir = model_dir_dict['ATTR']
batch_size = attr_cfg['batch_size']
basemode = self.basemode['ATTR']
self.modebase[basemode] = True
self.attr_predictor = AttrDetector(
model_dir, device, run_mode, batch_size, trt_min_shape,
trt_max_shape, trt_opt_shape, trt_calib_mode, cpu_threads,
enable_mkldnn)
if self.with_vehicle_attr:
vehicleattr_cfg = self.cfg['VEHICLE_ATTR']
model_dir = model_dir_dict['VEHICLE_ATTR']
batch_size = vehicleattr_cfg['batch_size']
color_threshold = vehicleattr_cfg['color_threshold']
type_threshold = vehicleattr_cfg['type_threshold']
basemode = self.basemode['VEHICLE_ATTR']
self.modebase[basemode] = True
self.vehicle_attr_predictor = VehicleAttr(
model_dir, device, run_mode, batch_size, trt_min_shape,
trt_max_shape, trt_opt_shape, trt_calib_mode, cpu_threads,
enable_mkldnn, color_threshold, type_threshold)
else:
if self.with_human_attr:
attr_cfg = self.cfg['ATTR']
model_dir = model_dir_dict['ATTR']
batch_size = attr_cfg['batch_size']
basemode = self.basemode['ATTR']
self.modebase[basemode] = True
self.attr_predictor = AttrDetector(
model_dir, device, run_mode, batch_size, trt_min_shape,
trt_max_shape, trt_opt_shape, trt_calib_mode, cpu_threads,
enable_mkldnn)
if self.with_idbased_detaction:
idbased_detaction_cfg = self.cfg['ID_BASED_DETACTION']
model_dir = model_dir_dict['ID_BASED_DETACTION']
@@ -502,26 +499,6 @@ class PipePredictor(object):
use_dark=False)
self.kpt_buff = KeyPointBuff(skeleton_action_frames)
if self.with_vehicleplate:
vehicleplate_cfg = self.cfg['VEHICLE_PLATE']
self.vehicleplate_detector = PlateRecognizer(args,
vehicleplate_cfg)
basemode = self.basemode['VEHICLE_PLATE']
self.modebase[basemode] = True
if self.with_vehicle_attr:
vehicleattr_cfg = self.cfg['VEHICLE_ATTR']
model_dir = model_dir_dict['VEHICLE_ATTR']
batch_size = vehicleattr_cfg['batch_size']
color_threshold = vehicleattr_cfg['color_threshold']
type_threshold = vehicleattr_cfg['type_threshold']
basemode = self.basemode['VEHICLE_ATTR']
self.modebase[basemode] = True
self.vehicle_attr_predictor = VehicleAttr(
model_dir, device, run_mode, batch_size, trt_min_shape,
trt_max_shape, trt_opt_shape, trt_calib_mode, cpu_threads,
enable_mkldnn, color_threshold, type_threshold)
if self.with_mtmct:
reid_cfg = self.cfg['REID']
model_dir = model_dir_dict['REID']
@@ -661,6 +638,20 @@ class PipePredictor(object):
attr_res = {'output': vehicle_attr_res_list}
self.pipeline_res.update(attr_res, 'vehicle_attr')
if self.with_vehicleplate:
if i > self.warmup_frame:
self.pipe_timer.module_time['vehicleplate'].start()
crop_inputs = crop_image_with_det(batch_input, det_res)
platelicenses = []
for crop_input in crop_inputs:
platelicense = self.vehicleplate_detector.get_platelicense(
crop_input)
platelicenses.extend(platelicense['plate'])
if i > self.warmup_frame:
self.pipe_timer.module_time['vehicleplate'].end()
vehicleplate_res = {'vehicleplate': platelicenses}
self.pipeline_res.update(vehicleplate_res, 'vehicleplate')
self.pipe_timer.img_num += len(batch_input)
if i > self.warmup_frame:
self.pipe_timer.total_time.end()
@@ -1078,6 +1069,7 @@ class PipePredictor(object):
det_res = result.get('det')
human_attr_res = result.get('attr')
vehicle_attr_res = result.get('vehicle_attr')
vehicleplate_res = result.get('vehicleplate')
for i, (im_file, im) in enumerate(zip(im_files, images)):
if det_res is not None:
@@ -1088,7 +1080,7 @@ class PipePredictor(object):
im = visualize_box_mask(
im,
det_res_i,
labels=['person'],
labels=['target'],
threshold=self.cfg['crop_thresh'])
im = np.ascontiguousarray(np.copy(im))
im = cv2.cvtColor(im, cv2.COLOR_RGB2BGR)
@@ -1100,6 +1092,11 @@ class PipePredictor(object):
vehicle_attr_res_i = vehicle_attr_res['output'][
start_idx:start_idx + boxes_num_i]
im = visualize_attr(im, vehicle_attr_res_i, det_res_i['boxes'])
if vehicleplate_res is not None:
plates = vehicleplate_res['vehicleplate']
det_res_i['boxes'][:, 4:6] = det_res_i[
'boxes'][:, 4:6] - det_res_i['boxes'][:, 2:4]
im = visualize_vehicleplate(im, plates, det_res_i['boxes'])
img_name = os.path.split(im_file)[-1]
if not os.path.exists(self.output_dir):
@@ -258,21 +258,54 @@ class PlateRecognizer(object):
return self.check_plate(plate_text_list)
def check_plate(self, text_list):
simcode = [
'浙', '粤', '京', '津', '冀', '晋', '蒙', '辽', '黑', '沪', '吉', '苏', '皖',
'赣', '鲁', '豫', '鄂', '湘', '桂', '琼', '渝', '川', '贵', '云', '藏', '陕',
'甘', '青', '宁'
]
plate_all = {"plate": []}
for text_pcar in text_list:
platelicense = ""
for text_info in text_pcar:
text = text_info[0][0][0]
if len(text) > 2 and len(text) < 10:
platelicense = text
platelicense = self.replace_cn_code(text)
plate_all["plate"].append(platelicense)
return plate_all
def replace_cn_code(self, text):
simcode = {
'浙': 'ZJ-',
'粤': 'GD-',
'京': 'BJ-',
'津': 'TJ-',
'冀': 'HE-',
'晋': 'SX-',
'蒙': 'NM-',
'辽': 'LN-',
'黑': 'HLJ-',
'沪': 'SH-',
'吉': 'JL-',
'苏': 'JS-',
'皖': 'AH-',
'赣': 'JX-',
'鲁': 'SD-',
'豫': 'HA-',
'鄂': 'HB-',
'湘': 'HN-',
'桂': 'GX-',
'琼': 'HI-',
'渝': 'CQ-',
'川': 'SC-',
'贵': 'GZ-',
'云': 'YN-',
'藏': 'XZ-',
'陕': 'SN-',
'甘': 'GS-',
'青': 'QH-',
'宁': 'NX-',
'·': ' '
}
for _char in text:
if _char in simcode:
text = text.replace(_char, simcode[_char])
return text
def main():
cfg = merge_cfg(FLAGS)
@@ -419,7 +419,7 @@ def visualize_vehicleplate(im, results, boxes=None):
im_h, im_w = im.shape[:2]
text_scale = max(1.0, im.shape[0] / 1600.)
text_thickness = 1
text_thickness = 2
line_inter = im.shape[0] / 40.
for i, res in enumerate(results):
@@ -436,7 +436,7 @@ def visualize_vehicleplate(im, results, boxes=None):
text_loc = (text_w, text_h)
cv2.putText(
im,
text,
"LP: " + text,
text_loc,
cv2.FONT_ITALIC,
text_scale, (0, 255, 255),