diff --git a/deploy/pipeline/README.md b/deploy/pipeline/README.md
index 15acf2fa9ac20538380903e15012c222e3ffc9b9..cbb2d86fbe0a985545f0c466886611a9c4eca729 100644
--- a/deploy/pipeline/README.md
+++ b/deploy/pipeline/README.md
@@ -62,7 +62,7 @@
| 打电话识别 | 单人ms | [目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) <br> [基于人体id的图像分类](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_calling_halfbody.zip) | 目标检测:182M <br> 基于人体id的图像分类:45M |
-点击模型方案中的模型即可下载指定模型,下载后解压存放至`./output_inference`目录中
+点击模型方案中的模型即可下载指定模型,下载后解压存放至`./output_inference`目录中
@@ -71,20 +71,14 @@
端到端模型效果(点击展开)
-| 任务 | 端到端速度(ms) | 模型方案 | 模型体积 |
-| --------- | --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------- |
-| 行人检测(高精度) | 25.1ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 182M |
-| 行人检测(轻量级) | 16.2ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) | 27M |
-| 行人跟踪(高精度) | 31.8ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 182M |
-| 行人跟踪(轻量级) | 21.0ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) | 27M |
-| 属性识别(高精度) | 单人8.5ms | [目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) <br> [属性识别](https://bj.bcebos.com/v1/paddledet/models/pipeline/strongbaseline_r50_30e_pa100k.zip) | 目标检测:182M <br> 属性识别:86M |
-| 属性识别(轻量级) | 单人7.1ms | [目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) <br> [属性识别](https://bj.bcebos.com/v1/paddledet/models/pipeline/strongbaseline_r50_30e_pa100k.zip) | 目标检测:182M <br> 属性识别:86M |
-| 摔倒识别 | 单人10ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) <br> [关键点检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) <br> [基于关键点行为识别](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) | 多目标跟踪:182M <br> 关键点检测:101M <br> 基于关键点行为识别:21.8M |
-| 闯入识别 | 31.8ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 182M |
-| 打架识别 | 19.7ms | [视频分类](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 90M |
-| 抽烟识别 | 单人15.1ms | [目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) <br> [基于人体id的目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/ppyoloe_crn_s_80e_smoking_visdrone.zip) | 目标检测:182M <br> 基于人体id的目标检测:27M |
-| 打电话识别 | 单人ms | [目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) <br> [基于人体id的图像分类](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_calling_halfbody.zip) | 目标检测:182M <br> 基于人体id的图像分类:45M |
-
+| 任务 | 端到端速度(ms)| 模型方案 | 模型体积 |
+| :---------: | :-------: | :------: |:------: |
+| 车辆检测(高精度) | 25.7ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip) | 182M |
+| 车辆检测(轻量级) | 13.2ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_ppvehicle.zip) | 27M |
+| 车辆跟踪(高精度) | 40ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip) | 182M |
+| 车辆跟踪(轻量级) | 25ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_ppvehicle.zip) | 27M |
+| 车牌识别 | 4.68ms | [车牌检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/ch_PP-OCRv3_det_infer.tar.gz) <br> [车牌字符识别](https://bj.bcebos.com/v1/paddledet/models/pipeline/ch_PP-OCRv3_rec_infer.tar.gz) | 车牌检测:3.9M <br> 车牌字符识别:12M |
+| 车辆属性 | 7.31ms | [车辆属性](https://bj.bcebos.com/v1/paddledet/models/pipeline/vehicle_attribute_model.zip) | 7.2M |
点击模型方案中的模型即可下载指定模型,下载后解压存放至`./output_inference`目录中
diff --git a/deploy/pipeline/docs/tutorials/PPVehicle_QUICK_STARTED.md b/deploy/pipeline/docs/tutorials/PPVehicle_QUICK_STARTED.md
index ec0a8b89f57bbc02ea802265f5e4085acfb3cc98..1d45ff521c5848ab1182e9bc620cc9be3ac8b258 100644
--- a/deploy/pipeline/docs/tutorials/PPVehicle_QUICK_STARTED.md
+++ b/deploy/pipeline/docs/tutorials/PPVehicle_QUICK_STARTED.md
@@ -46,11 +46,13 @@ PP-Vehicle提供了目标检测、属性识别、行为识别、ReID预训练模
| 任务 | 端到端速度(ms)| 模型方案 | 模型体积 |
| :---------: | :-------: | :------: |:------: |
-| 车辆检测(高精度) | 25.1ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip) | 182M |
-| 车辆检测(轻量级) | 16.2ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_ppvehicle.zip) | 27M |
-| 车辆跟踪(高精度) | 31.8ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip) | 182M |
-| 车辆跟踪(轻量级) | 21.0ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_ppvehicle.zip) | 27M |
-| 闯入识别 | 31.8ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip) | 多目标跟踪:182M |
+| 车辆检测(高精度) | 25.7ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip) | 182M |
+| 车辆检测(轻量级) | 13.2ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_ppvehicle.zip) | 27M |
+| 车辆跟踪(高精度) | 40ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip) | 182M |
+| 车辆跟踪(轻量级) | 25ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_ppvehicle.zip) | 27M |
+| 车牌识别 | 4.68ms | [车牌检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/ch_PP-OCRv3_det_infer.tar.gz) <br> [车牌字符识别](https://bj.bcebos.com/v1/paddledet/models/pipeline/ch_PP-OCRv3_rec_infer.tar.gz) | 车牌检测:3.9M <br> 车牌字符识别:12M |
+| 车辆属性 | 7.31ms | [车辆属性](https://bj.bcebos.com/v1/paddledet/models/pipeline/vehicle_attribute_model.zip) | 7.2M |
+
下载模型后,解压至`./output_inference`文件夹。
@@ -97,7 +99,7 @@ VEHICLE_ATTR:
**注意:**
- 如果用户需要实现不同任务,可以在配置文件对应enable选项设置为True。
-- 如果用户仅需要修改模型文件路径,可以在命令行中加入 `--model_dir det=ppyoloe/` 即可,也可以手动修改配置文件中的相应模型路径,详细说明参考下方参数说明文档。
+- 如果用户仅需要修改模型文件路径,可以在命令行的 --config 之后紧跟 `-o MOT.model_dir=ppyoloe/` 进行修改,也可以手动修改配置文件中的相应模型路径,详细说明参考下方参数说明文档。
## 预测部署
@@ -111,7 +113,7 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppv
# 车辆跟踪,指定配置文件路径,模型路径和测试视频,在配置文件```deploy/pipeline/config/infer_cfg_ppvehicle.yml```中的MOT部分enable设置为```True```
# 命令行中指定的模型路径优先级高于配置文件
-python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml --video_file=test_video.mp4 --device=gpu --model_dir det=ppyoloe/ [--run_mode trt_fp16]
+python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16]
# 车辆属性识别,指定配置文件路径和测试视频,在配置文件```deploy/pipeline/config/infer_cfg_ppvehicle.yml```中的ATTR部分enable设置为```True```
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16]
@@ -130,7 +132,6 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppv
|-------|-------|----------|
| --config | Yes | 配置文件路径 |
| -o | Option | 覆盖配置文件中对应的配置 |
-| --model_dir | Option | 各任务模型路径,优先级高于配置文件, 例如`--model_dir det=better_det/ attr=better_attr/`|
| --image_file | Option | 需要预测的图片 |
| --image_dir | Option | 要预测的图片文件夹路径 |
| --video_file | Option | 需要预测的视频,或者rtsp流地址 |
diff --git a/deploy/pipeline/docs/tutorials/pphuman_action.md b/deploy/pipeline/docs/tutorials/pphuman_action.md
index cf0c5af501b465de810d37813032bcb316e6fde5..ee49282eb8ca0ab221dfa3bb6400da51de395716 100644
--- a/deploy/pipeline/docs/tutorials/pphuman_action.md
+++ b/deploy/pipeline/docs/tutorials/pphuman_action.md
@@ -60,12 +60,12 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pph
3. 若修改模型路径,有以下两种方式:
- ```./deploy/pipeline/config/infer_cfg_pphuman.yml```下可以配置不同模型路径,关键点模型和摔倒行为识别模型分别对应`KPT`和`SKELETON_ACTION`字段,修改对应字段下的路径为实际期望的路径即可。
- - 命令行中增加`--model_dir`修改模型路径:
+ - 命令行中在 --config 之后紧跟 `-o KPT.model_dir=xxx SKELETON_ACTION.model_dir=xxx` 修改模型路径:
```python
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
+ -o KPT.model_dir=./dark_hrnet_w32_256x192 SKELETON_ACTION.model_dir=./STGCN \
--video_file=test_video.mp4 \
- --device=gpu \
- --model_dir kpt=./dark_hrnet_w32_256x192 action=./STGCN
+ --device=gpu
```
4. 启动命令中的完整参数说明,请参考[参数说明](./PPHuman_QUICK_STARTED.md)。
diff --git a/deploy/pipeline/docs/tutorials/pphuman_action_en.md b/deploy/pipeline/docs/tutorials/pphuman_action_en.md
index 943a5c5678997816f66941e785f4ac95256acf83..3eab1d447736c26a354ca8527e391fe9fc1500ac 100644
--- a/deploy/pipeline/docs/tutorials/pphuman_action_en.md
+++ b/deploy/pipeline/docs/tutorials/pphuman_action_en.md
@@ -72,13 +72,13 @@ SKELETON_ACTION: # Config for skeleton-based action recognition model
3. There are two ways to modify the model path:
- In ```./deploy/pipeline/config/infer_cfg_pphuman.yml```, you can configurate different model paths,which is proper only if you match keypoint models and action recognition models with the fields of `KPT` and `SKELETON_ACTION` respectively, and modify the corresponding path of each field into the expected path.
- - Add `--model_dir` in the command line to revise the model path:
+ - Add `-o KPT.model_dir=xxx SKELETON_ACTION.model_dir=xxx` right after `--config` in the command line to change the model path:
```python
- python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
- --video_file=test_video.mp4 \
- --device=gpu \
- --model_dir kpt=./dark_hrnet_w32_256x192 action=./STGCN
+  python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
+    -o KPT.model_dir=./dark_hrnet_w32_256x192 SKELETON_ACTION.model_dir=./STGCN \
+    --video_file=test_video.mp4 \
+    --device=gpu
```
4. For detailed parameter description, please refer to [Parameter Description](./PPHuman_QUICK_STARTED.md)
diff --git a/deploy/pipeline/docs/tutorials/pphuman_attribute.md b/deploy/pipeline/docs/tutorials/pphuman_attribute.md
index 16109606c0cbfbac9794512648bf34a7150d0cb7..5e5f1dc61f204e7106af26516d4311637acd32b5 100644
--- a/deploy/pipeline/docs/tutorials/pphuman_attribute.md
+++ b/deploy/pipeline/docs/tutorials/pphuman_attribute.md
@@ -59,12 +59,12 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pph
4. 若修改模型路径,有以下两种方式:
- 方法一:```./deploy/pipeline/config/infer_cfg_pphuman.yml```下可以配置不同模型路径,属性识别模型修改ATTR字段下配置
- - 方法二:命令行中增加`--model_dir`修改模型路径:
+ - 方法二:命令行中在 --config 之后紧跟 `-o ATTR.model_dir=xxx` 修改模型路径:
```python
-python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
+python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
+ -o ATTR.model_dir=output_inference/PPLCNet_x1_0_person_attribute_945_infer/ \
--video_file=test_video.mp4 \
- --device=gpu \
- --model_dir attr=output_inference/PPLCNet_x1_0_person_attribute_945_infer/
+ --device=gpu
```
测试效果如下:
diff --git a/deploy/pipeline/docs/tutorials/pphuman_attribute_en.md b/deploy/pipeline/docs/tutorials/pphuman_attribute_en.md
index 70c55de0edea1a2c3b0574efd7835c10d94f74e6..fb402f757302160055e63a9f5c289ecdd4aab8de 100644
--- a/deploy/pipeline/docs/tutorials/pphuman_attribute_en.md
+++ b/deploy/pipeline/docs/tutorials/pphuman_attribute_en.md
@@ -55,12 +55,12 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pph
4. If you want to change the model path, there are two methods:
- The first: In ```./deploy/pipeline/config/infer_cfg_pphuman.yml``` you can configurate different model paths. In attribute recognition models, you can modify the configuration in the field of ATTR.
- - The second: Add `--model_dir` in the command line to change the model path:
+ - The second: Add `-o ATTR.model_dir=xxx` right after `--config` in the command line to change the model path:
```python
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
+ -o ATTR.model_dir=output_inference/PPLCNet_x1_0_person_attribute_945_infer/ \
--video_file=test_video.mp4 \
- --device=gpu \
- --model_dir attr=output_inference/PPLCNet_x1_0_person_attribute_945_infer/
+ --device=gpu
```
The test result is:
diff --git a/deploy/pipeline/docs/tutorials/pphuman_mot.md b/deploy/pipeline/docs/tutorials/pphuman_mot.md
index 13a6c3f05eb170c65e38de1002413be2bef23f17..6380ffbbd7f68d6feb286a1b4f4f33eebcf473a5 100644
--- a/deploy/pipeline/docs/tutorials/pphuman_mot.md
+++ b/deploy/pipeline/docs/tutorials/pphuman_mot.md
@@ -39,15 +39,15 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pph
4. 若修改模型路径,有以下两种方式:
- ```./deploy/pipeline/config/infer_cfg_pphuman.yml```下可以配置不同模型路径,检测和跟踪模型分别对应`DET`和`MOT`字段,修改对应字段下的路径为实际期望的路径即可。
- - 命令行中增加`--model_dir`修改模型路径:
+ - 命令行中在 --config 之后紧跟 `-o MOT.model_dir=xxx` 修改模型路径:
```python
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
+ -o MOT.model_dir=ppyoloe/ \
--video_file=test_video.mp4 \
--device=gpu \
--region_type=horizontal \
--do_entrance_counting \
- --draw_center_traj \
- --model_dir det=ppyoloe/
+ --draw_center_traj
```
**注意:**
diff --git a/deploy/pipeline/docs/tutorials/pphuman_mot_en.md b/deploy/pipeline/docs/tutorials/pphuman_mot_en.md
index c66ad61721280c4d05e42820207794f893f9cdec..06e7ab0149012813b9b4488d2c136981ba578633 100644
--- a/deploy/pipeline/docs/tutorials/pphuman_mot_en.md
+++ b/deploy/pipeline/docs/tutorials/pphuman_mot_en.md
@@ -39,16 +39,16 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pph
4. There are two ways to modify the model path:
- In `./deploy/pipeline/config/infer_cfg_pphuman.yml`, you can configurate different model paths,which is proper only if you match keypoint models and action recognition models with the fields of `DET` and `MOT` respectively, and modify the corresponding path of each field into the expected path.
- - Add `--model_dir` in the command line to revise the model path:
+ - Add `-o MOT.model_dir=xxx` right after `--config` in the command line to change the model path:
```python
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
+ -o MOT.model_dir=ppyoloe/ \
--video_file=test_video.mp4 \
--device=gpu \
--region_type=horizontal \
--do_entrance_counting \
- --draw_center_traj \
- --model_dir det=ppyoloe/
+ --draw_center_traj
```
**Note:**
diff --git a/deploy/pipeline/docs/tutorials/pphuman_mtmct.md b/deploy/pipeline/docs/tutorials/pphuman_mtmct.md
index 894c18ee0d5541f082f1f70b0250549519768020..e39d53fbd02fb51671928de9ff4e65acfa83cad6 100644
--- a/deploy/pipeline/docs/tutorials/pphuman_mtmct.md
+++ b/deploy/pipeline/docs/tutorials/pphuman_mtmct.md
@@ -18,10 +18,9 @@ python3 deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pp
```python
python3 deploy/pipeline/pipeline.py
- --config deploy/pipeline/config/infer_cfg_pphuman.yml
+ --config deploy/pipeline/config/infer_cfg_pphuman.yml -o REID.model_dir=reid_best/
--video_dir=[your_video_file_directory]
--device=gpu
- --model_dir reid=reid_best/
```
## 方案说明
diff --git a/deploy/pipeline/docs/tutorials/pphuman_mtmct_en.md b/deploy/pipeline/docs/tutorials/pphuman_mtmct_en.md
index 0321d2a52d511a00b0c095a3878c3a959e646292..f67c1b67e0a80fadcd5dd03ab24e54c526e6a987 100644
--- a/deploy/pipeline/docs/tutorials/pphuman_mtmct_en.md
+++ b/deploy/pipeline/docs/tutorials/pphuman_mtmct_en.md
@@ -18,10 +18,9 @@ python3 deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pp
```python
python3 deploy/pipeline/pipeline.py
- --config deploy/pipeline/config/infer_cfg_pphuman.yml
+ --config deploy/pipeline/config/infer_cfg_pphuman.yml -o REID.model_dir=reid_best/
--video_dir=[your_video_file_directory]
--device=gpu
- --model_dir reid=reid_best/
```
## Introduction to the Solution
diff --git a/deploy/pipeline/pipe_utils.py b/deploy/pipeline/pipe_utils.py
index 8cc863aa51c4e2df713fee181d9ea5c36257b50b..e17f8aeab784a673d09fda91bf8b3ffefab5c7ec 100644
--- a/deploy/pipeline/pipe_utils.py
+++ b/deploy/pipeline/pipe_utils.py
@@ -88,7 +88,8 @@ class PipeTimer(Times):
for k, v in self.module_time.items():
v_time = round(v.value(), 4)
if v_time > 0 and k in ['det', 'mot', 'video_action']:
- print("{} time(ms): {}".format(k, v_time * 1000))
+ print("{} time(ms): {}; per frame average time(ms): {}".format(
+ k, v_time * 1000, v_time * 1000 / self.img_num))
elif v_time > 0:
print("{} time(ms): {}; per trackid average time(ms): {}".
format(k, v_time * 1000, v_time * 1000 / self.track_num))
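
The reporting change above prints a per-frame average for frame-level modules (`det`, `mot`, `video_action`) and keeps the per-trackid average for everything else. A minimal standalone sketch of that split (the function name `report_module_time` is hypothetical; `img_num` and `track_num` mirror the timer fields, and the zero-guards are an added safety margin the patch itself does not include):

```python
def report_module_time(module_time, img_num, track_num):
    """Render total and averaged per-module times, mirroring the
    frame-level vs. trackid-level split in PipeTimer.info()."""
    lines = []
    for name, seconds in module_time.items():
        ms = round(seconds, 4) * 1000
        if ms <= 0:
            continue  # skip modules that never ran
        if name in ('det', 'mot', 'video_action'):
            # frame-level modules: average over processed frames
            avg = ms / img_num if img_num else 0.0
            lines.append("{} time(ms): {}; per frame average time(ms): {}"
                         .format(name, ms, avg))
        else:
            # id-level modules: average over accumulated track ids
            avg = ms / track_num if track_num else 0.0
            lines.append("{} time(ms): {}; per trackid average time(ms): {}"
                         .format(name, ms, avg))
    return lines
```

With `module_time={'det': 0.5}` and `img_num=10`, this yields a line reporting 500.0 ms total and a 50.0 ms per-frame average, matching the format string added in the hunk.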
diff --git a/deploy/pipeline/pipeline.py b/deploy/pipeline/pipeline.py
index da2489682edd165d392a440e8660987b5fff2877..3f0b8ce4ac4e9b6ee8f978dfe19987d875a5363a 100644
--- a/deploy/pipeline/pipeline.py
+++ b/deploy/pipeline/pipeline.py
@@ -465,6 +465,7 @@ class PipePredictor(object):
self.cfg['crop_thresh'])
if i > self.warmup_frame:
self.pipe_timer.module_time['det'].end()
+ self.pipe_timer.track_num += len(det_res['boxes'])
self.pipeline_res.update(det_res, 'det')
if self.with_human_attr:
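
The one-line addition above accumulates the detected boxes of each frame into `track_num`, which the timer then uses as the denominator for the per-trackid averages. A minimal sketch of that accumulation (the class and method names are stand-ins for the real `PipeTimer` wiring):

```python
class TimerCounters:
    """Stand-in for the PipeTimer counters updated by the pipeline."""

    def __init__(self):
        self.img_num = 0    # frames processed so far
        self.track_num = 0  # detected boxes accumulated across frames

    def add_frame(self, det_res):
        # mirrors: self.pipe_timer.track_num += len(det_res['boxes'])
        self.img_num += 1
        self.track_num += len(det_res['boxes'])


counters = TimerCounters()
counters.add_frame({'boxes': [[0, 0, 10, 10], [5, 5, 20, 20]]})  # 2 boxes
counters.add_frame({'boxes': [[1, 1, 8, 8]]})                    # 1 box
```

After the two frames, `img_num` is 2 and `track_num` is 3, so a module that spent 300 ms total would report a 100 ms per-trackid average.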
diff --git a/deploy/pipeline/ppvehicle/vehicle_plateutils.py b/deploy/pipeline/ppvehicle/vehicle_plateutils.py
index 4a7b792da37de9a1f9e394452cdd11a91b0a9b47..8a93c945cb95ea4b456b1be19572a717ca61150d 100644
--- a/deploy/pipeline/ppvehicle/vehicle_plateutils.py
+++ b/deploy/pipeline/ppvehicle/vehicle_plateutils.py
@@ -139,6 +139,7 @@ def create_predictor(args, cfg, mode):
min_input_shape = {"x": [1, 3, imgH, 10]}
max_input_shape = {"x": [batch_size, 3, imgH, 2304]}
opt_input_shape = {"x": [batch_size, 3, imgH, 320]}
+ config.exp_disable_tensorrt_ops(["transpose2"])
elif mode == "cls":
min_input_shape = {"x": [1, 3, 48, 10]}
max_input_shape = {"x": [batch_size, 3, 48, 1024]}
diff --git a/docs/images/pphumanv2.png b/docs/images/pphumanv2.png
index 829dd60865b18b211e97e6a1c405dc2ee3d24c4b..87b896ed0e5f973497aaedb7cf2042d004e080c3 100644
Binary files a/docs/images/pphumanv2.png and b/docs/images/pphumanv2.png differ
diff --git a/docs/images/ppvehicle.png b/docs/images/ppvehicle.png
new file mode 100644
index 0000000000000000000000000000000000000000..87176c313e7af184225e0f09001be53955b98f7a
Binary files /dev/null and b/docs/images/ppvehicle.png differ