Unverified commit 6eff0446, authored by zhiboniu, committed by GitHub

update ppvehicle modelzoo (#6679)

* update ppvehicle modelzoo; test=document_fix

* update pipeline pictures

* update

* fix platerec jetson trtfp16; test=document_fix

* replace all --model_dir in pipeline docs; test=document_fix
Parent 8fbdf1cb
@@ -62,7 +62,7 @@
| Phone-call recognition | single person ms | [Object detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)<br>[Human-ID-based image classification](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_calling_halfbody.zip) | Object detection: 182M<br>Human-ID-based image classification: 45M |

Click a model in the model solution column to download it; after downloading, extract it into the `./output_inference` directory.

</details>
@@ -71,20 +71,14 @@
<details>
<summary><b>End-to-end model performance (click to expand)</b></summary>
| Task | End-to-end speed (ms) | Model solution | Model size |
| :---------: | :-------: | :------: | :------: |
| Vehicle detection (high accuracy) | 25.7ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip) | 182M |
| Vehicle detection (lightweight) | 13.2ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_ppvehicle.zip) | 27M |
| Vehicle tracking (high accuracy) | 40ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip) | 182M |
| Vehicle tracking (lightweight) | 25ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_ppvehicle.zip) | 27M |
| License plate recognition | 4.68ms | [Plate detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/ch_PP-OCRv3_det_infer.tar.gz) <br> [Plate character recognition](https://bj.bcebos.com/v1/paddledet/models/pipeline/ch_PP-OCRv3_rec_infer.tar.gz) | Plate detection: 3.9M <br> Plate character recognition: 12M |
| Vehicle attribute recognition | 7.31ms | [Vehicle attribute](https://bj.bcebos.com/v1/paddledet/models/pipeline/vehicle_attribute_model.zip) | 7.2M |

Click a model in the model solution column to download it; after downloading, extract it into the `./output_inference` directory.
......
@@ -46,11 +46,13 @@ PP-Vehicle provides pretrained models for object detection, attribute recognition, behavior recognition, and ReID
| Task | End-to-end speed (ms) | Model solution | Model size |
| :---------: | :-------: | :------: | :------: |
| Vehicle detection (high accuracy) | 25.7ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip) | 182M |
| Vehicle detection (lightweight) | 13.2ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_ppvehicle.zip) | 27M |
| Vehicle tracking (high accuracy) | 40ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip) | 182M |
| Vehicle tracking (lightweight) | 25ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_ppvehicle.zip) | 27M |
| License plate recognition | 4.68ms | [Plate detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/ch_PP-OCRv3_det_infer.tar.gz) <br> [Plate character recognition](https://bj.bcebos.com/v1/paddledet/models/pipeline/ch_PP-OCRv3_rec_infer.tar.gz) | Plate detection: 3.9M <br> Plate character recognition: 12M |
| Vehicle attribute recognition | 7.31ms | [Vehicle attribute](https://bj.bcebos.com/v1/paddledet/models/pipeline/vehicle_attribute_model.zip) | 7.2M |

After downloading the models, extract them into the `./output_inference` folder.
@@ -97,7 +99,7 @@ VEHICLE_ATTR:
**Note:**

- To enable a different task, set the corresponding `enable` option to True in the configuration file.
- To change only a model file path, append `-o MOT.model_dir=ppyoloe/` right after `--config` on the command line, or manually edit the corresponding model path in the configuration file; see the parameter description document below for details.
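The `-o` override mechanism described above can be sketched as a dotted-key merge into the loaded config dict. This is an illustration of the idea only (the function name `apply_overrides` is hypothetical, not PaddleDetection's actual parser), and values stay strings in this sketch:

```python
def apply_overrides(cfg, overrides):
    """Apply '-o' style overrides such as 'MOT.model_dir=ppyoloe/' to a
    nested config dict, creating intermediate dicts as needed.
    Values stay strings here; a real parser would also coerce types."""
    for item in overrides:
        dotted_key, value = item.split("=", 1)
        keys = dotted_key.split(".")
        node = cfg
        for k in keys[:-1]:
            node = node.setdefault(k, {})  # descend, creating levels on demand
        node[keys[-1]] = value
    return cfg
```

With this shape of merge, `-o MOT.model_dir=ppyoloe/` rewrites only `cfg['MOT']['model_dir']` and leaves every other setting from the YAML file untouched, which is why the command-line value takes precedence.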
## Inference and Deployment
@@ -111,7 +113,7 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppv
# Vehicle tracking: specify the config file path, model path and test video; set enable to True in the MOT section of deploy/pipeline/config/infer_cfg_ppvehicle.yml
# Model paths given on the command line take precedence over those in the config file
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16]

# Vehicle attribute recognition: specify the config file path and test video; set enable to True in the ATTR section of deploy/pipeline/config/infer_cfg_ppvehicle.yml
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16]
@@ -130,7 +132,6 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppv
|-------|-------|----------|
| --config | Yes | Path to the config file |
| -o | Option | Override the corresponding settings in the config file |
| --image_file | Option | Image to predict |
| --image_dir | Option | Directory of images to predict |
| --video_file | Option | Video to predict, or an rtsp stream address |
......
@@ -60,12 +60,12 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pph
3. To modify the model paths, there are two ways:

   - Different model paths can be configured in ```./deploy/pipeline/config/infer_cfg_pphuman.yml```; the keypoint model and the falling-behavior recognition model correspond to the `KPT` and `SKELETON_ACTION` fields respectively. Modify the paths under those fields to the desired paths.
   - Append `-o KPT.model_dir=xxx SKELETON_ACTION.model_dir=xxx` right after `--config` on the command line to modify the model paths:

   ```bash
   python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
                                      -o KPT.model_dir=./dark_hrnet_w32_256x192 SKELETON_ACTION.model_dir=./STGCN \
                                      --video_file=test_video.mp4 \
                                      --device=gpu
   ```
4. For the full parameter description of the launch command, see [Parameter Description](./PPHuman_QUICK_STARTED.md)
......
@@ -72,13 +72,13 @@ SKELETON_ACTION: # Config for skeleton-based action recognition model
3. There are two ways to modify the model path:

   - In ```./deploy/pipeline/config/infer_cfg_pphuman.yml``` you can configure different model paths: the keypoint model and the action recognition model correspond to the `KPT` and `SKELETON_ACTION` fields respectively; modify the path under each field to the expected path.
   - Add `-o KPT.model_dir=xxx SKELETON_ACTION.model_dir=xxx` right after `--config` on the command line to change the model paths:

   ```bash
   python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
                                      -o KPT.model_dir=./dark_hrnet_w32_256x192 SKELETON_ACTION.model_dir=./STGCN \
                                      --video_file=test_video.mp4 \
                                      --device=gpu
   ```
4. For detailed parameter description, please refer to [Parameter Description](./PPHuman_QUICK_STARTED.md)
......
@@ -59,12 +59,12 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pph
4. To modify the model path, there are two ways:

   - Method 1: configure different model paths in ```./deploy/pipeline/config/infer_cfg_pphuman.yml```; for the attribute recognition model, modify the configuration under the ATTR field.
   - Method 2: append `-o ATTR.model_dir` right after `--config` on the command line to modify the model path:

   ```bash
   python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
                                      -o ATTR.model_dir=output_inference/PPLCNet_x1_0_person_attribute_945_infer/ \
                                      --video_file=test_video.mp4 \
                                      --device=gpu
   ```
The test result is as follows:
......
@@ -55,12 +55,12 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pph
4. If you want to change the model path, there are two methods:

   - The first: in ```./deploy/pipeline/config/infer_cfg_pphuman.yml``` you can configure different model paths; for attribute recognition models, modify the configuration under the ATTR field.
   - The second: add `-o ATTR.model_dir` right after `--config` on the command line to change the model path:

   ```bash
   python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
                                      -o ATTR.model_dir=output_inference/PPLCNet_x1_0_person_attribute_945_infer/ \
                                      --video_file=test_video.mp4 \
                                      --device=gpu
   ```
The test result is:
......
@@ -39,15 +39,15 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pph
4. To modify the model path, there are two ways:

   - Different model paths can be configured in ```./deploy/pipeline/config/infer_cfg_pphuman.yml```; the detection and tracking models correspond to the `DET` and `MOT` fields respectively. Modify the paths under those fields to the desired paths.
   - Append `-o MOT.model_dir` right after `--config` on the command line to modify the model path:

   ```bash
   python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
                                      -o MOT.model_dir=ppyoloe/ \
                                      --video_file=test_video.mp4 \
                                      --device=gpu \
                                      --region_type=horizontal \
                                      --do_entrance_counting \
                                      --draw_center_traj
   ```
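Conceptually, `--do_entrance_counting` with `--region_type=horizontal` counts a track when its box center crosses a horizontal line between consecutive frames. A minimal standalone sketch of that idea (the `count_crossings` helper is hypothetical, not PP-Human's actual implementation):

```python
def count_crossings(tracks, line_y):
    """Count track ids whose center crosses the horizontal line y=line_y.

    tracks maps track_id -> list of (cx, cy) box centers, one per frame.
    Returns (enter_ids, exit_ids): downward and upward crossings.
    """
    enter_ids, exit_ids = [], []
    for tid, centers in tracks.items():
        for (x0, y0), (x1, y1) in zip(centers, centers[1:]):
            if y0 < line_y <= y1:        # center moved downward across the line
                enter_ids.append(tid)
                break
            if y1 < line_y <= y0:        # center moved upward across the line
                exit_ids.append(tid)
                break
    return enter_ids, exit_ids
```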
**Note:**
......
@@ -39,16 +39,16 @@ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pph
4. There are two ways to modify the model path:

   - In `./deploy/pipeline/config/infer_cfg_pphuman.yml`, you can configure different model paths; the detection and tracking models correspond to the `DET` and `MOT` fields respectively. Modify the path under each field to the expected path.
   - Add `-o MOT.model_dir` right after `--config` on the command line to change the model path:

   ```bash
   python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
                                      -o MOT.model_dir=ppyoloe/ \
                                      --video_file=test_video.mp4 \
                                      --device=gpu \
                                      --region_type=horizontal \
                                      --do_entrance_counting \
                                      --draw_center_traj
   ```
**Note:**
......
@@ -18,10 +18,9 @@ python3 deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pp
```bash
python3 deploy/pipeline/pipeline.py \
    --config deploy/pipeline/config/infer_cfg_pphuman.yml -o REID.model_dir=reid_best/ \
    --video_dir=[your_video_file_directory] \
    --device=gpu
```
## Solution Description
......
@@ -18,10 +18,9 @@ python3 deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pp
```bash
python3 deploy/pipeline/pipeline.py \
    --config deploy/pipeline/config/infer_cfg_pphuman.yml -o REID.model_dir=reid_best/ \
    --video_dir=[your_video_file_directory] \
    --device=gpu
```
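Under the hood, cross-camera ReID amounts to nearest-neighbor matching of appearance embeddings. A minimal standalone sketch with made-up feature vectors (the helper names are hypothetical, not the pipeline's actual matching code):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_across_cameras(query_feats, gallery_feats, threshold=0.5):
    """For each query track, pick the gallery track with the highest
    cosine similarity; keep matches above threshold as {query: gallery}."""
    matches = {}
    for qid, qf in query_feats.items():
        best_id, best_sim = None, threshold
        for gid, gf in gallery_feats.items():
            sim = cosine(qf, gf)
            if sim > best_sim:
                best_id, best_sim = gid, sim
        if best_id is not None:
            matches[qid] = best_id
    return matches
```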
## Introduction to the Solution
......
@@ -88,7 +88,8 @@ class PipeTimer(Times):
        for k, v in self.module_time.items():
            v_time = round(v.value(), 4)
            if v_time > 0 and k in ['det', 'mot', 'video_action']:
                print("{} time(ms): {}; per frame average time(ms): {}".format(
                    k, v_time * 1000, v_time * 1000 / self.img_num))
            elif v_time > 0:
                print("{} time(ms): {}; per trackid average time(ms): {}".
                      format(k, v_time * 1000, v_time * 1000 / self.track_num))
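The change above adds a per-frame average for the frame-level modules (`det`, `mot`, `video_action`), while the other modules keep a per-trackid average. A standalone sketch of that averaging logic (the `summarize` function is a hypothetical helper mirroring the printout, not the actual PipeTimer class):

```python
def summarize(module_time, img_num, track_num):
    """Build the report lines the PipeTimer printout above produces.

    module_time maps module name -> total seconds; frame-level modules
    ('det', 'mot', 'video_action') are averaged over frames, all other
    modules over the accumulated track count.
    """
    lines = []
    for k, total_s in module_time.items():
        total_ms = round(total_s, 4) * 1000
        if total_ms <= 0:
            continue
        if k in ('det', 'mot', 'video_action'):
            lines.append("{} time(ms): {}; per frame average time(ms): {}".format(
                k, total_ms, total_ms / img_num))
        else:
            lines.append("{} time(ms): {}; per trackid average time(ms): {}".format(
                k, total_ms, total_ms / track_num))
    return lines
```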
......
@@ -465,6 +465,7 @@ class PipePredictor(object):
                    self.cfg['crop_thresh'])
                if i > self.warmup_frame:
                    self.pipe_timer.module_time['det'].end()
                self.pipe_timer.track_num += len(det_res['boxes'])
                self.pipeline_res.update(det_res, 'det')
            if self.with_human_attr:
@@ -139,6 +139,7 @@ def create_predictor(args, cfg, mode):
            min_input_shape = {"x": [1, 3, imgH, 10]}
            max_input_shape = {"x": [batch_size, 3, imgH, 2304]}
            opt_input_shape = {"x": [batch_size, 3, imgH, 320]}
            config.exp_disable_tensorrt_ops(["transpose2"])
        elif mode == "cls":
            min_input_shape = {"x": [1, 3, 48, 10]}
            max_input_shape = {"x": [batch_size, 3, 48, 1024]}
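For context: TensorRT needs min/max/opt shape ranges for the variable-width OCR inputs, and the newly added `config.exp_disable_tensorrt_ops(["transpose2"])` keeps the `transpose2` op out of the TensorRT engine (the fix for plate recognition with trt_fp16 on Jetson). The rec-mode shape values above can be captured in a small helper (a sketch; `trt_shape_ranges` is a hypothetical name, and in the real code these dicts are handed to Paddle Inference's dynamic-shape API):

```python
def trt_shape_ranges(mode, batch_size, imgH=48):
    """Return (min, max, opt) dynamic-shape dicts, layout [N, C, H, W],
    for the plate character recognition ('rec') sub-model: width varies
    from 10 up to 2304 pixels, with 320 as the optimum."""
    if mode == "rec":
        min_input_shape = {"x": [1, 3, imgH, 10]}
        max_input_shape = {"x": [batch_size, 3, imgH, 2304]}
        opt_input_shape = {"x": [batch_size, 3, imgH, 320]}
        return min_input_shape, max_input_shape, opt_input_shape
    raise ValueError("unsupported mode: " + mode)
```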
......
docs/images/pphumanv2.png: image updated (104.9 KB → 287.0 KB)