Unverified commit d244753b, authored by Guanghua Yu, committed by GitHub

[cherry-pick] fix openvino demo (#5597)

Parent 0aefef5b
pip install openvino==2022.1.0
```
For detailed installation steps, see the [OpenVINO official guide](https://docs.openvinotoolkit.org/latest/get_started_guides.html).
## Testing
- Prepare the test model: follow the model export and conversion steps in [PicoDet](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet), export the model without post-processing (`-o export.benchmark=True`), and generate a simplified ONNX model for testing (it can also be downloaded directly from the links below). Then create an ```out_onnxsim``` folder in this directory and place the exported ONNX model in it.
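The preparation step above can be sketched as follows. The config name, weight paths, and conversion flags are assumptions based on the usual PicoDet export flow, so adjust them to your model; the PaddleDetection/paddle2onnx commands are shown commented out because they must run from the repository root with the tools installed:

```shell
# Create the folder that will hold the simplified ONNX model.
mkdir -p out_onnxsim

# Hypothetical export without post-processing (run from the PaddleDetection root):
# python tools/export_model.py -c configs/picodet/picodet_xs_320_coco_lcnet.yml \
#     -o export.benchmark=True weights=output/picodet_xs_320_coco_lcnet/best_model.pdparams
# Convert to ONNX, then simplify it into out_onnxsim/:
# paddle2onnx --model_dir output_inference/picodet_xs_320_coco_lcnet \
#     --model_filename model.pdmodel --params_filename model.pdiparams \
#     --save_file picodet_xs_320_coco_lcnet.onnx
# python -m onnxsim picodet_xs_320_coco_lcnet.onnx out_onnxsim/picodet_xs_320_coco_lcnet.onnx
ls out_onnxsim
```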
- Prepare the test image: by default, this demo uses PaddleDetection/demo/[000000570688.jpg](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/demo/000000570688.jpg).
- Run directly in this directory:
```shell
# Linux
python openvino_benchmark.py --img_path ../../../../demo/000000570688.jpg --onnx_path out_onnxsim/picodet_xs_320_coco_lcnet.onnx --in_shape 320
# Windows
python openvino_benchmark.py --img_path ..\..\..\..\demo\000000570688.jpg --onnx_path out_onnxsim\picodet_xs_320_coco_lcnet.onnx --in_shape 320
```
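The flags used above map onto a straightforward `argparse` interface. The following is a hypothetical reconstruction of the script's CLI (the defaults are assumptions; `openvino_benchmark.py` itself is authoritative):

```python
import argparse

def build_parser():
    # Hypothetical CLI matching the flags shown in the commands above.
    parser = argparse.ArgumentParser(description='PicoDet OpenVINO benchmark demo')
    parser.add_argument('--img_path', type=str, required=True,
                        help='path to the test image')
    parser.add_argument('--onnx_path', type=str, required=True,
                        help='path to the simplified ONNX model')
    parser.add_argument('--in_shape', type=int, default=320,
                        help='model input size (square)')
    return parser

args = build_parser().parse_args(
    ['--img_path', 'demo.jpg', '--onnx_path', 'out_onnxsim/model.onnx'])
print(args.in_shape)  # 320
```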
- Note: ```--in_shape``` is the model's input size; the default is 320.
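The ```--in_shape``` value drives preprocessing: the image is resized to `in_shape x in_shape`, normalized, and rearranged to NCHW before inference. A minimal NumPy sketch, assuming ImageNet-style mean/std constants (the demo's own `image_preprocess` is authoritative):

```python
import numpy as np

def to_network_input(img, in_shape=320):
    """img: (in_shape, in_shape, 3) uint8 HWC image, e.g. from cv2.resize."""
    # Assumed ImageNet-style normalization constants; check the demo for the real ones.
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    x = img.astype(np.float32) / 255.0
    x = (x - mean) / std
    # HWC -> CHW, then add the batch dimension the model expects.
    return x.transpose(2, 0, 1)[np.newaxis, ...]

blob = to_network_input(np.zeros((320, 320, 3), dtype=np.uint8))
print(blob.shape)  # (1, 3, 320, 320)
```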
## Results
The test results are as follows:
| Model | Input Size | ONNX | Inference Latency<sup><small>[CPU](#latency)</small></sup> |
| :-------- | :--------: | :---------------------: | :----------------: |
| PicoDet-XS | 320*320 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_xs_320_coco_lcnet.onnx) | 3.9ms |
| PicoDet-XS | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_xs_416_coco_lcnet.onnx) | 6.1ms |
| PicoDet-L | 320*320 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_320_coco_lcnet.onnx) | 11.5ms |
| PicoDet-L | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_416_coco_lcnet.onnx) | 20.7ms |
| PicoDet-L | 640*640 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_640_coco.onnx) | 62.5ms |
- <a name="latency">Test environment:</a> Intel Core i7-10750H CPU (MKLDNN, 12 threads).
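The latency numbers above are aggregated over many inference calls. The min/max/avg pattern used by the benchmark script can be sketched in pure Python (warm-up and loop counts here are illustrative):

```python
import time

def measure(fn, warmup=5, loops=50):
    # Warm-up runs are excluded so one-time setup cost does not skew the stats.
    for _ in range(warmup):
        fn()
    times = []
    for _ in range(loops):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return min(times), max(times), sum(times) / len(times)

t_min, t_max, t_avg = measure(lambda: sum(range(1000)))
print('inference_time(ms): min={:.2f}, max={:.1f}, avg={:.1f}'.format(
    t_min * 1000, t_max * 1000, t_avg * 1000))
```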
import argparse
import time

import cv2
from openvino.runtime import Core
def image_preprocess(img_path, re_shape):
    # Read the image and resize it to the square model input size.
    img = cv2.imread(img_path)
    img = cv2.resize(
        img, (re_shape, re_shape), interpolation=cv2.INTER_LANCZOS4)
    # ...
def benchmark(img_file, onnx_file, re_shape):
    ie = Core()
    net = ie.read_model(onnx_file)
    test_image = image_preprocess(img_file, re_shape)
    # Compile the model for the CPU device.
    compiled_model = ie.compile_model(net, 'CPU')
    # ...
    # ...
    time_avg = timeall / loop_num
    print('inference_time(ms): min={}, max={}, avg={}'.format(
        round(time_min * 1000, 2),
        round(time_max * 1000, 1), round(time_avg * 1000, 1)))
if __name__ == '__main__':
    # ...