Commit f5f3f473 (unverified)

Merge pull request #1648 from Intsigstephon/develop

support cls inference for onnx mode

Authored by Bin Lu on Jan 21, 2022; committed via GitHub on Jan 21, 2022.
Parents: fcc2fac0, b2dbbc3c

Showing 8 changed files with 164 additions and 36 deletions (+164, −36):
- deploy/paddle2onnx/readme.md (+61, −0)
- deploy/python/predict_cls.py (+19, −8)
- deploy/python/predict_det.py (+5, −6)
- deploy/python/predict_rec.py (+18, −8)
- deploy/utils/predictor.py (+23, −2)
- test_tipc/config/ResNet/ResNet50_vd_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt (+14, −0)
- test_tipc/prepare.sh (+12, −0)
- test_tipc/test_paddle2onnx.sh (+12, −12)
deploy/paddle2onnx/readme.md (new file, mode 100644)
# paddle2onnx model conversion and prediction

This section describes how to convert the ResNet50_vd model into an ONNX model and run prediction with the ONNX engine.

## 1. Environment preparation

You need to prepare both the Paddle2ONNX model-conversion environment and the ONNX prediction environment.

Paddle2ONNX converts models from the PaddlePaddle format to the ONNX format. Operator export is currently stable for ONNX opset versions 9–11, and some Paddle operators can be converted to lower opsets. For more details, see [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX/blob/develop/README_zh.md).

- Install Paddle2ONNX
```
python3.7 -m pip install paddle2onnx
```
- Install ONNX Runtime
```
python3.7 -m pip install onnxruntime
```
## 2. Model conversion

- Download the ResNet50_vd inference model
```
cd deploy
mkdir models && cd models
wget -nc https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/ResNet50_vd_infer.tar && tar xf ResNet50_vd_infer.tar
cd ..
```
- Model conversion

Use Paddle2ONNX to convert the Paddle static-graph model to the ONNX format:
```
paddle2onnx --model_dir=./models/ResNet50_vd_infer/ \
--model_filename=inference.pdmodel \
--params_filename=inference.pdiparams \
--save_file=./models/ResNet50_vd_infer/inference.onnx \
--opset_version=10 \
--enable_onnx_checker=True
```
After the command finishes, the ONNX model `inference.onnx` is saved under `./models/ResNet50_vd_infer/`.
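To sanity-check the export before running prediction, you can load the model with ONNX Runtime and inspect its input/output signature. A minimal sketch, assuming the conversion above succeeded and `onnxruntime` is installed:

```
import onnxruntime as ort

# Load the exported model on CPU and print its I/O signature.
sess = ort.InferenceSession("./models/ResNet50_vd_infer/inference.onnx",
                            providers=["CPUExecutionProvider"])
for inp in sess.get_inputs():
    print("input: ", inp.name, inp.shape, inp.type)
for out in sess.get_outputs():
    print("output:", out.name, out.shape, out.type)
```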
## 3. ONNX prediction

Run the following command:
```
python3.7 python/predict_cls.py \
-c configs/inference_cls.yaml \
-o Global.use_onnx=True \
-o Global.use_gpu=False \
-o Global.inference_model_dir=./models/ResNet50_vd_infer
```
The output is as follows:
```
ILSVRC2012_val_00000010.jpeg: class id(s): [153, 204, 229, 332, 155], score(s): [0.69, 0.10, 0.02, 0.01, 0.01], label_name(s): ['Maltese dog, Maltese terrier, Maltese', 'Lhasa, Lhasa apso', 'Old English sheepdog, bobtail', 'Angora, Angora rabbit', 'Shih-Tzu']
```
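Under the hood, `Global.use_onnx=True` routes `predict_cls.py` to a plain ONNX Runtime session call (see the `predict_cls.py` and `predictor.py` diffs below). A simplified sketch of that call, with the real preprocessing pipeline replaced by a random batch and the 1×3×224×224 input shape as an illustrative assumption:

```
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("./models/ResNet50_vd_infer/inference.onnx",
                            providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
output_name = sess.get_outputs()[0].name

# A dummy NCHW batch stands in for a real resized/normalized image here.
batch = np.random.rand(1, 3, 224, 224).astype("float32")
scores = sess.run([output_name], {input_name: batch})[0]
print(scores.shape)                    # per-class scores for the batch
print(scores[0].argsort()[-5:][::-1])  # top-5 class ids for the first image
```

With the repo's actual preprocessing in front, the top-5 ids would be expected to match the output shown above.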
deploy/python/predict_cls.py
```
@@ -67,12 +67,17 @@ class ClsPredictor(Predictor):
         warmup=2)
     def predict(self, images):
-        input_names = self.paddle_predictor.get_input_names()
-        input_tensor = self.paddle_predictor.get_input_handle(input_names[0])
-
-        output_names = self.paddle_predictor.get_output_names()
-        output_tensor = self.paddle_predictor.get_output_handle(output_names[0])
+        use_onnx = self.args.get("use_onnx", False)
+        if not use_onnx:
+            input_names = self.predictor.get_input_names()
+            input_tensor = self.predictor.get_input_handle(input_names[0])
+
+            output_names = self.predictor.get_output_names()
+            output_tensor = self.predictor.get_output_handle(output_names[0])
+        else:
+            input_names = self.predictor.get_inputs()[0].name
+            output_names = self.predictor.get_outputs()[0].name
 
         if self.benchmark:
             self.auto_logger.times.start()
         if not isinstance(images, (list, )):
@@ -84,9 +89,15 @@ class ClsPredictor(Predictor):
             if self.benchmark:
                 self.auto_logger.times.stamp()
-            input_tensor.copy_from_cpu(image)
-            self.paddle_predictor.run()
-            batch_output = output_tensor.copy_to_cpu()
+            if not use_onnx:
+                input_tensor.copy_from_cpu(image)
+                self.predictor.run()
+                batch_output = output_tensor.copy_to_cpu()
+            else:
+                batch_output = self.predictor.run(
+                    output_names=[output_names],
+                    input_feed={input_names: image})[0]
             if self.benchmark:
                 self.auto_logger.times.stamp()
             if self.postprocess is not None:
```
deploy/python/predict_det.py
```
@@ -109,17 +109,16 @@ class DetPredictor(Predictor):
         '''
         inputs = self.preprocess(image)
         np_boxes = None
-        input_names = self.paddle_predictor.get_input_names()
+        input_names = self.predictor.get_input_names()
         for i in range(len(input_names)):
-            input_tensor = self.paddle_predictor.get_input_handle(input_names[i])
+            input_tensor = self.predictor.get_input_handle(input_names[i])
             input_tensor.copy_from_cpu(inputs[input_names[i]])
 
         t1 = time.time()
-        self.paddle_predictor.run()
-        output_names = self.paddle_predictor.get_output_names()
-        boxes_tensor = self.paddle_predictor.get_output_handle(output_names[0])
+        self.predictor.run()
+        output_names = self.predictor.get_output_names()
+        boxes_tensor = self.predictor.get_output_handle(output_names[0])
         np_boxes = boxes_tensor.copy_to_cpu()
         t2 = time.time()
```
deploy/python/predict_rec.py
```
@@ -58,12 +58,16 @@ class RecPredictor(Predictor):
         warmup=2)
     def predict(self, images, feature_normalize=True):
-        input_names = self.paddle_predictor.get_input_names()
-        input_tensor = self.paddle_predictor.get_input_handle(input_names[0])
-
-        output_names = self.paddle_predictor.get_output_names()
-        output_tensor = self.paddle_predictor.get_output_handle(output_names[0])
+        use_onnx = self.args.get("use_onnx", False)
+        if not use_onnx:
+            input_names = self.predictor.get_input_names()
+            input_tensor = self.predictor.get_input_handle(input_names[0])
+            output_names = self.predictor.get_output_names()
+            output_tensor = self.predictor.get_output_handle(output_names[0])
+        else:
+            input_names = self.predictor.get_inputs()[0].name
+            output_names = self.predictor.get_outputs()[0].name
 
         if self.benchmark:
             self.auto_logger.times.start()
@@ -76,9 +80,15 @@ class RecPredictor(Predictor):
             if self.benchmark:
                 self.auto_logger.times.stamp()
-            input_tensor.copy_from_cpu(image)
-            self.paddle_predictor.run()
-            batch_output = output_tensor.copy_to_cpu()
+            if not use_onnx:
+                input_tensor.copy_from_cpu(image)
+                self.predictor.run()
+                batch_output = output_tensor.copy_to_cpu()
+            else:
+                batch_output = self.predictor.run(
+                    output_names=[output_names],
+                    input_feed={input_names: image})[0]
             if self.benchmark:
                 self.auto_logger.times.stamp()
```
deploy/utils/predictor.py
```
@@ -28,8 +28,12 @@ class Predictor(object):
         if args.use_fp16 is True:
             assert args.use_tensorrt is True
         self.args = args
-        self.paddle_predictor, self.config = self.create_paddle_predictor(
-            args, inference_model_dir)
+        if self.args.get("use_onnx", False):
+            self.predictor, self.config = self.create_onnx_predictor(
+                args, inference_model_dir)
+        else:
+            self.predictor, self.config = self.create_paddle_predictor(
+                args, inference_model_dir)
 
     def predict(self, image):
         raise NotImplementedError
@@ -69,3 +73,20 @@ class Predictor(object):
         predictor = create_predictor(config)
         return predictor, config
+
+    def create_onnx_predictor(self, args, inference_model_dir=None):
+        import onnxruntime as ort
+        if inference_model_dir is None:
+            inference_model_dir = args.inference_model_dir
+        model_file = os.path.join(inference_model_dir, "inference.onnx")
+        config = ort.SessionOptions()
+        if args.use_gpu:
+            raise ValueError(
+                "onnx inference now only supports cpu! please specify use_gpu false."
+            )
+        else:
+            config.intra_op_num_threads = args.cpu_num_threads
+            if args.ir_optim:
+                config.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
+        predictor = ort.InferenceSession(model_file, sess_options=config)
+        return predictor, config
```
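The upshot of this refactor is that `self.predictor` is backend-agnostic: construction picks Paddle Inference or ONNX Runtime once, and callers branch on `use_onnx` only where the two run APIs differ. A condensed, self-contained sketch of that pattern (not the repo's class; the Paddle branch is stubbed out here and the model filename is an assumption):

```
import numpy as np
import onnxruntime as ort

class UnifiedPredictor:
    """Holds either a Paddle Inference predictor or an ONNX Runtime session."""

    def __init__(self, model_dir, use_onnx=False, cpu_num_threads=10):
        self.use_onnx = use_onnx
        if use_onnx:
            so = ort.SessionOptions()
            so.intra_op_num_threads = cpu_num_threads
            self.predictor = ort.InferenceSession(
                model_dir + "/inference.onnx", sess_options=so,
                providers=["CPUExecutionProvider"])
        else:
            # Paddle branch (create_paddle_predictor) omitted in this sketch.
            raise NotImplementedError

    def predict(self, batch: np.ndarray) -> np.ndarray:
        if self.use_onnx:
            # ONNX Runtime: a feed dict in, a list of requested outputs back.
            name_in = self.predictor.get_inputs()[0].name
            name_out = self.predictor.get_outputs()[0].name
            return self.predictor.run([name_out], {name_in: batch})[0]
        # Paddle Inference would use get_input_handle / copy_from_cpu / run here.
        raise NotImplementedError
```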
test_tipc/config/ResNet/ResNet50_vd_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt (new file, mode 100644)
```
===========================paddle2onnx_params===========================
model_name:ResNet50_vd
python:python3.7
2onnx: paddle2onnx
--model_dir:./deploy/models/ResNet50_vd_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--save_file:./deploy/models/ResNet50_vd_infer/inference.onnx
--opset_version:10
--enable_onnx_checker:True
inference: python/predict_cls.py -c configs/inference_cls.yaml
Global.use_onnx:True
Global.inference_model_dir:models/ResNet50_vd_infer/
Global.use_gpu:False
```
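test_tipc/test_paddle2onnx.sh (diffed below) consumes this file line by line: `func_parser_key` / `func_parser_value` split each `key:value` entry, so the `lines[11]`–`lines[13]` it references are `Global.use_onnx`, `Global.inference_model_dir`, and `Global.use_gpu`. A rough Python equivalent of that lookup, assuming the helpers simply split on the first colon (their real definitions live in test_tipc/common_func.sh, which is not part of this diff):

```
# Hypothetical re-implementation of the TIPC key/value helpers.
def func_parser_key(line: str) -> str:
    return line.split(":", 1)[0]

def func_parser_value(line: str) -> str:
    return line.split(":", 1)[1]

config = ("test_tipc/config/ResNet/"
          "ResNet50_vd_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt")
with open(config) as f:
    lines = f.read().splitlines()

print(func_parser_key(lines[11]), func_parser_value(lines[11]))  # Global.use_onnx True
print(func_parser_key(lines[13]), func_parser_value(lines[13]))  # Global.use_gpu False
```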
test_tipc/prepare.sh
```
@@ -165,3 +165,15 @@ if [ ${MODE} = "serving_infer" ];then
     cd ./deploy/paddleserving
     wget -nc https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/ResNet50_vd_infer.tar && tar xf ResNet50_vd_infer.tar
 fi
+
+if [ ${MODE} = "paddle2onnx_infer" ];then
+    # prepare paddle2onnx env
+    python_name=$(func_parser_value "${lines[2]}")
+    ${python_name} -m pip install paddle2onnx
+    ${python_name} -m pip install onnxruntime
+
+    # wget model
+    cd deploy && mkdir models && cd models
+    wget -nc https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/ResNet50_vd_infer.tar && tar xf ResNet50_vd_infer.tar
+    cd ../../
+fi
```
test_tipc/test_paddle2onnx.sh
```
@@ -11,7 +11,7 @@ python=$(func_parser_value "${lines[2]}")
 
 # parser params
-dataline=$(awk 'NR==1, NR==12{print}' $FILENAME)
+dataline=$(awk 'NR==1, NR==14{print}' $FILENAME)
 IFS=$'\n'
 lines=(${dataline})
@@ -33,12 +33,12 @@ enable_onnx_checker_key=$(func_parser_key "${lines[9]}")
 enable_onnx_checker_value=$(func_parser_value "${lines[9]}")
 
 # parser onnx inference
 inference_py=$(func_parser_value "${lines[10]}")
-use_gpu_key=$(func_parser_key "${lines[11]}")
-use_gpu_value=$(func_parser_value "${lines[11]}")
-det_model_key=$(func_parser_key "${lines[12]}")
-image_dir_key=$(func_parser_key "${lines[13]}")
-image_dir_value=$(func_parser_value "${lines[13]}")
+use_onnx_key=$(func_parser_key "${lines[11]}")
+use_onnx_value=$(func_parser_value "${lines[11]}")
+inference_model_dir_key=$(func_parser_key "${lines[12]}")
+inference_model_dir_value=$(func_parser_value "${lines[12]}")
+inference_hardware_key=$(func_parser_key "${lines[13]}")
+inference_hardware_value=$(func_parser_value "${lines[13]}")
 
 LOG_PATH="./test_tipc/output"
 mkdir -p ./test_tipc/output
@@ -50,7 +50,7 @@ function func_paddle2onnx(){
     _script=$1
 
     # paddle2onnx
-    _save_log_path="${LOG_PATH}/paddle2onnx_infer_cpu.log"
+    _save_log_path=".${LOG_PATH}/paddle2onnx_infer_cpu.log"
     set_dirname=$(func_set_params "${infer_model_dir_key}" "${infer_model_dir_value}")
     set_model_filename=$(func_set_params "${model_filename_key}" "${model_filename_value}")
     set_params_filename=$(func_set_params "${params_filename_key}" "${params_filename_value}")
@@ -62,10 +62,10 @@ function func_paddle2onnx(){
     last_status=${PIPESTATUS[0]}
     status_check $last_status "${trans_model_cmd}" "${status_log}"
 
     # python inference
-    set_gpu=$(func_set_params "${use_gpu_key}" "${use_gpu_value}")
-    set_model_dir=$(func_set_params "${det_model_key}" "${save_file_value}")
-    set_img_dir=$(func_set_params "${image_dir_key}" "${image_dir_value}")
-    infer_model_cmd="${python} ${inference_py} ${set_gpu} ${set_img_dir} ${set_model_dir} --use_onnx=True > ${_save_log_path} 2>&1 "
+    set_model_dir=$(func_set_params "${inference_model_dir_key}" "${inference_model_dir_value}")
+    set_use_onnx=$(func_set_params "${use_onnx_key}" "${use_onnx_value}")
+    set_hardware=$(func_set_params "${inference_hardware_key}" "${inference_hardware_value}")
+    infer_model_cmd="cd deploy && ${python} ${inference_py} -o ${set_model_dir} -o ${set_use_onnx} -o ${set_hardware} > ${_save_log_path} 2>&1 && cd ../"
     eval $infer_model_cmd
     status_check $last_status "${infer_model_cmd}" "${status_log}"
 }
```