PaddlePaddle / PaddleClas
Commit ff81fa01 (unverified)
Authored Jun 07, 2022 by Walter; committed via GitHub on Jun 07, 2022

Merge pull request #1964 from HydrogenSulfate/add_paddle2onnx_tipc

add paddle2onnx tipc chain
Parents: a43f8539, 17401161

Showing 21 changed files with 339 additions and 19 deletions (+339 −19)
test_tipc/README.md  +6 −2
test_tipc/config/MobileNetV3/MobileNetV3_large_x1_0_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt  +16 −0
test_tipc/config/PP-ShiTu/PPShiTu_general_rec_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt  +16 −0
test_tipc/config/PP-ShiTu/PPShiTu_mainbody_det_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt  +16 −0
test_tipc/config/PPHGNet/PPHGNet_small_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt  +15 −0
test_tipc/config/PPHGNet/PPHGNet_tiny_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt  +15 −0
test_tipc/config/PPLCNet/PPLCNet_x0_25_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt  +15 −0
test_tipc/config/PPLCNet/PPLCNet_x0_35_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt  +16 −0
test_tipc/config/PPLCNet/PPLCNet_x0_5_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt  +16 −0
test_tipc/config/PPLCNet/PPLCNet_x0_75_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt  +16 −0
test_tipc/config/PPLCNet/PPLCNet_x1_0_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt  +16 −0
test_tipc/config/PPLCNet/PPLCNet_x1_5_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt  +16 −0
test_tipc/config/PPLCNet/PPLCNet_x2_0_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt  +16 −0
test_tipc/config/PPLCNet/PPLCNet_x2_5_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt  +16 −0
test_tipc/config/PPLCNetV2/PPLCNetV2_base_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt  +16 −0
test_tipc/config/ResNet/ResNet50_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt  +16 −0
test_tipc/config/ResNet/ResNet50_vd_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt  +2 −0
test_tipc/config/SwinTransformer/SwinTransformer_tiny_patch4_window7_224_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt  +16 −0
test_tipc/docs/test_paddle2onnx.md  +52 −0
test_tipc/prepare.sh  +10 −4
test_tipc/test_paddle2onnx.sh  +16 −13
test_tipc/README.md

@@ -35,11 +35,14 @@
 │   ├── MobileNetV3 # config directory for MobileNetV3-series model tests
 │   │   ├── MobileNetV3_large_x1_0_train_infer_python.txt # basic training/inference config
 │   │   ├── MobileNetV3_large_x1_0_train_linux_gpu_fleet_amp_infer_python_linux_gpu_cpu.txt # multi-machine multi-GPU training/inference config
-│   │   └── MobileNetV3_large_x1_0_train_linux_gpu_normal_amp_infer_python_linux_gpu_cpu.txt # mixed-precision training/inference config
+│   │   ├── MobileNetV3_large_x1_0_train_linux_gpu_fleet_amp_infer_python_linux_gpu_cpu.txt # multi-machine multi-GPU training/inference config
+│   │   └── MobileNetV3_large_x1_0_paddle2onnx_infer_python.txt # paddle2onnx inference test config
 │   └── ResNet # config directory for ResNet-series model tests
 │       ├── ResNet50_vd_train_infer_python.txt # basic training/inference config
 │       ├── ResNet50_vd_train_linux_gpu_fleet_amp_infer_python_linux_gpu_cpu.txt # multi-machine multi-GPU training/inference config
-│       └── ResNet50_vd_train_linux_gpu_normal_amp_infer_python_linux_gpu_cpu.txt # mixed-precision training/inference config
+│       ├── ResNet50_vd_train_linux_gpu_fleet_amp_infer_python_linux_gpu_cpu.txt # multi-machine multi-GPU training/inference config
+│       ├── ResNet50_vd_train_linux_gpu_normal_amp_infer_python_linux_gpu_cpu.txt # mixed-precision training/inference config
+│       └── ResNet50_vd_paddle2onnx_infer_python.txt # paddle2onnx inference test config
 | ......
 ├── docs
 │   ├── guide.png
@@ -47,6 +50,7 @@
 ├── prepare.sh # downloads the data and models needed by the test_*.sh scripts
 ├── README.md # usage documentation
 ├── results # pre-saved prediction results, used for accuracy comparison against the actual predictions
+├── test_paddle2onnx.sh # main program for testing paddle2onnx inference
 └── test_train_inference_python.sh # main program for testing Python training and inference
test_tipc/config/MobileNetV3/MobileNetV3_large_x1_0_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt
0 → 100644
===========================paddle2onnx_params===========================
model_name:MobileNetV3_large_x1_0
python:python3.7
2onnx: paddle2onnx
--model_dir:./deploy/models/MobileNetV3_large_x1_0_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--save_file:./deploy/models/MobileNetV3_large_x1_0_infer/inference.onnx
--opset_version:10
--enable_onnx_checker:True
inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/MobileNetV3_large_x1_0_infer.tar
inference:./python/predict_cls.py
Global.use_onnx:True
Global.inference_model_dir:./models/MobileNetV3_large_x1_0_infer
Global.use_gpu:False
-c:configs/inference_cls.yaml
\ No newline at end of file
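
Each of these new config files is a plain list of `key:value` lines that test_paddle2onnx.sh reads by line number (see the script changes at the bottom of this diff). As a rough illustration only — a minimal sketch, not the actual func_parser_key/func_parser_value helpers from test_tipc/common_func.sh, which are not part of this diff — parsing a line amounts to splitting on the first colon:

```shell
# Hypothetical simplified parsers: split a "key:value" config line on the first ':'.
parse_key()   { echo "$1" | cut -d ':' -f 1;  }
parse_value() { echo "$1" | cut -d ':' -f 2-; }   # '-f 2-' keeps colons inside URLs intact

line="--save_file:./deploy/models/MobileNetV3_large_x1_0_infer/inference.onnx"
echo "$(parse_key "$line")"     # --save_file
echo "$(parse_value "$line")"   # ./deploy/models/MobileNetV3_large_x1_0_infer/inference.onnx
```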
test_tipc/config/PP-ShiTu/PPShiTu_general_rec_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt
0 → 100644
===========================paddle2onnx_params===========================
model_name:PP-ShiTu_general_rec
python:python3.7
2onnx: paddle2onnx
--model_dir:./deploy/models/general_PPLCNet_x2_5_lite_v1.0_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--save_file:./deploy/models/general_PPLCNet_x2_5_lite_v1.0_infer/inference.onnx
--opset_version:10
--enable_onnx_checker:True
inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/general_PPLCNet_x2_5_lite_v1.0_infer.tar
inference:./python/predict_cls.py
Global.use_onnx:True
Global.inference_model_dir:./models/general_PPLCNet_x2_5_lite_v1.0_infer
Global.use_gpu:False
-c:configs/inference_cls.yaml
\ No newline at end of file
test_tipc/config/PP-ShiTu/PPShiTu_mainbody_det_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt
0 → 100644
===========================paddle2onnx_params===========================
model_name:PP-ShiTu_mainbody_det
python:python3.7
2onnx: paddle2onnx
--model_dir:./deploy/models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--save_file:./deploy/models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer/inference.onnx
--opset_version:10
--enable_onnx_checker:True
inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.tar
inference:./python/predict_cls.py
Global.use_onnx:True
Global.inference_model_dir:./models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer
Global.use_gpu:False
-c:configs/inference_cls.yaml
\ No newline at end of file
test_tipc/config/PPHGNet/PPHGNet_small_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt
0 → 100644
===========================paddle2onnx_params===========================
model_name:PPHGNet_small
python:python3.7
2onnx: paddle2onnx
--model_dir:./deploy/models/PPHGNet_small_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--save_file:./deploy/models/PPHGNet_small_infer/inference.onnx
--opset_version:10
--enable_onnx_checker:True
inference:./python/predict_cls.py
Global.use_onnx:True
Global.inference_model_dir:./models/PPHGNet_small_infer
Global.use_gpu:False
-c:configs/inference_cls.yaml
\ No newline at end of file
test_tipc/config/PPHGNet/PPHGNet_tiny_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt
0 → 100644
===========================paddle2onnx_params===========================
model_name:PPHGNet_tiny
python:python3.7
2onnx: paddle2onnx
--model_dir:./deploy/models/PPHGNet_tiny_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--save_file:./deploy/models/PPHGNet_tiny_infer/inference.onnx
--opset_version:10
--enable_onnx_checker:True
inference:./python/predict_cls.py
Global.use_onnx:True
Global.inference_model_dir:./models/PPHGNet_tiny_infer
Global.use_gpu:False
-c:configs/inference_cls.yaml
\ No newline at end of file
test_tipc/config/PPLCNet/PPLCNet_x0_25_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt
0 → 100644
===========================paddle2onnx_params===========================
model_name:PPLCNet_x0_25
python:python3.7
2onnx: paddle2onnx
--model_dir:./deploy/models/PPLCNet_x0_25_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--save_file:./deploy/models/PPLCNet_x0_25_infer/inference.onnx
--opset_version:10
--enable_onnx_checker:True
inference:./python/predict_cls.py
Global.use_onnx:True
Global.inference_model_dir:./models/PPLCNet_x0_25_infer
Global.use_gpu:False
-c:configs/inference_cls.yaml
\ No newline at end of file
test_tipc/config/PPLCNet/PPLCNet_x0_35_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt
0 → 100644
===========================paddle2onnx_params===========================
model_name:PPLCNet_x0_25
python:python3.7
2onnx: paddle2onnx
--model_dir:./deploy/models/PPLCNet_x0_25_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--save_file:./deploy/models/PPLCNet_x0_25_infer/inference.onnx
--opset_version:10
--enable_onnx_checker:True
inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/PPLCNet_x0_25_infer.tar
inference:./python/predict_cls.py
Global.use_onnx:True
Global.inference_model_dir:./models/PPLCNet_x0_25_infer
Global.use_gpu:False
-c:configs/inference_cls.yaml
\ No newline at end of file
test_tipc/config/PPLCNet/PPLCNet_x0_5_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt
0 → 100644
===========================paddle2onnx_params===========================
model_name:PP-ShiTu_mainbody_det
python:python3.7
2onnx: paddle2onnx
--model_dir:./deploy/models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--save_file:./deploy/models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer/inference.onnx
--opset_version:10
--enable_onnx_checker:True
inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.tar
inference:./python/predict_cls.py
Global.use_onnx:True
Global.inference_model_dir:./models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer
Global.use_gpu:False
-c:configs/inference_cls.yaml
\ No newline at end of file
test_tipc/config/PPLCNet/PPLCNet_x0_75_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt
0 → 100644
===========================paddle2onnx_params===========================
model_name:PPLCNet_x0_75
python:python3.7
2onnx: paddle2onnx
--model_dir:./deploy/models/PPLCNet_x0_75_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--save_file:./deploy/models/PPLCNet_x0_75_infer/inference.onnx
--opset_version:10
--enable_onnx_checker:True
inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/PPLCNet_x0_75_infer.tar
inference:./python/predict_cls.py
Global.use_onnx:True
Global.inference_model_dir:./models/PPLCNet_x0_75_infer
Global.use_gpu:False
-c:configs/inference_cls.yaml
\ No newline at end of file
test_tipc/config/PPLCNet/PPLCNet_x1_0_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt
0 → 100644
===========================paddle2onnx_params===========================
model_name:PPLCNet_x1_0
python:python3.7
2onnx: paddle2onnx
--model_dir:./deploy/models/PPLCNet_x1_0_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--save_file:./deploy/models/PPLCNet_x1_0_infer/inference.onnx
--opset_version:10
--enable_onnx_checker:True
inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/PPLCNet_x1_0_infer.tar
inference:./python/predict_cls.py
Global.use_onnx:True
Global.inference_model_dir:./models/PPLCNet_x1_0_infer
Global.use_gpu:False
-c:configs/inference_cls.yaml
\ No newline at end of file
test_tipc/config/PPLCNet/PPLCNet_x1_5_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt
0 → 100644
===========================paddle2onnx_params===========================
model_name:PPLCNet_x1_5
python:python3.7
2onnx: paddle2onnx
--model_dir:./deploy/models/PPLCNet_x1_5_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--save_file:./deploy/models/PPLCNet_x1_5_infer/inference.onnx
--opset_version:10
--enable_onnx_checker:True
inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/PPLCNet_x1_5_infer.tar
inference:./python/predict_cls.py
Global.use_onnx:True
Global.inference_model_dir:./models/PPLCNet_x1_5_infer
Global.use_gpu:False
-c:configs/inference_cls.yaml
\ No newline at end of file
test_tipc/config/PPLCNet/PPLCNet_x2_0_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt
0 → 100644
===========================paddle2onnx_params===========================
model_name:PPLCNet_x2_0
python:python3.7
2onnx: paddle2onnx
--model_dir:./deploy/models/PPLCNet_x2_0_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--save_file:./deploy/models/PPLCNet_x2_0_infer/inference.onnx
--opset_version:10
--enable_onnx_checker:True
inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/PPLCNet_x2_0_infer.tar
inference:./python/predict_cls.py
Global.use_onnx:True
Global.inference_model_dir:./models/PPLCNet_x2_0_infer
Global.use_gpu:False
-c:configs/inference_cls.yaml
\ No newline at end of file
test_tipc/config/PPLCNet/PPLCNet_x2_5_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt
0 → 100644
===========================paddle2onnx_params===========================
model_name:PPLCNet_x2_5
python:python3.7
2onnx: paddle2onnx
--model_dir:./deploy/models/PPLCNet_x2_5_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--save_file:./deploy/models/PPLCNet_x2_5_infer/inference.onnx
--opset_version:10
--enable_onnx_checker:True
inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/PPLCNet_x2_5_infer.tar
inference:./python/predict_cls.py
Global.use_onnx:True
Global.inference_model_dir:./models/PPLCNet_x2_5_infer
Global.use_gpu:False
-c:configs/inference_cls.yaml
\ No newline at end of file
test_tipc/config/PPLCNetV2/PPLCNetV2_base_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt
0 → 100644
===========================paddle2onnx_params===========================
model_name:PPLCNetV2_base
python:python3.7
2onnx: paddle2onnx
--model_dir:./deploy/models/PPLCNetV2_base_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--save_file:./deploy/models/PPLCNetV2_base_infer/inference.onnx
--opset_version:10
--enable_onnx_checker:True
inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/PPLCNetV2_base_infer.tar
inference:./python/predict_cls.py
Global.use_onnx:True
Global.inference_model_dir:./models/PPLCNetV2_base_infer
Global.use_gpu:False
-c:configs/inference_cls.yaml
\ No newline at end of file
test_tipc/config/ResNet/ResNet50_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt
0 → 100644
===========================paddle2onnx_params===========================
model_name:ResNet50
python:python3.7
2onnx: paddle2onnx
--model_dir:./deploy/models/ResNet50_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--save_file:./deploy/models/ResNet50_infer/inference.onnx
--opset_version:10
--enable_onnx_checker:True
inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/ResNet50_infer.tar
inference:./python/predict_cls.py
Global.use_onnx:True
Global.inference_model_dir:./models/ResNet50_infer
Global.use_gpu:False
-c:configs/inference_cls.yaml
\ No newline at end of file
test_tipc/config/ResNet/ResNet50_vd_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt

@@ -8,7 +8,9 @@ python:python3.7
 --save_file:./deploy/models/ResNet50_vd_infer/inference.onnx
 --opset_version:10
 --enable_onnx_checker:True
+inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/ResNet50_vd_infer.tar
 inference: python/predict_cls.py -c configs/inference_cls.yaml
 Global.use_onnx:True
 Global.inference_model_dir:models/ResNet50_vd_infer/
 Global.use_gpu:False
+-c:configs/inference_cls.yaml
test_tipc/config/SwinTransformer/SwinTransformer_tiny_patch4_window7_224_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt
0 → 100644
===========================paddle2onnx_params===========================
model_name:SwinTransformer_tiny_patch4_window7_224
python:python3.7
2onnx: paddle2onnx
--model_dir:./deploy/models/SwinTransformer_tiny_patch4_window7_224_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--save_file:./deploy/models/SwinTransformer_tiny_patch4_window7_224_infer/inference.onnx
--opset_version:10
--enable_onnx_checker:True
inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/SwinTransformer_tiny_patch4_window7_224_infer.tar
inference:./python/predict_cls.py
Global.use_onnx:True
Global.inference_model_dir:./models/SwinTransformer_tiny_patch4_window7_224_infer
Global.use_gpu:False
-c:configs/inference_cls.yaml
\ No newline at end of file
test_tipc/docs/test_paddle2onnx.md
0 → 100644

# Paddle2ONNX inference test

The main program for the Paddle2ONNX inference test is `test_paddle2onnx.sh`. It tests Paddle2ONNX model conversion and verifies the correctness of the result.

## 1. Summary of test conclusions

Depending on whether quantization is used during training, the models covered by this test fall into `normal models` and `quantized models`. Paddle2ONNX inference support for these model types is summarized below:

| Model type | Device |
| ---- | ---- |
| Normal model | GPU |
| Normal model | CPU |

## 2. Test procedure

The following uses the paddle2onnx test of the `ResNet50` model as an example.

### 2.1 Functional test

First run `prepare.sh` to prepare the data and model, then run `test_paddle2onnx.sh` to execute the test. Log files with the suffix `paddle2onnx_infer_*.log` are generated under the `test_tipc/output/ResNet50` directory.

The test commands and results for `ResNet50` are shown below.

```shell
bash test_tipc/prepare.sh ./test_tipc/config/ResNet/ResNet50_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt paddle2onnx_infer
# Usage:
bash test_tipc/test_paddle2onnx.sh ./test_tipc/config/ResNet/ResNet50_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt
```

#### Results

The status of each test step is written to `./test_tipc/output/ResNet50/results_paddle2onnx.log`.

On success it prints:

```
Run successfully with command - paddle2onnx --model_dir=./deploy/models/ResNet50_infer/ --model_filename=inference.pdmodel --params_filename=inference.pdiparams --save_file=./deploy/models/ResNet50_infer/inference.onnx --opset_version=10 --enable_onnx_checker=True!
Run successfully with command - cd deploy && python3.7 ./python/predict_cls.py -o Global.inference_model_dir=./models/ResNet50_infer -o Global.use_onnx=True -o Global.use_gpu=False -c=configs/inference_cls.yaml > ../test_tipc/output/ResNet50/paddle2onnx_infer_cpu.log 2>&1 && cd ../!
```

On failure it prints:

```
Run failed with command - paddle2onnx --model_dir=./deploy/models/ResNet50_infer/ --model_filename=inference.pdmodel --params_filename=inference.pdiparams --save_file=./deploy/models/ResNet50_infer/inference.onnx --opset_version=10 --enable_onnx_checker=True!
Run failed with command - cd deploy && python3.7 ./python/predict_cls.py -o Global.inference_model_dir=./models/ResNet50_infer -o Global.use_onnx=True -o Global.use_gpu=False -c=configs/inference_cls.yaml > ../test_tipc/output/ResNet50/paddle2onnx_infer_cpu.log 2>&1 && cd ../!
...
```

## 3. Further reading

This document covers the functional test only. For a more detailed tutorial on Paddle2ONNX inference, please refer to: [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX)
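
The same chain should apply to the other configs added in this commit that carry an inference_model_url entry, for example PPLCNet_x1_0. A hedged usage sketch follows (config paths taken from the files in this diff; the model-specific output directory follows from the LOG_PATH change in test_paddle2onnx.sh below):

```shell
# Assumed usage, mirroring the ResNet50 commands above, for PPLCNet_x1_0.
bash test_tipc/prepare.sh ./test_tipc/config/PPLCNet/PPLCNet_x1_0_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt paddle2onnx_infer
bash test_tipc/test_paddle2onnx.sh ./test_tipc/config/PPLCNet/PPLCNet_x1_0_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt

# Status log for this model (the directory name follows model_name in the config):
cat ./test_tipc/output/PPLCNet_x1_0/results_paddle2onnx.log
```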
test_tipc/prepare.sh

@@ -174,13 +174,19 @@ fi
 if [ ${MODE} = "paddle2onnx_infer" ]; then
     # prepare paddle2onnx env
     python_name=$(func_parser_value "${lines[2]}")
+    inference_model_url=$(func_parser_value "${lines[10]}")
+    tar_name=${inference_model_url##*/}
     ${python_name} -m pip install install paddle2onnx
     ${python_name} -m pip install onnxruntime
-    cd deploy && mkdir models && cd models
-    wget -nc https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/ResNet50_vd_infer.tar && tar xf ResNet50_vd_infer.tar
+    cd deploy
+    # wget model
+    mkdir models
+    cd models
+    wget -nc ${inference_model_url}
+    tar xf ${tar_name}
     cd ../../
 fi

 if [ ${MODE} = "benchmark_train" ]; then
...
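
The point of this change is that prepare.sh no longer hard-codes the ResNet50_vd tarball: the download URL is read from line 11 of the per-model config (`lines[10]`) and the tarball name is derived from it. A minimal sketch of the bash parameter expansion involved, using an example URL taken from one of the configs above:

```shell
# ${var##*/} strips the longest prefix ending in '/', leaving only the file name.
inference_model_url="https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/PPLCNet_x1_0_infer.tar"
tar_name=${inference_model_url##*/}
echo "${tar_name}"   # -> PPLCNet_x1_0_infer.tar

# prepare.sh then downloads and unpacks it:
# wget -nc ${inference_model_url}
# tar xf ${tar_name}
```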
test_tipc/test_paddle2onnx.sh

 #!/bin/bash
 source test_tipc/common_func.sh

 FILENAME=$1
...
@@ -11,7 +11,7 @@ python=$(func_parser_value "${lines[2]}")
 # parser params
-dataline=$(awk 'NR==1, NR==14{print}' $FILENAME)
+dataline=$(awk 'NR==1, NR==16{print}' $FILENAME)
 IFS=$'\n'
 lines=(${dataline})
...
@@ -31,16 +31,18 @@ opset_version_key=$(func_parser_key "${lines[8]}")
 opset_version_value=$(func_parser_value "${lines[8]}")
 enable_onnx_checker_key=$(func_parser_key "${lines[9]}")
 enable_onnx_checker_value=$(func_parser_value "${lines[9]}")
 # parser onnx inference
-inference_py=$(func_parser_value "${lines[10]}")
-use_onnx_key=$(func_parser_key "${lines[11]}")
-use_onnx_value=$(func_parser_value "${lines[11]}")
-inference_model_dir_key=$(func_parser_key "${lines[12]}")
-inference_model_dir_value=$(func_parser_value "${lines[12]}")
-inference_hardware_key=$(func_parser_key "${lines[13]}")
-inference_hardware_value=$(func_parser_value "${lines[13]}")
+inference_py=$(func_parser_value "${lines[11]}")
+use_onnx_key=$(func_parser_key "${lines[12]}")
+use_onnx_value=$(func_parser_value "${lines[12]}")
+inference_model_dir_key=$(func_parser_key "${lines[13]}")
+inference_model_dir_value=$(func_parser_value "${lines[13]}")
+inference_hardware_key=$(func_parser_key "${lines[14]}")
+inference_hardware_value=$(func_parser_value "${lines[14]}")
+inference_config_key=$(func_parser_key "${lines[15]}")
+inference_config_value=$(func_parser_value "${lines[15]}")

-LOG_PATH="./test_tipc/output"
+LOG_PATH="./test_tipc/output/${model_name}"
 mkdir -p ./test_tipc/output
 status_log="${LOG_PATH}/results_paddle2onnx.log"
...
@@ -65,7 +67,8 @@ function func_paddle2onnx(){
     set_model_dir=$(func_set_params "${inference_model_dir_key}" "${inference_model_dir_value}")
     set_use_onnx=$(func_set_params "${use_onnx_key}" "${use_onnx_value}")
     set_hardware=$(func_set_params "${inference_hardware_key}" "${inference_hardware_value}")
-    infer_model_cmd="cd deploy && ${python} ${inference_py} -o ${set_model_dir} -o ${set_use_onnx} -o ${set_hardware} > ${_save_log_path} 2>&1 && cd ../"
+    set_inference_config=$(func_set_params "${inference_config_key}" "${inference_config_value}")
+    infer_model_cmd="cd deploy && ${python} ${inference_py} -o ${set_model_dir} -o ${set_use_onnx} -o ${set_hardware} ${set_inference_config} > ${_save_log_path} 2>&1 && cd ../"
     eval $infer_model_cmd
     status_check $last_status "${infer_model_cmd}" "${status_log}"
 }
...
@@ -75,4 +78,4 @@ echo "################### run test ###################"
 export Count=0
 IFS="|"
 func_paddle2onnx
\ No newline at end of file
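
To make the index shift concrete: inserting `inference_model_url` at line 11 of the config pushes the inference-related keys down by one and adds a new `-c` entry at line 16, which is why the awk range grows from 14 to 16 lines and why `${set_inference_config}` is appended to the command. Below is a rough, hypothetical sketch of the assembled inference command, with values taken from the ResNet50 config and the success log above; it assumes func_set_params simply joins key and value with `=`, which matches the printed commands:

```shell
# Standalone reproduction (for illustration only) of the command test_paddle2onnx.sh builds.
python="python3.7"
inference_py="./python/predict_cls.py"
set_model_dir="Global.inference_model_dir=./models/ResNet50_infer"
set_use_onnx="Global.use_onnx=True"
set_hardware="Global.use_gpu=False"
set_inference_config="-c=configs/inference_cls.yaml"
_save_log_path="../test_tipc/output/ResNet50/paddle2onnx_infer_cpu.log"

infer_model_cmd="cd deploy && ${python} ${inference_py} -o ${set_model_dir} -o ${set_use_onnx} -o ${set_hardware} ${set_inference_config} > ${_save_log_path} 2>&1 && cd ../"
echo "${infer_model_cmd}"   # matches the command recorded in results_paddle2onnx.log above
```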