PaddlePaddle / PaddleClas

Commit bd2bd031
Authored May 31, 2022 by HydrogenSulfate
Commit message: debug
Parent: 6a1acf76

Showing 19 changed files with 612 additions and 575 deletions (+612 -575)
test_tipc/config/MobileNetV3/MobileNetV3_large_x1_0_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt    +16 -10
test_tipc/config/PP-ShiTu/PPShiTu_general_rec_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt    +16 -10
test_tipc/config/PP-ShiTu/PPShiTu_mainbody_det_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt    +16 -10
test_tipc/config/PPHGNet/PPHGNet_small_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt    +16 -10
test_tipc/config/PPHGNet/PPHGNet_tiny_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt    +16 -10
test_tipc/config/PPLCNet/PPLCNet_x0_25_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt    +16 -10
test_tipc/config/PPLCNet/PPLCNet_x0_35_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt    +16 -10
test_tipc/config/PPLCNet/PPLCNet_x0_5_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt    +16 -10
test_tipc/config/PPLCNet/PPLCNet_x0_75_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt    +16 -10
test_tipc/config/PPLCNet/PPLCNet_x1_0_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt    +16 -10
test_tipc/config/PPLCNet/PPLCNet_x1_5_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt    +16 -10
test_tipc/config/PPLCNet/PPLCNet_x2_0_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt    +16 -10
test_tipc/config/PPLCNet/PPLCNet_x2_5_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt    +16 -10
test_tipc/config/PPLCNetV2/PPLCNetV2_base_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt    +17 -11
test_tipc/config/ResNet/ResNet50_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt    +16 -10
test_tipc/config/ResNet/ResNet50_vd_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt    +16 -10
test_tipc/config/SwinTransformer/SwinTransformer_tiny_patch4_window7_224_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt    +16 -10
test_tipc/prepare.sh    +96 -352
test_tipc/test_inference_cpp.sh    +243 -52
test_tipc/config/MobileNetV3/MobileNetV3_large_x1_0_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt
# model load config
===========================cpp_infer_params===========================
model_name:MobileNetV3_large_x1_0
cpp_infer_type:cls
cls_inference_model_dir:./inference/
det_inference_model_dir:
cls_inference_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/MobileNetV3_large_x1_0_infer.tar
det_inference_url:
infer_quant:False
inference_cmd:./deploy/cpp/build/clas_system -c inference_cls.yaml
use_gpu:True|False
gpu_id:0
gpu_mem:4000
cpu_math_library_num_threads:10
# cls config
cls_model_path:./deploy/models/MobileNetV3_large_x1_0_infer/inference.pdmodel
cls_params_path:./deploy/models/MobileNetV3_large_x1_0_infer/inference.pdiparams
resize_short_size:256
crop_size:224
enable_mkldnn:False
cpu_threads:1
batch_size:1
use_tensorrt:False
precision:fp32
image_dir:./dataset/ILSVRC2012/val/ILSVRC2012_val_00000001.JPEG
benchmark:False
generate_yaml_cmd:python3.7 test_tipc/generate_cpp_yaml.py
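Each line in these TIPC config files is a `key:value` pair. The project's own parser lives in `test_tipc/common_func.sh` (`func_parser_value`); the sketch below is an illustrative stand-in, not that implementation, using bash parameter expansion to split on the *first* colon only, which matters because values such as `cls_inference_url` themselves contain `:`.

```shell
#!/bin/bash
# Illustrative TIPC config-line parsing (hypothetical helpers, not the
# project's func_parser_value): split "key:value" on the FIRST colon only,
# so values containing ':' (e.g. https:// URLs) stay intact.
parse_key(){ echo "${1%%:*}"; }    # longest suffix match removed -> key
parse_value(){ echo "${1#*:}"; }   # shortest prefix match removed -> value

line="cls_inference_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/MobileNetV3_large_x1_0_infer.tar"
parse_key "$line"     # prints: cls_inference_url
parse_value "$line"   # prints the full URL, colons preserved
```

Splitting with `IFS=":"` instead (as `func_parser_value_cpp` later in this commit does) would truncate URL values at the second colon, which is why first-colon splitting is the safer pattern for these files.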
test_tipc/config/PP-ShiTu/PPShiTu_general_rec_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt
# model load config
===========================cpp_infer_params===========================
model_name:general_PPLCNet_x2_5_lite_v1.0
cpp_infer_type:cls
cls_inference_model_dir:./inference/
det_inference_model_dir:
cls_inference_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/general_PPLCNet_x2_5_lite_v1.0_infer.tar
det_inference_url:
infer_quant:False
inference_cmd:./deploy/cpp/build/clas_system -c inference_cls.yaml
use_gpu:True|False
gpu_id:0
gpu_mem:4000
cpu_math_library_num_threads:10
# cls config
cls_model_path:./deploy/models/general_PPLCNet_x2_5_lite_v1.0_infer/inference.pdmodel
cls_params_path:./deploy/models/general_PPLCNet_x2_5_lite_v1.0_infer/inference.pdiparams
resize_short_size:256
crop_size:224
enable_mkldnn:False
cpu_threads:1
batch_size:1
use_tensorrt:False
precision:fp32
image_dir:./dataset/ILSVRC2012/val/ILSVRC2012_val_00000001.JPEG
benchmark:False
generate_yaml_cmd:python3.7 test_tipc/generate_cpp_yaml.py
test_tipc/config/PP-ShiTu/PPShiTu_mainbody_det_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt
# model load config
===========================cpp_infer_params===========================
model_name:picodet_PPLCNet_x2_5_mainbody_lite_v1.0
cpp_infer_type:shitu
cls_inference_model_dir:./inference/
det_inference_model_dir:
cls_inference_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.tar
det_inference_url:
infer_quant:False
inference_cmd:./deploy/cpp/build/clas_system -c inference_cls.yaml
use_gpu:True|False
gpu_id:0
gpu_mem:4000
cpu_math_library_num_threads:10
# cls config
cls_model_path:./deploy/models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer/inference.pdmodel
cls_params_path:./deploy/models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer/inference.pdiparams
resize_short_size:256
crop_size:224
enable_mkldnn:False
cpu_threads:1
batch_size:1
use_tensorrt:False
precision:fp32
image_dir:./dataset/ILSVRC2012/val/ILSVRC2012_val_00000001.JPEG
benchmark:False
generate_yaml_cmd:python3.7 test_tipc/generate_cpp_yaml.py
test_tipc/config/PPHGNet/PPHGNet_small_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt
# model load config
===========================cpp_infer_params===========================
model_name:PPHGNet_small
cpp_infer_type:cls
cls_inference_model_dir:./inference/
det_inference_model_dir:
cls_inference_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/PPHGNet_small_infer.tar
det_inference_url:
infer_quant:False
inference_cmd:./deploy/cpp/build/clas_system -c inference_cls.yaml
use_gpu:True|False
gpu_id:0
gpu_mem:4000
cpu_math_library_num_threads:10
# cls config
cls_model_path:./deploy/models/PPHGNet_small_infer/inference.pdmodel
cls_params_path:./deploy/models/PPHGNet_small_infer/inference.pdiparams
resize_short_size:256
crop_size:224
enable_mkldnn:False
cpu_threads:1
batch_size:1
use_tensorrt:False
precision:fp32
image_dir:./dataset/ILSVRC2012/val/ILSVRC2012_val_00000001.JPEG
benchmark:False
generate_yaml_cmd:python3.7 test_tipc/generate_cpp_yaml.py
test_tipc/config/PPHGNet/PPHGNet_tiny_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt
# model load config
===========================cpp_infer_params===========================
model_name:PPHGNet_tiny
cpp_infer_type:cls
cls_inference_model_dir:./inference/
det_inference_model_dir:
cls_inference_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/PPHGNet_tiny_infer.tar
det_inference_url:
infer_quant:False
inference_cmd:./deploy/cpp/build/clas_system -c inference_cls.yaml
use_gpu:True|False
gpu_id:0
gpu_mem:4000
cpu_math_library_num_threads:10
# cls config
cls_model_path:./deploy/models/PPHGNet_tiny_infer/inference.pdmodel
cls_params_path:./deploy/models/PPHGNet_tiny_infer/inference.pdiparams
resize_short_size:256
crop_size:224
enable_mkldnn:False
cpu_threads:1
batch_size:1
use_tensorrt:False
precision:fp32
image_dir:./dataset/ILSVRC2012/val/ILSVRC2012_val_00000001.JPEG
benchmark:False
generate_yaml_cmd:python3.7 test_tipc/generate_cpp_yaml.py
test_tipc/config/PPLCNet/PPLCNet_x0_25_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt
# model load config
===========================cpp_infer_params===========================
model_name:PPLCNet_x0_25
cpp_infer_type:cls
cls_inference_model_dir:./inference/
det_inference_model_dir:
cls_inference_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/PPLCNet_x0_25_infer.tar
det_inference_url:
infer_quant:False
inference_cmd:./deploy/cpp/build/clas_system -c inference_cls.yaml
use_gpu:True|False
gpu_id:0
gpu_mem:4000
cpu_math_library_num_threads:10
# cls config
cls_model_path:./deploy/models/PPLCNet_x0_25_infer/inference.pdmodel
cls_params_path:./deploy/models/PPLCNet_x0_25_infer/inference.pdiparams
resize_short_size:256
crop_size:224
enable_mkldnn:False
cpu_threads:1
batch_size:1
use_tensorrt:False
precision:fp32
image_dir:./dataset/ILSVRC2012/val/ILSVRC2012_val_00000001.JPEG
benchmark:False
generate_yaml_cmd:python3.7 test_tipc/generate_cpp_yaml.py
test_tipc/config/PPLCNet/PPLCNet_x0_35_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt
# model load config
===========================cpp_infer_params===========================
model_name:PPLCNet_x0_35
cpp_infer_type:cls
cls_inference_model_dir:./inference/
det_inference_model_dir:
cls_inference_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/PPLCNet_x0_35_infer.tar
det_inference_url:
infer_quant:False
inference_cmd:./deploy/cpp/build/clas_system -c inference_cls.yaml
use_gpu:True|False
gpu_id:0
gpu_mem:4000
cpu_math_library_num_threads:10
# cls config
cls_model_path:./deploy/models/PPLCNet_x0_35_infer/inference.pdmodel
cls_params_path:./deploy/models/PPLCNet_x0_35_infer/inference.pdiparams
resize_short_size:256
crop_size:224
enable_mkldnn:False
cpu_threads:1
batch_size:1
use_tensorrt:False
precision:fp32
image_dir:./dataset/ILSVRC2012/val/ILSVRC2012_val_00000001.JPEG
benchmark:False
generate_yaml_cmd:python3.7 test_tipc/generate_cpp_yaml.py
test_tipc/config/PPLCNet/PPLCNet_x0_5_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt
# model load config
===========================cpp_infer_params===========================
model_name:PPLCNet_x0_5
cpp_infer_type:cls
cls_inference_model_dir:./deploy/models/PPLCNet_x0_5_infer
det_inference_model_dir:
cls_inference_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/PPLCNet_x0_5_infer.tar
det_inference_url:
infer_quant:False
inference_cmd:./deploy/cpp/build/clas_system -c inference_cls.yaml
use_gpu:True|False
gpu_id:0
gpu_mem:4000
cpu_math_library_num_threads:10
# cls config
cls_model_path:./deploy/models/PPLCNet_x0_5_infer/inference.pdmodel
cls_params_path:./deploy/models/PPLCNet_x0_5_infer/inference.pdiparams
resize_short_size:256
crop_size:224
enable_mkldnn:False
cpu_threads:1
batch_size:1
use_tensorrt:False
precision:fp32
image_dir:./dataset/ILSVRC2012/val/ILSVRC2012_val_00000001.JPEG
benchmark:False
generate_yaml_cmd:python3.7 test_tipc/generate_cpp_yaml.py
test_tipc/config/PPLCNet/PPLCNet_x0_75_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt
# model load config
===========================cpp_infer_params===========================
model_name:PPLCNet_x0_75
cpp_infer_type:cls
cls_inference_model_dir:./inference/
det_inference_model_dir:
cls_inference_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/PPLCNet_x0_75_infer.tar
det_inference_url:
infer_quant:False
inference_cmd:./deploy/cpp/build/clas_system -c inference_cls.yaml
use_gpu:True|False
gpu_id:0
gpu_mem:4000
cpu_math_library_num_threads:10
# cls config
cls_model_path:./deploy/models/PPLCNet_x0_75_infer/inference.pdmodel
cls_params_path:./deploy/models/PPLCNet_x0_75_infer/inference.pdiparams
resize_short_size:256
crop_size:224
enable_mkldnn:False
cpu_threads:1
batch_size:1
use_tensorrt:False
precision:fp32
image_dir:./dataset/ILSVRC2012/val/ILSVRC2012_val_00000001.JPEG
benchmark:False
generate_yaml_cmd:python3.7 test_tipc/generate_cpp_yaml.py
test_tipc/config/PPLCNet/PPLCNet_x1_0_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt
# model load config
===========================cpp_infer_params===========================
model_name:PPLCNet_x1_0
cpp_infer_type:cls
cls_inference_model_dir:./inference/
det_inference_model_dir:
cls_inference_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/PPLCNet_x1_0_infer.tar
det_inference_url:
infer_quant:False
inference_cmd:./deploy/cpp/build/clas_system -c inference_cls.yaml
use_gpu:True|False
gpu_id:0
gpu_mem:4000
cpu_math_library_num_threads:10
# cls config
cls_model_path:./deploy/models/PPLCNet_x1_0_infer/inference.pdmodel
cls_params_path:./deploy/models/PPLCNet_x1_0_infer/inference.pdiparams
resize_short_size:256
crop_size:224
enable_mkldnn:False
cpu_threads:1
batch_size:1
use_tensorrt:False
precision:fp32
image_dir:./dataset/ILSVRC2012/val/ILSVRC2012_val_00000001.JPEG
benchmark:False
generate_yaml_cmd:python3.7 test_tipc/generate_cpp_yaml.py
test_tipc/config/PPLCNet/PPLCNet_x1_5_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt
# model load config
===========================cpp_infer_params===========================
model_name:PPLCNet_x1_5
cpp_infer_type:cls
cls_inference_model_dir:./inference/
det_inference_model_dir:
cls_inference_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/PPLCNet_x1_5_infer.tar
det_inference_url:
infer_quant:False
inference_cmd:./deploy/cpp/build/clas_system -c inference_cls.yaml
use_gpu:True|False
gpu_id:0
gpu_mem:4000
cpu_math_library_num_threads:10
# cls config
cls_model_path:./deploy/models/PPLCNet_x1_5_infer/inference.pdmodel
cls_params_path:./deploy/models/PPLCNet_x1_5_infer/inference.pdiparams
resize_short_size:256
crop_size:224
enable_mkldnn:False
cpu_threads:1
batch_size:1
use_tensorrt:False
precision:fp32
image_dir:./dataset/ILSVRC2012/val/ILSVRC2012_val_00000001.JPEG
benchmark:False
generate_yaml_cmd:python3.7 test_tipc/generate_cpp_yaml.py
test_tipc/config/PPLCNet/PPLCNet_x2_0_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt
# model load config
===========================cpp_infer_params===========================
model_name:PPLCNet_x2_0
cpp_infer_type:cls
cls_inference_model_dir:./inference/
det_inference_model_dir:
cls_inference_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/PPLCNet_x2_0_infer.tar
det_inference_url:
infer_quant:False
inference_cmd:./deploy/cpp/build/clas_system -c inference_cls.yaml
use_gpu:True|False
gpu_id:0
gpu_mem:4000
cpu_math_library_num_threads:10
# cls config
cls_model_path:./deploy/models/PPLCNet_x2_0_infer/inference.pdmodel
cls_params_path:./deploy/models/PPLCNet_x2_0_infer/inference.pdiparams
resize_short_size:256
crop_size:224
enable_mkldnn:False
cpu_threads:1
batch_size:1
use_tensorrt:False
precision:fp32
image_dir:./dataset/ILSVRC2012/val/ILSVRC2012_val_00000001.JPEG
benchmark:False
generate_yaml_cmd:python3.7 test_tipc/generate_cpp_yaml.py
test_tipc/config/PPLCNet/PPLCNet_x2_5_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt
# model load config
===========================cpp_infer_params===========================
model_name:PPLCNet_x2_5
cpp_infer_type:cls
cls_inference_model_dir:./inference/
det_inference_model_dir:
cls_inference_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/PPLCNet_x2_5_infer.tar
det_inference_url:
infer_quant:False
inference_cmd:./deploy/cpp/build/clas_system -c inference_cls.yaml
use_gpu:True|False
gpu_id:0
gpu_mem:4000
cpu_math_library_num_threads:10
# cls config
cls_model_path:./deploy/models/PPLCNet_x2_5_infer/inference.pdmodel
cls_params_path:./deploy/models/PPLCNet_x2_5_infer/inference.pdiparams
resize_short_size:256
crop_size:224
enable_mkldnn:False
cpu_threads:1
batch_size:1
use_tensorrt:False
precision:fp32
image_dir:./dataset/ILSVRC2012/val/ILSVRC2012_val_00000001.JPEG
benchmark:False
generate_yaml_cmd:python3.7 test_tipc/generate_cpp_yaml.py
test_tipc/config/PPLCNetV2/PPLCNetV2_base_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt
# model load config
model_name:PPLCNet_x0_5
===========================cpp_infer_params===========================
model_name:PPLCNetV2_base
cpp_infer_type:cls
cls_inference_model_dir:./inference/
det_inference_model_dir:
cls_inference_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/PPLCNetV2_base_infer.tar
det_inference_url:
infer_quant:False
inference_cmd:./deploy/cpp/build/clas_system -c inference_cls.yaml
use_gpu:True|False
gpu_id:0
gpu_mem:4000
cpu_math_library_num_threads:10
# cls config
cls_model_path:./deploy/models/PPLCNet_x0_5_infer/inference.pdmodel
cls_params_path:./deploy/models/PPLCNet_x0_5_infer/inference.pdiparams
resize_short_size:256
crop_size:224
enable_mkldnn:False
cpu_threads:1
batch_size:1
use_tensorrt:False
precision:fp32
image_dir:./dataset/ILSVRC2012/val/ILSVRC2012_val_00000001.JPEG
benchmark:False
generate_yaml_cmd:python3.7 test_tipc/generate_cpp_yaml.py
test_tipc/config/ResNet/ResNet50_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt
# model load config
===========================cpp_infer_params===========================
model_name:ResNet50
cpp_infer_type:cls
cls_inference_model_dir:./inference/
det_inference_model_dir:
cls_inference_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/ResNet50_infer.tar
det_inference_url:
infer_quant:False
inference_cmd:./deploy/cpp/build/clas_system -c inference_cls.yaml
use_gpu:True|False
gpu_id:0
gpu_mem:4000
cpu_math_library_num_threads:10
# cls config
cls_model_path:./deploy/models/ResNet50_infer/inference.pdmodel
cls_params_path:./deploy/models/ResNet50_infer/inference.pdiparams
resize_short_size:256
crop_size:224
enable_mkldnn:False
cpu_threads:1
batch_size:1
use_tensorrt:False
precision:fp32
image_dir:./dataset/ILSVRC2012/val/ILSVRC2012_val_00000001.JPEG
benchmark:False
generate_yaml_cmd:python3.7 test_tipc/generate_cpp_yaml.py
test_tipc/config/ResNet/ResNet50_vd_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt
# model load config
===========================cpp_infer_params===========================
model_name:ResNet50_vd
cpp_infer_type:cls
cls_inference_model_dir:./inference/
det_inference_model_dir:
cls_inference_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/ResNet50_vd_infer.tar
det_inference_url:
infer_quant:False
inference_cmd:./deploy/cpp/build/clas_system -c inference_cls.yaml
use_gpu:True|False
gpu_id:0
gpu_mem:4000
cpu_math_library_num_threads:10
# cls config
cls_model_path:./deploy/models/ResNet50_vd_infer/inference.pdmodel
cls_params_path:./deploy/models/ResNet50_vd_infer/inference.pdiparams
resize_short_size:256
crop_size:224
enable_mkldnn:False
cpu_threads:1
batch_size:1
use_tensorrt:False
precision:fp32
image_dir:./dataset/ILSVRC2012/val/ILSVRC2012_val_00000001.JPEG
benchmark:False
generate_yaml_cmd:python3.7 test_tipc/generate_cpp_yaml.py
test_tipc/config/SwinTransformer/SwinTransformer_tiny_patch4_window7_224_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt
# model load config
===========================cpp_infer_params===========================
model_name:SwinTransformer_tiny_patch4_window7_224
cpp_infer_type:cls
cls_inference_model_dir:./inference/
det_inference_model_dir:
cls_inference_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/SwinTransformer_tiny_patch4_window7_224_infer.tar
det_inference_url:
infer_quant:False
inference_cmd:./deploy/cpp/build/clas_system -c inference_cls.yaml
use_gpu:True|False
gpu_id:0
gpu_mem:4000
cpu_math_library_num_threads:10
# cls config
cls_model_path:./deploy/models/SwinTransformer_tiny_patch4_window7_224_infer/inference.pdmodel
cls_params_path:./deploy/models/SwinTransformer_tiny_patch4_window7_224_infer/inference.pdiparams
resize_short_size:256
crop_size:224
enable_mkldnn:False
cpu_threads:1
batch_size:1
use_tensorrt:False
precision:fp32
image_dir:./dataset/ILSVRC2012/val/ILSVRC2012_val_00000001.JPEG
benchmark:False
generate_yaml_cmd:python3.7 test_tipc/generate_cpp_yaml.py
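Several fields in these configs, such as `use_gpu:True|False`, pack multiple test settings into one pipe-separated value; the test driver splits them and runs one inference pass per setting. A minimal sketch of that expansion (variable names here are illustrative, not taken from the script):

```shell
#!/bin/bash
# Expand a pipe-separated TIPC value into a list and iterate over it,
# mirroring the IFS='|' splitting the test scripts rely on.
use_gpu_list="True|False"
IFS='|' read -r -a options <<< "${use_gpu_list}"
for use_gpu in "${options[@]}"; do
    echo "running with use_gpu=${use_gpu}"
done
```

This is why a single config file can drive both the CPU and GPU test branches in one invocation.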
test_tipc/prepare.sh
(This diff is collapsed.)
test_tipc/test_inference_cpp.sh
#!/bin/bash
source test_tipc/common_func.sh

function func_parser_key_cpp(){
    strs=$1
    IFS=" "
    array=(${strs})
    tmp=${array[0]}
    echo ${tmp}
}

function func_parser_value_cpp(){
    strs=$1
    IFS=":"
    array=(${strs})
    tmp=${array[1]}
    echo ${tmp}
}

FILENAME=$1
dataline=$(cat ${FILENAME})
lines=(${dataline})

GPUID=$2
if [[ ! $GPUID ]];then
    GPUID=0
fi
dataline=$(awk 'NR==1, NR==18{print}' $FILENAME)

# parser params
dataline=$(awk 'NR==1, NR==14{print}' $FILENAME)
IFS=$'\n'
lines=(${dataline})

# parser cpp inference model
model_name=$(func_parser_value "${lines[1]}")
cpp_infer_type=$(func_parser_value "${lines[2]}")
cpp_infer_model_dir=$(func_parser_value "${lines[3]}")
cpp_det_infer_model_dir=$(func_parser_value "${lines[4]}")
cpp_infer_is_quant=$(func_parser_value "${lines[7]}")
# parser cpp inference
inference_cmd=$(func_parser_value "${lines[8]}")
cpp_use_gpu_list=$(func_parser_value "${lines[9]}")
cpp_use_mkldnn_list=$(func_parser_value "${lines[10]}")
cpp_cpu_threads_list=$(func_parser_value "${lines[11]}")
cpp_batch_size_list=$(func_parser_value "${lines[12]}")
cpp_use_trt_list=$(func_parser_value "${lines[13]}")
cpp_precision_list=$(func_parser_value "${lines[14]}")
cpp_image_dir_value=$(func_parser_value "${lines[15]}")
cpp_benchmark_value=$(func_parser_value "${lines[16]}")
generate_yaml_cmd=$(func_parser_value "${lines[17]}")
transform_index_cmd=$(func_parser_value "${lines[18]}")

# parser load config
model_name=$(func_parser_value_cpp "${lines[1]}")
use_gpu_key=$(func_parser_key_cpp "${lines[2]}")
use_gpu_value=$(func_parser_value_cpp "${lines[2]}")

LOG_PATH="./test_tipc/output/${model_name}/infer_cpp"
LOG_PATH="./test_tipc/output/${model_name}"
mkdir -p ${LOG_PATH}
status_log="${LOG_PATH}/results_infer_cpp.log"
status_log="${LOG_PATH}/results_cpp.log"
# generate_yaml_cmd="python3 test_tipc/generate_cpp_yaml.py"
line_inference_model_dir=3
line_use_gpu=5
line_infer_imgs=2

function func_infer_cpp(){
# inference cpp
function func_shitu_cpp_inference(){
    IFS='|'
    for use_gpu in ${use_gpu_value[*]}; do
        if [[ ${use_gpu} = "True" ]];then
            _save_log_path="${LOG_PATH}/infer_cpp_use_gpu.log"
        else
            _save_log_path="${LOG_PATH}/infer_cpp_use_cpu.log"
        fi
    done
    # run infer cpp
    inference_cpp_cmd="./deploy/cpp/build/clas_system"
    inference_cpp_cfg="./deploy/configs/inference_cls.yaml"
    _script=$1
    _model_dir=$2
    _log_path=$3
    _img_dir=$4
    _flag_quant=$5
    # inference
    set_model_name_cmd="sed -i '${line_inference_model_dir}s#: .*#: ./deploy/models/${model_name}_infer#' '${inference_cpp_cfg}'"
    eval $set_model_name_cmd
    for use_gpu in ${cpp_use_gpu_list[*]}; do
        if [ ${use_gpu} = "False" ] || [ ${use_gpu} = "cpu" ]; then
            for use_mkldnn in ${cpp_use_mkldnn_list[*]}; do
                if [ ${use_mkldnn} = "False" ] && [ ${_flag_quant} = "True" ]; then
                    continue
                fi
                for threads in ${cpp_cpu_threads_list[*]}; do
                    for batch_size in ${cpp_batch_size_list[*]}; do
                        precision="fp32"
                        if [ ${use_mkldnn} = "False" ] && [ ${_flag_quant} = "True" ]; then
                            precision="int8"
                        fi
                        _save_log_path="${_log_path}/shitu_cpp_infer_cpu_usemkldnn_${use_mkldnn}_threads_${threads}_precision_${precision}_batchsize_${batch_size}.log"
                        set_infer_imgs_cmd="sed -i '${line_infer_imgs}s#: .*#: ./deploy/images/ILSVRC2012_val_00000010.jpeg#' '${inference_cpp_cfg}'"
                        eval $set_infer_imgs_cmd
                        command="${generate_yaml_cmd} --type shitu --batch_size ${batch_size} --mkldnn ${use_mkldnn} --gpu ${use_gpu} --cpu_thread ${threads} --tensorrt False --precision ${precision} --data_dir ${_img_dir} --benchmark True --cls_model_dir ${cpp_infer_model_dir} --det_model_dir ${cpp_det_infer_model_dir} --gpu_id ${GPUID}"
                        eval $command
                        eval $transform_index_cmd
                        command="${_script} 2>&1|tee ${_save_log_path}"
                        eval $command
                        last_status=${PIPESTATUS[0]}
                        status_check $last_status "${command}" "${status_log}"
                    done
                done
            done
        elif [ ${use_gpu} = "True" ] || [ ${use_gpu} = "gpu" ]; then
            for use_trt in ${cpp_use_trt_list[*]}; do
                for precision in ${cpp_precision_list[*]}; do
                    if [[ ${_flag_quant} = "False" ]] && [[ ${precision} =~ "int8" ]]; then
                        continue
                    fi
                    if [[ ${precision} =~ "fp16" || ${precision} =~ "int8" ]] && [ ${use_trt} = "False" ]; then
                        continue
                    fi
                    if [[ ${use_trt} = "False" || ${precision} =~ "int8" ]] && [ ${_flag_quant} = "True" ]; then
                        continue
                    fi
                    for batch_size in ${cpp_batch_size_list[*]}; do
                        _save_log_path="${_log_path}/shitu_cpp_infer_gpu_usetrt_${use_trt}_precision_${precision}_batchsize_${batch_size}.log"
                        command="${generate_yaml_cmd} --type shitu --batch_size ${batch_size} --mkldnn False --gpu ${use_gpu} --cpu_thread 1 --tensorrt ${use_trt} --precision ${precision} --data_dir ${_img_dir} --benchmark True --cls_model_dir ${cpp_infer_model_dir} --det_model_dir ${cpp_det_infer_model_dir} --gpu_id ${GPUID}"
                        eval $command
                        eval $transform_index_cmd
                        command="${_script} 2>&1|tee ${_save_log_path}"
                        eval $command
                        last_status=${PIPESTATUS[0]}
                        status_check $last_status "${_script}" "${status_log}"
                    done
                done
            done
        else
            echo "Does not support hardware other than CPU and GPU Currently!"
        fi
    done
}
set_use_gpu_cmd="sed -i '${line_use_gpu}s#: .*#: ${use_gpu}#' '${inference_cpp_cfg}'"
eval $set_use_gpu_cmd

function func_cls_cpp_inference(){
    IFS='|'
    _script=$1
    _model_dir=$2
    _log_path=$3
    _img_dir=$4
    _flag_quant=$5
    # inference
    infer_cpp_full_cmd="${inference_cpp_cmd} -c ${inference_cpp_cfg} > ${_save_log_path} 2>&1 "
    eval $infer_cpp_full_cmd
    for use_gpu in ${cpp_use_gpu_list[*]}; do
        if [ ${use_gpu} = "False" ] || [ ${use_gpu} = "cpu" ]; then
            for use_mkldnn in ${cpp_use_mkldnn_list[*]}; do
                if [ ${use_mkldnn} = "False" ] && [ ${_flag_quant} = "True" ]; then
                    continue
                fi
                for threads in ${cpp_cpu_threads_list[*]}; do
                    for batch_size in ${cpp_batch_size_list[*]}; do
                        precision="fp32"
                        if [ ${use_mkldnn} = "False" ] && [ ${_flag_quant} = "True" ]; then
                            precision="int8"
                        fi
                        _save_log_path="${_log_path}/cls_cpp_infer_cpu_usemkldnn_${use_mkldnn}_threads_${threads}_precision_${precision}_batchsize_${batch_size}.log"
                        last_status=${PIPESTATUS[0]}
                        status_check $last_status "${infer_cpp_full_cmd}" "${status_log}" "${model_name}"
                        command="${generate_yaml_cmd} --type cls --batch_size ${batch_size} --mkldnn ${use_mkldnn} --gpu ${use_gpu} --cpu_thread ${threads} --tensorrt False --precision ${precision} --data_dir ${_img_dir} --benchmark True --cls_model_dir ${cpp_infer_model_dir} --gpu_id ${GPUID}"
                        eval $command
                        command1="${_script} 2>&1|tee ${_save_log_path}"
                        eval ${command1}
                        last_status=${PIPESTATUS[0]}
                        status_check $last_status "${command1}" "${status_log}"
                    done
                done
            done
        elif [ ${use_gpu} = "True" ] || [ ${use_gpu} = "gpu" ]; then
            for use_trt in ${cpp_use_trt_list[*]}; do
                for precision in ${cpp_precision_list[*]}; do
                    if [[ ${_flag_quant} = "False" ]] && [[ ${precision} =~ "int8" ]]; then
                        continue
                    fi
                    if [[ ${precision} =~ "fp16" || ${precision} =~ "int8" ]] && [ ${use_trt} = "False" ]; then
                        continue
                    fi
                    if [[ ${use_trt} = "False" || ${precision} =~ "int8" ]] && [ ${_flag_quant} = "True" ]; then
                        continue
                    fi
                    for batch_size in ${cpp_batch_size_list[*]}; do
                        _save_log_path="${_log_path}/cls_cpp_infer_gpu_usetrt_${use_trt}_precision_${precision}_batchsize_${batch_size}.log"
                        command="${generate_yaml_cmd} --type cls --batch_size ${batch_size} --mkldnn False --gpu ${use_gpu} --cpu_thread 1 --tensorrt ${use_trt} --precision ${precision} --data_dir ${_img_dir} --benchmark True --cls_model_dir ${cpp_infer_model_dir} --gpu_id ${GPUID}"
                        eval $command
                        command="${_script} 2>&1|tee ${_save_log_path}"
                        eval $command
                        last_status=${PIPESTATUS[0]}
                        status_check $last_status "${command}" "${status_log}"
                    done
                done
            done
        else
            echo "Does not support hardware other than CPU and GPU Currently!"
        fi
    done
}
}

echo "################### run test cpp inference ###################"
func_infer_cpp

if [[ $cpp_infer_type == "cls" ]]; then
    cd deploy/cpp
elif [[ $cpp_infer_type == "shitu" ]]; then
    cd deploy/cpp_shitu
else
    echo "Only support cls and shitu"
    exit 0
fi

if [[ $cpp_infer_type == "shitu" ]]; then
    echo "################### update cmake ###################"
    wget -nc https://github.com/Kitware/CMake/releases/download/v3.22.0/cmake-3.22.0.tar.gz
    tar xf cmake-3.22.0.tar.gz
    cd ./cmake-3.22.0
    export root_path=$PWD
    export install_path=${root_path}/cmake
    eval "./bootstrap --prefix=${install_path}"
    make -j
    make install
    export PATH=${install_path}/bin:$PATH
    cd ..
    echo "################### update cmake done ###################"

    echo "################### build faiss ###################"
    apt-get install -y libopenblas-dev
    git clone https://github.com/facebookresearch/faiss.git
    cd faiss
    export faiss_install_path=$PWD/faiss_install
    eval "cmake -B build . -DFAISS_ENABLE_PYTHON=OFF -DCMAKE_INSTALL_PREFIX=${faiss_install_path}"
    make -C build -j faiss
    make -C build install
    cd ..
fi

echo "################### build PaddleClas demo ####################"
# pwd = /workspace/hesensen/PaddleClas/deploy/cpp_shitu
OPENCV_DIR=$(dirname $PWD)/cpp/opencv-3.4.7/opencv3/
LIB_DIR=$(dirname $PWD)/cpp/paddle_inference/
CUDA_LIB_DIR=$(dirname `find /usr -name libcudart.so`)
CUDNN_LIB_DIR=$(dirname `find /usr -name libcudnn.so`)

BUILD_DIR=build
rm -rf ${BUILD_DIR}
mkdir ${BUILD_DIR}
cd ${BUILD_DIR}
if [[ $cpp_infer_type == cls ]]; then
    cmake .. \
        -DPADDLE_LIB=${LIB_DIR} \
        -DWITH_MKL=ON \
        -DWITH_GPU=ON \
        -DWITH_STATIC_LIB=OFF \
        -DWITH_TENSORRT=OFF \
        -DOPENCV_DIR=${OPENCV_DIR} \
        -DCUDNN_LIB=${CUDNN_LIB_DIR} \
        -DCUDA_LIB=${CUDA_LIB_DIR} \
        -DTENSORRT_DIR=${TENSORRT_DIR}
else
    cmake .. \
        -DPADDLE_LIB=${LIB_DIR} \
        -DWITH_MKL=ON \
        -DWITH_GPU=ON \
        -DWITH_STATIC_LIB=OFF \
        -DWITH_TENSORRT=OFF \
        -DOPENCV_DIR=${OPENCV_DIR} \
        -DCUDNN_LIB=${CUDNN_LIB_DIR} \
        -DCUDA_LIB=${CUDA_LIB_DIR} \
        -DTENSORRT_DIR=${TENSORRT_DIR} \
        -DFAISS_DIR=${faiss_install_path} \
        -DFAISS_WITH_MKL=OFF
fi
make -j
cd ../../../
# cd ../../
echo "################### build PaddleClas demo finished ###################"

# set cuda device
# GPUID=$2
# if [ ${#GPUID} -le 0 ];then
#     env="export CUDA_VISIBLE_DEVICES=0"
# else
#     env="export CUDA_VISIBLE_DEVICES=${GPUID}"
# fi
# set CUDA_VISIBLE_DEVICES
# eval $env

echo "################### run test ###################"
export Count=0
IFS="|"
infer_quant_flag=(${cpp_infer_is_quant})
for infer_model in ${cpp_infer_model_dir[*]}; do
    # run inference
    is_quant=${infer_quant_flag[Count]}
    if [[ $cpp_infer_type == "cls" ]]; then
        func_cls_cpp_inference "${inference_cmd}" "${infer_model}" "${LOG_PATH}" "${cpp_image_dir_value}" ${is_quant}
    else
        func_shitu_cpp_inference "${inference_cmd}" "${infer_model}" "${LOG_PATH}" "${cpp_image_dir_value}" ${is_quant}
    fi
    Count=$(($Count + 1))
done
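The script patches its generated YAML in place with line-addressed sed commands (the `line_inference_model_dir=3`, `line_use_gpu=5`, `line_infer_imgs=2` constants name the target lines). A minimal sketch of that pattern on a throwaway file (the file contents and replacement path below are made up for the demo):

```shell
#!/bin/bash
# Rewrite the value on a specific line of a YAML file while keeping the key:
# '3s#: .*#: NEW#' targets line 3 and replaces everything from ': ' onward.
cfg=$(mktemp)
printf 'use_gpu: True\nimage_file: ./a.jpg\ninference_model_dir: ./old\n' > "$cfg"
sed -i '3s#: .*#: ./deploy/models/ResNet50_infer#' "$cfg"
cat "$cfg"   # line 3 now reads: inference_model_dir: ./deploy/models/ResNet50_infer
rm -f "$cfg"
```

Addressing by line number keeps the sed expression simple, but it silently breaks if the YAML's line order changes, which is one reason the config also stores the line numbers as variables rather than hard-coding them at each call site.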