PaddlePaddle / PaddleClas

Commit 22531707 (unverified)

Merge pull request #2093 from HydrogenSulfate/fix_cpp_chain

Fix cpp chain

Authored by Walter, committed via GitHub on Jun 24, 2022.
Parents: 46ec76eb, cf1e7747
Showing 3 changed files with 24 additions and 24 deletions (+24, -24):

- test_tipc/docs/test_inference_cpp.md (+5, -5)
- test_tipc/prepare.sh (+6, -1)
- test_tipc/test_inference_cpp.sh (+13, -18)
test_tipc/docs/test_inference_cpp.md

````diff
@@ -248,20 +248,20 @@ bash test_tipc/prepare.sh test_tipc/config/ResNet/ResNet50_linux_gpu_normal_norm
 The test method is as follows; to test a different model, simply substitute your own parameter configuration file.
 
 ```shell
-bash test_tipc/test_inference_cpp.sh ${your_params_file}
+bash test_tipc/test_inference_cpp.sh ${your_params_file} cpp_infer
 ```
 
 Taking the `Linux GPU/CPU C++ inference test` of `ResNet50` as an example, the command is as follows.
 
 ```shell
-bash test_tipc/test_inference_cpp.sh test_tipc/config/ResNet/ResNet50_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt
+bash test_tipc/test_inference_cpp.sh test_tipc/config/ResNet/ResNet50_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt cpp_infer
 ```
 
 Output like the following indicates that the command ran successfully.
 
 ```shell
-Run successfully with command - ./deploy/cpp/build/clas_system -c inference_cls.yaml > ./test_tipc/output/ResNet50/cls_cpp_infer_gpu_usetrt_False_precision_fp32_batchsize_1.log 2>&1!
-Run successfully with command - ./deploy/cpp/build/clas_system -c inference_cls.yaml > ./test_tipc/output/ResNet50/cls_cpp_infer_cpu_usemkldnn_False_threads_1_precision_fp32_batchsize_1.log 2>&1!
+Run successfully with command - ResNet50 - ./deploy/cpp/build/clas_system -c inference_cls.yaml > ./test_tipc/output/ResNet50/cpp_infer/cpp_infer_gpu_usetrt_False_precision_fp32_batchsize_1.log 2>&1!
+Run successfully with command - ResNet50 - ./deploy/cpp/build/clas_system -c inference_cls.yaml > ./test_tipc/output/ResNet50/cpp_infer/cpp_infer_cpu_usemkldnn_False_threads_1_precision_fp32_batchsize_1.log 2>&1!
 ```
 
 The results are printed at the end of the log, as shown below.
@@ -312,6 +312,6 @@ Current total inferen time cost: 5449.39 ms.
 Top5: class_id: 265, score: 0.0420, label: toy poodle
 ```
 
-The detailed logs are located at `./test_tipc/output/ResNet50/cls_cpp_infer_gpu_usetrt_False_precision_fp32_batchsize_1.log` and `./test_tipc/output/ResNet50/cls_cpp_infer_cpu_usemkldnn_False_threads_1_precision_fp32_batchsize_1.log`.
+The detailed logs are located at `./test_tipc/output/ResNet50/cpp_infer/cpp_infer_gpu_usetrt_False_precision_fp32_batchsize_1.log` and `./test_tipc/output/ResNet50/cpp_infer_cpu_usemkldnn_False_threads_1_precision_fp32_batchsize_1.log`.
 
 If a run fails, the failure log and the corresponding command are also printed in the terminal; that command can be used to analyze the cause of the failure.
````
test_tipc/prepare.sh

```diff
@@ -85,7 +85,12 @@ if [[ ${MODE} = "cpp_infer" ]]; then
     fi
     if [[ ! -d "./deploy/cpp/paddle_inference/" ]]; then
         pushd ./deploy/cpp/
-        wget -nc https://paddle-inference-lib.bj.bcebos.com/2.2.2/cxx_c/Linux/GPU/x86-64_gcc8.2_avx_mkl_cuda10.1_cudnn7.6.5_trt6.0.1.5/paddle_inference.tgz
+        PADDLEInfer=$3
+        if [ "" = "$PADDLEInfer" ]; then
+            wget -nc https://paddle-inference-lib.bj.bcebos.com/2.2.2/cxx_c/Linux/GPU/x86-64_gcc8.2_avx_mkl_cuda10.1_cudnn7.6.5_trt6.0.1.5/paddle_inference.tgz --no-check-certificate
+        else
+            wget -nc ${PADDLEInfer} --no-check-certificate
+        fi
         tar xf paddle_inference.tgz
         popd
     fi
```
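With this change, `test_tipc/prepare.sh` accepts an optional third argument that overrides the pinned `paddle_inference` download URL (and `wget` now skips certificate checks in both branches). A minimal usage sketch; the override URL is a hypothetical placeholder:

```shell
# default: fetch the pinned 2.2.2 GPU build of paddle_inference
bash test_tipc/prepare.sh test_tipc/config/ResNet/ResNet50_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt cpp_infer

# override: point the third argument at a custom tarball (hypothetical URL)
bash test_tipc/prepare.sh test_tipc/config/ResNet/ResNet50_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt cpp_infer \
    https://example.com/builds/paddle_inference.tgz
```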
test_tipc/test_inference_cpp.sh

```diff
@@ -2,10 +2,17 @@
 source test_tipc/common_func.sh
 
 FILENAME=$1
-GPUID=$2
+MODE=$2
+
+# set cuda device
+GPUID=$3
 if [[ ! $GPUID ]]; then
     GPUID=0
 fi
+env="export CUDA_VISIBLE_DEVICES=${GPUID}"
+set CUDA_VISIBLE_DEVICES
+eval $env
 
 dataline=$(awk 'NR==1, NR==19{print}' $FILENAME)
 
 # parser params
```
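The CLI thus changes from `test_inference_cpp.sh <params_file> [gpu_id]` to `test_inference_cpp.sh <params_file> <mode> [gpu_id]`, and `CUDA_VISIBLE_DEVICES` is exported once here instead of near the bottom of the script (that copy is removed in the final hunk below). A usage sketch consistent with the updated doc; the GPU id is illustrative:

```shell
# run the C++ inference chain; GPUID falls back to 0 when omitted
bash test_tipc/test_inference_cpp.sh ${your_params_file} cpp_infer

# pin the test to GPU 1 via the optional third argument
bash test_tipc/test_inference_cpp.sh ${your_params_file} cpp_infer 1
```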
```diff
@@ -30,7 +37,7 @@ cpp_benchmark_value=$(func_parser_value "${lines[16]}")
 generate_yaml_cmd=$(func_parser_value "${lines[17]}")
 transform_index_cmd=$(func_parser_value "${lines[18]}")
 
-LOG_PATH="./test_tipc/output/${model_name}"
+LOG_PATH="./test_tipc/output/${model_name}/${MODE}"
 mkdir -p ${LOG_PATH}
 status_log="${LOG_PATH}/results_cpp.log"
 # generate_yaml_cmd="python3 test_tipc/generate_cpp_yaml.py"
```
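Appending `${MODE}` to `LOG_PATH` groups logs per mode under each model's output directory. For `ResNet50` in `cpp_infer` mode this yields paths like the following, matching the updated doc above:

```shell
./test_tipc/output/ResNet50/cpp_infer/results_cpp.log
./test_tipc/output/ResNet50/cpp_infer/cpp_infer_gpu_usetrt_False_precision_fp32_batchsize_1.log
./test_tipc/output/ResNet50/cpp_infer/cpp_infer_cpu_usemkldnn_False_threads_1_precision_fp32_batchsize_1.log
```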
```diff
@@ -56,7 +63,7 @@ function func_shitu_cpp_inference(){
             if [ ${use_mkldnn} = "False" ] && [ ${_flag_quant} = "True" ]; then
                 precison="int8"
             fi
-            _save_log_path="${_log_path}/shitu_cpp_infer_cpu_usemkldnn_${use_mkldnn}_threads_${threads}_precision_${precision}_batchsize_${batch_size}.log"
+            _save_log_path="${_log_path}/cpp_infer_cpu_usemkldnn_${use_mkldnn}_threads_${threads}_precision_${precision}_batchsize_${batch_size}.log"
             eval $transform_index_cmd
             command="${generate_yaml_cmd} --type shitu --batch_size ${batch_size} --mkldnn ${use_mkldnn} --gpu ${use_gpu} --cpu_thread ${threads} --tensorrt False --precision ${precision} --data_dir ${_img_dir} --benchmark True --cls_model_dir ${cpp_infer_model_dir} --det_model_dir ${cpp_det_infer_model_dir} --gpu_id ${GPUID}"
             eval $command
```
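For context, `generate_yaml_cmd` is parsed from the params file (`lines[17]`); the commented-out default shown in the previous hunk is `python3 test_tipc/generate_cpp_yaml.py`. Assuming that default and purely illustrative values, the composed `command` would expand to roughly:

```shell
# data_dir and the two model dirs below are hypothetical placeholders
python3 test_tipc/generate_cpp_yaml.py --type shitu --batch_size 1 \
    --mkldnn True --gpu False --cpu_thread 1 --tensorrt False --precision fp32 \
    --data_dir ./dataset/demo --benchmark True \
    --cls_model_dir ./models/cls_infer --det_model_dir ./models/det_infer --gpu_id 0
```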
```diff
@@ -80,7 +87,7 @@ function func_shitu_cpp_inference(){
                 continue
             fi
             for batch_size in ${cpp_batch_size_list[*]}; do
-                _save_log_path="${_log_path}/shitu_cpp_infer_gpu_usetrt_${use_trt}_precision_${precision}_batchsize_${batch_size}.log"
+                _save_log_path="${_log_path}/cpp_infer_gpu_usetrt_${use_trt}_precision_${precision}_batchsize_${batch_size}.log"
                 eval $transform_index_cmd
                 command="${generate_yaml_cmd} --type shitu --batch_size ${batch_size} --mkldnn False --gpu ${use_gpu} --cpu_thread 1 --tensorrt ${use_trt} --precision ${precision} --data_dir ${_img_dir} --benchmark True --cls_model_dir ${cpp_infer_model_dir} --det_model_dir ${cpp_det_infer_model_dir} --gpu_id ${GPUID}"
                 eval $command
```
```diff
@@ -118,7 +125,7 @@ function func_cls_cpp_inference(){
             if [ ${use_mkldnn} = "False" ] && [ ${_flag_quant} = "True" ]; then
                 precison="int8"
             fi
-            _save_log_path="${_log_path}/cls_cpp_infer_cpu_usemkldnn_${use_mkldnn}_threads_${threads}_precision_${precision}_batchsize_${batch_size}.log"
+            _save_log_path="${_log_path}/cpp_infer_cpu_usemkldnn_${use_mkldnn}_threads_${threads}_precision_${precision}_batchsize_${batch_size}.log"
             command="${generate_yaml_cmd} --type cls --batch_size ${batch_size} --mkldnn ${use_mkldnn} --gpu ${use_gpu} --cpu_thread ${threads} --tensorrt False --precision ${precision} --data_dir ${_img_dir} --benchmark True --cls_model_dir ${cpp_infer_model_dir} --gpu_id ${GPUID}"
             eval $command
```
```diff
@@ -142,7 +149,7 @@ function func_cls_cpp_inference(){
                 continue
             fi
             for batch_size in ${cpp_batch_size_list[*]}; do
-                _save_log_path="${_log_path}/cls_cpp_infer_gpu_usetrt_${use_trt}_precision_${precision}_batchsize_${batch_size}.log"
+                _save_log_path="${_log_path}/cpp_infer_gpu_usetrt_${use_trt}_precision_${precision}_batchsize_${batch_size}.log"
                 command="${generate_yaml_cmd} --type cls --batch_size ${batch_size} --mkldnn False --gpu ${use_gpu} --cpu_thread 1 --tensorrt ${use_trt} --precision ${precision} --data_dir ${_img_dir} --benchmark True --cls_model_dir ${cpp_infer_model_dir} --gpu_id ${GPUID}"
                 eval $command
                 command="${_script} > ${_save_log_path} 2>&1"
```
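All four inference branches (shitu/cls, CPU/GPU) now share one log-name scheme: the old `shitu_`/`cls_` prefixes are dropped, which is safe because logs already live under the per-model, per-mode directory. A small sketch of the resulting name, using illustrative values:

```shell
# illustrative values; the real ones come from the parsed config file
_log_path="./test_tipc/output/ResNet50/cpp_infer"
use_trt="False"; precision="fp32"; batch_size=1
_save_log_path="${_log_path}/cpp_infer_gpu_usetrt_${use_trt}_precision_${precision}_batchsize_${batch_size}.log"
echo "${_save_log_path}"
# -> ./test_tipc/output/ResNet50/cpp_infer/cpp_infer_gpu_usetrt_False_precision_fp32_batchsize_1.log
```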
```diff
@@ -235,18 +242,6 @@ cd ../../../
 # cd ../../
 echo "################### build PaddleClas demo finished ###################"
 
-# set cuda device
-GPUID=$3
-if [ ${#GPUID} -le 0 ]; then
-    env="export CUDA_VISIBLE_DEVICES=0"
-else
-    env="export CUDA_VISIBLE_DEVICES=${GPUID}"
-fi
-set CUDA_VISIBLE_DEVICES
-eval $env
-
 echo "################### run test ###################"
 export Count=0
 IFS="|"
```