PaddlePaddle / PaddleOCR
Commit f5467804 (unverified)

Authored Aug 02, 2022 by andyjpaddle; committed by GitHub on Aug 02, 2022.

Merge pull request #7072 from andyjpaddle/fix_infer_log

[TIPC] fix tipc infer log

Parents: c1f9e807, f1d5d539
Showing 1 changed file with 6 additions and 4 deletions:

test_tipc/test_train_inference_python.sh (+6 -4)
test_tipc/test_train_inference_python.sh (view file @ f5467804)
@@ -101,6 +101,7 @@ function func_inference(){
     _log_path=$4
     _img_dir=$5
     _flag_quant=$6
+    _gpu=$7
     # inference
     for use_gpu in ${use_gpu_list[*]}; do
         if [ ${use_gpu} = "False" ] || [ ${use_gpu} = "cpu" ]; then
@@ -119,7 +120,7 @@ function func_inference(){
                 fi
                 # skip when quant model inference but precision is not int8
                 set_precision=$(func_set_params "${precision_key}" "${precision}")
-                _save_log_path="${_log_path}/python_infer_cpu_usemkldnn_${use_mkldnn}_threads_${threads}_precision_${precision}_batchsize_${batch_size}.log"
+                _save_log_path="${_log_path}/python_infer_cpu_gpus_${_gpu}_usemkldnn_${use_mkldnn}_threads_${threads}_precision_${precision}_batchsize_${batch_size}.log"
                 set_infer_data=$(func_set_params "${image_dir_key}" "${_img_dir}")
                 set_benchmark=$(func_set_params "${benchmark_key}" "${benchmark_value}")
                 set_batchsize=$(func_set_params "${batch_size_key}" "${batch_size}")
@@ -150,7 +151,7 @@ function func_inference(){
                     continue
                 fi
                 for batch_size in ${batch_size_list[*]}; do
-                    _save_log_path="${_log_path}/python_infer_gpu_usetrt_${use_trt}_precision_${precision}_batchsize_${batch_size}.log"
+                    _save_log_path="${_log_path}/python_infer_gpu_gpus_${_gpu}_usetrt_${use_trt}_precision_${precision}_batchsize_${batch_size}.log"
                     set_infer_data=$(func_set_params "${image_dir_key}" "${_img_dir}")
                     set_benchmark=$(func_set_params "${benchmark_key}" "${benchmark_value}")
                     set_batchsize=$(func_set_params "${batch_size_key}" "${batch_size}")
@@ -184,6 +185,7 @@ if [ ${MODE} = "whole_infer" ]; then
     # set CUDA_VISIBLE_DEVICES
     eval $env
     export Count=0
+    gpu=0
     IFS="|"
     infer_run_exports=(${infer_export_list})
     infer_quant_flag=(${infer_is_quant})
@@ -205,7 +207,7 @@ if [ ${MODE} = "whole_infer" ]; then
         fi
         #run inference
         is_quant=${infer_quant_flag[Count]}
-        func_inference "${python}" "${inference_py}" "${save_infer_dir}" "${LOG_PATH}" "${infer_img_dir}" ${is_quant}
+        func_inference "${python}" "${inference_py}" "${save_infer_dir}" "${LOG_PATH}" "${infer_img_dir}" ${is_quant} "${gpu}"
         Count=$(($Count + 1))
     done
 else
@@ -328,7 +330,7 @@ else
         else
             infer_model_dir=${save_infer_path}
         fi
-        func_inference "${python}" "${inference_py}" "${infer_model_dir}" "${LOG_PATH}" "${train_infer_img_dir}" "${flag_quant}"
+        func_inference "${python}" "${inference_py}" "${infer_model_dir}" "${LOG_PATH}" "${train_infer_img_dir}" "${flag_quant}" "${gpu}"
         eval "unset CUDA_VISIBLE_DEVICES"
     fi
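Taken together, the patch threads a GPU id into func_inference as a seventh positional argument and inserts a gpus_${_gpu}_ segment into both the CPU and GPU inference log filenames, so logs from runs pinned to different devices no longer overwrite each other. Below is a minimal runnable sketch of the new naming; the helper name func_inference_log_demo and the sample values for use_mkldnn, threads, precision, and batch_size are illustrative assumptions, not part of the diff.

```bash
#!/bin/bash
# Minimal sketch of the patched log naming, NOT the real TIPC harness.
# Only the _gpu plumbing and the filename pattern come from the diff above;
# all concrete values here are made-up examples.

func_inference_log_demo() {
    _log_path=$4
    _img_dir=$5
    _flag_quant=$6
    _gpu=$7   # new 7th positional argument introduced by this commit
    # Assumed example values for the loop variables of the real script:
    use_mkldnn=True; threads=6; precision=fp32; batch_size=1
    # New CPU pattern: a "gpus_${_gpu}_" segment follows "python_infer_cpu_"
    _save_log_path="${_log_path}/python_infer_cpu_gpus_${_gpu}_usemkldnn_${use_mkldnn}_threads_${threads}_precision_${precision}_batchsize_${batch_size}.log"
    echo "${_save_log_path}"
}

gpu=0   # the patch initializes this once in "whole_infer" mode
func_inference_log_demo python tools/infer.py ./inference ./log ./imgs False "${gpu}"
# prints: ./log/python_infer_cpu_gpus_0_usemkldnn_True_threads_6_precision_fp32_batchsize_1.log
```

Before the change, the same invocation would produce python_infer_cpu_usemkldnn_True_..., identical for every device, which appears to be the log-collision the PR fixes.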