weixin_41840029 / PaddleOCR
Forked from PaddlePaddle / PaddleOCR (in sync with the fork source)
Commit e807027a
Authored on Oct 19, 2021 by LDOUBLEV
add mkldnn precision to test.sh
Parent: bd9f35d5
Showing 1 changed file with 22 additions and 16 deletions.
PTDN/test_train_inference_python.sh @ e807027a
```diff
@@ -141,22 +141,28 @@ function func_inference(){
                 fi
                 for threads in ${cpu_threads_list[*]}; do
                     for batch_size in ${batch_size_list[*]}; do
-                        precison="fp32"
-                        if [ ${use_mkldnn} = "False" ] && [ ${_flag_quant} = "True" ]; then
-                            precision="int8"
-                        fi
-                        _save_log_path="${_log_path}/python_infer_cpu_usemkldnn_${use_mkldnn}_threads_${threads}_precision_${precision}_batchsize_${batch_size}.log"
-                        set_infer_data=$(func_set_params "${image_dir_key}" "${_img_dir}")
-                        set_benchmark=$(func_set_params "${benchmark_key}" "${benchmark_value}")
-                        set_batchsize=$(func_set_params "${batch_size_key}" "${batch_size}")
-                        set_cpu_threads=$(func_set_params "${cpu_threads_key}" "${threads}")
-                        set_model_dir=$(func_set_params "${infer_model_key}" "${_model_dir}")
-                        set_infer_params1=$(func_set_params "${infer_key1}" "${infer_value1}")
-                        command="${_python} ${_script} ${use_gpu_key}=${use_gpu} ${use_mkldnn_key}=${use_mkldnn} ${set_cpu_threads} ${set_model_dir} ${set_batchsize} ${set_infer_data} ${set_benchmark} ${set_infer_params1} > ${_save_log_path} 2>&1 "
-                        eval $command
-                        last_status=${PIPESTATUS[0]}
-                        eval "cat ${_save_log_path}"
-                        status_check $last_status "${command}" "${status_log}"
+                        for precision in ${precision_list[*]}; do
+                            if [ ${use_mkldnn} = "False" ] && [ ${precision} = "fp16" ]; then
+                                continue
+                            fi # skip when enable fp16 but disable mkldnn
+                            if [ ${_flag_quant} = "True" ] && [ ${precision} != "int8" ]; then
+                                continue
+                            fi # skip when quant model inference but precision is not int8
+                            set_precision=$(func_set_params "${precision_key}" "${precision}")
+                            _save_log_path="${_log_path}/python_infer_cpu_usemkldnn_${use_mkldnn}_threads_${threads}_precision_${precision}_batchsize_${batch_size}.log"
+                            set_infer_data=$(func_set_params "${image_dir_key}" "${_img_dir}")
+                            set_benchmark=$(func_set_params "${benchmark_key}" "${benchmark_value}")
+                            set_batchsize=$(func_set_params "${batch_size_key}" "${batch_size}")
+                            set_cpu_threads=$(func_set_params "${cpu_threads_key}" "${threads}")
+                            set_model_dir=$(func_set_params "${infer_model_key}" "${_model_dir}")
+                            set_infer_params1=$(func_set_params "${infer_key1}" "${infer_value1}")
+                            command="${_python} ${_script} ${use_gpu_key}=${use_gpu} ${use_mkldnn_key}=${use_mkldnn} ${set_cpu_threads} ${set_model_dir} ${set_batchsize} ${set_infer_data} ${set_benchmark} ${set_precision} ${set_infer_params1} > ${_save_log_path} 2>&1 "
+                            eval $command
+                            last_status=${PIPESTATUS[0]}
+                            eval "cat ${_save_log_path}"
+                            status_check $last_status "${command}" "${status_log}"
+                        done
                     done
                 done
             done
```
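The new loop leans on two helpers, `func_set_params` and `status_check`, which are defined elsewhere in PTDN/test_train_inference_python.sh and are not part of this hunk. Below is a minimal sketch of the contract the loop relies on, reconstructed from the call sites rather than copied from the committed definitions: `func_set_params` echoes a single `key=value` token and degrades to a blank when either side is null, so unconfigured options simply drop out of the assembled command, while `status_check` records pass/fail for the command in the status log based on its exit code.

```bash
# Sketches of the two helpers the hunk calls; assumptions based on the
# call sites above, not the committed definitions.

# Render one "key=value" command-line token; emit a blank instead when
# the key or value is null/empty so optional flags vanish cleanly.
function func_set_params(){
    key=$1
    value=$2
    if [ -z "${key}" ] || [ "${key}" = "null" ]; then
        echo " "
    elif [ -z "${value}" ] || [ "${value}" = "null" ]; then
        echo " "
    else
        echo "${key}=${value}"
    fi
}

# Append a pass/fail line for an executed command to the status log,
# keyed on the command's exit code.
function status_check(){
    last_status=$1
    run_command=$2
    run_log=$3
    if [ ${last_status} -eq 0 ]; then
        echo "Run successfully with command - ${run_command}!" | tee -a ${run_log}
    else
        echo "Run failed with command - ${run_command}!" | tee -a ${run_log}
    fi
}
```

Because each option arrives as a pre-rendered `key=value` string, the whole invocation can be assembled into one string and run with `eval`, and `${PIPESTATUS[0]}` captures its exit status before the log is echoed back with `cat`.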
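For concreteness, one iteration of the new loop (say use_mkldnn=True, threads=6, precision=fp16, batch_size=1) could expand to an invocation along these lines. The flag names are assumptions based on the `*_key` entries that the test config supplies; the script, model, and data paths are illustrative placeholders, not values shown on this page:

```bash
python3 tools/infer/predict_det.py --use_gpu=False --enable_mkldnn=True \
    --cpu_threads=6 --det_model_dir=./inference_model/ --rec_batch_num=1 \
    --image_dir=./test_images/ --benchmark=True --precision=fp16 \
    > ./log/python_infer_cpu_usemkldnn_True_threads_6_precision_fp16_batchsize_1.log 2>&1
```

Note how the log filename encodes every swept dimension (mkldnn, threads, precision, batch size), so each `status_check` result can be traced back to the exact configuration that produced it.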