PaddlePaddle / PaddleOCR

Commit 057d1b38 (unverified)
Authored Aug 17, 2022 by andyjpaddle; committed by GitHub on Aug 17, 2022.
Merge pull request #7218 from zhengya01/tipc_log
add log_path in result.log
Parents: 8210d717, 9863be35

Showing 8 changed files with 38 additions and 36 deletions (+38 −36):
    test_tipc/common_func.sh                    +3 −2
    test_tipc/test_inference_cpp.sh             +2 −2
    test_tipc/test_inference_python.sh          +5 −4
    test_tipc/test_paddle2onnx.sh               +7 −7
    test_tipc/test_ptq_inference_python.sh      +3 −3
    test_tipc/test_serving_infer_cpp.sh         +4 −4
    test_tipc/test_serving_infer_python.sh      +8 −8
    test_tipc/test_train_inference_python.sh    +6 −6
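The change is mechanical across all eight scripts: status_check() in test_tipc/common_func.sh gains a fifth log_path argument, every call site passes the log file of the step it just ran, and each line appended to the aggregated result log now ends with that path. Roughly, assuming a hypothetical model name and log path:

    Before: Run successfully with command - ch_PP-OCRv3_det - ${cmd}!
    After:  Run successfully with command - ch_PP-OCRv3_det - ${cmd} - test_tipc/output/python_infer_cpu.log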
test_tipc/common_func.sh

@@ -58,10 +58,11 @@ function status_check(){
     run_command=$2
     run_log=$3
     model_name=$4
+    log_path=$5
     if [ $last_status -eq 0 ]; then
-        echo -e "\033[33m Run successfully with command - ${model_name} - ${run_command}!  \033[0m" | tee -a ${run_log}
+        echo -e "\033[33m Run successfully with command - ${model_name} - ${run_command} - ${log_path} \033[0m" | tee -a ${run_log}
     else
-        echo -e "\033[33m Run failed with command - ${model_name} - ${run_command}!  \033[0m" | tee -a ${run_log}
+        echo -e "\033[33m Run failed with command - ${model_name} - ${run_command} - ${log_path} \033[0m" | tee -a ${run_log}
     fi
 }
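For orientation, a minimal sketch of the new five-argument call shape used by every caller below; the command, model name, and paths are hypothetical:

    # $1 exit status, $2 command string, $3 aggregated result log (run_log),
    # $4 model name, $5 per-step log path (the new argument)
    status_check 0 "python3 tools/infer/predict_det.py" \
        "test_tipc/output/results.log" "ch_ppocr_mobile_v2.0_det" \
        "test_tipc/output/python_infer_cpu.log"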
test_tipc/test_inference_cpp.sh

@@ -84,7 +84,7 @@ function func_cpp_inference(){
                     eval $command
                     last_status=${PIPESTATUS[0]}
                     eval "cat ${_save_log_path}"
-                    status_check $last_status "${command}" "${status_log}" "${model_name}"
+                    status_check $last_status "${command}" "${status_log}" "${model_name}" "${_save_log_path}"
                 done
             done
         done

@@ -117,7 +117,7 @@ function func_cpp_inference(){
                 eval $command
                 last_status=${PIPESTATUS[0]}
                 eval "cat ${_save_log_path}"
-                status_check $last_status "${command}" "${status_log}" "${model_name}"
+                status_check $last_status "${command}" "${status_log}" "${model_name}" "${_save_log_path}"
             done
         done
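A side note on the last_status=${PIPESTATUS[0]} idiom visible above: for a plain command it equals $?, but when the eval'd command string ends in a pipeline it reports the first stage rather than the last. A self-contained sketch:

    false | tee /dev/null
    echo "first stage exited with ${PIPESTATUS[0]}"   # prints 1; plain $? would be tee's 0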
test_tipc/test_inference_python.sh

@@ -88,7 +88,7 @@ function func_inference(){
                     eval $command
                     last_status=${PIPESTATUS[0]}
                     eval "cat ${_save_log_path}"
-                    status_check $last_status "${command}" "${status_log}" "${model_name}"
+                    status_check $last_status "${command}" "${status_log}" "${model_name}" "${_save_log_path}"
                 done
             done
         done

@@ -119,7 +119,7 @@ function func_inference(){
                 eval $command
                 last_status=${PIPESTATUS[0]}
                 eval "cat ${_save_log_path}"
-                status_check $last_status "${command}" "${status_log}" "${model_name}"
+                status_check $last_status "${command}" "${status_log}" "${model_name}" "${_save_log_path}"
             done
         done

@@ -146,14 +146,15 @@ if [ ${MODE} = "whole_infer" ]; then
     for infer_model in ${infer_model_dir_list[*]}; do
         # run export
         if [ ${infer_run_exports[Count]} != "null" ]; then
+            _save_log_path="${_log_path}/python_infer_gpu_usetrt_${use_trt}_precision_${precision}_batchsize_${batch_size}_infermodel_${infer_model}.log"
             save_infer_dir=$(dirname $infer_model)
             set_export_weight=$(func_set_params "${export_weight}" "${infer_model}")
             set_save_infer_key=$(func_set_params "${save_infer_key}" "${save_infer_dir}")
-            export_cmd="${python} ${infer_run_exports[Count]} ${set_export_weight} ${set_save_infer_key}"
+            export_cmd="${python} ${infer_run_exports[Count]} ${set_export_weight} ${set_save_infer_key} > ${_save_log_path} 2>&1 "
             echo ${infer_run_exports[Count]}
             eval $export_cmd
             status_export=$?
-            status_check $status_export "${export_cmd}" "${status_log}" "${model_name}"
+            status_check $status_export "${export_cmd}" "${status_log}" "${model_name}" "${_save_log_path}"
         else
             save_infer_dir=${infer_model}
         fi
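The hunk above also shows the pattern used throughout whole_infer mode: the command is assembled as a string with its own > log 2>&1 redirection, run via eval, and the same log path is then handed to status_check. A standalone sketch with hypothetical script and paths:

    _save_log_path="./output/export_step.log"
    export_cmd="python3 tools/export_model.py > ${_save_log_path} 2>&1"
    eval $export_cmd
    status_export=$?
    status_check $status_export "${export_cmd}" "./output/results.log" "demo_model" "${_save_log_path}"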
test_tipc/test_paddle2onnx.sh

@@ -66,7 +66,7 @@ function func_paddle2onnx(){
         trans_model_cmd="${padlle2onnx_cmd} ${set_dirname} ${set_model_filename} ${set_params_filename} ${set_save_model} ${set_opset_version} ${set_enable_onnx_checker} > ${trans_det_log} 2>&1 "
         eval $trans_model_cmd
         last_status=${PIPESTATUS[0]}
-        status_check $last_status "${trans_model_cmd}" "${status_log}" "${model_name}"
+        status_check $last_status "${trans_model_cmd}" "${status_log}" "${model_name}" "${trans_det_log}"
         # trans rec
         set_dirname=$(func_set_params "--model_dir" "${rec_infer_model_dir_value}")
         set_model_filename=$(func_set_params "${model_filename_key}" "${model_filename_value}")

@@ -78,7 +78,7 @@ function func_paddle2onnx(){
         trans_model_cmd="${padlle2onnx_cmd} ${set_dirname} ${set_model_filename} ${set_params_filename} ${set_save_model} ${set_opset_version} ${set_enable_onnx_checker} > ${trans_rec_log} 2>&1 "
         eval $trans_model_cmd
         last_status=${PIPESTATUS[0]}
-        status_check $last_status "${trans_model_cmd}" "${status_log}" "${model_name}"
+        status_check $last_status "${trans_model_cmd}" "${status_log}" "${model_name}" "${trans_rec_log}"
     elif [[ ${model_name} =~ "det" ]]; then
         # trans det
         set_dirname=$(func_set_params "--model_dir" "${det_infer_model_dir_value}")

@@ -91,7 +91,7 @@ function func_paddle2onnx(){
         trans_model_cmd="${padlle2onnx_cmd} ${set_dirname} ${set_model_filename} ${set_params_filename} ${set_save_model} ${set_opset_version} ${set_enable_onnx_checker} > ${trans_det_log} 2>&1 "
         eval $trans_model_cmd
         last_status=${PIPESTATUS[0]}
-        status_check $last_status "${trans_model_cmd}" "${status_log}" "${model_name}"
+        status_check $last_status "${trans_model_cmd}" "${status_log}" "${model_name}" "${trans_det_log}"
     elif [[ ${model_name} =~ "rec" ]]; then
         # trans rec
         set_dirname=$(func_set_params "--model_dir" "${rec_infer_model_dir_value}")

@@ -104,7 +104,7 @@ function func_paddle2onnx(){
         trans_model_cmd="${padlle2onnx_cmd} ${set_dirname} ${set_model_filename} ${set_params_filename} ${set_save_model} ${set_opset_version} ${set_enable_onnx_checker} > ${trans_rec_log} 2>&1 "
         eval $trans_model_cmd
         last_status=${PIPESTATUS[0]}
-        status_check $last_status "${trans_model_cmd}" "${status_log}" "${model_name}"
+        status_check $last_status "${trans_model_cmd}" "${status_log}" "${model_name}" "${trans_rec_log}"
     fi
     # python inference

@@ -127,7 +127,7 @@ function func_paddle2onnx(){
         eval $infer_model_cmd
         last_status=${PIPESTATUS[0]}
         eval "cat ${_save_log_path}"
-        status_check $last_status "${infer_model_cmd}" "${status_log}" "${model_name}"
+        status_check $last_status "${infer_model_cmd}" "${status_log}" "${model_name}" "${_save_log_path}"
     elif [ ${use_gpu} = "True" ] || [ ${use_gpu} = "gpu" ]; then
         _save_log_path="${LOG_PATH}/paddle2onnx_infer_gpu.log"
         set_gpu=$(func_set_params "${use_gpu_key}" "${use_gpu}")

@@ -146,7 +146,7 @@ function func_paddle2onnx(){
         eval $infer_model_cmd
         last_status=${PIPESTATUS[0]}
         eval "cat ${_save_log_path}"
-        status_check $last_status "${infer_model_cmd}" "${status_log}" "${model_name}"
+        status_check $last_status "${infer_model_cmd}" "${status_log}" "${model_name}" "${_save_log_path}"
     else
         echo "Does not support hardware other than CPU and GPU Currently!"
     fi

@@ -158,4 +158,4 @@ echo "################### run test ###################"
 export Count=0
 IFS="|"
-func_paddle2onnx
\ No newline at end of file
+func_paddle2onnx
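The det/rec dispatch in these hunks uses bash's [[ ... =~ ... ]]; because the pattern is quoted it is matched literally, which makes it a substring test. A standalone sketch with a hypothetical model name:

    model_name="ch_PP-OCRv3_det"
    if [[ ${model_name} =~ "det" ]]; then
        echo "translate the detection model"
    elif [[ ${model_name} =~ "rec" ]]; then
        echo "translate the recognition model"
    fi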
test_tipc/test_ptq_inference_python.sh

@@ -84,7 +84,7 @@ function func_inference(){
                     eval $command
                     last_status=${PIPESTATUS[0]}
                     eval "cat ${_save_log_path}"
-                    status_check $last_status "${command}" "${status_log}" "${model_name}"
+                    status_check $last_status "${command}" "${status_log}" "${model_name}" "${_save_log_path}"
                 done
             done
         done

@@ -109,7 +109,7 @@ function func_inference(){
                 eval $command
                 last_status=${PIPESTATUS[0]}
                 eval "cat ${_save_log_path}"
-                status_check $last_status "${command}" "${status_log}" "${model_name}"
+                status_check $last_status "${command}" "${status_log}" "${model_name}" "${_save_log_path}"
             done
         done

@@ -145,7 +145,7 @@ if [ ${MODE} = "whole_infer" ]; then
         echo $export_cmd
         eval $export_cmd
         status_export=$?
-        status_check $status_export "${export_cmd}" "${status_log}" "${model_name}"
+        status_check $status_export "${export_cmd}" "${status_log}" "${model_name}" "${export_log_path}"
     else
         save_infer_dir=${infer_model}
     fi
test_tipc/test_serving_infer_cpp.sh

@@ -83,7 +83,7 @@ function func_serving(){
     trans_model_cmd="${python_list[0]} ${trans_model_py} ${set_dirname} ${set_model_filename} ${set_params_filename} ${set_serving_server} ${set_serving_client} > ${trans_rec_log} 2>&1 "
     eval $trans_model_cmd
     last_status=${PIPESTATUS[0]}
-    status_check $last_status "${trans_model_cmd}" "${status_log}" "${model_name}"
+    status_check $last_status "${trans_model_cmd}" "${status_log}" "${model_name}" "${trans_rec_log}"
     set_image_dir=$(func_set_params "${image_dir_key}" "${image_dir_value}")
     python_list=(${python_list})
     cd ${serving_dir_value}

@@ -95,14 +95,14 @@ function func_serving(){
         web_service_cpp_cmd="nohup ${python_list[0]} ${web_service_py} --model ${det_server_value} ${rec_server_value} ${op_key} ${op_value} ${port_key} ${port_value} > ${server_log_path} 2>&1 &"
         eval $web_service_cpp_cmd
         last_status=${PIPESTATUS[0]}
-        status_check $last_status "${web_service_cpp_cmd}" "${status_log}" "${model_name}"
+        status_check $last_status "${web_service_cpp_cmd}" "${status_log}" "${model_name}" "${server_log_path}"
         sleep 5s
         _save_log_path="${LOG_PATH}/cpp_client_cpu.log"
         cpp_client_cmd="${python_list[0]} ${cpp_client_py} ${det_client_value} ${rec_client_value} > ${_save_log_path} 2>&1"
         eval $cpp_client_cmd
         last_status=${PIPESTATUS[0]}
         eval "cat ${_save_log_path}"
-        status_check $last_status "${cpp_client_cmd}" "${status_log}" "${model_name}"
+        status_check $last_status "${cpp_client_cmd}" "${status_log}" "${model_name}" "${_save_log_path}"
         ps ux | grep -i ${port_value} | awk '{print $2}' | xargs kill -s 9
     else
         server_log_path="${LOG_PATH}/cpp_server_gpu.log"

@@ -114,7 +114,7 @@ function func_serving(){
         eval $cpp_client_cmd
         last_status=${PIPESTATUS[0]}
         eval "cat ${_save_log_path}"
-        status_check $last_status "${cpp_client_cmd}" "${status_log}" "${model_name}"
+        status_check $last_status "${cpp_client_cmd}" "${status_log}" "${model_name}" "${_save_log_path}"
         ps ux | grep -i ${port_value} | awk '{print $2}' | xargs kill -s 9
     fi
 done
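One detail in the context above: python_list=(${python_list}) relies on word splitting to turn the config string into an array, and these runners set IFS="|" beforehand (visible at the end of test_tipc/test_paddle2onnx.sh). A standalone sketch with a hypothetical value:

    IFS="|"
    python_list="python3.7|python3"
    python_list=(${python_list})    # unquoted expansion splits on IFS
    echo "${python_list[0]}"        # prints: python3.7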
test_tipc/test_serving_infer_python.sh

@@ -126,19 +126,19 @@ function func_serving(){
             web_service_cmd="nohup ${python} ${web_service_py} ${web_use_gpu_key}="" ${web_use_mkldnn_key}=${use_mkldnn} ${set_cpu_threads} ${set_det_model_config} ${set_rec_model_config} > ${server_log_path} 2>&1 &"
             eval $web_service_cmd
             last_status=${PIPESTATUS[0]}
-            status_check $last_status "${web_service_cmd}" "${status_log}" "${model_name}"
+            status_check $last_status "${web_service_cmd}" "${status_log}" "${model_name}" "${server_log_path}"
         elif [[ ${model_name} =~ "det" ]]; then
             set_det_model_config=$(func_set_params "${det_server_key}" "${det_server_value}")
             web_service_cmd="nohup ${python} ${web_service_py} ${web_use_gpu_key}="" ${web_use_mkldnn_key}=${use_mkldnn} ${set_cpu_threads} ${set_det_model_config} > ${server_log_path} 2>&1 &"
             eval $web_service_cmd
             last_status=${PIPESTATUS[0]}
-            status_check $last_status "${web_service_cmd}" "${status_log}" "${model_name}"
+            status_check $last_status "${web_service_cmd}" "${status_log}" "${model_name}" "${server_log_path}"
         elif [[ ${model_name} =~ "rec" ]]; then
             set_rec_model_config=$(func_set_params "${rec_server_key}" "${rec_server_value}")
             web_service_cmd="nohup ${python} ${web_service_py} ${web_use_gpu_key}="" ${web_use_mkldnn_key}=${use_mkldnn} ${set_cpu_threads} ${set_rec_model_config} > ${server_log_path} 2>&1 &"
             eval $web_service_cmd
             last_status=${PIPESTATUS[0]}
-            status_check $last_status "${web_service_cmd}" "${status_log}" "${model_name}"
+            status_check $last_status "${web_service_cmd}" "${status_log}" "${model_name}" "${server_log_path}"
         fi
         sleep 2s
         for pipeline in ${pipeline_py[*]}; do

@@ -147,7 +147,7 @@ function func_serving(){
             eval $pipeline_cmd
             last_status=${PIPESTATUS[0]}
             eval "cat ${_save_log_path}"
-            status_check $last_status "${pipeline_cmd}" "${status_log}" "${model_name}"
+            status_check $last_status "${pipeline_cmd}" "${status_log}" "${model_name}" "${_save_log_path}"
             sleep 2s
         done
         ps ux | grep -E 'web_service' | awk '{print $2}' | xargs kill -s 9

@@ -177,19 +177,19 @@ function func_serving(){
             web_service_cmd="nohup ${python} ${web_service_py} ${set_tensorrt} ${set_precision} ${set_det_model_config} ${set_rec_model_config} > ${server_log_path} 2>&1 &"
             eval $web_service_cmd
             last_status=${PIPESTATUS[0]}
-            status_check $last_status "${web_service_cmd}" "${status_log}" "${model_name}"
+            status_check $last_status "${web_service_cmd}" "${status_log}" "${model_name}" "${server_log_path}"
         elif [[ ${model_name} =~ "det" ]]; then
             set_det_model_config=$(func_set_params "${det_server_key}" "${det_server_value}")
             web_service_cmd="nohup ${python} ${web_service_py} ${set_tensorrt} ${set_precision} ${set_det_model_config} > ${server_log_path} 2>&1 &"
             eval $web_service_cmd
             last_status=${PIPESTATUS[0]}
-            status_check $last_status "${web_service_cmd}" "${status_log}" "${model_name}"
+            status_check $last_status "${web_service_cmd}" "${status_log}" "${model_name}" "${server_log_path}"
         elif [[ ${model_name} =~ "rec" ]]; then
             set_rec_model_config=$(func_set_params "${rec_server_key}" "${rec_server_value}")
             web_service_cmd="nohup ${python} ${web_service_py} ${set_tensorrt} ${set_precision} ${set_rec_model_config} > ${server_log_path} 2>&1 &"
             eval $web_service_cmd
             last_status=${PIPESTATUS[0]}
-            status_check $last_status "${web_service_cmd}" "${status_log}" "${model_name}"
+            status_check $last_status "${web_service_cmd}" "${status_log}" "${model_name}" "${server_log_path}"
         fi
         sleep 2s
         for pipeline in ${pipeline_py[*]}; do

@@ -198,7 +198,7 @@ function func_serving(){
             eval $pipeline_cmd
             last_status=${PIPESTATUS[0]}
             eval "cat ${_save_log_path}"
-            status_check $last_status "${pipeline_cmd}" "${status_log}" "${model_name}"
+            status_check $last_status "${pipeline_cmd}" "${status_log}" "${model_name}" "${_save_log_path}"
             sleep 2s
         done
         ps ux | grep -E 'web_service' | awk '{print $2}' | xargs kill -s 9
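Each serving test starts the web service detached, captures everything it writes to server_log_path (now also surfaced through status_check), and finally kills it by scanning the process list. A condensed sketch of that lifecycle, with hypothetical script and paths:

    nohup python3 web_service.py > ./log/python_server_cpu.log 2>&1 &
    sleep 2s                      # give the server time to come up
    # ... run the pipeline clients, status_check each one ...
    ps ux | grep -E 'web_service' | awk '{print $2}' | xargs kill -s 9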
test_tipc/test_train_inference_python.sh

@@ -133,7 +133,7 @@ function func_inference(){
                     eval $command
                     last_status=${PIPESTATUS[0]}
                     eval "cat ${_save_log_path}"
-                    status_check $last_status "${command}" "${status_log}" "${model_name}"
+                    status_check $last_status "${command}" "${status_log}" "${model_name}" "${_save_log_path}"
                 done
             done
         done

@@ -164,7 +164,7 @@ function func_inference(){
                 eval $command
                 last_status=${PIPESTATUS[0]}
                 eval "cat ${_save_log_path}"
-                status_check $last_status "${command}" "${status_log}" "${model_name}"
+                status_check $last_status "${command}" "${status_log}" "${model_name}" "${_save_log_path}"
             done
         done

@@ -201,7 +201,7 @@ if [ ${MODE} = "whole_infer" ]; then
         echo $export_cmd
         eval $export_cmd
         status_export=$?
-        status_check $status_export "${export_cmd}" "${status_log}" "${model_name}"
+        status_check $status_export "${export_cmd}" "${status_log}" "${model_name}" "${export_log_path}"
     else
         save_infer_dir=${infer_model}
     fi

@@ -298,7 +298,7 @@ else
         # run train
         eval $cmd
         eval "cat ${save_log}/train.log >> ${save_log}.log"
-        status_check $? "${cmd}" "${status_log}" "${model_name}"
+        status_check $? "${cmd}" "${status_log}" "${model_name}" "${save_log}.log"
         set_eval_pretrain=$(func_set_params "${pretrain_model_key}" "${save_log}/${train_model_name}")

@@ -309,7 +309,7 @@ else
             eval_log_path="${LOG_PATH}/${trainer}_gpus_${gpu}_autocast_${autocast}_nodes_${nodes}_eval.log"
             eval_cmd="${python} ${eval_py} ${set_eval_pretrain} ${set_use_gpu} ${set_eval_params1} > ${eval_log_path} 2>&1 "
             eval $eval_cmd
-            status_check $? "${eval_cmd}" "${status_log}" "${model_name}"
+            status_check $? "${eval_cmd}" "${status_log}" "${model_name}" "${eval_log_path}"
         fi
         # run export model
         if [ ${run_export} != "null" ]; then

@@ -320,7 +320,7 @@ else
             set_save_infer_key=$(func_set_params "${save_infer_key}" "${save_infer_path}")
             export_cmd="${python} ${run_export} ${set_export_weight} ${set_save_infer_key} > ${export_log_path} 2>&1 "
             eval $export_cmd
-            status_check $? "${export_cmd}" "${status_log}" "${model_name}"
+            status_check $? "${export_cmd}" "${status_log}" "${model_name}" "${export_log_path}"
             #run inference
             eval $env
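With the log path on every line, a failure recorded in the aggregated result log points straight at the step's own output. An illustrative query, paths hypothetical:

    grep "Run failed" test_tipc/output/results.log
    # Run failed with command - ch_PP-OCRv3_det - ${cmd} - test_tipc/output/python_infer_gpu.log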