weixin_41840029 / PaddleOCR (forked from PaddlePaddle / PaddleOCR)
Commit 6f1a1b52
Authored Feb 10, 2022 by LDOUBLEV

Merge branch 'dygraph' of https://github.com/PaddlePaddle/PaddleOCR into dygraph

Parents: 2fb88e66, 63062602
Showing 5 changed files with 332 additions and 5 deletions (+332 -5)
test_tipc/benchmark_train.sh                                +258 -0
test_tipc/configs/det_mv3_db_v2_0/train_infer_python.txt      +8 -2
test_tipc/docs/benchmark_train.md                            +53 -0
test_tipc/prepare.sh                                         +11 -2
tools/program.py                                              +2 -1
test_tipc/benchmark_train.sh (new file, mode 100644)
#!/bin/bash
source test_tipc/common_func.sh

# set env
python=python
export model_branch=`git symbolic-ref HEAD 2>/dev/null | cut -d "/" -f 3`
export model_commit=$(git log|head -n1|awk '{print $2}')
export str_tmp=$(echo `pip list|grep paddlepaddle-gpu|awk -F ' ' '{print $2}'`)
export frame_version=${str_tmp%%.post*}
export frame_commit=$(echo `${python} -c "import paddle;print(paddle.version.commit)"`)

# run benchmark sh
# Usage:
# bash run_benchmark_train.sh config.txt params
# or
# bash run_benchmark_train.sh config.txt

function func_parser_params(){
    strs=$1
    IFS="="
    array=(${strs})
    tmp=${array[1]}
    echo ${tmp}
}

function func_sed_params(){
    filename=$1
    line=$2
    param_value=$3
    params=`sed -n "${line}p" $filename`
    IFS=":"
    array=(${params})
    key=${array[0]}
    value=${array[1]}
    if [[ $value =~ 'benchmark_train' ]];then
        IFS='='
        _val=(${value})
        param_value="${_val[0]}=${param_value}"
    fi
    new_params="${key}:${param_value}"
    IFS=";"
    cmd="sed -i '${line}s/.*/${new_params}/' '${filename}'"
    eval $cmd
}

function set_gpu_id(){
    string=$1
    _str=${string:1:6}
    IFS="C"
    arr=(${_str})
    M=${arr[0]}
    P=${arr[1]}
    gn=`expr $P - 1`
    gpu_num=`expr $gn / $M`
    seq=`seq -s "," 0 $gpu_num`
    echo $seq
}

function get_repo_name(){
    IFS=";"
    cur_dir=$(pwd)
    IFS="/"
    arr=(${cur_dir})
    echo ${arr[-1]}
}

FILENAME=$1
# copy FILENAME as new
new_filename="./test_tipc/benchmark_train.txt"
cmd=`yes|cp $FILENAME $new_filename`
FILENAME=$new_filename
# MODE must be one of ['benchmark_train']
MODE=$2
PARAMS=$3
# bash test_tipc/benchmark_train.sh test_tipc/configs/det_mv3_db_v2_0/train_benchmark.txt benchmark_train dynamic_bs8_null_DP_N1C1
IFS=$'\n'
# parser params from train_benchmark.txt
dataline=`cat $FILENAME`
# parser params
IFS=$'\n'
lines=(${dataline})
model_name=$(func_parser_value "${lines[1]}")

# get the line number of train_benchmark_params
line_num=`grep -n "train_benchmark_params" $FILENAME | cut -d ":" -f 1`
# for train log parser
batch_size=$(func_parser_value "${lines[line_num]}")
line_num=`expr $line_num + 1`
fp_items=$(func_parser_value "${lines[line_num]}")
line_num=`expr $line_num + 1`
epoch=$(func_parser_value "${lines[line_num]}")

line_num=`expr $line_num + 1`
profile_option_key=$(func_parser_key "${lines[line_num]}")
profile_option_params=$(func_parser_value "${lines[line_num]}")
profile_option="${profile_option_key}:${profile_option_params}"

line_num=`expr $line_num + 1`
flags_value=$(func_parser_value "${lines[line_num]}")
# set flags
IFS=";"
flags_list=(${flags_value})
for _flag in ${flags_list[*]}; do
    cmd="export ${_flag}"
    eval $cmd
done

# set log_name
repo_name=$(get_repo_name)
SAVE_LOG=${BENCHMARK_LOG_DIR:-$(pwd)}   # */benchmark_log
mkdir -p "${SAVE_LOG}/benchmark_log/"
status_log="${SAVE_LOG}/benchmark_log/results.log"

# The number of lines in which train params can be replaced.
line_python=3
line_gpuid=4
line_precision=6
line_epoch=7
line_batchsize=9
line_profile=13
line_eval_py=24
line_export_py=30

func_sed_params "$FILENAME" "${line_eval_py}" "null"
func_sed_params "$FILENAME" "${line_export_py}" "null"
func_sed_params "$FILENAME" "${line_python}" "$python"

# if params
if [ ! -n "$PARAMS" ] ;then
    # PARAMS input is not a word.
    IFS="|"
    batch_size_list=(${batch_size})
    fp_items_list=(${fp_items})
    device_num_list=(N1C4)
    run_mode="DP"
else
    # parser params from input: modeltype_bs${bs_item}_${fp_item}_${run_mode}_${device_num}
    IFS="_"
    params_list=(${PARAMS})
    model_type=${params_list[0]}
    batch_size=${params_list[1]}
    batch_size=`echo ${batch_size} | tr -cd "[0-9]"`
    precision=${params_list[2]}
    # run_process_type=${params_list[3]}
    run_mode=${params_list[3]}
    device_num=${params_list[4]}
    IFS=";"

    if [ ${precision} = "null" ];then
        precision="fp32"
    fi

    fp_items_list=($precision)
    batch_size_list=($batch_size)
    device_num_list=($device_num)
fi

IFS="|"
for batch_size in ${batch_size_list[*]}; do
    for precision in ${fp_items_list[*]}; do
        for device_num in ${device_num_list[*]}; do
            # sed batchsize and precision
            func_sed_params "$FILENAME" "${line_precision}" "$precision"
            func_sed_params "$FILENAME" "${line_batchsize}" "$MODE=$batch_size"
            func_sed_params "$FILENAME" "${line_epoch}" "$MODE=$epoch"
            gpu_id=$(set_gpu_id $device_num)

            if [ ${#gpu_id} -le 1 ];then
                run_process_type="SingleP"
                log_path="$SAVE_LOG/profiling_log"
                mkdir -p $log_path
                log_name="${repo_name}_${model_name}_bs${batch_size}_${precision}_${run_process_type}_${run_mode}_${device_num}_profiling"
                func_sed_params "$FILENAME" "${line_gpuid}" "0"  # sed used gpu_id
                # set profile_option params
                tmp=`sed -i "${line_profile}s/.*/${profile_option}/" "${FILENAME}"`

                # run test_train_inference_python.sh
                cmd="bash test_tipc/test_train_inference_python.sh ${FILENAME} benchmark_train > ${log_path}/${log_name} 2>&1 "
                echo $cmd
                eval $cmd
                eval "cat ${log_path}/${log_name}"

                # without profile
                log_path="$SAVE_LOG/train_log"
                speed_log_path="$SAVE_LOG/index"
                mkdir -p $log_path
                mkdir -p $speed_log_path
                log_name="${repo_name}_${model_name}_bs${batch_size}_${precision}_${run_process_type}_${run_mode}_${device_num}_log"
                speed_log_name="${repo_name}_${model_name}_bs${batch_size}_${precision}_${run_process_type}_${run_mode}_${device_num}_speed"
                func_sed_params "$FILENAME" "${line_profile}" "null"  # sed profile_id as null
                cmd="bash test_tipc/test_train_inference_python.sh ${FILENAME} benchmark_train > ${log_path}/${log_name} 2>&1 "
                echo $cmd
                job_bt=`date '+%Y%m%d%H%M%S'`
                eval $cmd
                job_et=`date '+%Y%m%d%H%M%S'`
                export model_run_time=$((${job_et}-${job_bt}))
                eval "cat ${log_path}/${log_name}"

                # parser log
                _model_name="${model_name}_bs${batch_size}_${precision}_${run_process_type}_${run_mode}"
                cmd="${python} ${BENCHMARK_ROOT}/scripts/analysis.py --filename ${log_path}/${log_name} \
                        --speed_log_file '${speed_log_path}/${speed_log_name}' \
                        --model_name ${_model_name} \
                        --base_batch_size ${batch_size} \
                        --run_mode ${run_mode} \
                        --run_process_type ${run_process_type} \
                        --fp_item ${precision} \
                        --keyword ips: \
                        --skip_steps 2 \
                        --device_num ${device_num} \
                        --speed_unit samples/s \
                        --convergence_key loss: "
                echo $cmd
                eval $cmd
                last_status=${PIPESTATUS[0]}
                status_check $last_status "${cmd}" "${status_log}"
            else
                IFS=";"
                unset_env=`unset CUDA_VISIBLE_DEVICES`
                run_process_type="MultiP"
                log_path="$SAVE_LOG/train_log"
                speed_log_path="$SAVE_LOG/index"
                mkdir -p $log_path
                mkdir -p $speed_log_path
                log_name="${repo_name}_${model_name}_bs${batch_size}_${precision}_${run_process_type}_${run_mode}_${device_num}_log"
                speed_log_name="${repo_name}_${model_name}_bs${batch_size}_${precision}_${run_process_type}_${run_mode}_${device_num}_speed"
                func_sed_params "$FILENAME" "${line_gpuid}" "$gpu_id"  # sed used gpu_id
                func_sed_params "$FILENAME" "${line_profile}" "null"  # sed --profile_option as null
                cmd="bash test_tipc/test_train_inference_python.sh ${FILENAME} benchmark_train > ${log_path}/${log_name} 2>&1 "
                echo $cmd
                job_bt=`date '+%Y%m%d%H%M%S'`
                eval $cmd
                job_et=`date '+%Y%m%d%H%M%S'`
                export model_run_time=$((${job_et}-${job_bt}))
                eval "cat ${log_path}/${log_name}"
                # parser log
                _model_name="${model_name}_bs${batch_size}_${precision}_${run_process_type}_${run_mode}"
                cmd="${python} ${BENCHMARK_ROOT}/scripts/analysis.py --filename ${log_path}/${log_name} \
                        --speed_log_file '${speed_log_path}/${speed_log_name}' \
                        --model_name ${_model_name} \
                        --base_batch_size ${batch_size} \
                        --run_mode ${run_mode} \
                        --run_process_type ${run_process_type} \
                        --fp_item ${precision} \
                        --keyword ips: \
                        --skip_steps 2 \
                        --device_num ${device_num} \
                        --speed_unit images/s \
                        --convergence_key loss: "
                echo $cmd
                eval $cmd
                last_status=${PIPESTATUS[0]}
                status_check $last_status "${cmd}" "${status_log}"
            fi
        done
    done
done
\ No newline at end of file
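The device tag consumed by `set_gpu_id` above encodes node and GPU counts (e.g. N1C4 = 1 node, 4 GPUs in total). A minimal standalone sketch of that derivation, mirroring the script's arithmetic (the variable names here are illustrative, not part of the diff):

```shell
# Derive a GPU id list from a device tag like "N1C4" (1 node, 4 GPUs).
# Mirrors set_gpu_id: gpu_num = (C - 1) / N, then seq 0..gpu_num.
device_num="N1C4"
_str=${device_num:1:6}           # strip leading "N" -> "1C4"
IFS="C" read -r M P <<< "$_str"  # M = node count, P = total GPU count
gpu_num=$(( (P - 1) / M ))
gpu_id=$(seq -s "," 0 $gpu_num)  # comma-separated id list for CUDA_VISIBLE_DEVICES
echo "$gpu_id"
```

For N1C1 this yields a single id, which is how the script decides between the SingleP and MultiP branches (`${#gpu_id} -le 1`).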
test_tipc/configs/det_mv3_db_v2.0/train_infer_python.txt → test_tipc/configs/det_mv3_db_v2_0/train_infer_python.txt (renamed)
 ===========================train_params===========================
-model_name:det_mv3_db_v2.0
+model_name:det_mv3_db_v2_0
 python:python3.7
 gpu_list:0|0,1
 Global.use_gpu:True|True
...
@@ -48,4 +48,10 @@ inference:tools/infer/predict_det.py
 --image_dir:./inference/ch_det_data_50/all-sum-510/
 null:null
 --benchmark:True
 null:null
\ No newline at end of file
+===========================train_benchmark_params==========================
+batch_size:8|16
+fp_items:fp32|fp16
+epoch:2
+--profiler_options:batch_range=[10,20];state=GPU;tracer_option=Default;profile_path=model.profile
+flags:FLAGS_eager_delete_tensor_gb=0.0;FLAGS_fraction_of_gpu_memory_to_use=0.98;FLAGS_conv_workspace_size_limit=4096
\ No newline at end of file
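These `key:value` lines are read by `func_parser_value`, which is defined in test_tipc/common_func.sh and not shown in this diff. A sketch of the split-on-`:` behavior its call sites imply (the function body below is an assumption, not the actual implementation):

```shell
# Assumed behavior of func_parser_value (defined in test_tipc/common_func.sh,
# not part of this diff): return the value part of a "key:value" config line.
func_parser_value(){
    strs=$1
    IFS=":"
    array=(${strs})
    echo ${array[1]}
}

batch_size=$(func_parser_value "batch_size:8|16")
fp_items=$(func_parser_value "fp_items:fp32|fp16")
echo "$batch_size $fp_items"
```

benchmark_train.sh then splits these values again on `|` to build its batch-size and precision sweep lists.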
test_tipc/docs/benchmark_train.md (new file, mode 100644)
# TIPC Linux Benchmark Test Documentation

This document describes the Benchmark test. The main program of the benchmark test is `benchmark_train.sh`, used to verify and monitor model training performance.

# 1. Test procedure

## 1.1 Prepare data and set up the environment

Run `test_tipc/prepare.sh` to prepare the training data and install the environment:

```shell
# Usage: bash test_tipc/prepare.sh train_benchmark.txt mode
bash test_tipc/prepare.sh test_tipc/configs/det_mv3_db_v2_0/train_benchmark.txt benchmark_train
```

## 1.2 Function test

Run `test_tipc/benchmark_train.sh` to train the model and parse the logs:

```shell
# Usage: bash test_tipc/benchmark_train.sh train_benchmark.txt mode
bash test_tipc/benchmark_train.sh test_tipc/configs/det_mv3_db_v2_0/train_infer_python.txt benchmark_train
```

`test_tipc/benchmark_train.sh` also accepts an optional third argument to run only one training configuration, for example:

```shell
# Usage: bash test_tipc/benchmark_train.sh train_benchmark.txt mode params
bash test_tipc/benchmark_train.sh test_tipc/configs/det_mv3_db_v2_0/train_infer_python.txt benchmark_train dynamic_bs8_fp32_DP_N1C1
```

Here `dynamic_bs8_fp32_DP_N1C1` is the third argument passed to test_tipc/benchmark_train.sh, with the format
`${modeltype}_${batch_size}_${fp_item}_${run_mode}_${device_num}`
It encodes the model type, the batch size, the training precision (e.g. fp32, fp16), the distributed run mode, and the machine configuration used for distributed training, such as single machine, single GPU (N1C1).
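As a sketch, the way such an identifier decomposes (splitting on `_`, as benchmark_train.sh does; the variable names are illustrative):

```shell
# Split a run identifier like dynamic_bs8_fp32_DP_N1C1 into its fields.
PARAMS="dynamic_bs8_fp32_DP_N1C1"
IFS="_" read -r model_type batch_size precision run_mode device_num <<< "$PARAMS"
batch_size=$(echo "$batch_size" | tr -cd "0-9")   # strip the "bs" prefix -> "8"
echo "$model_type $batch_size $precision $run_mode $device_num"
```

If the precision field is `null`, the script falls back to fp32 before building its sweep lists.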
# 2. Log output

After the run, the training logs and the parsed results are saved. With the `test_tipc/configs/det_mv3_db_v2_0/train_benchmark.txt` parameter file, the parsed training log looks like:

```
{"model_branch": "dygaph", "model_commit": "7c39a1996b19087737c05d883fd346d2f39dbcc0", "model_name": "det_mv3_db_v2_0_bs8_fp32_SingleP_DP", "batch_size": 8, "fp_item": "fp32", "run_process_type": "SingleP", "run_mode": "DP", "convergence_value": "5.413110", "convergence_key": "loss:", "ips": 19.333, "speed_unit": "samples/s", "device_num": "N1C1", "model_run_time": "0", "frame_commit": "8cc09552473b842c651ead3b9848d41827a3dbab", "frame_version": "0.0.0"}
```

The training logs and parsed results are saved under the benchmark_log directory, organized as follows:

```
train_log/
├── index
│   ├── PaddleOCR_det_mv3_db_v2_0_bs8_fp32_SingleP_DP_N1C1_speed
│   └── PaddleOCR_det_mv3_db_v2_0_bs8_fp32_SingleP_DP_N1C4_speed
├── profiling_log
│   └── PaddleOCR_det_mv3_db_v2_0_bs8_fp32_SingleP_DP_N1C1_profiling
└── train_log
    ├── PaddleOCR_det_mv3_db_v2_0_bs8_fp32_SingleP_DP_N1C1_log
    └── PaddleOCR_det_mv3_db_v2_0_bs8_fp32_SingleP_DP_N1C4_log
```
test_tipc/prepare.sh

...
@@ -20,6 +20,15 @@ model_name=$(func_parser_value "${lines[1]}")
 trainer_list=$(func_parser_value "${lines[14]}")
+if [ ${MODE} = "benchmark_train" ];then
+    pip install -r requirements.txt
+    if [[ ${model_name} =~ "det_mv3_db_v2_0" ]];then
+        rm -rf ./train_data/icdar2015
+        wget -nc -P ./pretrain_models/ https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV3_large_x0_5_pretrained.pdparams --no-check-certificate
+        wget -nc -P ./train_data/ https://paddleocr.bj.bcebos.com/dygraph_v2.0/test/icdar2015.tar --no-check-certificate
+        cd ./train_data/ && tar xf icdar2015.tar && cd ../
+    fi
+fi
 if [ ${MODE} = "lite_train_lite_infer" ];then
     # pretrain lite train data
...
@@ -52,7 +61,7 @@ if [ ${MODE} = "lite_train_lite_infer" ];then
     wget -nc -P ./train_data/ https://paddleocr.bj.bcebos.com/dygraph_v2.0/test/total_text_lite.tar --no-check-certificate
     cd ./train_data && tar xf total_text_lite.tar && ln -s total_text_lite total_text && cd ../
 fi
-if [ ${model_name} == "det_mv3_db_v2.0" ];then
+if [ ${model_name} == "det_mv3_db_v2_0" ];then
     wget -nc -P ./inference/ https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/det_mv3_db_v2.0_train.tar --no-check-certificate
     cd ./inference/ && tar xf det_mv3_db_v2.0_train.tar && cd ../
...
@@ -211,7 +220,7 @@ elif [ ${MODE} = "whole_infer" ];then
     wget -nc -P ./inference/ https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/det_r50_vd_sast_totaltext_v2.0_train.tar --no-check-certificate
     cd ./inference/ && tar xf det_r50_vd_sast_totaltext_v2.0_train.tar && cd ../
 fi
-if [ ${model_name} == "det_mv3_db_v2.0" ];then
+if [ ${model_name} == "det_mv3_db_v2_0" ];then
     wget -nc -P ./inference/ https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/det_mv3_db_v2.0_train.tar --no-check-certificate
     cd ./inference/ && tar xf det_mv3_db_v2.0_train.tar && tar xf ch_det_data_50.tar && cd ../
...
tools/program.py

...
@@ -278,12 +278,13 @@ def train(config,
             (global_step > 0 and global_step % print_batch_step == 0) or
             (idx >= len(train_dataloader) - 1)):
                 logs = train_stats.log()
                 eta_sec = ((epoch_num + 1 - epoch) * \
                     len(train_dataloader) - idx - 1) * eta_meter.avg
                 eta_sec_format = str(datetime.timedelta(seconds=int(eta_sec)))
                 strs = 'epoch: [{}/{}], global_step: {}, {}, avg_reader_cost: ' \
                        '{:.5f} s, avg_batch_cost: {:.5f} s, avg_samples: {}, ' \
-                       'ips: {:.5f}, eta: {}'.format(
+                       'ips: {:.5f} samples/s, eta: {}'.format(
                     epoch, epoch_num, global_step, logs,
                     train_reader_cost / print_batch_step,
                     train_batch_cost / print_batch_step,
...