PaddlePaddle / PaddleOCR
Commit e8229015
Authored Nov 16, 2020 by WenmuZhou
Commit message: make the training log conform to the benchmark logging spec
Parent commit: d4facfe4
Showing 1 changed file with 17 additions and 7 deletions:
tools/program.py (+17 -7)
tools/program.py @ e8229015
@@ -185,12 +185,15 @@ def train(config,
     for epoch in range(start_epoch, epoch_num):
         if epoch > 0:
             train_dataloader = build_dataloader(config, 'Train', device, logger)
+        train_batch_cost = 0.0
+        train_reader_cost = 0.0
+        batch_sum = 0
+        batch_start = time.time()
         for idx, batch in enumerate(train_dataloader):
+            train_reader_cost += time.time() - batch_start
             if idx >= len(train_dataloader):
                 break
             lr = optimizer.get_lr()
-            t1 = time.time()
             images = batch[0]
             preds = model(images)
             loss = loss_class(preds, batch)
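For context, the reader-cost accounting added in the hunk above follows a common pattern: the time spent waiting on the dataloader is accumulated separately from compute time by re-anchoring a timestamp around each fetch. A minimal standalone sketch of that pattern, assuming a dummy generator in place of PaddleOCR's build_dataloader (slow_batches and fetch_delay are illustrative names, not from the repo):

    import time

    def slow_batches(n, fetch_delay=0.01):
        # Stand-in for a dataloader: producing each batch takes `fetch_delay` seconds.
        for i in range(n):
            time.sleep(fetch_delay)
            yield [i]

    train_reader_cost = 0.0
    batch_start = time.time()
    for batch in slow_batches(5):
        # Time elapsed since the last re-anchor is attributed to data reading.
        train_reader_cost += time.time() - batch_start
        # ... forward/backward/update would run here ...
        batch_start = time.time()   # re-anchor before fetching the next batch

    print('accumulated reader_cost: {:.5f}s'.format(train_reader_cost))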
@@ -198,6 +201,10 @@ def train(config,
             avg_loss.backward()
             optimizer.step()
             optimizer.clear_grad()
+
+            train_batch_cost += time.time() - batch_start
+            batch_sum += len(images)
+
             if not isinstance(lr_scheduler, float):
                 lr_scheduler.step()
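This hunk adds the complementary counters: train_batch_cost covers the whole iteration (data loading plus compute) and batch_sum counts the samples processed, which feeds the ips figure later. A hedged sketch of the same bookkeeping in isolation, where the sleep and the hard-coded batches stand in for real training work:

    import time

    train_batch_cost = 0.0
    batch_sum = 0
    batch_start = time.time()
    for images in ([0] * 8, [0] * 8, [0] * 4):        # dummy batches of 8, 8 and 4 samples
        time.sleep(0.01)                              # stand-in for forward/backward/optimizer step
        train_batch_cost += time.time() - batch_start # full iteration time since the last re-anchor
        batch_sum += len(images)                      # samples processed so far in this window
        batch_start = time.time()                     # re-anchor for the next iteration

    print('batch_cost total: {:.5f}s over {} samples'.format(train_batch_cost, batch_sum))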
@@ -213,9 +220,6 @@ def train(config,
                 metirc = eval_class.get_metric()
                 train_stats.update(metirc)
-
-            t2 = time.time()
-            train_batch_elapse = t2 - t1
             if vdl_writer is not None and dist.get_rank() == 0:
                 for k, v in train_stats.get().items():
                     vdl_writer.add_scalar('TRAIN/{}'.format(k), v, global_step)
@@ -224,9 +228,15 @@ def train(config,
             if dist.get_rank() == 0 and global_step > 0 and global_step % print_batch_step == 0:
                 logs = train_stats.log()
-                strs = 'epoch: [{}/{}], iter: {}, {}, time: {:.3f}'.format(
-                    epoch, epoch_num, global_step, logs, train_batch_elapse)
+                strs = 'epoch: [{}/{}], iter: {}, {}, reader_cost: {:.5f}s, batch_cost: {:.5f}s, samples: {}, ips: {:.5f}'.format(
+                    epoch, epoch_num, global_step, logs,
+                    train_reader_cost / print_batch_step,
+                    train_batch_cost / print_batch_step,
+                    batch_sum, batch_sum / train_batch_cost)
                 logger.info(strs)
+                train_batch_cost = 0.0
+                train_reader_cost = 0.0
+                batch_sum = 0
+            batch_start = time.time()
             # eval
             if global_step > start_eval_step and \
                 (global_step - start_eval_step) % eval_batch_step == 0 and dist.get_rank() == 0:
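Taken together, the counters feed the new benchmark-style log line: reader_cost and batch_cost are averaged over print_batch_step iterations and ips is computed as batch_sum / train_batch_cost. A small self-contained sketch of how the line is assembled; the numeric values and the logs placeholder are made up, only the format string mirrors the diff above:

    print_batch_step = 10
    epoch, epoch_num, global_step = 1, 500, 10
    logs = 'loss: 7.59843'          # placeholder for what train_stats.log() returns
    train_reader_cost = 0.125       # seconds accumulated since the last print
    train_batch_cost = 2.5          # seconds accumulated since the last print
    batch_sum = 160                 # samples processed since the last print

    strs = ('epoch: [{}/{}], iter: {}, {}, reader_cost: {:.5f}s, '
            'batch_cost: {:.5f}s, samples: {}, ips: {:.5f}').format(
                epoch, epoch_num, global_step, logs,
                train_reader_cost / print_batch_step,   # average reader cost per step
                train_batch_cost / print_batch_step,    # average batch cost per step
                batch_sum,                              # samples in this window
                batch_sum / train_batch_cost)           # throughput in samples per second
    print(strs)
    # epoch: [1/500], iter: 10, loss: 7.59843, reader_cost: 0.01250s, batch_cost: 0.25000s, samples: 160, ips: 64.00000

After each print the counters are reset, so every log line reports statistics for its own window only.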