PaddlePaddle / PaddleClas

Commit 8b8a02d6
Authored on Apr 28, 2022 by flytocc

add update_freq option for gradient accumulation

Parent: ed820223
Showing 3 changed files with 28 additions and 17 deletions (+28 -17)
ppcls/configs/ImageNet/ConvNeXt/convnext_tiny.yaml  (+2 -2)
ppcls/engine/engine.py                              (+5 -1)
ppcls/engine/train/train.py                         (+21 -14)
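Taken together, the three changes wire gradient accumulation through the stack: the config requests a window of 8 iterations, the engine reads the option and rescales its iteration counts, and the training loop delays the optimizer step until the window fills. A minimal sketch of that pattern in standalone Paddle 2.x, with a toy model and random data standing in for PaddleClas (nothing below is repository code):

import paddle

model = paddle.nn.Linear(16, 4)
opt = paddle.optimizer.SGD(learning_rate=0.1, parameters=model.parameters())
update_freq = 4  # accumulate gradients over 4 iterations

for iter_id in range(16):
    x = paddle.randn([8, 16])
    y = paddle.randn([8, 4])
    loss = paddle.nn.functional.mse_loss(model(x), y)
    # dividing by update_freq keeps the accumulated gradient on the same
    # scale as one large batch of 8 * update_freq samples
    (loss / update_freq).backward()  # grads accumulate until clear_grad()
    if (iter_id + 1) % update_freq == 0:
        opt.step()        # one optimizer update per accumulation window
        opt.clear_grad()  # reset accumulated gradients

In dynamic-graph Paddle, repeated backward() calls add into the existing gradients until clear_grad() is called, which is exactly the behavior the commit relies on.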
ppcls/configs/ImageNet/ConvNeXt/convnext_tiny.yaml

@@ -15,7 +15,7 @@ Global:
   save_inference_dir: ./inference
   # training model under @to_static
   to_static: False
+  update_freq: 8
   # model ema
   EMA:

@@ -51,7 +51,7 @@ Optimizer:
   one_dim_param_no_weight_decay: True
   lr:
     name: Cosine
-    learning_rate: 5e-4
+    learning_rate: 4e-3 # lr 4e-3 for total_batch_size 4096
     eta_min: 1e-6
     warmup_epoch: 20
     warmup_start_lr: 0
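The new learning rate is consistent with the linear-scaling rule flagged by the inline comment: scale a reference LR by the ratio of total batch sizes. Assuming the old 5e-4 targeted a total batch of 512 (an assumption; the comment only pins 4e-3 to 4096), the arithmetic works out exactly:

base_lr, base_batch = 5e-4, 512   # assumed reference setting
total_batch = 4096                # from the config comment
scaled_lr = base_lr * total_batch / base_batch
assert scaled_lr == 4e-3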
ppcls/engine/engine.py
@@ -119,6 +119,9 @@ class Engine(object):
         # EMA model
         self.ema = "EMA" in self.config and self.mode == "train"

+        # gradient accumulation
+        self.update_freq = self.config["Global"].get("update_freq", 1)
+
         if "class_num" in config["Global"]:
             global_class_num = config["Global"]["class_num"]
             if "class_num" not in config["Arch"]:
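The .get("update_freq", 1) lookup makes accumulation opt-in: configs without the key behave exactly as before. Illustrated with plain dict semantics (not the real config object):

config = {"Global": {"epochs": 300}}                  # no update_freq key
update_freq = config["Global"].get("update_freq", 1)
assert update_freq == 1   # optimizer steps every iteration, as before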
@@ -229,7 +232,7 @@ class Engine(object):
         if self.mode == 'train':
             self.optimizer, self.lr_sch = build_optimizer(
                 self.config["Optimizer"], self.config["Global"]["epochs"],
-                len(self.train_dataloader),
+                len(self.train_dataloader) // self.update_freq,
                 [self.model, self.train_loss_func])
         # for amp training
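Passing len(self.train_dataloader) // self.update_freq as the steps-per-epoch argument means the LR schedule advances once per optimizer update rather than once per dataloader iteration; without this, a per-step schedule would decay update_freq times too fast. Rough numbers under assumed settings (ImageNet-1k, per-card batch 64, 8 cards; these figures are illustrative, not from the commit):

iters_per_epoch = 1281167 // (64 * 8)              # 2502 dataloader iterations
update_freq = 8
steps_per_epoch = iters_per_epoch // update_freq   # 312 LR-scheduler steps per epoch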
@@ -312,6 +315,7 @@ class Engine(object):
         self.max_iter = len(self.train_dataloader) - 1 if platform.system(
         ) == "Windows" else len(self.train_dataloader)
+        self.max_iter = self.max_iter // self.update_freq * self.update_freq
         for epoch_id in range(best_metric["epoch"] + 1,
                               self.config["Global"]["epochs"] + 1):
             acc = 0.0
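Floor division followed by multiplication rounds max_iter down to a multiple of update_freq, so an epoch never ends partway through an accumulation window and no half-accumulated gradient leaks into the next epoch. With the illustrative numbers from above:

max_iter, update_freq = 2502, 8
max_iter = max_iter // update_freq * update_freq
assert max_iter == 2496   # the last 6 iterations are skipped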
ppcls/engine/train/train.py
@@ -53,25 +53,32 @@ def train_epoch(engine, epoch_id, print_batch_step):
             out = forward(engine, batch)
             loss_dict = engine.train_loss_func(out, batch[1])

+        # loss
+        loss = loss_dict["loss"] / engine.update_freq
+
         # step opt
         if engine.amp:
-            scaled = engine.scaler.scale(loss_dict["loss"])
+            scaled = engine.scaler.scale(loss)
             scaled.backward()
-            for i in range(len(engine.optimizer)):
-                engine.scaler.minimize(engine.optimizer[i], scaled)
+            if (iter_id + 1) % engine.update_freq == 0:
+                for i in range(len(engine.optimizer)):
+                    engine.scaler.minimize(engine.optimizer[i], scaled)
         else:
-            loss_dict["loss"].backward()
-            for i in range(len(engine.optimizer)):
-                engine.optimizer[i].step()
+            loss.backward()
+            if (iter_id + 1) % engine.update_freq == 0:
+                for i in range(len(engine.optimizer)):
+                    engine.optimizer[i].step()

-        # clear grad
-        for i in range(len(engine.optimizer)):
-            engine.optimizer[i].clear_grad()
-
-        # step lr
-        for i in range(len(engine.lr_sch)):
-            engine.lr_sch[i].step()
-
-        # update ema
-        if engine.ema:
-            engine.model_ema.update(engine.model)
+        if (iter_id + 1) % engine.update_freq == 0:
+            # clear grad
+            for i in range(len(engine.optimizer)):
+                engine.optimizer[i].clear_grad()
+
+            # step lr
+            for i in range(len(engine.lr_sch)):
+                engine.lr_sch[i].step()
+
+            # update ema
+            if engine.ema:
+                engine.model_ema.update(engine.model)

         # below code just for logging
         # update metric_for_logger
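Dividing the loss by engine.update_freq before backward makes the accumulated gradient equal the gradient of the mean loss over the whole window, which is what a single batch update_freq times larger would produce (up to batch-statistics effects such as BatchNorm). A toy numeric check of that identity, with made-up per-micro-batch gradient values:

K = 4
grads = [1.0, 2.0, 3.0, 4.0]              # pretend per-micro-batch dL/dw values
accumulated = sum(g / K for g in grads)    # what (loss / update_freq).backward() accrues
one_big_batch = sum(grads) / K             # gradient of the averaged big-batch loss
assert accumulated == one_big_batch == 2.5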