PaddlePaddle/PaddleClas commit 28061f53
Authored Dec 21, 2021 by zhangbo9674

refine optimizer init logic

Parent: b54ee044
Showing 2 changed files with 13 additions and 5 deletions (+13 -5):

ppcls/engine/train/train.py   (+2 -1)
ppcls/optimizer/optimizer.py  (+11 -4)
ppcls/engine/train/train.py

@@ -21,6 +21,7 @@ from ppcls.utils import profiler
 
 def train_epoch(engine, epoch_id, print_batch_step):
     tic = time.time()
+    v_current = [int(i) for i in paddle.__version__.split(".")]
     for iter_id, batch in enumerate(engine.train_dataloader):
         if iter_id >= engine.max_iter:
             break
@@ -59,7 +60,7 @@ def train_epoch(engine, epoch_id, print_batch_step):
         else:
             loss_dict["loss"].backward()
             engine.optimizer.step()
-        engine.optimizer.clear_grad(set_to_zero=True)
+        engine.optimizer.clear_grad()
         engine.lr_sch.step()
 
         # below code just for logging
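The added `v_current` line parses the running Paddle version string into a list of ints (the hunk adds no import, so `paddle` is presumably already imported in train.py). The diff does not show where `v_current` is consumed, so the snippet below is only an assumed illustration of the usual version-gating pattern; the `[2, 2, 0]` threshold and both branches are hypothetical.

# Assumed illustration only: the diff computes v_current but does not
# show its use. The threshold and both branches are hypothetical.
import paddle

v_current = [int(i) for i in paddle.__version__.split(".")]

if v_current >= [2, 2, 0]:  # element-wise compare on [major, minor, patch]
    print("use the newer optimizer API")
else:
    print("fall back to the older code path")

Comparing lists of ints element-wise avoids the pitfall of comparing version strings lexically, where "2.10" sorts before "2.9".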
ppcls/optimizer/optimizer.py

@@ -17,6 +17,7 @@ from __future__ import division
 from __future__ import print_function
 
 from paddle import optimizer as optim
+import paddle
 
 from ppcls.utils import logger
@@ -36,15 +37,13 @@ class Momentum(object):
                  momentum,
                  weight_decay=None,
                  grad_clip=None,
-                 multi_precision=True,
-                 use_multi_tensor=True):
+                 multi_precision=True):
         super().__init__()
         self.learning_rate = learning_rate
         self.momentum = momentum
         self.weight_decay = weight_decay
         self.grad_clip = grad_clip
         self.multi_precision = multi_precision
-        self.use_multi_tensor = use_multi_tensor
 
     def __call__(self, model_list):
         # model_list is None in static graph
@@ -56,8 +55,16 @@ class Momentum(object):
             learning_rate=self.learning_rate,
             momentum=self.momentum,
             weight_decay=self.weight_decay,
             grad_clip=self.grad_clip,
             multi_precision=self.multi_precision,
-            use_multi_tensor=self.use_multi_tensor,
             parameters=parameters)
+        if hasattr(opt, '_use_multi_tensor'):
+            opt = optim.Momentum(
+                learning_rate=self.learning_rate,
+                momentum=self.momentum,
+                weight_decay=self.weight_decay,
+                grad_clip=self.grad_clip,
+                multi_precision=self.multi_precision,
+                parameters=parameters,
+                use_multi_tensor=False)
         return opt
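The `hasattr(opt, '_use_multi_tensor')` probe is feature detection: the wrapper first builds a plain `optim.Momentum`, then rebuilds it with the `use_multi_tensor` keyword only when the running Paddle release exposes that attribute, so older releases that lack it keep working. Below is a minimal, self-contained sketch of the same pattern; `OldOptimizer`, `NewOptimizer`, and `extra_flag` are hypothetical stand-ins, not Paddle API.

# Minimal sketch of the feature-detection pattern above. OldOptimizer,
# NewOptimizer, and extra_flag are hypothetical stand-ins, not Paddle API.
class OldOptimizer:
    def __init__(self, lr):
        self.lr = lr

class NewOptimizer(OldOptimizer):
    def __init__(self, lr, extra_flag=False):
        super().__init__(lr)
        # marker attribute, playing the role of Momentum._use_multi_tensor
        self._extra_flag = extra_flag

def build(cls, lr):
    opt = cls(lr)                       # first build without the new keyword
    if hasattr(opt, "_extra_flag"):     # probe: does this version support it?
        opt = cls(lr, extra_flag=True)  # yes: rebuild, setting the flag explicitly
    return opt

print(type(build(OldOptimizer, 0.1)).__name__)  # old API: keyword never passed
print(type(build(NewOptimizer, 0.1)).__name__)  # new API: rebuilt with extra_flag

Probing an instance attribute rather than catching a `TypeError` from the constructor keeps the call sites identical across versions and avoids masking unrelated argument errors.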