PaddlePaddle / PaddleClas

Commit c7aeec28
Authored Sep 30, 2021 by gaotingquan
fix: support static graph
Parent commit: 0dccfb91

2 changed files with 17 additions and 7 deletions:

    ppcls/optimizer/__init__.py    +2  -1
    ppcls/optimizer/optimizer.py   +15 -6
ppcls/optimizer/__init__.py

@@ -41,7 +41,8 @@ def build_lr_scheduler(lr_config, epochs, step_each_epoch):
     return lr
 
 
-def build_optimizer(config, epochs, step_each_epoch, model_list):
+# model_list is None in static graph
+def build_optimizer(config, epochs, step_each_epoch, model_list=None):
     config = copy.deepcopy(config)
     # step1 build lr
     lr = build_lr_scheduler(config.pop('lr'), epochs, step_each_epoch)
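Because model_list now defaults to None, static-graph call sites can simply omit it. Below is a minimal usage sketch; the config keys other than 'lr' (which the code above pops) and the returned (optimizer, lr) pair are assumptions based on surrounding PaddleClas code, not shown in this diff:

    import paddle

    from ppcls.optimizer import build_optimizer

    # Hypothetical optimizer config in the shape the PaddleClas YAML produces.
    opt_config = {
        "name": "Momentum",
        "momentum": 0.9,
        "lr": {"name": "Cosine", "learning_rate": 0.1},
    }

    model = paddle.nn.Linear(8, 8)  # any nn.Layer stands in for a real model

    # Dynamic graph: pass the model(s) whose parameters should be updated.
    optimizer, lr = build_optimizer(opt_config, epochs=120,
                                    step_each_epoch=100, model_list=[model])

    # Static graph: model_list can now be left out entirely.
    optimizer, lr = build_optimizer(opt_config, epochs=120, step_each_epoch=100)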
ppcls/optimizer/optimizer.py

@@ -43,7 +43,9 @@ class Momentum(object):
         self.multi_precision = multi_precision
 
     def __call__(self, model_list):
-        parameters = sum([m.parameters() for m in model_list], [])
+        # model_list is None in static graph
+        parameters = sum([m.parameters()
+                          for m in model_list], []) if model_list else None
         opt = optim.Momentum(
             learning_rate=self.learning_rate,
             momentum=self.momentum,
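Two details in the replaced line are easy to miss: sum(list_of_lists, []) flattens the per-model parameter lists into a single list, and the conditional expression short-circuits, so the comprehension never touches model_list when it is None. A quick illustration with plain lists standing in for m.parameters():

    # sum(list_of_lists, []) concatenates the per-model parameter lists.
    per_model = [["conv.w", "conv.b"], ["fc.w"]]  # stand-ins for m.parameters()
    assert sum(per_model, []) == ["conv.w", "conv.b", "fc.w"]

    # Static-graph case: "x if cond else y" evaluates x only when cond is
    # truthy, so with model_list = None the sum() is never executed.
    model_list = None
    parameters = sum([m for m in model_list], []) if model_list else None
    assert parameters is None

Passing parameters=None matches Paddle's static-graph convention: when the optimizer's minimize() runs, all trainable parameters of the program are updated. The same guard is repeated in the Adam, RMSProp, and AdamW hunks below.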
@@ -79,7 +81,9 @@ class Adam(object):
         self.multi_precision = multi_precision
 
     def __call__(self, model_list):
-        parameters = sum([m.parameters() for m in model_list], [])
+        # model_list is None in static graph
+        parameters = sum([m.parameters()
+                          for m in model_list], []) if model_list else None
         opt = optim.Adam(
             learning_rate=self.learning_rate,
             beta1=self.beta1,
@@ -123,7 +127,9 @@ class RMSProp(object):
         self.grad_clip = grad_clip
 
     def __call__(self, model_list):
-        parameters = sum([m.parameters() for m in model_list], [])
+        # model_list is None in static graph
+        parameters = sum([m.parameters()
+                          for m in model_list], []) if model_list else None
         opt = optim.RMSProp(
             learning_rate=self.learning_rate,
             momentum=self.momentum,
@@ -160,18 +166,21 @@ class AdamW(object):
         self.one_dim_param_no_weight_decay = one_dim_param_no_weight_decay
 
     def __call__(self, model_list):
-        parameters = sum([m.parameters() for m in model_list], [])
+        # model_list is None in static graph
+        parameters = sum([m.parameters()
+                          for m in model_list], []) if model_list else None
 
+        # TODO(gaotingquan): model_list is None when in static graph, "no_weight_decay" not work.
         self.no_weight_decay_param_name_list = [
             p.name for model in model_list for n, p in model.named_parameters()
             if any(nd in n for nd in self.no_weight_decay_name_list)
-        ]
+        ] if model_list else []
 
         if self.one_dim_param_no_weight_decay:
             self.no_weight_decay_param_name_list += [
                 p.name for model in model_list
                 for n, p in model.named_parameters() if len(p.shape) == 1
-            ]
+            ] if model_list else []
 
         opt = optim.AdamW(
             learning_rate=self.learning_rate,
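The TODO above deserves a gloss. The name list built here is presumably consumed through the apply_decay_param_fun hook of paddle.optimizer.AdamW (the hook exists in Paddle's API, but the wiring sits outside this hunk). In static graph the list stays empty, so the predicate approves decay for every parameter and the "no_weight_decay" setting silently has no effect. A minimal sketch of that failure mode, with a hypothetical parameter name:

    # What the static-graph branch produces:
    no_weight_decay_param_name_list = []

    # Typical shape of a hook passed as apply_decay_param_fun: returning
    # True means weight decay IS applied to the named parameter.
    def _apply_decay_param_fun(name):
        return name not in no_weight_decay_param_name_list

    # With the empty list, even parameters the config meant to exempt
    # (e.g. norm biases) still receive weight decay.
    assert _apply_decay_param_fun("batch_norm_0.b_0") is True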