magicwindyyd / mindspore (forked from MindSpore / mindspore)
Commit eb3f70a0
Authored May 06, 2020 by yoonlee666

add warmup_steps in AdamWeightDecayDynamicLR optimizer

Parent: 3d3b9d54
Showing 1 changed file with 14 additions and 1 deletion.

mindspore/nn/optim/adam.py (+14, -1)
@@ -327,12 +327,17 @@ class AdamWeightDecayDynamicLR(Optimizer):
                  beta2=0.999,
                  eps=1e-6,
                  weight_decay=0.0,
-                 decay_filter=lambda x: 'beta' not in x.name and 'gamma' not in x.name):
+                 decay_filter=lambda x: 'beta' not in x.name and 'gamma' not in x.name,
+                 warmup_steps=0):
         super(AdamWeightDecayDynamicLR, self).__init__(learning_rate, params)
         _check_param_value(beta1, beta2, eps, weight_decay, self.cls_name)
         _check_learning_rate_value(learning_rate, end_learning_rate, decay_steps, power, self.cls_name)
         # turn them to scalar when me support scalar/tensor mix operations
         self.global_step = Parameter(initializer(0, [1]), name="global_step")
+        self.warmup_steps = Tensor(np.array([warmup_steps]).astype(np.float32))
+        self.warmup_flag = False
+        if warmup_steps > 0:
+            self.warmup_flag = True
         self.decay_steps = Tensor(np.array([decay_steps]).astype(np.float32))
         self.end_learning_rate = Tensor(np.array([end_learning_rate]).astype(np.float32))
         self.diff_learning_rate = Tensor(np.array([learning_rate - end_learning_rate]).astype(np.float32))
@@ -348,12 +353,20 @@ class AdamWeightDecayDynamicLR(Optimizer):
         self.hyper_map = C.HyperMap()
         self.min = P.Minimum()
         self.pow = P.Pow()
+        self.greater = P.Greater()
         self.one = Tensor(np.array([1.0]).astype(np.float32))
+        self.cast = P.Cast()
+        self.start_learning_rate = Tensor(np.array([learning_rate]).astype(np.float32))

     def construct(self, gradients):
         step = self.min(self.global_step, self.decay_steps)
         p = step / self.decay_steps
         lr = self.diff_learning_rate * self.pow(self.one - p, self.power) + self.end_learning_rate
+        if self.warmup_flag:
+            warmup_percent = self.global_step / self.warmup_steps
+            warmup_lr = self.start_learning_rate * warmup_percent
+            is_warmup = self.cast(self.greater(self.warmup_steps, self.global_step), mstype.float32)
+            lr = (self.one - is_warmup) * lr + is_warmup * warmup_lr
         updated_velocity = self.hyper_map(F.partial(adam_opt, self.beta1, self.beta2, self.eps, lr,
                                                     self.weight_decay_tensor),
                                           self.params, self.moments1, self.moments2, gradients, self.decay_flag)
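For reference, the learning-rate schedule implemented by the patched construct() can be written out as plain Python. The helper below is a hypothetical restatement, not part of the commit; it mirrors the names used in the diff (learning_rate, end_learning_rate, decay_steps, power, warmup_steps): a linear ramp of the start learning rate while global_step is below warmup_steps, otherwise the pre-existing polynomial decay.

def scheduled_lr(global_step, learning_rate, end_learning_rate,
                 decay_steps, power, warmup_steps):
    # Pre-existing polynomial decay from learning_rate down to end_learning_rate.
    step = min(global_step, decay_steps)
    p = step / decay_steps
    lr = (learning_rate - end_learning_rate) * (1.0 - p) ** power + end_learning_rate
    # New in this commit: linear warmup that overrides the decayed value
    # while global_step is still below warmup_steps.
    if warmup_steps > 0:
        warmup_percent = global_step / warmup_steps
        warmup_lr = learning_rate * warmup_percent
        is_warmup = 1.0 if global_step < warmup_steps else 0.0
        lr = (1.0 - is_warmup) * lr + is_warmup * warmup_lr
    return lr

With warmup_steps=100, for instance, the rate climbs linearly to roughly learning_rate over the first 100 steps and then follows the polynomial decay unchanged.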
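After this change, warmup is requested by passing the new warmup_steps argument; the default of 0 keeps the previous behaviour. A minimal usage sketch, assuming the class is exported through mindspore.nn, with a placeholder network and placeholder step counts:

from mindspore import nn

net = MyNet()  # placeholder: any Cell with trainable parameters
optimizer = nn.AdamWeightDecayDynamicLR(net.trainable_params(),
                                        decay_steps=10000,   # length of the decay phase
                                        learning_rate=1e-4,
                                        end_learning_rate=1e-7,
                                        warmup_steps=1000)   # new argument from this commit

The optimizer is wired into the training loop exactly as before; only the learning-rate computation inside construct() changes.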