magicwindyyd / mindspore (forked from MindSpore / mindspore)
Commit f182edfd
Authored April 18, 2020 by Ziyan

fix lars base class type

Parent: 7c06d292
Showing 2 changed files with 4 additions and 5 deletions (+4 -5):

mindspore/nn/optim/lars.py (+3 -4)
mindspore/nn/optim/optimizer.py (+1 -1)
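Taken together, the two changes make the LARS wrapper a real optimizer: class LARS now derives from Optimizer instead of Cell, and Optimizer itself initializes its underlying Cell with auto_prefix=False so that wrapping another optimizer does not rewrite parameter names. Below is a minimal usage sketch of how LARS wraps a base optimizer after this change; the network, the Momentum hyperparameters, and the assumption that LARS is exported from mindspore.nn are illustrative, not part of the commit:

```python
import mindspore.nn as nn

# Any trainable network works here; a single Dense layer keeps the sketch small.
net = nn.Dense(10, 5)

# Base optimizer whose parameters and learning rate LARS will reuse.
base_opt = nn.Momentum(net.trainable_params(), learning_rate=0.1, momentum=0.9)

# LARS wraps the base optimizer. With this commit it is itself an Optimizer
# subclass, so training wrappers that expect an Optimizer accept it directly.
lars_opt = nn.LARS(base_opt, epsilon=1e-05, hyperpara=0.001)
```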
mindspore/nn/optim/lars.py
@@ -21,8 +21,7 @@ from mindspore.common.parameter import Parameter
 from mindspore.ops import operations as P
 from mindspore.ops import composite as C
 from mindspore.ops import functional as F
-from mindspore.nn.cell import Cell
-from .optimizer import grad_scale
+from .optimizer import grad_scale, Optimizer

 lars_opt = C.MultitypeFuncGraph("lars_opt")
@@ -61,7 +60,7 @@ def _tensor_run_opt_v2(lars, weight_decay, learning_rate, gradient, weight, deca
     return gradient


-class LARS(Cell):
+class LARS(Optimizer):
     """
     Implements the LARS algorithm with LARSUpdate Operator.
@@ -98,7 +97,7 @@ class LARS(Cell):
     def __init__(self, optimizer, epsilon=1e-05, hyperpara=0.001, weight_decay=0.0, use_clip=False,
                  decay_filter=lambda x: 'LayerNorm' not in x.name and 'bias' not in x.name,
                  lars_filter=lambda x: 'LayerNorm' not in x.name and 'bias' not in x.name, loss_scale=1.0):
-        super(LARS, self).__init__(auto_prefix=False)
+        super(LARS, self).__init__(0.0, [Parameter(Tensor(0.0), name="trivial")])
         self.opt = optimizer
         self.parameters = optimizer.parameters
         self.learning_rate = optimizer.learning_rate
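Because LARS now derives from Optimizer, it can no longer call Cell.__init__(auto_prefix=False) directly; instead it satisfies Optimizer's (learning_rate, parameters) signature with a placeholder learning rate of 0.0 and a single dummy Parameter named "trivial", while the real learning rate and parameter list are copied from the wrapped optimizer a few lines later. A hedged sketch of why the base class matters (the check_optimizer helper is hypothetical, not code from this repository):

```python
from mindspore.nn.optim.optimizer import Optimizer

def check_optimizer(opt):
    """Mimics the kind of type check a training wrapper might perform."""
    if not isinstance(opt, Optimizer):
        raise TypeError("expected an Optimizer, got {}".format(type(opt)))
    return opt

# Before this commit, a LARS instance derived only from Cell and a check like
# this would reject it; after it, LARS passes as an Optimizer subclass.
```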
mindspore/nn/optim/optimizer.py
@@ -57,7 +57,7 @@ class Optimizer(Cell):
     def __init__(self, learning_rate, parameters, weight_decay=0.0, loss_scale=1.0,
                  decay_filter=lambda x: 'beta' not in x.name and 'gamma' not in x.name):
-        super(Optimizer, self).__init__()
+        super(Optimizer, self).__init__(auto_prefix=False)
         if isinstance(learning_rate, float):
             self.dynamic_lr = False
             self.gather = None
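The one-line change in optimizer.py passes auto_prefix=False to Cell.__init__, which keeps the optimizer cell from prepending its own scope name to the names of the parameters it holds. A hedged illustration of the intent (assumed Cell naming behavior; the Holder class and parameter names are illustrative only):

```python
import mindspore.nn as nn
from mindspore import Parameter, Tensor

class Holder(nn.Cell):
    """Toy container cell that stores an externally created parameter."""
    def __init__(self, param, auto_prefix):
        super(Holder, self).__init__(auto_prefix=auto_prefix)
        self.param = param

weight = Parameter(Tensor(0.0), name="fc.weight")
# With auto_prefix=False the stored parameter is expected to keep the name
# "fc.weight"; with the default prefixing, the holder's scope could be
# prepended, which would break checkpoints keyed by the original names.
holder = Holder(weight, auto_prefix=False)
print(holder.param.name)
```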