magicwindyyd / mindspore
Forked from MindSpore / mindspore (in sync with the fork source)
Commit 7e4bdf6a
Authored June 15, 2020 by lilei

proximal_ada_grad optimizer

Parent 36d9e353
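For context, the per-parameter update that _tensor_run_opt applies (as documented for the underlying ApplyProximalAdagrad primitive; written here in pseudo-math, with lr the learning rate):

    accum  = accum + gradient^2
    prox_v = weight - lr * gradient / sqrt(accum)
    weight = sign(prox_v) * max(|prox_v| - lr * l1, 0) / (1 + lr * l2)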
Showing 1 changed file with 3 additions and 5 deletions (+3, -5)
mindspore/nn/optim/proximal_ada_grad.py (+3, -5)
@@ -31,15 +31,13 @@ def _tensor_run_opt(opt, learning_rate, l1, l2, gradient, weight, accum):
     return success


-def _check_param_value(accum, learning_rate, l1, l2, use_locking, prim_name=None):
+def _check_param_value(accum, l1, l2, use_locking, prim_name=None):
     """Check inputs param."""
     validator.check_value_type("accum", accum, [float], prim_name)
-    validator.check_value_type("learning_rate", learning_rate, [float], prim_name)
     validator.check_value_type("l1", l1, [float], prim_name)
     validator.check_value_type("l2", l2, [float], prim_name)
     validator.check_value_type("use_locking", use_locking, [bool], prim_name)
     validator.check_number_range("accum", accum, 0.0, float("inf"), Rel.INC_LEFT, prim_name)
-    validator.check_number_range("learning_rate", learning_rate, 0.0, float("inf"), Rel.INC_LEFT, prim_name)
     validator.check_number_range("l1", l1, 0.0, float("inf"), Rel.INC_LEFT, prim_name)
     validator.check_number_range("l2", l2, 0.0, float("inf"), Rel.INC_LEFT, prim_name)
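The two deleted validator lines are the learning_rate checks; responsibility for validating learning_rate moves to the Optimizer base class (see the second hunk below). For readers without the MindSpore source at hand, here is a plain-Python sketch of what the revised checks enforce. check_param_value below is a hypothetical stand-in, not MindSpore's internal validator/Rel API:

    # Hypothetical plain-Python equivalent of the revised checks; MindSpore's
    # validator.check_value_type / validator.check_number_range and Rel.INC_LEFT
    # are approximated here, not reproduced.
    def check_param_value(accum, l1, l2, use_locking, prim_name=None):
        """Reject non-float hyper-parameters and values outside [0.0, inf)."""
        where = f" in '{prim_name}'" if prim_name else ""
        for name, value in (("accum", accum), ("l1", l1), ("l2", l2)):
            if not isinstance(value, float):
                raise TypeError(f"'{name}'{where} should be a float, but got {type(value).__name__}")
            # Rel.INC_LEFT over (0.0, inf) means the left bound is inclusive: [0.0, inf)
            if value < 0.0:
                raise ValueError(f"'{name}'{where} should be in [0.0, inf), but got {value}")
        if not isinstance(use_locking, bool):
            raise TypeError(f"'use_locking'{where} should be a bool, but got {type(use_locking).__name__}")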
@@ -79,10 +77,10 @@ class ProximalAdagrad(Optimizer):
     def __init__(self, params, accum=0.1, learning_rate=0.001, l1=0.0, l2=0.0,
                  use_locking=False, loss_scale=1.0, weight_decay=0.0):
-        super(ProximalAdagrad, self).__init__(0.0, params, weight_decay, loss_scale)
+        super(ProximalAdagrad, self).__init__(learning_rate, params, weight_decay, loss_scale)
         if self.is_group:
             raise RuntimeError(f"The {self.cls_name} optimizer cannot support group setting.")
-        _check_param_value(accum, learning_rate, l1, l2, use_locking, self.cls_name)
+        _check_param_value(accum, l1, l2, use_locking, self.cls_name)
         self.accum = self.parameters.clone(prefix="accum", init=accum)
         self.l1 = Tensor(l1, mstype.float32)
         self.l2 = Tensor(l2, mstype.float32)
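Net effect of the two hunks: ProximalAdagrad no longer pins the base-class learning rate to 0.0 and validates learning_rate itself; it forwards the value to the Optimizer base class, which owns learning-rate handling. A minimal usage sketch, assuming the post-commit constructor signature shown above (nn.Dense stands in for any trainable network):

    import mindspore.nn as nn

    # Any trainable cell works; this single layer is only for illustration.
    net = nn.Dense(16, 10)
    # After this commit, learning_rate is passed through to the Optimizer base
    # class rather than being fixed at 0.0 and checked locally.
    opt = nn.ProximalAdagrad(net.trainable_params(), accum=0.1, learning_rate=0.001,
                             l1=0.0, l2=0.0, use_locking=False,
                             loss_scale=1.0, weight_decay=0.0)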