MegEngine 天元 / MegEngine
Commit 9451a961
Authored Apr 23, 2021 by Megvii Engine Team

test(mge/optimizer): update optimizer test to make sure grad not change

GitOrigin-RevId: e207672116dcd53dbfefa89ab9b1dcf7301abbea
Parent: 92e2ed6e
Showing 1 changed file with 15 additions and 0 deletions.

imperative/python/test/integration/test_optimizer.py (+15, -0)
@@ -66,10 +66,17 @@ def _test_optimizer(opt_str, test_case, check_class, update_lr=False):

```python
        gm.backward(loss)

        ori_params = {}
        ori_grads = {}
        for param in net.parameters():
            assert param._tuple_shape is ()
            ori_params[param] = np.copy(param.numpy())
            ori_grads[param] = np.copy(param.grad.numpy())
        opt.step()
        # check grad not change
        for param in net.parameters():
            assert np.equal(
                ori_grads[param], param.grad.numpy()
            ), "step should not change param.grad"
        step += 1
        check_func(ori_params, net.parameters(), step)
```
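The invariant this hunk adds can be sketched with plain NumPy, independent of MegEngine: an optimizer step must update the parameter in place while leaving its gradient untouched. The names `param`, `grad`, and `lr` below are illustrative stand-ins, not values from the test case.

```python
import numpy as np

# Snapshot the gradient before the step, like ori_grads[param] above.
param = np.array(1.0)
grad = np.array(0.5)
lr = 0.1

ori_grad = np.copy(grad)

# A plain SGD step: writes param, must not touch grad.
param = param - lr * grad

# The check from the patch: the gradient is identical to the snapshot.
assert np.equal(ori_grad, grad), "step should not change param.grad"
```

Note that asserting on `np.equal` directly only works here because the parameters are scalars (`param._tuple_shape is ()`); for multi-element arrays a reduction such as `np.array_equal` would be needed.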
@@ -135,6 +142,8 @@ def test_sgd(monkeypatch, case, update_lr, inplace_mode):

```python
        def __call__(self, ori_params, new_params, step):
            for param in new_params:
                grad = param.grad.numpy()
                if hasattr(self, "weight_decay") and self.weight_decay != 0.0:
                    grad = grad + ori_params[param] * self.weight_decay
                if hasattr(self, "momentum"):
                    self.slots[param] = grad + self.slots[param] * self.momentum
                    delta = -self.lr * self.slots[param]
```
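The SGD-with-momentum update in the checker above can be traced on scalars; `lr`, `momentum`, `slot`, and `grad` here are illustrative assumptions, not the test case's actual hyperparameters.

```python
import numpy as np

lr, momentum = 0.1, 0.9
slot = np.float64(0.0)   # velocity slot, starts at zero
grad = np.float64(2.0)

# Same recurrence as the checker: slot accumulates the gradient,
# decayed by the momentum factor each step.
slot = grad + slot * momentum   # slot = 2.0 on the first step
delta = -lr * slot              # delta = -0.2
```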
@@ -177,6 +186,8 @@ def test_adam(monkeypatch, case, update_lr, inplace_mode):

```python
        def __call__(self, ori_params, new_params, step):
            for param in new_params:
                grad = param.grad.numpy()
                if hasattr(self, "weight_decay") and self.weight_decay != 0.0:
                    grad = grad + ori_params[param] * self.weight_decay
                m = self.m_slots[param]
                v = self.v_slots[param]
                m *= self.betas[0]
```
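The hunk above is truncated after the first-moment decay, so the rest of the update is not shown. As a sketch, a full bias-corrected Adam step on scalars looks like the following; everything past `m *= beta1` follows the standard Adam formula rather than this file's exact code, and all values are illustrative.

```python
import numpy as np

lr, beta1, beta2, eps = 0.01, 0.9, 0.999, 1e-8
m = v = np.float64(0.0)
grad, step = np.float64(1.0), 1

m = m * beta1 + grad * (1 - beta1)       # first moment estimate
v = v * beta2 + grad ** 2 * (1 - beta2)  # second moment estimate
m_hat = m / (1 - beta1 ** step)          # bias correction
v_hat = v / (1 - beta2 ** step)
delta = -lr * m_hat / (v_hat ** 0.5 + eps)
```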
@@ -222,6 +233,8 @@ def test_adagrad(monkeypatch, case, update_lr, inplace_mode):

```python
        def __call__(self, ori_params, new_params, step):
            for param in new_params:
                grad = param.grad.numpy()
                if hasattr(self, "weight_decay") and self.weight_decay != 0.0:
                    grad = grad + ori_params[param] * self.weight_decay
                self.s_slots[param] += grad ** 2
                delta = grad / (self.s_slots[param] + self.eps) ** 0.5
                delta *= -(self.lr / (1 + (step - 1) * self.lr_decay))
```
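The Adagrad checker above on scalar inputs, with illustrative values for `lr`, `lr_decay`, `eps`, and `grad`: the accumulator grows by the squared gradient each step, shrinking the effective step size over time.

```python
import numpy as np

lr, lr_decay, eps = 0.1, 0.0, 1e-10
s = np.float64(0.0)
grad, step = np.float64(3.0), 1

s += grad ** 2                                # s = 9.0
delta = grad / (s + eps) ** 0.5               # grad normalized by sqrt(s)
delta *= -(lr / (1 + (step - 1) * lr_decay))  # ~= -0.1 on the first step
```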
@@ -257,6 +270,8 @@ def test_adadelta(monkeypatch, case, update_lr, inplace_mode):

```python
        def __call__(self, ori_params, new_params, step):
            for param in new_params:
                grad = param.grad.numpy()
                if hasattr(self, "weight_decay") and self.weight_decay != 0.0:
                    grad = grad + ori_params[param] * self.weight_decay
                self.s_slots[param] = self.s_slots[param] * self.rho + grad ** 2 * (
                    1 - self.rho
                )
```
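The running-average accumulator from the Adadelta checker above, on scalars with illustrative values for `rho`, `s`, and `grad`: each step blends the previous average with the current squared gradient.

```python
import numpy as np

rho = 0.9
s = np.float64(4.0)      # previous squared-gradient average
grad = np.float64(2.0)

# Exponential moving average of grad**2, as in the checker.
s = s * rho + grad ** 2 * (1 - rho)   # 4.0 * 0.9 + 4.0 * 0.1 = 4.0
```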