MegEngine 天元 / MegEngine

Commit 087ceb52
Authored Sep 06, 2020 by Megvii Engine Team
feat(mge/imperative): add more optimizer trace tests
GitOrigin-RevId: 4127de1d22b97a4abbcb223575d7756200d163a1
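The new tests exercise optimizer stepping under megengine.jit.trace: test_optimizer.py gains a static-graph pass inside its generic _test_optimizer helper, and test_sgd_momentum.py gains test_sgd_momentum_trace, each run with both symbolic=False and symbolic=True.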
Parent: 38a5c1c9

Showing 2 changed files with 75 additions and 0 deletions (+75, -0)
imperative/python/test/integration/test_optimizer.py      (+32, -0)
imperative/python/test/integration/test_sgd_momentum.py   (+43, -0)
imperative/python/test/integration/test_optimizer.py @ 087ceb52

@@ -10,6 +10,7 @@ import numpy as np
 import megengine.functional as F
 from megengine import Parameter, optimizer
+from megengine.jit import trace
 from megengine.module import Linear, Module
 from megengine.tensor import TensorDict, tensor
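Only one import is new in this hunk: trace from megengine.jit, which the static-graph additions below compile against. TensorDict, already imported, is the tensor-keyed mapping the helper uses below to snapshot each parameter's value before a step.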
@@ -66,6 +67,37 @@ def _test_optimizer(opt_str, test_case, check_class, update_lr=False):
         step += 1
         check_func(ori_params, net.parameters(), step)
+
+    # static graph
+    for symbolic in (False, True):
+
+        @trace(symbolic=symbolic)
+        def train_func(data, *, opt=None):
+            opt.zero_grad()
+            with opt.record():
+                pred = net(data)
+                loss = pred.sum()
+                opt.backward(loss)
+            opt.step()
+
+        # reset net and opt
+        net = Simple()
+        opt = getattr(optimizer, opt_str)(net.parameters(), **test_case)
+        check_func = check_class(net, **test_case)
+        step = 0
+        for i in range(iter_num):
+            if update_lr and i == 1:
+                # change learning rate
+                for group in opt.param_groups:
+                    group["lr"] += 0.01
+                check_func.lr += 0.01
+            ori_params = TensorDict()
+            for param in net.parameters():
+                ori_params[param] = np.copy(param.numpy())
+            train_func(np.random.random(data_shape).astype(np.float32), opt=opt)
+            step += 1
+            check_func(ori_params, net.parameters(), step)


 def test_sgd():
     class CheckValue:
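For context, the helper above is driven by concrete tests such as the test_sgd visible in the trailing context lines. A minimal sketch of such a driver, under the conventions of this file; the CheckValue body and the lr value here are placeholders for illustration, not code from the repository:

    def test_sgd():
        class CheckValue:
            def __init__(self, net, lr):
                self.lr = lr  # stored as an attribute so the update_lr branch can bump it

            def __call__(self, ori_params, new_params, step):
                # the real verifier asserts each parameter in new_params against
                # the expected SGD update of its snapshot in ori_params
                pass

        _test_optimizer("SGD", {"lr": 0.01}, CheckValue)
        _test_optimizer("SGD", {"lr": 0.01}, CheckValue, update_lr=True)

Because the static-graph pass reuses check_func unchanged, any optimizer exercised this way is verified under eager execution and under both trace modes with the same assertions.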
imperative/python/test/integration/test_sgd_momentum.py @ 087ceb52

@@ -11,6 +11,7 @@ import numpy as np
 import megengine
 import megengine.optimizer as optimizer
 from megengine import Parameter, tensor
+from megengine.jit import trace
 from megengine.module import Module
@@ -57,3 +58,45 @@ def test_sgd_momentum():
     np.testing.assert_almost_equal(
         optim._state[net.a]["momentum_buffer"].numpy(), 0.9 * 2.34 + 2.34
     )
+
+
+def test_sgd_momentum_trace():
+    for symbolic in (True, False):
+
+        @trace(symbolic=symbolic)
+        def train_func(data, *, model=None, optim=None):
+            optim.zero_grad()
+            with optim.record():
+                loss = net(data)
+                optim.backward(loss)
+            optim.step()
+            return loss
+
+        @trace(symbolic=symbolic)
+        def eval_func(data, *, model=None, optim=None):
+            loss = net(data)
+            return loss
+
+        net = Simple()
+        optim = optimizer.SGD(net.parameters(), lr=1.0, momentum=0.9)
+        data = tensor([2.34])
+        train_func(data, model=net, optim=optim)
+        np.testing.assert_almost_equal(
+            optim._state[net.a]["momentum_buffer"].numpy(), 2.34
+        )
+
+        # do 3 steps of infer
+        for _ in range(3):
+            loss = eval_func(data)
+            np.testing.assert_almost_equal(loss.numpy(), 2.34 * (1.23 - 2.34), 5)
+            np.testing.assert_almost_equal(
+                optim._state[net.a]["momentum_buffer"].numpy(), 2.34
+            )
+
+        # do a step of train
+        train_func(data, model=net, optim=optim)
+        np.testing.assert_almost_equal(loss.numpy(), 2.34 * (1.23 - 2.34), 5)
+        np.testing.assert_almost_equal(
+            optim._state[net.a]["momentum_buffer"].numpy(), 0.9 * 2.34 + 2.34
+        )
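The asserted numbers pin down the parts of the test not shown in this diff: a loss of 2.34 * (1.23 - 2.34) after one update with lr=1.0 implies that Simple holds a single parameter a initialized to 1.23 and computes x * a, so the gradient with respect to a is the constant input 2.34. Under that assumption (an inference from the asserts, not code from the repository), a plain-numpy replay of the classic momentum update v = momentum * v + grad, param -= lr * v reproduces every checked value:

    import numpy as np

    a, lr, momentum = 1.23, 1.0, 0.9   # assumed initial parameter; the test's hyperparameters
    x = 2.34                           # the input tensor value
    grad = x                           # d(x * a)/da = x, independent of a
    v = 0.0                            # momentum buffer starts empty

    # first train_func call
    v = momentum * v + grad            # momentum_buffer -> 2.34
    a -= lr * v
    np.testing.assert_almost_equal(v, 2.34)

    # three eval_func calls: forward only, so neither a nor v changes
    loss = x * a
    np.testing.assert_almost_equal(loss, 2.34 * (1.23 - 2.34), 5)
    np.testing.assert_almost_equal(v, 2.34)

    # second train_func call
    v = momentum * v + grad            # momentum_buffer -> 0.9 * 2.34 + 2.34
    a -= lr * v
    np.testing.assert_almost_equal(v, 0.9 * 2.34 + 2.34)

The replay also makes the intent of the three infer steps explicit: eval_func never calls optim.step(), so tracing it must not touch optimizer state, which is exactly what the momentum_buffer asserts guard.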