MegEngine 天元 / MegEngine · Commit 2efba9a3

Authored Sep 29, 2020 by Megvii Engine Team

fix(mgb/test): use both rtol and atol for stable test result

GitOrigin-RevId: 82a1453e4a482f43df5ae94bf44c666a79a16734

Parent: f5f86a05
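Why both tolerances: np.testing.assert_allclose passes when
|actual - desired| <= atol + rtol * |desired|. With atol alone (as the
batchnorm tests below had), the permitted error is a fixed absolute budget,
which becomes too strict once the compared values grow large, since
floating-point error scales with magnitude. A minimal sketch of the two
failure modes (the numbers are illustrative, not taken from the tests):

    import numpy as np

    # atol alone is too strict at large magnitude: an error of 1.0 on 1e6
    # blows past any fixed 5e-6 budget, but rtol absorbs it.
    np.testing.assert_allclose(1e6 + 1.0, 1e6, rtol=1e-6, atol=5e-6)  # passes via rtol

    # rtol alone is useless near zero: the allowed error shrinks to nothing,
    # so atol must carry the check.
    np.testing.assert_allclose(1e-9, 0.0, rtol=1e-6, atol=1e-6)  # passes via atol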
Showing 2 changed files with 30 additions and 32 deletions (+30 -32):

    imperative/python/test/unit/module/test_batchnorm.py         +26 -31
    imperative/python/test/unit/quantization/test_fake_quant.py   +4  -1
imperative/python/test/unit/module/test_batchnorm.py
@@ -6,6 +6,7 @@
 # Unless required by applicable law or agreed to in writing,
 # software distributed under the License is distributed on an
 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+import functools
 import multiprocessing as mp
 import platform
@@ -18,6 +19,8 @@ from megengine import Tensor
 from megengine.core._trace_option import use_tensor_shape
 from megengine.module import BatchNorm1d, BatchNorm2d, SyncBatchNorm

+_assert_allclose = functools.partial(np.testing.assert_allclose, atol=5e-6, rtol=5e-6)
+

 @pytest.mark.skipif(
     platform.system() == "Darwin", reason="do not imp GPU mode at macos now"
@@ -46,9 +49,9 @@ def test_syncbn():
     for i in range(steps):
         yv = bn(Tensor(data[i]))

-        np.testing.assert_allclose(yv.numpy(), yv_expect, atol=5e-6)
-        np.testing.assert_allclose(bn.running_mean.numpy(), running_mean, atol=5e-6)
-        np.testing.assert_allclose(bn.running_var.numpy(), running_var, atol=5e-6)
+        _assert_allclose(yv.numpy(), yv_expect)
+        _assert_allclose(bn.running_mean.numpy(), running_mean)
+        _assert_allclose(bn.running_var.numpy(), running_var)

     xv = []
     for i in range(steps):
@@ -118,13 +121,9 @@ def test_batchnorm():
         yv = bn(Tensor(xv))
         yv_expect = (xv - mean) / sd

-        np.testing.assert_allclose(yv.numpy(), yv_expect, atol=5e-6)
-        np.testing.assert_allclose(
-            bn.running_mean.numpy().reshape(-1), running_mean.reshape(-1), atol=5e-6
-        )
-        np.testing.assert_allclose(
-            bn.running_var.numpy().reshape(-1), running_var.reshape(-1), atol=5e-6
-        )
+        _assert_allclose(yv.numpy(), yv_expect)
+        _assert_allclose(bn.running_mean.numpy().reshape(-1), running_mean.reshape(-1))
+        _assert_allclose(bn.running_var.numpy().reshape(-1), running_var.reshape(-1))

     # test set 'training' flag to False
     mean_backup = bn.running_mean.numpy()
@@ -138,7 +137,7 @@ def test_batchnorm():
     np.testing.assert_equal(mean_backup, bn.running_mean.numpy())
     np.testing.assert_equal(var_backup, bn.running_var.numpy())
     yv_expect = (xv - running_mean) / np.sqrt(running_var + bn.eps)
-    np.testing.assert_allclose(yv1.numpy(), yv_expect, atol=5e-6)
+    _assert_allclose(yv1.numpy(), yv_expect)


 @pytest.mark.skipif(
@@ -172,13 +171,9 @@ def test_syncbn1d():
         yv = bn(Tensor(xv))
         yv_expect = (xv - mean) / sd

-        np.testing.assert_allclose(yv.numpy(), yv_expect, atol=5e-6)
-        np.testing.assert_allclose(
-            bn.running_mean.numpy().reshape(-1), running_mean.reshape(-1), atol=5e-6
-        )
-        np.testing.assert_allclose(
-            bn.running_var.numpy().reshape(-1), running_var.reshape(-1), atol=5e-6
-        )
+        _assert_allclose(yv.numpy(), yv_expect)
+        _assert_allclose(bn.running_mean.numpy().reshape(-1), running_mean.reshape(-1))
+        _assert_allclose(bn.running_var.numpy().reshape(-1), running_var.reshape(-1))

     # test set 'training' flag to False
     mean_backup = bn.running_mean.numpy()
@@ -192,7 +187,7 @@ def test_syncbn1d():
     np.testing.assert_equal(mean_backup, bn.running_mean.numpy())
     np.testing.assert_equal(var_backup, bn.running_var.numpy())
     yv_expect = (xv - running_mean) / np.sqrt(running_var + bn.eps)
-    np.testing.assert_allclose(yv1.numpy(), yv_expect, atol=5e-6)
+    _assert_allclose(yv1.numpy(), yv_expect)


 def test_batchnorm2d():
@@ -220,9 +215,9 @@ def test_batchnorm2d():
         yv = bn(Tensor(xv))
         yv_expect = (xv - mean) / sd

-        np.testing.assert_allclose(yv.numpy(), yv_expect, atol=5e-6)
-        np.testing.assert_allclose(bn.running_mean.numpy(), running_mean, atol=5e-6)
-        np.testing.assert_allclose(bn.running_var.numpy(), running_var, atol=5e-6)
+        _assert_allclose(yv.numpy(), yv_expect)
+        _assert_allclose(bn.running_mean.numpy(), running_mean)
+        _assert_allclose(bn.running_var.numpy(), running_var)

     # test set 'training' flag to False
     mean_backup = bn.running_mean.numpy()
@@ -236,7 +231,7 @@ def test_batchnorm2d():
     np.testing.assert_equal(mean_backup, bn.running_mean.numpy())
     np.testing.assert_equal(var_backup, bn.running_var.numpy())
     yv_expect = (xv - running_mean) / np.sqrt(running_var + bn.eps)
-    np.testing.assert_allclose(yv1.numpy(), yv_expect, atol=5e-6)
+    _assert_allclose(yv1.numpy(), yv_expect)


 @pytest.mark.skipif(
@@ -271,9 +266,9 @@ def test_syncbn2d():
         yv = bn(Tensor(xv))
         yv_expect = (xv - mean) / sd

-        np.testing.assert_allclose(yv.numpy(), yv_expect, atol=5e-6)
-        np.testing.assert_allclose(bn.running_mean.numpy(), running_mean, atol=5e-6)
-        np.testing.assert_allclose(bn.running_var.numpy(), running_var, atol=5e-6)
+        _assert_allclose(yv.numpy(), yv_expect)
+        _assert_allclose(bn.running_mean.numpy(), running_mean)
+        _assert_allclose(bn.running_var.numpy(), running_var)

     # test set 'training' flag to False
     mean_backup = bn.running_mean.numpy()
@@ -287,7 +282,7 @@ def test_syncbn2d():
     np.testing.assert_equal(mean_backup, bn.running_mean.numpy())
     np.testing.assert_equal(var_backup, bn.running_var.numpy())
     yv_expect = (xv - running_mean) / np.sqrt(running_var + bn.eps)
-    np.testing.assert_allclose(yv1.numpy(), yv_expect, atol=5e-6)
+    _assert_allclose(yv1.numpy(), yv_expect)


 def test_batchnorm_no_stats():
@@ -310,7 +305,7 @@ def test_batchnorm_no_stats():
         yv = bn(Tensor(xv))
         yv_expect = (xv - mean) / sd
-        np.testing.assert_allclose(yv.numpy(), yv_expect, atol=5e-6)
+        _assert_allclose(yv.numpy(), yv_expect)


 @pytest.mark.skipif(
@@ -340,7 +335,7 @@ def test_syncbn_no_stats():
         yv = bn(Tensor(xv))
         yv_expect = (xv - mean) / sd
-        np.testing.assert_allclose(yv.numpy(), yv_expect, atol=5e-6)
+        _assert_allclose(yv.numpy(), yv_expect)


 def test_batchnorm2d_no_stats():
@@ -362,7 +357,7 @@ def test_batchnorm2d_no_stats():
         yv = bn(Tensor(xv))
         yv_expect = (xv - mean) / sd
-        np.testing.assert_allclose(yv.numpy(), yv_expect, atol=5e-6)
+        _assert_allclose(yv.numpy(), yv_expect)


 @pytest.mark.skipif(
@@ -391,4 +386,4 @@ def test_syncbn2d_no_stats():
         yv = bn(Tensor(xv))
         yv_expect = (xv - mean) / sd
-        np.testing.assert_allclose(yv.numpy(), yv_expect, atol=5e-6)
+        _assert_allclose(yv.numpy(), yv_expect)
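The repeated tolerance arguments above are folded into a single module-level
helper via functools.partial, so precision is tuned in one place. A minimal
usage sketch (the final call is illustrative, not taken from the test file):

    import functools
    import numpy as np

    # Bind the tolerances once; every call site then stays short.
    _assert_allclose = functools.partial(np.testing.assert_allclose, atol=5e-6, rtol=5e-6)

    _assert_allclose(1.0 + 3e-6, 1.0)  # passes: 3e-6 <= atol + rtol * |1.0|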
imperative/python/test/unit/quantization/test_fake_quant.py
@@ -60,7 +60,10 @@ def test_TQT():
     def check_inp(a, b, c, a_np, b_np, c_np):
         np.testing.assert_allclose(
-            f.forward(a, b).numpy(), nf.forward(a_np, b_np).astype("float32"), rtol=1e-6
+            f.forward(a, b).numpy(),
+            nf.forward(a_np, b_np).astype("float32"),
+            rtol=1e-6,
+            atol=1e-6,
         )
         c1, c2 = f.backward(c)
         c1_np, c2_np = nf.backward(c_np)
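Note that this fix runs in the opposite direction from the batchnorm one: the
call previously had rtol only, which permits zero error whenever the expected
value is exactly 0.0, a common output after fake-quant clipping. A hedged
illustration (the values are made up, not taken from the test):

    import numpy as np

    actual, desired = 1e-8, 0.0
    try:
        np.testing.assert_allclose(actual, desired, rtol=1e-6)  # fails: atol defaults to 0
    except AssertionError:
        pass
    np.testing.assert_allclose(actual, desired, rtol=1e-6, atol=1e-6)  # passes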