GitCode — 正统之独孤求败 / mindspore (forked from MindSpore / mindspore)
Commit 5cd31363
Authored June 25, 2020 by tinazhang66

remove local defined mse and add missing mse/md5 validation

Parent: 51c4f4a4
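The change this commit makes across the test files is mechanical: inline `np.mean((a - b) ** 2)` computations are replaced with the shared `diff_mse` helper imported from the tests' `util` module. A minimal sketch of what such a helper computes (the real implementation in the shared util module may scale or normalize differently):

```python
import numpy as np

def diff_mse(in1, in2):
    # Mean squared per-pixel difference between two image arrays,
    # matching the inline np.mean((a - b) ** 2) pattern being replaced.
    return float(np.mean((in1.astype(float) - in2.astype(float)) ** 2))

identical = np.full((4, 4, 3), 128, dtype=np.uint8)
shifted = identical + 2
print(diff_mse(identical, identical))  # 0.0
print(diff_mse(identical, shifted))    # 4.0
```

Centralizing this in one helper keeps every test's MSE on the same scale, so thresholds like `assert mse == 0` are comparable across files.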
Showing 36 changed files with 565 additions and 84 deletions (+565 −84)
tests/ut/data/dataset/golden/cut_out_01_c_result.npz                   +0   −0
tests/ut/data/dataset/golden/cut_out_01_py_result.npz                  +0   −0
tests/ut/data/dataset/golden/equalize_01_result.npz                    +0   −0
tests/ut/data/dataset/golden/five_crop_01_result.npz                   +0   −0
tests/ut/data/dataset/golden/invert_01_result.npz                      +0   −0
tests/ut/data/dataset/golden/pad_01_c_result.npz                       +0   −0
tests/ut/data/dataset/golden/pad_01_py_result.npz                      +0   −0
tests/ut/data/dataset/golden/random_color_01_result.npz                +0   −0
tests/ut/data/dataset/golden/random_color_adjust_01_c_result.npz       +0   −0
tests/ut/data/dataset/golden/random_color_adjust_01_py_result.npz      +0   −0
tests/ut/data/dataset/golden/random_crop_decode_resize_01_result.npz   +0   −0
tests/ut/data/dataset/golden/random_erasing_01_result.npz              +0   −0
tests/ut/data/dataset/golden/random_resize_01_result.npz               +0   −0
tests/ut/data/dataset/golden/random_rotation_01_c_result.npz           +0   −0
tests/ut/data/dataset/golden/random_rotation_01_py_result.npz          +0   −0
tests/ut/data/dataset/golden/random_sharpness_01_result.npz            +0   −0
tests/ut/data/dataset/golden/rescale_01_result.npz                     +0   −0
tests/ut/python/dataset/test_autocontrast.py                           +2   −2
tests/ut/python/dataset/test_cut_out.py                                +97  −8
tests/ut/python/dataset/test_equalize.py                               +22  −2
tests/ut/python/dataset/test_five_crop.py                              +24  −2
tests/ut/python/dataset/test_invert.py                                 +22  −1
tests/ut/python/dataset/test_linear_transformation.py                  +16  −16
tests/ut/python/dataset/test_pad.py                                    +35  −1
tests/ut/python/dataset/test_random_color.py                           +37  −8
tests/ut/python/dataset/test_random_color_adjust.py                    +39  −1
tests/ut/python/dataset/test_random_crop_and_resize.py                 +2   −0
tests/ut/python/dataset/test_random_crop_decode_resize.py              +33  −9
tests/ut/python/dataset/test_random_erasing.py                         +30  −1
tests/ut/python/dataset/test_random_horizontal_flip.py                 +2   −1
tests/ut/python/dataset/test_random_resize.py                          +28  −2
tests/ut/python/dataset/test_random_rotation.py                        +103 −13
tests/ut/python/dataset/test_random_sharpness.py                       +43  −9
tests/ut/python/dataset/test_random_vertical_flip.py                   +3   −4
tests/ut/python/dataset/test_rescale_op.py                             +24  −1
tests/ut/python/dataset/test_uniform_augment.py                        +3   −3
New binary files (mode 0 → 100644), all added in 5cd31363:

tests/ut/data/dataset/golden/cut_out_01_c_result.npz
tests/ut/data/dataset/golden/cut_out_01_py_result.npz
tests/ut/data/dataset/golden/equalize_01_result.npz
tests/ut/data/dataset/golden/five_crop_01_result.npz
tests/ut/data/dataset/golden/invert_01_result.npz
tests/ut/data/dataset/golden/pad_01_c_result.npz
tests/ut/data/dataset/golden/pad_01_py_result.npz
tests/ut/data/dataset/golden/random_color_01_result.npz
tests/ut/data/dataset/golden/random_color_adjust_01_c_result.npz
tests/ut/data/dataset/golden/random_color_adjust_01_py_result.npz
tests/ut/data/dataset/golden/random_crop_decode_resize_01_result.npz
tests/ut/data/dataset/golden/random_erasing_01_result.npz
tests/ut/data/dataset/golden/random_resize_01_result.npz
tests/ut/data/dataset/golden/random_rotation_01_c_result.npz
tests/ut/data/dataset/golden/random_rotation_01_py_result.npz
tests/ut/data/dataset/golden/random_sharpness_01_result.npz
tests/ut/data/dataset/golden/rescale_01_result.npz
tests/ut/python/dataset/test_autocontrast.py

@@ -20,7 +20,7 @@ import numpy as np
 import mindspore.dataset.engine as de
 import mindspore.dataset.transforms.vision.py_transforms as F
 from mindspore import log as logger
-from util import visualize_list
+from util import visualize_list, diff_mse

 DATA_DIR = "../data/dataset/testImageNetData/train/"
@@ -75,7 +75,7 @@ def test_auto_contrast(plot=False):
     num_samples = images_original.shape[0]
     mse = np.zeros(num_samples)
     for i in range(num_samples):
-        mse[i] = np.mean((images_auto_contrast[i] - images_original[i]) ** 2)
+        mse[i] = diff_mse(images_auto_contrast[i], images_original[i])
     logger.info("MSE= {}".format(str(np.mean(mse))))
     if plot:
tests/ut/python/dataset/test_cut_out.py

@@ -21,11 +21,13 @@ import mindspore.dataset as ds
 import mindspore.dataset.transforms.vision.c_transforms as c
 import mindspore.dataset.transforms.vision.py_transforms as f
 from mindspore import log as logger
-from util import visualize_image, diff_mse
+from util import visualize_image, visualize_list, diff_mse, save_and_check_md5, \
+    config_get_set_seed, config_get_set_num_parallel_workers

 DATA_DIR = ["../data/dataset/test_tf_file_3_images/train-0000-of-0001.data"]
 SCHEMA_DIR = "../data/dataset/test_tf_file_3_images/datasetSchema.json"
+
+GENERATE_GOLDEN = False

 def test_cut_out_op(plot=False):
     """
@@ -34,7 +36,7 @@ def test_cut_out_op(plot=False):
     logger.info("test_cut_out")

     # First dataset
-    data1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"])
+    data1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False)
     transforms_1 = [
         f.Decode(),
@@ -45,7 +47,7 @@ def test_cut_out_op(plot=False):
     data1 = data1.map(input_columns=["image"], operations=transform_1())

     # Second dataset
-    data2 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"])
+    data2 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False)
     decode_op = c.Decode()
     cut_out_op = c.CutOut(80)
@@ -74,25 +76,24 @@ def test_cut_out_op(plot=False):
         visualize_image(image_1, image_2, mse)

-def test_cut_out_op_multicut():
+def test_cut_out_op_multicut(plot=False):
     """
     Test Cutout
     """
     logger.info("test_cut_out")

     # First dataset
-    data1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"])
+    data1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False)
     transforms_1 = [
         f.Decode(),
         f.ToTensor(),
         f.RandomErasing(value='random')
     ]
     transform_1 = f.ComposeOp(transforms_1)
     data1 = data1.map(input_columns=["image"], operations=transform_1())

     # Second dataset
-    data2 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"])
+    data2 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False)
     decode_op = c.Decode()
     cut_out_op = c.CutOut(80, num_patches=10)
@@ -104,19 +105,107 @@ def test_cut_out_op_multicut():
     data2 = data2.map(input_columns=["image"], operations=transforms_2)

     num_iter = 0
+    image_list_1, image_list_2 = [], []
     for item1, item2 in zip(data1.create_dict_iterator(), data2.create_dict_iterator()):
         num_iter += 1
         image_1 = (item1["image"].transpose(1, 2, 0) * 255).astype(np.uint8)
         # C image doesn't require transpose
         image_2 = item2["image"]
+        image_list_1.append(image_1)
+        image_list_2.append(image_2)
         logger.info("shape of image_1: {}".format(image_1.shape))
         logger.info("shape of image_2: {}".format(image_2.shape))
         logger.info("dtype of image_1: {}".format(image_1.dtype))
         logger.info("dtype of image_2: {}".format(image_2.dtype))
+    if plot:
+        visualize_list(image_list_1, image_list_2)
+
+def test_cut_out_md5():
+    """
+    Test Cutout with md5 check
+    """
+    logger.info("test_cut_out_md5")
+    original_seed = config_get_set_seed(2)
+    original_num_parallel_workers = config_get_set_num_parallel_workers(1)
+
+    # First dataset
+    data1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False)
+    decode_op = c.Decode()
+    cut_out_op = c.CutOut(100)
+    data1 = data1.map(input_columns=["image"], operations=decode_op)
+    data1 = data1.map(input_columns=["image"], operations=cut_out_op)
+    data2 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False)
+    transforms = [
+        f.Decode(),
+        f.ToTensor(),
+        f.Cutout(100)
+    ]
+    transform = f.ComposeOp(transforms)
+    data2 = data2.map(input_columns=["image"], operations=transform())
+
+    # Compare with expected md5 from images
+    filename1 = "cut_out_01_c_result.npz"
+    save_and_check_md5(data1, filename1, generate_golden=GENERATE_GOLDEN)
+    filename2 = "cut_out_01_py_result.npz"
+    save_and_check_md5(data2, filename2, generate_golden=GENERATE_GOLDEN)
+
+    # Restore config
+    ds.config.set_seed(original_seed)
+    ds.config.set_num_parallel_workers(original_num_parallel_workers)
+
+def test_cut_out_comp(plot=False):
+    """
+    Test Cutout with c++ and python op comparison
+    """
+    logger.info("test_cut_out_comp")
+
+    # First dataset
+    data1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False)
+    transforms_1 = [
+        f.Decode(),
+        f.ToTensor(),
+        f.Cutout(200)
+    ]
+    transform_1 = f.ComposeOp(transforms_1)
+    data1 = data1.map(input_columns=["image"], operations=transform_1())
+
+    # Second dataset
+    data2 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False)
+    transforms_2 = [
+        c.Decode(),
+        c.CutOut(200)
+    ]
+    data2 = data2.map(input_columns=["image"], operations=transforms_2)
+
+    num_iter = 0
+    image_list_1, image_list_2 = [], []
+    for item1, item2 in zip(data1.create_dict_iterator(), data2.create_dict_iterator()):
+        num_iter += 1
+        image_1 = (item1["image"].transpose(1, 2, 0) * 255).astype(np.uint8)
+        # C image doesn't require transpose
+        image_2 = item2["image"]
+        image_list_1.append(image_1)
+        image_list_2.append(image_2)
+        logger.info("shape of image_1: {}".format(image_1.shape))
+        logger.info("shape of image_2: {}".format(image_2.shape))
+        logger.info("dtype of image_1: {}".format(image_1.dtype))
+        logger.info("dtype of image_2: {}".format(image_2.dtype))
+    if plot:
+        visualize_list(image_list_1, image_list_2, visualize_mode=2)

 if __name__ == "__main__":
     test_cut_out_op(plot=True)
-    test_cut_out_op_multicut()
+    test_cut_out_op_multicut(plot=True)
+    test_cut_out_md5()
+    test_cut_out_comp(plot=True)
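Every *_md5 test added in this commit follows the same recipe: pin the seed and worker count, build a deterministic pipeline, compare a checksum of the output against a golden file, then restore the config. A standalone sketch of the hashing step (the names mirror `save_and_check_md5` and `GENERATE_GOLDEN` from the diff; the golden-file layout here is an assumption, not the real util implementation):

```python
import hashlib
import numpy as np

def save_and_check_md5(outputs, golden_file, generate_golden=False):
    # Hash the raw bytes of every output array, in order.
    digest = hashlib.md5()
    for arr in outputs:
        digest.update(np.ascontiguousarray(arr).tobytes())
    if generate_golden:
        # Regenerate the golden checksum instead of checking it.
        np.savez(golden_file, md5=digest.hexdigest())
        return
    golden = str(np.load(golden_file)["md5"])
    assert digest.hexdigest() == golden, "pipeline output changed"

# A deterministic pipeline output hashes identically on every run.
batch = [np.full((2, 3), 7, dtype=np.uint8), np.arange(6, dtype=np.uint8)]
save_and_check_md5(batch, "/tmp/golden_demo.npz", generate_golden=True)
save_and_check_md5(batch, "/tmp/golden_demo.npz")  # passes: same md5
```

This explains why the tests set `shuffle=False` and fix the seed before hashing: any nondeterminism would change the byte stream and break the golden comparison.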
tests/ut/python/dataset/test_equalize.py

@@ -20,10 +20,11 @@ import numpy as np
 import mindspore.dataset.engine as de
 import mindspore.dataset.transforms.vision.py_transforms as F
 from mindspore import log as logger
-from util import visualize_list
+from util import visualize_list, diff_mse, save_and_check_md5

 DATA_DIR = "../data/dataset/testImageNetData/train/"
+
+GENERATE_GOLDEN = False

 def test_equalize(plot=False):
     """
@@ -75,12 +76,31 @@ def test_equalize(plot=False):
     num_samples = images_original.shape[0]
     mse = np.zeros(num_samples)
     for i in range(num_samples):
-        mse[i] = np.mean((images_equalize[i] - images_original[i]) ** 2)
+        mse[i] = diff_mse(images_equalize[i], images_original[i])
     logger.info("MSE= {}".format(str(np.mean(mse))))
     if plot:
         visualize_list(images_original, images_equalize)

+def test_equalize_md5():
+    """
+    Test Equalize with md5 check
+    """
+    logger.info("Test Equalize")
+
+    # First dataset
+    data1 = de.ImageFolderDatasetV2(dataset_dir=DATA_DIR, shuffle=False)
+    transforms = F.ComposeOp([F.Decode(),
+                              F.Equalize(),
+                              F.ToTensor()])
+    data1 = data1.map(input_columns="image", operations=transforms())
+    # Compare with expected md5 from images
+    filename = "equalize_01_result.npz"
+    save_and_check_md5(data1, filename, generate_golden=GENERATE_GOLDEN)

 if __name__ == "__main__":
     test_equalize(plot=True)
+    test_equalize_md5()
tests/ut/python/dataset/test_five_crop.py

@@ -20,11 +20,12 @@ import numpy as np
 import mindspore.dataset as ds
 import mindspore.dataset.transforms.vision.py_transforms as vision
 from mindspore import log as logger
-from util import visualize_list
+from util import visualize_list, save_and_check_md5

 DATA_DIR = ["../data/dataset/test_tf_file_3_images/train-0000-of-0001.data"]
 SCHEMA_DIR = "../data/dataset/test_tf_file_3_images/datasetSchema.json"
+
+GENERATE_GOLDEN = False

 def test_five_crop_op(plot=False):
     """
@@ -63,7 +64,7 @@ def test_five_crop_op(plot=False):
     logger.info("dtype of image_1: {}".format(image_1.dtype))
     logger.info("dtype of image_2: {}".format(image_2.dtype))
     if plot:
-        visualize_list(np.array([image_1] * 10), (image_2 * 255).astype(np.uint8).transpose(0, 2, 3, 1))
+        visualize_list(np.array([image_1] * 5), (image_2 * 255).astype(np.uint8).transpose(0, 2, 3, 1))

     # The output data should be of a 4D tensor shape, a stack of 5 images.
     assert len(image_2.shape) == 4
@@ -93,6 +94,27 @@ def test_five_crop_error_msg():
     assert error_msg in str(info.value)

+def test_five_crop_md5():
+    """
+    Test FiveCrop with md5 check
+    """
+    logger.info("test_five_crop_md5")
+
+    # First dataset
+    data = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False)
+    transforms = [
+        vision.Decode(),
+        vision.FiveCrop(100),
+        lambda images: np.stack([vision.ToTensor()(image) for image in images])  # 4D stack of 5 images
+    ]
+    transform = vision.ComposeOp(transforms)
+    data = data.map(input_columns=["image"], operations=transform())
+    # Compare with expected md5 from images
+    filename = "five_crop_01_result.npz"
+    save_and_check_md5(data, filename, generate_golden=GENERATE_GOLDEN)

 if __name__ == "__main__":
     test_five_crop_op(plot=True)
     test_five_crop_error_msg()
+    test_five_crop_md5()
tests/ut/python/dataset/test_invert.py

@@ -20,10 +20,11 @@ import numpy as np
 import mindspore.dataset.engine as de
 import mindspore.dataset.transforms.vision.py_transforms as F
 from mindspore import log as logger
-from util import visualize_list
+from util import visualize_list, save_and_check_md5

 DATA_DIR = "../data/dataset/testImageNetData/train/"
+
+GENERATE_GOLDEN = False

 def test_invert(plot=False):
     """
@@ -82,5 +83,25 @@ def test_invert(plot=False):
         visualize_list(images_original, images_invert)

+def test_invert_md5():
+    """
+    Test Invert with md5 check
+    """
+    logger.info("Test Invert with md5 check")
+
+    # Generate dataset
+    ds = de.ImageFolderDatasetV2(dataset_dir=DATA_DIR, shuffle=False)
+    transforms_invert = F.ComposeOp([F.Decode(),
+                                     F.Invert(),
+                                     F.ToTensor()])
+    data = ds.map(input_columns="image", operations=transforms_invert())
+    # Compare with expected md5 from images
+    filename = "invert_01_result.npz"
+    save_and_check_md5(data, filename, generate_golden=GENERATE_GOLDEN)

 if __name__ == "__main__":
     test_invert(plot=True)
+    test_invert_md5()
tests/ut/python/dataset/test_linear_transformation.py

@@ -73,12 +73,12 @@ def test_linear_transformation_op(plot=False):
     if plot:
         visualize_list(image, image_transformed)

-def test_linear_transformation_md5_01():
+def test_linear_transformation_md5():
     """
     Test LinearTransformation op: valid params (transformation_matrix, mean_vector)
     Expected to pass
     """
-    logger.info("test_linear_transformation_md5_01")
+    logger.info("test_linear_transformation_md5")

     # Initialize parameters
     height = 50
@@ -102,12 +102,12 @@ def test_linear_transformation_md5_01():
     filename = "linear_transformation_01_result.npz"
     save_and_check_md5(data1, filename, generate_golden=GENERATE_GOLDEN)

-def test_linear_transformation_md5_02():
+def test_linear_transformation_exception_01():
     """
     Test LinearTransformation op: transformation_matrix is not provided
     Expected to raise ValueError
     """
-    logger.info("test_linear_transformation_md5_02")
+    logger.info("test_linear_transformation_exception_01")

     # Initialize parameters
     height = 50
@@ -130,12 +130,12 @@ def test_linear_transformation_md5_02():
     logger.info("Got an exception in DE: {}".format(str(e)))
     assert "not provided" in str(e)

-def test_linear_transformation_md5_03():
+def test_linear_transformation_exception_02():
     """
     Test LinearTransformation op: mean_vector is not provided
     Expected to raise ValueError
     """
-    logger.info("test_linear_transformation_md5_03")
+    logger.info("test_linear_transformation_exception_02")

     # Initialize parameters
     height = 50
@@ -158,12 +158,12 @@ def test_linear_transformation_md5_03():
     logger.info("Got an exception in DE: {}".format(str(e)))
     assert "not provided" in str(e)

-def test_linear_transformation_md5_04():
+def test_linear_transformation_exception_03():
     """
     Test LinearTransformation op: transformation_matrix is not a square matrix
     Expected to raise ValueError
     """
-    logger.info("test_linear_transformation_md5_04")
+    logger.info("test_linear_transformation_exception_03")

     # Initialize parameters
     height = 50
@@ -187,12 +187,12 @@ def test_linear_transformation_md5_04():
     logger.info("Got an exception in DE: {}".format(str(e)))
     assert "square matrix" in str(e)

-def test_linear_transformation_md5_05():
+def test_linear_transformation_exception_04():
     """
     Test LinearTransformation op: mean_vector does not match dimension of transformation_matrix
     Expected to raise ValueError
     """
-    logger.info("test_linear_transformation_md5_05")
+    logger.info("test_linear_transformation_exception_04")

     # Initialize parameters
     height = 50
@@ -217,9 +217,9 @@ def test_linear_transformation_md5_05():
     assert "should match" in str(e)

 if __name__ == '__main__':
-    test_linear_transformation_op(True)
-    test_linear_transformation_md5_01()
-    test_linear_transformation_md5_02()
-    test_linear_transformation_md5_03()
-    test_linear_transformation_md5_04()
-    test_linear_transformation_md5_05()
+    test_linear_transformation_op(plot=True)
+    test_linear_transformation_md5()
+    test_linear_transformation_exception_01()
+    test_linear_transformation_exception_02()
+    test_linear_transformation_exception_03()
+    test_linear_transformation_exception_04()
tests/ut/python/dataset/test_pad.py

@@ -21,11 +21,12 @@ import mindspore.dataset as ds
 import mindspore.dataset.transforms.vision.c_transforms as c_vision
 import mindspore.dataset.transforms.vision.py_transforms as py_vision
 from mindspore import log as logger
-from util import diff_mse
+from util import diff_mse, save_and_check_md5

 DATA_DIR = ["../data/dataset/test_tf_file_3_images/train-0000-of-0001.data"]
 SCHEMA_DIR = "../data/dataset/test_tf_file_3_images/datasetSchema.json"
+
+GENERATE_GOLDEN = False

 def test_pad_op():
     """
@@ -116,6 +117,39 @@ def test_pad_grayscale():
     assert shape1[0:1] == shape2[0:1]

+def test_pad_md5():
+    """
+    Test Pad with md5 check
+    """
+    logger.info("test_pad_md5")
+
+    # First dataset
+    data1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False)
+    decode_op = c_vision.Decode()
+    pad_op = c_vision.Pad(150)
+    ctrans = [decode_op,
+              pad_op,
+              ]
+    data1 = data1.map(input_columns=["image"], operations=ctrans)
+
+    # Second dataset
+    data2 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False)
+    pytrans = [
+        py_vision.Decode(),
+        py_vision.Pad(150),
+        py_vision.ToTensor(),
+    ]
+    transform = py_vision.ComposeOp(pytrans)
+    data2 = data2.map(input_columns=["image"], operations=transform())
+
+    # Compare with expected md5 from images
+    filename1 = "pad_01_c_result.npz"
+    save_and_check_md5(data1, filename1, generate_golden=GENERATE_GOLDEN)
+    filename2 = "pad_01_py_result.npz"
+    save_and_check_md5(data2, filename2, generate_golden=GENERATE_GOLDEN)

 if __name__ == "__main__":
     test_pad_op()
     test_pad_grayscale()
+    test_pad_md5()
tests/ut/python/dataset/test_random_color.py

@@ -17,13 +17,16 @@ Testing RandomColor op in DE
 """
 import numpy as np
+import mindspore.dataset as ds
 import mindspore.dataset.engine as de
 import mindspore.dataset.transforms.vision.py_transforms as F
 from mindspore import log as logger
-from util import visualize_list
+from util import visualize_list, diff_mse, save_and_check_md5, \
+    config_get_set_seed, config_get_set_num_parallel_workers

 DATA_DIR = "../data/dataset/testImageNetData/train/"
+
+GENERATE_GOLDEN = False

 def test_random_color(degrees=(0.1, 1.9), plot=False):
     """
@@ -32,13 +35,13 @@ def test_random_color(degrees=(0.1, 1.9), plot=False):
     logger.info("Test RandomColor")

     # Original Images
-    ds = de.ImageFolderDatasetV2(dataset_dir=DATA_DIR, shuffle=False)
+    data = de.ImageFolderDatasetV2(dataset_dir=DATA_DIR, shuffle=False)

     transforms_original = F.ComposeOp([F.Decode(),
                                        F.Resize((224, 224)),
                                        F.ToTensor()])

-    ds_original = ds.map(input_columns="image",
+    ds_original = data.map(input_columns="image",
                          operations=transforms_original())

     ds_original = ds_original.batch(512)
@@ -52,14 +55,14 @@ def test_random_color(degrees=(0.1, 1.9), plot=False):
                                         axis=0)

     # Random Color Adjusted Images
-    ds = de.ImageFolderDatasetV2(dataset_dir=DATA_DIR, shuffle=False)
+    data = de.ImageFolderDatasetV2(dataset_dir=DATA_DIR, shuffle=False)

     transforms_random_color = F.ComposeOp([F.Decode(),
                                            F.Resize((224, 224)),
                                            F.RandomColor(degrees=degrees),
                                            F.ToTensor()])

-    ds_random_color = ds.map(input_columns="image",
+    ds_random_color = data.map(input_columns="image",
                              operations=transforms_random_color())

     ds_random_color = ds_random_color.batch(512)
@@ -75,14 +78,40 @@ def test_random_color(degrees=(0.1, 1.9), plot=False):
     num_samples = images_original.shape[0]
     mse = np.zeros(num_samples)
     for i in range(num_samples):
-        mse[i] = np.mean((images_random_color[i] - images_original[i]) ** 2)
+        mse[i] = diff_mse(images_random_color[i], images_original[i])
     logger.info("MSE= {}".format(str(np.mean(mse))))
     if plot:
         visualize_list(images_original, images_random_color)

+def test_random_color_md5():
+    """
+    Test RandomColor with md5 check
+    """
+    logger.info("Test RandomColor with md5 check")
+    original_seed = config_get_set_seed(10)
+    original_num_parallel_workers = config_get_set_num_parallel_workers(1)
+
+    # Generate dataset
+    data = de.ImageFolderDatasetV2(dataset_dir=DATA_DIR, shuffle=False)
+    transforms = F.ComposeOp([F.Decode(),
+                              F.RandomColor((0.5, 1.5)),
+                              F.ToTensor()])
+    data = data.map(input_columns="image", operations=transforms())
+    # Compare with expected md5 from images
+    filename = "random_color_01_result.npz"
+    save_and_check_md5(data, filename, generate_golden=GENERATE_GOLDEN)
+
+    # Restore configuration
+    ds.config.set_seed(original_seed)
+    ds.config.set_num_parallel_workers(original_num_parallel_workers)

 if __name__ == "__main__":
-    test_random_color()
+    test_random_color(plot=True)
+    test_random_color(degrees=(0.5, 1.5), plot=True)
+    test_random_color_md5()
tests/ut/python/dataset/test_random_color_adjust.py

@@ -22,11 +22,13 @@ import mindspore.dataset as ds
 import mindspore.dataset.transforms.vision.c_transforms as c_vision
 import mindspore.dataset.transforms.vision.py_transforms as py_vision
 from mindspore import log as logger
-from util import diff_mse, visualize_image
+from util import diff_mse, visualize_image, save_and_check_md5, \
+    config_get_set_seed, config_get_set_num_parallel_workers

 DATA_DIR = ["../data/dataset/test_tf_file_3_images/train-0000-of-0001.data"]
 SCHEMA_DIR = "../data/dataset/test_tf_file_3_images/datasetSchema.json"
+
+GENERATE_GOLDEN = False

 def util_test_random_color_adjust_error(brightness=(1, 1), contrast=(1, 1), saturation=(1, 1), hue=(0, 0)):
     """
@@ -188,6 +190,41 @@ def test_random_color_adjust_op_hue_error():
     util_test_random_color_adjust_error(hue=(0.5, 0.5))

+def test_random_color_adjust_md5():
+    """
+    Test RandomColorAdjust with md5 check
+    """
+    logger.info("Test RandomColorAdjust with md5 check")
+    original_seed = config_get_set_seed(10)
+    original_num_parallel_workers = config_get_set_num_parallel_workers(1)
+
+    # First dataset
+    data1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False)
+    decode_op = c_vision.Decode()
+    random_adjust_op = c_vision.RandomColorAdjust(0.4, 0.4, 0.4, 0.1)
+    data1 = data1.map(input_columns=["image"], operations=decode_op)
+    data1 = data1.map(input_columns=["image"], operations=random_adjust_op)
+
+    # Second dataset
+    transforms = [
+        py_vision.Decode(),
+        py_vision.RandomColorAdjust(0.4, 0.4, 0.4, 0.1),
+        py_vision.ToTensor()
+    ]
+    transform = py_vision.ComposeOp(transforms)
+    data2 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False)
+    data2 = data2.map(input_columns=["image"], operations=transform())
+
+    # Compare with expected md5 from images
+    filename = "random_color_adjust_01_c_result.npz"
+    save_and_check_md5(data1, filename, generate_golden=GENERATE_GOLDEN)
+    filename = "random_color_adjust_01_py_result.npz"
+    save_and_check_md5(data2, filename, generate_golden=GENERATE_GOLDEN)
+
+    # Restore configuration
+    ds.config.set_seed(original_seed)
+    ds.config.set_num_parallel_workers(original_num_parallel_workers)

 if __name__ == "__main__":
     test_random_color_adjust_op_brightness(plot=True)
     test_random_color_adjust_op_brightness_error()
@@ -197,3 +234,4 @@ if __name__ == "__main__":
     test_random_color_adjust_op_saturation_error()
     test_random_color_adjust_op_hue(plot=True)
     test_random_color_adjust_op_hue_error()
+    test_random_color_adjust_md5()
tests/ut/python/dataset/test_random_crop_and_resize.py

@@ -331,6 +331,8 @@ def test_random_crop_and_resize_comp(plot=False):
         py_image = (item2["image"].transpose(1, 2, 0) * 255).astype(np.uint8)
         image_c_cropped.append(c_image)
         image_py_cropped.append(py_image)
+        mse = diff_mse(c_image, py_image)
+        assert mse < 0.02  # rounding error
     if plot:
         visualize_list(image_c_cropped, image_py_cropped, visualize_mode=2)
tests/ut/python/dataset/test_random_crop_decode_resize.py

@@ -15,16 +15,16 @@
 """
 Testing RandomCropDecodeResize op in DE
 """
-import cv2
 import mindspore.dataset as ds
 import mindspore.dataset.transforms.vision.c_transforms as vision
 from mindspore import log as logger
-from util import diff_mse, visualize_image
+from util import diff_mse, visualize_image, save_and_check_md5, \
+    config_get_set_seed, config_get_set_num_parallel_workers

 DATA_DIR = ["../data/dataset/test_tf_file_3_images/train-0000-of-0001.data"]
 SCHEMA_DIR = "../data/dataset/test_tf_file_3_images/datasetSchema.json"
+
+GENERATE_GOLDEN = False

 def test_random_crop_decode_resize_op(plot=False):
     """
@@ -40,22 +40,46 @@ def test_random_crop_decode_resize_op(plot=False):
     # Second dataset
     data2 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False)
+    random_crop_resize_op = vision.RandomResizedCrop((256, 512), (1, 1), (0.5, 0.5))
     data2 = data2.map(input_columns=["image"], operations=decode_op)
+    data2 = data2.map(input_columns=["image"], operations=random_crop_resize_op)

     num_iter = 0
     for item1, item2 in zip(data1.create_dict_iterator(), data2.create_dict_iterator()):
-        if num_iter > 0:
-            break
-        crop_and_resize_de = item1["image"]
-        original = item2["image"]
-        crop_and_resize_cv = cv2.resize(original, (512, 256))
-        mse = diff_mse(crop_and_resize_de, crop_and_resize_cv)
+        image1 = item1["image"]
+        image2 = item2["image"]
+        mse = diff_mse(image1, image2)
+        assert mse == 0
         logger.info("random_crop_decode_resize_op_{}, mse: {}".format(num_iter + 1, mse))
         if plot:
-            visualize_image(original, crop_and_resize_de, mse, crop_and_resize_cv)
+            visualize_image(image1, image2, mse)
         num_iter += 1

+def test_random_crop_decode_resize_md5():
+    """
+    Test RandomCropDecodeResize with md5 check
+    """
+    logger.info("Test RandomCropDecodeResize with md5 check")
+    original_seed = config_get_set_seed(10)
+    original_num_parallel_workers = config_get_set_num_parallel_workers(1)
+
+    # Generate dataset
+    data = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False)
+    random_crop_decode_resize_op = vision.RandomCropDecodeResize((256, 512), (1, 1), (0.5, 0.5))
+    data = data.map(input_columns=["image"], operations=random_crop_decode_resize_op)
+    # Compare with expected md5 from images
+    filename = "random_crop_decode_resize_01_result.npz"
+    save_and_check_md5(data, filename, generate_golden=GENERATE_GOLDEN)
+
+    # Restore configuration
+    ds.config.set_seed(original_seed)
+    ds.config.set_num_parallel_workers(original_num_parallel_workers)

 if __name__ == "__main__":
     test_random_crop_decode_resize_op(plot=True)
+    test_random_crop_decode_resize_md5()
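The rewritten comparison above can assert `mse == 0` because, with the dataset seed pinned, the fused RandomCropDecodeResize op and the separate decode + RandomResizedCrop pipeline draw identical random crop parameters. The underlying principle, sketched with numpy's generic RNG (an analogy, not the MindSpore random-op machinery):

```python
import numpy as np

def random_crop(img, size, rng):
    # Pick a random top-left corner, then take a size x size window.
    h, w = img.shape[:2]
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    return img[top:top + size, left:left + size]

img = np.arange(100).reshape(10, 10)
# Two pipelines seeded identically produce identical crops (MSE == 0).
crop_a = random_crop(img, 4, np.random.default_rng(10))
crop_b = random_crop(img, 4, np.random.default_rng(10))
assert np.mean((crop_a - crop_b) ** 2) == 0
```

This is also why the md5 tests call `config_get_set_seed` before hashing: without a fixed seed the random ops would produce a different crop, and a different checksum, on every run.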
tests/ut/python/dataset/test_random_erasing.py
浏览文件 @
5cd31363
...
@@ -20,11 +20,13 @@ import numpy as np
 import mindspore.dataset as ds
 import mindspore.dataset.transforms.vision.py_transforms as vision
 from mindspore import log as logger
-from util import diff_mse, visualize_image
+from util import diff_mse, visualize_image, save_and_check_md5, \
+    config_get_set_seed, config_get_set_num_parallel_workers

 DATA_DIR = ["../data/dataset/test_tf_file_3_images/train-0000-of-0001.data"]
 SCHEMA_DIR = "../data/dataset/test_tf_file_3_images/datasetSchema.json"

+GENERATE_GOLDEN = False


 def test_random_erasing_op(plot=False):
     """
...
@@ -69,5 +71,32 @@ def test_random_erasing_op(plot=False):
         visualize_image(image_1, image_2, mse)


+def test_random_erasing_md5():
+    """
+    Test RandomErasing with md5 check
+    """
+    logger.info("Test RandomErasing with md5 check")
+    original_seed = config_get_set_seed(5)
+    original_num_parallel_workers = config_get_set_num_parallel_workers(1)
+
+    # Generate dataset
+    data = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False)
+    transforms_1 = [
+        vision.Decode(),
+        vision.ToTensor(),
+        vision.RandomErasing(value='random')
+    ]
+    transform_1 = vision.ComposeOp(transforms_1)
+    data = data.map(input_columns=["image"], operations=transform_1())
+
+    # Compare with expected md5 from images
+    filename = "random_erasing_01_result.npz"
+    save_and_check_md5(data, filename, generate_golden=GENERATE_GOLDEN)
+
+    # Restore configuration
+    ds.config.set_seed(original_seed)
+    ds.config.set_num_parallel_workers(original_num_parallel_workers)


 if __name__ == "__main__":
     test_random_erasing_op(plot=True)
+    test_random_erasing_md5()
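`config_get_set_seed` and `config_get_set_num_parallel_workers` follow a save-old/set-new pattern so each md5 test can pin the global config and restore it afterwards. A hedged sketch using a stand-in config object (the real helpers operate on `mindspore.dataset.config`, not this class):

```python
class _Config:
    """Stand-in for mindspore.dataset.config, for illustration only."""
    def __init__(self):
        self._seed, self._workers = 5489, 8
    def get_seed(self): return self._seed
    def set_seed(self, seed): self._seed = seed
    def get_num_parallel_workers(self): return self._workers
    def set_num_parallel_workers(self, num): self._workers = num

config = _Config()

def config_get_set_seed(seed_new):
    # Save the current seed, install the new one, return the old for later restore.
    seed_old = config.get_seed()
    config.set_seed(seed_new)
    return seed_old

def config_get_set_num_parallel_workers(num_new):
    # Same pattern for the parallel-worker count.
    num_old = config.get_num_parallel_workers()
    config.set_num_parallel_workers(num_new)
    return num_old
```

Returning the old value is what lets the tests end with `ds.config.set_seed(original_seed)` without leaking a fixed seed into later tests.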
tests/ut/python/dataset/test_random_horizontal_flip.py
...
@@ -49,7 +49,7 @@ def test_random_horizontal_op(plot=False):
     # First dataset
     data1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False)
     decode_op = c_vision.Decode()
-    random_horizontal_op = c_vision.RandomHorizontalFlip()
+    random_horizontal_op = c_vision.RandomHorizontalFlip(1.0)
     data1 = data1.map(input_columns=["image"], operations=decode_op)
     data1 = data1.map(input_columns=["image"], operations=random_horizontal_op)
...
@@ -69,6 +69,7 @@ def test_random_horizontal_op(plot=False):
         image_h_flipped_2 = h_flip(image)
         mse = diff_mse(image_h_flipped, image_h_flipped_2)
+        assert mse == 0
         logger.info("image_{}, mse: {}".format(num_iter + 1, mse))
         num_iter += 1
     if plot:
...
tests/ut/python/dataset/test_random_resize.py
...
@@ -13,16 +13,18 @@
 # limitations under the License.
 # ==============================================================================
 """
-Testing the resize op in DE
+Testing RandomResize op in DE
 """
 import mindspore.dataset as ds
 import mindspore.dataset.transforms.vision.c_transforms as vision
 from mindspore import log as logger
-from util import visualize_list
+from util import visualize_list, save_and_check_md5, \
+    config_get_set_seed, config_get_set_num_parallel_workers

 DATA_DIR = ["../data/dataset/test_tf_file_3_images/train-0000-of-0001.data"]
 SCHEMA_DIR = "../data/dataset/test_tf_file_3_images/datasetSchema.json"

+GENERATE_GOLDEN = False


 def test_random_resize_op(plot=False):
     """
...
@@ -52,5 +54,29 @@ def test_random_resize_op(plot=False):
         visualize_list(image_original, image_resized)


+def test_random_resize_md5():
+    """
+    Test RandomResize with md5 check
+    """
+    logger.info("Test RandomResize with md5 check")
+    original_seed = config_get_set_seed(5)
+    original_num_parallel_workers = config_get_set_num_parallel_workers(1)
+
+    # Generate dataset
+    data = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False)
+    decode_op = vision.Decode()
+    resize_op = vision.RandomResize(10)
+    data = data.map(input_columns=["image"], operations=decode_op)
+    data = data.map(input_columns=["image"], operations=resize_op)
+
+    # Compare with expected md5 from images
+    filename = "random_resize_01_result.npz"
+    save_and_check_md5(data, filename, generate_golden=GENERATE_GOLDEN)
+
+    # Restore configuration
+    ds.config.set_seed(original_seed)
+    ds.config.set_num_parallel_workers(original_num_parallel_workers)


 if __name__ == "__main__":
     test_random_resize_op(plot=True)
+    test_random_resize_md5()
tests/ut/python/dataset/test_random_rotation.py
...
@@ -21,18 +21,21 @@ import cv2
 import mindspore.dataset as ds
 import mindspore.dataset.transforms.vision.c_transforms as c_vision
 import mindspore.dataset.transforms.vision.py_transforms as py_vision
+from mindspore.dataset.transforms.vision.utils import Inter
 from mindspore import log as logger
-from util import visualize_image, diff_mse
+from util import visualize_image, visualize_list, diff_mse, save_and_check_md5, \
+    config_get_set_seed, config_get_set_num_parallel_workers

 DATA_DIR = ["../data/dataset/test_tf_file_3_images/train-0000-of-0001.data"]
 SCHEMA_DIR = "../data/dataset/test_tf_file_3_images/datasetSchema.json"

 GENERATE_GOLDEN = False


-def test_random_rotation_op(plot=False):
+def test_random_rotation_op_c(plot=False):
     """
-    Test RandomRotation op
+    Test RandomRotation in c++ transformations op
     """
-    logger.info("test_random_rotation_op")
+    logger.info("test_random_rotation_op_c")

     # First dataset
     data1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, shuffle=False)
...
@@ -62,6 +65,42 @@ def test_random_rotation_op(plot=False):
         visualize_image(original, rotation_de, mse, rotation_cv)


+def test_random_rotation_op_py(plot=False):
+    """
+    Test RandomRotation in python transformations op
+    """
+    logger.info("test_random_rotation_op_py")
+
+    # First dataset
+    data1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, shuffle=False)
+    # use [90, 90] to force rotate 90 degrees, expand is set to be True to match output size
+    transform1 = py_vision.ComposeOp([py_vision.Decode(),
+                                      py_vision.RandomRotation((90, 90), expand=True),
+                                      py_vision.ToTensor()])
+    data1 = data1.map(input_columns=["image"], operations=transform1())
+
+    # Second dataset
+    data2 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False)
+    transform2 = py_vision.ComposeOp([py_vision.Decode(),
+                                      py_vision.ToTensor()])
+    data2 = data2.map(input_columns=["image"], operations=transform2())
+
+    num_iter = 0
+    for item1, item2 in zip(data1.create_dict_iterator(), data2.create_dict_iterator()):
+        if num_iter > 0:
+            break
+        rotation_de = (item1["image"].transpose(1, 2, 0) * 255).astype(np.uint8)
+        original = (item2["image"].transpose(1, 2, 0) * 255).astype(np.uint8)
+        logger.info("shape before rotate: {}".format(original.shape))
+        rotation_cv = cv2.rotate(original, cv2.ROTATE_90_COUNTERCLOCKWISE)
+        mse = diff_mse(rotation_de, rotation_cv)
+        logger.info("random_rotation_op_{}, mse: {}".format(num_iter + 1, mse))
+        assert mse == 0
+        num_iter += 1
+        if plot:
+            visualize_image(original, rotation_de, mse, rotation_cv)
 def test_random_rotation_expand():
     """
     Test RandomRotation op
...
@@ -71,7 +110,7 @@ def test_random_rotation_expand():
     # First dataset
     data1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False)
     decode_op = c_vision.Decode()
-    # use [90, 90] to force rotate 90 degrees, expand is set to be True to match output size
+    # expand is set to be True to match output size
     random_rotation_op = c_vision.RandomRotation((0, 90), expand=True)
     data1 = data1.map(input_columns=["image"], operations=decode_op)
     data1 = data1.map(input_columns=["image"], operations=random_rotation_op)
...
@@ -83,9 +122,50 @@ def test_random_rotation_expand():
         num_iter += 1


-def test_rotation_diff():
+def test_random_rotation_md5():
     """
-    Test Rotation op
+    Test RandomRotation with md5 check
     """
+    logger.info("Test RandomRotation with md5 check")
+    original_seed = config_get_set_seed(5)
+    original_num_parallel_workers = config_get_set_num_parallel_workers(1)
+
+    # First dataset
+    data1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False)
+    decode_op = c_vision.Decode()
+    resize_op = c_vision.RandomRotation((0, 90),
+                                        expand=True,
+                                        resample=Inter.BILINEAR,
+                                        center=(50, 50),
+                                        fill_value=150)
+    data1 = data1.map(input_columns=["image"], operations=decode_op)
+    data1 = data1.map(input_columns=["image"], operations=resize_op)
+
+    # Second dataset
+    data2 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, shuffle=False)
+    transform2 = py_vision.ComposeOp([py_vision.Decode(),
+                                      py_vision.RandomRotation((0, 90),
+                                                               expand=True,
+                                                               resample=Inter.BILINEAR,
+                                                               center=(50, 50),
+                                                               fill_value=150),
+                                      py_vision.ToTensor()])
+    data2 = data2.map(input_columns=["image"], operations=transform2())
+
+    # Compare with expected md5 from images
+    filename1 = "random_rotation_01_c_result.npz"
+    save_and_check_md5(data1, filename1, generate_golden=GENERATE_GOLDEN)
+    filename2 = "random_rotation_01_py_result.npz"
+    save_and_check_md5(data2, filename2, generate_golden=GENERATE_GOLDEN)
+
+    # Restore configuration
+    ds.config.set_seed(original_seed)
+    ds.config.set_num_parallel_workers(original_num_parallel_workers)
+
+
+def test_rotation_diff(plot=False):
+    """
+    Test RandomRotation op
+    """
     logger.info("test_random_rotation_op")
...
@@ -93,7 +173,7 @@ def test_rotation_diff():
     data1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False)
     decode_op = c_vision.Decode()
-    rotation_op = c_vision.RandomRotation((45, 45), expand=True)
+    rotation_op = c_vision.RandomRotation((45, 45))
     ctrans = [decode_op,
               rotation_op
               ]
...
@@ -103,7 +183,7 @@ def test_rotation_diff():
     # Second dataset
     transforms = [
         py_vision.Decode(),
-        py_vision.RandomRotation((45, 45), expand=True),
+        py_vision.RandomRotation((45, 45)),
         py_vision.ToTensor(),
     ]
     transform = py_vision.ComposeOp(transforms)
...
@@ -111,10 +191,13 @@ def test_rotation_diff():
     data2 = data2.map(input_columns=["image"], operations=transform())

     num_iter = 0
+    image_list_c, image_list_py = [], []
     for item1, item2 in zip(data1.create_dict_iterator(), data2.create_dict_iterator()):
         num_iter += 1
         c_image = item1["image"]
         py_image = (item2["image"].transpose(1, 2, 0) * 255).astype(np.uint8)
+        image_list_c.append(c_image)
+        image_list_py.append(py_image)

         logger.info("shape of c_image: {}".format(c_image.shape))
         logger.info("shape of py_image: {}".format(py_image.shape))
...
@@ -122,8 +205,15 @@ def test_rotation_diff():
         logger.info("dtype of c_image: {}".format(c_image.dtype))
         logger.info("dtype of py_image: {}".format(py_image.dtype))
+        mse = diff_mse(c_image, py_image)
+        assert mse < 0.001  # Rounding error
+    if plot:
+        visualize_list(image_list_c, image_list_py, visualize_mode=2)


 if __name__ == "__main__":
-    test_random_rotation_op(True)
+    test_random_rotation_op_c(plot=True)
+    test_random_rotation_op_py(plot=True)
     test_random_rotation_expand()
-    test_rotation_diff()
+    test_random_rotation_md5()
+    test_rotation_diff(plot=True)
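`test_random_rotation_op_py` validates a forced 90-degree rotation against `cv2.rotate(original, cv2.ROTATE_90_COUNTERCLOCKWISE)`. The same reference rotation is expressible in plain NumPy, which makes the exact `mse == 0` expectation easy to reason about (a pure index permutation, no interpolation):

```python
import numpy as np

def rotate90_ccw(img):
    # Counterclockwise 90-degree rotation over the first two (H, W) axes,
    # matching cv2.rotate(img, cv2.ROTATE_90_COUNTERCLOCKWISE).
    return np.rot90(img)
```

Because a 90-degree rotation is lossless, the Python transform and the OpenCV reference produce bit-identical pixels, hence the strict equality assertion rather than a tolerance.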
tests/ut/python/dataset/test_random_sharpness.py
...
@@ -16,14 +16,17 @@
 Testing RandomSharpness op in DE
 """
 import numpy as np

+import mindspore.dataset as ds
 import mindspore.dataset.engine as de
 import mindspore.dataset.transforms.vision.py_transforms as F
 from mindspore import log as logger
-from util import visualize_list
+from util import visualize_list, diff_mse, save_and_check_md5, \
+    config_get_set_seed, config_get_set_num_parallel_workers

 DATA_DIR = "../data/dataset/testImageNetData/train/"

+GENERATE_GOLDEN = False


 def test_random_sharpness(degrees=(0.1, 1.9), plot=False):
     """
...
@@ -32,13 +35,13 @@ def test_random_sharpness(degrees=(0.1, 1.9), plot=False):
     logger.info("Test RandomSharpness")

     # Original Images
-    ds = de.ImageFolderDatasetV2(dataset_dir=DATA_DIR, shuffle=False)
+    data = de.ImageFolderDatasetV2(dataset_dir=DATA_DIR, shuffle=False)

     transforms_original = F.ComposeOp([F.Decode(),
                                        F.Resize((224, 224)),
                                        F.ToTensor()])

-    ds_original = ds.map(input_columns="image",
+    ds_original = data.map(input_columns="image",
                          operations=transforms_original())

     ds_original = ds_original.batch(512)
...
@@ -52,14 +55,14 @@ def test_random_sharpness(degrees=(0.1, 1.9), plot=False):
             axis=0)

     # Random Sharpness Adjusted Images
-    ds = de.ImageFolderDatasetV2(dataset_dir=DATA_DIR, shuffle=False)
+    data = de.ImageFolderDatasetV2(dataset_dir=DATA_DIR, shuffle=False)

     transforms_random_sharpness = F.ComposeOp([F.Decode(),
                                                F.Resize((224, 224)),
                                                F.RandomSharpness(degrees=degrees),
                                                F.ToTensor()])

-    ds_random_sharpness = ds.map(input_columns="image",
+    ds_random_sharpness = data.map(input_columns="image",
                                  operations=transforms_random_sharpness())

     ds_random_sharpness = ds_random_sharpness.batch(512)
...
@@ -75,14 +78,45 @@ def test_random_sharpness(degrees=(0.1, 1.9), plot=False):
     num_samples = images_original.shape[0]
     mse = np.zeros(num_samples)
     for i in range(num_samples):
-        mse[i] = np.mean((images_random_sharpness[i] - images_original[i]) ** 2)
+        mse[i] = diff_mse(images_random_sharpness[i], images_original[i])
     logger.info("MSE= {}".format(str(np.mean(mse))))

     if plot:
         visualize_list(images_original, images_random_sharpness)


+def test_random_sharpness_md5():
+    """
+    Test RandomSharpness with md5 comparison
+    """
+    logger.info("Test RandomSharpness with md5 comparison")
+    original_seed = config_get_set_seed(5)
+    original_num_parallel_workers = config_get_set_num_parallel_workers(1)
+
+    # define map operations
+    transforms = [
+        F.Decode(),
+        F.RandomSharpness((0.5, 1.5)),
+        F.ToTensor()
+    ]
+    transform = F.ComposeOp(transforms)
+
+    # Generate dataset
+    data = de.ImageFolderDatasetV2(dataset_dir=DATA_DIR, shuffle=False)
+    data = data.map(input_columns=["image"], operations=transform())
+
+    # check results with md5 comparison
+    filename = "random_sharpness_01_result.npz"
+    save_and_check_md5(data, filename, generate_golden=GENERATE_GOLDEN)
+
+    # Restore configuration
+    ds.config.set_seed(original_seed)
+    ds.config.set_num_parallel_workers(original_num_parallel_workers)


 if __name__ == "__main__":
-    test_random_sharpness()
+    test_random_sharpness(plot=True)
+    test_random_sharpness(degrees=(0.5, 1.5), plot=True)
+    test_random_sharpness_md5()
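`save_and_check_md5` compares the pipeline output against a stored golden `.npz` file (the `*_result.npz` files added in this commit); its implementation is not shown in this diff, but the hashing side presumably reduces the output arrays to a digest along these lines (names here are illustrative, not the real `util` API):

```python
import hashlib
import numpy as np

def data_md5(arrays):
    # Hash a sequence of numpy arrays deterministically: fold both the
    # shape and the raw bytes of each array into one md5 digest, so any
    # pixel or shape change in the pipeline output changes the hash.
    m = hashlib.md5()
    for a in arrays:
        a = np.ascontiguousarray(a)
        m.update(str(a.shape).encode())
        m.update(a.tobytes())
    return m.hexdigest()
```

This is why the tests pin the seed and worker count first: md5 comparison only works when the random pipeline is fully reproducible.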
tests/ut/python/dataset/test_random_vertical_flip.py
...
@@ -49,7 +49,7 @@ def test_random_vertical_op(plot=False):
     # First dataset
     data1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False)
     decode_op = c_vision.Decode()
-    random_vertical_op = c_vision.RandomVerticalFlip()
+    random_vertical_op = c_vision.RandomVerticalFlip(1.0)
     data1 = data1.map(input_columns=["image"], operations=decode_op)
     data1 = data1.map(input_columns=["image"], operations=random_vertical_op)
...
@@ -65,12 +65,11 @@ def test_random_vertical_op(plot=False):
             break
         image_v_flipped = item1["image"]
         image = item2["image"]
         image_v_flipped_2 = v_flip(image)
-        diff = image_v_flipped - image_v_flipped_2
-        mse = np.sum(np.power(diff, 2))
+        mse = diff_mse(image_v_flipped, image_v_flipped_2)
+        assert mse == 0
         logger.info("image_{}, mse: {}".format(num_iter + 1, mse))
         num_iter += 1
     if plot:
...
tests/ut/python/dataset/test_rescale_op.py
...
@@ -18,11 +18,12 @@ Testing the rescale op in DE
 import mindspore.dataset as ds
 import mindspore.dataset.transforms.vision.c_transforms as vision
 from mindspore import log as logger
-from util import visualize_image, diff_mse
+from util import visualize_image, diff_mse, save_and_check_md5

 DATA_DIR = ["../data/dataset/test_tf_file_3_images/train-0000-of-0001.data"]
 SCHEMA_DIR = "../data/dataset/test_tf_file_3_images/datasetSchema.json"

+GENERATE_GOLDEN = False


 def rescale_np(image):
     """
...
@@ -72,11 +73,33 @@ def test_rescale_op(plot=False):
         image_de_rescaled = item2["image"]
         image_np_rescaled = get_rescaled(num_iter)
         mse = diff_mse(image_de_rescaled, image_np_rescaled)
+        assert mse < 0.001  # rounding error
         logger.info("image_{}, mse: {}".format(num_iter + 1, mse))
         num_iter += 1
         if plot:
             visualize_image(image_original, image_de_rescaled, mse, image_np_rescaled)


+def test_rescale_md5():
+    """
+    Test Rescale with md5 comparison
+    """
+    logger.info("Test Rescale with md5 comparison")
+
+    # generate dataset
+    data = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False)
+    decode_op = vision.Decode()
+    rescale_op = vision.Rescale(1.0 / 255.0, -1.0)
+
+    # apply map operations on images
+    data = data.map(input_columns=["image"], operations=decode_op)
+    data = data.map(input_columns=["image"], operations=rescale_op)
+
+    # check results with md5 comparison
+    filename = "rescale_01_result.npz"
+    save_and_check_md5(data, filename, generate_golden=GENERATE_GOLDEN)


 if __name__ == "__main__":
     test_rescale_op(plot=True)
+    test_rescale_md5()
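`vision.Rescale(rescale, shift)` applies `output = image * rescale + shift` per pixel, so `Rescale(1.0 / 255.0, -1.0)` maps a uint8 image into [-1.0, 0.0]. The file's own `rescale_np` reference (body elided in this diff) is presumably equivalent to:

```python
import numpy as np

def rescale_np(image, rescale=1.0 / 255.0, shift=-1.0):
    # Elementwise linear rescale, mirroring vision.Rescale(rescale, shift).
    # Cast to float32 first so uint8 inputs do not wrap around.
    return image.astype(np.float32) * rescale + shift
```

The test tolerates `mse < 0.001` rather than exact equality because the C++ op and the NumPy reference may differ in float rounding.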
tests/ut/python/dataset/test_uniform_augment.py
...
@@ -21,7 +21,7 @@ import mindspore.dataset.engine as de
 import mindspore.dataset.transforms.vision.c_transforms as C
 import mindspore.dataset.transforms.vision.py_transforms as F
 from mindspore import log as logger
-from util import visualize_list
+from util import visualize_list, diff_mse

 DATA_DIR = "../data/dataset/testImageNetData/train/"
...
@@ -83,7 +83,7 @@ def test_uniform_augment(plot=False, num_ops=2):
     num_samples = images_original.shape[0]
     mse = np.zeros(num_samples)
     for i in range(num_samples):
-        mse[i] = np.mean((images_ua[i] - images_original[i]) ** 2)
+        mse[i] = diff_mse(images_ua[i], images_original[i])
     logger.info("MSE= {}".format(str(np.mean(mse))))

     if plot:
...
@@ -147,7 +147,7 @@ def test_cpp_uniform_augment(plot=False, num_ops=2):
     num_samples = images_original.shape[0]
     mse = np.zeros(num_samples)
     for i in range(num_samples):
-        mse[i] = np.mean((images_ua[i] - images_original[i]) ** 2)
+        mse[i] = diff_mse(images_ua[i], images_original[i])
     logger.info("MSE= {}".format(str(np.mean(mse))))
...
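The per-sample loop `mse[i] = diff_mse(images_ua[i], images_original[i])` can also be vectorized when the samples are stacked in one array; a sketch assuming an (N, C, H, W) float batch:

```python
import numpy as np

def batch_mse(batch_a, batch_b):
    # Per-sample mean squared error: average over all non-batch axes,
    # returning one MSE value per sample in the batch.
    diff = batch_a.astype(np.float64) - batch_b.astype(np.float64)
    return np.square(diff).reshape(diff.shape[0], -1).mean(axis=1)
```

The loop in the tests keeps the per-image helper for readability; the vectorized form is just the same computation without Python-level iteration.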