Mace (forked from Xiaomi / Mace)
Commit 437da1ea
Authored Apr 26, 2019 by liuqi
Support accuracy validation using python script as a plugin.
Parent 3e9bb73e
Showing 7 changed files with 502 additions and 195 deletions (+502, -195)

docs/user_guide/advanced_usage.rst       +15    -0
docs/user_guide/models/demo_models.yml    +4    -0
tools/accuracy_validator.py             +153    -0
tools/common.py                           +1    -0
tools/converter.py                       +10    -0
tools/device.py                         +265  -174
tools/sh_commands.py                     +54   -21
docs/user_guide/advanced_usage.rst

@@ -76,6 +76,8 @@ in one deployment file.
       - The numerical range of the input tensors' data, default [-1, 1]. It is only for test.
     * - validation_inputs_data
       - [optional] Specify Numpy validation inputs. When not provided, [-1, 1] random values will be used.
+    * - accuracy_validation_script
+      - [optional] Specify the accuracy validation script as a plugin to test accuracy, see `doc <#validate-accuracy-of-mace-model>`__.
     * - validation_threshold
       - [optional] Specify the similarity threshold for validation. A dict with key in 'CPU', 'GPU' and/or 'HEXAGON' and value <= 1.0.
     * - backend
@@ -358,6 +360,19 @@ Tuning for specific SoC's GPU

         // ... Same with the code in basic usage.

+Validate accuracy of MACE model
+-------------------------------
+
+MACE supports a **Python validation script** as a plugin to test model accuracy.
+The plugin script can be used for two purposes:
+
+1. Test the **accuracy (e.g. Top-1)** of a MACE model (especially a quantized model)
+   converted from another framework (e.g. TensorFlow).
+2. Show some real output of the model if you want to inspect it.
+
+The script defines interfaces like `preprocess` and `postprocess` to handle input/output
+and calculate the accuracy; refer to the `sample code <https://github.com/XiaoMi/mace/tree/master/tools/accuracy_validator.py>`__ for details.
+The sample code shows how to calculate Top-1 accuracy on the ImageNet validation dataset.
+
 Useful Commands
 ---------------

 * **run the model**
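The interface the driver expects from such a plugin can be sketched with synthetic data. The class below is illustrative, not the shipped sample: only the five method names (`sample_size`, `batch_size`, `preprocess`, `postprocess`, `result`) are what the tools rely on, and the tensor names `"input"`/`"output"` are assumptions for the demo.

```python
import numpy as np


class AccuracyValidator(object):
    """Minimal sketch of the accuracy-validation plugin interface."""

    def __init__(self, **kwargs):
        # Synthetic stand-in for a real labeled dataset: the "label" of
        # each sample is simply the argmax of its feature vector.
        self._samples = np.random.rand(8, 4).astype(np.float32)
        self._labels = np.argmax(self._samples, axis=1)
        self._correct = 0

    def sample_size(self):
        # Total number of samples in the validation set.
        return len(self._samples)

    def batch_size(self):
        # Keep consistent with input_shapes in the deployment file.
        return 2

    def preprocess(self, start, end, **kwargs):
        # Return the batched inputs' map (name: data) fed into the model.
        end = min(end, self.sample_size())
        return {"input": self._samples[start:end].tolist()}

    def postprocess(self, start, end, output_map, **kwargs):
        # Accumulate Top-1 correctness over this batch.
        end = min(end, self.sample_size())
        pred = np.argmax(np.array(output_map["output"]), axis=-1)
        self._correct += int(np.sum(pred == self._labels[start:end]))

    def result(self):
        # Return the Top-1 accuracy over the whole set.
        return self._correct * 1.0 / self.sample_size()
```

Driving this plugin with an identity "model" (feeding the preprocessed batch straight back as the output map) reproduces 100% accuracy, which is a cheap way to sanity-check a new validator before wiring it to a real device run.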
docs/user_guide/models/demo_models.yml

@@ -2,6 +2,8 @@
 library_name: mobile_squeeze
 # host, armeabi-v7a or arm64-v8a
 target_abis: [arm64-v8a]
 # soc's name or all
 target_socs: [all]
 # The build mode for model(s).
 # 'code' for transferring model(s) into cpp code, 'file' for keeping model(s) in protobuf file(s) (.pb).
 model_graph_format: code
@@ -43,6 +45,8 @@ models:
         - prob
       output_shapes:
         - 1,1,1,1000
+      accuracy_validation_script:
+        - path/to/your/script
       runtime: cpu+gpu
       limit_opencl_kernel_time: 0
       obfuscate: 0
tools/accuracy_validator.py (new file, mode 100644)

# Copyright 2019 The MACE Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os.path

import numpy as np
from PIL import Image


class AccuracyValidator(object):
    """Accuracy Validator Plugin:
    Usage: This script is used to calculate the accuracy (e.g. Top-1)
        of a MACE model.
        Users can replace this validator script to do other accuracy
        validation (e.g. mIOU for segmentation); the new script's
        interface must match the current AccuracyValidator exactly.
    Warning: Do not use relative paths in this script.
    """
    def __init__(self, **kwargs):
        # absolute paths
        validation_set_image_dir = \
            '/path/to/your/validation/set/directory'
        validation_set_label_file_path = \
            '/path/to/imagenet_groundtruth_labels.txt'
        black_list_file_path = \
            '/path/to/imagenet_blacklist.txt'
        imagenet_classes_file = \
            '/path/to/imagenet_classes.txt'
        self._imagenet_classes = [
            line.rstrip('\n') for line in open(imagenet_classes_file)]
        imagenet_classes_map = {}
        for idx in range(len(self._imagenet_classes)):
            imagenet_classes_map[self._imagenet_classes[idx]] = idx
        black_list = [
            int(line.rstrip('\n')) for line in open(black_list_file_path)]

        self._samples = []
        self._labels = [0]  # image id start from 1
        self._correct_count = 0
        for img_file in os.listdir(validation_set_image_dir):
            if img_file.endswith(".JPEG"):
                img_id = int(
                    os.path.splitext(img_file)[0].split('_')[-1])
                if img_id not in black_list:
                    self._samples.append(
                        os.path.join(validation_set_image_dir, img_file))
        for label in open(validation_set_label_file_path):
            label = label.rstrip('\n')
            self._labels.append(imagenet_classes_map[label])

    def sample_size(self):
        """
        :return: the number of samples in the validation set
        """
        return len(self._samples)

    def batch_size(self):
        """
        Batch size used to speed up validation.
        Keep it the same as the batch size of input_shapes
        in the model deployment file (.yml); do not set it too large.
        :return: batch size
        """
        return 1

    def preprocess(self, sample_idx_start, sample_idx_end, **kwargs):
        """
        Pre-process the input samples.
        :param sample_idx_start: start index of the samples.
        :param sample_idx_end: end index of the samples (exclusive).
        :param kwargs: other parameters.
        :return: the batched inputs' map (name: data) fed into your model
        """
        inputs = {}
        batch_sample_data = []
        sample_idx_end = min(sample_idx_end, self.sample_size())
        for sample_idx in range(sample_idx_start, sample_idx_end):
            sample_file_path = self._samples[sample_idx]
            sample_img = Image.open(sample_file_path).resize((224, 224))
            sample_data = np.asarray(sample_img, dtype=np.float32)
            sample_data = (2.0 / 255.0) * sample_data - 1.0
            batch_sample_data.append(sample_data.tolist())
        inputs["input"] = batch_sample_data
        return inputs

    def postprocess(self,
                    sample_idx_start,
                    sample_idx_end,
                    output_map,
                    **kwargs):
        """
        Post-process the outputs of your model and accumulate the accuracy.
        :param sample_idx_start: start index of the input samples
        :param sample_idx_end: end index of the input samples
        :param output_map: output map of the model
        :param kwargs: other parameters.
        :return: None
        """
        output = output_map['MobilenetV2/Predictions/Reshape_1']
        sample_idx_end = min(sample_idx_end, self.sample_size())
        batch_size = sample_idx_end - sample_idx_start
        output = np.array(output).reshape((batch_size, -1))
        output = np.argmax(output, axis=-1)
        output_idx = 0
        for sample_idx in range(sample_idx_start, sample_idx_end):
            sample_file_path = self._samples[sample_idx]
            img_id = int(
                os.path.splitext(sample_file_path)[0].split('_')[-1])
            if output[output_idx] == self._labels[img_id]:
                self._correct_count += 1
            else:
                print(img_id, 'predict %s vs gt %s' %
                      (self._imagenet_classes[output[output_idx]],
                       self._imagenet_classes[self._labels[img_id]]))
            output_idx += 1

    def result(self):
        """
        Print or show the result.
        :return: None
        """
        print("==========================================")
        print("Top 1 accuracy: %f" %
              (self._correct_count * 1.0 / self.sample_size()))
        print("==========================================")


if __name__ == '__main__':
    # sample usage code
    validator = AccuracyValidator()
    sample_size = validator.sample_size()
    val_batch_size = validator.batch_size()
    for i in range(0, sample_size, val_batch_size):
        inputs = validator.preprocess(i, i + val_batch_size)
        print(np.array(inputs['input']).shape)
        output_map = {
            'MobilenetV2/Predictions/Reshape_1':
                np.array([[0, 1], [1, 0]])
        }
        validator.postprocess(i, i + val_batch_size, output_map)
    validator.result()
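The tools load a user's validator by file path with `imp.load_source` (deprecated since Python 3.4); the same by-path dynamic load can be sketched with `importlib`. The plugin source and paths below are made up for the demo — only the loading pattern is the point.

```python
import importlib.util
import os
import tempfile

# Write a tiny validator script to disk to stand in for the user's plugin.
plugin_src = '''
class AccuracyValidator(object):
    def sample_size(self):
        return 3
'''
plugin_path = os.path.join(tempfile.mkdtemp(), "my_validator.py")
with open(plugin_path, "w") as f:
    f.write(plugin_src)

# Load the script by its path, as the tools do with imp.load_source.
spec = importlib.util.spec_from_file_location("accuracy_val_module",
                                              plugin_path)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)

# The driver then instantiates the class defined in the loaded module.
validator = module.AccuracyValidator()
```

Loading by path (rather than requiring the plugin on `sys.path`) is what lets the deployment file point at any absolute script location.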
tools/common.py

@@ -413,6 +413,7 @@ class YAMLKeyword(object):
     cl_mem_type = 'cl_mem_type'
     backend = 'backend'
     validation_outputs_data = 'validation_outputs_data'
+    accuracy_validation_script = 'accuracy_validation_script'
     docker_image_tag = 'docker_image_tag'
     dockerfile_path = 'dockerfile_path'
     dockerfile_sha256_checksum = 'dockerfile_sha256_checksum'
tools/converter.py

@@ -532,6 +532,16 @@ def format_model_config(flags):
                 subgraph[YAMLKeyword.input_ranges] = \
                     [str(v) for v in subgraph[YAMLKeyword.input_ranges]]
+                accuracy_validation_script = subgraph.get(
+                    YAMLKeyword.accuracy_validation_script, "")
+                if isinstance(accuracy_validation_script, list):
+                    mace_check(len(accuracy_validation_script) == 1,
+                               ModuleName.YAML_CONFIG,
+                               "Only support one accuracy validation script")
+                    accuracy_validation_script = \
+                        accuracy_validation_script[0]
+                subgraph[YAMLKeyword.accuracy_validation_script] = \
+                    accuracy_validation_script
                 for key in [YAMLKeyword.limit_opencl_kernel_time,
                             YAMLKeyword.nnlib_graph_mode,
                             YAMLKeyword.obfuscate,
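The converter accepts `accuracy_validation_script` either as a plain string or as a one-element YAML list (the deployment file sample uses the list form). That normalization can be sketched in isolation — the helper name is ours, not MACE's:

```python
def normalize_script_value(value):
    """Accept either a plain string or a one-element list, collapsing
    the list form to its single element, as the converter does for
    accuracy_validation_script."""
    if isinstance(value, list):
        # Mirrors the mace_check: only one validation script is allowed.
        assert len(value) == 1, "Only support one accuracy validation script"
        return value[0]
    return value
```

An empty string (the default when the key is absent) passes through unchanged, which is what lets the run path later branch on `if accuracy_validation_script:`.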
tools/device.py

@@ -12,6 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

+import numpy as np
 import os
 import sys
 import socket
@@ -387,7 +388,7 @@ class DeviceWrapper:
         subgraphs = model_config[YAMLKeyword.subgraphs]
         # generate input data
-        sh_commands.gen_random_input(
+        sh_commands.gen_input(
             model_output_dir,
             subgraphs[0][YAMLKeyword.input_tensors],
             subgraphs[0][YAMLKeyword.input_shapes],
@@ -460,18 +461,14 @@ class DeviceWrapper:
         return output_configs

-    def run_specify_abi(self, flags, configs, target_abi):
-        if target_abi not in self.target_abis:
-            six.print_('The device %s with soc %s do not support the abi %s' %
-                       (self.device_name, self.target_socs, target_abi))
-            return
+    def run_model(self, flags, configs, target_abi, model_name,
+                  output_config, runtime, tuning):
         library_name = configs[YAMLKeyword.library_name]
+        mace_lib_type = flags.mace_lib_type
         embed_model_data = \
             configs[YAMLKeyword.model_data_format] == ModelFormat.code
         build_tmp_binary_dir = get_build_binary_dir(library_name, target_abi)
         # get target name for run
-        mace_lib_type = flags.mace_lib_type
         if flags.example:
             if mace_lib_type == MACELibType.static:
                 target_name = EXAMPLE_STATIC_NAME
@@ -483,21 +480,11 @@ class DeviceWrapper:
             else:
                 target_name = MACE_RUN_DYNAMIC_NAME
         link_dynamic = mace_lib_type == MACELibType.dynamic
-        model_output_dirs = []
-        for model_name in configs[YAMLKeyword.models]:
-            check_model_converted(library_name, model_name,
-                                  configs[YAMLKeyword.model_graph_format],
-                                  configs[YAMLKeyword.model_data_format],
-                                  target_abi)
-            if target_abi != ABIType.host:
-                self.clear_data_dir()
-            MaceLogger.header(
-                StringFormatter.block(
-                    'Run model {} on {}'.format(model_name,
-                                                self.device_name)))
         model_config = configs[YAMLKeyword.models][model_name]
         model_runtime = model_config[YAMLKeyword.runtime]
         subgraphs = model_config[YAMLKeyword.subgraphs]
         model_output_base_dir, model_output_dir, mace_model_dir = \
@@ -505,28 +492,9 @@ class DeviceWrapper:
                 library_name, model_name, target_abi, self,
                 model_config[YAMLKeyword.model_file_path])
-            # clear temp model output dir
-            if os.path.exists(model_output_dir):
-                sh.rm('-rf', model_output_dir)
-            os.makedirs(model_output_dir)
-            is_tuned = False
         model_opencl_output_bin_path = ''
         model_opencl_parameter_path = ''
-            if not flags.address_sanitizer \
-                    and not flags.example \
-                    and target_abi != ABIType.host \
-                    and (configs[YAMLKeyword.target_socs]
-                         or flags.target_socs) \
-                    and self.target_socs \
-                    and model_runtime in [RuntimeType.gpu,
-                                          RuntimeType.cpu_gpu] \
-                    and not flags.disable_tuning:
-                self.tuning(library_name, model_name, model_config,
-                            configs[YAMLKeyword.model_graph_format],
-                            configs[YAMLKeyword.model_data_format],
-                            target_abi, mace_lib_type)
-                model_output_dirs.append(model_output_dir)
+        if tuning:
+            model_opencl_output_bin_path = \
+                '{}/{}/{}'.format(model_output_dir,
+                                  BUILD_TMP_OPENCL_BIN_DIR,
@@ -535,8 +503,6 @@ class DeviceWrapper:
                 '{}/{}/{}'.format(model_output_dir,
                                   BUILD_TMP_OPENCL_BIN_DIR,
                                   CL_TUNED_PARAMETER_FILE_NAME)
-                self.clear_data_dir()
-                is_tuned = True
         elif target_abi != ABIType.host and self.target_socs:
             model_opencl_output_bin_path = get_opencl_binary_output_path(
                 library_name, target_abi, self
@@ -544,53 +510,9 @@ class DeviceWrapper:
             model_opencl_parameter_path = get_opencl_parameter_output_path(
                 library_name, target_abi, self
             )
-        sh_commands.gen_random_input(
-            model_output_dir,
-            subgraphs[0][YAMLKeyword.input_tensors],
-            subgraphs[0][YAMLKeyword.input_shapes],
-            subgraphs[0][YAMLKeyword.validation_inputs_data],
-            input_ranges=subgraphs[0][YAMLKeyword.input_ranges],
-            input_data_types=subgraphs[0][YAMLKeyword.input_data_types]
-        )
-        runtime_list = []
-        if target_abi == ABIType.host:
-            runtime_list.append(RuntimeType.cpu)
-        elif model_runtime == RuntimeType.cpu_gpu:
-            runtime_list.extend([RuntimeType.cpu, RuntimeType.gpu])
-        else:
-            runtime_list.append(model_runtime)
-        for runtime in runtime_list:
-            device_type = parse_device_type(runtime)
-            # run for specified soc
-            if not subgraphs[0][YAMLKeyword.check_tensors]:
-                output_nodes = subgraphs[0][YAMLKeyword.output_tensors]
-                output_shapes = subgraphs[0][YAMLKeyword.output_shapes]
-            else:
-                output_nodes = subgraphs[0][YAMLKeyword.check_tensors]
-                output_shapes = subgraphs[0][YAMLKeyword.check_shapes]
-            output_configs = []
-            log_file = ""
-            if flags.layers != "-1":
-                mace_check(configs[YAMLKeyword.model_graph_format] ==
-                           ModelFormat.file and
-                           configs[YAMLKeyword.model_data_format] ==
-                           ModelFormat.file, "Device",
-                           "'--layers' only supports model format 'file'.")
-                output_configs = self.get_layers(mace_model_dir,
-                                                 model_name,
-                                                 flags.layers)
-                log_dir = mace_model_dir + "/" + runtime
-                if os.path.exists(log_dir):
-                    sh.rm('-rf', log_dir)
-                os.makedirs(log_dir)
-                log_file = log_dir + "/log.csv"
-            model_path = "%s/%s.pb" % (mace_model_dir, model_name)
-            output_config = {YAMLKeyword.model_file_path: model_path,
-                             YAMLKeyword.output_tensors: output_nodes,
-                             YAMLKeyword.output_shapes: output_shapes}
-            output_configs.append(output_config)
-            for output_config in output_configs:
-                run_output = self.tuning_run(
+        device_type = parse_device_type(runtime)
+        self.tuning_run(
             abi=target_abi,
             target_dir=build_tmp_binary_dir,
             target_name=target_name,
@@ -633,25 +555,194 @@ class DeviceWrapper:
                     layers_validate_file=output_config[
                         YAMLKeyword.model_file_path]
                 )

+    def get_output_map(self,
+                       target_abi,
+                       output_nodes,
+                       output_shapes,
+                       model_output_dir):
+        output_map = {}
+        for i in range(len(output_nodes)):
+            output_name = output_nodes[i]
+            formatted_name = common.formatted_file_name(
+                "model_out", output_name)
+            if target_abi != "host":
+                if os.path.exists("%s/%s" % (model_output_dir,
+                                             formatted_name)):
+                    sh.rm("-rf",
+                          "%s/%s" % (model_output_dir, formatted_name))
+                self.pull_from_data_dir(formatted_name, model_output_dir)
+            output_file_path = os.path.join(model_output_dir,
+                                            formatted_name)
+            output_shape = [
+                int(x) for x in common.split_shape(output_shapes[i])]
+            output_map[output_name] = np.fromfile(
+                output_file_path, dtype=np.float32).reshape(output_shape)
+        return output_map
+
+    def run_specify_abi(self, flags, configs, target_abi):
+        if target_abi not in self.target_abis:
+            six.print_('The device %s with soc %s do not support the abi %s' %
+                       (self.device_name, self.target_socs, target_abi))
+            return
+        library_name = configs[YAMLKeyword.library_name]
+        model_output_dirs = []
+        for model_name in configs[YAMLKeyword.models]:
+            check_model_converted(library_name, model_name,
+                                  configs[YAMLKeyword.model_graph_format],
+                                  configs[YAMLKeyword.model_data_format],
+                                  target_abi)
+            if target_abi != ABIType.host:
+                self.clear_data_dir()
+            MaceLogger.header(
+                StringFormatter.block(
+                    'Run model {} on {}'.format(model_name,
+                                                self.device_name)))
+            model_config = configs[YAMLKeyword.models][model_name]
+            model_runtime = model_config[YAMLKeyword.runtime]
+            subgraphs = model_config[YAMLKeyword.subgraphs]
+            model_output_base_dir, model_output_dir, mace_model_dir = \
+                get_build_model_dirs(
+                    library_name, model_name, target_abi, self,
+                    model_config[YAMLKeyword.model_file_path])
+            # clear temp model output dir
+            if os.path.exists(model_output_dir):
+                sh.rm('-rf', model_output_dir)
+            os.makedirs(model_output_dir)
+            tuning = False
+            if not flags.address_sanitizer \
+                    and not flags.example \
+                    and target_abi != ABIType.host \
+                    and (configs[YAMLKeyword.target_socs]
+                         or flags.target_socs) \
+                    and self.target_socs \
+                    and model_runtime in [RuntimeType.gpu,
+                                          RuntimeType.cpu_gpu] \
+                    and not flags.disable_tuning:
+                self.tuning(library_name, model_name, model_config,
+                            configs[YAMLKeyword.model_graph_format],
+                            configs[YAMLKeyword.model_data_format],
+                            target_abi, flags.mace_lib_type)
+                model_output_dirs.append(model_output_dir)
+                self.clear_data_dir()
+                tuning = True
+
+            accuracy_validation_script = \
+                subgraphs[0][YAMLKeyword.accuracy_validation_script]
+            output_configs = []
+            if not accuracy_validation_script and flags.layers != "-1":
+                mace_check(configs[YAMLKeyword.model_graph_format] ==
+                           ModelFormat.file and
+                           configs[YAMLKeyword.model_data_format] ==
+                           ModelFormat.file, "Device",
+                           "'--layers' only supports model format 'file'.")
+                output_configs = self.get_layers(mace_model_dir,
+                                                 model_name,
+                                                 flags.layers)
+            # run for specified soc
+            if not subgraphs[0][YAMLKeyword.check_tensors]:
+                output_nodes = subgraphs[0][YAMLKeyword.output_tensors]
+                output_shapes = subgraphs[0][YAMLKeyword.output_shapes]
+            else:
+                output_nodes = subgraphs[0][YAMLKeyword.check_tensors]
+                output_shapes = subgraphs[0][YAMLKeyword.check_shapes]
+            model_path = "%s/%s.pb" % (mace_model_dir, model_name)
+            output_config = {YAMLKeyword.model_file_path: model_path,
+                             YAMLKeyword.output_tensors: output_nodes,
+                             YAMLKeyword.output_shapes: output_shapes}
+            output_configs.append(output_config)
+
+            runtime_list = []
+            if target_abi == ABIType.host:
+                runtime_list.append(RuntimeType.cpu)
+            elif model_runtime == RuntimeType.cpu_gpu:
+                runtime_list.extend([RuntimeType.cpu, RuntimeType.gpu])
+            else:
+                runtime_list.append(model_runtime)
+
+            if accuracy_validation_script:
+                flags.validate = False
+                flags.report = False
+                import imp
+                accuracy_val_module = imp.load_source(
+                    'accuracy_val_module', accuracy_validation_script)
+                for runtime in runtime_list:
+                    accuracy_validator = \
+                        accuracy_val_module.AccuracyValidator()
+                    sample_size = accuracy_validator.sample_size()
+                    val_batch_size = accuracy_validator.batch_size()
+                    for i in range(0, sample_size, val_batch_size):
+                        inputs = accuracy_validator.preprocess(
+                            i, i + val_batch_size)
+                        sh_commands.gen_input(
+                            model_output_dir,
+                            subgraphs[0][YAMLKeyword.input_tensors],
+                            subgraphs[0][YAMLKeyword.input_shapes],
+                            input_data_types=subgraphs[0][YAMLKeyword.input_data_types],  # noqa
+                            input_data_map=inputs)
+                        self.run_model(flags, configs, target_abi,
+                                       model_name, output_configs[-1],
+                                       runtime, tuning)
+                        accuracy_validator.postprocess(
+                            i, i + val_batch_size,
+                            self.get_output_map(
+                                target_abi, output_nodes,
+                                subgraphs[0][YAMLKeyword.output_shapes],
+                                model_output_dir))
+                    accuracy_validator.result()
+            else:
+                sh_commands.gen_input(
+                    model_output_dir,
+                    subgraphs[0][YAMLKeyword.input_tensors],
+                    subgraphs[0][YAMLKeyword.input_shapes],
+                    subgraphs[0][YAMLKeyword.validation_inputs_data],
+                    input_ranges=subgraphs[0][YAMLKeyword.input_ranges],
+                    input_data_types=subgraphs[0][YAMLKeyword.input_data_types]
+                )
+                for runtime in runtime_list:
+                    device_type = parse_device_type(runtime)
+                    for output_config in output_configs:
+                        self.run_model(flags, configs, target_abi,
+                                       model_name, output_config,
+                                       runtime, tuning)
                     if flags.validate:
-                        model_file_path, weight_file_path = get_model_files(
+                        log_file = ""
+                        if flags.layers != "-1":
+                            log_dir = mace_model_dir + "/" + runtime
+                            if os.path.exists(log_dir):
+                                sh.rm('-rf', log_dir)
+                            os.makedirs(log_dir)
+                            log_file = log_dir + "/log.csv"
+                        model_file_path, weight_file_path = \
+                            get_model_files(
                             model_config[YAMLKeyword.model_file_path],
-                            model_config[YAMLKeyword.model_sha256_checksum],
+                                model_config[
+                                    YAMLKeyword.model_sha256_checksum],
                             BUILD_DOWNLOADS_DIR,
                             model_config[YAMLKeyword.weight_file_path],
-                            model_config[YAMLKeyword.weight_sha256_checksum])
+                                model_config[
+                                    YAMLKeyword.weight_sha256_checksum])
                         validate_type = device_type
                         if model_config[YAMLKeyword.quantize] == 1:
                             validate_type = device_type + '_QUANTIZE'
                         dockerfile_path, docker_image_tag = \
                             get_dockerfile_info(
                                 model_config.get(YAMLKeyword.dockerfile_path),
-                                model_config.get(
-                                    YAMLKeyword.dockerfile_sha256_checksum),
+                                model_config.get(
+                                    YAMLKeyword.dockerfile_sha256_checksum),  # noqa
                                 model_config.get(YAMLKeyword.docker_image_tag)
                             ) \
-                            if YAMLKeyword.dockerfile_path in model_config \
+                            if YAMLKeyword.dockerfile_path \
+                                in model_config \
                             else ("third_party/caffe", "lastest")
                         sh_commands.validate_model(
@@ -688,14 +779,14 @@ class DeviceWrapper:
                             log_file=log_file,
                         )
                     if flags.report and flags.round > 0:
-                        tuned = is_tuned and device_type == DeviceType.GPU
+                        tuned = tuning and device_type == DeviceType.GPU
                         self.report_run_statistics(
                             target_abi=target_abi,
                             model_name=model_name,
                             device_type=device_type,
                             output_dir=flags.report_dir,
                             tuned=tuned)
         if model_output_dirs:
             opencl_output_bin_path = get_opencl_binary_output_path(
                 library_name, target_abi, self
@@ -956,7 +1047,7 @@ class DeviceWrapper:
         if target_abi != ABIType.host:
             self.clear_data_dir()
-        sh_commands.gen_random_input(
+        sh_commands.gen_input(
             model_output_dir,
             subgraphs[0][YAMLKeyword.input_tensors],
             subgraphs[0][YAMLKeyword.input_shapes],
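The `get_output_map` helper above rebuilds each named output tensor from the raw little-endian float32 file that `mace_run` wrote (and that the tools pull off the device). That round trip can be sketched as follows; the file name `model_out_output`, the tensor name, and the shape are illustrative:

```python
import os
import tempfile

import numpy as np

# A model_out file is just a raw float32 buffer with no header.
out_dir = tempfile.mkdtemp()
tensor = np.arange(6, dtype=np.float32)
path = os.path.join(out_dir, "model_out_output")
tensor.tofile(path)

# Rebuild the named tensor the way get_output_map does: read the raw
# bytes back and reshape to the shape declared in the deployment file.
output_shape = [1, 2, 3]
output_map = {
    "output": np.fromfile(path, dtype=np.float32).reshape(output_shape)
}
```

Because the file carries no shape or dtype metadata, the reshape only works if the deployment file's `output_shapes` matches what the model actually produced — a mismatch fails loudly at `reshape` rather than silently corrupting the accuracy numbers.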
tools/sh_commands.py

@@ -549,41 +549,63 @@ def gen_model_code(model_codegen_dir,
         _fg=True)


-def gen_random_input(model_output_dir,
-                     input_nodes,
-                     input_shapes,
-                     input_files,
-                     input_ranges,
-                     input_data_types,
-                     input_file_name="model_input"):
+def gen_input(model_output_dir,
+              input_nodes,
+              input_shapes,
+              input_files=None,
+              input_ranges=None,
+              input_data_types=None,
+              input_data_map=None,
+              input_file_name="model_input"):
     for input_name in input_nodes:
         formatted_name = common.formatted_file_name(
             input_file_name, input_name)
         if os.path.exists("%s/%s" % (model_output_dir, formatted_name)):
             sh.rm("%s/%s" % (model_output_dir, formatted_name))
-    input_nodes_str = ",".join(input_nodes)
-    input_shapes_str = ":".join(input_shapes)
-    input_ranges_str = ":".join(input_ranges)
-    input_data_types_str = ",".join(input_data_types)
-    generate_input_data("%s/%s" % (model_output_dir, input_file_name),
-                        input_nodes_str,
-                        input_shapes_str,
-                        input_ranges_str,
-                        input_data_types_str)
     input_file_list = []
     if isinstance(input_files, list):
         input_file_list.extend(input_files)
     else:
         input_file_list.append(input_files)
-    if len(input_file_list) != 0:
+    if input_data_map:
+        for i in range(len(input_nodes)):
+            dst_input_file = model_output_dir + '/' + \
+                common.formatted_file_name(input_file_name,
+                                           input_nodes[i])
+            input_name = input_nodes[i]
+            common.mace_check(input_name in input_data_map,
+                              common.ModuleName.RUN,
+                              "The preprocessor API in PrecisionValidator"
+                              " script should return all inputs of model")
+            if input_data_types[i] == 'float32':
+                input_data = np.array(input_data_map[input_name],
+                                      dtype=np.float32)
+            elif input_data_types[i] == 'int32':
+                input_data = np.array(input_data_map[input_name],
+                                      dtype=np.int32)
+            else:
+                common.mace_check(False,
+                                  common.ModuleName.RUN,
+                                  'Do not support input data type %s'
+                                  % input_data_types[i])
+            common.mace_check(
+                list(map(int, common.split_shape(input_shapes[i])))
+                == list(input_data.shape),
+                common.ModuleName.RUN,
+                "The shape return from preprocessor API of"
+                " PrecisionValidator script is not same with"
+                " model deployment file. %s vs %s"
+                % (str(input_shapes[i]), str(input_data.shape)))
+            input_data.tofile(dst_input_file)
+    elif len(input_file_list) != 0:
         input_name_list = []
         if isinstance(input_nodes, list):
             input_name_list.extend(input_nodes)
         else:
             input_name_list.append(input_nodes)
-        if len(input_file_list) != len(input_name_list):
-            raise Exception('If input_files set, the input files should '
-                            'match the input names.')
+        common.mace_check(len(input_file_list) == len(input_name_list),
+                          common.ModuleName.RUN,
+                          'If input_files set, the input files should '
+                          'match the input names.')
         for i in range(len(input_file_list)):
             if input_file_list[i] is not None:
@@ -596,6 +618,17 @@ def gen_random_input(model_output_dir,
                           dst_input_file)
             else:
                 sh.cp("-f", input_file_list[i], dst_input_file)
+    else:
+        # generate random input files
+        input_nodes_str = ",".join(input_nodes)
+        input_shapes_str = ":".join(input_shapes)
+        input_ranges_str = ":".join(input_ranges)
+        input_data_types_str = ",".join(input_data_types)
+        generate_input_data("%s/%s" % (model_output_dir, input_file_name),
+                            input_nodes_str,
+                            input_shapes_str,
+                            input_ranges_str,
+                            input_data_types_str)


 def gen_opencl_binary_cpps(opencl_bin_file_path,
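The shape check `gen_input` applies to preprocessor-supplied data can be sketched in isolation. The helper name is ours, and `split_shape` is assumed to split a declared shape string on commas, which matches how shapes like `1,1,1,1000` appear in the deployment file:

```python
import numpy as np


def check_input_shape(declared_shape_str, input_data):
    """Sketch of the check gen_input performs: the shape of the data
    returned by the preprocess API must match the shape declared in
    the model deployment file."""
    declared = [int(x) for x in declared_shape_str.split(',')]
    actual = list(np.array(input_data).shape)
    assert declared == actual, \
        "shape mismatch: %s vs %s" % (declared, actual)


# A (1, 2, 2) batch matches the declared "1,2,2" shape.
check_input_shape("1,2,2", [[[0.1, 0.2], [0.3, 0.4]]])
```

Failing fast here matters: once the data is written with `tofile` it loses all shape information, so a mismatch caught later (at the model's input) would be far harder to diagnose.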