magicwindyyd / mindspore (forked from MindSpore / mindspore, in sync with the fork source)
Commit 93e27f03
Authored on Jul 08, 2020 by mindspore-ci-bot, committed via Gitee on Jul 08, 2020
!2917 bug fix in quantization aware training auto create graph
Merge pull request !2917 from chenzhongming/master
Parents: a7fc7e50, 34469401
Showing 2 changed files with 17 additions and 9 deletions (+17, -9)

    mindspore/nn/layer/quant.py     +3  -3
    mindspore/train/quant/quant.py  +14 -6
mindspore/nn/layer/quant.py @ 93e27f03

@@ -855,7 +855,7 @@ class ActQuant(_QuantActivation):
                                         symmetric=symmetric,
                                         narrow_range=narrow_range,
                                         quant_delay=quant_delay)
-        self.act = activation
+        self.act = activation()

     def construct(self, x):
         x = self.act(x)

@@ -921,7 +921,7 @@ class HSwishQuant(_QuantActivation):
                                          narrow_range=narrow_range,
                                          quant_delay=quant_delay)
         if isinstance(activation, nn.HSwish):
-            self.act = activation
+            self.act = activation()
         else:
             raise ValueError("Activation should be `nn.HSwish`")

@@ -990,7 +990,7 @@ class HSigmoidQuant(_QuantActivation):
                                          narrow_range=narrow_range,
                                          quant_delay=quant_delay)
         if isinstance(activation, nn.HSwish):
-            self.act = activation
+            self.act = activation()
         else:
             raise ValueError("Activation should be `nn.HSigmoid`")
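The nn/layer change above swaps `self.act = activation` for `self.act = activation()` in each quantized activation wrapper: the wrapper now builds the activation instance itself instead of storing whatever object it was handed. The following framework-free sketch only illustrates that class-versus-instance pattern; `FakeReLU`, `ActWrapperOld`, and `ActWrapperNew` are hypothetical stand-ins, not MindSpore API.

    class FakeReLU:
        """Stand-in for an activation cell such as nn.ReLU."""
        def __call__(self, x):
            return max(0, x)

    class ActWrapperOld:
        def __init__(self, activation):
            self.act = activation      # keeps the object it was given

    class ActWrapperNew:
        def __init__(self, activation):
            self.act = activation()    # instantiates the activation itself

    old = ActWrapperOld(FakeReLU())    # caller had to construct the instance
    new = ActWrapperNew(FakeReLU)      # caller now passes the class (a callable factory)
    assert old.act(-3.0) == new.act(-3.0) == 0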
mindspore/train/quant/quant.py @ 93e27f03

@@ -114,7 +114,6 @@ class ConvertToQuantNetwork:
     def run(self):
         self.network.update_cell_prefix()
         network = self._convert_subcells2quant(self.network)
-        network = _AddFakeQuantInput(network)
         self.network.update_cell_type("quant")
         return network
@@ -275,16 +274,20 @@ class ExportToQuantInferNetwork:
     Args:
         network (Cell): MindSpore network API `convert_quant_network`.
         inputs (Tensor): Input tensors of the `quantization aware training network`.
+        mean (int): Input data mean. Default: 127.5.
+        std_dev (int, float): Input data variance. Default: 127.5.

     Returns:
         Cell, GEIR backend Infer network.
     """
     __quant_op_name__ = ["TensorAdd", "Sub", "Mul", "RealDiv"]

-    def __init__(self, network, *inputs):
+    def __init__(self, network, mean, std_dev, *inputs):
         network = validator.check_isinstance('network', network, (nn.Cell,))
+        # quantize for inputs: q = f / scale + zero_point
+        # dequantize for outputs: f = (q - zero_point) * scale
+        self.input_scale = round(mean)
+        self.input_zero_point = 1 / std_dev
         self.data_type = mstype.int8
         self.network = copy.deepcopy(network)
         self.all_parameters = {p.name: p for p in self.network.get_parameters()}
@@ -395,7 +398,7 @@ class ExportToQuantInferNetwork:
         return network


-def export(network, *inputs, file_name, file_format='GEIR'):
+def export(network, *inputs, file_name, mean=127.5, std_dev=127.5, file_format='GEIR'):
     """
     Exports MindSpore quantization predict model to deploy with GEIR.
@@ -403,12 +406,17 @@ def export(network, *inputs, file_name, file_format='GEIR'):
         network (Cell): MindSpore network produced by `convert_quant_network`.
         inputs (Tensor): Inputs of the `quantization aware training network`.
         file_name (str): File name of model to export.
+        mean (int): Input data mean. Default: 127.5.
+        std_dev (int, float): Input data variance. Default: 127.5.
         file_format (str): MindSpore currently supports 'GEIR' format for exported quantization aware model.
             - GEIR: Graph Engine Intermediate Representation. An Intermediate representation format of Ascend model.
     """
     supported_device = ["Ascend"]
     supported_formats = ['GEIR']
+    mean = validator.check_type("mean", mean, (int, float))
+    std_dev = validator.check_type("std_dev", std_dev, (int, float))
     if context.get_context('device_target') not in supported_device:
         raise KeyError("Unsupported {} device target.".format(context.get_context('device_target')))
@@ -418,7 +426,7 @@ def export(network, *inputs, file_name, file_format='GEIR'):
     network.set_train(False)
     if file_format == 'GEIR':
-        exporter = ExportToQuantInferNetwork(network, *inputs)
+        exporter = ExportToQuantInferNetwork(network, mean, std_dev, *inputs)
         deploy_net = exporter.run()
         serialization.export(deploy_net, *inputs, file_name=file_name, file_format=file_format)
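The comments added to `ExportToQuantInferNetwork.__init__` state the affine input mapping the exporter documents: q = f / scale + zero_point on the way in, and f = (q - zero_point) * scale on the way out. The standalone walk-through below only exercises those two formulas with the new defaults mean = std_dev = 127.5 (note the diff itself stores round(mean) in `input_scale` and 1 / std_dev in `input_zero_point`); the variable names are illustrative, not MindSpore API.

    # Illustrative arithmetic for the quantize/dequantize comments in the diff.
    mean = 127.5                       # default of the new `mean` argument
    std_dev = 127.5                    # default of the new `std_dev` argument

    scale = std_dev                    # used as: q = f / scale + zero_point
    zero_point = round(mean)           # 127.5 rounds to 128 in Python 3

    f = 0.5                            # a float input value
    q = f / scale + zero_point         # quantize:   0.5 / 127.5 + 128 ≈ 128.0039
    f_back = (q - zero_point) * scale  # dequantize: recovers the float input
    assert abs(f_back - f) < 1e-9

Because `mean` and `std_dev` are keyword arguments with defaults, an existing call such as `export(net, inputs, file_name='net', file_format='GEIR')` keeps working unchanged, while callers that normalized their input data differently can now pass both values explicitly.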