magicwindyyd / mindspore (forked from MindSpore / mindspore)

Commit 31ecc13b
Authored Jun 08, 2020 by mindspore-ci-bot
Committed by Gitee on Jun 08, 2020
!1900 bug fix in fake quant ops
Merge pull request !1900 from chenzhongming/master
Parents: ffb5339e 97a54878
Showing 1 changed file with 17 additions and 3 deletions (+17 -3)
mindspore/ops/operations/_quant_ops.py (+17 -3)
@@ -15,6 +15,7 @@
 """Operators for quantization."""
+import mindspore.context as context
 from ..._checkparam import Validator as validator
 from ..._checkparam import Rel
 from ..primitive import PrimitiveWithInfer, prim_attr_register
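For reference, `context.get_context('device_target')` returns the configured backend as a string (e.g. "Ascend", "GPU", or "CPU"); every guard added in this diff compares it against "Ascend". A minimal sketch:

    import mindspore.context as context

    # Query which backend the current process is configured for; the guards
    # added in this commit only run the TBE kernel imports when this is "Ascend".
    device = context.get_context('device_target')
    print(device)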
@@ -82,6 +83,8 @@ class FakeQuantPerLayer(PrimitiveWithInfer):
                  narrow_range=False,
                  training=True):
         """init FakeQuantPerLayer OP"""
+        if context.get_context('device_target') == "Ascend":
+            from mindspore.ops._op_impl._custom_op import fake_quant_perlayer
         if num_bits not in self.support_quant_bit:
             raise ValueError(f"For '{self.name}' attr \'num_bits\' is not support.")
@@ -143,6 +146,8 @@ class FakeQuantPerLayerGrad(PrimitiveWithInfer):
                  quant_delay=0,
                  symmetric=False,
                  narrow_range=False):
+        if context.get_context('device_target') == "Ascend":
+            from mindspore.ops._op_impl._custom_op import fake_quant_perlayer_grad
         if num_bits not in self.support_quant_bit:
             raise ValueError(f"For '{self.name}' attr \'num_bits\' is not support.")
@@ -222,6 +227,8 @@ class FakeQuantPerChannel(PrimitiveWithInfer):
                  training=True,
                  channel_axis=1):
         """init FakeQuantPerChannel OP"""
+        if context.get_context('device_target') == "Ascend":
+            from mindspore.ops._op_impl._custom_op import fake_quant_perchannel
         if num_bits not in self.support_quant_bit:
             raise ValueError(f"For '{self.name}' Attr \'num_bits\' is not support.")
@@ -286,6 +293,8 @@ class FakeQuantPerChannelGrad(PrimitiveWithInfer):
                  narrow_range=False,
                  channel_axis=1):
         """init FakeQuantPerChannelGrad Fill"""
+        if context.get_context('device_target') == "Ascend":
+            from mindspore.ops._op_impl._custom_op import fake_quant_perchannel_grad
         if num_bits not in self.support_quant_bit:
             raise ValueError(f"For '{self.name}' attr \'num_bits\' is not support.")
@@ -454,6 +463,8 @@ class CorrectionMul(PrimitiveWithInfer):
     @prim_attr_register
     def __init__(self, channel_axis=0):
         """init correction mul layer"""
+        if context.get_context('device_target') == "Ascend":
+            from mindspore.ops._op_impl._custom_op import correction_mul
         self.channel_axis = channel_axis
         self.init_prim_io_names(inputs=['x', 'batch_std', 'running_std'],
                                 outputs=['out'])
@@ -486,6 +497,8 @@ class CorrectionMulGrad(PrimitiveWithInfer):
     @prim_attr_register
     def __init__(self, channel_axis=0):
         """init correction mul layer"""
+        if context.get_context('device_target') == "Ascend":
+            from mindspore.ops._op_impl._custom_op import correction_mul_grad
         self.channel_axis = channel_axis
         self.init_prim_io_names(inputs=['dout', 'x', 'gamma', 'running_std'],
                                 outputs=['dx', 'd_gamma'])
@@ -847,9 +860,8 @@ class FakeQuantMinMaxPerLayerUpdate(PrimitiveWithInfer):
     def __init__(self, num_bits=8, ema=False, ema_decay=0.999,
                  symmetric=False, narrow_range=False,
                  training=True):
         """init FakeQuantMinMaxPerLayerUpdate OP"""
-        from mindspore.ops._op_impl._custom_op import correction_mul, correction_mul_grad
-        from mindspore.ops._op_impl._custom_op import fake_quant_with_min_max, fake_quant_with_min_max_grad
-        from mindspore.ops._op_impl._custom_op import fake_quant_with_min_max_update
+        if context.get_context('device_target') == "Ascend":
+            from mindspore.ops._op_impl._custom_op import fake_quant_minmax_perlayer_update
         if num_bits not in self.support_quant_bit:
             raise ValueError(f"For '{self.name}' attr \'num_bits\' is not support.")
@@ -922,6 +934,8 @@ class FakeQuantMinMaxPerChannelUpdate(PrimitiveWithInfer):
     def __init__(self, num_bits=8, ema=False, ema_decay=0.999,
                  symmetric=False, narrow_range=False, training=True,
                  channel_axis=1):
         """init FakeQuantPerChannelUpdate OP for Ascend"""
+        if context.get_context('device_target') == "Ascend":
+            from mindspore.ops._op_impl._custom_op import fake_quant_minmax_perchannel_update
         if num_bits not in self.support_quant_bit:
             raise ValueError(f"For '{self.name}' attr \'num_bits\' is not support.")
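Taken together, the nine hunks apply one pattern: each primitive's constructor now imports its Ascend TBE kernel registration lazily, and only when the configured device target is "Ascend", instead of unconditionally (the FakeQuantMinMaxPerLayerUpdate hunk additionally drops eager imports of unrelated kernels). A minimal self-contained sketch of the pattern — the FakeQuantDemo class, its support_quant_bit values, and the error wording are hypothetical stand-ins, not MindSpore API:

    import mindspore.context as context

    class FakeQuantDemo:
        """Hypothetical primitive illustrating the guarded lazy-import pattern."""
        support_quant_bit = [4, 7, 8]  # illustrative values, not taken from the diff

        def __init__(self, num_bits=8):
            self.name = type(self).__name__
            # The Ascend-only TBE kernel module is imported inside __init__ and
            # only when the device target is "Ascend"; on GPU/CPU the import
            # never executes, which is what the fix achieves at each call site.
            if context.get_context('device_target') == "Ascend":
                from mindspore.ops._op_impl._custom_op import fake_quant_perlayer  # noqa: F401
            if num_bits not in self.support_quant_bit:
                raise ValueError(f"For '{self.name}' attr 'num_bits' is not supported.")

The apparently unused import is deliberate: importing the `_custom_op` module has the side effect of registering the op implementation. Deferring it to instantiation, behind the device check, keeps importing `_quant_ops` itself cheap and backend-agnostic.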
登录