PaddlePaddle / Paddle
Commit 8fc31d50 (unverified)
Authored Jun 17, 2020 by cc; committed by GitHub on Jun 17, 2020
Support conv2d_traspose quantize, test=develop (#25084)
Parent: 6783441e
1 changed file with 204 additions and 201 deletions (+204, −201)

python/paddle/fluid/contrib/slim/quantization/quantization_pass.py
@@ -55,6 +55,7 @@ _out_scale_op_list = [
 _op_real_in_out_name = {
     "conv2d": [["Input", "Filter"], ["Output"]],
     "depthwise_conv2d": [["Input", "Filter"], ["Output"]],
+    "conv2d_transpose": [["Input", "Filter"], ["Output"]],
     "mul": [["X", "Y"], ["Out"]],
     "matmul": [["X", "Y"], ["Out"]],
     "pool2d": [["X"], ["Out"]],
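For orientation, the sketch below (plain Python, not part of the patch) shows how a mapping of this shape is consulted: the first inner list names an op's real input slots, the second its real output slots. The helper real_io_names is hypothetical and only mirrors the entries shown in the hunk above.

    # Hypothetical helper; the dict literal copies the entries from the hunk above.
    _op_real_in_out_name = {
        "conv2d": [["Input", "Filter"], ["Output"]],
        "depthwise_conv2d": [["Input", "Filter"], ["Output"]],
        "conv2d_transpose": [["Input", "Filter"], ["Output"]],
        "mul": [["X", "Y"], ["Out"]],
        "matmul": [["X", "Y"], ["Out"]],
        "pool2d": [["X"], ["Out"]],
    }

    def real_io_names(op_type):
        """Return (real input slot names, real output slot names) for an op type."""
        in_names, out_names = _op_real_in_out_name[op_type]
        return in_names, out_names

    # The newly registered transposed convolution exposes the same slots as conv2d:
    assert real_io_names("conv2d_transpose") == (["Input", "Filter"], ["Output"])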
@@ -155,7 +156,7 @@ class QuantizationTransformPass(object):
             ops's inputs.
     """
     _supported_quantizable_op_type = [
-        'conv2d', 'depthwise_conv2d', 'mul', 'matmul'
+        'conv2d', 'depthwise_conv2d', 'conv2d_transpose', 'mul', 'matmul'
     ]
 
     def __init__(self,
@@ -205,32 +206,34 @@ class QuantizationTransformPass(object):
             quantizable_op_type(list[str]): List the type of ops that will be quantized.
                 Default is ["conv2d", "depthwise_conv2d", "mul"]. The quantizable_op_type in
                 QuantizationFreezePass and ConvertToInt8Pass must be the same as this.
-            weight_quantize_func(function): Function that defines how to quantize weight. Using this
-                can quickly test if user's quantization method works or not. In this function, user should
-                both define quantization function and dequantization function, that is, the function's input
-                is non-quantized weight and function returns dequantized weight. If None, will use
-                quantization op defined by 'weight_quantize_type'.
-            act_quantize_func(function): Function that defines how to quantize activation. Using this
-                can quickly test if user's quantization method works or not. In this function, user should
-                both define quantization and dequantization process, that is, the function's input
-                is non-quantized activation and function returns dequantized activation. If None, will use
-                quantization op defined by 'activation_quantize_type'.
-                Default is None.
-            weight_preprocess_func(function): Function that defines how to preprocess weight before quantization. Using this
-                can quickly test if user's preprocess method works or not. The function's input
-                is non-quantized weight and function returns processed weight to be quantized. If None, the weight will
-                be quantized directly.
-                Default is None.
-            act_preprocess_func(function): Function that defines how to preprocess activation before quantization. Using this
-                can quickly test if user's preprocess method works or not. The function's input
-                is non-quantized activation and function returns processed activation to be quantized. If None, the activation will
-                be quantized directly.
-                Default is None.
-            optimizer_func(function): Fuction return a optimizer. When 'is_test' is False and user want to use self-defined
-                quantization function and preprocess function, this function must be set. Default is None.
-            executor(Fluid.Executor): If user want to use self-defined quantization function and preprocess function,
-                executor must be set for initialization. Default is None.
+            weight_quantize_func(function): Function that defines how to quantize weight.
+                Using this can quickly test if user's quantization method works or not.
+                In this function, user should both define quantization function and
+                dequantization function, that is, the function's input is non-quantized
+                weight and function returns dequantized weight. If None, will use
+                quantization op defined by 'weight_quantize_type'. Default is None.
+            act_quantize_func(function): Function that defines how to quantize activation.
+                Using this can quickly test if user's quantization method works or not.
+                In this function, user should both define quantization and dequantization
+                process, that is, the function's input is non-quantized activation and
+                function returns dequantized activation. If None, will use quantization
+                op defined by 'activation_quantize_type'. Default is None.
+            weight_preprocess_func(function): Function that defines how to preprocess
+                weight before quantization. Using this can quickly test if user's preprocess
+                method works or not. The function's input is non-quantized weight and
+                function returns processed weight to be quantized. If None, the weight will
+                be quantized directly. Default is None.
+            act_preprocess_func(function): Function that defines how to preprocess
+                activation before quantization. Using this can quickly test if user's
+                preprocess method works or not. The function's input is non-quantized
+                activation and function returns processed activation to be quantized.
+                If None, the activation will be quantized directly. Default is None.
+            optimizer_func(function): Fuction return a optimizer. When 'is_test' is
+                False and user want to use self-defined quantization function and
+                preprocess function, this function must be set. Default is None.
+            executor(Fluid.Executor): If user want to use self-defined quantization
+                function and preprocess function, executor must be set for initialization.
+                Default is None.
 
         Examples:
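To make the documented contract concrete, here is a minimal sketch, assuming the fluid 1.8-era static-graph API, of wiring self-defined hooks into the pass. The helper names my_weight_quant_dequant and my_act_preprocess are invented for illustration, and clip merely stands in for a real quantize/dequantize pair.

    import paddle.fluid as fluid
    from paddle.fluid.contrib.slim.quantization import QuantizationTransformPass

    def my_weight_quant_dequant(w):
        # Input is the non-quantized weight; the return value must already be the
        # dequantized weight (still float32), as the docstring requires.
        return fluid.layers.clip(w, min=-0.5, max=0.5)  # stand-in for quant + dequant

    def my_act_preprocess(x):
        # Preprocess the activation; the returned tensor is what gets quantized.
        return fluid.layers.relu(x)

    place = fluid.CPUPlace()
    exe = fluid.Executor(place)
    transform_pass = QuantizationTransformPass(
        scope=fluid.global_scope(),
        place=place,
        weight_quantize_func=my_weight_quant_dequant,
        act_preprocess_func=my_act_preprocess,
        # optimizer_func and executor are required when is_test is False and
        # self-defined functions are used, per the docstring above.
        optimizer_func=lambda: fluid.optimizer.SGD(learning_rate=1e-3),
        executor=exe,
        quantizable_op_type=['conv2d', 'depthwise_conv2d', 'conv2d_transpose', 'mul'])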
@@ -295,180 +298,6 @@ class QuantizationTransformPass(object):
         self.create_var_map = {}
         self.create_op_map = {}
 
-    def _create_new_node(self, graph, in_node):
-        """
-        create a node that same with in_node in graph
-        Args:
-            graph(IrGraph): create node in graph.
-            in_node(IrVarNode): create node that same with in_node.
-        Returns:
-            created new node
-        """
-        key = ''
-        for inp in in_node.inputs:
-            key = key + inp.name()
-        key = key + in_node.name()
-        for inp in in_node.outputs:
-            key = key + inp.name()
-
-        if key in self.create_var_map.keys():
-            new_node = self.create_var_map[key]
-        elif in_node.is_ctrl_var():
-            new_node = graph.create_control_dep_var()
-            self.create_var_map[key] = new_node
-        else:
-            new_node = graph.create_var_node_from_desc(in_node.node.var())
-            self.create_var_map[key] = new_node
-        return new_node
-
-    def _copy_graph(self, graph, source_graph, op_node):
-        """
-        copy op_node in source_graph to graph. And will run recursively
-        for next ops that link to op_node's outputs.
-        Args:
-            graph(IrGraph): target graph to copy.
-            source_graph(IrGraph): source graph to copy.
-            op_node(IrOpNode): op node in source_graph.
-        Returns:
-            None
-        """
-        key = ''
-        for inp in op_node.inputs:
-            key = key + inp.name()
-        key = key + op_node.name()
-        for inp in op_node.outputs:
-            key = key + inp.name()
-        has_created = False
-        if key in self.create_op_map.keys():
-            new_op_node = self.create_op_map[key]
-            has_created = True
-        else:
-            new_op_node = graph.create_op_node_from_desc(op_node.node.op())
-            self.create_op_map[key] = new_op_node
-        if has_created:
-            return
-
-        for in_node in op_node.inputs:
-            new_node = self._create_new_node(graph, in_node)
-            graph.link_to(new_node, new_op_node)
-        for in_node in op_node.outputs:
-            new_node = self._create_new_node(graph, in_node)
-            graph.link_to(new_op_node, new_node)
-
-        for var_node in op_node.outputs:
-            for next_op_node in var_node.outputs:
-                self._copy_graph(graph, source_graph, next_op_node)
-        return
-
-    def _insert_func(self, graph, func, var_node, op):
-        """
-        Insert a tmp program that returned by func between var_node and op.
-
-        Args:
-            graph(IrGraph): target graph to insert tmp program.
-            func(Function): function to define a tmp program
-            var_node(IrVarNode): node in target graph.
-            op(IrOpNode): op in target graph.
-        Returns:
-            op's new input that replaces var_node
-        """
-        tmp_program = Program()
-        startup_program = Program()
-        with program_guard(tmp_program, startup_program):
-            with unique_name.guard(var_node.name() + "_"):
-                in_node = data(
-                    var_node.name() + '_tmp_input',
-                    shape=var_node.shape(),
-                    dtype='float32')
-                out_node = func(in_node)
-                # loss shape must be 1 when minimize
-                loss = mean(out_node)
-                if not graph._for_test:
-                    assert self._optimizer, "optimizer_func must be set when graph is test graph"
-                    in_node.stop_gradient = False
-                    optimizer = self._optimizer()
-                    optimizer.minimize(loss)
-        with scope_guard(self._scope):
-            self._exe.run(startup_program)
-
-        tmp_graph = IrGraph(
-            core.Graph(tmp_program.desc), for_test=graph._for_test)
-        in_node = tmp_graph._find_node_by_name(tmp_graph.all_var_nodes(),
-                                               in_node.name)
-        out_node = tmp_graph._find_node_by_name(tmp_graph.all_var_nodes(),
-                                                out_node.name)
-
-        in_node_params = []
-        in_op_node = []
-        # copy tmp graph to graph, after that, we can insert tmp graph's copy to graph.
-        for node in tmp_graph.all_var_nodes():
-            if node.inputs == [] and node.persistable():
-                in_node_params.append(node)
-        for node in tmp_graph.all_op_nodes():
-            if node.inputs == []:
-                in_op_node.append(node)
-        for node in in_node.outputs:
-            self._copy_graph(graph, tmp_graph, node)
-        for node in in_node_params:
-            for op_node in node.outputs:
-                self._copy_graph(graph, tmp_graph, op_node)
-        for node in in_op_node:
-            self._copy_graph(graph, tmp_graph, node)
-
-        target_in_node = graph._find_node_by_name(graph.all_var_nodes(),
-                                                  in_node.name())
-        target_out_node = graph._find_node_by_name(graph.all_var_nodes(),
-                                                   out_node.name())
-        loss_node = graph._find_node_by_name(graph.all_var_nodes(), loss.name)
-        outputs = target_in_node.outputs
-        for node in outputs:
-            graph.update_input_link(target_in_node, var_node, node)
-        graph.update_input_link(var_node, target_out_node, op)
-
-        # update grad
-        if not graph._for_test:
-            op_out = op.outputs[0]
-            op_out_grad = graph._find_node_by_name(graph.all_var_nodes(),
-                                                   op_out.name() + "@GRAD")
-            # find op's gradient op, such as conv2d_grad
-            op_grad = op_out_grad.outputs[0]
-            target_out_grad_node = graph._find_node_by_name(
-                graph.all_var_nodes(), target_out_node.name() + "@GRAD")
-            in_node_grad = graph._find_node_by_name(
-                graph.all_var_nodes(), target_in_node.name() + "@GRAD")
-            in_node_grad_op = in_node_grad.inputs
-            # update op_grad's input
-            graph.update_input_link(var_node, target_out_node, op_grad)
-
-            op_grad_out = None
-            # find var_node's corresponding grad node
-            for node in op_grad.outputs:
-                if var_node.name() + "@GRAD" in node.name():
-                    op_grad_out = node
-            # update op_grad's output
-            if op_grad_out is not None:
-                graph.update_output_link(op_grad_out, target_out_grad_node,
-                                         op_grad)
-            else:
-                graph.link_to(op_grad, target_out_grad_node)
-
-            for node in in_node_grad_op:
-                graph.update_input_link(target_in_node, var_node, node)
-                if op_grad_out:
-                    graph.update_output_link(in_node_grad, op_grad_out, node)
-            # remove useless nodes
-            mean_grad = target_out_grad_node.inputs[0]
-            mean_out_grad = mean_grad.inputs[0]
-            fill_constant_node = mean_out_grad.inputs[0]
-            graph.safe_remove_nodes(mean_grad)
-            graph.safe_remove_nodes(mean_out_grad)
-            graph.safe_remove_nodes(fill_constant_node)
-            graph.safe_remove_nodes(in_node_grad)
-
-        graph.safe_remove_nodes(loss_node.inputs[0])
-        graph.safe_remove_nodes(loss_node)
-        graph.safe_remove_nodes(target_in_node)
-
-        return target_out_node
-
     def apply(self, graph):
         """
         Quantize the graph for training process. According to weight and
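The two copy helpers removed here (and re-added further down) deduplicate copied nodes with a string key built from the names of a node's inputs, the node itself, and its outputs. A self-contained toy sketch of that keying idea follows; ToyNode is invented for illustration, while the real code operates on IrVarNode/IrOpNode.

    class ToyNode:
        # Minimal stand-in for an IR node: a name plus its neighbours.
        def __init__(self, name, inputs=(), outputs=()):
            self._name, self.inputs, self.outputs = name, list(inputs), list(outputs)

        def name(self):
            return self._name

    def copy_key(node):
        # Same recipe as _create_new_node/_copy_graph: concatenate input names,
        # the node's own name, then output names, so a node reached twice during
        # the recursive copy maps onto the same key.
        key = ''
        for inp in node.inputs:
            key = key + inp.name()
        key = key + node.name()
        for out in node.outputs:
            key = key + out.name()
        return key

    a, b = ToyNode('a'), ToyNode('b')
    v = ToyNode('v', inputs=[a], outputs=[b])
    create_var_map = {}
    create_var_map.setdefault(copy_key(v), 'copied_v')   # first visit creates the copy
    assert create_var_map[copy_key(v)] == 'copied_v'     # second visit reuses it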
@@ -923,6 +752,180 @@ class QuantizationTransformPass(object):
         graph.link_to(dequant_op_node, dequant_var_node)
         return dequant_var_node
 
+    def _create_new_node(self, graph, in_node):
+        """
+        create a node that same with in_node in graph
+        Args:
+            graph(IrGraph): create node in graph.
+            in_node(IrVarNode): create node that same with in_node.
+        Returns:
+            created new node
+        """
+        key = ''
+        for inp in in_node.inputs:
+            key = key + inp.name()
+        key = key + in_node.name()
+        for inp in in_node.outputs:
+            key = key + inp.name()
+
+        if key in self.create_var_map.keys():
+            new_node = self.create_var_map[key]
+        elif in_node.is_ctrl_var():
+            new_node = graph.create_control_dep_var()
+            self.create_var_map[key] = new_node
+        else:
+            new_node = graph.create_var_node_from_desc(in_node.node.var())
+            self.create_var_map[key] = new_node
+        return new_node
+
+    def _copy_graph(self, graph, source_graph, op_node):
+        """
+        copy op_node in source_graph to graph. And will run recursively
+        for next ops that link to op_node's outputs.
+        Args:
+            graph(IrGraph): target graph to copy.
+            source_graph(IrGraph): source graph to copy.
+            op_node(IrOpNode): op node in source_graph.
+        Returns:
+            None
+        """
+        key = ''
+        for inp in op_node.inputs:
+            key = key + inp.name()
+        key = key + op_node.name()
+        for inp in op_node.outputs:
+            key = key + inp.name()
+        has_created = False
+        if key in self.create_op_map.keys():
+            new_op_node = self.create_op_map[key]
+            has_created = True
+        else:
+            new_op_node = graph.create_op_node_from_desc(op_node.node.op())
+            self.create_op_map[key] = new_op_node
+        if has_created:
+            return
+
+        for in_node in op_node.inputs:
+            new_node = self._create_new_node(graph, in_node)
+            graph.link_to(new_node, new_op_node)
+        for in_node in op_node.outputs:
+            new_node = self._create_new_node(graph, in_node)
+            graph.link_to(new_op_node, new_node)
+
+        for var_node in op_node.outputs:
+            for next_op_node in var_node.outputs:
+                self._copy_graph(graph, source_graph, next_op_node)
+        return
+
+    def _insert_func(self, graph, func, var_node, op):
+        """
+        Insert a tmp program that returned by func between var_node and op.
+
+        Args:
+            graph(IrGraph): target graph to insert tmp program.
+            func(Function): function to define a tmp program
+            var_node(IrVarNode): node in target graph.
+            op(IrOpNode): op in target graph.
+        Returns:
+            op's new input that replaces var_node
+        """
+        tmp_program = Program()
+        startup_program = Program()
+        with program_guard(tmp_program, startup_program):
+            with unique_name.guard(var_node.name() + "_"):
+                in_node = data(
+                    var_node.name() + '_tmp_input',
+                    shape=var_node.shape(),
+                    dtype='float32')
+                out_node = func(in_node)
+                # loss shape must be 1 when minimize
+                loss = mean(out_node)
+                if not graph._for_test:
+                    assert self._optimizer, "optimizer_func must be set when graph is test graph"
+                    in_node.stop_gradient = False
+                    optimizer = self._optimizer()
+                    optimizer.minimize(loss)
+        with scope_guard(self._scope):
+            self._exe.run(startup_program)
+
+        tmp_graph = IrGraph(
+            core.Graph(tmp_program.desc), for_test=graph._for_test)
+        in_node = tmp_graph._find_node_by_name(tmp_graph.all_var_nodes(),
+                                               in_node.name)
+        out_node = tmp_graph._find_node_by_name(tmp_graph.all_var_nodes(),
+                                                out_node.name)
+
+        in_node_params = []
+        in_op_node = []
+        # copy tmp graph to graph, after that, we can insert tmp graph's copy to graph.
+        for node in tmp_graph.all_var_nodes():
+            if node.inputs == [] and node.persistable():
+                in_node_params.append(node)
+        for node in tmp_graph.all_op_nodes():
+            if node.inputs == []:
+                in_op_node.append(node)
+        for node in in_node.outputs:
+            self._copy_graph(graph, tmp_graph, node)
+        for node in in_node_params:
+            for op_node in node.outputs:
+                self._copy_graph(graph, tmp_graph, op_node)
+        for node in in_op_node:
+            self._copy_graph(graph, tmp_graph, node)
+
+        target_in_node = graph._find_node_by_name(graph.all_var_nodes(),
+                                                  in_node.name())
+        target_out_node = graph._find_node_by_name(graph.all_var_nodes(),
+                                                   out_node.name())
+        loss_node = graph._find_node_by_name(graph.all_var_nodes(), loss.name)
+        outputs = target_in_node.outputs
+        for node in outputs:
+            graph.update_input_link(target_in_node, var_node, node)
+        graph.update_input_link(var_node, target_out_node, op)
+
+        # update grad
+        if not graph._for_test:
+            op_out = op.outputs[0]
+            op_out_grad = graph._find_node_by_name(graph.all_var_nodes(),
+                                                   op_out.name() + "@GRAD")
+            # find op's gradient op, such as conv2d_grad
+            op_grad = op_out_grad.outputs[0]
+            target_out_grad_node = graph._find_node_by_name(
+                graph.all_var_nodes(), target_out_node.name() + "@GRAD")
+            in_node_grad = graph._find_node_by_name(
+                graph.all_var_nodes(), target_in_node.name() + "@GRAD")
+            in_node_grad_op = in_node_grad.inputs
+            # update op_grad's input
+            graph.update_input_link(var_node, target_out_node, op_grad)
+
+            op_grad_out = None
+            # find var_node's corresponding grad node
+            for node in op_grad.outputs:
+                if var_node.name() + "@GRAD" in node.name():
+                    op_grad_out = node
+            # update op_grad's output
+            if op_grad_out is not None:
+                graph.update_output_link(op_grad_out, target_out_grad_node,
+                                         op_grad)
+            else:
+                graph.link_to(op_grad, target_out_grad_node)
+
+            for node in in_node_grad_op:
+                graph.update_input_link(target_in_node, var_node, node)
+                if op_grad_out:
+                    graph.update_output_link(in_node_grad, op_grad_out, node)
+            # remove useless nodes
+            mean_grad = target_out_grad_node.inputs[0]
+            mean_out_grad = mean_grad.inputs[0]
+            fill_constant_node = mean_out_grad.inputs[0]
+            graph.safe_remove_nodes(mean_grad)
+            graph.safe_remove_nodes(mean_out_grad)
+            graph.safe_remove_nodes(fill_constant_node)
+            graph.safe_remove_nodes(in_node_grad)
+
+        graph.safe_remove_nodes(loss_node.inputs[0])
+        graph.safe_remove_nodes(loss_node)
+        graph.safe_remove_nodes(target_in_node)
+
+        return target_out_node
+
     def _quantized_var_name(self, var_name):
         """
         Return quantized variable name for the input `var_name`.
@@ -995,7 +998,7 @@ class QuantizationFreezePass(object):
         self._weight_bits = weight_bits
         self._activation_bits = activation_bits
         self._weight_quantize_type = weight_quantize_type
-        self._conv_ops = ['conv2d', 'depthwise_conv2d']
+        self._conv_ops = ['conv2d', 'depthwise_conv2d', 'conv2d_transpose']
        self._fake_quant_op_names = _fake_quant_op_list
         self._fake_dequant_op_names = _fake_dequant_op_list
         self._op_input_rename_map = collections.OrderedDict()
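With conv2d_transpose registered in both passes, a transposed-convolution network can go through the usual quantization flow. Below is a hedged end-to-end sketch assuming the fluid 1.8-era static-graph API; quantization-aware training and scale calibration are omitted, so it only illustrates the wiring, and the toy network is invented for the example.

    import paddle.fluid as fluid
    from paddle.fluid import core
    from paddle.fluid.framework import IrGraph
    from paddle.fluid.contrib.slim.quantization import (
        QuantizationTransformPass, QuantizationFreezePass)

    quant_ops = ['conv2d', 'depthwise_conv2d', 'conv2d_transpose', 'mul']
    place = fluid.CPUPlace()
    scope = fluid.global_scope()

    # A toy network containing the newly quantizable transposed convolution.
    main_program, startup = fluid.Program(), fluid.Program()
    with fluid.program_guard(main_program, startup):
        x = fluid.data(name='x', shape=[None, 4, 8, 8], dtype='float32')
        y = fluid.layers.conv2d_transpose(x, num_filters=4, filter_size=3)
    fluid.Executor(place).run(startup)

    # Insert fake quant/dequant ops into the training graph.
    train_graph = IrGraph(core.Graph(main_program.desc), for_test=False)
    QuantizationTransformPass(
        scope=scope,
        place=place,
        activation_quantize_type='moving_average_abs_max',
        weight_quantize_type='abs_max',
        quantizable_op_type=quant_ops).apply(train_graph)

    # ... quantization-aware training / scale calibration would run here ...

    # Transform and freeze a test-mode graph for inference.
    test_graph = IrGraph(core.Graph(main_program.desc), for_test=True)
    QuantizationTransformPass(
        scope=scope, place=place,
        activation_quantize_type='moving_average_abs_max',
        weight_quantize_type='abs_max',
        quantizable_op_type=quant_ops).apply(test_graph)
    QuantizationFreezePass(
        scope=scope, place=place,
        weight_quantize_type='abs_max',
        quantizable_op_type=quant_ops).apply(test_graph)
    inference_program = test_graph.to_program()

As the docstring above notes, the same quantizable_op_type list must be passed to QuantizationTransformPass, QuantizationFreezePass, and ConvertToInt8Pass.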