Crayon鑫 / Paddle (forked from PaddlePaddle / Paddle)

Commit 430f8449 (unverified), authored on Jan 20, 2021 by guofei; committed via GitHub on Jan 20, 2021.
Fix the error of save_quantized_model (#30583)
* Fix the error of save_quantized_model
Parent: 10271ddf

Showing 1 changed file with 10 additions and 5 deletions (+10 -5):

python/paddle/fluid/contrib/slim/quantization/imperative/qat.py (+10 -5)
python/paddle/fluid/contrib/slim/quantization/imperative/qat.py @ 430f8449

@@ -36,7 +36,7 @@ _logger = get_logger(
 _op_real_in_out_name = {
     "conv2d": [["Input", "Filter"], ["Output"]],
-    "conv2d_transpose": [["Input", "Filter"], ["Output"]],
+    "depthwise_conv2d": [["Input", "Filter"], ["Output"]],
     "pool2d": [["X"], ["Out"]],
     "elementwise_add": [["X", "Y"], ["Out"]],
     "softmax": [["X"], ["Out"]],
@@ -329,9 +329,9 @@ class ImperativeCalcOutScale(object):
         super(ImperativeCalcOutScale, self).__init__()
         self._moving_rate = moving_rate
         self._out_scale_layer_type_list = (
-            BatchNorm, BatchNorm1D, BatchNorm2D, BatchNorm3D, Conv2D,
-            Conv2DTranspose, LeakyReLU, Linear, PReLU, Pool2D, MaxPool1D,
-            MaxPool2D, ReLU, ReLU6, Sigmoid, Softmax, Tanh, Swish)
+            BatchNorm, BatchNorm1D, BatchNorm2D, BatchNorm3D, Conv2D,
+            LeakyReLU, Linear, PReLU, Pool2D, MaxPool1D, MaxPool2D, ReLU,
+            ReLU6, Sigmoid, Softmax, Tanh, Swish)
         self._register_hook_handle_list = []
         self._out_scale_dict = collections.OrderedDict()
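`ImperativeCalcOutScale` registers hooks on layers of the whitelisted types and tracks a moving-average output scale per layer, governed by `moving_rate`. Paddle's actual scale collection is implemented by a dedicated op; the following is only an illustrative exponential-moving-average sketch with hypothetical names, not Paddle's implementation:

```python
def update_scale(prev_scale, batch_max_abs, moving_rate=0.9):
    # Illustrative EMA of the per-batch absolute maximum of a layer's output.
    # prev_scale is None on the first batch, so the first observation seeds it.
    if prev_scale is None:
        return batch_max_abs
    return moving_rate * prev_scale + (1.0 - moving_rate) * batch_max_abs
```

With `moving_rate=0.9`, a previous scale of 2.0 and a new batch maximum of 4.0 give 0.9 * 2.0 + 0.1 * 4.0 = 2.2, so a single outlier batch only nudges the recorded threshold.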
@@ -415,9 +415,11 @@ class ImperativeCalcOutScale(object):
         # Traverse all ops in the program and find out the op matching
         # the Layer in the dynamic graph.
-        layer_var_dict = {}
+        layer_var_dict = collections.OrderedDict()
+        ops_list = [key for key, _ in self._out_scale_dict.items()]
+        op_count = 0
+        conv_count = 0
         for block in inference_program.blocks:
             for op in block.ops:
                 if op.type in _op_real_in_out_name:
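Switching `layer_var_dict` from a plain `{}` to `collections.OrderedDict()` guarantees insertion order on every Python version this code supported (plain dicts only guarantee it from CPython 3.7), which matters because the collected variables are later paired with the keys of `self._out_scale_dict`, itself an `OrderedDict`. A small sketch of that ordered collection, with a stand-in op list instead of real `block.ops`:

```python
import collections

# Stand-in for iterating block.ops in program order; each op contributes
# one output variable, grouped under its op type in insertion order.
ops = ["conv2d", "relu", "conv2d", "softmax"]
layer_var_dict = collections.OrderedDict()
for i, op_type in enumerate(ops):
    layer_var_dict.setdefault(op_type, []).append("var_%d" % i)
```

Iterating `layer_var_dict` now visits `conv2d`, `relu`, `softmax` in first-seen order, so positional matching against another ordered mapping stays deterministic.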
@@ -472,6 +474,9 @@ class ImperativeCalcOutScale(object):
                     layer_name = layer_name.replace('prelu', 'p_re_lu')
                 if 'relu' in layer_name:
                     layer_name = layer_name.replace('relu', 're_lu')
+                if 'conv2d' in layer_name:
+                    layer_name = 'conv2d_' + str(conv_count)
+                    conv_count = conv_count + 1
                 if layer_name not in self._out_scale_dict:
                     continue
                 var_name_op_list[1]._set_attr('out_threshold',
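The added branch renumbers any name containing `'conv2d'` with a running `conv_count`, so static-graph op names line up with the sequentially numbered dygraph layer names stored as keys of `_out_scale_dict`. The rules can be mirrored as a standalone function (a sketch for illustration; this helper does not exist in Paddle):

```python
def normalize_layer_name(layer_name, conv_count):
    # Mirror of the renaming rules in the hunk above: rewrite op-derived
    # names into the dygraph layer-name convention used by _out_scale_dict.
    if 'prelu' in layer_name:
        layer_name = layer_name.replace('prelu', 'p_re_lu')
    if 'relu' in layer_name:
        layer_name = layer_name.replace('relu', 're_lu')
    if 'conv2d' in layer_name:
        # Renumber conv names with a running counter; the caller threads
        # conv_count through successive calls, as the loop in the diff does.
        layer_name = 'conv2d_' + str(conv_count)
        conv_count = conv_count + 1
    return layer_name, conv_count
```

Note the ordering: after `'prelu'` becomes `'p_re_lu'`, the string no longer contains the substring `'relu'`, so the second rule cannot double-rewrite it.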