PaddlePaddle / PaddleSlim
Commit 2378d44c, authored Dec 06, 2019 by wanghaoshuang

Add doc for sensitive API.

Parent: f381b6fc

Showing 2 changed files with 146 additions and 4 deletions
doc/prune_api.md                 +143  -0
paddleslim/prune/sensitive.py      +3  -4
doc/prune_api.md @ 2378d44c
@@ -142,3 +142,146 @@ for param in main_program.global_block().all_parameters():
---
## sensitivity

>paddleslim.prune.sensitivity(program, place, param_names, eval_func, sensitivities_file=None, pruned_ratios=None) [source]()

Computes the sensitivity of every convolutional layer in the network. The sensitivity of a convolutional layer is measured by pruning different ratios of its output channels in turn and evaluating the resulting accuracy loss on the test set. Once the sensitivity information is available, a pruning ratio for each convolutional layer can be chosen by inspection or by other means.
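A minimal sketch of the per-layer loop behind this measurement, mirroring the logic in paddleslim/prune/sensitive.py shown in the diff at the bottom of this commit; the relative-loss formula comes from that file, while the helper names here (`sensitivity_sketch`, `prune_one_param`) are only illustrative:

```
# Illustrative only; the real implementation is paddleslim.prune.sensitivity.
def sensitivity_sketch(eval_func, prune_one_param, param_names,
                       pruned_ratios=(0.1, 0.2, 0.3)):
    baseline = eval_func()          # accuracy of the unpruned program
    sensitivities = {}
    for name in param_names:
        sensitivities[name] = {"pruned_percent": [], "loss": []}
        for ratio in pruned_ratios:
            # prune_one_param is a hypothetical callback that prunes `ratio`
            # of the output channels of `name` and returns the new accuracy.
            pruned_metric = prune_one_param(name, ratio)
            loss = (baseline - pruned_metric) / baseline
            sensitivities[name]["pruned_percent"].append(ratio)
            sensitivities[name]["loss"].append(loss)
    return sensitivities
```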
**Parameters:**

- **program(paddle.fluid.Program):** The target network to evaluate. For more about Program, see [Introduction to Program](https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/fluid_cn/Program_cn.html#program).

- **place(paddle.fluid.Place):** The device on which the parameters being analyzed reside; it can be a `CUDAPlace` or a `CPUPlace`. [Introduction to Place]()

- **param_names(list<str>):** Names of the parameters of the convolutional layers to analyze. The names of all parameters in the model can be listed as follows:

```
for block in program.blocks:
    for param in block.all_parameters():
        print("param: {}; shape: {}".format(param.name, param.shape))
```

- **eval_func(function):** Callback used to evaluate the pruned model. It takes the pruned `program` as its argument and returns a score representing the accuracy of that program, which is used to compute the accuracy loss caused by the current pruning.

- **sensitivities_file(str):** File on the local file system in which the sensitivity information is saved. While the analysis runs, newly computed sensitivity information is continuously appended to this file. After a restart, sensitivity information already present in the file is not recomputed. The file can be loaded with `pickle`, as sketched after this list.

- **pruned_ratios(list<float>):** Ratios of output channels to prune, in turn, when computing the sensitivity of a convolutional layer. Defaults to [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9].
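As a usage note on `sensitivities_file`, a minimal sketch of inspecting a previously saved file with `pickle`; the file name `./sensitive.data` matches the example further below, and the exact on-disk layout is an assumption beyond the doc's statement that the file is pickle-loadable:

```
import pickle

# Load sensitivity information saved by a previous (possibly interrupted) run.
# Assumes the file holds a pickled dict in the format described under "Returns".
with open("./sensitive.data", "rb") as f:
    sensitivities = pickle.load(f)

for param_name, info in sensitivities.items():
    print(param_name, info)
```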
**Returns:**

- **sensitivities(dict):** Dict holding the sensitivity information, in the following format:

```
{"weight_0":
   {"loss": [0.22, 0.33],
    "pruned_percent": [0.1, 0.2]
   },
 "weight_1":
   {"loss": [0.21, 0.4],
    "pruned_percent": [0.1, 0.2]
   }
}
```

Here `weight_0` is the name of a convolutional layer's parameter, and the `loss[i]` entry under `weight_0` is the accuracy loss measured after pruning `weight_0` by `pruned_percent[i]`.
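One way to turn this dict into per-layer pruning ratios, as suggested above, is to pick for each layer the largest tested ratio whose loss stays under a threshold. A minimal sketch under that assumption (the threshold value and the selection rule are illustrative, not part of the API):

```
def ratios_under_loss(sensitivities, max_loss=0.05):
    """For each parameter, pick the largest tested ratio whose loss <= max_loss."""
    ratios = {}
    for name, info in sensitivities.items():
        candidates = [r for r, l in zip(info["pruned_percent"], info["loss"])
                      if l <= max_loss]
        if candidates:
            ratios[name] = max(candidates)
    return ratios
```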
**Example:**

Open [AIStudio](https://aistudio.baidu.com/aistudio/projectdetail/201401) to run the example code below.
```
import paddle
import numpy as np
import paddle.fluid as fluid
from paddle.fluid.param_attr import ParamAttr
from paddleslim.prune import sensitivity
import paddle.dataset.mnist as reader

def conv_bn_layer(input,
                  num_filters,
                  filter_size,
                  name,
                  stride=1,
                  groups=1,
                  act=None):
    conv = fluid.layers.conv2d(
        input=input,
        num_filters=num_filters,
        filter_size=filter_size,
        stride=stride,
        padding=(filter_size - 1) // 2,
        groups=groups,
        act=None,
        param_attr=ParamAttr(name=name + "_weights"),
        bias_attr=False,
        name=name + "_out")
    bn_name = name + "_bn"
    return fluid.layers.batch_norm(
        input=conv,
        act=act,
        name=bn_name + '_output',
        param_attr=ParamAttr(name=bn_name + '_scale'),
        bias_attr=ParamAttr(bn_name + '_offset'),
        moving_mean_name=bn_name + '_mean',
        moving_variance_name=bn_name + '_variance', )

main_program = fluid.Program()
startup_program = fluid.Program()
#   X       X              O          X              O
# conv1-->conv2-->sum1-->conv3-->conv4-->sum2-->conv5-->conv6
#     |            ^ |                    ^
#     |____________| |____________________|
#
# X: prune output channels
# O: prune input channels
image_shape = [1, 28, 28]
with fluid.program_guard(main_program, startup_program):
    image = fluid.data(name='image', shape=[None] + image_shape, dtype='float32')
    label = fluid.data(name='label', shape=[None, 1], dtype='int64')
    conv1 = conv_bn_layer(image, 8, 3, "conv1")
    conv2 = conv_bn_layer(conv1, 8, 3, "conv2")
    sum1 = conv1 + conv2
    conv3 = conv_bn_layer(sum1, 8, 3, "conv3")
    conv4 = conv_bn_layer(conv3, 8, 3, "conv4")
    sum2 = conv4 + sum1
    conv5 = conv_bn_layer(sum2, 8, 3, "conv5")
    conv6 = conv_bn_layer(conv5, 8, 3, "conv6")
    out = fluid.layers.fc(conv6, size=10, act="softmax")
    # cost = fluid.layers.cross_entropy(input=out, label=label)
    # avg_cost = fluid.layers.mean(x=cost)
    acc_top1 = fluid.layers.accuracy(input=out, label=label, k=1)
    # acc_top5 = fluid.layers.accuracy(input=out, label=label, k=5)

place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(startup_program)

val_reader = paddle.batch(reader.test(), batch_size=128)
val_feeder = feeder = fluid.DataFeeder(
    [image, label], place, program=main_program)

def eval_func(program):
    acc_top1_ns = []
    for data in val_reader():
        acc_top1_n = exe.run(program,
                             feed=val_feeder.feed(data),
                             fetch_list=[acc_top1.name])
        acc_top1_ns.append(np.mean(acc_top1_n))
    return np.mean(acc_top1_ns)

param_names = []
for param in main_program.global_block().all_parameters():
    if "weights" in param.name:
        param_names.append(param.name)

sensitivities = sensitivity(main_program,
                            place,
                            param_names,
                            eval_func,
                            sensitivities_file="./sensitive.data",
                            pruned_ratios=[0.1, 0.2, 0.3])
print(sensitivities)
```
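A short follow-up on `sensitivities_file`: since entries already stored in the file are skipped on a rerun, the analysis above can be extended with additional ratios without recomputing the first three. A minimal sketch under that assumption, reusing the names defined in the example above:

```
# Resume the analysis with more ratios; results already recorded in
# ./sensitive.data are skipped, so only the new ratios are evaluated here.
sensitivities = sensitivity(main_program,
                            place,
                            param_names,
                            eval_func,
                            sensitivities_file="./sensitive.data",
                            pruned_ratios=[0.1, 0.2, 0.3, 0.4, 0.5])
```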
paddleslim/prune/sensitive.py @ 2378d44c

@@ -58,11 +58,10 @@ def sensitivity(program,
             if baseline is None:
                 baseline = eval_func(graph.program)
-            param_backup = {}
             pruner = Pruner()
             _logger.info("sensitive - param: {}; ratios: {}".format(name, ratio))
-            pruned_program = pruner.prune(
+            pruned_program, param_backup, _ = pruner.prune(
                 program=graph.program,
                 scope=scope,
                 params=[name],
@@ -70,7 +69,7 @@ def sensitivity(program,
                 place=place,
                 lazy=True,
                 only_graph=False,
-                param_backup=param_backup)
+                param_backup=True)
             pruned_metric = eval_func(pruned_program)
             loss = (baseline - pruned_metric) / baseline
             _logger.info("pruned param: {}; {}; loss={}".format(name, ratio,
@@ -118,7 +117,7 @@ def flops_sensitivity(program,
     baseline = None
     for name in sensitivities:
-        pruned_program = pruner.prune(
+        pruned_program, _, _ = pruner.prune(
             program=graph.program,
             scope=None,
             params=[name],