# Channel Pruning for Convolution Layers

## Pruner
`paddleslim.prune.Pruner(criterion="l1_norm")` [Source](https://github.com/PaddlePaddle/PaddleSlim/blob/develop/paddleslim/prune/pruner.py#L28)

: Prunes the channels of a convolutional network in one pass. Pruning a convolution layer's channels means pruning that layer's output channels. A convolution layer's weight has the shape `[output_channel, input_channel, kernel_size, kernel_size]`; the number of output channels is reduced by pruning along the first dimension of this weight.

**Parameters:**

- **criterion** - The metric used to evaluate the importance of the channels within a convolution layer. Currently only `l1_norm` is supported. Default: `l1_norm`.
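The `l1_norm` criterion can be illustrated with a small NumPy sketch (a simplified illustration of the idea, not PaddleSlim's actual implementation; the weight shape and channel count are made up):

```python
import numpy as np

# Score each output channel of a conv weight with shape
# [output_channel, input_channel, kernel_size, kernel_size]
# by the L1 norm of its slice; channels with smaller scores are
# considered less important and are pruned first.
np.random.seed(0)
weight = np.random.randn(8, 3, 3, 3)  # hypothetical conv weight

scores = np.abs(weight).reshape(weight.shape[0], -1).sum(axis=1)
order = np.argsort(scores)  # channels from least to most important
print(order[:4])  # the 4 channels a ratio of 0.5 would prune
```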

**Returns:** An instance of the `Pruner` class.

**Example:**

```python
from paddleslim.prune import Pruner
pruner = Pruner()
```

`paddleslim.prune.Pruner.prune(program, scope, params, ratios, place=None, lazy=False, only_graph=False, param_backup=False, param_shape_backup=False)` [Source](https://github.com/PaddlePaddle/PaddleSlim/blob/develop/paddleslim/prune/pruner.py#L36)

: Prunes the weights of a group of convolution layers in the target network.

**Parameters:**

- **program(paddle.fluid.Program)** - The target network to prune. For more about `Program`, see: [Introduction to Program](https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/fluid_cn/Program_cn.html#program)

- **scope(paddle.fluid.Scope)** - The `scope` holding the weights to be pruned. In Paddle, a `scope` instance stores the values of model parameters and runtime variables. The parameter values in the scope are pruned in place. For more details, see [Introduction to Scope]()

- **params(list<str>)** - A list of the names of the convolution-layer parameters to prune. The names of all parameters in a model can be listed as follows:
```python
for block in program.blocks:
    for param in block.all_parameters():
        print("param: {}; shape: {}".format(param.name, param.shape))
```

- **ratios(list<float>)** - The pruning ratios applied to `params`. This list must have the same length as `params`.

- **place(paddle.fluid.Place)** - The device where the parameters to prune are located; it can be a `CUDAPlace` or a `CPUPlace`. [Introduction to Place]()

- **lazy(bool)** - When `lazy` is True, pruning is done by zeroing out the parameters of the selected channels, so the parameter `shape` stays unchanged. When `lazy` is False, the parameters of the pruned channels are removed, and the parameter `shape` changes accordingly.

- **only_graph(bool)** - Whether to prune only the network structure. In Paddle, a `Program` defines the network structure while a `Scope` stores the parameter values. One `Scope` instance can be used by multiple `Program`s; for example, the `Program` defining the training network and the `Program` defining the test network usually share the same `Scope` instance. When `only_graph` is True, only the channels of the convolutions defined in the `Program` are pruned; when it is False, the values of the convolution parameters in the `Scope` are pruned as well. Default: False.

- **param_backup(bool)** - Whether to return a backup of the parameter values. Default: False.

- **param_shape_backup(bool)** - Whether to return a backup of the parameter `shape`s. Default: False.
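The effect of the `lazy` flag described above can be sketched with NumPy (a simplified illustration of the two behaviors, not the library's implementation; the weight shape and channel indices are made up):

```python
import numpy as np

weight = np.ones((8, 3, 3, 3))  # hypothetical conv weight
pruned_idx = [0, 2]             # channels selected for pruning

# lazy=True: zero out the selected output channels; the shape is unchanged.
lazy_pruned = weight.copy()
lazy_pruned[pruned_idx] = 0
print(lazy_pruned.shape)  # (8, 3, 3, 3)

# lazy=False: remove the selected output channels; the shape shrinks.
hard_pruned = np.delete(weight, pruned_idx, axis=0)
print(hard_pruned.shape)  # (6, 3, 3, 3)
```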

**Returns:**

- **pruned_program(paddle.fluid.Program)** - The pruned `Program`.

- **param_backup(dict)** - A backup of the parameter values, used to restore the parameter values in the `Scope`.

- **param_shape_backup(dict)** - A backup of the parameter shapes.

**Example:**

Run the example code below on [AIStudio](https://aistudio.baidu.com/aistudio/projectDetail/200786).
```python

import paddle.fluid as fluid
from paddle.fluid.param_attr import ParamAttr
from paddleslim.prune import Pruner

def conv_bn_layer(input,
                  num_filters,
                  filter_size,
                  name,
                  stride=1,
                  groups=1,
                  act=None):
    conv = fluid.layers.conv2d(
        input=input,
        num_filters=num_filters,
        filter_size=filter_size,
        stride=stride,
        padding=(filter_size - 1) // 2,
        groups=groups,
        act=None,
        param_attr=ParamAttr(name=name + "_weights"),
        bias_attr=False,
        name=name + "_out")
    bn_name = name + "_bn"
    return fluid.layers.batch_norm(
        input=conv,
        act=act,
        name=bn_name + '_output',
        param_attr=ParamAttr(name=bn_name + '_scale'),
        bias_attr=ParamAttr(bn_name + '_offset'),
        moving_mean_name=bn_name + '_mean',
        moving_variance_name=bn_name + '_variance', )

main_program = fluid.Program()
startup_program = fluid.Program()
#   X       X              O       X              O
# conv1-->conv2-->sum1-->conv3-->conv4-->sum2-->conv5-->conv6
#     |            ^ |                    ^
#     |____________| |____________________|
#
# X: prune output channels
# O: prune input channels
with fluid.program_guard(main_program, startup_program):
    input = fluid.data(name="image", shape=[None, 3, 16, 16])
    conv1 = conv_bn_layer(input, 8, 3, "conv1")
    conv2 = conv_bn_layer(conv1, 8, 3, "conv2")
    sum1 = conv1 + conv2
    conv3 = conv_bn_layer(sum1, 8, 3, "conv3")
    conv4 = conv_bn_layer(conv3, 8, 3, "conv4")
    sum2 = conv4 + sum1
    conv5 = conv_bn_layer(sum2, 8, 3, "conv5")
    conv6 = conv_bn_layer(conv5, 8, 3, "conv6")

place = fluid.CPUPlace()
exe = fluid.Executor(place)
scope = fluid.Scope()
exe.run(startup_program, scope=scope)
pruner = Pruner()
main_program, _, _ = pruner.prune(
    main_program,
    scope,
    params=["conv4_weights"],
    ratios=[0.5],
    place=place,
    lazy=False,
    only_graph=False,
    param_backup=False,
    param_shape_backup=False)

for param in main_program.global_block().all_parameters():
    if "weights" in param.name:
        print("param name: {}; param shape: {}".format(param.name, param.shape))

```


---

## sensitivity
`paddleslim.prune.sensitivity(program, place, param_names, eval_func, sensitivities_file=None, pruned_ratios=None)` [Source](https://github.com/PaddlePaddle/PaddleSlim/blob/develop/paddleslim/prune/sensitive.py#L34)

: Computes the sensitivity of each convolution layer in the network. A layer's sensitivity is measured as follows: prune different ratios of the layer's output channels in turn, and compute the accuracy loss on the test set after each pruning. Given the sensitivity information, the pruning ratio of each convolution layer can be chosen by inspection or by other means.
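The procedure described above can be sketched as the following loop (a schematic only, not PaddleSlim's implementation; `eval_pruned` is a hypothetical caller-supplied stand-in for pruning one layer and evaluating on the test set):

```python
def compute_sensitivity(param_names, ratios, baseline_acc, eval_pruned):
    """Schematic of the sensitivity loop. `eval_pruned(name, ratio)`
    returns the accuracy after pruning the layer `name` by `ratio`."""
    sensitivities = {}
    for name in param_names:
        sensitivities[name] = {}
        for ratio in ratios:
            acc = eval_pruned(name, ratio)
            # Record the relative accuracy loss for this ratio.
            sensitivities[name][ratio] = (baseline_acc - acc) / baseline_acc
    return sensitivities

# Toy usage with a fake evaluator.
fake_acc = {("conv1_weights", 0.1): 0.88, ("conv1_weights", 0.2): 0.80}
sen = compute_sensitivity(["conv1_weights"], [0.1, 0.2], 0.90,
                          lambda n, r: fake_acc[(n, r)])
print(sen)
```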

**Parameters:**

- **program(paddle.fluid.Program)** - The target network to evaluate. For more about `Program`, see: [Introduction to Program](https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/fluid_cn/Program_cn.html#program)

- **place(paddle.fluid.Place)** - The device where the parameters to analyze are located; it can be a `CUDAPlace` or a `CPUPlace`. [Introduction to Place]()

- **param_names(list<str>)** - A list of the names of the convolution-layer parameters to analyze. The names of all parameters in a model can be listed as follows:

```python
for block in program.blocks:
    for param in block.all_parameters():
        print("param: {}; shape: {}".format(param.name, param.shape))
```

- **eval_func(function)** - A callback used to evaluate the pruned model. It takes the pruned `program` as its argument and returns a score representing the accuracy of that program, which is used to compute the accuracy loss caused by the pruning.

- **sensitivities_file(str)** - A file on the local file system in which to save the sensitivity information. During the computation, newly computed sensitivity information is continuously appended to this file. If the task is restarted, the sensitivity information already in the file is not recomputed. The file can be loaded with `pickle`.

- **pruned_ratios(list<float>)** - The channel pruning ratios applied in turn when computing a convolution layer's sensitivity. Default: [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9].

**Returns:**

- **sensitivities(dict)** - A dict holding the sensitivity information, in the following format:

```python
{"weight_0":
   {0.1: 0.22,
    0.2: 0.33
   },
 "weight_1":
   {0.1: 0.21,
    0.2: 0.4
   }
}
```

Here, `weight_0` is the name of a convolution layer's parameter. In `sensitivities['weight_0']`, each `key` is a pruning ratio, and the corresponding `value` is the resulting relative accuracy loss.
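For example, looking up the loss recorded for pruning `weight_0` by 20% in a dict of this shape:

```python
sensitivities = {"weight_0": {0.1: 0.22, 0.2: 0.33},
                 "weight_1": {0.1: 0.21, 0.2: 0.4}}
# outer key: parameter name; inner key: pruning ratio -> accuracy loss
print(sensitivities["weight_0"][0.2])  # 0.33
```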

**Example:**

Run the example code below on [AIStudio](https://aistudio.baidu.com/aistudio/projectdetail/201401).

```python
import paddle
import numpy as np
import paddle.fluid as fluid
from paddle.fluid.param_attr import ParamAttr
from paddleslim.prune import sensitivity
import paddle.dataset.mnist as reader

def conv_bn_layer(input,
                  num_filters,
                  filter_size,
                  name,
                  stride=1,
                  groups=1,
                  act=None):
    conv = fluid.layers.conv2d(
        input=input,
        num_filters=num_filters,
        filter_size=filter_size,
        stride=stride,
        padding=(filter_size - 1) // 2,
        groups=groups,
        act=None,
        param_attr=ParamAttr(name=name + "_weights"),
        bias_attr=False,
        name=name + "_out")
    bn_name = name + "_bn"
    return fluid.layers.batch_norm(
        input=conv,
        act=act,
        name=bn_name + '_output',
        param_attr=ParamAttr(name=bn_name + '_scale'),
        bias_attr=ParamAttr(bn_name + '_offset'),
        moving_mean_name=bn_name + '_mean',
        moving_variance_name=bn_name + '_variance', )

main_program = fluid.Program()
startup_program = fluid.Program()
#   X       X              O       X              O
# conv1-->conv2-->sum1-->conv3-->conv4-->sum2-->conv5-->conv6
#     |            ^ |                    ^
#     |____________| |____________________|
#
# X: prune output channels
# O: prune input channels
image_shape = [1,28,28]
with fluid.program_guard(main_program, startup_program):
    image = fluid.data(name='image', shape=[None]+image_shape, dtype='float32')
    label = fluid.data(name='label', shape=[None, 1], dtype='int64')  
    conv1 = conv_bn_layer(image, 8, 3, "conv1")
    conv2 = conv_bn_layer(conv1, 8, 3, "conv2")
    sum1 = conv1 + conv2
    conv3 = conv_bn_layer(sum1, 8, 3, "conv3")
    conv4 = conv_bn_layer(conv3, 8, 3, "conv4")
    sum2 = conv4 + sum1
    conv5 = conv_bn_layer(sum2, 8, 3, "conv5")
    conv6 = conv_bn_layer(conv5, 8, 3, "conv6")
    out = fluid.layers.fc(conv6, size=10, act="softmax")
#    cost = fluid.layers.cross_entropy(input=out, label=label)
#    avg_cost = fluid.layers.mean(x=cost)
    acc_top1 = fluid.layers.accuracy(input=out, label=label, k=1)
#    acc_top5 = fluid.layers.accuracy(input=out, label=label, k=5)


place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(startup_program)

val_reader = paddle.batch(reader.test(), batch_size=128)
val_feeder = fluid.DataFeeder(
        [image, label], place, program=main_program)

def eval_func(program):

    acc_top1_ns = []
    for data in val_reader():
        acc_top1_n = exe.run(program,
                             feed=val_feeder.feed(data),
                             fetch_list=[acc_top1.name])
        acc_top1_ns.append(np.mean(acc_top1_n))
    return np.mean(acc_top1_ns)
param_names = []
for param in main_program.global_block().all_parameters():
    if "weights" in param.name:
        param_names.append(param.name)
sensitivities = sensitivity(main_program,
                            place,
                            param_names,
                            eval_func,
                            sensitivities_file="./sensitive.data",
                            pruned_ratios=[0.1, 0.2, 0.3])
print(sensitivities)

```

## merge_sensitive
`paddleslim.prune.merge_sensitive(sensitivities)` [Source](https://github.com/PaddlePaddle/PaddleSlim/blob/develop/paddleslim/prune/sensitive.py#L161)

: Merges multiple pieces of sensitivity information.

Parameters:

- **sensitivities(list<dict> | list<str>)** - The sensitivity information to merge, given either as a list of dicts or as a list of paths to files storing sensitivity information.

Returns:

- **sensitivities(dict)** - The merged sensitivity information, in the following format:

```python
{"weight_0":
   {0.1: 0.22,
    0.2: 0.33
   },
 "weight_1":
   {0.1: 0.21,
    0.2: 0.4
   }
}
```

Here, `weight_0` is the name of a convolution layer's parameter. In `sensitivities['weight_0']`, each `key` is a pruning ratio, and the corresponding `value` is the resulting relative accuracy loss.

Example:

```python
from paddleslim.prune import merge_sensitive
sen0 = {"weight_0":
   {0.1: 0.22,
    0.2: 0.33
   },
 "weight_1":
   {0.1: 0.21,
    0.2: 0.4
   }
}
sen1 = {"weight_0":
   {0.3: 0.41,
   },
 "weight_2":
   {0.1: 0.10,
    0.2: 0.35
   }
}
sensitivities = merge_sensitive([sen0, sen1])
print(sensitivities)
```

## load_sensitivities
`paddleslim.prune.load_sensitivities(sensitivities_file)` [Source](https://github.com/PaddlePaddle/PaddleSlim/blob/develop/paddleslim/prune/sensitive.py#L184)

: Loads sensitivity information from a file.

Parameters:

- **sensitivities_file(str)** - The local file storing the sensitivity information.

Returns:

- **sensitivities(dict)** - The sensitivity information.

Example:

```python
import pickle
from paddleslim.prune import load_sensitivities
sen = {"weight_0":
   {0.1: 0.22,
    0.2: 0.33
   },
 "weight_1":
   {0.1: 0.21,
    0.2: 0.4
   }
}
sensitivities_file = "sensitive_api_demo.data"
with open(sensitivities_file, 'wb') as f:
    pickle.dump(sen, f)
sensitivities = load_sensitivities(sensitivities_file)
print(sensitivities)
```

## get_ratios_by_loss
`paddleslim.prune.get_ratios_by_loss(sensitivities, loss)` [Source](https://github.com/PaddlePaddle/PaddleSlim/blob/develop/paddleslim/prune/sensitive.py#L206)

: Computes a set of pruning ratios from sensitivity information and an accuracy-loss threshold. For a parameter `w`, the chosen pruning ratio is the largest ratio that keeps the accuracy loss below `loss`.

Parameters:

- **sensitivities(dict)** - The sensitivity information.

- **loss** - The accuracy-loss threshold.

Returns:

- **ratios(dict)** - A set of pruning ratios. Each `key` is the name of a parameter to prune, and the corresponding `value` is that parameter's pruning ratio.

Example:

```python
from paddleslim.prune import get_ratios_by_loss
sen = {"weight_0":
   {0.1: 0.22,
    0.2: 0.33
   },
 "weight_1":
   {0.1: 0.21,
    0.2: 0.4
   }
}

ratios = get_ratios_by_loss(sen, 0.3)
print(ratios)

```