Single-Process Distillation
===========================

merge
---------

.. py:function:: paddleslim.dist.merge(teacher_program, student_program, data_name_map, place, scope=None, name_prefix='teacher_')

`[源代码] <https://github.com/PaddlePaddle/PaddleSlim/blob/develop/paddleslim/dist/single_distiller.py#L19>`_

Merges teacher_program into student_program.

In the merged program, tensors originally belonging to the two separate programs can conveniently be combined in computations.

**Parameters:**

- **teacher_program** (Program): the `paddle program <https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/fluid_cn/Program_cn.html#program>`_ that defines the teacher model
- **student_program** (Program): the `paddle program <https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/fluid_cn/Program_cn.html#program>`_ that defines the student model
- **data_name_map** (dict): mapping from teacher input names to student input names, where each *key* is a teacher input name and each *value* is the corresponding student input name
- **place** (fluid.CPUPlace()|fluid.CUDAPlace(N)): the device the program runs on, where N is the GPU ID
- **scope** (Scope): the variable scope used by the program; if not specified, the default `global_scope <https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api_cn/paddle_cn/global_scope_cn.html#global-scope>`_ is used. Default: None
- **name_prefix** (str): to avoid clashes between identically named parameters, merge prepends this prefix to the names of the teacher's `Variables <https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/beginners_guide/basic_concept/variable.html#variable>`_. Default: teacher_

**Returns:** None

**Example:**

.. code-block:: python

   import paddle.fluid as fluid
   import paddleslim.dist as dist
   student_program = fluid.Program()
   with fluid.program_guard(student_program):
       x = fluid.layers.data(name='x', shape=[1, 28, 28])
       conv = fluid.layers.conv2d(x, 32, 1)
       out = fluid.layers.conv2d(conv, 64, 3, padding=1)
   teacher_program = fluid.Program()
   with fluid.program_guard(teacher_program):
       y = fluid.layers.data(name='y', shape=[1, 28, 28])
       conv = fluid.layers.conv2d(y, 32, 1)
       conv = fluid.layers.conv2d(conv, 32, 3, padding=1)
       out = fluid.layers.conv2d(conv, 64, 3, padding=1)
   data_name_map = {'y':'x'}
   USE_GPU = False
   place = fluid.CUDAPlace(0) if USE_GPU else fluid.CPUPlace()
   dist.merge(teacher_program, student_program,
                             data_name_map, place)
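The renaming that merge performs can be illustrated with a small sketch in plain Python (an assumed behavior for illustration only, not the PaddleSlim source): teacher variables receive the name_prefix, while teacher inputs listed in data_name_map are rewired onto the student's input variables.

.. code-block:: python

   # Illustrative sketch only (assumed behavior, not the PaddleSlim source):
   # teacher variables are renamed with name_prefix; teacher inputs that
   # appear in data_name_map are replaced by the student's input variables.
   def merged_teacher_names(teacher_var_names, data_name_map, name_prefix='teacher_'):
       merged = {}
       for name in teacher_var_names:
           if name in data_name_map:
               merged[name] = data_name_map[name]   # shared with the student input
           else:
               merged[name] = name_prefix + name    # prefixed to avoid collisions
       return merged

   print(merged_teacher_names(['y', 't1.tmp_1', 't2.tmp_1'], {'y': 'x'}))
   # {'y': 'x', 't1.tmp_1': 'teacher_t1.tmp_1', 't2.tmp_1': 'teacher_t2.tmp_1'}

This is why the distillation-loss examples later in this document refer to teacher variables by prefixed names such as ``teacher_t1.tmp_1``.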


fsp_loss
---------

.. py:function:: paddleslim.dist.fsp_loss(teacher_var1_name, teacher_var2_name, student_var1_name, student_var2_name, program=None)

`[源代码] <https://github.com/PaddlePaddle/PaddleSlim/blob/develop/paddleslim/dist/single_distiller.py#L90>`_

Adds fsp_loss between a pair of teacher variables and a pair of student variables in the given program.

fsp_loss comes from the paper `A Gift from Knowledge Distillation: Fast Optimization, Network Minimization and Transfer Learning <http://openaccess.thecvf.com/content_cvpr_2017/papers/Yim_A_Gift_From_CVPR_2017_paper.pdf>`_
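The underlying math can be sketched in NumPy (a hedged illustration of the paper's formulation, not necessarily the exact PaddleSlim kernel): the FSP matrix is the channel-wise inner product of two feature maps averaged over spatial positions, and the loss is the mean squared difference between the teacher's and the student's FSP matrices.

.. code-block:: python

   import numpy as np

   def fsp_matrix(feat_a, feat_b):
       # feat_a: [batch, c_a, h, w], feat_b: [batch, c_b, h, w]
       batch, c_a, h, w = feat_a.shape
       c_b = feat_b.shape[1]
       a = feat_a.reshape(batch, c_a, h * w)
       b = feat_b.reshape(batch, c_b, h * w)
       # [batch, c_a, c_b]: channel inner products averaged over spatial positions
       return np.matmul(a, b.transpose(0, 2, 1)) / (h * w)

   def fsp_loss(t_feat1, t_feat2, s_feat1, s_feat2):
       # mean squared difference of teacher and student FSP matrices
       return np.mean((fsp_matrix(t_feat1, t_feat2) - fsp_matrix(s_feat1, s_feat2)) ** 2)

   t1, t2 = np.ones((2, 3, 7, 7)), np.ones((2, 6, 7, 7))
   print(fsp_loss(t1, t2, t1, t2))  # 0.0 when student features match the teacher exactly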

**Parameters:**

- **teacher_var1_name** (str): name of teacher_var1. The corresponding variable is a 4-D feature-map Tensor of shape ``[batch_size, x_channel, height, width]`` with data type float32 or float64
- **teacher_var2_name** (str): name of teacher_var2. The corresponding variable is a 4-D feature-map Tensor of shape ``[batch_size, y_channel, height, width]`` with data type float32 or float64. Only y_channel may differ from teacher_var1's x_channel; all other dimensions must match teacher_var1
- **student_var1_name** (str): name of student_var1. The corresponding variable must have the same shape as teacher_var1: a 4-D feature-map Tensor of shape ``[batch_size, x_channel, height, width]`` with data type float32 or float64
- **student_var2_name** (str): name of student_var2. The corresponding variable must have the same shape as teacher_var2: a 4-D feature-map Tensor of shape ``[batch_size, y_channel, height, width]`` with data type float32 or float64. Only y_channel may differ from student_var1's x_channel; all other dimensions must match student_var1
- **program** (Program): the fluid program used for distillation training; if not specified, `fluid.default_main_program() <https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api_cn/fluid_cn/default_main_program_cn.html#default-main-program>`_ is used. Default: None

**Returns:**

- (Variable): the fsp_loss computed from teacher_var1, teacher_var2, student_var1 and student_var2

**Example:**

.. code-block:: python

   import paddle.fluid as fluid
   import paddleslim.dist as dist
   student_program = fluid.Program()
   with fluid.program_guard(student_program):
       x = fluid.layers.data(name='x', shape=[1, 28, 28])
       conv = fluid.layers.conv2d(x, 32, 1, name='s1')
       out = fluid.layers.conv2d(conv, 64, 3, padding=1, name='s2')
   teacher_program = fluid.Program()
   with fluid.program_guard(teacher_program):
       y = fluid.layers.data(name='y', shape=[1, 28, 28])
       conv = fluid.layers.conv2d(y, 32, 1, name='t1')
       conv = fluid.layers.conv2d(conv, 32, 3, padding=1)
       out = fluid.layers.conv2d(conv, 64, 3, padding=1, name='t2')
   data_name_map = {'y':'x'}
   USE_GPU = False
   place = fluid.CUDAPlace(0) if USE_GPU else fluid.CPUPlace()
   dist.merge(teacher_program, student_program, data_name_map, place)
   with fluid.program_guard(student_program):
       distillation_loss = dist.fsp_loss('teacher_t1.tmp_1', 'teacher_t2.tmp_1',
                                         's1.tmp_1', 's2.tmp_1', student_program)


l2_loss
------------

.. py:function:: paddleslim.dist.l2_loss(teacher_var_name, student_var_name, program=None)

`[源代码] <https://github.com/PaddlePaddle/PaddleSlim/blob/develop/paddleslim/dist/single_distiller.py#L118>`_

Adds an l2 loss between a teacher variable and a student variable in the given program.
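The computation can be sketched in a few lines of NumPy (assumed here to be the mean squared difference of the two variables; an illustration rather than the exact PaddleSlim kernel):

.. code-block:: python

   import numpy as np

   # Hedged sketch: l2 distillation loss as the mean squared difference
   # between teacher and student outputs (shapes must match).
   def l2_distill_loss(teacher_feat, student_feat):
       return np.mean((teacher_feat - student_feat) ** 2)

   t = np.full((4, 8), 2.0)
   s = np.zeros((4, 8))
   print(l2_distill_loss(t, s))  # 4.0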

**Parameters:**

- **teacher_var_name** (str): name of the teacher variable.
- **student_var_name** (str): name of the student variable.
- **program** (Program): the fluid program used for distillation training; if not specified, `fluid.default_main_program() <https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api_cn/fluid_cn/default_main_program_cn.html#default-main-program>`_ is used. Default: None

**Returns:**

- (Variable): the l2_loss computed from teacher_var and student_var

**Example:**

.. code-block:: python

   import paddle.fluid as fluid
   import paddleslim.dist as dist
   student_program = fluid.Program()
   with fluid.program_guard(student_program):
       x = fluid.layers.data(name='x', shape=[1, 28, 28])
       conv = fluid.layers.conv2d(x, 32, 1, name='s1')
       out = fluid.layers.conv2d(conv, 64, 3, padding=1, name='s2')
   teacher_program = fluid.Program()
   with fluid.program_guard(teacher_program):
       y = fluid.layers.data(name='y', shape=[1, 28, 28])
       conv = fluid.layers.conv2d(y, 32, 1, name='t1')
       conv = fluid.layers.conv2d(conv, 32, 3, padding=1)
       out = fluid.layers.conv2d(conv, 64, 3, padding=1, name='t2')
   data_name_map = {'y':'x'}
   USE_GPU = False
   place = fluid.CUDAPlace(0) if USE_GPU else fluid.CPUPlace()
   dist.merge(teacher_program, student_program, data_name_map, place)
   with fluid.program_guard(student_program):
       distillation_loss = dist.l2_loss('teacher_t2.tmp_1', 's2.tmp_1',
                                        student_program)



soft_label_loss
-------------------

.. py:function:: paddleslim.dist.soft_label_loss(teacher_var_name, student_var_name, program=None, teacher_temperature=1., student_temperature=1.)

`[源代码] <https://github.com/PaddlePaddle/PaddleSlim/blob/develop/paddleslim/dist/single_distiller.py#L136>`_

Adds a soft label loss between a teacher variable and a student variable in the given program.

soft_label_loss comes from the paper `Distilling the Knowledge in a Neural Network <https://arxiv.org/pdf/1503.02531.pdf>`_
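The temperature mechanics can be sketched in NumPy (an illustration of the paper's formulation; the exact reduction used by PaddleSlim is an assumption here): both sets of logits are softened with a temperature-scaled softmax, and the loss is the cross entropy between the teacher's soft labels and the student's soft predictions.

.. code-block:: python

   import numpy as np

   def softmax(logits, temperature=1.0):
       z = logits / temperature
       z = z - z.max(axis=-1, keepdims=True)   # numerical stability
       e = np.exp(z)
       return e / e.sum(axis=-1, keepdims=True)

   def soft_label_loss(teacher_logits, student_logits,
                       teacher_temperature=1.0, student_temperature=1.0):
       t_soft = softmax(teacher_logits, teacher_temperature)  # soft labels
       s_soft = softmax(student_logits, student_temperature)  # soft predictions
       # cross entropy between soft labels and soft predictions, batch-averaged
       return -np.mean(np.sum(t_soft * np.log(s_soft + 1e-12), axis=-1))

   t = np.array([[2.0, 1.0, 0.1]])
   s = np.array([[1.5, 1.2, 0.3]])
   print(soft_label_loss(t, s, teacher_temperature=4.0, student_temperature=4.0))

Raising the temperature flattens both distributions, which is why higher temperatures are described as producing smoother feature maps.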

**Parameters:**

- **teacher_var_name** (str): name of the teacher variable.
- **student_var_name** (str): name of the student variable.
- **program** (Program): the fluid program used for distillation training; if not specified, `fluid.default_main_program() <https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api_cn/fluid_cn/default_main_program_cn.html#default-main-program>`_ is used. Default: None
- **teacher_temperature** (float): temperature used to soften teacher_var; the higher the temperature, the smoother the resulting distribution
- **student_temperature** (float): temperature used to soften student_var; the higher the temperature, the smoother the resulting distribution

**Returns:**

- (Variable): the soft_label_loss computed from teacher_var and student_var

**Example:**

.. code-block:: python

   import paddle.fluid as fluid
   import paddleslim.dist as dist
   student_program = fluid.Program()
   with fluid.program_guard(student_program):
       x = fluid.layers.data(name='x', shape=[1, 28, 28])
       conv = fluid.layers.conv2d(x, 32, 1, name='s1')
       out = fluid.layers.conv2d(conv, 64, 3, padding=1, name='s2')
   teacher_program = fluid.Program()
   with fluid.program_guard(teacher_program):
       y = fluid.layers.data(name='y', shape=[1, 28, 28])
       conv = fluid.layers.conv2d(y, 32, 1, name='t1')
       conv = fluid.layers.conv2d(conv, 32, 3, padding=1)
       out = fluid.layers.conv2d(conv, 64, 3, padding=1, name='t2')
   data_name_map = {'y':'x'}
   USE_GPU = False
   place = fluid.CUDAPlace(0) if USE_GPU else fluid.CPUPlace()
   dist.merge(teacher_program, student_program, data_name_map, place)
   with fluid.program_guard(student_program):
       distillation_loss = dist.soft_label_loss('teacher_t2.tmp_1',
                                                's2.tmp_1', student_program, 1., 1.)



loss
--------

.. py:function:: paddleslim.dist.loss(loss_func, program=None, **kwargs)

`[源代码] <https://github.com/PaddlePaddle/PaddleSlim/blob/develop/paddleslim/dist/single_distiller.py#L165>`_

Supports arbitrary user-defined loss functions over teacher_var and student_var.

**Parameters:**

- **loss_func** (python function): a user-defined loss function whose inputs are a teacher variable and a student variable and whose output is the custom loss
- **program** (Program): the fluid program used for distillation training; if not specified, `fluid.default_main_program() <https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api_cn/fluid_cn/default_main_program_cn.html#default-main-program>`_ is used. Default: None
- **kwargs**: mapping from loss_func argument names to the corresponding variable names

**Returns:**

- (Variable): the user-defined loss

**Example:**

.. code-block:: python

   import paddle.fluid as fluid
   import paddleslim.dist as dist
   student_program = fluid.Program()
   with fluid.program_guard(student_program):
       x = fluid.layers.data(name='x', shape=[1, 28, 28])
       conv = fluid.layers.conv2d(x, 32, 1, name='s1')
       out = fluid.layers.conv2d(conv, 64, 3, padding=1, name='s2')
   teacher_program = fluid.Program()
   with fluid.program_guard(teacher_program):
       y = fluid.layers.data(name='y', shape=[1, 28, 28])
       conv = fluid.layers.conv2d(y, 32, 1, name='t1')
       conv = fluid.layers.conv2d(conv, 32, 3, padding=1)
       out = fluid.layers.conv2d(conv, 64, 3, padding=1, name='t2')
   data_name_map = {'y':'x'}
   USE_GPU = False
   place = fluid.CUDAPlace(0) if USE_GPU else fluid.CPUPlace()
   dist.merge(teacher_program, student_program, data_name_map, place)
   def adaptation_loss(t_var, s_var):
       teacher_channel = t_var.shape[1]
       s_hint = fluid.layers.conv2d(s_var, teacher_channel, 1)
       hint_loss = fluid.layers.reduce_mean(fluid.layers.square(s_hint - t_var))
       return hint_loss
   with fluid.program_guard(student_program):
       distillation_loss = dist.loss(adaptation_loss, student_program,
               t_var='teacher_t2.tmp_1', s_var='s2.tmp_1')

.. note::

    Adding a distillation loss introduces new variables; take care that these new variables do not clash with existing student variable names. Two approaches are recommended (either one suffices):

    1. Build the distillation loss in the same name scope as student_program, so that automatically named variables (e.g. tmp_0, tmp_1...) are not defined twice under the same name

    2. Specify a name-scope prefix when adding the distillation loss; for details see the Paddle documentation for `fluid.name_scope <https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/fluid_cn/name_scope_cn.html#name-scope>`_