Simple Distillation
===================

merge
---------

.. py:function:: paddleslim.dist.merge(teacher_program, student_program, data_name_map, place, scope=None, name_prefix='teacher_')

`Source code <https://github.com/PaddlePaddle/PaddleSlim/blob/develop/paddleslim/dist/single_distiller.py#L19>`_

merge fuses teacher_program into student_program. In the merged program, distillation losses can be added between appropriate pairs of teacher and student feature maps, so that the dark knowledge of the teacher model guides the learning of the student model.

**Parameters:**

- **teacher_program** (Program): the `paddle program <https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/fluid_cn/Program_cn.html#program>`_ that defines the teacher model
- **student_program** (Program): the `paddle program <https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/fluid_cn/Program_cn.html#program>`_ that defines the student model
- **data_name_map** (dict): mapping between teacher and student input names, where each *key* is a teacher input name and its *value* is the corresponding student input name
- **place** (fluid.CPUPlace()|fluid.CUDAPlace(N)): the device the program runs on, where N is the ID of the GPU
- **scope** (Scope): the variable scope used by the program; if not specified, the default global scope is used. Default: None
- **name_prefix** (str): the name prefix that merge adds uniformly to the teacher's `Variables <https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/beginners_guide/basic_concept/variable.html#variable>`_. Default: 'teacher_'

**Returns:** None

.. note::

    *data_name_map* maps **teacher_var names to student_var names**; if the mapping is written the other way around, merge may fail to work correctly.


**Usage example:**

.. code-block:: python

   import paddle.fluid as fluid
   import paddleslim.dist as dist
   student_program = fluid.Program()
   with fluid.program_guard(student_program):
       x = fluid.layers.data(name='x', shape=[1, 28, 28])
       conv = fluid.layers.conv2d(x, 32, 1)
       out = fluid.layers.conv2d(conv, 64, 3, padding=1)
   teacher_program = fluid.Program()
   with fluid.program_guard(teacher_program):
       y = fluid.layers.data(name='y', shape=[1, 28, 28])
       conv = fluid.layers.conv2d(y, 32, 1)
       conv = fluid.layers.conv2d(conv, 32, 3, padding=1)
       out = fluid.layers.conv2d(conv, 64, 3, padding=1)
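   # maps the teacher input name 'y' to the student input name 'x'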
   data_name_map = {'y':'x'}
   USE_GPU = False
   place = fluid.CUDAPlace(0) if USE_GPU else fluid.CPUPlace()
   dist.merge(teacher_program, student_program,
              data_name_map, place)


fsp_loss
---------

.. py:function:: paddleslim.dist.fsp_loss(teacher_var1_name, teacher_var2_name, student_var1_name, student_var2_name, program=None)

`Source code <https://github.com/PaddlePaddle/PaddleSlim/blob/develop/paddleslim/dist/single_distiller.py#L90>`_

fsp_loss adds an FSP loss between a teacher var and a student var in the program, following the paper `A Gift from Knowledge Distillation: Fast Optimization, Network Minimization and Transfer Learning <http://openaccess.thecvf.com/content_cvpr_2017/papers/Yim_A_Gift_From_CVPR_2017_paper.pdf>`_.

**Parameters:**

- **teacher_var1_name** (str): name of teacher_var1. The corresponding variable is a 4-D feature-map Tensor of shape ``[batch_size, x_channel, height, width]`` with data type float32 or float64
- **teacher_var2_name** (str): name of teacher_var2. The corresponding variable is a 4-D feature-map Tensor of shape ``[batch_size, y_channel, height, width]`` with data type float32 or float64. Only y_channel may differ from teacher_var1's x_channel; all other dimensions must be the same as teacher_var1's
- **student_var1_name** (str): name of student_var1. The corresponding variable must have the same shape as teacher_var1: a 4-D feature-map Tensor of shape ``[batch_size, x_channel, height, width]`` with data type float32 or float64
- **student_var2_name** (str): name of student_var2. The corresponding variable must have the same shape as teacher_var2: a 4-D feature-map Tensor of shape ``[batch_size, y_channel, height, width]`` with data type float32 or float64. Only y_channel may differ from student_var1's x_channel; all other dimensions must be the same as student_var1's
- **program** (Program): the fluid program used for distillation training; if not specified, `fluid.default_main_program() <https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api_cn/fluid_cn/default_main_program_cn.html#default-main-program>`_ is used. Default: None

**Returns:** the fsp_loss computed from teacher_var1, teacher_var2, student_var1, and student_var2
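
The FSP matrix pairs two feature maps that share the same spatial size and measures the interaction between their channels; the loss is the mean squared difference between the teacher's and the student's FSP matrices. The NumPy sketch below is for intuition only and is not PaddleSlim's implementation:

.. code-block:: python

   import numpy as np

   def fsp_matrix(var1, var2):
       # var1: [batch, c1, h, w], var2: [batch, c2, h, w]
       batch, c1, h, w = var1.shape
       c2 = var2.shape[1]
       a = var1.reshape(batch, c1, h * w)
       b = var2.reshape(batch, c2, h * w)
       # [batch, c1, c2] matrix of channel-wise inner products,
       # averaged over the h * w spatial positions
       return np.matmul(a, b.transpose(0, 2, 1)) / (h * w)

   def fsp_loss_np(t1, t2, s1, s2):
       # mean squared difference between teacher and student FSP matrices
       return np.mean((fsp_matrix(t1, t2) - fsp_matrix(s1, s2)) ** 2)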

**Usage example:**

.. code-block:: python

   import paddle.fluid as fluid
   import paddleslim.dist as dist
   student_program = fluid.Program()
   with fluid.program_guard(student_program):
       x = fluid.layers.data(name='x', shape=[1, 28, 28])
       conv = fluid.layers.conv2d(x, 32, 1, name='s1')
       out = fluid.layers.conv2d(conv, 64, 3, padding=1, name='s2')
   teacher_program = fluid.Program()
   with fluid.program_guard(teacher_program):
       y = fluid.layers.data(name='y', shape=[1, 28, 28])
       conv = fluid.layers.conv2d(y, 32, 1, name='t1')
       conv = fluid.layers.conv2d(conv, 32, 3, padding=1)
       out = fluid.layers.conv2d(conv, 64, 3, padding=1, name='t2')
   data_name_map = {'y':'x'}
   USE_GPU = False
   place = fluid.CUDAPlace(0) if USE_GPU else fluid.CPUPlace()
   dist.merge(teacher_program, student_program, data_name_map, place)
   with fluid.program_guard(student_program):
       distillation_loss = dist.fsp_loss('teacher_t1.tmp_1', 'teacher_t2.tmp_1',
                                         's1.tmp_1', 's2.tmp_1', student_program)
   


l2_loss
------------

.. py:function:: paddleslim.dist.l2_loss(teacher_var_name, student_var_name, program=None)

`Source code <https://github.com/PaddlePaddle/PaddleSlim/blob/develop/paddleslim/dist/single_distiller.py#L118>`_

l2_loss adds an l2 loss between a teacher var and a student var in the program.

**Parameters:**

- **teacher_var_name** (str): name of the teacher_var.
- **student_var_name** (str): name of the student_var.
- **program** (Program): the fluid program used for distillation training; if not specified, `fluid.default_main_program() <https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api_cn/fluid_cn/default_main_program_cn.html#default-main-program>`_ is used. Default: None

**Returns:** the l2_loss computed from teacher_var and student_var
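
Conceptually, the l2 loss is the mean squared difference between the teacher and student feature maps. A minimal NumPy sketch (for illustration only, not PaddleSlim's implementation):

.. code-block:: python

   import numpy as np

   def l2_loss_np(teacher_var, student_var):
       # mean squared difference between teacher and student features
       return np.mean(np.square(teacher_var - student_var))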

**Usage example:**

.. code-block:: python

   import paddle.fluid as fluid
   import paddleslim.dist as dist
   student_program = fluid.Program()
   with fluid.program_guard(student_program):
       x = fluid.layers.data(name='x', shape=[1, 28, 28])
       conv = fluid.layers.conv2d(x, 32, 1, name='s1')
       out = fluid.layers.conv2d(conv, 64, 3, padding=1, name='s2')
   teacher_program = fluid.Program()
   with fluid.program_guard(teacher_program):
       y = fluid.layers.data(name='y', shape=[1, 28, 28])
       conv = fluid.layers.conv2d(y, 32, 1, name='t1')
       conv = fluid.layers.conv2d(conv, 32, 3, padding=1)
       out = fluid.layers.conv2d(conv, 64, 3, padding=1, name='t2')
   data_name_map = {'y':'x'}
   USE_GPU = False
   place = fluid.CUDAPlace(0) if USE_GPU else fluid.CPUPlace()
   dist.merge(teacher_program, student_program, data_name_map, place)
   with fluid.program_guard(student_program):
       distillation_loss = dist.l2_loss('teacher_t2.tmp_1', 's2.tmp_1',
                                        student_program)



soft_label_loss
-------------------

.. py:function:: paddleslim.dist.soft_label_loss(teacher_var_name, student_var_name, program=None, teacher_temperature=1., student_temperature=1.)

`Source code <https://github.com/PaddlePaddle/PaddleSlim/blob/develop/paddleslim/dist/single_distiller.py#L136>`_

soft_label_loss adds a soft label loss between a teacher var and a student var in the program, following the paper `Distilling the Knowledge in a Neural Network <https://arxiv.org/pdf/1503.02531.pdf>`_.

**Parameters:**

- **teacher_var_name** (str): name of the teacher_var.
- **student_var_name** (str): name of the student_var.
- **program** (Program): the fluid program used for distillation training; if not specified, `fluid.default_main_program() <https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api_cn/fluid_cn/default_main_program_cn.html#default-main-program>`_ is used. Default: None
- **teacher_temperature** (float): temperature used to soften teacher_var; the higher the temperature, the smoother the resulting output
- **student_temperature** (float): temperature used to soften student_var; the higher the temperature, the smoother the resulting output

**Returns:** the soft_label_loss computed from teacher_var and student_var
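
Temperature rescales the logits before the softmax: dividing by a larger temperature flattens the resulting distribution. A NumPy sketch of a temperature-softened soft-label cross entropy (for intuition only, not PaddleSlim's implementation):

.. code-block:: python

   import numpy as np

   def softmax(x, temperature=1.0):
       z = x / temperature
       z = z - z.max(axis=-1, keepdims=True)  # for numerical stability
       e = np.exp(z)
       return e / e.sum(axis=-1, keepdims=True)

   def soft_label_loss_np(teacher_logits, student_logits,
                          teacher_temperature=1.0, student_temperature=1.0):
       t = softmax(teacher_logits, teacher_temperature)  # soft targets
       s = softmax(student_logits, student_temperature)
       # cross entropy between the teacher's soft targets and the
       # student's softened prediction
       return -np.mean(np.sum(t * np.log(s + 1e-10), axis=-1))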

**Usage example:**

.. code-block:: python

   import paddle.fluid as fluid
   import paddleslim.dist as dist
   student_program = fluid.Program()
   with fluid.program_guard(student_program):
       x = fluid.layers.data(name='x', shape=[1, 28, 28])
       conv = fluid.layers.conv2d(x, 32, 1, name='s1')
       out = fluid.layers.conv2d(conv, 64, 3, padding=1, name='s2')
   teacher_program = fluid.Program()
   with fluid.program_guard(teacher_program):
       y = fluid.layers.data(name='y', shape=[1, 28, 28])
       conv = fluid.layers.conv2d(y, 32, 1, name='t1')
       conv = fluid.layers.conv2d(conv, 32, 3, padding=1)
       out = fluid.layers.conv2d(conv, 64, 3, padding=1, name='t2')
   data_name_map = {'y':'x'}
   USE_GPU = False
   place = fluid.CUDAPlace(0) if USE_GPU else fluid.CPUPlace()
   dist.merge(teacher_program, student_program, data_name_map, place)
   with fluid.program_guard(student_program):
       distillation_loss = dist.soft_label_loss('teacher_t2.tmp_1',
                                                's2.tmp_1', student_program, 1., 1.)



loss
--------

.. py:function:: paddleslim.dist.loss(loss_func, program=None, **kwargs) 

`Source code <https://github.com/PaddlePaddle/PaddleSlim/blob/develop/paddleslim/dist/single_distiller.py#L165>`_

The loss function supports applying a custom loss function to any number of teacher_var and student_var pairs.

**Parameters:**

- **loss_func** (python function): the custom loss function; its inputs are teacher vars and student vars, and its output is the custom loss
- **program** (Program): the fluid program used for distillation training; if not specified, `fluid.default_main_program() <https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api_cn/fluid_cn/default_main_program_cn.html#default-main-program>`_ is used. Default: None
- **\**kwargs**: mapping from loss_func input names to the corresponding variable names

**Returns:** the loss computed by the custom loss function

**Usage example:**

.. code-block:: python

   import paddle.fluid as fluid
   import paddleslim.dist as dist
   student_program = fluid.Program()
   with fluid.program_guard(student_program):
       x = fluid.layers.data(name='x', shape=[1, 28, 28])
       conv = fluid.layers.conv2d(x, 32, 1, name='s1')
       out = fluid.layers.conv2d(conv, 64, 3, padding=1, name='s2')
   teacher_program = fluid.Program()
   with fluid.program_guard(teacher_program):
       y = fluid.layers.data(name='y', shape=[1, 28, 28])
       conv = fluid.layers.conv2d(y, 32, 1, name='t1')
       conv = fluid.layers.conv2d(conv, 32, 3, padding=1)
       out = fluid.layers.conv2d(conv, 64, 3, padding=1, name='t2')
   data_name_map = {'y':'x'}
   USE_GPU = False
   place = fluid.CUDAPlace(0) if USE_GPU else fluid.CPUPlace()
   dist.merge(teacher_program, student_program, data_name_map, place)
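   # custom loss: project the student feature to the teacher's channel
   # count, then take the mean squared error against the teacher feature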
   def adaptation_loss(t_var, s_var):
       teacher_channel = t_var.shape[1]
       s_hint = fluid.layers.conv2d(s_var, teacher_channel, 1)
       hint_loss = fluid.layers.reduce_mean(fluid.layers.square(s_hint - t_var))
       return hint_loss
   with fluid.program_guard(student_program):
       distillation_loss = dist.loss(adaptation_loss, student_program,
               t_var='teacher_t2.tmp_1', s_var='s2.tmp_1')

.. note::

    Adding a distillation loss introduces new variables; take care that these new variables do not clash with existing student variable names. Two usages are recommended (either one is sufficient):

    1. Use the same name scope as student_program, so that variables with unspecified names (e.g. tmp_0, tmp_1...) are not defined twice under the same name, which would cause a conflict

    2. Specify a name-scope prefix when adding the distillation loss, as sketched below; for details see the Paddle documentation for `fluid.name_scope <https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/fluid_cn/name_scope_cn.html#name-scope>`_
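
A minimal sketch of option 2, assuming the student_program and variable names from the l2_loss example above:

.. code-block:: python

   import paddle.fluid as fluid
   import paddleslim.dist as dist

   with fluid.program_guard(student_program):
       # group everything added for the distillation loss under its own
       # name scope ('distill' is an arbitrary choice)
       with fluid.name_scope('distill'):
           distillation_loss = dist.l2_loss('teacher_t2.tmp_1', 's2.tmp_1',
                                            student_program)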