Commit c8e905e6 authored by: B baiyfbupt

Deployed 09a1807b with MkDocs version: 1.0.4

Parent 4c0695a7
@@ -125,6 +125,9 @@
<a class="current" href="./">Algorithm Principles</a>
<ul class="subnav">
<li class="toctree-l2"><a href="#_1">Contents</a></li>
<li class="toctree-l2"><a href="#1-quantization-aware-training">1. Introduction to Quantization Aware Training</a></li>
<ul>
@@ -196,7 +199,7 @@
<li>Algorithm Principles</li>
<li class="wy-breadcrumbs-aside">
<a href="https://github.com/PaddlePaddle/PaddleSlim/edit/master/docs/algo/algo.md"
class="icon icon-github"> Edit on GitHub</a>
</li>
@@ -206,7 +209,14 @@
<div role="main">
<div class="section">
<h2 id="_1">Contents<a class="headerlink" href="#_1" title="Permanent link">#</a></h2>
<ul>
<li><a href="#1-quantization-aware-training量化介绍">Introduction to quantization</a></li>
<li><a href="#2-卷积核剪裁原理">Introduction to filter pruning</a></li>
<li><a href="#3-蒸馏">Introduction to distillation</a></li>
<li><a href="#4-轻量级模型结构搜索">Introduction to lightweight neural architecture search</a></li>
</ul>
<h2 id="1-quantization-aware-training">1. Introduction to Quantization Aware Training<a class="headerlink" href="#1-quantization-aware-training" title="Permanent link">#</a></h2>
<h3 id="11">1.1 Background<a class="headerlink" href="#11" title="Permanent link">#</a></h3>
<p>In recent years, fixed-point quantization, which represents a neural network's weights and activations with fewer bits (e.g. 8-bit, 3-bit, or 2-bit), has been shown to be effective. Its advantages include low memory bandwidth, low power consumption, low compute resource usage, and low model storage requirements.</p>
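<p>The quantize/dequantize mapping behind this section can be sketched in a few lines of plain Python (a toy illustration with hypothetical helper names, not PaddleSlim's API): with n quantization levels, a float in [-x_max, x_max] maps to an integer in {-(n-1), …, n-1}, and the product of two quantized values is rescaled by (n-1)·(n-1)·X_m·W_m, matching the Y_dq formula used later in this section.</p>

```python
def quantize(x, x_max, n):
    # map x in [-x_max, x_max] onto the integer grid {-(n-1), ..., n-1}
    return round(x / x_max * (n - 1))

def dequantize(q, x_max, n):
    # inverse mapping back to floats
    return q / (n - 1) * x_max

n = 2 ** 7                       # 8-bit signed quantization levels
x, w = 0.37, -0.52               # a toy activation and weight
x_m, w_m = 1.0, 1.0              # their max absolute values
y_q = quantize(x, x_m, n) * quantize(w, w_m, n)   # integer-only product
# rescale the integer product: Y_dq = Y_q / ((n-1)*(n-1)) * X_m * W_m
y_dq = y_q / ((n - 1) * (n - 1)) * x_m * w_m
print(abs(y_dq - x * w) < 1e-3)  # quantization error is small: True
```

<p>The point of the rescaling is that the expensive multiply runs entirely in integers; only one floating-point correction is applied at the end.</p>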
<p align="center">
@@ -338,7 +348,7 @@ Y_{dq} = \frac{Y_q}{(n - 1) * (n - 1)} * X_m * W_m \
Before pruning a convolution layer, its filters are sorted by l1_norm from high to low; filters nearer the end of the ranking are less important and are pruned first.</p>
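<p>The l1_norm ranking rule can be sketched as follows (a toy stand-in where each filter is a flat list of weights; the helper names are made up for illustration — real filters are 4-D tensors):</p>

```python
def l1_norm(filt):
    # L1 norm of one filter's weights
    return sum(abs(w) for w in filt)

def filters_to_prune(filters, ratio):
    # sort filter indices by L1 norm, high to low; the tail of the
    # ranking is least important, so prune the last `ratio` fraction
    order = sorted(range(len(filters)),
                   key=lambda i: l1_norm(filters[i]), reverse=True)
    n_prune = int(len(filters) * ratio)
    return order[len(order) - n_prune:] if n_prune else []

filters = [[0.9, -0.8], [0.1, 0.05], [0.4, -0.3]]  # three toy filters
print(filters_to_prune(filters, 1 / 3))  # lowest-L1 filter: [1]
```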
<h3 id="23">2.3 Sensitivity-based pruning of convolutional networks<a class="headerlink" href="#23" title="Permanent link">#</a></h3>
<p>Each convolution layer is pruned by a different ratio according to its sensitivity.</p>
<h4 id="_2">Two assumptions<a class="headerlink" href="#_2" title="Permanent link">#</a></h4>
<ul>
<li>Within one convolution layer's parameters, filters are sorted by l1_norm from high to low; filters nearer the end are less important.</li>
<li>When two layers are pruned by the same ratio of filters, the layer whose pruning hurts model accuracy more is said to be more sensitive.</li>
@@ -348,7 +358,7 @@ Y_{dq} = \frac{Y_q}{(n - 1) * (n - 1)} * X_m * W_m \
<li>The pruning ratio of a layer is inversely proportional to its sensitivity</li>
<li>Within a layer, filters with relatively low l1_norm are pruned first</li>
</ul>
<h4 id="_3">Understanding sensitivity<a class="headerlink" href="#_3" title="Permanent link">#</a></h4>
<p align="center">
<img src="https://raw.githubusercontent.com/PaddlePaddle/PaddleSlim/develop/docs/docs/images/algo/pruning_3.png" height=200 width=400 hspace='10'/> <br />
<strong>Figure 7</strong>
@@ -356,7 +366,7 @@ Y_{dq} = \frac{Y_q}{(n - 1) * (n - 1)} * X_m * W_m \
<p>As shown in <strong>Figure 7</strong>, the x-axis is the ratio of filters pruned and the y-axis is the accuracy loss; each colored dashed line is one convolution layer in the network.
Each convolution layer is pruned <strong>individually</strong> at different ratios, its accuracy loss on the validation set is observed, and the result is drawn as the dashed lines in <strong>Figure 7</strong>. A line that rises more slowly indicates a less sensitive layer, and filters of less sensitive layers are pruned first.</p>
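<p>That per-layer sensitivity sweep can be sketched as follows (the <code>eval_acc</code> callback and its toy accuracy model are assumptions for illustration only; a real sweep would retrain-free-evaluate the pruned network on validation data):</p>

```python
def sensitivity_curves(layers, ratios, eval_acc):
    # prune each layer ALONE at every ratio and record the accuracy
    # loss relative to the unpruned baseline (nothing pruned)
    base = eval_acc(None, 0.0)
    return {layer: [base - eval_acc(layer, r) for r in ratios]
            for layer in layers}

# toy accuracy model: "conv2" loses accuracy twice as fast as "conv1",
# i.e. conv2 is the more sensitive layer
def fake_eval(layer, ratio):
    slope = {"conv1": 0.1, "conv2": 0.2}.get(layer, 0.0)
    return 0.90 - slope * ratio

curves = sensitivity_curves(["conv1", "conv2"], [0.25, 0.5], fake_eval)
# conv1's curve rises more slowly, so its filters would be pruned first
```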
<h4 id="_4">Choosing the best combination of pruning ratios<a class="headerlink" href="#_4" title="Permanent link">#</a></h4>
<p>We fit the polylines of <strong>Figure 7</strong> into the curves of <strong>Figure 8</strong>. Picking an accuracy-loss value on the y-axis yields one pruning ratio per layer on the x-axis, shown as the black solid line in <strong>Figure 8</strong>.
Given a user-specified overall pruning ratio for the model, we find a valid set of per-layer ratios that satisfies it by moving the black solid line in <strong>Figure 8</strong>.</p>
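<p>That search can be sketched as follows (hypothetical names; each fitted curve is represented as a plain function from pruning ratio to accuracy loss, and "moving the black line" becomes a binary search over the loss threshold):</p>

```python
def ratios_at_loss(loss_fns, t, grid=101):
    # for each layer, the largest pruning ratio (on a 0..1 grid) whose
    # predicted accuracy loss stays within the threshold t
    out = []
    for f in loss_fns:
        ok = [i / (grid - 1) for i in range(grid) if f(i / (grid - 1)) <= t]
        out.append(max(ok) if ok else 0.0)
    return out

def find_ratios(loss_fns, target_mean_ratio, steps=50):
    # raise the horizontal line (threshold t) until the mean per-layer
    # pruning ratio reaches the requested overall ratio
    lo, hi = 0.0, 1.0
    for _ in range(steps):
        t = (lo + hi) / 2
        if sum(ratios_at_loss(loss_fns, t)) / len(loss_fns) < target_mean_ratio:
            lo = t
        else:
            hi = t
    return ratios_at_loss(loss_fns, hi)

# fitted curves for two layers: loss = slope * ratio; the second
# layer has the steeper curve, i.e. it is more sensitive
curves = [lambda r: 0.1 * r, lambda r: 0.2 * r]
ratios = find_ratios(curves, 0.3)
# the more sensitive layer receives the smaller pruning ratio
```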
<p align="center">
@@ -364,7 +374,7 @@ Y_{dq} = \frac{Y_q}{(n - 1) * (n - 1)} * X_m * W_m \
<strong>Figure 8</strong>
</p>
<h4 id="_5">Iterative pruning<a class="headerlink" href="#_5" title="Permanent link">#</a></h4>
<p>Because convolution layers are correlated, modifying one layer can change the sensitivity of other layers, so we prune in multiple passes, as follows:</p>
<ul>
<li>step1: collect sensitivity statistics for each convolution layer</li>
......
@@ -166,7 +166,7 @@
<li>Model Analysis</li>
<li class="wy-breadcrumbs-aside">
<a href="https://github.com/PaddlePaddle/PaddleSlim/edit/master/docs/api/analysis_api.md"
class="icon icon-github"> Edit on GitHub</a>
</li>
@@ -178,7 +178,7 @@
<h2 id="flops">FLOPs<a class="headerlink" href="#flops" title="Permanent link">#</a></h2>
<dl>
<dt>paddleslim.analysis.flops(program, detail=False) <a href="https://github.com/PaddlePaddle/PaddleSlim/blob/develop/paddleslim/analysis/flops.py">Source</a></dt>
<dd>
<p>Returns the number of floating-point operations (FLOPs) of the given network.</p>
</dd>
@@ -205,68 +205,64 @@
</li>
</ul>
<p><strong>Example:</strong></p>
<div class="highlight"><pre><span></span>import paddle.fluid as fluid
from paddle.fluid.param_attr import ParamAttr
from paddleslim.analysis import flops

def conv_bn_layer(input,
                  num_filters,
                  filter_size,
                  name,
                  stride=1,
                  groups=1,
                  act=None):
    conv = fluid.layers.conv2d(
        input=input,
        num_filters=num_filters,
        filter_size=filter_size,
        stride=stride,
        padding=(filter_size - 1) // 2,
        groups=groups,
        act=None,
        param_attr=ParamAttr(name=name + &quot;_weights&quot;),
        bias_attr=False,
        name=name + &quot;_out&quot;)
    bn_name = name + &quot;_bn&quot;
    return fluid.layers.batch_norm(
        input=conv,
        act=act,
        name=bn_name + &#39;_output&#39;,
        param_attr=ParamAttr(name=bn_name + &#39;_scale&#39;),
        bias_attr=ParamAttr(bn_name + &#39;_offset&#39;),
        moving_mean_name=bn_name + &#39;_mean&#39;,
        moving_variance_name=bn_name + &#39;_variance&#39;, )

main_program = fluid.Program()
startup_program = fluid.Program()
#   X       X              O       X              O
# conv1--&gt;conv2--&gt;sum1--&gt;conv3--&gt;conv4--&gt;sum2--&gt;conv5--&gt;conv6
#     |            ^ |                    ^
#     |____________| |____________________|
#
# X: prune output channels
# O: prune input channels
with fluid.program_guard(main_program, startup_program):
    input = fluid.data(name=&quot;image&quot;, shape=[None, 3, 16, 16])
    conv1 = conv_bn_layer(input, 8, 3, &quot;conv1&quot;)
    conv2 = conv_bn_layer(conv1, 8, 3, &quot;conv2&quot;)
    sum1 = conv1 + conv2
    conv3 = conv_bn_layer(sum1, 8, 3, &quot;conv3&quot;)
    conv4 = conv_bn_layer(conv3, 8, 3, &quot;conv4&quot;)
    sum2 = conv4 + sum1
    conv5 = conv_bn_layer(sum2, 8, 3, &quot;conv5&quot;)
    conv6 = conv_bn_layer(conv5, 8, 3, &quot;conv6&quot;)

print(&quot;FLOPs: {}&quot;.format(flops(main_program)))
</pre></div>
<h2 id="model_size">model_size<a class="headerlink" href="#model_size" title="Permanent link">#</a></h2>
<p>paddleslim.analysis.model_size(program) <a href="https://github.com/PaddlePaddle/PaddleSlim/blob/develop/paddleslim/analysis/model_size.py">Source</a></p>
<p>Returns the number of parameters in the given network.</p>
<p><strong>Parameters:</strong></p>
<ul>
<li><strong>program(paddle.fluid.Program)</strong> - The target network to analyze. For more about Program, see: <a href="https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/fluid_cn/Program_cn.html#program">Introduction to Program</a></li>
@@ -276,56 +272,56 @@
<li><strong>model_size(int)</strong> - The total number of parameters in the network.</li>
</ul>
<p><strong>Example:</strong></p>
<div class="highlight"><pre><span></span>import paddle.fluid as fluid
from paddle.fluid.param_attr import ParamAttr
from paddleslim.analysis import model_size

def conv_layer(input,
               num_filters,
               filter_size,
               name,
               stride=1,
               groups=1,
               act=None):
    conv = fluid.layers.conv2d(
        input=input,
        num_filters=num_filters,
        filter_size=filter_size,
        stride=stride,
        padding=(filter_size - 1) // 2,
        groups=groups,
        act=None,
        param_attr=ParamAttr(name=name + &quot;_weights&quot;),
        bias_attr=False,
        name=name + &quot;_out&quot;)
    return conv

main_program = fluid.Program()
startup_program = fluid.Program()
#   X       X              O       X              O
# conv1--&gt;conv2--&gt;sum1--&gt;conv3--&gt;conv4--&gt;sum2--&gt;conv5--&gt;conv6
#     |            ^ |                    ^
#     |____________| |____________________|
#
# X: prune output channels
# O: prune input channels
with fluid.program_guard(main_program, startup_program):
    input = fluid.data(name=&quot;image&quot;, shape=[None, 3, 16, 16])
    conv1 = conv_layer(input, 8, 3, &quot;conv1&quot;)
    conv2 = conv_layer(conv1, 8, 3, &quot;conv2&quot;)
    sum1 = conv1 + conv2
    conv3 = conv_layer(sum1, 8, 3, &quot;conv3&quot;)
    conv4 = conv_layer(conv3, 8, 3, &quot;conv4&quot;)
    sum2 = conv4 + sum1
    conv5 = conv_layer(sum2, 8, 3, &quot;conv5&quot;)
    conv6 = conv_layer(conv5, 8, 3, &quot;conv6&quot;)

print(&quot;model_size: {}&quot;.format(model_size(main_program)))
</pre></div>
<h2 id="tablelatencyevaluator">TableLatencyEvaluator<a class="headerlink" href="#tablelatencyevaluator" title="Permanent link">#</a></h2>
<dl>
<dt>paddleslim.analysis.TableLatencyEvaluator(table_file, delimiter=",") <a href="https://github.com/PaddlePaddle/PaddleSlim/blob/develop/paddleslim/analysis/latency.py">Source</a></dt>
<dd>
<p>A model latency evaluator based on a hardware latency table.</p>
</dd>
@@ -333,7 +329,7 @@
<p><strong>Parameters:</strong></p>
<ul>
<li>
<p><strong>table_file(str)</strong> - Absolute path of the latency table to use. For the table format, see: <a href="../paddleslim/analysis/table_latency.md">PaddleSlim hardware latency table format</a></p>
</li>
<li>
<p><strong>delimiter(str)</strong> - The delimiter between operator fields in the hardware latency table; defaults to an ASCII comma.</p>
@@ -344,7 +340,7 @@
<li><strong>Evaluator</strong> - An instance of the hardware latency evaluator.</li>
</ul>
<dl>
<dt>paddleslim.analysis.TableLatencyEvaluator.latency(graph) <a href="https://github.com/PaddlePaddle/PaddleSlim/blob/develop/paddleslim/analysis/latency.py">Source</a></dt>
<dd>
<p>Returns the estimated latency of the given network.</p>
</dd>
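<p>The idea behind a table-based latency evaluator can be sketched as follows (toy op signatures and latency values; the real table format and graph traversal are richer than this):</p>

```python
# toy latency table: op signature -> measured latency in ms
TABLE = {
    "conv2d,3x3,stride1": 1.8,
    "conv2d,1x1,stride1": 0.4,
    "batch_norm": 0.1,
}

def estimate_latency(ops, table, delimiter=","):
    # sum per-op latencies looked up by a delimiter-joined signature;
    # an op missing from the table is an error rather than a guess
    total = 0.0
    for op in ops:
        key = delimiter.join(op)
        if key not in table:
            raise KeyError("op not in latency table: " + key)
        total += table[key]
    return total

graph = [("conv2d", "3x3", "stride1"), ("batch_norm",),
         ("conv2d", "1x1", "stride1")]
total_ms = estimate_latency(graph, TABLE)  # total estimated latency in ms
```

<p>Failing loudly on unknown ops matters in practice: a silently skipped operator would make the estimate look faster than the hardware really is.</p>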
......
@@ -150,7 +150,7 @@
<li>PaddleSlim API Documentation Guide</li>
<li class="wy-breadcrumbs-aside">
<a href="https://github.com/PaddlePaddle/PaddleSlim/edit/master/docs/api/api_guide.md"
class="icon icon-github"> Edit on GitHub</a>
</li>
......
@@ -163,7 +163,7 @@
<li>SA Search</li>
<li class="wy-breadcrumbs-aside">
<a href="https://github.com/PaddlePaddle/PaddleSlim/edit/master/docs/api/nas_api.md"
class="icon icon-github"> Edit on GitHub</a>
</li>
@@ -182,16 +182,12 @@
<li><strong>block_num(int|None)</strong> - <code>block_num</code> is the number of blocks in the search space.</li>
<li><strong>block_mask(list|None)</strong> - <code>block_mask</code> is a list of 0s and 1s, where 0 marks a normal block and 1 marks a reduction block. If <code>block_mask</code> is set, it takes precedence and the <code>input_size</code>, <code>output_size</code>, and <code>block_num</code> settings are ignored.</li>
</ul>
<p>Note:<br>
1. A reduction block halves the feature map size; a normal block leaves the feature map size unchanged.<br>
2. <code>input_size</code> and <code>output_size</code> are used to compute the number of reduction blocks in the overall architecture.</p>
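<p>Point 2 of the note can be made concrete: since each reduction block halves the feature map, the number of reduction blocks is log2(input_size / output_size). A small sketch (the helper name is hypothetical):</p>

```python
import math

def num_reduction_blocks(input_size, output_size):
    # each reduction block halves the feature map, so the count is
    # log2(input_size / output_size); sizes must differ by a power of 2
    ratio = input_size / output_size
    n = int(round(math.log2(ratio)))
    if 2 ** n != ratio:
        raise ValueError("sizes must differ by a power of two")
    return n

print(num_reduction_blocks(224, 7))  # 5
```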
<h2 id="sanas">SANAS<a class="headerlink" href="#sanas" title="Permanent link">#</a></h2>
<dl>
<dt>paddleslim.nas.SANAS(configs, server_addr=("", 8881), init_temperature=100, reduce_rate=0.85, search_steps=300, save_checkpoint='./nas_checkpoint', load_checkpoint=None, is_server=True)<a href="https://github.com/PaddlePaddle/PaddleSlim/blob/develop/paddleslim/nas/sa_nas.py#L36">Source</a></dt>
<dd>SANAS (Simulated Annealing Neural Architecture Search) searches for model architectures with a simulated annealing algorithm and is typically used for discrete search tasks.</dd>
</dl>
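<p>The simulated-annealing loop behind SANAS can be sketched as follows (a generic SA skeleton over integer token lists, not SANAS's actual implementation; <code>init_temperature</code> and <code>reduce_rate</code> mirror the constructor arguments above, and the reward function is a toy stand-in for real architecture evaluation):</p>

```python
import math
import random

def sa_search(init_tokens, reward_fn, steps=300,
              init_temperature=100.0, reduce_rate=0.85):
    # simulated annealing: always accept a better candidate, accept a
    # worse one with probability exp(delta / temp), and cool the
    # temperature by reduce_rate each step so the search turns greedy
    random.seed(0)                     # reproducible toy run
    tokens, best = list(init_tokens), list(init_tokens)
    temp = init_temperature
    for _ in range(steps):
        cand = list(tokens)
        cand[random.randrange(len(cand))] = random.randrange(10)  # mutate
        delta = reward_fn(cand) - reward_fn(tokens)
        if delta > 0 or random.random() < math.exp(delta / temp):
            tokens = cand
        if reward_fn(tokens) > reward_fn(best):
            best = list(tokens)        # track the best tokens seen
        temp *= reduce_rate
    return best

# toy reward: prefer token lists close to [7, 7, 7, 7]
best = sa_search([0, 0, 0, 0], lambda t: -sum((x - 7) ** 2 for x in t))
```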
<p><strong>Parameters:</strong></p>
@@ -208,18 +204,16 @@
<p><strong>Returns:</strong>
An instance of the SANAS class</p>
<p><strong>Example:</strong>
<div class="highlight"><pre><span></span>from paddleslim.nas import SANAS
config = [(&#39;MobileNetV2Space&#39;)]
sanas = SANAS(config=config)
</pre></div></p>
<dl>
<dt>paddleslim.nas.SANAS.tokens2arch(tokens)</dt>
<dd>Converts a group of tokens into a concrete network architecture; typically used to turn the best tokens found by the search into a model for final training.</dd>
</dl>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>tokens is a list. The tokens are mapped through the search space into a network architecture, and a given group of tokens corresponds to exactly one architecture.</p>
</div>
<p><strong>Parameters:</strong></p>
<ul>
<li><strong>tokens(list):</strong> - A group of tokens.</li>
<p><strong>Returns:</strong>
A network architecture instance built from the given tokens.</p>
<p><strong>Example:</strong>
<div class="highlight"><pre><span></span>import paddle.fluid as fluid
input = fluid.data(name=&#39;input&#39;, shape=[None, 3, 32, 32], dtype=&#39;float32&#39;)
archs = sanas.tokens2arch(tokens)
for arch in archs:
    output = arch(input)
    input = output
</pre></div></p>
<dl>
<dt>paddleslim.nas.SANAS.next_archs()</dt>
<dd>Gets the next group of candidate model architectures from the controller.</dd>
</dl>
<p><strong>Returns:</strong>
A list of network architecture instances.</p>
<p><strong>Example:</strong>
<div class="highlight"><pre><span></span>import paddle.fluid as fluid
input = fluid.data(name=&#39;input&#39;, shape=[None, 3, 32, 32], dtype=&#39;float32&#39;)
archs = sanas.next_archs()
for arch in archs:
    output = arch(input)
    input = output
</pre></div></p>
<dl>
<dt>paddleslim.nas.SANAS.reward(score)</dt>
<dd>Passes the evaluation score of the current candidate architecture back to the controller, which uses it to decide the next search step.</dd>
</dl>
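How simulated annealing uses the score passed to reward(score) can be sketched with the textbook acceptance rule below. This is an illustrative sketch only: sa_accept, next_temperature, and rng are assumed names, not PaddleSlim APIs, and the controller's exact internals may differ.

```python
import math
import random

def sa_accept(best_score, new_score, temperature, rng=random.random):
    # A better-scoring architecture is always accepted.
    if new_score >= best_score:
        return True
    # A worse one is accepted with probability exp(delta / T), so early
    # (hot) steps explore freely while late (cold) steps mostly exploit.
    return rng() < math.exp((new_score - best_score) / temperature)

def next_temperature(temperature, reduce_rate=0.85):
    # Geometric cooling, mirroring SANAS's init_temperature / reduce_rate.
    return temperature * reduce_rate
```

In a full search loop this rule sits between next_archs() (propose candidates), user-side training and evaluation (produce a score), and reward(score) (feed the score back).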
<li>Pruning and Sensitivity</li>
<h2 id="pruner">Pruner<a class="headerlink" href="#pruner" title="Permanent link">#</a></h2>
<dl>
<dt>paddleslim.prune.Pruner(criterion="l1_norm")<a href="https://github.com/PaddlePaddle/PaddleSlim/blob/develop/paddleslim/prune/pruner.py#L28">[source]</a></dt>
<dd>
<p>Performs one pass of channel pruning on a convolutional network. Pruning a convolution layer's channels means pruning its output channels. A convolution weight has shape <code>[output_channel, input_channel, kernel_size, kernel_size]</code>; pruning the first dimension of this weight reduces the number of output channels.</p>
</dd>
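With the default criterion="l1_norm", output channels are ranked by the L1 norm of their weights and the weakest are pruned first. The pure-Python sketch below illustrates that ranking; l1_norm_prune_indices is a hypothetical helper, and PaddleSlim's exact ranking and tie-breaking may differ.

```python
def l1_norm_prune_indices(weight, ratio):
    # weight: nested list shaped [output_channel, input_channel, k, k]
    def l1(channel):
        # Sum of absolute values over one output channel's filters.
        return sum(abs(v) for filt in channel for row in filt for v in row)

    ranked = sorted(range(len(weight)), key=lambda i: l1(weight[i]))
    num_pruned = int(len(weight) * ratio)
    return sorted(ranked[:num_pruned])

# Four output channels, one input channel, 1x1 kernels:
w = [[[[0.1]]], [[[2.0]]], [[[0.05]]], [[[1.0]]]]
print(l1_norm_prune_indices(w, 0.5))  # → [0, 2], the two weakest channels
```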
</ul>
<p><strong>Returns:</strong> An instance of the Pruner class.</p>
<p><strong>Example:</strong></p>
<div class="highlight"><pre><span></span>from paddleslim.prune import Pruner
pruner = Pruner()
</pre></div>
<dl>
<dt>paddleslim.prune.Pruner.prune(program, scope, params, ratios, place=None, lazy=False, only_graph=False, param_backup=False, param_shape_backup=False)<a href="https://github.com/PaddlePaddle/PaddleSlim/blob/develop/paddleslim/prune/pruner.py#L36">[source]</a></dt>
<dd>
<p>Prunes the weights of a given set of convolution layers in the target network.</p>
</dd>
<p><strong>program(paddle.fluid.Program)</strong> - The target network to prune. For more about Program, see <a href="https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/fluid_cn/Program_cn.html#program">the Program introduction</a>.</p>
</li>
<li>
<p><strong>scope(paddle.fluid.Scope)</strong> - The <code>scope</code> holding the weights to prune. Paddle stores model parameters and runtime variable values in <code>scope</code> instances; the parameter values in the scope are pruned <code>inplace</code>. For more, see <a href="https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/fluid_cn/scope_guard_cn.html#scope-guard">scope_guard</a>.</p>
</li>
<li>
<p><strong>params(list<str>)</strong> - Names of the convolution-layer parameters to prune. All parameter names in a model can be listed as follows:
<div class="highlight"><pre><span></span>for block in program.blocks:
    for param in block.all_parameters():
        print(&quot;param: {}; shape: {}&quot;.format(param.name, param.shape))
</pre></div></p>
</li>
<li>
<p><strong>ratios(list<float>)</strong> - Pruning ratios applied to <code>params</code>, given as a list whose length must match the length of <code>params</code>.</p>
</li>
<li>
<p><strong>place(paddle.fluid.Place)</strong> - The device on which the parameters to prune reside; either <a href="https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/fluid_cn/CUDAPlace_cn.html#cudaplace">CUDAPlace</a> or <a href="https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/fluid_cn/CPUPlace_cn.html#cpuplace">CPUPlace</a>.</p>
</li>
<li>
<p><strong>lazy(bool)</strong> - When <code>lazy</code> is True, pruning is done by zeroing the parameters of the selected channels, and the parameter <code>shape</code> is preserved. When <code>lazy</code> is False, the parameters of the pruned channels are removed, and the parameter <code>shape</code> changes.</p>
</ul>
<p><strong>Example:</strong></p>
<p>Run the example code below on <a href="https://aistudio.baidu.com/aistudio/projectDetail/200786">AIStudio</a>.
<div class="highlight"><pre><span></span>import paddle.fluid as fluid
from paddle.fluid.param_attr import ParamAttr
from paddleslim.prune import Pruner
def conv_bn_layer(input,
                  num_filters,
                  filter_size,
                  name,
                  stride=1,
                  groups=1,
                  act=None):
    conv = fluid.layers.conv2d(
        input=input,
        num_filters=num_filters,
        filter_size=filter_size,
        stride=stride,
        padding=(filter_size - 1) // 2,
        groups=groups,
        act=None,
        param_attr=ParamAttr(name=name + &quot;_weights&quot;),
        bias_attr=False,
        name=name + &quot;_out&quot;)
    bn_name = name + &quot;_bn&quot;
    return fluid.layers.batch_norm(
        input=conv,
        act=act,
        name=bn_name + &#39;_output&#39;,
        param_attr=ParamAttr(name=bn_name + &#39;_scale&#39;),
        bias_attr=ParamAttr(bn_name + &#39;_offset&#39;),
        moving_mean_name=bn_name + &#39;_mean&#39;,
        moving_variance_name=bn_name + &#39;_variance&#39;, )
main_program = fluid.Program()
startup_program = fluid.Program()
#   X       X              O       X              O
# conv1--&gt;conv2--&gt;sum1--&gt;conv3--&gt;conv4--&gt;sum2--&gt;conv5--&gt;conv6
#     |            ^ |                    ^
#     |____________| |____________________|
#
# X: prune output channels
# O: prune input channels
with fluid.program_guard(main_program, startup_program):
    input = fluid.data(name=&quot;image&quot;, shape=[None, 3, 16, 16])
    conv1 = conv_bn_layer(input, 8, 3, &quot;conv1&quot;)
    conv2 = conv_bn_layer(conv1, 8, 3, &quot;conv2&quot;)
    sum1 = conv1 + conv2
    conv3 = conv_bn_layer(sum1, 8, 3, &quot;conv3&quot;)
    conv4 = conv_bn_layer(conv3, 8, 3, &quot;conv4&quot;)
    sum2 = conv4 + sum1
    conv5 = conv_bn_layer(sum2, 8, 3, &quot;conv5&quot;)
    conv6 = conv_bn_layer(conv5, 8, 3, &quot;conv6&quot;)
place = fluid.CPUPlace()
exe = fluid.Executor(place)
scope = fluid.Scope()
exe.run(startup_program, scope=scope)
pruner = Pruner()
main_program, _, _ = pruner.prune(
    main_program,
    scope,
    params=[&quot;conv4_weights&quot;],
    ratios=[0.5],
    place=place,
    lazy=False,
    only_graph=False,
    param_backup=False,
    param_shape_backup=False)
for param in main_program.global_block().all_parameters():
    if &quot;weights&quot; in param.name:
        print(&quot;param name: {}; param shape: {}&quot;.format(param.name, param.shape))
</pre></div></p>
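The effect of the lazy flag of prune() can be sketched on plain nested lists: lazy=True zeroes the selected output channels and keeps the shape, while lazy=False deletes them and shrinks the first dimension. prune_channels is an illustrative helper, not a PaddleSlim API.

```python
def prune_channels(weight, indices, lazy):
    # weight: nested list shaped [out_channels, in_channels, k, k]
    if lazy:
        # lazy=True: shape is preserved; pruned channels become all zeros.
        return [[[[0.0] * len(row) for row in filt] for filt in ch]
                if i in indices else ch
                for i, ch in enumerate(weight)]
    # lazy=False: the channel is deleted, so the first dimension shrinks.
    return [ch for i, ch in enumerate(weight) if i not in indices]

w = [[[[1.0, 1.0]]], [[[2.0, 2.0]]]]            # shape [2, 1, 1, 2]
print(len(prune_channels(w, {0}, lazy=True)))   # shape kept: 2 channels
print(len(prune_channels(w, {0}, lazy=False)))  # channel removed: 1 channel
```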
<hr />
<h2 id="sensitivity">sensitivity<a class="headerlink" href="#sensitivity" title="Permanent link">#</a></h2>
<dl>
<dt>paddleslim.prune.sensitivity(program, place, param_names, eval_func, sensitivities_file=None, pruned_ratios=None) <a href="https://github.com/PaddlePaddle/PaddleSlim/blob/develop/paddleslim/prune/sensitive.py#L34">[source]</a></dt>
<dd>
<p>Computes the sensitivity of every convolution layer in the network. A layer's sensitivity is measured by pruning different ratios of its output channels in turn and evaluating the resulting accuracy loss on the test set. Once the sensitivities are known, per-layer pruning ratios can be chosen by inspection or by other means.</p>
</dd>
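The bookkeeping behind this measurement can be sketched as follows. compute_sensitivity and eval_func are hypothetical stand-ins for the real routine, which prunes each layer with the Pruner, evaluates on the test set, and restores the weights between trials.

```python
def compute_sensitivity(param_names, ratios, eval_func, baseline_acc):
    # For each layer and ratio, record the relative accuracy loss caused
    # by pruning that layer alone; eval_func(name, ratio) stands in for
    # "prune `name` by `ratio`, evaluate, then restore the weights".
    sensitivities = {}
    for name in param_names:
        sensitivities[name] = {}
        for ratio in ratios:
            acc = eval_func(name, ratio)
            sensitivities[name][ratio] = (baseline_acc - acc) / baseline_acc
    return sensitivities
```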
<p><strong>program(paddle.fluid.Program)</strong> - The target network to evaluate. For more about Program, see <a href="https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/fluid_cn/Program_cn.html#program">the Program introduction</a>.</p>
</li>
<li>
<p><strong>place(paddle.fluid.Place)</strong> - The device on which the parameters to analyze reside; either <a href="https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/fluid_cn/CUDAPlace_cn.html#cudaplace">CUDAPlace</a> or <a href="https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/fluid_cn/CPUPlace_cn.html#cpuplace">CPUPlace</a>.</p>
</li>
<li>
<p><strong>param_names(list<str>)</strong> - Names of the convolution-layer parameters to analyze. All parameter names in a model can be listed as follows:</p>
</li>
</ul>
<div class="highlight"><pre><span></span>for block in program.blocks:
    for param in block.all_parameters():
        print(&quot;param: {}; shape: {}&quot;.format(param.name, param.shape))
</pre></div>
<ul>
<ul>
<li><strong>sensitivities(dict)</strong> - A dict holding the sensitivity information, in the format:</li>
</ul>
<div class="highlight"><pre><span></span>{&quot;weight_0&quot;:
   {0.1: 0.22,
    0.2: 0.33
   },
 &quot;weight_1&quot;:
   {0.1: 0.21,
    0.2: 0.4
   }
}
</pre></div>
<p>Here <code>weight_0</code> is the name of a convolution-layer parameter; in sensitivities['weight_0'], each key is a pruning ratio and the corresponding value is the resulting accuracy-loss ratio.</p>
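One simple way to consume this dict (an illustration of usage, not a PaddleSlim API) is to pick, for each layer, the largest pruning ratio whose recorded accuracy loss stays within a budget:

```python
def ratios_under_loss(sensitivities, max_loss):
    # Per layer, choose the largest ratio whose loss <= max_loss
    # (0.0 if no measured ratio qualifies).
    chosen = {}
    for name, curve in sensitivities.items():
        ok = [r for r, loss in curve.items() if loss <= max_loss]
        chosen[name] = max(ok) if ok else 0.0
    return chosen

sens = {"weight_0": {0.1: 0.22, 0.2: 0.33},
        "weight_1": {0.1: 0.21, 0.2: 0.4}}
print(ratios_under_loss(sens, 0.3))  # → {'weight_0': 0.1, 'weight_1': 0.1}
```

The resulting dict maps each parameter name to a ratio and could be handed to Pruner.prune() as the <code>params</code>/<code>ratios</code> pair.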
<p><strong>Example:</strong></p>
<p>Run the example code below on <a href="https://aistudio.baidu.com/aistudio/projectdetail/201401">AIStudio</a>.</p>
<div class="highlight"><pre><span></span>import paddle
import numpy as np
import paddle.fluid as fluid
from paddle.fluid.param_attr import ParamAttr
from paddleslim.prune import sensitivity
import paddle.dataset.mnist as reader
def conv_bn_layer(input,
                  num_filters,
                  filter_size,
                  name,
                  stride=1,
                  groups=1,
                  act=None):
    conv = fluid.layers.conv2d(
        input=input,
        num_filters=num_filters,
        filter_size=filter_size,
        stride=stride,
        padding=(filter_size - 1) // 2,
        groups=groups,
        act=None,
        param_attr=ParamAttr(name=name + &quot;_weights&quot;),
        bias_attr=False,
        name=name + &quot;_out&quot;)
    bn_name = name + &quot;_bn&quot;
    return fluid.layers.batch_norm(
        input=conv,
        act=act,
        name=bn_name + &#39;_output&#39;,
        param_attr=ParamAttr(name=bn_name + &#39;_scale&#39;),
        bias_attr=ParamAttr(bn_name + &#39;_offset&#39;),
<span class="n">moving_mean_name</span><span class="o">=</span><span class="n">bn_name</span> <span class="o">+</span> <span class="s1">&#39;_mean&#39;</span><span class="p">,</span> moving_mean_name=bn_name + &#39;_mean&#39;,
<span class="n">moving_variance_name</span><span class="o">=</span><span class="n">bn_name</span> <span class="o">+</span> <span class="s1">&#39;_variance&#39;</span><span class="p">,</span> <span class="p">)</span> moving_variance_name=bn_name + &#39;_variance&#39;, )
<span class="n">main_program</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">Program</span><span class="p">()</span> main_program = fluid.Program()
<span class="n">startup_program</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">Program</span><span class="p">()</span> startup_program = fluid.Program()
<span class="c1"># X X O X O</span> # X X O X O
<span class="c1"># conv1--&gt;conv2--&gt;sum1--&gt;conv3--&gt;conv4--&gt;sum2--&gt;conv5--&gt;conv6</span> # conv1--&gt;conv2--&gt;sum1--&gt;conv3--&gt;conv4--&gt;sum2--&gt;conv5--&gt;conv6
<span class="c1"># | ^ | ^</span> # | ^ | ^
<span class="c1"># |____________| |____________________|</span> # |____________| |____________________|
<span class="c1">#</span> #
<span class="c1"># X: prune output channels</span> # X: prune output channels
<span class="c1"># O: prune input channels</span> # O: prune input channels
<span class="n">image_shape</span> <span class="o">=</span> <span class="p">[</span><span class="mi">1</span><span class="p">,</span><span class="mi">28</span><span class="p">,</span><span class="mi">28</span><span class="p">]</span> image_shape = [1,28,28]
<span class="k">with</span> <span class="n">fluid</span><span class="o">.</span><span class="n">program_guard</span><span class="p">(</span><span class="n">main_program</span><span class="p">,</span> <span class="n">startup_program</span><span class="p">):</span> with fluid.program_guard(main_program, startup_program):
<span class="n">image</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">data</span><span class="p">(</span><span class="n">name</span><span class="o">=</span><span class="s1">&#39;image&#39;</span><span class="p">,</span> <span class="kp">shape</span><span class="o">=</span><span class="p">[</span><span class="bp">None</span><span class="p">]</span><span class="o">+</span><span class="n">image_shape</span><span class="p">,</span> <span class="kp">dtype</span><span class="o">=</span><span class="s1">&#39;float32&#39;</span><span class="p">)</span> image = fluid.data(name=&#39;image&#39;, shape=[None]+image_shape, dtype=&#39;float32&#39;)
<span class="n">label</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">data</span><span class="p">(</span><span class="n">name</span><span class="o">=</span><span class="s1">&#39;label&#39;</span><span class="p">,</span> <span class="kp">shape</span><span class="o">=</span><span class="p">[</span><span class="bp">None</span><span class="p">,</span> <span class="mi">1</span><span class="p">],</span> <span class="kp">dtype</span><span class="o">=</span><span class="s1">&#39;int64&#39;</span><span class="p">)</span> label = fluid.data(name=&#39;label&#39;, shape=[None, 1], dtype=&#39;int64&#39;)
<span class="n">conv1</span> <span class="o">=</span> <span class="n">conv_bn_layer</span><span class="p">(</span><span class="n">image</span><span class="p">,</span> <span class="mi">8</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="s2">&quot;conv1&quot;</span><span class="p">)</span> conv1 = conv_bn_layer(image, 8, 3, &quot;conv1&quot;)
<span class="n">conv2</span> <span class="o">=</span> <span class="n">conv_bn_layer</span><span class="p">(</span><span class="n">conv1</span><span class="p">,</span> <span class="mi">8</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="s2">&quot;conv2&quot;</span><span class="p">)</span> conv2 = conv_bn_layer(conv1, 8, 3, &quot;conv2&quot;)
<span class="n">sum1</span> <span class="o">=</span> <span class="n">conv1</span> <span class="o">+</span> <span class="n">conv2</span> sum1 = conv1 + conv2
<span class="n">conv3</span> <span class="o">=</span> <span class="n">conv_bn_layer</span><span class="p">(</span><span class="n">sum1</span><span class="p">,</span> <span class="mi">8</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="s2">&quot;conv3&quot;</span><span class="p">)</span> conv3 = conv_bn_layer(sum1, 8, 3, &quot;conv3&quot;)
<span class="n">conv4</span> <span class="o">=</span> <span class="n">conv_bn_layer</span><span class="p">(</span><span class="n">conv3</span><span class="p">,</span> <span class="mi">8</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="s2">&quot;conv4&quot;</span><span class="p">)</span> conv4 = conv_bn_layer(conv3, 8, 3, &quot;conv4&quot;)
<span class="n">sum2</span> <span class="o">=</span> <span class="n">conv4</span> <span class="o">+</span> <span class="n">sum1</span> sum2 = conv4 + sum1
<span class="n">conv5</span> <span class="o">=</span> <span class="n">conv_bn_layer</span><span class="p">(</span><span class="n">sum2</span><span class="p">,</span> <span class="mi">8</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="s2">&quot;conv5&quot;</span><span class="p">)</span> conv5 = conv_bn_layer(sum2, 8, 3, &quot;conv5&quot;)
<span class="n">conv6</span> <span class="o">=</span> <span class="n">conv_bn_layer</span><span class="p">(</span><span class="n">conv5</span><span class="p">,</span> <span class="mi">8</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="s2">&quot;conv6&quot;</span><span class="p">)</span> conv6 = conv_bn_layer(conv5, 8, 3, &quot;conv6&quot;)
<span class="n">out</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">fc</span><span class="p">(</span><span class="n">conv6</span><span class="p">,</span> <span class="kp">size</span><span class="o">=</span><span class="mi">10</span><span class="p">,</span> <span class="n">act</span><span class="o">=</span><span class="s2">&quot;softmax&quot;</span><span class="p">)</span> out = fluid.layers.fc(conv6, size=10, act=&quot;softmax&quot;)
<span class="c1"># cost = fluid.layers.cross_entropy(input=out, label=label)</span> # cost = fluid.layers.cross_entropy(input=out, label=label)
<span class="c1"># avg_cost = fluid.layers.mean(x=cost)</span> # avg_cost = fluid.layers.mean(x=cost)
<span class="n">acc_top1</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">accuracy</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">out</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="n">label</span><span class="p">,</span> <span class="n">k</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span> acc_top1 = fluid.layers.accuracy(input=out, label=label, k=1)
<span class="c1"># acc_top5 = fluid.layers.accuracy(input=out, label=label, k=5)</span> # acc_top5 = fluid.layers.accuracy(input=out, label=label, k=5)
<span class="kp">place</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">CPUPlace</span><span class="p">()</span> place = fluid.CPUPlace()
<span class="n">exe</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">Executor</span><span class="p">(</span><span class="kp">place</span><span class="p">)</span> exe = fluid.Executor(place)
<span class="n">exe</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">startup_program</span><span class="p">)</span> exe.run(startup_program)
<span class="n">val_reader</span> <span class="o">=</span> <span class="n">paddle</span><span class="o">.</span><span class="n">batch</span><span class="p">(</span><span class="n">reader</span><span class="o">.</span><span class="kp">test</span><span class="p">(),</span> <span class="n">batch_size</span><span class="o">=</span><span class="mi">128</span><span class="p">)</span> val_reader = paddle.batch(reader.test(), batch_size=128)
<span class="n">val_feeder</span> <span class="o">=</span> <span class="n">feeder</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">DataFeeder</span><span class="p">(</span> val_feeder = feeder = fluid.DataFeeder(
<span class="p">[</span><span class="n">image</span><span class="p">,</span> <span class="n">label</span><span class="p">],</span> <span class="kp">place</span><span class="p">,</span> <span class="n">program</span><span class="o">=</span><span class="n">main_program</span><span class="p">)</span> [image, label], place, program=main_program)
<span class="k">def</span> <span class="nf">eval_func</span><span class="p">(</span><span class="n">program</span><span class="p">):</span> def eval_func(program):
<span class="n">acc_top1_ns</span> <span class="o">=</span> <span class="p">[]</span> acc_top1_ns = []
<span class="k">for</span> <span class="n">data</span> <span class="ow">in</span> <span class="n">val_reader</span><span class="p">():</span> for data in val_reader():
<span class="n">acc_top1_n</span> <span class="o">=</span> <span class="n">exe</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">program</span><span class="p">,</span> acc_top1_n = exe.run(program,
<span class="n">feed</span><span class="o">=</span><span class="n">val_feeder</span><span class="o">.</span><span class="n">feed</span><span class="p">(</span><span class="n">data</span><span class="p">),</span> feed=val_feeder.feed(data),
<span class="n">fetch_list</span><span class="o">=</span><span class="p">[</span><span class="n">acc_top1</span><span class="o">.</span><span class="n">name</span><span class="p">])</span> fetch_list=[acc_top1.name])
<span class="n">acc_top1_ns</span><span class="o">.</span><span class="kp">append</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="kp">mean</span><span class="p">(</span><span class="n">acc_top1_n</span><span class="p">))</span> acc_top1_ns.append(np.mean(acc_top1_n))
<span class="k">return</span> <span class="n">np</span><span class="o">.</span><span class="kp">mean</span><span class="p">(</span><span class="n">acc_top1_ns</span><span class="p">)</span> return np.mean(acc_top1_ns)
<span class="n">param_names</span> <span class="o">=</span> <span class="p">[]</span> param_names = []
<span class="k">for</span> <span class="n">param</span> <span class="ow">in</span> <span class="n">main_program</span><span class="o">.</span><span class="n">global_block</span><span class="p">()</span><span class="o">.</span><span class="n">all_parameters</span><span class="p">():</span> for param in main_program.global_block().all_parameters():
<span class="k">if</span> <span class="s2">&quot;weights&quot;</span> <span class="ow">in</span> <span class="n">param</span><span class="o">.</span><span class="n">name</span><span class="p">:</span> if &quot;weights&quot; in param.name:
<span class="n">param_names</span><span class="o">.</span><span class="kp">append</span><span class="p">(</span><span class="n">param</span><span class="o">.</span><span class="n">name</span><span class="p">)</span> param_names.append(param.name)
<span class="n">sensitivities</span> <span class="o">=</span> <span class="n">sensitivity</span><span class="p">(</span><span class="n">main_program</span><span class="p">,</span> sensitivities = sensitivity(main_program,
<span class="kp">place</span><span class="p">,</span> place,
<span class="n">param_names</span><span class="p">,</span> param_names,
<span class="n">eval_func</span><span class="p">,</span> eval_func,
<span class="n">sensitivities_file</span><span class="o">=</span><span class="s2">&quot;./sensitive.data&quot;</span><span class="p">,</span> sensitivities_file=&quot;./sensitive.data&quot;,
<span class="n">pruned_ratios</span><span class="o">=</span><span class="p">[</span><span class="mf">0.1</span><span class="p">,</span> <span class="mf">0.2</span><span class="p">,</span> <span class="mf">0.3</span><span class="p">])</span> pruned_ratios=[0.1, 0.2, 0.3])
<span class="k">print</span><span class="p">(</span><span class="n">sensitivities</span><span class="p">)</span> print(sensitivities)
</pre></div> </pre></div>
<h2 id="merge_sensitive">merge_sensitive<a class="headerlink" href="#merge_sensitive" title="Permanent link">#</a></h2>
<dl> <dl>
<dt>paddleslim.prune.merge_sensitive(sensitivities)<a href="https://github.com/PaddlePaddle/PaddleSlim/blob/develop/paddleslim/prune/sensitive.py#L161">source code</a></dt>
<dd>
<p>Merge multiple sets of sensitivity information.</p>
</dd>
<ul>
<li><strong>sensitivities(dict)</strong> - The merged sensitivity information, in the following format:</li>
</ul>
<div class="highlight"><pre><span></span>{"weight_0":
    {0.1: 0.22,
     0.2: 0.33
    },
 "weight_1":
    {0.1: 0.21,
     0.2: 0.4
    }
}
</pre></div>
<p>Here, <code>weight_0</code> is the name of a convolution layer's parameter; in sensitivities['weight_0'], each <code>key</code> is a pruning ratio and the corresponding <code>value</code> is the resulting accuracy-loss ratio.</p>
<p>Example:</p>
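<p>A minimal, PaddleSlim-free sketch of what merging looks like on the documented <code>{param_name: {ratio: loss}}</code> format. The union strategy below is an assumption for illustration only; the authoritative behavior is <code>paddleslim.prune.merge_sensitive</code>.</p>

```python
# Illustrative sketch: merge sensitivity dicts in the documented
# {param_name: {pruned_ratio: acc_loss}} format. The union strategy is an
# assumption; the real logic lives in paddleslim.prune.merge_sensitive.
def merge_sensitive_sketch(sensitivities_list):
    merged = {}
    for sens in sensitivities_list:
        for param, ratio_loss in sens.items():
            # Later entries fill in ratios missing from earlier ones.
            merged.setdefault(param, {}).update(ratio_loss)
    return merged

# Two partial results, e.g. from evaluating different pruned_ratios in parallel.
part_a = {"weight_0": {0.1: 0.22}, "weight_1": {0.1: 0.21}}
part_b = {"weight_0": {0.2: 0.33}, "weight_1": {0.2: 0.4}}
merged = merge_sensitive_sketch([part_a, part_b])
print(merged["weight_0"])  # {0.1: 0.22, 0.2: 0.33}
```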
<h2 id="load_sensitivities">load_sensitivities<a class="headerlink" href="#load_sensitivities" title="Permanent link">#</a></h2>
<dl> <dl>
<dt>paddleslim.prune.load_sensitivities(sensitivities_file)<a href="https://github.com/PaddlePaddle/PaddleSlim/blob/develop/paddleslim/prune/sensitive.py#L184">source code</a></dt>
<dd>
<p>Load sensitivity information from a file.</p>
</dd>
<p>Example:</p>
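<p>A round-trip sketch of persisting a sensitivities dict and reading it back. It assumes a pickled-dict file format purely for illustration (that is an implementation detail); in practice, pass the file produced by <code>sensitivity(..., sensitivities_file=...)</code> to <code>paddleslim.prune.load_sensitivities</code>.</p>

```python
import os
import pickle
import tempfile

# Save a sensitivities dict in the documented format, then load it back.
# The pickle format here is an assumption for illustration only.
sens = {"conv1_weights": {0.1: 0.05, 0.2: 0.12}}
path = os.path.join(tempfile.mkdtemp(), "sensitive.data")
with open(path, "wb") as f:
    pickle.dump(sens, f)
with open(path, "rb") as f:
    loaded = pickle.load(f)
print(loaded == sens)  # True
```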
<h2 id="get_ratios_by_loss">get_ratios_by_loss<a class="headerlink" href="#get_ratios_by_loss" title="Permanent link">#</a></h2>
<dl> <dl>
<dt>paddleslim.prune.get_ratios_by_loss(sensitivities, loss)<a href="https://github.com/PaddlePaddle/PaddleSlim/blob/develop/paddleslim/prune/sensitive.py#L206">source code</a></dt>
<dd>
<p>Compute a set of pruning ratios from the sensitivity information and an accuracy-loss threshold. For a parameter <code>w</code>, the chosen ratio is the largest pruning ratio that keeps the accuracy loss below <code>loss</code>.</p>
</dd>
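<p>The selection rule just described can be sketched in plain Python. This is illustrative only; the real <code>paddleslim.prune.get_ratios_by_loss</code> may behave differently between the evaluated ratios (e.g. interpolate).</p>

```python
# Sketch of the documented rule: for each parameter, pick the largest
# pruning ratio whose accuracy loss stays below the threshold.
def get_ratios_by_loss_sketch(sensitivities, loss):
    ratios = {}
    for param, ratio_loss in sensitivities.items():
        feasible = [r for r, l in ratio_loss.items() if l < loss]
        if feasible:
            ratios[param] = max(feasible)
    return ratios

sens = {"weight_0": {0.1: 0.22, 0.2: 0.33},
        "weight_1": {0.1: 0.21, 0.2: 0.4}}
print(get_ratios_by_loss_sketch(sens, 0.3))  # {'weight_0': 0.1, 'weight_1': 0.1}
```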
<h2 id="_1">Quantization configuration<a class="headerlink" href="#_1" title="Permanent link">#</a></h2>
<p>Quantization parameters are configured through a dict:</p>
<div class="highlight"><pre><span></span>TENSORRT_OP_TYPES = [
    'mul', 'conv2d', 'pool2d', 'depthwise_conv2d', 'elementwise_add',
    'leaky_relu'
]
TRANSFORM_PASS_OP_TYPES = ['conv2d', 'depthwise_conv2d', 'mul']

QUANT_DEQUANT_PASS_OP_TYPES = [
    "pool2d", "elementwise_add", "concat", "softmax", "argmax", "transpose",
    "equal", "gather", "greater_equal", "greater_than", "less_equal",
    "less_than", "mean", "not_equal", "reshape", "reshape2",
    "bilinear_interp", "nearest_interp", "trilinear_interp", "slice",
    "squeeze", "elementwise_sub", "relu", "relu6", "leaky_relu", "tanh", "swish"
]

_quant_config_default = {
    # weight quantize type, default is 'channel_wise_abs_max'
    'weight_quantize_type': 'channel_wise_abs_max',
    # activation quantize type, default is 'moving_average_abs_max'
    'activation_quantize_type': 'moving_average_abs_max',
    # weight quantize bit num, default is 8
    'weight_bits': 8,
    # activation quantize bit num, default is 8
    'activation_bits': 8,
    # ops of name_scope in not_quant_pattern list, will not be quantized
    'not_quant_pattern': ['skip_quant'],
    # ops of type in quantize_op_types, will be quantized
    'quantize_op_types': ['conv2d', 'depthwise_conv2d', 'mul'],
    # data type after quantization, such as 'uint8', 'int8', etc. default is 'int8'
    'dtype': 'int8',
    # window size for 'range_abs_max' quantization. default is 10000
    'window_size': 10000,
    # the decay coefficient of moving average, default is 0.9
    'moving_rate': 0.9,
    # if True, 'quantize_op_types' will be TENSORRT_OP_TYPES
    'for_tensorrt': False,
    # if True, 'quantize_op_types' will be TRANSFORM_PASS_OP_TYPES + QUANT_DEQUANT_PASS_OP_TYPES
    'is_full_quantize': False
}
</pre></div>
<p><strong>Parameters:</strong></p>
<ul>
<li><strong>weight_quantize_type(str)</strong> - Weight quantization type. Options: <code>'abs_max'</code>, <code>'channel_wise_abs_max'</code>, <code>'range_abs_max'</code>, <code>'moving_average_abs_max'</code>. If the quantized model will be loaded with <code>TensorRT</code> for inference, use <code>'channel_wise_abs_max'</code>. Default: <code>'channel_wise_abs_max'</code>.</li>
<li><strong>activation_quantize_type(str)</strong> - Activation quantization type. Options: <code>'abs_max'</code>, <code>'range_abs_max'</code>, <code>'moving_average_abs_max'</code>. If the quantized model will be loaded with <code>TensorRT</code> for inference, use <code>'range_abs_max'</code> or <code>'moving_average_abs_max'</code>. Default: <code>'moving_average_abs_max'</code>.</li>
<li><strong>weight_bits(int)</strong> - Number of bits for weight quantization. Default: 8; 8 is recommended.</li>
<li><strong>activation_bits(int)</strong> - Number of bits for activation quantization. Default: 8; 8 is recommended.</li>
<li><strong>not_quant_pattern(str | list[str])</strong> - Any <code>op</code> whose <code>name_scope</code> contains a string in <code>'not_quant_pattern'</code> is left unquantized. See <a href="https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/fluid_cn/name_scope_cn.html#name-scope"><em>fluid.name_scope</em></a> for how to set it.</li>
<li><strong>quantize_op_types(list[str])</strong> - Op types to be quantized. Default: <code>['conv2d', 'depthwise_conv2d', 'mul']</code>.</li>
<li><strong>dtype(int8)</strong> - Parameter type after quantization. Default: <code>int8</code>; currently only <code>int8</code> is supported.</li>
<li><strong>window_size(int)</strong> - Window size for <code>'range_abs_max'</code> quantization. Default: 10000.</li>
<li><strong>moving_rate(int)</strong> - Decay coefficient for <code>'moving_average_abs_max'</code> quantization. Default: 0.9.</li>
<li><strong>for_tensorrt(bool)</strong> - Whether the quantized model will be run with <code>TensorRT</code>. If <code>True</code>, the quantized op types are <code>TENSORRT_OP_TYPES</code>. Default: <code>False</code>.</li>
<li><strong>is_full_quantize(bool)</strong> - Whether to quantize all supported op types. Default: <code>False</code>.</li>
</ul>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>Currently, <code>Paddle-Lite</code> only has accelerating int8 kernels for <code>['conv2d', 'depthwise_conv2d', 'mul']</code>; int8 kernels for other ops will be supported gradually.</p>
</div>
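<p>How <code>for_tensorrt</code> and <code>is_full_quantize</code> interact with <code>quantize_op_types</code>, as described in the config comments above, can be sketched as follows (illustrative only; see the <code>paddleslim.quant</code> sources for the actual logic):</p>

```python
# Sketch of how the final op-type list is chosen, following the comments
# in the config dict above. The op lists are abbreviated copies of the
# constants shown in the configuration block.
TENSORRT_OP_TYPES = ['mul', 'conv2d', 'pool2d', 'depthwise_conv2d',
                     'elementwise_add', 'leaky_relu']
TRANSFORM_PASS_OP_TYPES = ['conv2d', 'depthwise_conv2d', 'mul']
QUANT_DEQUANT_PASS_OP_TYPES = ['pool2d', 'elementwise_add', 'concat', 'softmax']  # abbreviated

def resolve_quantize_op_types(config):
    if config.get('for_tensorrt'):
        return TENSORRT_OP_TYPES
    if config.get('is_full_quantize'):
        return TRANSFORM_PASS_OP_TYPES + QUANT_DEQUANT_PASS_OP_TYPES
    return config['quantize_op_types']

cfg = {'quantize_op_types': ['conv2d', 'depthwise_conv2d', 'mul'],
       'for_tensorrt': False, 'is_full_quantize': False}
print(resolve_quantize_op_types(cfg))  # ['conv2d', 'depthwise_conv2d', 'mul']
```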
<h2 id="quant_aware">quant_aware<a class="headerlink" href="#quant_aware" title="Permanent link">#</a></h2>
<dl> <dl>
</ul> </ul>
<div class="admonition note">
<p class="admonition-title">Note</p>
<ul>
<li>This API changes the structure of the <code>program</code> and may add some <code>persistable</code> variables, so make sure the loaded model parameters match the corresponding <code>program</code>.</li>
<li>Internally this API goes through the conversion <code>fluid.Program</code> -&gt; <code>fluid.framework.IrGraph</code> -&gt; <code>fluid.Program</code>. <code>fluid.framework.IrGraph</code> has no notion of <code>Parameter</code>; a <code>Variable</code> is only either <code>persistable</code> or <code>not persistable</code>. Therefore, use the <code>fluid.io.save_persistables</code> and <code>fluid.io.load_persistables</code> APIs to save and load parameters.</li>
<li>Because this API inserts ops into the <code>program</code> according to its structure and the quantization config, some <code>Paddle</code> strategies that speed up training via <code>fuse op</code> cannot be used. The following options are known to require <code>False</code> when quantization is enabled: <code>fuse_all_reduce_ops</code>, <code>sync_batch_norm</code>.</li>
<li>Any <code>Variable</code> in the given <code>program</code> that is not connected to any op will be optimized away during quantization.</li>
</ul>
</div>
<h2 id="convert">convert<a class="headerlink" href="#convert" title="Permanent link">#</a></h2>
<dl> <dl>
<dt>paddleslim.quant.convert(program, place, config, scope=None, save_int8=False)<a href="https://github.com/PaddlePaddle/PaddleSlim/blob/develop/paddleslim/quant/quanter.py">[source code]</a></dt>
</ul> </ul>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>Because this API deletes and modifies <code>op</code>s and <code>Variable</code>s accordingly, it can only be called after training has finished. To convert an intermediate model saved during training, load the corresponding parameters first and then call this API.</p>
</div>
<p><strong>Code example</strong></p>
<div class="codehilite"><pre><span></span><span class="c1">#encoding=utf8</span> <div class="highlight"><pre><span></span><span class="c1">#encoding=utf8</span>
<span class="kn">import</span> <span class="nn">paddle.fluid</span> <span class="kn">as</span> <span class="nn">fluid</span> <span class="kn">import</span> <span class="nn">paddle.fluid</span> <span class="kn">as</span> <span class="nn">fluid</span>
<span class="kn">import</span> <span class="nn">paddleslim.quant</span> <span class="kn">as</span> <span class="nn">quant</span> <span class="kn">import</span> <span class="nn">paddleslim.quant</span> <span class="kn">as</span> <span class="nn">quant</span>
<p>For more detailed usage, see the <a href='https://github.com/PaddlePaddle/PaddleSlim/tree/develop/demo/quant/quant_aware'>quantization-aware training demo</a>.</p>
<h2 id="quant_post">quant_post<a class="headerlink" href="#quant_post" title="Permanent link">#</a></h2>
<dl>
<dt>paddleslim.quant.quant_post(executor, model_dir, quantize_model_path, sample_generator, model_filename=None, params_filename=None, batch_size=16, batch_nums=None, scope=None, algo='KL', quantizable_op_type=["conv2d", "depthwise_conv2d", "mul"], is_full_quantize=False, is_use_cache_file=False, cache_dir="./temp_post_training")<a href="https://github.com/PaddlePaddle/PaddleSlim/blob/develop/paddleslim/quant/quanter.py">[source code]</a></dt>
<dd>
<p>Quantizes the model saved under <code>${model_dir}</code>, using the data produced by <code>sample_generator</code> for calibration.</p>
</dd>
<li><strong>scope(fluid.Scope, optional)</strong> - Scope used to read and write <code>Variable</code>s; if set to <code>None</code>, <a href="https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api_cn/executor_cn/global_scope_cn.html"><em>fluid.global_scope()</em></a> is used. Default: <code>None</code>.</li>
<li><strong>algo(str)</strong> - Name of the calibration algorithm; either <code>'KL'</code> or <code>'direct'</code>. This parameter only affects the quantization of activations, because weights are always quantized with <code>'channel_wise_abs_max'</code>. With <code>'direct'</code>, the maximum absolute activation value over the calibration data is used as the <code>Scale</code>; with <code>'KL'</code>, the <code>Scale</code> is computed using <code>KL</code> divergence. Default: <code>'KL'</code>.</li>
<li><strong>quantizable_op_type(list[str])</strong> - List of <code>op</code> types to quantize. Default: <code>["conv2d", "depthwise_conv2d", "mul"]</code>.</li>
<li><strong>is_full_quantize(bool)</strong> - Whether to quantize all supported op types. If set to False, quantization follows the <code>'quantizable_op_type'</code> setting.</li>
<li><strong>is_use_cache_file(bool)</strong> - Whether to store intermediate results on disk. If False, intermediate results are kept in memory.</li>
<li><strong>cache_dir(str)</strong> - If <code>'is_use_cache_file'</code> is True, intermediate results are stored under this path.</li>
</ul>
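<p>As a rough illustration of the two calibration modes (a plain-Python sketch, not PaddleSlim's implementation): <code>'direct'</code> simply takes the maximum absolute activation over the calibration data as the scale, which then maps floats onto the signed 8-bit range.</p>

```python
# Illustrative sketch of 'direct' calibration; not PaddleSlim's implementation.
def direct_scale(activations):
    # 'direct': the scale is the max absolute value seen in the calibration data.
    return max(abs(a) for a in activations)

def quantize_int8(x, scale):
    # Map a float to a signed 8-bit integer using the calibrated scale.
    q = round(x / scale * 127)
    return max(-127, min(127, q))

acts = [0.5, -1.25, 3.0, -2.0]
scale = direct_scale(acts)   # 3.0
q = quantize_int8(1.5, scale)
```

With <code>'KL'</code>, the scale is instead chosen as the clipping threshold that minimizes the KL divergence between the original and quantized activation distributions, which is why it is slower but often more accurate for long-tailed activations.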
<p><strong>Returns</strong></p>
<p>None.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<ul>
<li>Because this API collects all activation values over the calibration data, set <code>'is_use_cache_file'</code> to True so that intermediate results are stored on disk when many calibration images are used. Also note that computing the <code>'KL'</code> divergence is fairly time-consuming.</li>
<li>Currently <code>Paddle-Lite</code> only provides int8 kernels to accelerate <code>['conv2d', 'depthwise_conv2d', 'mul']</code>; int8 kernels for other ops will be supported over time.</li>
</ul>
</div>
<p><strong>Code example</strong></p>
<blockquote>
<p>Note: this example cannot be run directly, because it needs to load the model saved under <code>${model_dir}</code>.</p>
</blockquote>
<p><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">paddle.fluid</span> <span class="kn">as</span> <span class="nn">fluid</span>
<span class="kn">import</span> <span class="nn">paddle.dataset.mnist</span> <span class="kn">as</span> <span class="nn">reader</span>
<span class="kn">from</span> <span class="nn">paddleslim.quant</span> <span class="kn">import</span> <span class="n">quant_post</span>
<span class="n">val_reader</span> <span class="o">=</span> <span class="n">reader</span><span class="o">.</span><span class="n">train</span><span class="p">()</span>
<p><strong>Return type</strong></p>
<p><code>fluid.Program</code></p>
<p><strong>Code example</strong>
<div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">paddle.fluid</span> <span class="kn">as</span> <span class="nn">fluid</span>
<span class="kn">import</span> <span class="nn">paddleslim.quant</span> <span class="kn">as</span> <span class="nn">quant</span>
<span class="n">train_program</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">Program</span><span class="p">()</span>
......
...@@ -172,7 +172,7 @@ ...@@ -172,7 +172,7 @@
<li>Knowledge Distillation</li>
<h2 id="merge">merge<a class="headerlink" href="#merge" title="Permanent link">#</a></h2>
<dl>
<dt>paddleslim.dist.merge(teacher_program, student_program, data_name_map, place, scope=fluid.global_scope(), name_prefix='teacher_') <a href="https://github.com/PaddlePaddle/PaddleSlim/blob/develop/paddleslim/dist/single_distiller.py#L19">[source code]</a></dt>
<dd>
<p>merge fuses the teacher_program into the student_program. In the merged program, distillation loss functions can be added between suitable teacher and student feature maps, so that the teacher model's dark knowledge (Dark Knowledge) guides the learning of the student model.</p>
</dd>
</dl>
<p><strong>Parameters:</strong></p>
<li><strong>scope</strong>(Scope) - The variable scope used by the program; if not specified, the default global scope is used. Default: <a href="https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/fluid_cn/global_scope_cn.html#global-scope"><em>fluid.global_scope()</em></a></li>
<li><strong>name_prefix</strong>(str) - The name prefix that merge uniformly adds to the teacher's <a href="https://www.paddlepaddle.org.cn/documentation/docs/zh/1.3/api_guides/low_level/program.html#variable"><em>Variables</em></a>. Default: 'teacher_'</li>
</ul>
<p><strong>Returns:</strong> None.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p><em>data_name_map</em> is a mapping <strong>from teacher_var names to student_var names</strong>; if it is written the other way around, the merge may not be performed correctly.</p>
</div>
<p><strong>Usage example:</strong></p>
<div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">paddle.fluid</span> <span class="kn">as</span> <span class="nn">fluid</span>
<span class="kn">import</span> <span class="nn">paddleslim.dist</span> <span class="kn">as</span> <span class="nn">dist</span>
<span class="n">student_program</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">Program</span><span class="p">()</span>
<span class="k">with</span> <span class="n">fluid</span><span class="o">.</span><span class="n">program_guard</span><span class="p">(</span><span class="n">student_program</span><span class="p">):</span>
<span class="n">data_name_map</span> <span class="o">=</span> <span class="p">{</span><span class="s1">&#39;y&#39;</span><span class="p">:</span><span class="s1">&#39;x&#39;</span><span class="p">}</span>
<span class="n">USE_GPU</span> <span class="o">=</span> <span class="bp">False</span>
<span class="n">place</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">CUDAPlace</span><span class="p">(</span><span class="mi">0</span><span class="p">)</span> <span class="k">if</span> <span class="n">USE_GPU</span> <span class="k">else</span> <span class="n">fluid</span><span class="o">.</span><span class="n">CPUPlace</span><span class="p">()</span>
<span class="hll"><span class="n">dist</span><span class="o">.</span><span class="n">merge</span><span class="p">(</span><span class="n">teacher_program</span><span class="p">,</span> <span class="n">student_program</span><span class="p">,</span>
</span><span class="hll"> <span class="n">data_name_map</span><span class="p">,</span> <span class="n">place</span><span class="p">)</span>
</span></pre></div>
</ul>
<p><strong>Returns:</strong> the fsp_loss computed from teacher_var1, teacher_var2, student_var1 and student_var2</p>
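<p>Conceptually, the FSP (flow of solution procedure) matrix is the channel-wise inner product between two feature maps, and fsp_loss is the squared difference between the teacher's and the student's FSP matrices. A minimal plain-Python sketch of that computation (illustrative only, not the fluid op):</p>

```python
# Plain-Python sketch of the FSP computation; not the fluid op itself.
def fsp_matrix(fmap_a, fmap_b):
    # fmap_a: C1 x N, fmap_b: C2 x N (channels x flattened spatial positions).
    # Entry (i, j) is the inner product of channel i of fmap_a with channel j
    # of fmap_b, averaged over the N spatial positions.
    n = len(fmap_a[0])
    return [[sum(x * y for x, y in zip(ra, rb)) / n for rb in fmap_b]
            for ra in fmap_a]

def fsp_loss_sketch(t1, t2, s1, s2):
    # Mean squared difference between teacher and student FSP matrices.
    ft, fs = fsp_matrix(t1, t2), fsp_matrix(s1, s2)
    diffs = [(a - b) ** 2 for rt, rs in zip(ft, fs) for a, b in zip(rt, rs)]
    return sum(diffs) / len(diffs)
```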
<p><strong>Usage example:</strong></p>
<div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">paddle.fluid</span> <span class="kn">as</span> <span class="nn">fluid</span>
<span class="kn">import</span> <span class="nn">paddleslim.dist</span> <span class="kn">as</span> <span class="nn">dist</span>
<span class="n">student_program</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">Program</span><span class="p">()</span>
<span class="k">with</span> <span class="n">fluid</span><span class="o">.</span><span class="n">program_guard</span><span class="p">(</span><span class="n">student_program</span><span class="p">):</span>
<span class="n">data_name_map</span> <span class="o">=</span> <span class="p">{</span><span class="s1">&#39;y&#39;</span><span class="p">:</span><span class="s1">&#39;x&#39;</span><span class="p">}</span>
<span class="n">USE_GPU</span> <span class="o">=</span> <span class="bp">False</span>
<span class="n">place</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">CUDAPlace</span><span class="p">(</span><span class="mi">0</span><span class="p">)</span> <span class="k">if</span> <span class="n">USE_GPU</span> <span class="k">else</span> <span class="n">fluid</span><span class="o">.</span><span class="n">CPUPlace</span><span class="p">()</span>
<span class="n">merge</span><span class="p">(</span><span class="n">teacher_program</span><span class="p">,</span> <span class="n">student_program</span><span class="p">,</span> <span class="n">data_name_map</span><span class="p">,</span> <span class="n">place</span><span class="p">)</span>
<span class="k">with</span> <span class="n">fluid</span><span class="o">.</span><span class="n">program_guard</span><span class="p">(</span><span class="n">student_program</span><span class="p">):</span>
<span class="hll"> <span class="n">distillation_loss</span> <span class="o">=</span> <span class="n">dist</span><span class="o">.</span><span class="n">fsp_loss</span><span class="p">(</span><span class="s1">&#39;teacher_t1.tmp_1&#39;</span><span class="p">,</span> <span class="s1">&#39;teacher_t2.tmp_1&#39;</span><span class="p">,</span>
</span><span class="hll"> <span class="s1">&#39;s1.tmp_1&#39;</span><span class="p">,</span> <span class="s1">&#39;s2.tmp_1&#39;</span><span class="p">,</span> <span class="n">student_program</span><span class="p">)</span>
</span></pre></div>
</dl>
<p><strong>Parameters:</strong></p>
<ul>
<li><strong>teacher_var_name</strong>(str): name of the teacher_var.</li>
<li><strong>student_var_name</strong>(str): name of the student_var.</li>
<li><strong>program</strong>(Program): the fluid program used for distillation training. Default: <a href="https://www.paddlepaddle.org.cn/documentation/docs/zh/1.3/api_cn/fluid_cn.html#default-main-program"><em>fluid.default_main_program()</em></a></li>
</ul>
<p><strong>Returns:</strong> the l2_loss computed from teacher_var and student_var</p>
<p><strong>Usage example:</strong></p>
<div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">paddle.fluid</span> <span class="kn">as</span> <span class="nn">fluid</span>
<span class="kn">import</span> <span class="nn">paddleslim.dist</span> <span class="kn">as</span> <span class="nn">dist</span>
<span class="n">student_program</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">Program</span><span class="p">()</span>
<span class="k">with</span> <span class="n">fluid</span><span class="o">.</span><span class="n">program_guard</span><span class="p">(</span><span class="n">student_program</span><span class="p">):</span>
<span class="n">data_name_map</span> <span class="o">=</span> <span class="p">{</span><span class="s1">&#39;y&#39;</span><span class="p">:</span><span class="s1">&#39;x&#39;</span><span class="p">}</span>
<span class="n">USE_GPU</span> <span class="o">=</span> <span class="bp">False</span>
<span class="n">place</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">CUDAPlace</span><span class="p">(</span><span class="mi">0</span><span class="p">)</span> <span class="k">if</span> <span class="n">USE_GPU</span> <span class="k">else</span> <span class="n">fluid</span><span class="o">.</span><span class="n">CPUPlace</span><span class="p">()</span>
<span class="n">merge</span><span class="p">(</span><span class="n">teacher_program</span><span class="p">,</span> <span class="n">student_program</span><span class="p">,</span> <span class="n">data_name_map</span><span class="p">,</span> <span class="n">place</span><span class="p">)</span>
<span class="k">with</span> <span class="n">fluid</span><span class="o">.</span><span class="n">program_guard</span><span class="p">(</span><span class="n">student_program</span><span class="p">):</span>
<span class="hll"> <span class="n">distillation_loss</span> <span class="o">=</span> <span class="n">dist</span><span class="o">.</span><span class="n">l2_loss</span><span class="p">(</span><span class="s1">&#39;teacher_t2.tmp_1&#39;</span><span class="p">,</span> <span class="s1">&#39;s2.tmp_1&#39;</span><span class="p">,</span>
</span><span class="hll"> <span class="n">student_program</span><span class="p">)</span>
</span></pre></div>
</dl>
<p><strong>Parameters:</strong></p>
<ul>
<li><strong>teacher_var_name</strong>(str): name of the teacher_var.</li>
<li><strong>student_var_name</strong>(str): name of the student_var.</li>
<li><strong>program</strong>(Program): the fluid program used for distillation training. Default: <a href="https://www.paddlepaddle.org.cn/documentation/docs/zh/1.3/api_cn/fluid_cn.html#default-main-program"><em>fluid.default_main_program()</em></a></li>
<li><strong>teacher_temperature</strong>(float): temperature used to soften teacher_var; the larger the temperature, the smoother the resulting feature map.</li>
<li><strong>student_temperature</strong>(float): temperature used to soften student_var; the larger the temperature, the smoother the resulting feature map.</li>
</ul>
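<p>The effect of the temperature can be sketched in a few lines: dividing the logits by a temperature before the softmax flattens the resulting distribution, which is what makes the soft labels "smoother" (an illustrative sketch, not the fluid implementation):</p>

```python
import math

def softmax_with_temperature(logits, temperature):
    # Dividing logits by a larger temperature flattens the distribution.
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

sharp = softmax_with_temperature([2.0, 1.0, 0.1], 1.0)  # peaked distribution
soft = softmax_with_temperature([2.0, 1.0, 0.1], 4.0)   # smoother distribution
```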
<p><strong>Returns:</strong> the soft_label_loss computed from teacher_var and student_var</p>
<p><strong>Usage example:</strong></p>
<div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">paddle.fluid</span> <span class="kn">as</span> <span class="nn">fluid</span>
<span class="kn">import</span> <span class="nn">paddleslim.dist</span> <span class="kn">as</span> <span class="nn">dist</span>
<span class="n">student_program</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">Program</span><span class="p">()</span>
<span class="k">with</span> <span class="n">fluid</span><span class="o">.</span><span class="n">program_guard</span><span class="p">(</span><span class="n">student_program</span><span class="p">):</span>
<span class="n">data_name_map</span> <span class="o">=</span> <span class="p">{</span><span class="s1">&#39;y&#39;</span><span class="p">:</span><span class="s1">&#39;x&#39;</span><span class="p">}</span>
<span class="n">USE_GPU</span> <span class="o">=</span> <span class="bp">False</span>
<span class="n">place</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">CUDAPlace</span><span class="p">(</span><span class="mi">0</span><span class="p">)</span> <span class="k">if</span> <span class="n">USE_GPU</span> <span class="k">else</span> <span class="n">fluid</span><span class="o">.</span><span class="n">CPUPlace</span><span class="p">()</span>
<span class="n">merge</span><span class="p">(</span><span class="n">teacher_program</span><span class="p">,</span> <span class="n">student_program</span><span class="p">,</span> <span class="n">data_name_map</span><span class="p">,</span> <span class="n">place</span><span class="p">)</span>
<span class="k">with</span> <span class="n">fluid</span><span class="o">.</span><span class="n">program_guard</span><span class="p">(</span><span class="n">student_program</span><span class="p">):</span>
<span class="hll"> <span class="n">distillation_loss</span> <span class="o">=</span> <span class="n">dist</span><span class="o">.</span><span class="n">soft_label_loss</span><span class="p">(</span><span class="s1">&#39;teacher_t2.tmp_1&#39;</span><span class="p">,</span>
</span><span class="hll"> <span class="s1">&#39;s2.tmp_1&#39;</span><span class="p">,</span> <span class="n">student_program</span><span class="p">,</span> <span class="mf">1.</span><span class="p">,</span> <span class="mf">1.</span><span class="p">)</span>
</span></pre></div>
</dl>
<p><strong>Parameters:</strong></p>
<ul>
<li><strong>loss_func</strong>(python function): a user-defined loss function whose inputs are teacher vars and student vars, and whose output is the custom loss.</li>
<li><strong>program</strong>(Program): the fluid program used for distillation training. Default: <a href="https://www.paddlepaddle.org.cn/documentation/docs/zh/1.3/api_cn/fluid_cn.html#default-main-program"><em>fluid.default_main_program()</em></a></li>
<li><strong>**kwargs</strong>: mapping from loss_func input names to the corresponding variable names</li>
</ul>
<p><strong>Returns</strong>: the user-defined loss</p>
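<p>A loss_func is simply a function that takes the teacher and student variables and returns a scalar loss. Sketched below with plain lists standing in for fluid Variables (the function name and values are made up for illustration):</p>

```python
# Hypothetical custom loss: mean squared error between two feature vectors.
# Plain lists stand in for the fluid Variables the framework would pass in.
def mean_squared_error(teacher_feat, student_feat):
    pairs = list(zip(teacher_feat, student_feat))
    return sum((t - s) ** 2 for t, s in pairs) / len(pairs)

loss = mean_squared_error([0.9, 0.1, 0.0], [0.6, 0.3, 0.1])
```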
<p><strong>使用示例:</strong></p> <p><strong>使用示例:</strong></p>
<div class="codehilite"><pre><span></span><span class="kn">import</span> <span class="nn">paddle.fluid</span> <span class="kn">as</span> <span class="nn">fluid</span> <div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">paddle.fluid</span> <span class="kn">as</span> <span class="nn">fluid</span>
<span class="kn">import</span> <span class="nn">paddleslim.dist</span> <span class="kn">as</span> <span class="nn">dist</span>
<span class="n">student_program</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">Program</span><span class="p">()</span>
<span class="k">with</span> <span class="n">fluid</span><span class="o">.</span><span class="n">program_guard</span><span class="p">(</span><span class="n">student_program</span><span class="p">):</span>
...@@ -370,13 +370,13 @@
<span class="n">data_name_map</span> <span class="o">=</span> <span class="p">{</span><span class="s1">&#39;y&#39;</span><span class="p">:</span><span class="s1">&#39;x&#39;</span><span class="p">}</span>
<span class="n">USE_GPU</span> <span class="o">=</span> <span class="bp">False</span>
<span class="n">place</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">CUDAPlace</span><span class="p">(</span><span class="mi">0</span><span class="p">)</span> <span class="k">if</span> <span class="n">USE_GPU</span> <span class="k">else</span> <span class="n">fluid</span><span class="o">.</span><span class="n">CPUPlace</span><span class="p">()</span>
<span class="n">main_program</span> <span class="o">=</span> <span class="n">merge</span><span class="p">(</span><span class="n">teacher_program</span><span class="p">,</span> <span class="n">student_program</span><span class="p">,</span> <span class="n">data_name_map</span><span class="p">,</span> <span class="n">place</span><span class="p">)</span> <span class="n">merge</span><span class="p">(</span><span class="n">teacher_program</span><span class="p">,</span> <span class="n">student_program</span><span class="p">,</span> <span class="n">data_name_map</span><span class="p">,</span> <span class="n">place</span><span class="p">)</span>
<span class="k">def</span> <span class="nf">adaptation_loss</span><span class="p">(</span><span class="n">t_var</span><span class="p">,</span> <span class="n">s_var</span><span class="p">):</span>
    <span class="n">teacher_channel</span> <span class="o">=</span> <span class="n">t_var</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span>
    <span class="n">s_hint</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">conv2d</span><span class="p">(</span><span class="n">s_var</span><span class="p">,</span> <span class="n">teacher_channel</span><span class="p">,</span> <span class="mi">1</span><span class="p">)</span>
    <span class="n">hint_loss</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">reduce_mean</span><span class="p">(</span><span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">square</span><span class="p">(</span><span class="n">s_hint</span> <span class="o">-</span> <span class="n">t_var</span><span class="p">))</span>
    <span class="k">return</span> <span class="n">hint_loss</span>
<span class="k">with</span> <span class="n">fluid</span><span class="o">.</span><span class="n">program_guard</span><span class="p">(</span><span class="n">main_program</span><span class="p">):</span> <span class="k">with</span> <span class="n">fluid</span><span class="o">.</span><span class="n">program_guard</span><span class="p">(</span><span class="n">student_program</span><span class="p">):</span>
<span class="hll"> <span class="n">distillation_loss</span> <span class="o">=</span> <span class="n">dist</span><span class="o">.</span><span class="n">loss</span><span class="p">(</span><span class="n">main_program</span><span class="p">,</span> <span class="n">adaptation_loss</span><span class="p">,</span>
</span><span class="hll">                              <span class="n">t_var</span><span class="o">=</span><span class="s1">&#39;teacher_t2.tmp_1&#39;</span><span class="p">,</span> <span class="n">s_var</span><span class="o">=</span><span class="s1">&#39;s2.tmp_1&#39;</span><span class="p">)</span>
</span></pre></div>
......
...@@ -168,7 +168,7 @@
<li>Home</li>
<li class="wy-breadcrumbs-aside">
<a href="https://github.com/PaddlePaddle/PaddleSlim/blob/develop/docs/docs/index.md" <a href="https://github.com/PaddlePaddle/PaddleSlim/edit/master/docs/index.md"
class="icon icon-github"> Edit on GitHub</a>
</li>
...@@ -211,15 +211,15 @@
<ul>
<li>Install the develop version</li>
</ul>
<div class="codehilite"><pre><span></span><span class="n">git</span> <span class="n">clone</span> <span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">github</span><span class="p">.</span><span class="n">com</span><span class="o">/</span><span class="n">PaddlePaddle</span><span class="o">/</span><span class="n">PaddleSlim</span><span class="p">.</span><span class="n">git</span> <div class="highlight"><pre><span></span>git clone https://github.com/PaddlePaddle/PaddleSlim.git
<span class="n">cd</span> <span class="n">PaddleSlim</span> cd PaddleSlim
<span class="n">python</span> <span class="n">setup</span><span class="p">.</span><span class="n">py</span> <span class="n">install</span> python setup.py install
</pre></div>
<ul>
<li>Install the latest official release</li>
</ul>
<div class="codehilite"><pre><span></span><span class="n">pip</span> <span class="n">install</span> <span class="n">paddleslim</span> <span class="o">-</span><span class="n">i</span> <span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">pypi</span><span class="p">.</span><span class="n">org</span><span class="o">/</span><span class="k">simple</span> <div class="highlight"><pre><span></span>pip install paddleslim -i https://pypi.org/simple
</pre></div>
<ul>
...@@ -289,5 +289,5 @@
<!--
MkDocs version : 1.0.4
Build Date UTC : 2020-01-16 05:32:44 Build Date UTC : 2020-01-16 06:38:06
-->
...@@ -58,7 +58,7 @@
<a class="current" href="./">Model Zoo</a>
<ul class="subnav">
<li class="toctree-l2"><a href="#1">1. Image Classification</a></li>
<ul>
...@@ -190,7 +190,7 @@
<li>Model Zoo</li>
<li class="wy-breadcrumbs-aside">
<a href="https://github.com/PaddlePaddle/PaddleSlim/blob/develop/docs/docs/model_zoo.md" <a href="https://github.com/PaddlePaddle/PaddleSlim/edit/master/docs/model_zoo.md"
class="icon icon-github"> Edit on GitHub</a>
</li>
...@@ -200,7 +200,7 @@
<div role="main">
<div class="section">
<h2 id="1">1. Image Classification<a class="headerlink" href="#1" title="Permanent link">#</a></h2>
<p>Dataset: ImageNet, 1000 classes</p>
<h3 id="11">1.1 Quantization<a class="headerlink" href="#11" title="Permanent link">#</a></h3>
<table>
...@@ -216,7 +216,7 @@
<tbody>
<tr>
<td align="center">MobileNetV1</td>
<td align="center">FP32 baseline</td> <td align="center">-</td>
<td align="center">70.99%/89.68%</td>
<td align="center">xx</td>
<td align="center"><a href="">Download link</a></td>
...@@ -237,7 +237,7 @@
</tr>
<tr>
<td align="center">MobileNetV2</td>
<td align="center">FP32 baseline</td> <td align="center">-</td>
<td align="center">72.15%/90.65%</td>
<td align="center">xx</td>
<td align="center"><a href="">Download link</a></td>
...@@ -258,7 +258,7 @@
</tr>
<tr>
<td align="center">ResNet50</td>
<td align="center">FP32 baseline</td> <td align="center">-</td>
<td align="center">76.50%/93.00%</td>
<td align="center">xx</td>
<td align="center"><a href="">Download link</a></td>
...@@ -294,7 +294,7 @@
<tbody>
<tr>
<td align="center">MobileNetV1</td>
<td align="center">baseline</td> <td align="center">Baseline</td>
<td align="center">70.99%/89.68%</td>
<td align="center">17</td>
<td align="center">1.11</td>
...@@ -326,7 +326,7 @@
</tr>
<tr>
<td align="center">MobileNetV2</td>
<td align="center">baseline</td> <td align="center">-</td>
<td align="center">72.15%/90.65%</td>
<td align="center">15</td>
<td align="center">0.59</td>
...@@ -342,7 +342,7 @@
</tr>
<tr>
<td align="center">ResNet34</td>
<td align="center">baseline</td> <td align="center">-</td>
<td align="center">72.15%/90.65%</td>
<td align="center">84</td>
<td align="center">7.36</td>
...@@ -460,7 +460,7 @@
<tbody>
<tr>
<td align="center">MobileNet-V1-YOLOv3</td>
<td align="center">FP32 baseline</td> <td align="center">-</td>
<td align="center">COCO</td>
<td align="center">8</td>
<td align="center">29.3</td>
...@@ -493,7 +493,7 @@
</tr>
<tr>
<td align="center">R50-dcn-YOLOv3 obj365_pretrain</td>
<td align="center">FP32 baseline</td> <td align="center">-</td>
<td align="center">COCO</td>
<td align="center">8</td>
<td align="center">41.4</td>
...@@ -542,7 +542,7 @@
<tbody>
<tr>
<td align="center">BlazeFace</td>
<td align="center">FP32 baseline</td> <td align="center">-</td>
<td align="center">8</td>
<td align="center">640</td>
<td align="center">0.915/0.892/0.797</td>
...@@ -569,7 +569,7 @@
</tr>
<tr>
<td align="center">BlazeFace-Lite</td>
<td align="center">FP32 baseline</td> <td align="center">-</td>
<td align="center">8</td>
<td align="center">640</td>
<td align="center">0.909/0.885/0.781</td>
...@@ -596,7 +596,7 @@
</tr>
<tr>
<td align="center">BlazeFace-NAS</td>
<td align="center">FP32 baseline</td> <td align="center">-</td>
<td align="center">8</td>
<td align="center">640</td>
<td align="center">0.837/0.807/0.658</td>
...@@ -643,7 +643,7 @@
<tbody>
<tr>
<td align="center">MobileNet-V1-YOLOv3</td>
<td align="center">baseline</td> <td align="center">Baseline</td>
<td align="center">Pascal VOC</td>
<td align="center">8</td>
<td align="center">76.2</td>
...@@ -667,7 +667,7 @@
</tr>
<tr>
<td align="center">MobileNet-V1-YOLOv3</td>
<td align="center">baseline</td> <td align="center">-</td>
<td align="center">COCO</td>
<td align="center">8</td>
<td align="center">29.3</td>
...@@ -691,7 +691,7 @@
</tr>
<tr>
<td align="center">R50-dcn-YOLOv3</td>
<td align="center">baseline</td> <td align="center">-</td>
<td align="center">COCO</td>
<td align="center">8</td>
<td align="center">39.1</td>
...@@ -727,7 +727,7 @@
</tr>
<tr>
<td align="center">R50-dcn-YOLOv3 obj365_pretrain</td>
<td align="center">baseline</td> <td align="center">-</td>
<td align="center">COCO</td>
<td align="center">8</td>
<td align="center">41.4</td>
...@@ -782,7 +782,7 @@
<tbody>
<tr>
<td align="center">MobileNet-V1-YOLOv3</td>
<td align="center">student</td> <td align="center">-</td>
<td align="center">Pascal VOC</td>
<td align="center">8</td>
<td align="center">76.2</td>
...@@ -793,7 +793,7 @@
</tr>
<tr>
<td align="center">ResNet34-YOLOv3</td>
<td align="center">teacher</td> <td align="center">-</td>
<td align="center">Pascal VOC</td>
<td align="center">8</td>
<td align="center">82.6</td>
...@@ -815,7 +815,7 @@
</tr>
<tr>
<td align="center">MobileNet-V1-YOLOv3</td>
<td align="center">student</td> <td align="center">-</td>
<td align="center">COCO</td>
<td align="center">8</td>
<td align="center">29.3</td>
...@@ -826,7 +826,7 @@
</tr>
<tr>
<td align="center">ResNet34-YOLOv3</td>
<td align="center">teacher</td> <td align="center">-</td>
<td align="center">COCO</td>
<td align="center">8</td>
<td align="center">36.2</td>
...@@ -864,7 +864,7 @@
<tbody>
<tr>
<td align="center">DeepLabv3+/MobileNetv1</td>
<td align="center">FP32 baseline</td> <td align="center">-</td>
<td align="center">63.26</td>
<td align="center">xx</td>
<td align="center"><a href="">Download link</a></td>
...@@ -885,7 +885,7 @@
</tr>
<tr>
<td align="center">DeepLabv3+/MobileNetv2</td>
<td align="center">FP32 baseline</td> <td align="center">-</td>
<td align="center">69.81</td>
<td align="center">xx</td>
<td align="center"><a href="">Download link</a></td>
......
The source diff for this file is too large to display; view the blob instead.
...@@ -177,7 +177,7 @@
<li>Search Space</li>
<li class="wy-breadcrumbs-aside">
<a href="https://github.com/PaddlePaddle/PaddleSlim/blob/develop/docs/docs/search_space.md" <a href="https://github.com/PaddlePaddle/PaddleSlim/edit/master/docs/search_space.md"
class="icon icon-github"> Edit on GitHub</a>
</li>
...@@ -243,7 +243,7 @@
&emsp; 2. The length of the search list for each number in the token (the <code>range_table</code> function), i.e. the index range of each token in tokens.<br>
&emsp; 3. Generating the model architecture from tokens (the <code>token2arch</code> function), which produces the architecture from the searched token list.<br></p>
<p>The following example, which adds a new ResNet block, shows how to construct your own search space. A custom search space must not have the same name as an existing one.</p>
<div class="codehilite"><pre><span></span><span class="c1">### import the search space base class and the search space registry</span> <div class="highlight"><pre><span></span><span class="c1">### import the search space base class and the search space registry</span>
<span class="kn">from</span> <span class="nn">.search_space_base</span> <span class="kn">import</span> <span class="n">SearchSpaceBase</span>
<span class="kn">from</span> <span class="nn">.search_space_registry</span> <span class="kn">import</span> <span class="n">SEARCHSPACE</span>
<span class="kn">import</span> <span class="nn">numpy</span> <span class="kn">as</span> <span class="nn">np</span>
......
This file type cannot be previewed.
...@@ -172,7 +172,7 @@
<li>Hardware Latency Table</li>
<li class="wy-breadcrumbs-aside">
<a href="https://github.com/PaddlePaddle/PaddleSlim/blob/develop/docs/docs/table_latency.md" <a href="https://github.com/PaddlePaddle/PaddleSlim/edit/master/docs/table_latency.md"
class="icon icon-github"> Edit on GitHub</a>
</li>
...@@ -208,7 +208,7 @@
<p>Fields within the operator information are separated by commas; the operator information and the latency are separated by a tab.</p>
<h3 id="conv2d">conv2d<a class="headerlink" href="#conv2d" title="Permanent link">#</a></h3>
<p><strong>Format</strong></p>
<div class="codehilite"><pre><span></span><span class="n">op_type</span><span class="p">,</span><span class="n">flag_bias</span><span class="p">,</span><span class="n">flag_relu</span><span class="p">,</span><span class="n">n_in</span><span class="p">,</span><span class="n">c_in</span><span class="p">,</span><span class="n">h_in</span><span class="p">,</span><span class="n">w_in</span><span class="p">,</span><span class="n">c_out</span><span class="p">,</span><span class="n">groups</span><span class="p">,</span><span class="n">kernel</span><span class="p">,</span><span class="n">padding</span><span class="p">,</span><span class="n">stride</span><span class="p">,</span><span class="n">dilation</span><span class="err">\</span><span class="n">tlatency</span> <div class="highlight"><pre><span></span>op_type,flag_bias,flag_relu,n_in,c_in,h_in,w_in,c_out,groups,kernel,padding,stride,dilation\tlatency
</pre></div>
<p><strong>Field descriptions</strong></p>
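As the format above specifies, the operator information is comma-separated and the latency is appended after a tab. A minimal sketch of parsing one such conv2d line in Python (the sample line and its values are made up for illustration):

```python
def parse_conv2d_latency_line(line):
    # operator info and latency are tab-separated;
    # the info fields themselves are comma-separated
    op_info, latency = line.rstrip("\n").split("\t")
    fields = op_info.split(",")
    keys = ["op_type", "flag_bias", "flag_relu", "n_in", "c_in", "h_in",
            "w_in", "c_out", "groups", "kernel", "padding", "stride", "dilation"]
    record = dict(zip(keys, fields))
    # every field except op_type is an integer; latency is a float (ms)
    for k in keys[1:]:
        record[k] = int(record[k])
    record["latency"] = float(latency)
    return record

sample = "conv2d,1,1,1,3,224,224,32,1,3,1,2,1\t5.123"
rec = parse_conv2d_latency_line(sample)
```

Lines for the other operators below differ only in their field lists.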
...@@ -230,7 +230,7 @@
</ul>
<h3 id="activation">activation<a class="headerlink" href="#activation" title="Permanent link">#</a></h3>
<p><strong>Format</strong></p>
<div class="codehilite"><pre><span></span><span class="n">op_type</span><span class="p">,</span><span class="n">n_in</span><span class="p">,</span><span class="n">c_in</span><span class="p">,</span><span class="n">h_in</span><span class="p">,</span><span class="n">w_in</span><span class="err">\</span><span class="n">tlatency</span> <div class="highlight"><pre><span></span>op_type,n_in,c_in,h_in,w_in\tlatency
</pre></div>
<p><strong>Field descriptions</strong></p>
...@@ -244,7 +244,7 @@
</ul>
<h3 id="batch_norm">batch_norm<a class="headerlink" href="#batch_norm" title="Permanent link">#</a></h3>
<p><strong>Format</strong></p>
<div class="codehilite"><pre><span></span><span class="n">op_type</span><span class="p">,</span><span class="n">active_type</span><span class="p">,</span><span class="n">n_in</span><span class="p">,</span><span class="n">c_in</span><span class="p">,</span><span class="n">h_in</span><span class="p">,</span><span class="n">w_in</span><span class="err">\</span><span class="n">tlatency</span> <div class="highlight"><pre><span></span>op_type,active_type,n_in,c_in,h_in,w_in\tlatency
</pre></div>
<p><strong>Field descriptions</strong></p>
...@@ -259,7 +259,7 @@
</ul>
<h3 id="eltwise">eltwise<a class="headerlink" href="#eltwise" title="Permanent link">#</a></h3>
<p><strong>Format</strong></p>
<div class="codehilite"><pre><span></span><span class="n">op_type</span><span class="p">,</span><span class="n">n_in</span><span class="p">,</span><span class="n">c_in</span><span class="p">,</span><span class="n">h_in</span><span class="p">,</span><span class="n">w_in</span><span class="err">\</span><span class="n">tlatency</span> <div class="highlight"><pre><span></span>op_type,n_in,c_in,h_in,w_in\tlatency
</pre></div>
<p><strong>Field descriptions</strong></p>
...@@ -273,7 +273,7 @@
</ul>
<h3 id="pooling">pooling<a class="headerlink" href="#pooling" title="Permanent link">#</a></h3>
<p><strong>Format</strong></p>
<div class="codehilite"><pre><span></span><span class="n">op_type</span><span class="p">,</span><span class="n">flag_global_pooling</span><span class="p">,</span><span class="n">n_in</span><span class="p">,</span><span class="n">c_in</span><span class="p">,</span><span class="n">h_in</span><span class="p">,</span><span class="n">w_in</span><span class="p">,</span><span class="n">kernel</span><span class="p">,</span><span class="n">padding</span><span class="p">,</span><span class="n">stride</span><span class="p">,</span><span class="n">ceil_mode</span><span class="p">,</span><span class="n">pool_type</span><span class="err">\</span><span class="n">tlatency</span> <div class="highlight"><pre><span></span>op_type,flag_global_pooling,n_in,c_in,h_in,w_in,kernel,padding,stride,ceil_mode,pool_type\tlatency
</pre></div>
<p><strong>Field descriptions</strong></p>
...@@ -293,7 +293,7 @@
</ul>
<h3 id="softmax">softmax<a class="headerlink" href="#softmax" title="Permanent link">#</a></h3>
<p><strong>Format</strong></p>
<div class="codehilite"><pre><span></span><span class="n">op_type</span><span class="p">,</span><span class="n">axis</span><span class="p">,</span><span class="n">n_in</span><span class="p">,</span><span class="n">c_in</span><span class="p">,</span><span class="n">h_in</span><span class="p">,</span><span class="n">w_in</span><span class="err">\</span><span class="n">tlatency</span> <div class="highlight"><pre><span></span>op_type,axis,n_in,c_in,h_in,w_in\tlatency
</pre></div>
<p><strong>Field descriptions</strong></p>
......
...@@ -150,7 +150,7 @@
<li>Demo guide</li>
<li class="wy-breadcrumbs-aside">
<a href="https://github.com/PaddlePaddle/PaddleSlim/blob/develop/docs/docs/tutorials/demo_guide.md" <a href="https://github.com/PaddlePaddle/PaddleSlim/edit/master/docs/tutorials/demo_guide.md"
class="icon icon-github"> Edit on GitHub</a>
</li>
......
...@@ -177,7 +177,7 @@
<li>Knowledge Distillation</li>
<li class="wy-breadcrumbs-aside">
<a href="https://github.com/PaddlePaddle/PaddleSlim/blob/develop/docs/docs/tutorials/distillation_demo.md" <a href="https://github.com/PaddlePaddle/PaddleSlim/edit/master/docs/tutorials/distillation_demo.md"
class="icon icon-github"> Edit on GitHub</a>
</li>
...@@ -194,7 +194,7 @@
<p>In general, the more parameters a model has and the more complex its structure, the better it performs, but the greater its computation and resource cost. <strong>Knowledge distillation</strong> compresses the useful information (dark knowledge) learned by a large model into a smaller, faster model that can match the large model's results.</p>
<p>In this example, the large, more accurate model is called the teacher; the smaller, slightly less accurate but faster model is called the student.</p>
<h3 id="1-student_program">1. Define the student_program<a class="headerlink" href="#1-student_program" title="Permanent link">#</a></h3>
<div class="codehilite"><pre><span></span><span class="n">student_program</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">Program</span><span class="p">()</span> <div class="highlight"><pre><span></span><span class="n">student_program</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">Program</span><span class="p">()</span>
<span class="n">student_startup</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">Program</span><span class="p">()</span>
<span class="k">with</span> <span class="n">fluid</span><span class="o">.</span><span class="n">program_guard</span><span class="p">(</span><span class="n">student_program</span><span class="p">,</span> <span class="n">student_startup</span><span class="p">):</span>
    <span class="n">image</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">data</span><span class="p">(</span>
...@@ -210,7 +210,7 @@
<h3 id="2-teacher_program">2. Define the teacher_program<a class="headerlink" href="#2-teacher_program" title="Permanent link">#</a></h3>
<p>After defining the <code>teacher_program</code>, you can also load the trained pretrained_model.</p>
<p>Inside the <code>teacher_program</code>, add <code>with fluid.unique_name.guard():</code> so that the teacher's variable names are not affected by the <code>student_program</code> and the pretrained parameters can be loaded correctly.</p>
<div class="highlight"><pre><span></span>teacher_program = fluid.Program()
teacher_startup = fluid.Program()
with fluid.program_guard(teacher_program, teacher_startup):
    with fluid.unique_name.guard():
</pre></div>
<h3 id="3">3. Select feature maps<a class="headerlink" href="#3" title="Permanent link">#</a></h3>
<p>With <code>student_program</code> and <code>teacher_program</code> defined, we pick several pairs of corresponding feature maps from the two programs; the knowledge-distillation loss functions will be attached to them later.</p>
<div class="highlight"><pre><span></span># get all student variables
student_vars = []
for v in student_program.list_vars():
    try:
        student_vars.append((v.name, v.shape))
    except:
        pass
</pre></div>
<h3 id="4-programmerge">4. Merge the programs (merge)<a class="headerlink" href="#4-programmerge" title="Permanent link">#</a></h3>
<p>PaddlePaddle describes a computation graph with a Program. To compute the student and teacher together, the two Programs must be merged into one.</p>
<p>The merge involves a number of operations; for details, see the <a href="https://paddlepaddle.github.io/PaddleSlim/api/single_distiller_api/#merge">merge API documentation</a>.</p>
<div class="highlight"><pre><span></span>data_name_map = {'data': 'image'}
merge(teacher_program, student_program, data_name_map, place)
</pre></div>
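<p>Conceptually, the merge step performs two renamings: teacher inputs listed in <code>data_name_map</code> are rewired to the student's variables, and every other teacher variable receives a name prefix (<code>"teacher_"</code> by default). A minimal pure-Python sketch of that renaming rule (illustrative only, not the actual merge implementation):</p>

```python
def merge_names(teacher_var_names, data_name_map, prefix="teacher_"):
    """Sketch of merge's renaming rule: mapped inputs are shared with
    the student; all other teacher variables are prefixed to avoid
    clashing with student variable names."""
    renamed = {}
    for name in teacher_var_names:
        if name in data_name_map:
            renamed[name] = data_name_map[name]  # rewired to the student input
        else:
            renamed[name] = prefix + name        # disambiguated teacher variable
    return renamed

mapping = merge_names(
    ["data", "bn5c_branch2b.output.1.tmp_3"], {"data": "image"})
print(mapping["data"])                          # image
print(mapping["bn5c_branch2b.output.1.tmp_3"])  # teacher_bn5c_branch2b.output.1.tmp_3
```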
<h3 id="5loss">5. Add the distillation loss<a class="headerlink" href="#5loss" title="Permanent link">#</a></h3>
<p>Adding the distillation loss may introduce new Variables. To avoid name collisions, wrap the new code in <code>with fluid.name_scope("distill"):</code>, which places the new variables under their own naming scope.</p>
<p>Also note that merge prefixes every <code>teacher_program</code> variable name, with <code>"teacher_"</code> by default, so the same prefix must be used for the teacher variable when building the <code>l2_loss</code>.</p>
<div class="highlight"><pre><span></span>with fluid.program_guard(student_program, student_startup):
    with fluid.name_scope("distill"):
        distill_loss = l2_loss('teacher_bn5c_branch2b.output.1.tmp_3',
                               'depthwise_conv2d_11.tmp_0', student_program)
</pre></div>
<li>SA search</li>
<h2 id="_2">API overview<a class="headerlink" href="#_2" title="Permanent link">#</a></h2>
<p>See the <a href="https://paddlepaddle.github.io/PaddleSlim/api/nas_api/">NAS API documentation</a>.</p>
<h3 id="1">1. Configure the search space<a class="headerlink" href="#1" title="Permanent link">#</a></h3>
<p>For the full search-space configuration, see the <a href="https://paddlepaddle.github.io/PaddleSlim/api/nas_api/">NAS API documentation</a>.
<div class="highlight"><pre><span></span>config = [('MobileNetV2Space')]
</pre></div></p>
<h3 id="2-sanas">2. Initialize a SANAS instance with the search space<a class="headerlink" href="#2-sanas" title="Permanent link">#</a></h3>
<div class="highlight"><pre><span></span>from paddleslim.nas import SANAS
sa_nas = SANAS(
    config,
    server_addr=("", 8881),
    init_temperature=10.24,
    reduce_rate=0.85,
    search_steps=300,
    is_server=True)
</pre></div>
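<p>SANAS drives the search with simulated annealing: <code>init_temperature</code> sets the starting temperature and <code>reduce_rate</code> decays it each step. A minimal sketch of the annealing acceptance rule (illustrative; not PaddleSlim's actual controller):</p>

```python
import math
import random

def accept(new_score, best_score, temperature):
    """Simulated-annealing acceptance: better architectures are always
    kept; worse ones are kept with probability exp(delta / T)."""
    delta = new_score - best_score
    if delta >= 0:
        return True
    return random.random() < math.exp(delta / temperature)

# The temperature decays by reduce_rate every step, so early steps
# explore freely while late steps accept almost only improvements.
T = 10.24             # init_temperature
reduce_rate = 0.85
for _ in range(300):  # search_steps
    T *= reduce_rate
```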
<h3 id="3-nas">3. Get the current network architectures from the NAS instance<a class="headerlink" href="#3-nas" title="Permanent link">#</a></h3>
<div class="highlight"><pre><span></span>archs = sa_nas.next_archs()
</pre></div>
<h3 id="4-program">4. Build the training and test programs from the sampled architectures and the inputs<a class="headerlink" href="#4-program" title="Permanent link">#</a></h3>
<div class="highlight"><pre><span></span>import paddle.fluid as fluid
train_program = fluid.Program()
test_program = fluid.Program()
startup_program = fluid.Program()
with fluid.program_guard(train_program, startup_program):
    data = fluid.data(name='data', shape=[None, 3, 32, 32], dtype='float32')
    label = fluid.data(name='label', shape=[None, 1], dtype='int64')
    for arch in archs:
        data = arch(data)
    output = fluid.layers.fc(data, 10)
    softmax_out = fluid.layers.softmax(input=output, use_cudnn=False)
    cost = fluid.layers.cross_entropy(input=softmax_out, label=label)
    avg_cost = fluid.layers.mean(cost)
    acc_top1 = fluid.layers.accuracy(input=softmax_out, label=label, k=1)
    test_program = train_program.clone(for_test=True)
    sgd = fluid.optimizer.SGD(learning_rate=1e-3)
    sgd.minimize(avg_cost)
</pre></div>
<h3 id="5-program">5. Apply constraints to the constructed training program<a class="headerlink" href="#5-program" title="Permanent link">#</a></h3>
<div class="highlight"><pre><span></span>from paddleslim.analysis import flops
if flops(train_program) &gt; 321208544:
    continue
</pre></div>
<h3 id="6-score">6. Report the score back<a class="headerlink" href="#6-score" title="Permanent link">#</a></h3>
<div class="highlight"><pre><span></span>sa_nas.reward(score)
</pre></div>
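<p>Putting steps 2-6 together gives one search iteration per step. The skeleton below stubs out program construction and evaluation (the <code>model_flops</code> and <code>build_and_eval</code> helpers, and the <code>FakeNAS</code> class, are hypothetical stand-ins) to show only the control flow:</p>

```python
import random

def model_flops(tokens):
    # hypothetical stand-in for paddleslim.analysis.flops on the built program
    return 300_000_000 + sum(tokens) * 1_000_000

def build_and_eval(tokens):
    # hypothetical stand-in for building the programs and measuring accuracy
    return random.random()

class FakeNAS:
    """Minimal stand-in for SANAS exposing only next_archs/reward."""
    def next_archs(self):
        return [random.randint(0, 5) for _ in range(3)]
    def reward(self, score):
        self.last_score = score

sa_nas = FakeNAS()
for step in range(300):                    # search_steps
    tokens = sa_nas.next_archs()           # step 3: sample an architecture
    if model_flops(tokens) > 321208544:    # step 5: enforce the FLOPs budget
        continue                           # skip over-budget architectures
    sa_nas.reward(build_and_eval(tokens))  # step 6: report the score back
```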
</div> </div>
<li>Convolution channel pruning example</li>
<p>This example uses the <code>paddleslim.Pruner</code> utility class; for its user interface, see the <a href="https://paddlepaddle.github.io/PaddleSlim/api/prune_api/">API documentation</a>.</p>
<h2 id="_3">Choose the parameters to prune<a class="headerlink" href="#_3" title="Permanent link">#</a></h2>
<p>Parameter names differ between models, so the parameter names of the convolution layers to prune must be determined before pruning. All parameter names can be listed with:</p>
<div class="highlight"><pre><span></span>for param in program.global_block().all_parameters():
    print("param name: {}; shape: {}".format(param.name, param.shape))
</pre></div>
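<p>Channel pruning typically ranks a convolution layer's filters by a norm of their weights and drops the smallest ones. A pure-Python sketch of that criterion (illustrative; not the <code>paddleslim.Pruner</code> implementation):</p>

```python
def rank_channels(filters):
    """Rank conv filters by the L1 norm of their weights; the channels
    with the smallest norms are the usual candidates for pruning."""
    norms = [sum(abs(w) for w in f) for f in filters]
    return sorted(range(len(filters)), key=lambda i: norms[i])

# Four filters (flattened weights); prune the 30% with the smallest L1 norm.
filters = [[0.9, -0.8], [0.01, 0.02], [0.5, 0.4], [-0.02, 0.01]]
order = rank_channels(filters)
ratio = 0.3
to_prune = order[:int(len(filters) * ratio)]
print(to_prune)  # [1]
```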
<p><code>train.py</code>脚本中,提供了<code>get_pruned_params</code>方法,根据用户设置的选项<code>--model</code>确定要裁剪的参数。</p> <p><code>train.py</code>脚本中,提供了<code>get_pruned_params</code>方法,根据用户设置的选项<code>--model</code>确定要裁剪的参数。</p>
<h2 id="_4">启动裁剪任务<a class="headerlink" href="#_4" title="Permanent link">#</a></h2> <h2 id="_4">启动裁剪任务<a class="headerlink" href="#_4" title="Permanent link">#</a></h2>
<p>通过以下命令启动裁剪任务:</p> <p>通过以下命令启动裁剪任务:</p>
<div class="highlight"><pre><span></span>export CUDA_VISIBLE_DEVICES=0
python train.py
</pre></div>
<p>Run <code>python train.py --help</code> to see more options.</p>
<li>Quantization-aware training</li>
<h1 id="_1">Quantization-aware training example<a class="headerlink" href="#_1" title="Permanent link">#</a></h1>
<p>This example shows how to use the quantization-aware training API to quantize a trained classification model, which reduces the model's storage size and GPU memory footprint.</p>
<h2 id="_2">API overview<a class="headerlink" href="#_2" title="Permanent link">#</a></h2>
<p>See the <a href="https://paddlepaddle.github.io/PaddleSlim/api/quantization_api/">quantization API documentation</a>.</p>
<h2 id="_3">Quantization workflow for a classification model<a class="headerlink" href="#_3" title="Permanent link">#</a></h2>
<h3 id="1">1. Configure the quantization parameters<a class="headerlink" href="#1" title="Permanent link">#</a></h3>
<div class="highlight"><pre><span></span>quant_config = {
    'weight_quantize_type': 'abs_max',
    'activation_quantize_type': 'moving_average_abs_max',
    'weight_bits': 8,
    'activation_bits': 8,
    'not_quant_pattern': ['skip_quant'],
    'quantize_op_types': ['conv2d', 'depthwise_conv2d', 'mul'],
    'dtype': 'int8',
    'window_size': 10000,
    'moving_rate': 0.9,
    'quant_weight_only': False
}
</pre></div>
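<p>The <code>'abs_max'</code> setting above scales weights by their maximum absolute value, so the largest weight maps to the edge of the signed 8-bit range. A minimal sketch of that scheme (illustrative; not PaddleSlim's quantization kernel):</p>

```python
def quant_abs_max(weights, bits=8):
    """Symmetric abs_max quantization: scale by max |w| so the largest
    weight maps to the edge of the signed integer range."""
    scale = max(abs(w) for w in weights)
    qmax = 2 ** (bits - 1) - 1          # 127 for 8-bit
    quantized = [round(w / scale * qmax) for w in weights]
    dequantized = [q * scale / qmax for q in quantized]
    return quantized, scale, dequantized

q, s, dq = quant_abs_max([0.5, -1.0, 0.25])
print(q)  # [64, -127, 32]
```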
<h3 id="2-programop">2. Insert trainable quantization ops into the training and test programs<a class="headerlink" href="#2-programop" title="Permanent link">#</a></h3>
<div class="highlight"><pre><span></span>val_program = quant_aware(val_program, place, quant_config, scope=None, for_test=True)
compiled_train_prog = quant_aware(train_prog, place, quant_config, scope=None, for_test=False)
</pre></div>
<h3 id="3build">3. Turn off certain build strategies<a class="headerlink" href="#3build" title="Permanent link">#</a></h3>
<div class="highlight"><pre><span></span>build_strategy = fluid.BuildStrategy()
build_strategy.fuse_all_reduce_ops = False
build_strategy.sync_batch_norm = False
exec_strategy = fluid.ExecutionStrategy()
compiled_train_prog = compiled_train_prog.with_data_parallel(
        loss_name=avg_cost.name,
        build_strategy=build_strategy,
        exec_strategy=exec_strategy)
</pre></div>
<h3 id="4-freeze-program">4. Freeze the program<a class="headerlink" href="#4-freeze-program" title="Permanent link">#</a></h3>
<div class="highlight"><pre><span></span>float_program, int8_program = convert(val_program,
                                      place,
                                      quant_config,
                                      scope=None,
                                      save_int8=True)
</pre></div>
<h3 id="5">5. Save the inference models<a class="headerlink" href="#5" title="Permanent link">#</a></h3>
<div class="highlight"><pre><span></span>fluid.io.save_inference_model(
    dirname=float_path,
    feeded_var_names=[image.name],
    target_vars=[out], executor=exe,
    main_program=float_program,
    model_filename=float_path + '/model',
    params_filename=float_path + '/params')
fluid.io.save_inference_model(
    dirname=int8_path,
    feeded_var_names=[image.name],
    target_vars=[out], executor=exe,
    main_program=int8_program,
    model_filename=int8_path + '/model',
    params_filename=int8_path + '/params')
</pre></div>
</div> </div>
<li>Embedding quantization</li>
<li class="wy-breadcrumbs-aside"> <li class="wy-breadcrumbs-aside">
<a href="https://github.com/PaddlePaddle/PaddleSlim/blob/develop/docs/docs/tutorials/quant_embedding_demo.md" <a href="https://github.com/PaddlePaddle/PaddleSlim/edit/master/docs/tutorials/quant_embedding_demo.md"
class="icon icon-github"> Edit on GitHub</a> class="icon icon-github"> Edit on GitHub</a>
</li> </li>
...@@ -179,25 +179,25 @@
<div class="section"> <div class="section">
<h1 id="embedding">Embedding量化示例<a class="headerlink" href="#embedding" title="Permanent link">#</a></h1> <h1 id="embedding">Embedding量化示例<a class="headerlink" href="#embedding" title="Permanent link">#</a></h1>
<p>本示例介绍如何使用Embedding量化的接口 <a href="https://paddlepaddle.github.io/PaddleSlim/api/quantization_api/">paddleslim.quant.quant_embedding</a><code>quant_embedding</code>接口将网络中的Embedding参数从<code>float32</code>类型量化到 <code>8-bit</code>整数类型,在几乎不损失模型精度的情况下减少模型的存储空间和显存占用。</p> <p>本示例介绍如何使用Embedding量化的接口 <a href="">paddleslim.quant.quant_embedding</a><code>quant_embedding</code>接口将网络中的Embedding参数从<code>float32</code>类型量化到 <code>8-bit</code>整数类型,在几乎不损失模型精度的情况下减少模型的存储空间和显存占用。</p>
<p>接口介绍请参考 <a href="https://paddlepaddle.github.io/PaddleSlim/api/quantization_api/">量化API文档</a></p> <p>接口介绍请参考 <a href='../../../paddleslim/quant/quantization_api_doc.md'>量化API文档</a></p>
<p>该接口对program的修改:</p> <p>该接口对program的修改:</p>
<p>量化前:</p> <p>量化前:</p>
<p align="center"> <p align="center">
<img src="https://raw.githubusercontent.com/PaddlePaddle/PaddleSlim/develop/demo/quant/quant_embedding/image/before.png" height=200 width=100 hspace='10'/> <br /> <img src="./image/before.png" height=200 width=100 hspace='10'/> <br />
<strong>图1:量化前的模型结构</strong> <strong>图1:量化前的模型结构</strong>
</p> </p>
<p>量化后:</p> <p>量化后:</p>
<p align="center"> <p align="center">
<img src="https://raw.githubusercontent.com/PaddlePaddle/PaddleSlim/develop/demo/quant/quant_embedding/image/after.png" height=300 width=300 hspace='10'/> <br /> <img src="./image/after.png" height=300 width=300 hspace='10'/> <br />
<strong>图2: 量化后的模型结构</strong> <strong>图2: 量化后的模型结构</strong>
</p> </p>
<p>以下将以 <code>基于skip-gram的word2vector模型</code> 为例来说明如何使用<code>quant_embedding</code>接口。首先介绍 <code>基于skip-gram的word2vector模型</code> 的正常训练和测试流程。</p> <p>以下将以 <code>基于skip-gram的word2vector模型</code> 为例来说明如何使用<code>quant_embedding</code>接口。首先介绍 <code>基于skip-gram的word2vector模型</code> 的正常训练和测试流程。</p>
<h2 id="skip-gramword2vector">基于skip-gram的word2vector模型<a class="headerlink" href="#skip-gramword2vector" title="Permanent link">#</a></h2> <h2 id="skip-gramword2vector">基于skip-gram的word2vector模型<a class="headerlink" href="#skip-gramword2vector" title="Permanent link">#</a></h2>
<p>以下是本例的简要目录结构及说明:</p> <p>以下是本例的简要目录结构及说明:</p>
<div class="codehilite"><pre><span></span>. <div class="highlight"><pre><span></span>.
├── cluster_train.py # 分布式训练函数 ├── cluster_train.py # 分布式训练函数
├── cluster_train.sh # 本地模拟多机脚本 ├── cluster_train.sh # 本地模拟多机脚本
├── train.py # 训练函数 ├── train.py # 训练函数
...@@ -214,21 +214,21 @@
<p>同时推荐用户参考<a href="https://aistudio.baidu.com/aistudio/projectDetail/124377"> IPython Notebook demo</a></p> <p>同时推荐用户参考<a href="https://aistudio.baidu.com/aistudio/projectDetail/124377"> IPython Notebook demo</a></p>
<h3 id="_2">数据下载<a class="headerlink" href="#_2" title="Permanent link">#</a></h3> <h3 id="_2">数据下载<a class="headerlink" href="#_2" title="Permanent link">#</a></h3>
<p>全量数据集使用的是 1 Billion Word Language Model Benchmark 数据集(<a href="http://www.statmt.org/lm-benchmark">http://www.statmt.org/lm-benchmark</a>)。</p> <p>全量数据集使用的是 1 Billion Word Language Model Benchmark 数据集(<a href="http://www.statmt.org/lm-benchmark">http://www.statmt.org/lm-benchmark</a>)。</p>
<div class="codehilite"><pre><span></span>mkdir data <div class="highlight"><pre><span></span>mkdir data
wget http://www.statmt.org/lm-benchmark/1-billion-word-language-modeling-benchmark-r13output.tar.gz wget http://www.statmt.org/lm-benchmark/1-billion-word-language-modeling-benchmark-r13output.tar.gz
tar xzvf <span class="m">1</span>-billion-word-language-modeling-benchmark-r13output.tar.gz tar xzvf <span class="m">1</span>-billion-word-language-modeling-benchmark-r13output.tar.gz
mv <span class="m">1</span>-billion-word-language-modeling-benchmark-r13output/training-monolingual.tokenized.shuffled/ data/ mv <span class="m">1</span>-billion-word-language-modeling-benchmark-r13output/training-monolingual.tokenized.shuffled/ data/
</pre></div> </pre></div>
<p>备用数据地址下载命令如下</p> <p>备用数据地址下载命令如下</p>
<div class="codehilite"><pre><span></span>mkdir data <div class="highlight"><pre><span></span>mkdir data
wget https://paddlerec.bj.bcebos.com/word2vec/1-billion-word-language-modeling-benchmark-r13output.tar wget https://paddlerec.bj.bcebos.com/word2vec/1-billion-word-language-modeling-benchmark-r13output.tar
tar xvf <span class="m">1</span>-billion-word-language-modeling-benchmark-r13output.tar tar xvf <span class="m">1</span>-billion-word-language-modeling-benchmark-r13output.tar
mv <span class="m">1</span>-billion-word-language-modeling-benchmark-r13output/training-monolingual.tokenized.shuffled/ data/ mv <span class="m">1</span>-billion-word-language-modeling-benchmark-r13output/training-monolingual.tokenized.shuffled/ data/
</pre></div> </pre></div>
<p>为了方便快速验证,我们也提供了经典的text8样例数据集,包含1700w个词。 下载命令如下</p> <p>为了方便快速验证,我们也提供了经典的text8样例数据集,包含1700w个词。 下载命令如下</p>
<div class="codehilite"><pre><span></span>mkdir data <div class="highlight"><pre><span></span>mkdir data
wget https://paddlerec.bj.bcebos.com/word2vec/text.tar wget https://paddlerec.bj.bcebos.com/word2vec/text.tar
tar xvf text.tar tar xvf text.tar
mv text data/ mv text data/
...@@ -238,119 +238,119 @@ mv text data/
<p>以样例数据集为例进行预处理。注意:全量数据集解压后,应以 training-monolingual.tokenized.shuffled 目录作为预处理目录,该目录与样例数据集的 text 目录并列。</p> <p>以样例数据集为例进行预处理。注意:全量数据集解压后,应以 training-monolingual.tokenized.shuffled 目录作为预处理目录,该目录与样例数据集的 text 目录并列。</p>
<p>词典格式: 词&lt;空格&gt;词频。注意低频词用'UNK'表示</p> <p>词典格式: 词&lt;空格&gt;词频。注意低频词用'UNK'表示</p>
<p>可以按上述格式自建词典;如果使用自建词典,可跳过第一步。 <p>可以按上述格式自建词典;如果使用自建词典,可跳过第一步。
<div class="codehilite"><pre><span></span><span class="n">the</span> <span class="mi">1061396</span> <div class="highlight"><pre><span></span>the 1061396
<span class="k">of</span> <span class="mi">593677</span> of 593677
<span class="k">and</span> <span class="mi">416629</span> and 416629
<span class="n">one</span> <span class="mi">411764</span> one 411764
<span class="k">in</span> <span class="mi">372201</span> in 372201
<span class="n">a</span> <span class="mi">325873</span> a 325873
<span class="o">&lt;</span><span class="n">UNK</span><span class="o">&gt;</span> <span class="mi">324608</span> &lt;UNK&gt; 324608
<span class="k">to</span> <span class="mi">316376</span> to 316376
<span class="n">zero</span> <span class="mi">264975</span> zero 264975
<span class="n">nine</span> <span class="mi">250430</span> nine 250430
</pre></div></p> </pre></div></p>
<p>第一步根据英文语料生成词典,中文语料可以通过修改text_strip方法自定义处理方法。</p> <p>第一步根据英文语料生成词典,中文语料可以通过修改text_strip方法自定义处理方法。</p>
<div class="codehilite"><pre><span></span>python preprocess.py --build_dict --build_dict_corpus_dir data/text/ --dict_path data/test_build_dict <div class="highlight"><pre><span></span>python preprocess.py --build_dict --build_dict_corpus_dir data/text/ --dict_path data/test_build_dict
</pre></div> </pre></div>
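上述命令生成词典的核心逻辑是:统计语料词频,并把词频低于 min_count 的词合并计入 'UNK'。下面用纯 Python 给出一个简化示意(`build_dict` 为示意用的假设函数,并非 preprocess.py 的真实实现,分词与排序细节可能不同):

```python
from collections import Counter

def build_dict(corpus_lines, min_count=5):
    """统计词频;词频低于 min_count 的词合并计入 '<UNK>'。仅为原理示意。"""
    counts = Counter()
    for line in corpus_lines:
        counts.update(line.strip().split())
    vocab, unk_count = {}, 0
    for word, freq in counts.items():
        if freq >= min_count:
            vocab[word] = freq
        else:
            unk_count += freq      # 低频词本身不进入词典,只累计词频
    vocab['<UNK>'] = unk_count
    # 输出为 "词 词频" 格式,按词频从高到低排列
    return ['%s %d' % (w, f)
            for w, f in sorted(vocab.items(), key=lambda x: -x[1])]
```

例如语料 `["the cat sat", "the cat", "the"]` 在 min_count=2 时会得到 `the 3`、`cat 2`,而只出现一次的 sat 被并入 UNK。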
<p>第二步根据词典将文本转成 id,同时进行 downsample,按照概率过滤常见词,并生成 word 和 id 映射的文件,文件名为词典名加后缀 <code>_word_to_id_</code>。</p> <p>第二步根据词典将文本转成 id,同时进行 downsample,按照概率过滤常见词,并生成 word 和 id 映射的文件,文件名为词典名加后缀 <code>_word_to_id_</code>。</p>
<div class="codehilite"><pre><span></span>python preprocess.py --filter_corpus --dict_path data/test_build_dict --input_corpus_dir data/text --output_corpus_dir data/convert_text8 --min_count <span class="m">5</span> --downsample <span class="m">0</span>.001 <div class="highlight"><pre><span></span>python preprocess.py --filter_corpus --dict_path data/test_build_dict --input_corpus_dir data/text --output_corpus_dir data/convert_text8 --min_count <span class="m">5</span> --downsample <span class="m">0</span>.001
</pre></div> </pre></div>
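其中"按照概率过滤常见词"即 word2vec 的下采样(downsample):词频越高的词被丢弃的概率越大。下面是一种常见下采样公式的纯 Python 示意(`keep_prob` 与 `downsample_corpus` 为示意用的假设函数,preprocess.py 的实际公式可能略有出入):

```python
import math
import random

def keep_prob(word_freq, total_words, downsample=1e-3):
    """返回某个词被保留的概率:词频占比越高,保留概率越低。"""
    ratio = word_freq / float(total_words)
    return min(1.0, (math.sqrt(ratio / downsample) + 1) * downsample / ratio)

def downsample_corpus(words, freqs, total_words, downsample=1e-3,
                      rng=random.random):
    """按保留概率过滤词序列。rng 可替换为固定函数以便复现。"""
    return [w for w in words
            if rng() < keep_prob(freqs[w], total_words, downsample)]
```

按此公式,词频占比不超过 downsample 阈值(0.001)的词保留概率为 1,而占比 10% 的高频词保留概率只有约 0.11。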
<h3 id="_4">训练<a class="headerlink" href="#_4" title="Permanent link">#</a></h3> <h3 id="_4">训练<a class="headerlink" href="#_4" title="Permanent link">#</a></h3>
<p>具体的参数配置可运行</p> <p>具体的参数配置可运行</p>
<div class="codehilite"><pre><span></span>python train.py -h <div class="highlight"><pre><span></span>python train.py -h
</pre></div> </pre></div>
<p>单机多线程训练 <p>单机多线程训练
<div class="codehilite"><pre><span></span><span class="nv">OPENBLAS_NUM_THREADS</span><span class="o">=</span><span class="m">1</span> <span class="nv">CPU_NUM</span><span class="o">=</span><span class="m">5</span> python train.py --train_data_dir data/convert_text8 --dict_path data/test_build_dict --num_passes <span class="m">10</span> --batch_size <span class="m">100</span> --model_output_dir v1_cpu5_b100_lr1dir --base_lr <span class="m">1</span>.0 --print_batch <span class="m">1000</span> --with_speed --is_sparse <div class="highlight"><pre><span></span><span class="nv">OPENBLAS_NUM_THREADS</span><span class="o">=</span><span class="m">1</span> <span class="nv">CPU_NUM</span><span class="o">=</span><span class="m">5</span> python train.py --train_data_dir data/convert_text8 --dict_path data/test_build_dict --num_passes <span class="m">10</span> --batch_size <span class="m">100</span> --model_output_dir v1_cpu5_b100_lr1dir --base_lr <span class="m">1</span>.0 --print_batch <span class="m">1000</span> --with_speed --is_sparse
</pre></div></p> </pre></div></p>
<p>本地单机模拟多机训练</p> <p>本地单机模拟多机训练</p>
<div class="codehilite"><pre><span></span>sh cluster_train.sh <div class="highlight"><pre><span></span>sh cluster_train.sh
</pre></div> </pre></div>
<p>本示例中按照单机多线程训练的命令进行训练,训练完毕后,可看到在当前文件夹下保存模型的路径为: <code>v1_cpu5_b100_lr1dir</code>, 运行 <code>ls v1_cpu5_b100_lr1dir</code>可看到该文件夹下保存了训练的10个epoch的模型文件。 <p>本示例中按照单机多线程训练的命令进行训练,训练完毕后,可看到在当前文件夹下保存模型的路径为: <code>v1_cpu5_b100_lr1dir</code>, 运行 <code>ls v1_cpu5_b100_lr1dir</code>可看到该文件夹下保存了训练的10个epoch的模型文件。
<div class="codehilite"><pre><span></span><span class="n">pass</span><span class="o">-</span><span class="mi">0</span> <span class="n">pass</span><span class="o">-</span><span class="mi">1</span> <span class="n">pass</span><span class="o">-</span><span class="mi">2</span> <span class="n">pass</span><span class="o">-</span><span class="mi">3</span> <span class="n">pass</span><span class="o">-</span><span class="mi">4</span> <span class="n">pass</span><span class="o">-</span><span class="mi">5</span> <span class="n">pass</span><span class="o">-</span><span class="mi">6</span> <span class="n">pass</span><span class="o">-</span><span class="mi">7</span> <span class="n">pass</span><span class="o">-</span><span class="mi">8</span> <span class="n">pass</span><span class="o">-</span><span class="mi">9</span> <div class="highlight"><pre><span></span>pass-0 pass-1 pass-2 pass-3 pass-4 pass-5 pass-6 pass-7 pass-8 pass-9
</pre></div></p> </pre></div></p>
<h3 id="_5">预测<a class="headerlink" href="#_5" title="Permanent link">#</a></h3> <h3 id="_5">预测<a class="headerlink" href="#_5" title="Permanent link">#</a></h3>
<p>测试集下载命令如下</p> <p>测试集下载命令如下</p>
<div class="codehilite"><pre><span></span><span class="c1">#全量数据集测试集</span> <div class="highlight"><pre><span></span><span class="c1">#全量数据集测试集</span>
wget https://paddlerec.bj.bcebos.com/word2vec/test_dir.tar wget https://paddlerec.bj.bcebos.com/word2vec/test_dir.tar
<span class="c1">#样本数据集测试集</span> <span class="c1">#样本数据集测试集</span>
wget https://paddlerec.bj.bcebos.com/word2vec/test_mid_dir.tar wget https://paddlerec.bj.bcebos.com/word2vec/test_mid_dir.tar
</pre></div> </pre></div>
<p>预测命令,注意词典名称需要加后缀 <code>_word_to_id_</code>,此文件是预处理阶段生成的。 <p>预测命令,注意词典名称需要加后缀 <code>_word_to_id_</code>,此文件是预处理阶段生成的。
<div class="codehilite"><pre><span></span>python infer.py --infer_epoch --test_dir data/test_mid_dir --dict_path data/test_build_dict_word_to_id_ --batch_size <span class="m">20000</span> --model_dir v1_cpu5_b100_lr1dir/ --start_index <span class="m">0</span> --last_index <span class="m">9</span> <div class="highlight"><pre><span></span>python infer.py --infer_epoch --test_dir data/test_mid_dir --dict_path data/test_build_dict_word_to_id_ --batch_size <span class="m">20000</span> --model_dir v1_cpu5_b100_lr1dir/ --start_index <span class="m">0</span> --last_index <span class="m">9</span>
</pre></div> </pre></div>
运行该预测命令, 可看到如下输出 运行该预测命令, 可看到如下输出
<div class="codehilite"><pre><span></span><span class="p">(</span><span class="s1">&#39;start index: &#39;</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="s1">&#39; last_index:&#39;</span><span class="p">,</span> <span class="mi">9</span><span class="p">)</span> <div class="highlight"><pre><span></span>(&#39;start index: &#39;, 0, &#39; last_index:&#39;, 9)
<span class="p">(</span><span class="s1">&#39;vocab_size:&#39;</span><span class="p">,</span> <span class="mi">63642</span><span class="p">)</span> (&#39;vocab_size:&#39;, 63642)
<span class="n">step</span><span class="p">:</span><span class="mi">1</span> <span class="mi">249</span> step:1 249
<span class="n">epoch</span><span class="p">:</span><span class="mi">0</span> <span class="n">acc</span><span class="p">:</span><span class="mi">0</span><span class="p">.</span><span class="mi">014</span> epoch:0 acc:0.014
<span class="n">step</span><span class="p">:</span><span class="mi">1</span> <span class="mi">590</span> step:1 590
<span class="n">epoch</span><span class="p">:</span><span class="mi">1</span> <span class="n">acc</span><span class="p">:</span><span class="mi">0</span><span class="p">.</span><span class="mi">033</span> epoch:1 acc:0.033
<span class="n">step</span><span class="p">:</span><span class="mi">1</span> <span class="mi">982</span> step:1 982
<span class="n">epoch</span><span class="p">:</span><span class="mi">2</span> <span class="n">acc</span><span class="p">:</span><span class="mi">0</span><span class="p">.</span><span class="mi">055</span> epoch:2 acc:0.055
<span class="n">step</span><span class="p">:</span><span class="mi">1</span> <span class="mi">1338</span> step:1 1338
<span class="n">epoch</span><span class="p">:</span><span class="mi">3</span> <span class="n">acc</span><span class="p">:</span><span class="mi">0</span><span class="p">.</span><span class="mi">075</span> epoch:3 acc:0.075
<span class="n">step</span><span class="p">:</span><span class="mi">1</span> <span class="mi">1653</span> step:1 1653
<span class="n">epoch</span><span class="p">:</span><span class="mi">4</span> <span class="n">acc</span><span class="p">:</span><span class="mi">0</span><span class="p">.</span><span class="mi">093</span> epoch:4 acc:0.093
<span class="n">step</span><span class="p">:</span><span class="mi">1</span> <span class="mi">1914</span> step:1 1914
<span class="n">epoch</span><span class="p">:</span><span class="mi">5</span> <span class="n">acc</span><span class="p">:</span><span class="mi">0</span><span class="p">.</span><span class="mi">107</span> epoch:5 acc:0.107
<span class="n">step</span><span class="p">:</span><span class="mi">1</span> <span class="mi">2204</span> step:1 2204
<span class="n">epoch</span><span class="p">:</span><span class="mi">6</span> <span class="n">acc</span><span class="p">:</span><span class="mi">0</span><span class="p">.</span><span class="mi">124</span> epoch:6 acc:0.124
<span class="n">step</span><span class="p">:</span><span class="mi">1</span> <span class="mi">2416</span> step:1 2416
<span class="n">epoch</span><span class="p">:</span><span class="mi">7</span> <span class="n">acc</span><span class="p">:</span><span class="mi">0</span><span class="p">.</span><span class="mi">136</span> epoch:7 acc:0.136
<span class="n">step</span><span class="p">:</span><span class="mi">1</span> <span class="mi">2606</span> step:1 2606
<span class="n">epoch</span><span class="p">:</span><span class="mi">8</span> <span class="n">acc</span><span class="p">:</span><span class="mi">0</span><span class="p">.</span><span class="mi">146</span> epoch:8 acc:0.146
<span class="n">step</span><span class="p">:</span><span class="mi">1</span> <span class="mi">2722</span> step:1 2722
<span class="n">epoch</span><span class="p">:</span><span class="mi">9</span> <span class="n">acc</span><span class="p">:</span><span class="mi">0</span><span class="p">.</span><span class="mi">153</span> epoch:9 acc:0.153
</pre></div></p> </pre></div></p>
<h2 id="skip-gramword2vector_1">量化<code>基于skip-gram的word2vector模型</code><a class="headerlink" href="#skip-gramword2vector_1" title="Permanent link">#</a></h2> <h2 id="skip-gramword2vector_1">量化<code>基于skip-gram的word2vector模型</code><a class="headerlink" href="#skip-gramword2vector_1" title="Permanent link">#</a></h2>
<p>量化配置为: <p>量化配置为:
<div class="codehilite"><pre><span></span><span class="n">config</span> <span class="o">=</span> <span class="err">{</span> <div class="highlight"><pre><span></span>config = {
<span class="s1">&#39;params_name&#39;</span><span class="p">:</span> <span class="s1">&#39;emb&#39;</span><span class="p">,</span> &#39;params_name&#39;: &#39;emb&#39;,
<span class="s1">&#39;quantize_type&#39;</span><span class="p">:</span> <span class="s1">&#39;abs_max&#39;</span> &#39;quantize_type&#39;: &#39;abs_max&#39;
<span class="err">}</span> }
</pre></div></p> </pre></div></p>
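配置中的 <code>abs_max</code> 指按 Embedding 参数绝对值的最大值做对称量化。下面用纯 Python 演示 8-bit abs_max 量化与反量化的原理(仅为原理示意,与 `paddleslim.quant.quant_embedding` 的内部实现无关):

```python
def quant_abs_max(weights, quantize_bits=8):
    """abs_max 对称量化:以最大绝对值为 scale,把 float 权重映射到 [-127, 127]。"""
    scale = max(abs(w) for w in weights)
    qmax = 2 ** (quantize_bits - 1) - 1      # 8-bit 时为 127
    return [int(round(w / scale * qmax)) for w in weights], scale

def dequant(quantized, scale, quantize_bits=8):
    """反量化:用保存的 scale 恢复近似的 float 权重。"""
    qmax = 2 ** (quantize_bits - 1) - 1
    return [q * scale / qmax for q in quantized]
```

量化后只需存储 int8 权重和一个 scale;对称量化下每个权重的量化误差不超过约 scale/254,这也是 Embedding 量化几乎不损失精度的原因之一。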
<p>运行命令为:</p> <p>运行命令为:</p>
<div class="codehilite"><pre><span></span>python infer.py --infer_epoch --test_dir data/test_mid_dir --dict_path data/test_build_dict_word_to_id_ --batch_size <span class="m">20000</span> --model_dir v1_cpu5_b100_lr1dir/ --start_index <span class="m">0</span> --last_index <span class="m">9</span> --emb_quant True <div class="highlight"><pre><span></span>python infer.py --infer_epoch --test_dir data/test_mid_dir --dict_path data/test_build_dict_word_to_id_ --batch_size <span class="m">20000</span> --model_dir v1_cpu5_b100_lr1dir/ --start_index <span class="m">0</span> --last_index <span class="m">9</span> --emb_quant True
</pre></div> </pre></div>
<p>运行输出为:</p> <p>运行输出为:</p>
<div class="codehilite"><pre><span></span><span class="p">(</span><span class="s1">&#39;start index: &#39;</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="s1">&#39; last_index:&#39;</span><span class="p">,</span> <span class="mi">9</span><span class="p">)</span> <div class="highlight"><pre><span></span>(&#39;start index: &#39;, 0, &#39; last_index:&#39;, 9)
<span class="p">(</span><span class="s1">&#39;vocab_size:&#39;</span><span class="p">,</span> <span class="mi">63642</span><span class="p">)</span> (&#39;vocab_size:&#39;, 63642)
<span class="n">quant_embedding</span> <span class="n">config</span> <span class="err">{</span><span class="s1">&#39;quantize_type&#39;</span><span class="p">:</span> <span class="s1">&#39;abs_max&#39;</span><span class="p">,</span> <span class="s1">&#39;params_name&#39;</span><span class="p">:</span> <span class="s1">&#39;emb&#39;</span><span class="p">,</span> <span class="s1">&#39;quantize_bits&#39;</span><span class="p">:</span> <span class="mi">8</span><span class="p">,</span> <span class="s1">&#39;dtype&#39;</span><span class="p">:</span> <span class="s1">&#39;int8&#39;</span><span class="err">}</span> quant_embedding config {&#39;quantize_type&#39;: &#39;abs_max&#39;, &#39;params_name&#39;: &#39;emb&#39;, &#39;quantize_bits&#39;: 8, &#39;dtype&#39;: &#39;int8&#39;}
<span class="n">step</span><span class="p">:</span><span class="mi">1</span> <span class="mi">253</span> step:1 253
<span class="n">epoch</span><span class="p">:</span><span class="mi">0</span> <span class="n">acc</span><span class="p">:</span><span class="mi">0</span><span class="p">.</span><span class="mi">014</span> epoch:0 acc:0.014
<span class="n">quant_embedding</span> <span class="n">config</span> <span class="err">{</span><span class="s1">&#39;quantize_type&#39;</span><span class="p">:</span> <span class="s1">&#39;abs_max&#39;</span><span class="p">,</span> <span class="s1">&#39;params_name&#39;</span><span class="p">:</span> <span class="s1">&#39;emb&#39;</span><span class="p">,</span> <span class="s1">&#39;quantize_bits&#39;</span><span class="p">:</span> <span class="mi">8</span><span class="p">,</span> <span class="s1">&#39;dtype&#39;</span><span class="p">:</span> <span class="s1">&#39;int8&#39;</span><span class="err">}</span> quant_embedding config {&#39;quantize_type&#39;: &#39;abs_max&#39;, &#39;params_name&#39;: &#39;emb&#39;, &#39;quantize_bits&#39;: 8, &#39;dtype&#39;: &#39;int8&#39;}
<span class="n">step</span><span class="p">:</span><span class="mi">1</span> <span class="mi">586</span> step:1 586
<span class="n">epoch</span><span class="p">:</span><span class="mi">1</span> <span class="n">acc</span><span class="p">:</span><span class="mi">0</span><span class="p">.</span><span class="mi">033</span> epoch:1 acc:0.033
<span class="n">quant_embedding</span> <span class="n">config</span> <span class="err">{</span><span class="s1">&#39;quantize_type&#39;</span><span class="p">:</span> <span class="s1">&#39;abs_max&#39;</span><span class="p">,</span> <span class="s1">&#39;params_name&#39;</span><span class="p">:</span> <span class="s1">&#39;emb&#39;</span><span class="p">,</span> <span class="s1">&#39;quantize_bits&#39;</span><span class="p">:</span> <span class="mi">8</span><span class="p">,</span> <span class="s1">&#39;dtype&#39;</span><span class="p">:</span> <span class="s1">&#39;int8&#39;</span><span class="err">}</span> quant_embedding config {&#39;quantize_type&#39;: &#39;abs_max&#39;, &#39;params_name&#39;: &#39;emb&#39;, &#39;quantize_bits&#39;: 8, &#39;dtype&#39;: &#39;int8&#39;}
<span class="n">step</span><span class="p">:</span><span class="mi">1</span> <span class="mi">970</span> step:1 970
<span class="n">epoch</span><span class="p">:</span><span class="mi">2</span> <span class="n">acc</span><span class="p">:</span><span class="mi">0</span><span class="p">.</span><span class="mi">054</span> epoch:2 acc:0.054
<span class="n">quant_embedding</span> <span class="n">config</span> <span class="err">{</span><span class="s1">&#39;quantize_type&#39;</span><span class="p">:</span> <span class="s1">&#39;abs_max&#39;</span><span class="p">,</span> <span class="s1">&#39;params_name&#39;</span><span class="p">:</span> <span class="s1">&#39;emb&#39;</span><span class="p">,</span> <span class="s1">&#39;quantize_bits&#39;</span><span class="p">:</span> <span class="mi">8</span><span class="p">,</span> <span class="s1">&#39;dtype&#39;</span><span class="p">:</span> <span class="s1">&#39;int8&#39;</span><span class="err">}</span> quant_embedding config {&#39;quantize_type&#39;: &#39;abs_max&#39;, &#39;params_name&#39;: &#39;emb&#39;, &#39;quantize_bits&#39;: 8, &#39;dtype&#39;: &#39;int8&#39;}
<span class="n">step</span><span class="p">:</span><span class="mi">1</span> <span class="mi">1364</span> step:1 1364
<span class="n">epoch</span><span class="p">:</span><span class="mi">3</span> <span class="n">acc</span><span class="p">:</span><span class="mi">0</span><span class="p">.</span><span class="mi">077</span> epoch:3 acc:0.077
<span class="n">quant_embedding</span> <span class="n">config</span> <span class="err">{</span><span class="s1">&#39;quantize_type&#39;</span><span class="p">:</span> <span class="s1">&#39;abs_max&#39;</span><span class="p">,</span> <span class="s1">&#39;params_name&#39;</span><span class="p">:</span> <span class="s1">&#39;emb&#39;</span><span class="p">,</span> <span class="s1">&#39;quantize_bits&#39;</span><span class="p">:</span> <span class="mi">8</span><span class="p">,</span> <span class="s1">&#39;dtype&#39;</span><span class="p">:</span> <span class="s1">&#39;int8&#39;</span><span class="err">}</span> quant_embedding config {&#39;quantize_type&#39;: &#39;abs_max&#39;, &#39;params_name&#39;: &#39;emb&#39;, &#39;quantize_bits&#39;: 8, &#39;dtype&#39;: &#39;int8&#39;}
<span class="n">step</span><span class="p">:</span><span class="mi">1</span> <span class="mi">1642</span> step:1 1642
<span class="n">epoch</span><span class="p">:</span><span class="mi">4</span> <span class="n">acc</span><span class="p">:</span><span class="mi">0</span><span class="p">.</span><span class="mi">092</span> epoch:4 acc:0.092
<span class="n">quant_embedding</span> <span class="n">config</span> <span class="err">{</span><span class="s1">&#39;quantize_type&#39;</span><span class="p">:</span> <span class="s1">&#39;abs_max&#39;</span><span class="p">,</span> <span class="s1">&#39;params_name&#39;</span><span class="p">:</span> <span class="s1">&#39;emb&#39;</span><span class="p">,</span> <span class="s1">&#39;quantize_bits&#39;</span><span class="p">:</span> <span class="mi">8</span><span class="p">,</span> <span class="s1">&#39;dtype&#39;</span><span class="p">:</span> <span class="s1">&#39;int8&#39;</span><span class="err">}</span> quant_embedding config {&#39;quantize_type&#39;: &#39;abs_max&#39;, &#39;params_name&#39;: &#39;emb&#39;, &#39;quantize_bits&#39;: 8, &#39;dtype&#39;: &#39;int8&#39;}
<span class="n">step</span><span class="p">:</span><span class="mi">1</span> <span class="mi">1936</span> step:1 1936
<span class="n">epoch</span><span class="p">:</span><span class="mi">5</span> <span class="n">acc</span><span class="p">:</span><span class="mi">0</span><span class="p">.</span><span class="mi">109</span> epoch:5 acc:0.109
<span class="n">quant_embedding</span> <span class="n">config</span> <span class="err">{</span><span class="s1">&#39;quantize_type&#39;</span><span class="p">:</span> <span class="s1">&#39;abs_max&#39;</span><span class="p">,</span> <span class="s1">&#39;params_name&#39;</span><span class="p">:</span> <span class="s1">&#39;emb&#39;</span><span class="p">,</span> <span class="s1">&#39;quantize_bits&#39;</span><span class="p">:</span> <span class="mi">8</span><span class="p">,</span> <span class="s1">&#39;dtype&#39;</span><span class="p">:</span> <span class="s1">&#39;int8&#39;</span><span class="err">}</span> quant_embedding config {&#39;quantize_type&#39;: &#39;abs_max&#39;, &#39;params_name&#39;: &#39;emb&#39;, &#39;quantize_bits&#39;: 8, &#39;dtype&#39;: &#39;int8&#39;}
<span class="n">step</span><span class="p">:</span><span class="mi">1</span> <span class="mi">2216</span> step:1 2216
<span class="n">epoch</span><span class="p">:</span><span class="mi">6</span> <span class="n">acc</span><span class="p">:</span><span class="mi">0</span><span class="p">.</span><span class="mi">124</span> epoch:6 acc:0.124
<span class="n">quant_embedding</span> <span class="n">config</span> <span class="err">{</span><span class="s1">&#39;quantize_type&#39;</span><span class="p">:</span> <span class="s1">&#39;abs_max&#39;</span><span class="p">,</span> <span class="s1">&#39;params_name&#39;</span><span class="p">:</span> <span class="s1">&#39;emb&#39;</span><span class="p">,</span> <span class="s1">&#39;quantize_bits&#39;</span><span class="p">:</span> <span class="mi">8</span><span class="p">,</span> <span class="s1">&#39;dtype&#39;</span><span class="p">:</span> <span class="s1">&#39;int8&#39;</span><span class="err">}</span> quant_embedding config {&#39;quantize_type&#39;: &#39;abs_max&#39;, &#39;params_name&#39;: &#39;emb&#39;, &#39;quantize_bits&#39;: 8, &#39;dtype&#39;: &#39;int8&#39;}
<span class="n">step</span><span class="p">:</span><span class="mi">1</span> <span class="mi">2419</span> step:1 2419
<span class="n">epoch</span><span class="p">:</span><span class="mi">7</span> <span class="n">acc</span><span class="p">:</span><span class="mi">0</span><span class="p">.</span><span class="mi">136</span> epoch:7 acc:0.136
<span class="n">quant_embedding</span> <span class="n">config</span> <span class="err">{</span><span class="s1">&#39;quantize_type&#39;</span><span class="p">:</span> <span class="s1">&#39;abs_max&#39;</span><span class="p">,</span> <span class="s1">&#39;params_name&#39;</span><span class="p">:</span> <span class="s1">&#39;emb&#39;</span><span class="p">,</span> <span class="s1">&#39;quantize_bits&#39;</span><span class="p">:</span> <span class="mi">8</span><span class="p">,</span> <span class="s1">&#39;dtype&#39;</span><span class="p">:</span> <span class="s1">&#39;int8&#39;</span><span class="err">}</span> quant_embedding config {&#39;quantize_type&#39;: &#39;abs_max&#39;, &#39;params_name&#39;: &#39;emb&#39;, &#39;quantize_bits&#39;: 8, &#39;dtype&#39;: &#39;int8&#39;}
<span class="n">step</span><span class="p">:</span><span class="mi">1</span> <span class="mi">2603</span> step:1 2603
<span class="n">epoch</span><span class="p">:</span><span class="mi">8</span> <span class="n">acc</span><span class="p">:</span><span class="mi">0</span><span class="p">.</span><span class="mi">146</span> epoch:8 acc:0.146
<span class="n">quant_embedding</span> <span class="n">config</span> <span class="err">{</span><span class="s1">&#39;quantize_type&#39;</span><span class="p">:</span> <span class="s1">&#39;abs_max&#39;</span><span class="p">,</span> <span class="s1">&#39;params_name&#39;</span><span class="p">:</span> <span class="s1">&#39;emb&#39;</span><span class="p">,</span> <span class="s1">&#39;quantize_bits&#39;</span><span class="p">:</span> <span class="mi">8</span><span class="p">,</span> <span class="s1">&#39;dtype&#39;</span><span class="p">:</span> <span class="s1">&#39;int8&#39;</span><span class="err">}</span> quant_embedding config {&#39;quantize_type&#39;: &#39;abs_max&#39;, &#39;params_name&#39;: &#39;emb&#39;, &#39;quantize_bits&#39;: 8, &#39;dtype&#39;: &#39;int8&#39;}
<span class="n">step</span><span class="p">:</span><span class="mi">1</span> <span class="mi">2719</span> step:1 2719
<span class="n">epoch</span><span class="p">:</span><span class="mi">9</span> <span class="n">acc</span><span class="p">:</span><span class="mi">0</span><span class="p">.</span><span class="mi">153</span> epoch:9 acc:0.153
</pre></div> </pre></div>
<p>量化后的模型保存在<code>./output_quant</code>中,可看到量化后的参数<code>'emb.int8'</code>的大小为3.9M, 在<code>./v1_cpu5_b100_lr1dir</code>中可看到量化前的参数<code>'emb'</code>的大小为16M。</p> <p>量化后的模型保存在<code>./output_quant</code>中,可看到量化后的参数<code>'emb.int8'</code>的大小为3.9M, 在<code>./v1_cpu5_b100_lr1dir</code>中可看到量化前的参数<code>'emb'</code>的大小为16M。</p>
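这个压缩比可以粗略验证:float32 每个参数占 4 字节,int8 占 1 字节,因此 Embedding 参数约缩小 4 倍(额外只需保存少量 scale)。以本例词表大小 63642,并假设 embedding 维度为 64(该维度是根据文件大小反推的假设值,实际以训练配置为准)估算:

```python
vocab_size = 63642
emb_dim = 64                       # 假设的 embedding 维度,仅用于示意
float32_bytes = vocab_size * emb_dim * 4
int8_bytes = vocab_size * emb_dim * 1

MiB = 1024.0 * 1024.0
print('float32: %.1f MiB' % (float32_bytes / MiB))   # 15.5 MiB,ls 下约显示为 16M
print('int8:    %.1f MiB' % (int8_bytes / MiB))      # 3.9 MiB,与上文观察到的 3.9M 一致
```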
......
...@@ -168,7 +168,7 @@
<li>离线量化</li> <li>离线量化</li>
<li class="wy-breadcrumbs-aside"> <li class="wy-breadcrumbs-aside">
<a href="https://github.com/PaddlePaddle/PaddleSlim/blob/develop/docs/docs/tutorials/quant_post_demo.md" <a href="https://github.com/PaddlePaddle/PaddleSlim/edit/master/docs/tutorials/quant_post_demo.md"
class="icon icon-github"> Edit on GitHub</a> class="icon icon-github"> Edit on GitHub</a>
</li> </li>
...@@ -181,7 +181,7 @@
<h1 id="_1">离线量化示例<a class="headerlink" href="#_1" title="Permanent link">#</a></h1> <h1 id="_1">离线量化示例<a class="headerlink" href="#_1" title="Permanent link">#</a></h1>
<p>本示例介绍如何使用离线量化接口<code>paddleslim.quant.quant_post</code>来对训练好的分类模型进行离线量化, 该接口无需对模型进行训练就可得到量化模型,减少模型的存储空间和显存占用。</p> <p>本示例介绍如何使用离线量化接口<code>paddleslim.quant.quant_post</code>来对训练好的分类模型进行离线量化, 该接口无需对模型进行训练就可得到量化模型,减少模型的存储空间和显存占用。</p>
<h2 id="_2">接口介绍<a class="headerlink" href="#_2" title="Permanent link">#</a></h2> <h2 id="_2">接口介绍<a class="headerlink" href="#_2" title="Permanent link">#</a></h2>
<p>请参考 <a href="https://paddlepaddle.github.io/PaddleSlim/api/quantization_api/">量化API文档</a></p> <p>请参考 <a href='../../../paddleslim/quant/quantization_api_doc.md'>量化API文档</a></p>
<h2 id="_3">分类模型的离线量化流程<a class="headerlink" href="#_3" title="Permanent link">#</a></h2> <h2 id="_3">分类模型的离线量化流程<a class="headerlink" href="#_3" title="Permanent link">#</a></h2>
<h3 id="_4">准备数据<a class="headerlink" href="#_4" title="Permanent link">#</a></h3> <h3 id="_4">准备数据<a class="headerlink" href="#_4" title="Permanent link">#</a></h3>
<p>在当前文件夹下创建<code>data</code>文件夹,将<code>imagenet</code>数据集解压在<code>data</code>文件夹下,解压后<code>data</code>文件夹下应包含以下文件: <p>在当前文件夹下创建<code>data</code>文件夹,将<code>imagenet</code>数据集解压在<code>data</code>文件夹下,解压后<code>data</code>文件夹下应包含以下文件:
...@@ -195,12 +195,12 @@
<p>Create a <code>'pretrain'</code> folder in the current directory and extract the <code>mobilenetv1</code> model into it. The extracted path should be <code>pretrain/MobileNetV1_pretrained</code>.</p>
<h3 id="_6">Export the model<a class="headerlink" href="#_6" title="Permanent link">#</a></h3>
<p>Run the following command to convert the model into a format the post-training quantization interface can consume:
<div class="highlight"><pre><span></span>python export_model.py --model &quot;MobileNet&quot; --pretrained_model ./pretrain/MobileNetV1_pretrained --data imagenet
</pre></div>
The converted model is stored in the <code>inference_model/MobileNet/</code> folder, which contains two files: <code>'model'</code> and <code>'weights'</code>.</p>
<h3 id="_7">Post-training quantization<a class="headerlink" href="#_7" title="Permanent link">#</a></h3>
<p>Next, run post-training quantization on the exported model. The script is <a href="./quant_post.py">quant_post.py</a>, which calls the <code>paddleslim.quant.quant_post</code> interface. Run:
<div class="highlight"><pre><span></span>python quant_post.py --model_path ./inference_model/MobileNet --save_path ./quant_model_train/MobileNet --model_filename model --params_filename weights
</pre></div></p>
<ul>
<li><code>model_path</code>: the folder containing the model to be quantized</li>
<p>The quantization algorithm used is <code>'KL'</code>, and 160 images from the training set are used to calibrate the quantization parameters.</p>
</blockquote>
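<p>The <code>'KL'</code> algorithm picks, for each activation tensor, a clipping threshold whose quantized distribution stays closest (in KL divergence) to the distribution observed on the calibration images. Below is a simplified, self-contained sketch of the idea; the actual implementation uses TensorRT-style histogram calibration and differs in detail, and all names here are illustrative.</p>

```python
import numpy as np

def kl_div(p, q, eps=1e-10):
    # KL(p || q) between two histograms, normalized to distributions.
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def kl_threshold(acts, bins=2048, levels=256):
    # Try clipping thresholds and keep the one minimizing KL(original || quantized).
    hist, edges = np.histogram(np.abs(acts), bins=bins)
    best_t, best_kl = edges[-1], float("inf")
    for i in range(levels, bins + 1, 16):      # candidate clip points
        t = edges[i]
        clipped = np.clip(np.abs(acts), 0.0, t)
        scale = t / (levels - 1)               # quantize to `levels` steps
        recon = np.round(clipped / scale) * scale
        q_hist, _ = np.histogram(recon, bins=edges)
        kl = kl_div(hist.astype(np.float64), q_hist.astype(np.float64))
        if kl < best_kl:
            best_kl, best_t = kl, t
    return best_t

rng = np.random.default_rng(0)
acts = rng.laplace(0.0, 1.0, 100_000)          # long-tailed fake activations
t = kl_threshold(acts)
print(t)
```

<p>For long-tailed activation distributions, clipping a few outliers usually gives the remaining values a finer quantization grid, which is why the chosen threshold tends to sit below the absolute maximum.</p>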
<h3 id="_8">Evaluate accuracy<a class="headerlink" href="#_8" title="Permanent link">#</a></h3>
<p>Use the <a href="./eval.py">eval.py</a> script to evaluate the model before and after quantization and compare the classification accuracy.</p>
<p>First evaluate the accuracy of the model before quantization:
<div class="highlight"><pre><span></span>python eval.py --model_path ./inference_model/MobileNet --model_name model --params_name weights
</pre></div>
The accuracy output is:
<div class="highlight"><pre><span></span>top1_acc/top5_acc= [0.70913923 0.89548034]
</pre></div></p>
<p>Evaluate the post-training quantized model with:</p>
<div class="highlight"><pre><span></span>python eval.py --model_path ./quant_model_train/MobileNet
</pre></div>
<p>The accuracy output is:
<div class="highlight"><pre><span></span>top1_acc/top5_acc= [0.70141864 0.89086477]
</pre></div>
The comparison shows that post-training quantization of the <code>mobilenet</code> classification model on <code>imagenet</code> costs <code>0.77%</code> in <code>top1</code> accuracy and <code>0.46%</code> in <code>top5</code> accuracy.</p>
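<p>These loss figures follow directly from the two outputs above:</p>

```python
fp32_top1, fp32_top5 = 0.70913923, 0.89548034  # before quantization
int8_top1, int8_top5 = 0.70141864, 0.89086477  # after quantization

top1_loss = fp32_top1 - int8_top1    # absolute drop in top1 accuracy
top5_loss = fp32_top5 - int8_top5
print(f"top1 loss: {top1_loss:.2%}, top5 loss: {top5_loss:.2%}")
# prints: top1 loss: 0.77%, top5 loss: 0.46%
```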
<li>Sensitivity demo</li>
</ul>
<h2 id="2">2. Run the example<a class="headerlink" href="#2" title="Permanent link">#</a></h2>
<p>Run the example from the <code>PaddleSlim/demo/sensitive</code> directory:</p>
<div class="highlight"><pre><span></span>export CUDA_VISIBLE_DEVICES=0
python train.py --model &quot;MobileNetV1&quot;
</pre></div>
<p>Run <code>python train.py --help</code> to see more options.</p>
<p>Call the <code>paddleslim.prune.sensitivity</code> interface to compute sensitivities. Sensitivity data is appended to the file specified by the <code>sensitivities_file</code> option; to recompute from scratch, delete that file first.</p>
<p>If model evaluation is slow, the sensitivity computation can be accelerated with multiple processes. For example, set <code>pruned_ratios=[0.1, 0.2, 0.3, 0.4]</code> in process 1 and store the results in <code>sensitivities_0.data</code>, then set <code>pruned_ratios=[0.5, 0.6, 0.7]</code> in process 2 and store the results in <code>sensitivities_1.data</code>. Each process then computes sensitivities only for its assigned pruning ratios. The processes can run on one machine with multiple GPUs, or across multiple machines.</p>
<p>The code is as follows:</p>
<div class="highlight"><pre><span></span># process 1
sensitivity(
    val_program,
    place,
    params,
    test,
    sensitivities_file=&quot;sensitivities_0.data&quot;,
    pruned_ratios=[0.1, 0.2, 0.3, 0.4])
</pre></div>
<div class="highlight"><pre><span></span># process 2
sensitivity(
    val_program,
    place,
    params,
    test,
    sensitivities_file=&quot;sensitivities_1.data&quot;,
    pruned_ratios=[0.5, 0.6, 0.7])
</pre></div>
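<p>Conceptually, the sensitivity loop measures, per layer and per pruning ratio, how much accuracy is lost when that layer alone is pruned. The toy sketch below illustrates this idea with made-up names (<code>compute_sensitivity</code> and <code>fake_eval</code> are illustrative, not PaddleSlim APIs):</p>

```python
def compute_sensitivity(layers, pruned_ratios, evaluate):
    # For each layer and ratio, record the relative accuracy loss
    # when only that one layer is pruned at that ratio.
    baseline = evaluate({})                    # accuracy with no pruning
    sens = {}
    for layer in layers:
        sens[layer] = {}
        for ratio in pruned_ratios:
            acc = evaluate({layer: ratio})
            sens[layer][ratio] = (baseline - acc) / baseline
    return sens

# Made-up evaluator: accuracy degrades linearly, conv1 being more fragile.
FRAGILITY = {"conv1": 0.2, "conv2": 0.05}

def fake_eval(pruning):
    return 0.9 - sum(FRAGILITY[l] * r for l, r in pruning.items())

sens = compute_sensitivity(["conv1", "conv2"], [0.1, 0.2], fake_eval)
print(sens)
```

<p>Because each (layer, ratio) pair is evaluated independently, the work splits cleanly across processes by partitioning <code>pruned_ratios</code>, which is exactly what the two-process setup above does.</p>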
<h3 id="32">3.2 Merge sensitivities<a class="headerlink" href="#32" title="Permanent link">#</a></h3>
<p>If multiple sensitivity files were generated with the multi-process approach above, merge them with <code>paddleslim.prune.merge_sensitive</code>; the merged sensitivities are stored in a <code>dict</code>. The code is as follows:</p>
<div class="highlight"><pre><span></span>sens = merge_sensitive([&quot;./sensitivities_0.data&quot;, &quot;./sensitivities_1.data&quot;])
</pre></div>
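<p>The merge is essentially a per-layer union of <code>{ratio: loss}</code> maps. A minimal sketch in plain Python (<code>merge_sensitivities</code> is an illustrative stand-in; the real interface also handles reading the files from disk):</p>

```python
def merge_sensitivities(parts):
    # Union the per-layer {ratio: loss} maps produced by several processes.
    merged = {}
    for part in parts:
        for layer, ratio_loss in part.items():
            merged.setdefault(layer, {}).update(ratio_loss)
    return merged

part0 = {"conv1": {0.1: 0.01, 0.2: 0.03}}             # e.g. from process 1
part1 = {"conv1": {0.5: 0.12}, "conv2": {0.5: 0.02}}  # e.g. from process 2
merged = merge_sensitivities([part0, part1])
print(merged)
```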
<h3 id="33">3.3 Compute pruning ratios<a class="headerlink" href="#33" title="Permanent link">#</a></h3>
<p>Call the <code>paddleslim.prune.get_ratios_by_loss</code> interface to compute a set of pruning ratios.</p>
<div class="highlight"><pre><span></span>ratios = get_ratios_by_loss(sens, 0.01)
</pre></div>
<p>Here <code>0.01</code> is a threshold: for each convolution layer, the chosen pruning ratio is the largest one that keeps the accuracy loss below <code>0.01</code>.</p>
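<p>The selection rule can be sketched as follows. This simplified version only picks among the measured ratios, whereas the real interface may interpolate between them; the function name is illustrative:</p>

```python
def ratios_by_loss(sens, threshold):
    # For each layer, pick the largest measured ratio whose accuracy loss
    # stays below the threshold; skip layers with no admissible ratio.
    ratios = {}
    for layer, ratio_loss in sens.items():
        admissible = [r for r, loss in ratio_loss.items() if loss < threshold]
        if admissible:
            ratios[layer] = max(admissible)
    return ratios

sens = {"conv1": {0.1: 0.004, 0.2: 0.008, 0.3: 0.02},
        "conv2": {0.1: 0.02}}
ratios = ratios_by_loss(sens, 0.01)
print(ratios)  # prints: {'conv1': 0.2}
```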