Commit 9a976f4c authored by Travis CI

Deploy to GitHub Pages: 75d0c790

Parent 5d0d8dcc
@@ -179,40 +179,104 @@ init_attr={

`optimize_op_attrs` is not in the `VarDesc` message, but kept in the Python instance, as it will be used in the Python space when creating the optimize operator's `OpDesc`, and will be in the `OpDesc` message.
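As a minimal illustration of this split, the sketch below keeps `optimize_op_attrs` on a Python-side `Parameter` object and copies it into the optimizer's op description when that operator is created. The `Parameter` class and the dictionary-based op description are assumptions for illustration, not Paddle's actual API.

```python
# Hypothetical sketch, not Paddle's actual API: optimize_op_attrs lives only
# on the Python-side Parameter object and never enters the VarDesc message.
class Parameter(object):
    def __init__(self, name, optimize_op_attrs=None):
        self.name = name
        self.optimize_op_attrs = optimize_op_attrs or {}

def create_optimize_op_desc(param, grad_name):
    # The attributes do end up in the optimizer's OpDesc, as described above.
    return {"type": "sgd",
            "inputs": {"Param": param.name, "Grad": grad_name},
            "attrs": dict(param.optimize_op_attrs)}

w = Parameter("fc.w", optimize_op_attrs={"learning_rate": 0.1})
op_desc = create_optimize_op_desc(w, "fc.w@GRAD")  # attrs carry learning_rate
```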
## Layer Function

A layer is a Python function that creates some operators and variables. Layers simplify the work of application programmers.

Layer functions take `Variable`s and configuration parameters as their input and return the output variable(s).

For example, `FullyConnected` takes one or more variables as its input. The input could be input data or another layer's output. There are many configuration options for a `FullyConnected` layer, such as layer size, activation, parameter names, initialization strategies of parameters, and so on. The `FullyConnected` layer will return an output variable.

### Necessity for reusing code between layer functions

There is a lot of code that can be reused, such as:

* Give default configuration values, e.g., the default initialization strategy for parameters is uniform random with `min = -1.0`, `max = 1.0`, and the default initialization strategy for the bias is to fill it with zeros.
* Append the activation operator.
* Create a temporary variable.
* Create a parameter.
* Generate a unique name.
* Add a bias.
* ...

A mechanism to reuse code between layer functions is necessary. It will be around [150 lines of code](https://github.com/PaddlePaddle/Paddle/pull/4724/files#diff-823b27e07e93914ada859232ae23f846R12) if we write a `FullyConnected` layer without any helper functions.
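For instance, the "generate a unique name" item in the list above admits a very small implementation. The sketch below is an assumption for illustration; Paddle's real `unique_name` may differ.

```python
import itertools

# Hypothetical sketch: one counter per prefix, so repeated calls yield
# fc_0, fc_1, fc_2, ... without name collisions.
_name_counters = {}

def unique_name(prefix):
    counter = _name_counters.setdefault(prefix, itertools.count())
    return "%s_%d" % (prefix, next(counter))

assert unique_name("fc") == "fc_0"
assert unique_name("fc") == "fc_1"
```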
### Comparison between global functions and a helper class

The `FullyConnected` layer will be as follows when we provide global functions:

```python
def fc_layer(input, size, param_attr=None, bias_attr=None, act=None, name=None):
    if name is None:
        name = unique_name("fc")
    input = multiple_input(input)
    param_attr = default_param_attr(param_attr)
    param_attr = multiple_param_attr(param_attr, len(input))

    # mul
    mul_results = []
    for ipt, attr in zip(input, param_attr):
        shape = ipt.shape[1:] + [size]
        w = g_program.global_block().create_parameter(shape, ipt.dtype, name, attr)
        tmp = create_tmp_var(name)
        g_program.current_block().append_op("mul", {ipt, w}, {tmp})
        mul_results.append(tmp)

    # add sum
    ...
    # add bias
    ...
    # add activation
    ...
    return out
```
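The global helpers used above (`multiple_input`, `default_param_attr`, `multiple_param_attr`) are not defined in this document; the sketch below shows one plausible reading of them, as an assumption for illustration only.

```python
# Hypothetical sketches of the undefined global helpers used by fc_layer.

def multiple_input(input):
    # Normalize a single variable into a list of variables.
    return list(input) if isinstance(input, (list, tuple)) else [input]

def default_param_attr(param_attr):
    # Apply the default initialization strategy when none is given.
    if param_attr is None:
        param_attr = {"initializer": "uniform_random", "min": -1.0, "max": 1.0}
    return param_attr

def multiple_param_attr(param_attr, length):
    # Replicate one attribute dict so that each input variable gets one.
    if isinstance(param_attr, (list, tuple)):
        return list(param_attr)
    return [param_attr] * length
```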
We can provide many helper functions for layer developers. However, global helper functions have several disadvantages:

1. We need a namespace for these methods, so that layer developers can quickly figure out which methods they can use.
2. Global functions force layer developers to pass the same parameters over and over again.

So we provide a helper class, `LayerHelper`, to share code between layer functions. The `FullyConnected` layer will be as follows.

```python
def fc_layer(input, size, param_attr=None, bias_attr=None, act=None, name=None):
    helper = LayerHelper(**locals())  # pass all parameters to LayerHelper

    mul_results = []
    for ipt, param in helper.iter_multiple_input_and_param():
        w = helper.create_parameter(shape=ipt.shape[1:] + [size], dtype=ipt.dtype)
        tmp = helper.create_tmp_variable()
        helper.append_op('mul', {ipt, w}, {tmp})
        mul_results.append(tmp)

    pre_bias = helper.add_sum(mul_results)
    pre_activation = helper.add_bias(pre_bias)
    return helper.add_activation(pre_activation)
```
This version not only uses fewer lines of code to write `fc_layer` but also makes the code clearer and easier to understand. At the same time, layer developers can figure out which functions they can invoke by typing `helper.` in a Python editor.
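With this design, an application programmer composes layers by chaining calls. The snippet below is an illustrative usage sketch, where `image` is assumed to be a variable produced by an input layer.

```python
# Illustrative usage: stacking two fully connected layers.
hidden = fc_layer(input=image, size=200, act="tanh")
prediction = fc_layer(input=hidden, size=10, act="softmax")
```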
### Implementation of layer helper

We keep all parameters of a layer function as a dictionary in the layer helper, as a private data member. Every method of the layer helper looks this dictionary up when it is invoked. In that way, we can implement one layer helper for all layer functions, even though some layers do not contain some of the operators. For example, `activation` is used by the FullyConnected and convolution layers, but a cross-entropy layer does not use it. The example code of `add_activation` is:

```python
class LayerHelper(object):
    def __init__(self, **kwargs):  # kwargs is short for `keyword arguments`
        self.kwargs = kwargs

    def add_activation(self, input_var):
        act = self.kwargs.get("act", None)  # default value is None
        if act is None:  # do nothing if no act
            return input_var
        tmp = self.create_tmp_var()
        self.append_op(type=act, input=input_var, output=tmp)
        return tmp
```
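Other helper methods can follow the same dictionary-lookup pattern. For example, `add_bias` might mirror `add_activation`, consulting `bias_attr` in `self.kwargs`; the method body below is an assumption for illustration (the bias op name and `create_parameter` signature are not specified by this document) and would live on the `LayerHelper` class above.

```python
    # Hypothetical sketch of another LayerHelper method in the same style:
    # it is a no-op when the layer was configured without a bias.
    def add_bias(self, input_var):
        bias_attr = self.kwargs.get("bias_attr", None)
        if bias_attr is None:  # do nothing if no bias
            return input_var
        b = self.create_parameter(shape=[input_var.shape[-1]], attr=bias_attr)
        tmp = self.create_tmp_var()
        self.append_op(type="elementwise_add", input=[input_var, b], output=tmp)
        return tmp
```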
## Optimizer
......