Commit c3e3a836 authored by Travis CI

Deploy to GitHub Pages: a1cfc325

Parent 296a760e
......@@ -222,8 +222,7 @@
<dl class="function">
<dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">fc</code><span class="sig-paren">(</span><em>input</em>, <em>size</em>, <em>num_flatten_dims=1</em>, <em>param_attr=None</em>, <em>bias_attr=None</em>, <em>act=None</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><blockquote>
<div><p><strong>Fully Connected Layer</strong></p>
<dd><p><strong>Fully Connected Layer</strong></p>
<p>The fully connected layer can take multiple tensors as its inputs. It
creates a weight variable for each input tensor, which represents a
fully connected weight matrix from each input
......@@ -235,11 +234,8 @@ created and added to the output. Finally, if activation is not None,
it will be applied to the output as well.</p>
<p>This process can be formulated as follows:</p>
<div class="math">
\[Out = Act\left({\sum_{i=0}^{N-1}W_iX_i + b}\right)\]</div>
</div></blockquote>
<blockquote>
<div><p>In the above equation:</p>
\[Out = Act({\sum_{i=0}^{N-1}W_iX_i + b})\]</div>
<p>In the above equation:</p>
<ul class="simple">
<li><span class="math">\(N\)</span>: The number of input tensors.</li>
<li><span class="math">\(X_i\)</span>: The input tensor.</li>
......@@ -248,13 +244,15 @@ it will be applied to the output as well.</p>
<li><span class="math">\(Act\)</span>: The activation function.</li>
<li><span class="math">\(Out\)</span>: The output tensor.</li>
</ul>
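<p>As a concrete sketch of the equation above (using NumPy rather than Fluid, with made-up shapes), each input <span class="math">\(X_i\)</span> is multiplied by its own weight matrix <span class="math">\(W_i\)</span>, the products are summed, the bias is added, and the activation is applied:</p>

```python
import numpy as np

# Hypothetical shapes: two input tensors, batch size 4, output size 3.
np.random.seed(0)
X = [np.random.rand(4, 5), np.random.rand(4, 7)]   # N = 2 input tensors
W = [np.random.rand(5, 3), np.random.rand(7, 3)]   # one weight matrix per input
b = np.random.rand(3)                              # shared bias, one per output unit

# Out = Act(sum_i X_i W_i + b), here with tanh as the activation.
out = np.tanh(sum(x @ w for x, w in zip(X, W)) + b)
print(out.shape)  # (4, 3)
```

<p>Note that the bias and activation are applied once to the summed result, not per input.</p>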
<dl class="docutils">
<dt>Args:</dt>
<dd><p class="first">input(Variable|list): The input tensor(s) to the fully connected layer.
size(int): The number of output units in the fully connected layer.
num_flatten_dims(int): The fc layer can accept an input tensor with more</p>
<blockquote>
<div>than two dimensions. If this happens, the
<table class="docutils field-list" frame="void" rules="none">
<col class="field-name" />
<col class="field-body" />
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple">
<li><strong>input</strong> (<em>Variable|list</em>) &#8211; The input tensor(s) to the fully connected layer.</li>
<li><strong>size</strong> (<em>int</em>) &#8211; The number of output units in the fully connected layer.</li>
<li><strong>num_flatten_dims</strong> (<em>int</em>) &#8211; The fc layer can accept an input tensor with more
than two dimensions. If this happens, the
multidimensional tensor will first be flattened
into a 2-dimensional matrix. The parameter
<cite>num_flatten_dims</cite> determines how the input tensor
......@@ -268,37 +266,41 @@ For example, suppose <cite>X</cite> is a 5-dimensional tensor
with a shape [2, 3, 4, 5, 6], and
<cite>x_num_col_dims</cite> = 3. Then, the flattened matrix
will have a shape [2 x 3 x 4, 5 x 6] = [24, 30].
By default, <cite>x_num_col_dims</cite> is set to 1.</div></blockquote>
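<p>The flattening behaviour described above can be checked with NumPy (a sketch only; the shape and <cite>num_flatten_dims</cite> value are taken from the example in the text):</p>

```python
import numpy as np

x = np.zeros((2, 3, 4, 5, 6))   # tensor shape from the example
num_flatten_dims = 3

# The first `num_flatten_dims` dimensions become the rows of the
# flattened matrix; the remaining dimensions become the columns.
rows = int(np.prod(x.shape[:num_flatten_dims]))   # 2 * 3 * 4 = 24
flat = x.reshape(rows, -1)
print(flat.shape)   # (24, 30)
```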
<dl class="docutils">
<dt>param_attr(ParamAttr|list): The parameter attribute for learnable</dt>
<dd>parameters/weights of the fully connected
layer.</dd>
<dt>param_initializer(ParamAttr|list): The initializer used for the</dt>
<dd>weight/parameter. If set None,
XavierInitializer() will be used.</dd>
<dt>bias_attr(ParamAttr|list): The parameter attribute for the bias parameter</dt>
<dd>for this layer. If set None, no bias will be
added to the output units.</dd>
<dt>bias_initializer(ParamAttr|list): The initializer used for the bias.</dt>
<dd>If set None, then ConstantInitializer()
will be used.</dd>
<dt>act(str): Activation to be applied to the output of the fully connected</dt>
<dd>layer.</dd>
</dl>
<p class="last">name(str): Name/alias of the fully connected layer.</p>
</dd>
<dt>Returns:</dt>
<dd>Variable: The output tensor variable.</dd>
<dt>Raises:</dt>
<dd>ValueError: If rank of the input tensor is less than 2.</dd>
<dt>Examples:</dt>
<dd><div class="first last highlight-python"><div class="highlight"><pre><span></span><span class="n">data</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">data</span><span class="p">(</span><span class="n">name</span><span class="o">=</span><span class="s2">&quot;data&quot;</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">[</span><span class="mi">32</span><span class="p">,</span> <span class="mi">32</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="s2">&quot;float32&quot;</span><span class="p">)</span>
By default, <cite>x_num_col_dims</cite> is set to 1.</li>
<li><strong>param_attr</strong> (<em>ParamAttr|list</em>) &#8211; The parameter attribute for learnable
parameters/weights of the fully connected
layer.</li>
<li><strong>param_initializer</strong> (<em>ParamAttr|list</em>) &#8211; The initializer used for the
weight/parameter. If set to None,
XavierInitializer() will be used.</li>
<li><strong>bias_attr</strong> (<em>ParamAttr|list</em>) &#8211; The parameter attribute for the bias parameter
for this layer. If set to None, no bias will be
added to the output units.</li>
<li><strong>bias_initializer</strong> (<em>ParamAttr|list</em>) &#8211; The initializer used for the bias.
If set to None, then ConstantInitializer()
will be used.</li>
<li><strong>act</strong> (<em>str</em>) &#8211; Activation to be applied to the output of the fully connected
layer.</li>
<li><strong>name</strong> (<em>str</em>) &#8211; Name/alias of the fully connected layer.</li>
</ul>
</td>
</tr>
<tr class="field-even field"><th class="field-name">Returns:</th><td class="field-body"><p class="first">The output tensor variable.</p>
</td>
</tr>
<tr class="field-odd field"><th class="field-name">Return type:</th><td class="field-body"><p class="first">Variable</p>
</td>
</tr>
<tr class="field-even field"><th class="field-name">Raises:</th><td class="field-body"><p class="first last"><code class="xref py py-exc docutils literal"><span class="pre">ValueError</span></code> &#8211; If rank of the input tensor is less than 2.</p>
</td>
</tr>
</tbody>
</table>
<p class="rubric">Examples</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">data</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">data</span><span class="p">(</span><span class="n">name</span><span class="o">=</span><span class="s2">&quot;data&quot;</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">[</span><span class="mi">32</span><span class="p">,</span> <span class="mi">32</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="s2">&quot;float32&quot;</span><span class="p">)</span>
<span class="n">fc</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">fc</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">data</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">1000</span><span class="p">,</span> <span class="n">act</span><span class="o">=</span><span class="s2">&quot;tanh&quot;</span><span class="p">)</span>
</pre></div>
</div>
</dd>
</dl>
</div></blockquote>
</dd></dl>
</div>
......
The source diff could not be displayed because it is too large. You can view the blob instead.
......@@ -235,8 +235,7 @@
<dl class="function">
<dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">fc</code><span class="sig-paren">(</span><em>input</em>, <em>size</em>, <em>num_flatten_dims=1</em>, <em>param_attr=None</em>, <em>bias_attr=None</em>, <em>act=None</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><blockquote>
<div><p><strong>Fully Connected Layer</strong></p>
<dd><p><strong>Fully Connected Layer</strong></p>
<p>The fully connected layer can take multiple tensors as its inputs. It
creates a weight variable for each input tensor, which represents a
fully connected weight matrix from each input
......@@ -248,11 +247,8 @@ created and added to the output. Finally, if activation is not None,
it will be applied to the output as well.</p>
<p>This process can be formulated as follows:</p>
<div class="math">
\[Out = Act\left({\sum_{i=0}^{N-1}W_iX_i + b}\right)\]</div>
</div></blockquote>
<blockquote>
<div><p>In the above equation:</p>
\[Out = Act({\sum_{i=0}^{N-1}W_iX_i + b})\]</div>
<p>In the above equation:</p>
<ul class="simple">
<li><span class="math">\(N\)</span>: The number of input tensors.</li>
<li><span class="math">\(X_i\)</span>: The input tensor.</li>
......@@ -261,13 +257,15 @@ it will be applied to the output as well.</p>
<li><span class="math">\(Act\)</span>: The activation function.</li>
<li><span class="math">\(Out\)</span>: The output tensor.</li>
</ul>
<dl class="docutils">
<dt>Args:</dt>
<dd><p class="first">input(Variable|list): The input tensor(s) to the fully connected layer.
size(int): The number of output units in the fully connected layer.
num_flatten_dims(int): The fc layer can accept an input tensor with more</p>
<blockquote>
<div>than two dimensions. If this happens, the
<table class="docutils field-list" frame="void" rules="none">
<col class="field-name" />
<col class="field-body" />
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple">
<li><strong>input</strong> (<em>Variable|list</em>) &#8211; The input tensor(s) to the fully connected layer.</li>
<li><strong>size</strong> (<em>int</em>) &#8211; The number of output units in the fully connected layer.</li>
<li><strong>num_flatten_dims</strong> (<em>int</em>) &#8211; The fc layer can accept an input tensor with more
than two dimensions. If this happens, the
multidimensional tensor will first be flattened
into a 2-dimensional matrix. The parameter
<cite>num_flatten_dims</cite> determines how the input tensor
......@@ -281,37 +279,41 @@ For example, suppose <cite>X</cite> is a 5-dimensional tensor
with a shape [2, 3, 4, 5, 6], and
<cite>x_num_col_dims</cite> = 3. Then, the flattened matrix
will have a shape [2 x 3 x 4, 5 x 6] = [24, 30].
By default, <cite>x_num_col_dims</cite> is set to 1.</div></blockquote>
<dl class="docutils">
<dt>param_attr(ParamAttr|list): The parameter attribute for learnable</dt>
<dd>parameters/weights of the fully connected
layer.</dd>
<dt>param_initializer(ParamAttr|list): The initializer used for the</dt>
<dd>weight/parameter. If set None,
XavierInitializer() will be used.</dd>
<dt>bias_attr(ParamAttr|list): The parameter attribute for the bias parameter</dt>
<dd>for this layer. If set None, no bias will be
added to the output units.</dd>
<dt>bias_initializer(ParamAttr|list): The initializer used for the bias.</dt>
<dd>If set None, then ConstantInitializer()
will be used.</dd>
<dt>act(str): Activation to be applied to the output of the fully connected</dt>
<dd>layer.</dd>
</dl>
<p class="last">name(str): Name/alias of the fully connected layer.</p>
</dd>
<dt>Returns:</dt>
<dd>Variable: The output tensor variable.</dd>
<dt>Raises:</dt>
<dd>ValueError: If rank of the input tensor is less than 2.</dd>
<dt>Examples:</dt>
<dd><div class="first last highlight-python"><div class="highlight"><pre><span></span><span class="n">data</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">data</span><span class="p">(</span><span class="n">name</span><span class="o">=</span><span class="s2">&quot;data&quot;</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">[</span><span class="mi">32</span><span class="p">,</span> <span class="mi">32</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="s2">&quot;float32&quot;</span><span class="p">)</span>
By default, <cite>x_num_col_dims</cite> is set to 1.</li>
<li><strong>param_attr</strong> (<em>ParamAttr|list</em>) &#8211; The parameter attribute for learnable
parameters/weights of the fully connected
layer.</li>
<li><strong>param_initializer</strong> (<em>ParamAttr|list</em>) &#8211; The initializer used for the
weight/parameter. If set to None,
XavierInitializer() will be used.</li>
<li><strong>bias_attr</strong> (<em>ParamAttr|list</em>) &#8211; The parameter attribute for the bias parameter
for this layer. If set to None, no bias will be
added to the output units.</li>
<li><strong>bias_initializer</strong> (<em>ParamAttr|list</em>) &#8211; The initializer used for the bias.
If set to None, then ConstantInitializer()
will be used.</li>
<li><strong>act</strong> (<em>str</em>) &#8211; Activation to be applied to the output of the fully connected
layer.</li>
<li><strong>name</strong> (<em>str</em>) &#8211; Name/alias of the fully connected layer.</li>
</ul>
</td>
</tr>
<tr class="field-even field"><th class="field-name">Returns:</th><td class="field-body"><p class="first">The output tensor variable.</p>
</td>
</tr>
<tr class="field-odd field"><th class="field-name">Return type:</th><td class="field-body"><p class="first">Variable</p>
</td>
</tr>
<tr class="field-even field"><th class="field-name">Raises:</th><td class="field-body"><p class="first last"><code class="xref py py-exc docutils literal"><span class="pre">ValueError</span></code> &#8211; If rank of the input tensor is less than 2.</p>
</td>
</tr>
</tbody>
</table>
<p class="rubric">Examples</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">data</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">data</span><span class="p">(</span><span class="n">name</span><span class="o">=</span><span class="s2">&quot;data&quot;</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">[</span><span class="mi">32</span><span class="p">,</span> <span class="mi">32</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="s2">&quot;float32&quot;</span><span class="p">)</span>
<span class="n">fc</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">fc</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">data</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">1000</span><span class="p">,</span> <span class="n">act</span><span class="o">=</span><span class="s2">&quot;tanh&quot;</span><span class="p">)</span>
</pre></div>
</div>
</dd>
</dl>
</div></blockquote>
</dd></dl>
</div>
......
This diff is collapsed.