Commit c3e3a836 authored by T Travis CI

Deploy to GitHub Pages: a1cfc325

Parent 296a760e
...@@ -222,8 +222,7 @@
<dl class="function">
<dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">fc</code><span class="sig-paren">(</span><em>input</em>, <em>size</em>, <em>num_flatten_dims=1</em>, <em>param_attr=None</em>, <em>bias_attr=None</em>, <em>act=None</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p><strong>Fully Connected Layer</strong></p>
<p>The fully connected layer can take multiple tensors as its inputs. It
creates a variable (one for each input tensor) called weights for each input
tensor, which represents a fully connected weight matrix from each input
...@@ -235,11 +234,8 @@
created and added to the output. Finally, if activation is not None,
it will be applied to the output as well.</p>
<p>This process can be formulated as follows:</p>
<div class="math">
\[Out = Act({\sum_{i=0}^{N-1}W_iX_i + b})\]</div>
<p>In the above equation:</p>
<ul class="simple">
<li><span class="math">\(N\)</span>: Number of the inputs.</li>
<li><span class="math">\(X_i\)</span>: The input tensor.</li>
...@@ -248,13 +244,15 @@
<li><span class="math">\(Act\)</span>: The activation function.</li>
<li><span class="math">\(Out\)</span>: The output tensor.</li>
</ul>
<table class="docutils field-list" frame="void" rules="none">
<col class="field-name" />
<col class="field-body" />
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple">
<li><strong>input</strong> (<em>Variable|list</em>) &#8211; The input tensor(s) to the fully connected layer.</li>
<li><strong>size</strong> (<em>int</em>) &#8211; The number of output units in the fully connected layer.</li>
<li><strong>num_flatten_dims</strong> (<em>int</em>) &#8211; The fc layer can accept an input tensor with more
than two dimensions. If this happens, the
multidimensional tensor will first be flattened
into a 2-dimensional matrix. The parameter
<cite>num_flatten_dims</cite> determines how the input tensor
...@@ -268,37 +266,41 @@
For example, suppose <cite>X</cite> is a 5-dimensional tensor
with a shape [2, 3, 4, 5, 6], and
<cite>num_flatten_dims</cite> = 3. Then, the flattened matrix
will have a shape [2 x 3 x 4, 5 x 6] = [24, 30].
By default, <cite>num_flatten_dims</cite> is set to 1.</li>
<li><strong>param_attr</strong> (<em>ParamAttr|list</em>) &#8211; The parameter attribute for learnable
parameters/weights of the fully connected
layer.</li>
<li><strong>param_initializer</strong> (<em>ParamAttr|list</em>) &#8211; The initializer used for the
weight/parameter. If set None,
XavierInitializer() will be used.</li>
<li><strong>bias_attr</strong> (<em>ParamAttr|list</em>) &#8211; The parameter attribute for the bias parameter
for this layer. If set None, no bias will be
added to the output units.</li>
<li><strong>bias_initializer</strong> (<em>ParamAttr|list</em>) &#8211; The initializer used for the bias.
If set None, then ConstantInitializer()
will be used.</li>
<li><strong>act</strong> (<em>str</em>) &#8211; Activation to be applied to the output of the fully connected
layer.</li>
<li><strong>name</strong> (<em>str</em>) &#8211; Name/alias of the fully connected layer.</li>
</ul>
</td>
</tr>
<tr class="field-even field"><th class="field-name">Returns:</th><td class="field-body"><p class="first">The output tensor variable.</p>
</td>
</tr>
<tr class="field-odd field"><th class="field-name">Return type:</th><td class="field-body"><p class="first">Variable</p>
</td>
</tr>
<tr class="field-even field"><th class="field-name">Raises:</th><td class="field-body"><p class="first last"><code class="xref py py-exc docutils literal"><span class="pre">ValueError</span></code> &#8211; If rank of the input tensor is less than 2.</p>
</td>
</tr>
</tbody>
</table>
<p class="rubric">Examples</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">data</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">data</span><span class="p">(</span><span class="n">name</span><span class="o">=</span><span class="s2">&quot;data&quot;</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">[</span><span class="mi">32</span><span class="p">,</span> <span class="mi">32</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="s2">&quot;float32&quot;</span><span class="p">)</span>
<span class="n">fc</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">fc</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">data</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">1000</span><span class="p">,</span> <span class="n">act</span><span class="o">=</span><span class="s2">&quot;tanh&quot;</span><span class="p">)</span>
</pre></div>
</div>
</dd></dl>
</div>
......
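The diff above fixes the docstring's description of `num_flatten_dims`: the leading `num_flatten_dims` axes collapse into rows of a 2-D matrix and the remaining axes collapse into columns. A minimal NumPy sketch of that reshaping (NumPy stands in for fluid here; `flatten_for_fc` is an illustrative helper, not part of the API):

```python
import numpy as np

def flatten_for_fc(x, num_flatten_dims=1):
    # Collapse the first `num_flatten_dims` axes into rows and the
    # remaining axes into columns, as the fc layer does internally.
    rows = int(np.prod(x.shape[:num_flatten_dims]))
    cols = int(np.prod(x.shape[num_flatten_dims:]))
    return x.reshape(rows, cols)

x = np.zeros([2, 3, 4, 5, 6])
flat = flatten_for_fc(x, num_flatten_dims=3)
# shape [2, 3, 4, 5, 6] with num_flatten_dims=3 -> [2*3*4, 5*6] = [24, 30]
```

This reproduces the worked example in the parameter description: a shape-[2, 3, 4, 5, 6] input flattened with `num_flatten_dims` = 3 becomes a [24, 30] matrix.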
The source diff could not be displayed because it is too large. You can view the blob instead.
...@@ -235,8 +235,7 @@
<dl class="function">
<dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">fc</code><span class="sig-paren">(</span><em>input</em>, <em>size</em>, <em>num_flatten_dims=1</em>, <em>param_attr=None</em>, <em>bias_attr=None</em>, <em>act=None</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p><strong>Fully Connected Layer</strong></p>
<p>The fully connected layer can take multiple tensors as its inputs. It
creates a variable (one for each input tensor) called weights for each input
tensor, which represents a fully connected weight matrix from each input
...@@ -248,11 +247,8 @@
created and added to the output. Finally, if activation is not None,
it will be applied to the output as well.</p>
<p>This process can be formulated as follows:</p>
<div class="math">
\[Out = Act({\sum_{i=0}^{N-1}W_iX_i + b})\]</div>
<p>In the above equation:</p>
<ul class="simple">
<li><span class="math">\(N\)</span>: Number of the inputs.</li>
<li><span class="math">\(X_i\)</span>: The input tensor.</li>
...@@ -261,13 +257,15 @@
<li><span class="math">\(Act\)</span>: The activation function.</li>
<li><span class="math">\(Out\)</span>: The output tensor.</li>
</ul>
<table class="docutils field-list" frame="void" rules="none">
<col class="field-name" />
<col class="field-body" />
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple">
<li><strong>input</strong> (<em>Variable|list</em>) &#8211; The input tensor(s) to the fully connected layer.</li>
<li><strong>size</strong> (<em>int</em>) &#8211; The number of output units in the fully connected layer.</li>
<li><strong>num_flatten_dims</strong> (<em>int</em>) &#8211; The fc layer can accept an input tensor with more
than two dimensions. If this happens, the
multidimensional tensor will first be flattened
into a 2-dimensional matrix. The parameter
<cite>num_flatten_dims</cite> determines how the input tensor
...@@ -281,37 +279,41 @@
For example, suppose <cite>X</cite> is a 5-dimensional tensor
with a shape [2, 3, 4, 5, 6], and
<cite>num_flatten_dims</cite> = 3. Then, the flattened matrix
will have a shape [2 x 3 x 4, 5 x 6] = [24, 30].
By default, <cite>num_flatten_dims</cite> is set to 1.</li>
<li><strong>param_attr</strong> (<em>ParamAttr|list</em>) &#8211; The parameter attribute for learnable
parameters/weights of the fully connected
layer.</li>
<li><strong>param_initializer</strong> (<em>ParamAttr|list</em>) &#8211; The initializer used for the
weight/parameter. If set None,
XavierInitializer() will be used.</li>
<li><strong>bias_attr</strong> (<em>ParamAttr|list</em>) &#8211; The parameter attribute for the bias parameter
for this layer. If set None, no bias will be
added to the output units.</li>
<li><strong>bias_initializer</strong> (<em>ParamAttr|list</em>) &#8211; The initializer used for the bias.
If set None, then ConstantInitializer()
will be used.</li>
<li><strong>act</strong> (<em>str</em>) &#8211; Activation to be applied to the output of the fully connected
layer.</li>
<li><strong>name</strong> (<em>str</em>) &#8211; Name/alias of the fully connected layer.</li>
</ul>
</td>
</tr>
<tr class="field-even field"><th class="field-name">Returns:</th><td class="field-body"><p class="first">The output tensor variable.</p>
</td>
</tr>
<tr class="field-odd field"><th class="field-name">Return type:</th><td class="field-body"><p class="first">Variable</p>
</td>
</tr>
<tr class="field-even field"><th class="field-name">Raises:</th><td class="field-body"><p class="first last"><code class="xref py py-exc docutils literal"><span class="pre">ValueError</span></code> &#8211; If rank of the input tensor is less than 2.</p>
</td>
</tr>
</tbody>
</table>
<p class="rubric">Examples</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">data</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">data</span><span class="p">(</span><span class="n">name</span><span class="o">=</span><span class="s2">&quot;data&quot;</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">[</span><span class="mi">32</span><span class="p">,</span> <span class="mi">32</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="s2">&quot;float32&quot;</span><span class="p">)</span>
<span class="n">fc</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">fc</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">data</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">1000</span><span class="p">,</span> <span class="n">act</span><span class="o">=</span><span class="s2">&quot;tanh&quot;</span><span class="p">)</span>
</pre></div>
</div>
</dd></dl>
</div>
......
This diff is collapsed.