Commit 0eda3b88 authored by: Travis CI

Deploy to GitHub Pages: 4b3e22b8

Parent 5cc97252
@@ -499,3 +499,8 @@ swish
------
.. autofunction:: paddle.v2.fluid.layers.swish
:noindex:
l2_normalize
------------
.. autofunction:: paddle.v2.fluid.layers.l2_normalize
:noindex:
@@ -360,9 +360,9 @@ constructor.</p>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">data</code><span class="sig-paren">(</span><em>name</em>, <em>shape</em>, <em>append_batch_size=True</em>, <em>dtype='float32'</em>, <em>lod_level=0</em>, <em>type=VarType.LOD_TENSOR</em>, <em>stop_gradient=True</em><span class="sig-paren">)</span></dt>
<dd><p><strong>Data Layer</strong></p>
<p>This function takes in the input and, based on whether the data has
to be returned as a minibatch, creates the global variable by using
the helper functions. The global variables can be accessed by all the
following operators in the graph.</p>
<p>All the input variables of this function are passed in as local variables
to the LayerHelper constructor.</p>
<table class="docutils field-list" frame="void" rules="none">
@@ -476,16 +476,15 @@ flattened. See comments of <cite>x_num_col_dims</cite> for more details.</li>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">elementwise_add</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Limited Elementwise Add Operator.</p>
<p>The equation is:</p>
<p>$$Out = X + Y$$</p>
<p>$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be
smaller than or equal to the dimensions of $X$.</p>
<p>There are two cases for this operator:
1. The shape of $Y$ is the same as that of $X$;
2. The shape of $Y$ is a subset of that of $X$.</p>
<p>For case 2:
$Y$ will be broadcasted to match the shape of $X$, and axis should be
set to the index of the start dimension for broadcasting $Y$ onto $X$.</p>
<dl class="docutils">
<dt>For example</dt>
<dd><div class="first last highlight-python"><div class="highlight"><pre><span></span><span class="n">shape</span><span class="p">(</span><span class="n">X</span><span class="p">)</span> <span class="o">=</span> <span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">5</span><span class="p">),</span> <span class="n">shape</span><span class="p">(</span><span class="n">Y</span><span class="p">)</span> <span class="o">=</span> <span class="p">(,)</span>
@@ -497,21 +496,22 @@ the starting dimension index for broadcasting Y onto X.</p>
</div>
</dd>
</dl>
<p>Either of the inputs $X$ and $Y$, or neither, can carry the LoD (Level of Details)
information. However, the output only shares the LoD information with input $X$.</p>
<table class="docutils field-list" frame="void" rules="none">
<col class="field-name" />
<col class="field-body" />
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple">
<li><strong>x</strong> &#8211; (Tensor) The first input tensor of the elementwise op.
Duplicable: False Optional: False</li>
<li><strong>y</strong> &#8211; (Tensor) The second input tensor of the elementwise op.
Duplicable: False Optional: False</li>
<li><strong>axis</strong> (<em>INT</em>) &#8211; (int, default -1) The start dimension index for broadcasting Y onto X.</li>
</ul>
</td>
</tr>
<tr class="field-even field"><th class="field-name">Returns:</th><td class="field-body"><p class="first last">The output of the elementwise op.</p>
</td>
</tr>
</tbody>
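The broadcasting rule described above (case 2, with an explicit axis) can be sketched outside of Fluid with plain NumPy. `broadcast_add` here is an illustrative helper under the assumptions stated in its comments, not part of the paddle.v2.fluid API:

```python
import numpy as np

def broadcast_add(x, y, axis=-1):
    # Sketch of the rule above: y's shape must be a contiguous subset of
    # x's shape, and axis is the start dimension of x that y aligns with
    # (-1 means align y with the trailing dimensions of x).
    if axis == -1:
        axis = x.ndim - y.ndim
    # Pad y's shape with singleton dimensions so it lines up with x.
    shape = (1,) * axis + y.shape + (1,) * (x.ndim - axis - y.ndim)
    return x + y.reshape(shape)

x = np.ones((2, 3, 4, 5))
y = np.ones((3, 4))
out = broadcast_add(x, y, axis=1)  # y aligns with dims 1 and 2 of x
```

The same reshape-then-broadcast trick applies to the sub, mul, and div variants of this operator family.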
@@ -526,16 +526,15 @@ Duplicable: False Optional: False</li>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">elementwise_sub</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Limited Elementwise Sub Operator.</p>
<p>The equation is:</p>
<p>$$Out = X - Y$$</p>
<p>$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be
smaller than or equal to the dimensions of $X$.</p>
<p>There are two cases for this operator:
1. The shape of $Y$ is the same as that of $X$;
2. The shape of $Y$ is a subset of that of $X$.</p>
<p>For case 2:
$Y$ will be broadcasted to match the shape of $X$, and axis should be
set to the index of the start dimension for broadcasting $Y$ onto $X$.</p>
<dl class="docutils">
<dt>For example</dt>
<dd><div class="first last highlight-python"><div class="highlight"><pre><span></span><span class="n">shape</span><span class="p">(</span><span class="n">X</span><span class="p">)</span> <span class="o">=</span> <span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">5</span><span class="p">),</span> <span class="n">shape</span><span class="p">(</span><span class="n">Y</span><span class="p">)</span> <span class="o">=</span> <span class="p">(,)</span>
@@ -547,21 +546,22 @@ the starting dimension index for broadcasting Y onto X.</p>
</div>
</dd>
</dl>
<p>Either of the inputs $X$ and $Y$, or neither, can carry the LoD (Level of Details)
information. However, the output only shares the LoD information with input $X$.</p>
<table class="docutils field-list" frame="void" rules="none">
<col class="field-name" />
<col class="field-body" />
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple">
<li><strong>x</strong> &#8211; (Tensor) The first input tensor of the elementwise op.
Duplicable: False Optional: False</li>
<li><strong>y</strong> &#8211; (Tensor) The second input tensor of the elementwise op.
Duplicable: False Optional: False</li>
<li><strong>axis</strong> (<em>INT</em>) &#8211; (int, default -1) The start dimension index for broadcasting Y onto X.</li>
</ul>
</td>
</tr>
<tr class="field-even field"><th class="field-name">Returns:</th><td class="field-body"><p class="first last">The output of the elementwise op.</p>
</td>
</tr>
</tbody>
@@ -576,16 +576,15 @@ Duplicable: False Optional: False</li>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">elementwise_mul</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Limited Elementwise Mul Operator.</p>
<p>The equation is:</p>
<p>$$Out = X \odot Y$$</p>
<p>$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be
smaller than or equal to the dimensions of $X$.</p>
<p>There are two cases for this operator:
1. The shape of $Y$ is the same as that of $X$;
2. The shape of $Y$ is a subset of that of $X$.</p>
<p>For case 2:
$Y$ will be broadcasted to match the shape of $X$, and axis should be
set to the index of the start dimension for broadcasting $Y$ onto $X$.</p>
<dl class="docutils">
<dt>For example</dt>
<dd><div class="first last highlight-python"><div class="highlight"><pre><span></span><span class="n">shape</span><span class="p">(</span><span class="n">X</span><span class="p">)</span> <span class="o">=</span> <span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">5</span><span class="p">),</span> <span class="n">shape</span><span class="p">(</span><span class="n">Y</span><span class="p">)</span> <span class="o">=</span> <span class="p">(,)</span>
@@ -597,21 +596,22 @@ the starting dimension index for broadcasting Y onto X.</p>
</div>
</dd>
</dl>
<p>Either of the inputs $X$ and $Y$, or neither, can carry the LoD (Level of Details)
information. However, the output only shares the LoD information with input $X$.</p>
<table class="docutils field-list" frame="void" rules="none">
<col class="field-name" />
<col class="field-body" />
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple">
<li><strong>x</strong> &#8211; (Tensor) The first input tensor of the elementwise op.
Duplicable: False Optional: False</li>
<li><strong>y</strong> &#8211; (Tensor) The second input tensor of the elementwise op.
Duplicable: False Optional: False</li>
<li><strong>axis</strong> (<em>INT</em>) &#8211; (int, default -1) The start dimension index for broadcasting Y onto X.</li>
</ul>
</td>
</tr>
<tr class="field-even field"><th class="field-name">Returns:</th><td class="field-body"><p class="first last">The output of the elementwise op.</p>
</td>
</tr>
</tbody>
@@ -626,16 +626,15 @@ Duplicable: False Optional: False</li>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">elementwise_div</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Limited Elementwise Div Operator.</p>
<p>The equation is:</p>
<p>$$Out = X / Y$$</p>
<p>$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be
smaller than or equal to the dimensions of $X$.</p>
<p>There are two cases for this operator:
1. The shape of $Y$ is the same as that of $X$;
2. The shape of $Y$ is a subset of that of $X$.</p>
<p>For case 2:
$Y$ will be broadcasted to match the shape of $X$, and axis should be
set to the index of the start dimension for broadcasting $Y$ onto $X$.</p>
<dl class="docutils">
<dt>For example</dt>
<dd><div class="first last highlight-python"><div class="highlight"><pre><span></span><span class="n">shape</span><span class="p">(</span><span class="n">X</span><span class="p">)</span> <span class="o">=</span> <span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">5</span><span class="p">),</span> <span class="n">shape</span><span class="p">(</span><span class="n">Y</span><span class="p">)</span> <span class="o">=</span> <span class="p">(,)</span>
@@ -647,21 +646,22 @@ the starting dimension index for broadcasting Y onto X.</p>
</div>
</dd>
</dl>
<p>Either of the inputs $X$ and $Y$, or neither, can carry the LoD (Level of Details)
information. However, the output only shares the LoD information with input $X$.</p>
<table class="docutils field-list" frame="void" rules="none">
<col class="field-name" />
<col class="field-body" />
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple">
<li><strong>x</strong> &#8211; (Tensor) The first input tensor of the elementwise op.
Duplicable: False Optional: False</li>
<li><strong>y</strong> &#8211; (Tensor) The second input tensor of the elementwise op.
Duplicable: False Optional: False</li>
<li><strong>axis</strong> (<em>INT</em>) &#8211; (int, default -1) The start dimension index for broadcasting Y onto X.</li>
</ul>
</td>
</tr>
<tr class="field-even field"><th class="field-name">Returns:</th><td class="field-body"><p class="first last">The output of the elementwise op.</p>
</td>
</tr>
</tbody>
@@ -1396,7 +1396,7 @@ then output is a Tensor:
<h2>pool2d<a class="headerlink" href="#pool2d" title="Permalink to this headline"></a></h2>
<dl class="function">
<dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">pool2d</code><span class="sig-paren">(</span><em>input</em>, <em>pool_size</em>, <em>pool_type</em>, <em>pool_stride=None</em>, <em>pool_padding=None</em>, <em>global_pooling=False</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>This function adds the operator for pooling in 2 dimensions, using the
pooling configurations specified by the input parameters.</p>
</dd></dl>
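What a 2-D pooling operator computes can be sketched with plain NumPy. `max_pool2d` below is an illustrative helper (max pooling, NCHW layout, no padding), not the fluid layer itself:

```python
import numpy as np

def max_pool2d(x, pool_size, stride):
    # Naive 2-D max pooling over an NCHW tensor: slide a pool_size window
    # over H and W with the given stride and take the max in each window.
    n, c, h, w = x.shape
    ph, pw = pool_size
    sh, sw = stride
    oh, ow = (h - ph) // sh + 1, (w - pw) // sw + 1
    out = np.empty((n, c, oh, ow), dtype=x.dtype)
    for i in range(oh):
        for j in range(ow):
            window = x[:, :, i * sh:i * sh + ph, j * sw:j * sw + pw]
            out[:, :, i, j] = window.max(axis=(2, 3))
    return out

x = np.arange(16, dtype=np.float64).reshape(1, 1, 4, 4)
out = max_pool2d(x, pool_size=(2, 2), stride=(2, 2))
```

With `global_pooling`, the window would instead cover the whole H x W plane, producing one value per channel.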
@@ -1406,7 +1406,7 @@ pooling configurations mentioned in input parameters.</p>
<h2>batch_norm<a class="headerlink" href="#batch-norm" title="Permalink to this headline"></a></h2>
<dl class="function">
<dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">batch_norm</code><span class="sig-paren">(</span><em>input</em>, <em>act=None</em>, <em>is_test=False</em>, <em>momentum=0.9</em>, <em>epsilon=1e-05</em>, <em>param_attr=None</em>, <em>bias_attr=None</em>, <em>data_layout='NCHW'</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>This function helps create an operator to implement
the BatchNorm layer using the configurations from the input parameters.</p>
</dd></dl>
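The core computation of batch normalization can be sketched in NumPy; this is a training-mode sketch using batch statistics (the layer additionally tracks running statistics via `momentum` for `is_test=True`, which is omitted here):

```python
import numpy as np

def batch_norm_nchw(x, gamma, beta, epsilon=1e-05):
    # Normalize each channel of an NCHW tensor by its batch mean and
    # variance, then apply a learned per-channel scale (gamma) and
    # shift (beta).
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + epsilon)
    return gamma.reshape(1, -1, 1, 1) * x_hat + beta.reshape(1, -1, 1, 1)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 3, 4, 4))
out = batch_norm_nchw(x, gamma=np.ones(3), beta=np.zeros(3))
```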
@@ -1416,7 +1416,7 @@ the BatchNorm layer using the configurations from the input parameters.</p>
<h2>beam_search_decode<a class="headerlink" href="#beam-search-decode" title="Permalink to this headline"></a></h2>
<dl class="function">
<dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">beam_search_decode</code><span class="sig-paren">(</span><em>ids</em>, <em>scores</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd></dd></dl>
</div>
@@ -1984,7 +1984,7 @@ to compute the length.</td>
<h2>conv2d_transpose<a class="headerlink" href="#conv2d-transpose" title="Permalink to this headline"></a></h2>
<dl class="function">
<dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">conv2d_transpose</code><span class="sig-paren">(</span><em>input</em>, <em>num_filters</em>, <em>output_size=None</em>, <em>filter_size=None</em>, <em>padding=None</em>, <em>stride=None</em>, <em>dilation=None</em>, <em>param_attr=None</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>The transpose of the conv2d layer.</p>
<p>This layer is also known as the deconvolution layer.</p>
<table class="docutils field-list" frame="void" rules="none">
@@ -2012,8 +2012,8 @@ stride_H = stride_W = stride.</li>
contain two integers, (dilation_H, dilation_W). Otherwise,
dilation_H = dilation_W = dilation.</li>
<li><strong>param_attr</strong> &#8211; Parameter Attribute.</li>
<li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer
will be named automatically.</li>
</ul>
</td>
</tr>
@@ -2032,7 +2032,7 @@ dilation_H = dilation_W = dilation.</li>
<h2>sequence_expand<a class="headerlink" href="#sequence-expand" title="Permalink to this headline"></a></h2>
<dl class="function">
<dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">sequence_expand</code><span class="sig-paren">(</span><em>x</em>, <em>y</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>Sequence Expand Layer. This layer expands the input variable <strong>x</strong>
according to the LoD information of <strong>y</strong>. The following examples
explain how sequence_expand works:</p>
@@ -2078,6 +2078,8 @@ explain how sequence_expand works:</p>
<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple">
<li><strong>x</strong> (<em>Variable</em>) &#8211; The input variable, which is a Tensor or LoDTensor.</li>
<li><strong>y</strong> (<em>Variable</em>) &#8211; The input variable, which is a LoDTensor.</li>
<li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer
will be named automatically.</li>
</ul>
</td>
</tr>
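The expansion behavior can be sketched in NumPy, assuming a single-level LoD given as a list of offsets (consecutive differences are sequence lengths); `sequence_expand` here is an illustrative helper, not the fluid layer:

```python
import numpy as np

def sequence_expand(x, y_lod):
    # Row i of x is repeated as many times as the length of the i-th
    # sequence described by y's LoD offsets.
    lengths = np.diff(y_lod)
    return np.repeat(x, lengths, axis=0)

x = np.array([[1.0], [2.0]])
out = sequence_expand(x, y_lod=[0, 2, 5])  # sequence lengths 2 and 3
```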
@@ -2156,7 +2158,7 @@ and concatenation of <span class="math">\(u_t\)</span>, <span class="math">\(r_t
<h2>lstm_unit<a class="headerlink" href="#lstm-unit" title="Permalink to this headline"></a></h2>
<dl class="function">
<dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">lstm_unit</code><span class="sig-paren">(</span><em>x_t</em>, <em>hidden_t_prev</em>, <em>cell_t_prev</em>, <em>forget_bias=0.0</em>, <em>param_attr=None</em>, <em>bias_attr=None</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>LSTM unit layer. The equation of an LSTM step is:</p>
<blockquote>
<div><div class="math">
@@ -2195,6 +2197,8 @@ shape M x S, M for batch size and S for size of lstm unit.</li>
initializer, name, etc.</li>
<li><strong>bias_attr</strong> (<em>ParamAttr</em>) &#8211; The attributes of the bias weights. If not False,
bias weights will be created and set to a default value.</li>
<li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer
will be named automatically.</li>
</ul>
</td>
</tr>
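A single LSTM step can be sketched in NumPy. The weight layout below (one matrix `w` over the concatenated `[x_t, h_prev]`, split into four gates) is an assumption for illustration and not necessarily the layer's exact parameterization:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_unit_step(x_t, h_prev, c_prev, w, b, forget_bias=0.0):
    # One LSTM step: w maps [x_t, h_prev] to the four pre-activation
    # gates (input, forget, cell candidate, output); forget_bias is
    # added to the forget gate before the sigmoid.
    z = np.concatenate([x_t, h_prev], axis=1) @ w + b
    i, f, g, o = np.split(z, 4, axis=1)
    c_t = sigmoid(f + forget_bias) * c_prev + sigmoid(i) * np.tanh(g)
    h_t = sigmoid(o) * np.tanh(c_t)
    return h_t, c_t

m, d, s = 2, 3, 4          # batch size, input size, lstm unit size
rng = np.random.default_rng(0)
w = rng.standard_normal((d + s, 4 * s))
b = np.zeros(4 * s)
h, c = lstm_unit_step(rng.standard_normal((m, d)),
                      np.zeros((m, s)), np.zeros((m, s)), w, b)
```

A positive `forget_bias` biases the forget gate toward keeping the previous cell state early in training.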
...@@ -2259,7 +2263,7 @@ Duplicable: False Optional: False</td> ...@@ -2259,7 +2263,7 @@ Duplicable: False Optional: False</td>
<h2>reduce_sum<a class="headerlink" href="#reduce-sum" title="Permalink to this headline"></a></h2> <h2>reduce_sum<a class="headerlink" href="#reduce-sum" title="Permalink to this headline"></a></h2>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">reduce_sum</code><span class="sig-paren">(</span><em>input</em>, <em>dim=None</em>, <em>keep_dim=False</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">reduce_sum</code><span class="sig-paren">(</span><em>input</em>, <em>dim=None</em>, <em>keep_dim=False</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>Computes the sum of tensor elements over the given dimension.</p> <dd><p>Computes the sum of tensor elements over the given dimension.</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
<col class="field-name" /> <col class="field-name" />
...@@ -2275,6 +2279,8 @@ the dimension to reduce is <span class="math">\(rank + dim\)</span>.</li> ...@@ -2275,6 +2279,8 @@ the dimension to reduce is <span class="math">\(rank + dim\)</span>.</li>
<li><strong>keep_dim</strong> (<em>bool</em>) &#8211; Whether to retain the reduced dimension in the <li><strong>keep_dim</strong> (<em>bool</em>) &#8211; Whether to retain the reduced dimension in the
output Tensor. The result tensor will have one fewer dimension output Tensor. The result tensor will have one fewer dimension
than the <code class="xref py py-attr docutils literal"><span class="pre">input</span></code> unless <code class="xref py py-attr docutils literal"><span class="pre">keep_dim</span></code> is true.</li> than the <code class="xref py py-attr docutils literal"><span class="pre">input</span></code> unless <code class="xref py py-attr docutils literal"><span class="pre">keep_dim</span></code> is true.</li>
<li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer <li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer
will be named automatically.</li>
</ul> </ul>
</td> </td>
</tr> </tr>
...@@ -2304,7 +2310,7 @@ than the <code class="xref py py-attr docutils literal"><span class="pre">input< ...@@ -2304,7 +2310,7 @@ than the <code class="xref py py-attr docutils literal"><span class="pre">input<
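The <cite>dim</cite>/<cite>keep_dim</cite> semantics of reduce_sum can be sketched in plain Python for a rank-2 nested-list input (an illustrative sketch, not the fluid implementation):

```python
def reduce_sum_2d(mat, dim=None, keep_dim=False):
    """Sum a rank-2 nested list over `dim`, mirroring reduce_sum's
    documented semantics (illustrative sketch only)."""
    if dim is None:                      # no dim given: reduce all elements
        return sum(sum(row) for row in mat)
    if dim < 0:                          # negative dim counts from the back
        dim += 2                         # rank is fixed at 2 in this sketch
    if dim == 0:
        out = [sum(col) for col in zip(*mat)]
        return [out] if keep_dim else out     # keep_dim keeps a size-1 axis
    out = [sum(row) for row in mat]
    return [[s] for s in out] if keep_dim else out
```

For <cite>[[1, 2], [3, 4]]</cite>: <cite>dim=0</cite> gives <cite>[4, 6]</cite>, <cite>dim=1</cite> gives <cite>[3, 7]</cite>, and <cite>dim=-1, keep_dim=True</cite> gives <cite>[[3], [7]]</cite>.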
<h2>reduce_mean<a class="headerlink" href="#reduce-mean" title="Permalink to this headline"></a></h2> <h2>reduce_mean<a class="headerlink" href="#reduce-mean" title="Permalink to this headline"></a></h2>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">reduce_mean</code><span class="sig-paren">(</span><em>input</em>, <em>dim=None</em>, <em>keep_dim=False</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">reduce_mean</code><span class="sig-paren">(</span><em>input</em>, <em>dim=None</em>, <em>keep_dim=False</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>Computes the mean of tensor elements over the given dimension.</p> <dd><p>Computes the mean of tensor elements over the given dimension.</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
<col class="field-name" /> <col class="field-name" />
...@@ -2320,6 +2326,8 @@ must be in the range <span class="math">\([-rank(input), rank(input))\)</span>. ...@@ -2320,6 +2326,8 @@ must be in the range <span class="math">\([-rank(input), rank(input))\)</span>.
<li><strong>keep_dim</strong> (<em>bool</em>) &#8211; Whether to retain the reduced dimension in the <li><strong>keep_dim</strong> (<em>bool</em>) &#8211; Whether to retain the reduced dimension in the
output Tensor. The result tensor will have one fewer dimension output Tensor. The result tensor will have one fewer dimension
than the <code class="xref py py-attr docutils literal"><span class="pre">input</span></code> unless <code class="xref py py-attr docutils literal"><span class="pre">keep_dim</span></code> is true.</li> than the <code class="xref py py-attr docutils literal"><span class="pre">input</span></code> unless <code class="xref py py-attr docutils literal"><span class="pre">keep_dim</span></code> is true.</li>
<li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer <li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer
will be named automatically.</li>
</ul> </ul>
</td> </td>
</tr> </tr>
...@@ -2349,7 +2357,7 @@ than the <code class="xref py py-attr docutils literal"><span class="pre">input< ...@@ -2349,7 +2357,7 @@ than the <code class="xref py py-attr docutils literal"><span class="pre">input<
<h2>reduce_max<a class="headerlink" href="#reduce-max" title="Permalink to this headline"></a></h2> <h2>reduce_max<a class="headerlink" href="#reduce-max" title="Permalink to this headline"></a></h2>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">reduce_max</code><span class="sig-paren">(</span><em>input</em>, <em>dim=None</em>, <em>keep_dim=False</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">reduce_max</code><span class="sig-paren">(</span><em>input</em>, <em>dim=None</em>, <em>keep_dim=False</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>Computes the maximum of tensor elements over the given dimension.</p> <dd><p>Computes the maximum of tensor elements over the given dimension.</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
<col class="field-name" /> <col class="field-name" />
...@@ -2365,6 +2373,8 @@ If <span class="math">\(dim &lt; 0\)</span>, the dimension to reduce is <span cl ...@@ -2365,6 +2373,8 @@ If <span class="math">\(dim &lt; 0\)</span>, the dimension to reduce is <span cl
<li><strong>keep_dim</strong> (<em>bool</em>) &#8211; Whether to retain the reduced dimension in the <li><strong>keep_dim</strong> (<em>bool</em>) &#8211; Whether to retain the reduced dimension in the
output Tensor. The result tensor will have one fewer dimension output Tensor. The result tensor will have one fewer dimension
than the <code class="xref py py-attr docutils literal"><span class="pre">input</span></code> unless <code class="xref py py-attr docutils literal"><span class="pre">keep_dim</span></code> is true.</li> than the <code class="xref py py-attr docutils literal"><span class="pre">input</span></code> unless <code class="xref py py-attr docutils literal"><span class="pre">keep_dim</span></code> is true.</li>
<li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer <li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer
will be named automatically.</li>
</ul> </ul>
</td> </td>
</tr> </tr>
...@@ -2394,7 +2404,7 @@ than the <code class="xref py py-attr docutils literal"><span class="pre">input< ...@@ -2394,7 +2404,7 @@ than the <code class="xref py py-attr docutils literal"><span class="pre">input<
<h2>reduce_min<a class="headerlink" href="#reduce-min" title="Permalink to this headline"></a></h2> <h2>reduce_min<a class="headerlink" href="#reduce-min" title="Permalink to this headline"></a></h2>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">reduce_min</code><span class="sig-paren">(</span><em>input</em>, <em>dim=None</em>, <em>keep_dim=False</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">reduce_min</code><span class="sig-paren">(</span><em>input</em>, <em>dim=None</em>, <em>keep_dim=False</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>Computes the minimum of tensor elements over the given dimension.</p> <dd><p>Computes the minimum of tensor elements over the given dimension.</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
<col class="field-name" /> <col class="field-name" />
...@@ -2410,6 +2420,8 @@ If <span class="math">\(dim &lt; 0\)</span>, the dimension to reduce is <span cl ...@@ -2410,6 +2420,8 @@ If <span class="math">\(dim &lt; 0\)</span>, the dimension to reduce is <span cl
<li><strong>keep_dim</strong> (<em>bool</em>) &#8211; Whether to retain the reduced dimension in the <li><strong>keep_dim</strong> (<em>bool</em>) &#8211; Whether to retain the reduced dimension in the
output Tensor. The result tensor will have one fewer dimension output Tensor. The result tensor will have one fewer dimension
than the <code class="xref py py-attr docutils literal"><span class="pre">input</span></code> unless <code class="xref py py-attr docutils literal"><span class="pre">keep_dim</span></code> is true.</li> than the <code class="xref py py-attr docutils literal"><span class="pre">input</span></code> unless <code class="xref py py-attr docutils literal"><span class="pre">keep_dim</span></code> is true.</li>
<li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer <li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer
will be named automatically.</li>
</ul> </ul>
</td> </td>
</tr> </tr>
...@@ -2439,8 +2451,8 @@ than the <code class="xref py py-attr docutils literal"><span class="pre">input< ...@@ -2439,8 +2451,8 @@ than the <code class="xref py py-attr docutils literal"><span class="pre">input<
<h2>split<a class="headerlink" href="#split" title="Permalink to this headline"></a></h2> <h2>split<a class="headerlink" href="#split" title="Permalink to this headline"></a></h2>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">split</code><span class="sig-paren">(</span><em>input</em>, <em>num_or_sections</em>, <em>dim=-1</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">split</code><span class="sig-paren">(</span><em>input</em>, <em>num_or_sections</em>, <em>dim=-1</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>Splits the tensor into multiple sub-tensors.</p> <dd><p>Split the input tensor into multiple sub-tensors.</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
<col class="field-name" /> <col class="field-name" />
<col class="field-body" /> <col class="field-body" />
...@@ -2455,6 +2467,8 @@ sub-tensors and the integers indicate the sizes of sub-tensors&#8217; ...@@ -2455,6 +2467,8 @@ sub-tensors and the integers indicate the sizes of sub-tensors&#8217;
<code class="xref py py-attr docutils literal"><span class="pre">dim</span></code> dimension orderly.</li> <code class="xref py py-attr docutils literal"><span class="pre">dim</span></code> dimension orderly.</li>
<li><strong>dim</strong> (<em>int</em>) &#8211; The dimension along which to split. If <span class="math">\(dim &lt; 0\)</span>, the <li><strong>dim</strong> (<em>int</em>) &#8211; The dimension along which to split. If <span class="math">\(dim &lt; 0\)</span>, the
dimension to split along is <span class="math">\(rank(input) + dim\)</span>.</li> dimension to split along is <span class="math">\(rank(input) + dim\)</span>.</li>
<li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer <li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer
will be named automatically.</li>
</ul> </ul>
</td> </td>
</tr> </tr>
...@@ -2544,6 +2558,7 @@ will be named automatically.</li> ...@@ -2544,6 +2558,7 @@ will be named automatically.</li>
<span class="c1"># x: [K], y: [K]</span> <span class="c1"># x: [K], y: [K]</span>
<span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">matmul</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">)</span> <span class="c1"># out: [1]</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">matmul</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">)</span> <span class="c1"># out: [1]</span>
<span class="c1"># x: [M], y: [N]</span> <span class="c1"># x: [M], y: [N]</span>
<span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">matmul</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">,</span> <span class="bp">True</span><span class="p">,</span> <span class="bp">True</span><span class="p">)</span> <span class="c1"># out: [M, N]</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">matmul</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">,</span> <span class="bp">True</span><span class="p">,</span> <span class="bp">True</span><span class="p">)</span> <span class="c1"># out: [M, N]</span>
</pre></div> </pre></div>
</div> </div>
...@@ -3180,6 +3195,50 @@ Duplicable: False Optional: False</li> ...@@ -3180,6 +3195,50 @@ Duplicable: False Optional: False</li>
</table> </table>
</dd></dl> </dd></dl>
</div>
<div class="section" id="l2-normalize">
<h2>l2_normalize<a class="headerlink" href="#l2-normalize" title="Permalink to this headline"></a></h2>
<dl class="function">
<dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">l2_normalize</code><span class="sig-paren">(</span><em>x</em>, <em>axis</em>, <em>epsilon=1e-12</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p><strong>L2 normalize Layer</strong></p>
<p>The l2 normalize layer normalizes <cite>x</cite> along dimension <cite>axis</cite> using an L2
norm. For a 1-D tensor (<cite>axis</cite> is fixed to 0), this layer computes</p>
<p>output = x / sqrt(max(sum(x**2), epsilon))</p>
<p>For <cite>x</cite> with more dimensions, this layer independently normalizes each 1-D
slice along dimension <cite>axis</cite>.</p>
<table class="docutils field-list" frame="void" rules="none">
<col class="field-name" />
<col class="field-body" />
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple">
<li><strong>x</strong> (<em>Variable|list</em>) &#8211; The input tensor to l2_normalize layer.</li>
<li><strong>axis</strong> (<em>int</em>) &#8211; Dimension along which to normalize the input.</li>
<li><strong>epsilon</strong> (<em>float</em>) &#8211; A lower bound value for <cite>x</cite>&#8217;s L2 norm. sqrt(epsilon) will
be used as the divisor if the L2 norm of <cite>x</cite> is less than
sqrt(epsilon).</li>
<li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer
will be named automatically.</li>
</ul>
</td>
</tr>
<tr class="field-even field"><th class="field-name">Returns:</th><td class="field-body"><p class="first">The output tensor variable.</p>
</td>
</tr>
<tr class="field-odd field"><th class="field-name">Return type:</th><td class="field-body"><p class="first last">Variable</p>
</td>
</tr>
</tbody>
</table>
<p class="rubric">Examples</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">data</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">data</span><span class="p">(</span><span class="n">name</span><span class="o">=</span><span class="s2">&quot;data&quot;</span><span class="p">,</span>
<span class="n">shape</span><span class="o">=</span><span class="p">(</span><span class="mi">3</span><span class="p">,</span> <span class="mi">17</span><span class="p">,</span> <span class="mi">13</span><span class="p">),</span>
<span class="n">dtype</span><span class="o">=</span><span class="s2">&quot;float32&quot;</span><span class="p">)</span>
<span class="n">fc</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">l2_normalize</span><span class="p">(</span><span class="n">x</span><span class="o">=</span><span class="n">data</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>
</pre></div>
</div>
</dd></dl>
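The 1-D formula quoted above, output = x / sqrt(max(sum(x**2), epsilon)), can be checked with a minimal plain-Python sketch (illustrative only, not the fluid operator):

```python
import math

def l2_normalize_1d(x, epsilon=1e-12):
    """Normalize a 1-D list of floats by its L2 norm, clamped below by
    sqrt(epsilon), per the formula quoted above (illustrative sketch)."""
    norm = math.sqrt(max(sum(v * v for v in x), epsilon))
    return [v / norm for v in x]
```

For <cite>[3.0, 4.0]</cite> the L2 norm is 5.0, so the output is <cite>[0.6, 0.8]</cite>, which itself has unit L2 norm.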
</div> </div>
</div> </div>
......
...@@ -1271,7 +1271,7 @@ ...@@ -1271,7 +1271,7 @@
} ] } ]
},{ },{
"type" : "clip", "type" : "clip",
"comment" : "\nClip Operator.\n\nThe clip operator limits the value of given input within an interval. The interval is\nspecified with arguments 'min' and 'max':\n\n$$\nOut = \\min(\\max(X, min), max)\n$$\n\n", "comment" : "\nClip Operator.\n\nThe clip operator limits the value of given input within an interval. The\ninterval is specified with arguments 'min' and 'max':\n\n$$\nOut = \\min(\\max(X, min), max)\n$$\n\n",
"inputs" : [ "inputs" : [
{ {
"name" : "X", "name" : "X",
...@@ -1789,23 +1789,23 @@ ...@@ -1789,23 +1789,23 @@
"attrs" : [ ] "attrs" : [ ]
},{ },{
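The clip formula in the comment above, Out = min(max(X, min), max), reduces to a one-liner for a scalar (plain-Python sketch; <cite>lo</cite>/<cite>hi</cite> stand in for the operator's 'min'/'max' attributes):

```python
def clip(x, lo, hi):
    # Out = min(max(X, min), max), per the clip operator comment above;
    # lo/hi are illustrative names for the 'min'/'max' attributes.
    return min(max(x, lo), hi)
```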
"type" : "elementwise_sub", "type" : "elementwise_sub",
"comment" : "\nLimited Elementwise Sub Operator.\n\nThe equation is:\n\n.. math::\n Out = X - Y\n\nX is a tensor of any dimension and the dimensions of tensor Y must be smaller than\nor equal to the dimensions of X. \n\nThere are two cases for this operator:\n1. The shape of Y is same with X;\n2. The shape of Y is a subset of X.\n\nFor case 2:\nY will be broadcasted to match the shape of X and axis should be \nthe starting dimension index for broadcasting Y onto X.\n\nFor example\n .. code-block:: python\n\n shape(X) = (2, 3, 4, 5), shape(Y) = (,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (5,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)\n shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1\n shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0\n\nEither of the inputs X and Y or none can carry the LoD (Level of Details) information. However, the output only shares the LoD information with input X.\n\n", "comment" : "\nLimited Elementwise Sub Operator.\n\nThe equation is:\n\n$$Out = X - Y$$\n\n$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be\nsmaller than or equal to the dimensions of $X$.\n\nThere are two cases for this operator:\n1. The shape of $Y$ is same with $X$;\n2. The shape of $Y$ is a subset of $X$.\n\nFor case 2:\n$Y$ will be broadcasted to match the shape of $X$ and axis should be\nset to index of the start dimension to broadcast $Y$ onto $X$.\n\nFor example\n .. code-block:: python\n\n shape(X) = (2, 3, 4, 5), shape(Y) = (,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (5,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)\n shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1\n shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0\n\nEither of the inputs $X$ and $Y$ or none can carry the LoD (Level of Details)\ninformation. However, the output only shares the LoD information with input $X$.\n\n",
"inputs" : [ "inputs" : [
{ {
"name" : "X", "name" : "X",
"comment" : "(Tensor) The first input tensor of elementwise op", "comment" : "(Tensor), The first input tensor of elementwise op.",
"duplicable" : 0, "duplicable" : 0,
"intermediate" : 0 "intermediate" : 0
}, { }, {
"name" : "Y", "name" : "Y",
"comment" : "(Tensor) The second input tensor of elementwise op", "comment" : "(Tensor), The second input tensor of elementwise op.",
"duplicable" : 0, "duplicable" : 0,
"intermediate" : 0 "intermediate" : 0
} ], } ],
"outputs" : [ "outputs" : [
{ {
"name" : "Out", "name" : "Out",
"comment" : "The output of elementwise op", "comment" : "The output of elementwise op.",
"duplicable" : 0, "duplicable" : 0,
"intermediate" : 0 "intermediate" : 0
} ], } ],
...@@ -1813,7 +1813,7 @@ ...@@ -1813,7 +1813,7 @@
{ {
"name" : "axis", "name" : "axis",
"type" : "int", "type" : "int",
"comment" : "(int, default -1) The starting dimension index for broadcasting Y onto X", "comment" : "(int, default -1). The start dimension index for broadcasting Y onto X.",
"generated" : 0 "generated" : 0
} ] } ]
},{ },{
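The broadcasting rule repeated in the elementwise operator comments (Y must match a contiguous slice of X's shape, starting at <cite>axis</cite>; axis -1 aligns Y with X's trailing dimensions) can be sketched as a shape check in plain Python (illustrative only, not the operator's shape inference):

```python
def broadcast_axis_ok(x_shape, y_shape, axis=-1):
    """Return True if y_shape can broadcast onto x_shape at `axis`,
    per the rule in the elementwise operator comments above."""
    if axis == -1:                       # default: align trailing dims
        axis = len(x_shape) - len(y_shape)
    return x_shape[axis:axis + len(y_shape)] == tuple(y_shape)
```

This reproduces the shape examples from the comments: (5,), (4, 5), (3, 4) with axis=1, and (2,) with axis=0 all broadcast onto (2, 3, 4, 5).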
...@@ -3345,23 +3345,23 @@ ...@@ -3345,23 +3345,23 @@
} ] } ]
},{ },{
"type" : "elementwise_max", "type" : "elementwise_max",
"comment" : "\nLimited Elementwise Max Operator.\n\nThe equation is:\n\n.. math::\n Out = max(X, Y)\n\nX is a tensor of any dimension and the dimensions of tensor Y must be smaller than\nor equal to the dimensions of X. \n\nThere are two cases for this operator:\n1. The shape of Y is same with X;\n2. The shape of Y is a subset of X.\n\nFor case 2:\nY will be broadcasted to match the shape of X and axis should be \nthe starting dimension index for broadcasting Y onto X.\n\nFor example\n .. code-block:: python\n\n shape(X) = (2, 3, 4, 5), shape(Y) = (,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (5,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)\n shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1\n shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0\n\nEither of the inputs X and Y or none can carry the LoD (Level of Details) information. However, the output only shares the LoD information with input X.\n\n", "comment" : "\nLimited Elementwise Max Operator.\n\nThe equation is:\n\n$$Out = max(X, Y)$$\n\n$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be\nsmaller than or equal to the dimensions of $X$.\n\nThere are two cases for this operator:\n1. The shape of $Y$ is same with $X$;\n2. The shape of $Y$ is a subset of $X$.\n\nFor case 2:\n$Y$ will be broadcasted to match the shape of $X$ and axis should be\nset to index of the start dimension to broadcast $Y$ onto $X$.\n\nFor example\n .. code-block:: python\n\n shape(X) = (2, 3, 4, 5), shape(Y) = (,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (5,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)\n shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1\n shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0\n\nEither of the inputs $X$ and $Y$ or none can carry the LoD (Level of Details)\ninformation. However, the output only shares the LoD information with input $X$.\n\n",
"inputs" : [ "inputs" : [
{ {
"name" : "X", "name" : "X",
"comment" : "(Tensor) The first input tensor of elementwise op", "comment" : "(Tensor), The first input tensor of elementwise op.",
"duplicable" : 0, "duplicable" : 0,
"intermediate" : 0 "intermediate" : 0
}, { }, {
"name" : "Y", "name" : "Y",
"comment" : "(Tensor) The second input tensor of elementwise op", "comment" : "(Tensor), The second input tensor of elementwise op.",
"duplicable" : 0, "duplicable" : 0,
"intermediate" : 0 "intermediate" : 0
} ], } ],
"outputs" : [ "outputs" : [
{ {
"name" : "Out", "name" : "Out",
"comment" : "The output of elementwise op", "comment" : "The output of elementwise op.",
"duplicable" : 0, "duplicable" : 0,
"intermediate" : 0 "intermediate" : 0
} ], } ],
...@@ -3369,7 +3369,7 @@ ...@@ -3369,7 +3369,7 @@
{ {
"name" : "axis", "name" : "axis",
"type" : "int", "type" : "int",
"comment" : "(int, default -1) The starting dimension index for broadcasting Y onto X", "comment" : "(int, default -1). The start dimension index for broadcasting Y onto X.",
"generated" : 0 "generated" : 0
} ] } ]
},{ },{
...@@ -3551,23 +3551,23 @@ ...@@ -3551,23 +3551,23 @@
} ] } ]
},{ },{
"type" : "elementwise_mul", "type" : "elementwise_mul",
"comment" : "\nLimited Elementwise Mul Operator.\n\nThe equation is:\n\n.. math::\n Out = X \\odot\\ Y\n\nX is a tensor of any dimension and the dimensions of tensor Y must be smaller than\nor equal to the dimensions of X. \n\nThere are two cases for this operator:\n1. The shape of Y is same with X;\n2. The shape of Y is a subset of X.\n\nFor case 2:\nY will be broadcasted to match the shape of X and axis should be \nthe starting dimension index for broadcasting Y onto X.\n\nFor example\n .. code-block:: python\n\n shape(X) = (2, 3, 4, 5), shape(Y) = (,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (5,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)\n shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1\n shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0\n\nEither of the inputs X and Y or none can carry the LoD (Level of Details) information. However, the output only shares the LoD information with input X.\n\n", "comment" : "\nLimited Elementwise Mul Operator.\n\nThe equation is:\n\n$$Out = X \\odot\\ Y$$\n\n$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be\nsmaller than or equal to the dimensions of $X$.\n\nThere are two cases for this operator:\n1. The shape of $Y$ is same with $X$;\n2. The shape of $Y$ is a subset of $X$.\n\nFor case 2:\n$Y$ will be broadcasted to match the shape of $X$ and axis should be\nset to index of the start dimension to broadcast $Y$ onto $X$.\n\nFor example\n .. code-block:: python\n\n shape(X) = (2, 3, 4, 5), shape(Y) = (,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (5,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)\n shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1\n shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0\n\nEither of the inputs $X$ and $Y$ or none can carry the LoD (Level of Details)\ninformation. However, the output only shares the LoD information with input $X$.\n\n",
"inputs" : [ "inputs" : [
{ {
"name" : "X", "name" : "X",
"comment" : "(Tensor) The first input tensor of elementwise op", "comment" : "(Tensor), The first input tensor of elementwise op.",
"duplicable" : 0, "duplicable" : 0,
"intermediate" : 0 "intermediate" : 0
}, { }, {
"name" : "Y", "name" : "Y",
"comment" : "(Tensor) The second input tensor of elementwise op", "comment" : "(Tensor), The second input tensor of elementwise op.",
"duplicable" : 0, "duplicable" : 0,
"intermediate" : 0 "intermediate" : 0
} ], } ],
"outputs" : [ "outputs" : [
{ {
"name" : "Out", "name" : "Out",
"comment" : "The output of elementwise op", "comment" : "The output of elementwise op.",
"duplicable" : 0, "duplicable" : 0,
"intermediate" : 0 "intermediate" : 0
} ], } ],
...@@ -3575,7 +3575,7 @@ ...@@ -3575,7 +3575,7 @@
{ {
"name" : "axis", "name" : "axis",
"type" : "int", "type" : "int",
"comment" : "(int, default -1) The starting dimension index for broadcasting Y onto X", "comment" : "(int, default -1). The start dimension index for broadcasting Y onto X.",
"generated" : 0 "generated" : 0
} ] } ]
},{ },{
...@@ -3899,18 +3899,18 @@ ...@@ -3899,18 +3899,18 @@
} ] } ]
},{ },{
"type" : "expand", "type" : "expand",
"comment" : "\nExpand operator tiles the input by given times number. You should set times\nnumber for each dimension by providing attribute 'expand_times'. The rank of X\nshould be in [1, 6]. Please notice that size of 'expand_times' must be same with\nX's rank. Following is a using case:\n\nInput(X) is a 3-D tensor with shape [2, 3, 1]:\n\n [\n [[1], [2], [3]],\n [[4], [5], [6]]\n ]\n\nAttr(expand_times): [1, 2, 2]\n\nOutput(Out) is a 3-D tensor with shape [2, 6, 2]:\n\n [\n [[1, 1], [2, 2], [3, 3], [1, 1], [2, 2], [3, 3]],\n [[4, 4], [5, 5], [6, 6], [4, 4], [5, 5], [6, 6]]\n ]\n\n", "comment" : "\nExpand operator tiles the input by given times number. You should set times\nnumber for each dimension by providing attribute 'expand_times'. The rank of X\nshould be in [1, 6]. Please note that size of 'expand_times' must be the same\nwith X's rank. Following is a using case:\n\nInput(X) is a 3-D tensor with shape [2, 3, 1]:\n\n [\n [[1], [2], [3]],\n [[4], [5], [6]]\n ]\n\nAttr(expand_times): [1, 2, 2]\n\nOutput(Out) is a 3-D tensor with shape [2, 6, 2]:\n\n [\n [[1, 1], [2, 2], [3, 3], [1, 1], [2, 2], [3, 3]],\n [[4, 4], [5, 5], [6, 6], [4, 4], [5, 5], [6, 6]]\n ]\n\n",
"inputs" : [ "inputs" : [
{ {
"name" : "X", "name" : "X",
"comment" : "(Tensor, default Tensor<float>) A tensor with rank in [1, 6].X is the input tensor to be expanded.", "comment" : "(Tensor, default Tensor<float>). A tensor with rank in [1, 6].X is the input to be expanded.",
"duplicable" : 0, "duplicable" : 0,
"intermediate" : 0 "intermediate" : 0
} ], } ],
"outputs" : [ "outputs" : [
{ {
"name" : "Out", "name" : "Out",
"comment" : "(Tensor, default Tensor<float>) A tensor with rank in [1, 6].The rank of Output(Out) is same as Input(X) except that each dimension size of Output(Out) is equal to corresponding dimension size of Input(X) multiplying corresponding value of Attr(expand_times).", "comment" : "(Tensor, default Tensor<float>). A tensor with rank in [1, 6].The rank of Output(Out) have the same with Input(X). After expanding, size of each dimension of Output(Out) is equal to size of the corresponding dimension of Input(X) multiplying the corresponding value given by Attr(expand_times).",
"duplicable" : 0, "duplicable" : 0,
"intermediate" : 0 "intermediate" : 0
} ], } ],
...@@ -3923,23 +3923,23 @@ ...@@ -3923,23 +3923,23 @@
} ] } ]
},{ },{
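The expand operator's tiling described above can be sketched recursively in plain Python for nested-list tensors (illustrative only, not the fluid kernel):

```python
def expand(x, expand_times):
    """Tile a nested-list tensor: dimension i of the output is dimension i
    of the input repeated expand_times[i] times, per the expand operator
    comment above (illustrative sketch)."""
    if not expand_times:        # times exhausted: x is a scalar element
        return x
    tiled = [expand(item, expand_times[1:]) for item in x]
    return tiled * expand_times[0]
```

Applied to the documented example, a shape [2, 3, 1] input with expand_times [1, 2, 2] yields the shape [2, 6, 2] output shown in the comment.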
"type" : "elementwise_min", "type" : "elementwise_min",
"comment" : "\nLimited Elementwise Max Operator.\n\nThe equation is:\n\n.. math::\n Out = min(X, Y)\n\nX is a tensor of any dimension and the dimensions of tensor Y must be smaller than\nor equal to the dimensions of X. \n\nThere are two cases for this operator:\n1. The shape of Y is same with X;\n2. The shape of Y is a subset of X.\n\nFor case 2:\nY will be broadcasted to match the shape of X and axis should be \nthe starting dimension index for broadcasting Y onto X.\n\nFor example\n .. code-block:: python\n\n shape(X) = (2, 3, 4, 5), shape(Y) = (,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (5,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)\n shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1\n shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0\n\nEither of the inputs X and Y or none can carry the LoD (Level of Details) information. However, the output only shares the LoD information with input X.\n\n", "comment" : "\nLimited Elementwise Max Operator.\n\nThe equation is:\n\n$$Out = min(X, Y)$$\n\n$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be\nsmaller than or equal to the dimensions of $X$.\n\nThere are two cases for this operator:\n1. The shape of $Y$ is same with $X$;\n2. The shape of $Y$ is a subset of $X$.\n\nFor case 2:\n$Y$ will be broadcasted to match the shape of $X$ and axis should be\nset to index of the start dimension to broadcast $Y$ onto $X$.\n\nFor example\n .. code-block:: python\n\n shape(X) = (2, 3, 4, 5), shape(Y) = (,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (5,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)\n shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1\n shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0\n\nEither of the inputs $X$ and $Y$ or none can carry the LoD (Level of Details)\ninformation. However, the output only shares the LoD information with input $X$.\n\n",
"inputs" : [ "inputs" : [
{ {
"name" : "X", "name" : "X",
"comment" : "(Tensor) The first input tensor of elementwise op", "comment" : "(Tensor), The first input tensor of elementwise op.",
"duplicable" : 0, "duplicable" : 0,
"intermediate" : 0 "intermediate" : 0
}, { }, {
"name" : "Y", "name" : "Y",
"comment" : "(Tensor) The second input tensor of elementwise op", "comment" : "(Tensor), The second input tensor of elementwise op.",
"duplicable" : 0, "duplicable" : 0,
"intermediate" : 0 "intermediate" : 0
} ], } ],
"outputs" : [ "outputs" : [
{ {
"name" : "Out", "name" : "Out",
"comment" : "The output of elementwise op", "comment" : "The output of elementwise op.",
"duplicable" : 0, "duplicable" : 0,
"intermediate" : 0 "intermediate" : 0
} ], } ],
...@@ -3947,28 +3947,28 @@ ...@@ -3947,28 +3947,28 @@
{ {
"name" : "axis", "name" : "axis",
"type" : "int", "type" : "int",
"comment" : "(int, default -1) The starting dimension index for broadcasting Y onto X", "comment" : "(int, default -1). The start dimension index for broadcasting Y onto X.",
"generated" : 0 "generated" : 0
} ] } ]
},{ },{
"type" : "elementwise_div", "type" : "elementwise_div",
"comment" : "\nLimited Elementwise Div Operator.\n\nThe equation is:\n\n.. math::\n Out = X / Y\n\nX is a tensor of any dimension and the dimensions of tensor Y must be smaller than\nor equal to the dimensions of X. \n\nThere are two cases for this operator:\n1. The shape of Y is same with X;\n2. The shape of Y is a subset of X.\n\nFor case 2:\nY will be broadcasted to match the shape of X and axis should be \nthe starting dimension index for broadcasting Y onto X.\n\nFor example\n .. code-block:: python\n\n shape(X) = (2, 3, 4, 5), shape(Y) = (,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (5,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)\n shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1\n shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0\n\nEither of the inputs X and Y or none can carry the LoD (Level of Details) information. However, the output only shares the LoD information with input X.\n\n", "comment" : "\nLimited Elementwise Div Operator.\n\nThe equation is:\n\n$$Out = X / Y$$\n\n$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be\nsmaller than or equal to the dimensions of $X$.\n\nThere are two cases for this operator:\n1. The shape of $Y$ is same with $X$;\n2. The shape of $Y$ is a subset of $X$.\n\nFor case 2:\n$Y$ will be broadcasted to match the shape of $X$ and axis should be\nset to index of the start dimension to broadcast $Y$ onto $X$.\n\nFor example\n .. code-block:: python\n\n shape(X) = (2, 3, 4, 5), shape(Y) = (,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (5,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)\n shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1\n shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0\n\nEither of the inputs $X$ and $Y$ or none can carry the LoD (Level of Details)\ninformation. However, the output only shares the LoD information with input $X$.\n\n",
"inputs" : [ "inputs" : [
{ {
"name" : "X", "name" : "X",
"comment" : "(Tensor) The first input tensor of elementwise op", "comment" : "(Tensor), The first input tensor of elementwise op.",
"duplicable" : 0, "duplicable" : 0,
"intermediate" : 0 "intermediate" : 0
}, { }, {
"name" : "Y", "name" : "Y",
"comment" : "(Tensor) The second input tensor of elementwise op", "comment" : "(Tensor), The second input tensor of elementwise op.",
"duplicable" : 0, "duplicable" : 0,
"intermediate" : 0 "intermediate" : 0
} ], } ],
"outputs" : [ "outputs" : [
{ {
"name" : "Out", "name" : "Out",
"comment" : "The output of elementwise op", "comment" : "The output of elementwise op.",
"duplicable" : 0, "duplicable" : 0,
"intermediate" : 0 "intermediate" : 0
} ], } ],
...@@ -3976,28 +3976,28 @@ ...@@ -3976,28 +3976,28 @@
{ {
"name" : "axis", "name" : "axis",
"type" : "int", "type" : "int",
"comment" : "(int, default -1) The starting dimension index for broadcasting Y onto X", "comment" : "(int, default -1). The start dimension index for broadcasting Y onto X.",
"generated" : 0 "generated" : 0
} ] } ]
},{ },{
"type" : "elementwise_add", "type" : "elementwise_add",
"comment" : "\nLimited Elementwise Add Operator.\n\nThe equation is:\n\n.. math::\n Out = X + Y\n\nX is a tensor of any dimension and the dimensions of tensor Y must be smaller than\nor equal to the dimensions of X. \n\nThere are two cases for this operator:\n1. The shape of Y is same with X;\n2. The shape of Y is a subset of X.\n\nFor case 2:\nY will be broadcasted to match the shape of X and axis should be \nthe starting dimension index for broadcasting Y onto X.\n\nFor example\n .. code-block:: python\n\n shape(X) = (2, 3, 4, 5), shape(Y) = (,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (5,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)\n shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1\n shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0\n\nEither of the inputs X and Y or none can carry the LoD (Level of Details) information. However, the output only shares the LoD information with input X.\n\n", "comment" : "\nLimited Elementwise Add Operator.\n\nThe equation is:\n\n$$Out = X + Y$$\n\n$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be\nsmaller than or equal to the dimensions of $X$.\n\nThere are two cases for this operator:\n1. The shape of $Y$ is same with $X$;\n2. The shape of $Y$ is a subset of $X$.\n\nFor case 2:\n$Y$ will be broadcasted to match the shape of $X$ and axis should be\nset to index of the start dimension to broadcast $Y$ onto $X$.\n\nFor example\n .. code-block:: python\n\n shape(X) = (2, 3, 4, 5), shape(Y) = (,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (5,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)\n shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1\n shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0\n\nEither of the inputs $X$ and $Y$ or none can carry the LoD (Level of Details)\ninformation. However, the output only shares the LoD information with input $X$.\n\n",
"inputs" : [ "inputs" : [
{ {
"name" : "X", "name" : "X",
"comment" : "(Tensor) The first input tensor of elementwise op", "comment" : "(Tensor), The first input tensor of elementwise op.",
"duplicable" : 0, "duplicable" : 0,
"intermediate" : 0 "intermediate" : 0
}, { }, {
"name" : "Y", "name" : "Y",
"comment" : "(Tensor) The second input tensor of elementwise op", "comment" : "(Tensor), The second input tensor of elementwise op.",
"duplicable" : 0, "duplicable" : 0,
"intermediate" : 0 "intermediate" : 0
} ], } ],
"outputs" : [ "outputs" : [
{ {
"name" : "Out", "name" : "Out",
"comment" : "The output of elementwise op", "comment" : "The output of elementwise op.",
"duplicable" : 0, "duplicable" : 0,
"intermediate" : 0 "intermediate" : 0
} ], } ],
...@@ -4005,7 +4005,7 @@ ...@@ -4005,7 +4005,7 @@
{ {
"name" : "axis", "name" : "axis",
"type" : "int", "type" : "int",
"comment" : "(int, default -1) The starting dimension index for broadcasting Y onto X", "comment" : "(int, default -1). The start dimension index for broadcasting Y onto X.",
"generated" : 0 "generated" : 0
} ] } ]
},{ },{
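The broadcasting rule repeated in the elementwise operator comments above (Y is aligned to X starting at `axis`, or to the trailing dimensions of X when `axis` is -1) can be sketched in plain Python. The helper below is a hypothetical illustration of that shape check, not part of Paddle's API:

```python
def can_broadcast(x_shape, y_shape, axis=-1):
    """Return True if Y (shape y_shape) can be broadcast onto X (shape
    x_shape) starting at dimension `axis`; axis == -1 aligns Y with the
    trailing dimensions of X, as in the operator comments above."""
    if axis == -1:
        axis = len(x_shape) - len(y_shape)
    return (0 <= axis
            and axis + len(y_shape) <= len(x_shape)
            and all(x_shape[axis + i] == d for i, d in enumerate(y_shape)))

# The shape cases listed in the operator comments:
assert can_broadcast((2, 3, 4, 5), ())               # scalar Y
assert can_broadcast((2, 3, 4, 5), (5,))             # trailing dimension
assert can_broadcast((2, 3, 4, 5), (4, 5))           # trailing dimensions
assert can_broadcast((2, 3, 4, 5), (3, 4), axis=1)   # explicit start index
assert can_broadcast((2, 3, 4, 5), (2,), axis=0)
assert not can_broadcast((2, 3, 4, 5), (3, 4))       # needs axis=1
```

Note that, unlike NumPy-style broadcasting, this rule matches dimensions by position from `axis` and does not treat size-1 dimensions specially.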
...@@ -499,3 +499,8 @@ swish ...@@ -499,3 +499,8 @@ swish
------ ------
.. autofunction:: paddle.v2.fluid.layers.swish .. autofunction:: paddle.v2.fluid.layers.swish
:noindex: :noindex:
l2_normalize
------------
.. autofunction:: paddle.v2.fluid.layers.l2_normalize
:noindex:
...@@ -379,9 +379,9 @@ constructor.</p> ...@@ -379,9 +379,9 @@ constructor.</p>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">data</code><span class="sig-paren">(</span><em>name</em>, <em>shape</em>, <em>append_batch_size=True</em>, <em>dtype='float32'</em>, <em>lod_level=0</em>, <em>type=VarType.LOD_TENSOR</em>, <em>stop_gradient=True</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">data</code><span class="sig-paren">(</span><em>name</em>, <em>shape</em>, <em>append_batch_size=True</em>, <em>dtype='float32'</em>, <em>lod_level=0</em>, <em>type=VarType.LOD_TENSOR</em>, <em>stop_gradient=True</em><span class="sig-paren">)</span></dt>
<dd><p><strong>Data Layer</strong></p> <dd><p><strong>Data Layer</strong></p>
<p>This function takes in the input and based on whether data has <p>This function takes in the input and based on whether data has
to be returned back as a minibatch, it creates the global variable using to be returned back as a minibatch, it creates the global variable by using
the helper functions. The global variables can be accessed by all the the helper functions. The global variables can be accessed by all the
following operations and layers in the graph.</p> following operators in the graph.</p>
<p>All the input variables of this function are passed in as local variables <p>All the input variables of this function are passed in as local variables
to the LayerHelper constructor.</p> to the LayerHelper constructor.</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -495,16 +495,15 @@ flattened. See comments of <cite>x_num_col_dims</cite> for more details.</li> ...@@ -495,16 +495,15 @@ flattened. See comments of <cite>x_num_col_dims</cite> for more details.</li>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">elementwise_add</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">elementwise_add</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Limited Elementwise Add Operator.</p> <dd><p>Limited Elementwise Add Operator.</p>
<p>The equation is:</p> <p>The equation is:</p>
<div class="math"> <p>$$Out = X + Y$$</p>
\[Out = X + Y\]</div> <p>$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be
<p>X is a tensor of any dimension and the dimensions of tensor Y must be smaller than smaller than or equal to the dimensions of $X$.</p>
or equal to the dimensions of X.</p>
<p>There are two cases for this operator: <p>There are two cases for this operator:
1. The shape of Y is same with X; 1. The shape of $Y$ is the same as that of $X$;
2. The shape of Y is a subset of X.</p> 2. The shape of $Y$ is a subset of $X$.</p>
<p>For case 2: <p>For case 2:
Y will be broadcasted to match the shape of X and axis should be $Y$ will be broadcasted to match the shape of $X$ and axis should be
the starting dimension index for broadcasting Y onto X.</p> set to the index of the start dimension to broadcast $Y$ onto $X$.</p>
<dl class="docutils"> <dl class="docutils">
<dt>For example</dt> <dt>For example</dt>
<dd><div class="first last highlight-python"><div class="highlight"><pre><span></span><span class="n">shape</span><span class="p">(</span><span class="n">X</span><span class="p">)</span> <span class="o">=</span> <span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">5</span><span class="p">),</span> <span class="n">shape</span><span class="p">(</span><span class="n">Y</span><span class="p">)</span> <span class="o">=</span> <span class="p">(,)</span> <dd><div class="first last highlight-python"><div class="highlight"><pre><span></span><span class="n">shape</span><span class="p">(</span><span class="n">X</span><span class="p">)</span> <span class="o">=</span> <span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">5</span><span class="p">),</span> <span class="n">shape</span><span class="p">(</span><span class="n">Y</span><span class="p">)</span> <span class="o">=</span> <span class="p">(,)</span>
...@@ -516,21 +515,22 @@ the starting dimension index for broadcasting Y onto X.</p> ...@@ -516,21 +515,22 @@ the starting dimension index for broadcasting Y onto X.</p>
</div> </div>
</dd> </dd>
</dl> </dl>
<p>Either of the inputs X and Y or none can carry the LoD (Level of Details) information. However, the output only shares the LoD information with input X.</p> <p>Either of the inputs $X$ and $Y$ or none can carry the LoD (Level of Details)
information. However, the output only shares the LoD information with input $X$.</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
<col class="field-name" /> <col class="field-name" />
<col class="field-body" /> <col class="field-body" />
<tbody valign="top"> <tbody valign="top">
<tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first simple"> <tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first simple">
<li><strong>x</strong> &#8211; (Tensor) The first input tensor of elementwise op <li><strong>x</strong> &#8211; (Tensor), The first input tensor of elementwise op.
Duplicable: False Optional: False</li> Duplicable: False Optional: False</li>
<li><strong>y</strong> &#8211; (Tensor) The second input tensor of elementwise op <li><strong>y</strong> &#8211; (Tensor), The second input tensor of elementwise op.
Duplicable: False Optional: False</li> Duplicable: False Optional: False</li>
<li><strong>axis</strong> (<em>INT</em>) &#8211; (int, default -1) The starting dimension index for broadcasting Y onto X</li> <li><strong>axis</strong> (<em>INT</em>) &#8211; (int, default -1). The start dimension index for broadcasting Y onto X.</li>
</ul> </ul>
</td> </td>
</tr> </tr>
<tr class="field-even field"><th class="field-name">返回:</th><td class="field-body"><p class="first last">The output of elementwise op</p> <tr class="field-even field"><th class="field-name">返回:</th><td class="field-body"><p class="first last">The output of elementwise op.</p>
</td> </td>
</tr> </tr>
</tbody> </tbody>
...@@ -545,16 +545,15 @@ Duplicable: False Optional: False</li> ...@@ -545,16 +545,15 @@ Duplicable: False Optional: False</li>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">elementwise_sub</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">elementwise_sub</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Limited Elementwise Sub Operator.</p> <dd><p>Limited Elementwise Sub Operator.</p>
<p>The equation is:</p> <p>The equation is:</p>
<div class="math"> <p>$$Out = X - Y$$</p>
\[Out = X - Y\]</div> <p>$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be
<p>X is a tensor of any dimension and the dimensions of tensor Y must be smaller than smaller than or equal to the dimensions of $X$.</p>
or equal to the dimensions of X.</p>
<p>There are two cases for this operator: <p>There are two cases for this operator:
1. The shape of Y is same with X; 1. The shape of $Y$ is the same as that of $X$;
2. The shape of Y is a subset of X.</p> 2. The shape of $Y$ is a subset of $X$.</p>
<p>For case 2: <p>For case 2:
Y will be broadcasted to match the shape of X and axis should be $Y$ will be broadcasted to match the shape of $X$ and axis should be
the starting dimension index for broadcasting Y onto X.</p> set to the index of the start dimension to broadcast $Y$ onto $X$.</p>
<dl class="docutils"> <dl class="docutils">
<dt>For example</dt> <dt>For example</dt>
<dd><div class="first last highlight-python"><div class="highlight"><pre><span></span><span class="n">shape</span><span class="p">(</span><span class="n">X</span><span class="p">)</span> <span class="o">=</span> <span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">5</span><span class="p">),</span> <span class="n">shape</span><span class="p">(</span><span class="n">Y</span><span class="p">)</span> <span class="o">=</span> <span class="p">(,)</span> <dd><div class="first last highlight-python"><div class="highlight"><pre><span></span><span class="n">shape</span><span class="p">(</span><span class="n">X</span><span class="p">)</span> <span class="o">=</span> <span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">5</span><span class="p">),</span> <span class="n">shape</span><span class="p">(</span><span class="n">Y</span><span class="p">)</span> <span class="o">=</span> <span class="p">(,)</span>
...@@ -566,21 +565,22 @@ the starting dimension index for broadcasting Y onto X.</p> ...@@ -566,21 +565,22 @@ the starting dimension index for broadcasting Y onto X.</p>
</div> </div>
</dd> </dd>
</dl> </dl>
<p>Either of the inputs X and Y or none can carry the LoD (Level of Details) information. However, the output only shares the LoD information with input X.</p> <p>Either of the inputs $X$ and $Y$ or none can carry the LoD (Level of Details)
information. However, the output only shares the LoD information with input $X$.</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
<col class="field-name" /> <col class="field-name" />
<col class="field-body" /> <col class="field-body" />
<tbody valign="top"> <tbody valign="top">
<tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first simple"> <tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first simple">
<li><strong>x</strong> &#8211; (Tensor) The first input tensor of elementwise op <li><strong>x</strong> &#8211; (Tensor), The first input tensor of elementwise op.
Duplicable: False Optional: False</li> Duplicable: False Optional: False</li>
<li><strong>y</strong> &#8211; (Tensor) The second input tensor of elementwise op <li><strong>y</strong> &#8211; (Tensor), The second input tensor of elementwise op.
Duplicable: False Optional: False</li> Duplicable: False Optional: False</li>
<li><strong>axis</strong> (<em>INT</em>) &#8211; (int, default -1) The starting dimension index for broadcasting Y onto X</li> <li><strong>axis</strong> (<em>INT</em>) &#8211; (int, default -1). The start dimension index for broadcasting Y onto X.</li>
</ul> </ul>
</td> </td>
</tr> </tr>
<tr class="field-even field"><th class="field-name">返回:</th><td class="field-body"><p class="first last">The output of elementwise op</p> <tr class="field-even field"><th class="field-name">返回:</th><td class="field-body"><p class="first last">The output of elementwise op.</p>
</td> </td>
</tr> </tr>
</tbody> </tbody>
...@@ -595,16 +595,15 @@ Duplicable: False Optional: False</li> ...@@ -595,16 +595,15 @@ Duplicable: False Optional: False</li>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">elementwise_mul</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">elementwise_mul</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Limited Elementwise Mul Operator.</p> <dd><p>Limited Elementwise Mul Operator.</p>
<p>The equation is:</p> <p>The equation is:</p>
<div class="math"> <p>$$Out = X \odot Y$$</p>
\[Out = X \odot\ Y\]</div> <p>$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be
<p>X is a tensor of any dimension and the dimensions of tensor Y must be smaller than smaller than or equal to the dimensions of $X$.</p>
or equal to the dimensions of X.</p>
<p>There are two cases for this operator: <p>There are two cases for this operator:
1. The shape of Y is same with X; 1. The shape of $Y$ is the same as that of $X$;
2. The shape of Y is a subset of X.</p> 2. The shape of $Y$ is a subset of $X$.</p>
<p>For case 2: <p>For case 2:
Y will be broadcasted to match the shape of X and axis should be $Y$ will be broadcasted to match the shape of $X$ and axis should be
the starting dimension index for broadcasting Y onto X.</p> set to the index of the start dimension to broadcast $Y$ onto $X$.</p>
<dl class="docutils"> <dl class="docutils">
<dt>For example</dt> <dt>For example</dt>
<dd><div class="first last highlight-python"><div class="highlight"><pre><span></span><span class="n">shape</span><span class="p">(</span><span class="n">X</span><span class="p">)</span> <span class="o">=</span> <span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">5</span><span class="p">),</span> <span class="n">shape</span><span class="p">(</span><span class="n">Y</span><span class="p">)</span> <span class="o">=</span> <span class="p">(,)</span> <dd><div class="first last highlight-python"><div class="highlight"><pre><span></span><span class="n">shape</span><span class="p">(</span><span class="n">X</span><span class="p">)</span> <span class="o">=</span> <span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">5</span><span class="p">),</span> <span class="n">shape</span><span class="p">(</span><span class="n">Y</span><span class="p">)</span> <span class="o">=</span> <span class="p">(,)</span>
...@@ -616,21 +615,22 @@ the starting dimension index for broadcasting Y onto X.</p> ...@@ -616,21 +615,22 @@ the starting dimension index for broadcasting Y onto X.</p>
</div> </div>
</dd> </dd>
</dl> </dl>
<p>Either of the inputs X and Y or none can carry the LoD (Level of Details) information. However, the output only shares the LoD information with input X.</p> <p>Either of the inputs $X$ and $Y$ or none can carry the LoD (Level of Details)
information. However, the output only shares the LoD information with input $X$.</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
<col class="field-name" /> <col class="field-name" />
<col class="field-body" /> <col class="field-body" />
<tbody valign="top"> <tbody valign="top">
<tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first simple"> <tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first simple">
<li><strong>x</strong> &#8211; (Tensor) The first input tensor of elementwise op <li><strong>x</strong> &#8211; (Tensor), The first input tensor of elementwise op.
Duplicable: False Optional: False</li> Duplicable: False Optional: False</li>
<li><strong>y</strong> &#8211; (Tensor) The second input tensor of elementwise op <li><strong>y</strong> &#8211; (Tensor), The second input tensor of elementwise op.
Duplicable: False Optional: False</li> Duplicable: False Optional: False</li>
<li><strong>axis</strong> (<em>INT</em>) &#8211; (int, default -1) The starting dimension index for broadcasting Y onto X</li> <li><strong>axis</strong> (<em>INT</em>) &#8211; (int, default -1). The start dimension index for broadcasting Y onto X.</li>
</ul> </ul>
</td> </td>
</tr> </tr>
<tr class="field-even field"><th class="field-name">返回:</th><td class="field-body"><p class="first last">The output of elementwise op</p> <tr class="field-even field"><th class="field-name">返回:</th><td class="field-body"><p class="first last">The output of elementwise op.</p>
</td> </td>
</tr> </tr>
</tbody> </tbody>
...@@ -645,16 +645,15 @@ Duplicable: False Optional: False</li> ...@@ -645,16 +645,15 @@ Duplicable: False Optional: False</li>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">elementwise_div</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">elementwise_div</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Limited Elementwise Div Operator.</p> <dd><p>Limited Elementwise Div Operator.</p>
<p>The equation is:</p> <p>The equation is:</p>
<div class="math"> <p>$$Out = X / Y$$</p>
\[Out = X / Y\]</div> <p>$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be
<p>X is a tensor of any dimension and the dimensions of tensor Y must be smaller than smaller than or equal to the dimensions of $X$.</p>
or equal to the dimensions of X.</p>
<p>There are two cases for this operator: <p>There are two cases for this operator:
1. The shape of Y is same with X; 1. The shape of $Y$ is the same as that of $X$;
2. The shape of Y is a subset of X.</p> 2. The shape of $Y$ is a subset of $X$.</p>
<p>For case 2: <p>For case 2:
Y will be broadcasted to match the shape of X and axis should be $Y$ will be broadcasted to match the shape of $X$ and axis should be
the starting dimension index for broadcasting Y onto X.</p> set to the index of the start dimension to broadcast $Y$ onto $X$.</p>
<dl class="docutils"> <dl class="docutils">
<dt>For example</dt> <dt>For example</dt>
<dd><div class="first last highlight-python"><div class="highlight"><pre><span></span><span class="n">shape</span><span class="p">(</span><span class="n">X</span><span class="p">)</span> <span class="o">=</span> <span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">5</span><span class="p">),</span> <span class="n">shape</span><span class="p">(</span><span class="n">Y</span><span class="p">)</span> <span class="o">=</span> <span class="p">(,)</span> <dd><div class="first last highlight-python"><div class="highlight"><pre><span></span><span class="n">shape</span><span class="p">(</span><span class="n">X</span><span class="p">)</span> <span class="o">=</span> <span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">5</span><span class="p">),</span> <span class="n">shape</span><span class="p">(</span><span class="n">Y</span><span class="p">)</span> <span class="o">=</span> <span class="p">(,)</span>
...@@ -666,21 +665,22 @@ the starting dimension index for broadcasting Y onto X.</p> ...@@ -666,21 +665,22 @@ the starting dimension index for broadcasting Y onto X.</p>
</div> </div>
</dd> </dd>
</dl> </dl>
<p>Either of the inputs X and Y or none can carry the LoD (Level of Details) information. However, the output only shares the LoD information with input X.</p> <p>Either of the inputs $X$ and $Y$ or none can carry the LoD (Level of Details)
information. However, the output only shares the LoD information with input $X$.</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
<col class="field-name" /> <col class="field-name" />
<col class="field-body" /> <col class="field-body" />
<tbody valign="top"> <tbody valign="top">
<tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first simple"> <tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first simple">
<li><strong>x</strong> &#8211; (Tensor) The first input tensor of elementwise op <li><strong>x</strong> &#8211; (Tensor), The first input tensor of elementwise op.
Duplicable: False Optional: False</li> Duplicable: False Optional: False</li>
<li><strong>y</strong> &#8211; (Tensor) The second input tensor of elementwise op <li><strong>y</strong> &#8211; (Tensor), The second input tensor of elementwise op.
Duplicable: False Optional: False</li> Duplicable: False Optional: False</li>
<li><strong>axis</strong> (<em>INT</em>) &#8211; (int, default -1) The starting dimension index for broadcasting Y onto X</li> <li><strong>axis</strong> (<em>INT</em>) &#8211; (int, default -1). The start dimension index for broadcasting Y onto X.</li>
</ul> </ul>
</td> </td>
</tr> </tr>
<tr class="field-even field"><th class="field-name">返回:</th><td class="field-body"><p class="first last">The output of elementwise op</p> <tr class="field-even field"><th class="field-name">返回:</th><td class="field-body"><p class="first last">The output of elementwise op.</p>
</td> </td>
</tr> </tr>
</tbody> </tbody>
...@@ -1415,7 +1415,7 @@ then output is a Tensor: ...@@ -1415,7 +1415,7 @@ then output is a Tensor:
<h2>pool2d<a class="headerlink" href="#pool2d" title="永久链接至标题"></a></h2> <h2>pool2d<a class="headerlink" href="#pool2d" title="永久链接至标题"></a></h2>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">pool2d</code><span class="sig-paren">(</span><em>input</em>, <em>pool_size</em>, <em>pool_type</em>, <em>pool_stride=None</em>, <em>pool_padding=None</em>, <em>global_pooling=False</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">pool2d</code><span class="sig-paren">(</span><em>input</em>, <em>pool_size</em>, <em>pool_type</em>, <em>pool_stride=None</em>, <em>pool_padding=None</em>, <em>global_pooling=False</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>This function adds the operator for pooling in 2 dimensions, using the <dd><p>This function adds the operator for pooling in 2 dimensions, using the
pooling configurations mentioned in input parameters.</p> pooling configurations mentioned in input parameters.</p>
</dd></dl> </dd></dl>
...@@ -1425,7 +1425,7 @@ pooling configurations mentioned in input parameters.</p> ...@@ -1425,7 +1425,7 @@ pooling configurations mentioned in input parameters.</p>
<h2>batch_norm<a class="headerlink" href="#batch-norm" title="永久链接至标题"></a></h2> <h2>batch_norm<a class="headerlink" href="#batch-norm" title="永久链接至标题"></a></h2>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">batch_norm</code><span class="sig-paren">(</span><em>input</em>, <em>act=None</em>, <em>is_test=False</em>, <em>momentum=0.9</em>, <em>epsilon=1e-05</em>, <em>param_attr=None</em>, <em>bias_attr=None</em>, <em>data_layout='NCHW'</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">batch_norm</code><span class="sig-paren">(</span><em>input</em>, <em>act=None</em>, <em>is_test=False</em>, <em>momentum=0.9</em>, <em>epsilon=1e-05</em>, <em>param_attr=None</em>, <em>bias_attr=None</em>, <em>data_layout='NCHW'</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>This function helps create an operator to implement <dd><p>This function helps create an operator to implement
the BatchNorm layer using the configurations from the input parameters.</p> the BatchNorm layer using the configurations from the input parameters.</p>
</dd></dl> </dd></dl>
...@@ -1435,7 +1435,7 @@ the BatchNorm layer using the configurations from the input parameters.</p> ...@@ -1435,7 +1435,7 @@ the BatchNorm layer using the configurations from the input parameters.</p>
<h2>beam_search_decode<a class="headerlink" href="#beam-search-decode" title="永久链接至标题"></a></h2> <h2>beam_search_decode<a class="headerlink" href="#beam-search-decode" title="永久链接至标题"></a></h2>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">beam_search_decode</code><span class="sig-paren">(</span><em>ids</em>, <em>scores</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">beam_search_decode</code><span class="sig-paren">(</span><em>ids</em>, <em>scores</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd></dd></dl> <dd></dd></dl>
</div> </div>
...@@ -2003,7 +2003,7 @@ to compute the length.</td> ...@@ -2003,7 +2003,7 @@ to compute the length.</td>
<h2>conv2d_transpose<a class="headerlink" href="#conv2d-transpose" title="永久链接至标题"></a></h2> <h2>conv2d_transpose<a class="headerlink" href="#conv2d-transpose" title="永久链接至标题"></a></h2>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">conv2d_transpose</code><span class="sig-paren">(</span><em>input</em>, <em>num_filters</em>, <em>output_size=None</em>, <em>filter_size=None</em>, <em>padding=None</em>, <em>stride=None</em>, <em>dilation=None</em>, <em>param_attr=None</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">conv2d_transpose</code><span class="sig-paren">(</span><em>input</em>, <em>num_filters</em>, <em>output_size=None</em>, <em>filter_size=None</em>, <em>padding=None</em>, <em>stride=None</em>, <em>dilation=None</em>, <em>param_attr=None</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>The transpose of the conv2d layer.</p> <dd><p>The transpose of the conv2d layer.</p>
<p>This layer is also known as the deconvolution layer.</p> <p>This layer is also known as the deconvolution layer.</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -2031,8 +2031,8 @@ stride_H = stride_W = stride.</li> ...@@ -2031,8 +2031,8 @@ stride_H = stride_W = stride.</li>
contain two integers, (dilation_H, dilation_W). Otherwise, the contain two integers, (dilation_H, dilation_W). Otherwise, the
dilation_H = dilation_W = dilation.</li> dilation_H = dilation_W = dilation.</li>
<li><strong>param_attr</strong> &#8211; Parameter Attribute.</li> <li><strong>param_attr</strong> &#8211; Parameter Attribute.</li>
<li><strong>main_program</strong> (<em>Program</em>) &#8211; the main program</li> <li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer
<li><strong>startup_program</strong> (<em>Program</em>) &#8211; the startup program</li> will be named automatically.</li>
</ul> </ul>
</td> </td>
</tr> </tr>
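The spatial size of the deconvolution output is determined by the filter_size, stride, padding and dilation parameters above. As a hedged sketch only (the exact formula Paddle uses is in an elided part of this page; this is the conventional transposed-convolution rule, and the helper name is hypothetical):

```python
def deconv_output_size(in_size, filter_size, stride=1, padding=0, dilation=1):
    # Conventional transposed-convolution output-size rule (an assumption
    # here, not quoted from the Paddle docs): it inverts the forward conv
    # rule out = (in + 2*padding - dilated_filter) // stride + 1.
    dilated_filter = dilation * (filter_size - 1) + 1
    return (in_size - 1) * stride - 2 * padding + dilated_filter
```

For example, a 4-pixel input with a 3x3 filter, stride 2 and padding 1 maps to a 7-pixel output under this rule.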
...@@ -2051,7 +2051,7 @@ dilation_H = dilation_W = dilation.</li> ...@@ -2051,7 +2051,7 @@ dilation_H = dilation_W = dilation.</li>
<h2>sequence_expand<a class="headerlink" href="#sequence-expand" title="永久链接至标题"></a></h2> <h2>sequence_expand<a class="headerlink" href="#sequence-expand" title="永久链接至标题"></a></h2>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">sequence_expand</code><span class="sig-paren">(</span><em>x</em>, <em>y</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">sequence_expand</code><span class="sig-paren">(</span><em>x</em>, <em>y</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>Sequence Expand Layer. This layer expands the input variable <strong>x</strong> <dd><p>Sequence Expand Layer. This layer expands the input variable <strong>x</strong>
according to the LoD information of <strong>y</strong>. The following examples according to the LoD information of <strong>y</strong>. The following examples
explain how sequence_expand works:</p> explain how sequence_expand works:</p>
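As a toy pure-Python sketch of the expansion rule (a hypothetical helper, not the real op, and with y's LoD simplified to a flat list of sequence lengths): entry i of x is repeated once for every element of the i-th sequence described by y's LoD.

```python
def sequence_expand_toy(x, y_lod_lengths):
    # Toy model of sequence_expand: x[i] is repeated as many times as the
    # length of the i-th sequence in y's LoD. y_lod_lengths is that list of
    # lengths (a simplification of the real nested LoD structure).
    out = []
    for item, length in zip(x, y_lod_lengths):
        out.extend([item] * length)
    return out
```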
...@@ -2097,6 +2097,8 @@ explain how sequence_expand works:</p> ...@@ -2097,6 +2097,8 @@ explain how sequence_expand works:</p>
<tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first simple"> <tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first simple">
<li><strong>x</strong> (<em>Variable</em>) &#8211; The input variable which is a Tensor or LoDTensor.</li> <li><strong>x</strong> (<em>Variable</em>) &#8211; The input variable which is a Tensor or LoDTensor.</li>
<li><strong>y</strong> (<em>Variable</em>) &#8211; The input variable which is a LoDTensor.</li> <li><strong>y</strong> (<em>Variable</em>) &#8211; The input variable which is a LoDTensor.</li>
<li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer <li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer
will be named automatically.</li>
</ul> </ul>
</td> </td>
</tr> </tr>
...@@ -2175,7 +2177,7 @@ and concatenation of <span class="math">\(u_t\)</span>, <span class="math">\(r_t ...@@ -2175,7 +2177,7 @@ and concatenation of <span class="math">\(u_t\)</span>, <span class="math">\(r_t
<h2>lstm_unit<a class="headerlink" href="#lstm-unit" title="永久链接至标题"></a></h2> <h2>lstm_unit<a class="headerlink" href="#lstm-unit" title="永久链接至标题"></a></h2>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">lstm_unit</code><span class="sig-paren">(</span><em>x_t</em>, <em>hidden_t_prev</em>, <em>cell_t_prev</em>, <em>forget_bias=0.0</em>, <em>param_attr=None</em>, <em>bias_attr=None</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">lstm_unit</code><span class="sig-paren">(</span><em>x_t</em>, <em>hidden_t_prev</em>, <em>cell_t_prev</em>, <em>forget_bias=0.0</em>, <em>param_attr=None</em>, <em>bias_attr=None</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>LSTM unit layer. The equations of an LSTM step are:</p> <dd><p>LSTM unit layer. The equations of an LSTM step are:</p>
<blockquote> <blockquote>
<div><div class="math"> <div><div class="math">
...@@ -2214,6 +2216,8 @@ shape M x S, M for batch size and S for size of lstm unit.</li> ...@@ -2214,6 +2216,8 @@ shape M x S, M for batch size and S for size of lstm unit.</li>
initializer, name etc.</li> initializer, name etc.</li>
<li><strong>bias_attr</strong> (<em>ParamAttr</em>) &#8211; The attributes of bias weights. If not False, <li><strong>bias_attr</strong> (<em>ParamAttr</em>) &#8211; The attributes of bias weights. If not False,
bias weights will be created and set to the default value.</li> bias weights will be created and set to the default value.</li>
<li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer <li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer
will be named automatically.</li>
</ul> </ul>
</td> </td>
</tr> </tr>
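The step equations can be sketched numerically in plain Python. This is a simplified scalar version under stated assumptions: in the real layer a fully-connected projection of x_t and hidden_t_prev produces the four gate pre-activations, which are taken here directly as arguments, and the helper names are hypothetical.

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def lstm_unit_step(i_pre, f_pre, c_tilde_pre, o_pre, c_prev, forget_bias=0.0):
    # Scalar sketch of one LSTM step; the *_pre arguments stand in for the
    # fc projection of [x_t, hidden_t_prev] in the real layer.
    i = sigmoid(i_pre)                           # input gate
    f = sigmoid(f_pre + forget_bias)             # forget gate, with bias
    c = f * c_prev + i * math.tanh(c_tilde_pre)  # new cell state
    h = sigmoid(o_pre) * math.tanh(c)            # new hidden state
    return h, c
```

A positive forget_bias pushes the forget gate toward 1 early in training, which is the usual motivation for exposing it as a parameter.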
...@@ -2278,7 +2282,7 @@ Duplicable: False Optional: False</td> ...@@ -2278,7 +2282,7 @@ Duplicable: False Optional: False</td>
<h2>reduce_sum<a class="headerlink" href="#reduce-sum" title="永久链接至标题"></a></h2> <h2>reduce_sum<a class="headerlink" href="#reduce-sum" title="永久链接至标题"></a></h2>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">reduce_sum</code><span class="sig-paren">(</span><em>input</em>, <em>dim=None</em>, <em>keep_dim=False</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">reduce_sum</code><span class="sig-paren">(</span><em>input</em>, <em>dim=None</em>, <em>keep_dim=False</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>Computes the sum of tensor elements over the given dimension.</p> <dd><p>Computes the sum of tensor elements over the given dimension.</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
<col class="field-name" /> <col class="field-name" />
...@@ -2294,6 +2298,8 @@ the dimension to reduce is <span class="math">\(rank + dim\)</span>.</li> ...@@ -2294,6 +2298,8 @@ the dimension to reduce is <span class="math">\(rank + dim\)</span>.</li>
<li><strong>keep_dim</strong> (<em>bool</em>) &#8211; Whether to retain the reduced dimension in the <li><strong>keep_dim</strong> (<em>bool</em>) &#8211; Whether to retain the reduced dimension in the
output Tensor. The result tensor will have one fewer dimension output Tensor. The result tensor will have one fewer dimension
than the <code class="xref py py-attr docutils literal"><span class="pre">input</span></code> unless <code class="xref py py-attr docutils literal"><span class="pre">keep_dim</span></code> is true.</li> than the <code class="xref py py-attr docutils literal"><span class="pre">input</span></code> unless <code class="xref py py-attr docutils literal"><span class="pre">keep_dim</span></code> is true.</li>
<li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer <li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer
will be named automatically.</li>
</ul> </ul>
</td> </td>
</tr> </tr>
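The dim handling described above can be mimicked on a plain 2-D Python list. This is a toy sketch, not the Paddle op (keep_dim is omitted for brevity, and the helper name is hypothetical); reduce_mean, reduce_max and reduce_min below follow the same indexing rules.

```python
def reduce_sum_2d(mat, dim=None):
    # Toy 2-D model of reduce_sum's dim handling:
    if dim is None:
        return sum(sum(row) for row in mat)     # reduce over all elements
    if dim < 0:
        dim += 2                                # rank + dim for negative dim
    if dim == 0:
        return [sum(col) for col in zip(*mat)]  # collapse the row dimension
    return [sum(row) for row in mat]            # dim == 1: collapse columns
```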
...@@ -2323,7 +2329,7 @@ than the <code class="xref py py-attr docutils literal"><span class="pre">input< ...@@ -2323,7 +2329,7 @@ than the <code class="xref py py-attr docutils literal"><span class="pre">input<
<h2>reduce_mean<a class="headerlink" href="#reduce-mean" title="永久链接至标题"></a></h2> <h2>reduce_mean<a class="headerlink" href="#reduce-mean" title="永久链接至标题"></a></h2>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">reduce_mean</code><span class="sig-paren">(</span><em>input</em>, <em>dim=None</em>, <em>keep_dim=False</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">reduce_mean</code><span class="sig-paren">(</span><em>input</em>, <em>dim=None</em>, <em>keep_dim=False</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>Computes the mean of tensor elements over the given dimension.</p> <dd><p>Computes the mean of tensor elements over the given dimension.</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
<col class="field-name" /> <col class="field-name" />
...@@ -2339,6 +2345,8 @@ must be in the range <span class="math">\([-rank(input), rank(input))\)</span>. ...@@ -2339,6 +2345,8 @@ must be in the range <span class="math">\([-rank(input), rank(input))\)</span>.
<li><strong>keep_dim</strong> (<em>bool</em>) &#8211; Whether to retain the reduced dimension in the <li><strong>keep_dim</strong> (<em>bool</em>) &#8211; Whether to retain the reduced dimension in the
output Tensor. The result tensor will have one fewer dimension output Tensor. The result tensor will have one fewer dimension
than the <code class="xref py py-attr docutils literal"><span class="pre">input</span></code> unless <code class="xref py py-attr docutils literal"><span class="pre">keep_dim</span></code> is true.</li> than the <code class="xref py py-attr docutils literal"><span class="pre">input</span></code> unless <code class="xref py py-attr docutils literal"><span class="pre">keep_dim</span></code> is true.</li>
<li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer <li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer
will be named automatically.</li>
</ul> </ul>
</td> </td>
</tr> </tr>
...@@ -2368,7 +2376,7 @@ than the <code class="xref py py-attr docutils literal"><span class="pre">input< ...@@ -2368,7 +2376,7 @@ than the <code class="xref py py-attr docutils literal"><span class="pre">input<
<h2>reduce_max<a class="headerlink" href="#reduce-max" title="永久链接至标题"></a></h2> <h2>reduce_max<a class="headerlink" href="#reduce-max" title="永久链接至标题"></a></h2>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">reduce_max</code><span class="sig-paren">(</span><em>input</em>, <em>dim=None</em>, <em>keep_dim=False</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">reduce_max</code><span class="sig-paren">(</span><em>input</em>, <em>dim=None</em>, <em>keep_dim=False</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>Computes the maximum of tensor elements over the given dimension.</p> <dd><p>Computes the maximum of tensor elements over the given dimension.</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
<col class="field-name" /> <col class="field-name" />
...@@ -2384,6 +2392,8 @@ If <span class="math">\(dim &lt; 0\)</span>, the dimension to reduce is <span cl ...@@ -2384,6 +2392,8 @@ If <span class="math">\(dim &lt; 0\)</span>, the dimension to reduce is <span cl
<li><strong>keep_dim</strong> (<em>bool</em>) &#8211; Whether to retain the reduced dimension in the <li><strong>keep_dim</strong> (<em>bool</em>) &#8211; Whether to retain the reduced dimension in the
output Tensor. The result tensor will have one fewer dimension output Tensor. The result tensor will have one fewer dimension
than the <code class="xref py py-attr docutils literal"><span class="pre">input</span></code> unless <code class="xref py py-attr docutils literal"><span class="pre">keep_dim</span></code> is true.</li> than the <code class="xref py py-attr docutils literal"><span class="pre">input</span></code> unless <code class="xref py py-attr docutils literal"><span class="pre">keep_dim</span></code> is true.</li>
<li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer <li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer
will be named automatically.</li>
</ul> </ul>
</td> </td>
</tr> </tr>
...@@ -2413,7 +2423,7 @@ than the <code class="xref py py-attr docutils literal"><span class="pre">input< ...@@ -2413,7 +2423,7 @@ than the <code class="xref py py-attr docutils literal"><span class="pre">input<
<h2>reduce_min<a class="headerlink" href="#reduce-min" title="永久链接至标题"></a></h2> <h2>reduce_min<a class="headerlink" href="#reduce-min" title="永久链接至标题"></a></h2>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">reduce_min</code><span class="sig-paren">(</span><em>input</em>, <em>dim=None</em>, <em>keep_dim=False</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">reduce_min</code><span class="sig-paren">(</span><em>input</em>, <em>dim=None</em>, <em>keep_dim=False</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>Computes the minimum of tensor elements over the given dimension.</p> <dd><p>Computes the minimum of tensor elements over the given dimension.</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
<col class="field-name" /> <col class="field-name" />
...@@ -2429,6 +2439,8 @@ If <span class="math">\(dim &lt; 0\)</span>, the dimension to reduce is <span cl ...@@ -2429,6 +2439,8 @@ If <span class="math">\(dim &lt; 0\)</span>, the dimension to reduce is <span cl
<li><strong>keep_dim</strong> (<em>bool</em>) &#8211; Whether to retain the reduced dimension in the <li><strong>keep_dim</strong> (<em>bool</em>) &#8211; Whether to retain the reduced dimension in the
output Tensor. The result tensor will have one fewer dimension output Tensor. The result tensor will have one fewer dimension
than the <code class="xref py py-attr docutils literal"><span class="pre">input</span></code> unless <code class="xref py py-attr docutils literal"><span class="pre">keep_dim</span></code> is true.</li> than the <code class="xref py py-attr docutils literal"><span class="pre">input</span></code> unless <code class="xref py py-attr docutils literal"><span class="pre">keep_dim</span></code> is true.</li>
<li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer <li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer
will be named automatically.</li>
</ul> </ul>
</td> </td>
</tr> </tr>
...@@ -2458,8 +2470,8 @@ than the <code class="xref py py-attr docutils literal"><span class="pre">input< ...@@ -2458,8 +2470,8 @@ than the <code class="xref py py-attr docutils literal"><span class="pre">input<
<h2>split<a class="headerlink" href="#split" title="永久链接至标题"></a></h2> <h2>split<a class="headerlink" href="#split" title="永久链接至标题"></a></h2>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">split</code><span class="sig-paren">(</span><em>input</em>, <em>num_or_sections</em>, <em>dim=-1</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">split</code><span class="sig-paren">(</span><em>input</em>, <em>num_or_sections</em>, <em>dim=-1</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>Splits the tensor into multiple sub-tensors.</p> <dd><p>Splits the input tensor into multiple sub-tensors.</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
<col class="field-name" /> <col class="field-name" />
<col class="field-body" /> <col class="field-body" />
...@@ -2474,6 +2486,8 @@ sub-tensors and the integers indicate the sizes of sub-tensors&#8217; ...@@ -2474,6 +2486,8 @@ sub-tensors and the integers indicate the sizes of sub-tensors&#8217;
<code class="xref py py-attr docutils literal"><span class="pre">dim</span></code> dimension in order.</li> <code class="xref py py-attr docutils literal"><span class="pre">dim</span></code> dimension in order.</li>
<li><strong>dim</strong> (<em>int</em>) &#8211; The dimension along which to split. If <span class="math">\(dim &lt; 0\)</span>, the <li><strong>dim</strong> (<em>int</em>) &#8211; The dimension along which to split. If <span class="math">\(dim &lt; 0\)</span>, the
dimension to split along is <span class="math">\(rank(input) + dim\)</span>.</li> dimension to split along is <span class="math">\(rank(input) + dim\)</span>.</li>
<li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer <li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer
will be named automatically.</li>
</ul> </ul>
</td> </td>
</tr> </tr>
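Both forms of num_or_sections can be sketched on a 1-D Python list (a toy helper, not the Paddle op; the name is hypothetical):

```python
def split_1d(seq, num_or_sections):
    # Toy 1-D model of split: an int asks for that many equal-sized pieces
    # (so the length must divide evenly); a list gives explicit section
    # sizes along the split dimension, in order.
    if isinstance(num_or_sections, int):
        size = len(seq) // num_or_sections
        sections = [size] * num_or_sections
    else:
        sections = list(num_or_sections)
    out, start = [], 0
    for size in sections:
        out.append(seq[start:start + size])
        start += size
    return out
```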
...@@ -2563,6 +2577,7 @@ will be named automatically.</li> ...@@ -2563,6 +2577,7 @@ will be named automatically.</li>
<span class="c1"># x: [K], y: [K]</span> <span class="c1"># x: [K], y: [K]</span>
<span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">matmul</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">)</span> <span class="c1"># out: [1]</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">matmul</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">)</span> <span class="c1"># out: [1]</span>
<span class="c1"># x: [M], y: [N]</span> <span class="c1"># x: [M], y: [N]</span>
<span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">matmul</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">,</span> <span class="bp">True</span><span class="p">,</span> <span class="bp">True</span><span class="p">)</span> <span class="c1"># out: [M, N]</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">matmul</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">,</span> <span class="bp">True</span><span class="p">,</span> <span class="bp">True</span><span class="p">)</span> <span class="c1"># out: [M, N]</span>
</pre></div> </pre></div>
</div> </div>
...@@ -3199,6 +3214,50 @@ Duplicable: False Optional: False</li> ...@@ -3199,6 +3214,50 @@ Duplicable: False Optional: False</li>
</table> </table>
</dd></dl> </dd></dl>
</div>
<div class="section" id="l2-normalize">
<h2>l2_normalize<a class="headerlink" href="#l2-normalize" title="永久链接至标题"></a></h2>
<dl class="function">
<dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">l2_normalize</code><span class="sig-paren">(</span><em>x</em>, <em>axis</em>, <em>epsilon=1e-12</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p><strong>L2 normalize Layer</strong></p>
<p>The l2 normalize layer normalizes <cite>x</cite> along dimension <cite>axis</cite> using the L2
norm. For a 1-D tensor (<cite>axis</cite> is fixed to 0), this layer computes</p>
<p>output = x / sqrt(max(sum(x**2), epsilon))</p>
<p>For <cite>x</cite> with more dimensions, this layer independently normalizes each 1-D
slice along dimension <cite>axis</cite>.</p>
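The 1-D formula transcribes directly into plain Python (a restatement for illustration, not the Paddle kernel; the helper name is hypothetical):

```python
import math

def l2_normalize_1d(x, epsilon=1e-12):
    # output = x / sqrt(max(sum(x**2), epsilon)); epsilon keeps the
    # divisor bounded away from zero for near-zero vectors.
    norm = math.sqrt(max(sum(v * v for v in x), epsilon))
    return [v / norm for v in x]
```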
<table class="docutils field-list" frame="void" rules="none">
<col class="field-name" />
<col class="field-body" />
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first simple">
<li><strong>x</strong> (<em>Variable|list</em>) &#8211; The input tensor to l2_normalize layer.</li>
<li><strong>axis</strong> (<em>int</em>) &#8211; Dimension along which to normalize the input.</li>
<li><strong>epsilon</strong> (<em>float</em>) &#8211; A lower bound value for <cite>x</cite>&#8217;s l2 norm. sqrt(epsilon) will
be used as the divisor if the l2 norm of <cite>x</cite> is less than
sqrt(epsilon).</li>
<li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer
will be named automatically.</li>
</ul>
</td>
</tr>
<tr class="field-even field"><th class="field-name">返回:</th><td class="field-body"><p class="first">The output tensor variable.</p>
</td>
</tr>
<tr class="field-odd field"><th class="field-name">返回类型:</th><td class="field-body"><p class="first last">Variable</p>
</td>
</tr>
</tbody>
</table>
<p class="rubric">Examples</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">data</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">data</span><span class="p">(</span><span class="n">name</span><span class="o">=</span><span class="s2">&quot;data&quot;</span><span class="p">,</span>
<span class="n">shape</span><span class="o">=</span><span class="p">(</span><span class="mi">3</span><span class="p">,</span> <span class="mi">17</span><span class="p">,</span> <span class="mi">13</span><span class="p">),</span>
<span class="n">dtype</span><span class="o">=</span><span class="s2">&quot;float32&quot;</span><span class="p">)</span>
<span class="n">fc</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">l2_normalize</span><span class="p">(</span><span class="n">x</span><span class="o">=</span><span class="n">data</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>
</pre></div>
</div>
</dd></dl>
</div> </div>
</div> </div>
......