Commit dde4a3fc authored by: T Travis CI

Deploy to GitHub Pages: 2b19a68c

Parent 7e62fbdd
......@@ -225,14 +225,14 @@
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">fc</code><span class="sig-paren">(</span><em>input</em>, <em>size</em>, <em>num_flatten_dims=1</em>, <em>param_attr=None</em>, <em>bias_attr=None</em>, <em>act=None</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p><strong>Fully Connected Layer</strong></p>
<p>The fully connected layer can take multiple tensors as its inputs. It
creates a variable (one for each input tensor) called weights for each
input tensor, which represents a fully connected weight matrix from
each input unit to each output unit. The fully connected layer
multiplies each input tensor with its corresponding weight to produce
an output Tensor. If multiple input tensors are given, the results of
the multiplications will be summed up. If bias_attr is not None,
a bias variable will be created and added to the output. Finally,
if activation is not None, it will be applied to the output as well.</p>
<p>This process can be formulated as follows:</p>
<div class="math">
\[Out = Act({\sum_{i=0}^{N-1}W_iX_i + b})\]</div>
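<p class="rubric">Examples</p>
<p>A minimal usage sketch of the signature above, assuming the <cite>paddle.v2.fluid</cite> Python API documented on this page (the variable names and sizes are illustrative):</p>
<div class="highlight-python"><div class="highlight"><pre><span></span>import paddle.v2.fluid as fluid

# A batch of 32-dimensional feature vectors; fluid.layers.data adds the
# batch dimension implicitly, so each instance has shape [32].
data = fluid.layers.data(name='data', shape=[32], dtype='float32')

# Creates one weight matrix of shape [32, 64] plus a bias of shape [64],
# and computes relu(data * W + b).
out = fluid.layers.fc(input=data, size=64, act='relu')
</pre></div></div>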
......@@ -862,51 +862,34 @@ Duplicable: False Optional: False</li>
<h2>transpose<a class="headerlink" href="#transpose" title="Permalink to this headline"></a></h2>
<dl class="function">
<dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">transpose</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Transpose Operator.</p>
<p>The input tensor will be permuted according to the axis values given.
This op functions similarly to how numpy.transpose works in Python.</p>
<p>For example:</p>
<blockquote>
<div><div class="highlight-text"><div class="highlight"><pre><span></span>input = numpy.arange(6).reshape((2,3))
the input is:
array([[0, 1, 2],
[3, 4, 5]])
given axis is:
[1, 0]
output = input.transpose(axis)
then the output is:
array([[0, 3],
[1, 4],
[2, 5]])
</pre></div>
</div>
</div></blockquote>
<p>So, given an input tensor of shape (N, C, H, W) and axis {0, 2, 3, 1},
the output tensor shape will be (N, H, W, C).</p>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">transpose</code><span class="sig-paren">(</span><em>x</em>, <em>perm</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p><strong>Transpose Layer</strong></p>
<p>Permute the dimensions of <cite>input</cite> according to <cite>perm</cite>.</p>
<p>The <cite>i</cite>-th dimension of the returned tensor will correspond to the
perm[i]-th dimension of <cite>input</cite>.</p>
<table class="docutils field-list" frame="void" rules="none">
<col class="field-name" />
<col class="field-body" />
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple">
<li><strong>x</strong> &#8211; (Tensor) The input tensor; tensors with rank at most 6 are supported.
Duplicable: False Optional: False</li>
<li><strong>axis</strong> (<em>INTS</em>) &#8211; (vector&lt;int&gt;) A list of values whose size should be the same as the input tensor's rank; the tensor's axes are permuted according to the values given</li>
<li><strong>input</strong> (<em>Variable</em>) &#8211; (Tensor) The input tensor to transpose.</li>
<li><strong>perm</strong> (<em>list</em>) &#8211; A permutation of the dimensions of <cite>input</cite>.</li>
</ul>
</td>
</tr>
<tr class="field-even field"><th class="field-name">Returns:</th><td class="field-body"><p class="first last">(Tensor)The output tensor</p>
<tr class="field-even field"><th class="field-name">Returns:</th><td class="field-body"><p class="first">A transposed Tensor.</p>
</td>
</tr>
<tr class="field-odd field"><th class="field-name">Return type:</th><td class="field-body"><p class="first last">Variable</p>
</td>
</tr>
</tbody>
</table>
<p class="rubric">Examples</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">x</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">data</span><span class="p">(</span><span class="n">name</span><span class="o">=</span><span class="s1">&#39;x&#39;</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">[</span><span class="mi">5</span><span class="p">,</span> <span class="mi">10</span><span class="p">,</span> <span class="mi">15</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="s1">&#39;float32&#39;</span><span class="p">)</span>
<span class="n">x_transposed</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">transpose</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">perm</span><span class="o">=</span><span class="p">[</span><span class="mi">1</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">2</span><span class="p">])</span>
</pre></div>
</div>
</dd></dl>
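<p>For intuition, the <cite>perm</cite> semantics match <cite>numpy.transpose</cite>; below is a small NumPy-only cross-check of the rule that dimension <cite>i</cite> of the output is dimension <cite>perm[i]</cite> of the input (a standalone sketch, independent of the layer above):</p>
<div class="highlight-python"><div class="highlight"><pre><span></span>import numpy as np

x = np.arange(30).reshape(5, 2, 3)   # shape (5, 2, 3)
y = x.transpose([1, 0, 2])           # same permutation as perm=[1, 0, 2]

# Dimension i of y is dimension perm[i] of x.
assert y.shape == (2, 5, 3)
assert y[1, 4, 2] == x[4, 1, 2]
</pre></div></div>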
</div>
......
......@@ -1158,6 +1158,24 @@
"intermediate" : 0
} ],
"attrs" : [ ]
},{
"type" : "reciprocal",
"comment" : "\nReciprocal Activation Operator.\n\n$$out = \\frac{1}{x}$$\n\n",
"inputs" : [
{
"name" : "X",
"comment" : "Input of Reciprocal operator",
"duplicable" : 0,
"intermediate" : 0
} ],
"outputs" : [
{
"name" : "Out",
"comment" : "Output of Reciprocal operator",
"duplicable" : 0,
"intermediate" : 0
} ],
"attrs" : [ ]
},{
"type" : "softmax",
"comment" : "\nSoftmax Operator.\n\nThe input of the softmax operator is a 2-D tensor with shape N x K (N is the\nbatch_size, K is the dimension of input feature). The output tensor has the\nsame shape as the input tensor.\n\nFor each row of the input tensor, the softmax operator squashes the\nK-dimensional vector of arbitrary real values to a K-dimensional vector of real\nvalues in the range [0, 1] that add up to 1.\nIt computes the exponential of the given dimension and the sum of exponential\nvalues of all the other dimensions in the K-dimensional vector input.\nThen the ratio of the exponential of the given dimension and the sum of\nexponential values of all the other dimensions is the output of the softmax\noperator.\n\nFor each row $i$ and each column $j$ in Input(X), we have:\n $$Out[i, j] = \\frac{\\exp(X[i, j])}{\\sum_j(exp(X[i, j])}$$\n\n",
......@@ -1574,24 +1592,6 @@
"intermediate" : 0
} ],
"attrs" : [ ]
},{
"type" : "reciprocal",
"comment" : "\nReciprocal Activation Operator.\n\n$$out = \\frac{1}{x}$$\n\n",
"inputs" : [
{
"name" : "X",
"comment" : "Input of Reciprocal operator",
"duplicable" : 0,
"intermediate" : 0
} ],
"outputs" : [
{
"name" : "Out",
"comment" : "Output of Reciprocal operator",
"duplicable" : 0,
"intermediate" : 0
} ],
"attrs" : [ ]
},{
"type" : "reduce_min",
"comment" : "\nReduceMin Operator.\n\nThis operator computes the min of input tensor along the given dimension. \nThe result tensor has 1 fewer dimension than the input unless keep_dim is true.\nIf reduce_all is true, just reduce along all dimensions and output a scalar.\n\n",
......@@ -2827,18 +2827,18 @@
} ]
},{
"type" : "transpose",
"comment" : "\nTranspose Operator.\n\nThe input tensor will be permuted according to the axis values given.\nThe op functions is similar to how numpy.transpose works in python.\n\nFor example:\n\n .. code-block:: text\n\n input = numpy.arange(6).reshape((2,3))\n\n the input is:\n\n array([[0, 1, 2],\n [3, 4, 5]])\n\n given axis is:\n\n [1, 0]\n\n output = input.transpose(axis)\n\n then the output is:\n\n array([[0, 3],\n [1, 4],\n [2, 5]])\n\nSo, given a input tensor of shape(N, C, H, W) and the axis is {0, 2, 3, 1},\nthe output tensor shape will be (N, H, W, C)\n\n",
"comment" : "\nTranspose Operator.\n\nThe input tensor will be permuted according to the axes given.\nThe behavior of this operator is similar to how `numpy.transpose` works.\n\n- suppose the input `X` is a 2-D tensor:\n $$\n X = \\begin{pmatrix}\n 0 &1 &2 \\\\\n 3 &4 &5\n \\end{pmatrix}$$\n\n the given `axes` is: $[1, 0]$, and $Y$ = transpose($X$, axis)\n\n then the output $Y$ is:\n\n $$\n Y = \\begin{pmatrix}\n 0 &3 \\\\\n 1 &4 \\\\\n 2 &5\n \\end{pmatrix}$$\n\n- Given a input tensor with shape $(N, C, H, W)$ and the `axes` is \n$[0, 2, 3, 1]$, then shape of the output tensor will be: $(N, H, W, C)$.\n\n",
"inputs" : [
{
"name" : "X",
"comment" : "(Tensor)The input tensor, tensors with rank at most 6 are supported",
"comment" : "(Tensor) The input tensor, tensors with rank up to 6 are supported.",
"duplicable" : 0,
"intermediate" : 0
} ],
"outputs" : [
{
"name" : "Out",
"comment" : "(Tensor)The output tensor",
"comment" : "(Tensor)The output tensor.",
"duplicable" : 0,
"intermediate" : 0
} ],
......@@ -2846,7 +2846,7 @@
{
"name" : "axis",
"type" : "int array",
"comment" : "(vector<int>)A list of values, and the size of the list should be the same with the input tensor rank, the tensor will permute the axes according the the values given",
"comment" : "(vector<int>) A list of values, and the size of the list should be the same with the input tensor rank. This operator permutes the input tensor's axes according to the values given.",
"generated" : 0
} ]
},{
......@@ -5008,6 +5008,29 @@
"intermediate" : 0
} ],
"attrs" : [ ]
},{
"type" : "bipartite_match",
"comment" : "\nThis operator is a greedy bipartite matching algorithm, which is used to\nobtain the matching with the maximum distance based on the input\ndistance matrix. For input 2D matrix, the bipartite matching algorithm can\nfind the matched column for each row, also can find the matched row for\neach column. And this operator only calculate matched indices from column\nto row. For each instance, the number of matched indices is the number of\nof columns of the input ditance matrix.\n\nThere are two outputs to save matched indices and distance.\nA simple description, this algothrim matched the best (maximum distance)\nrow entity to the column entity and the matched indices are not duplicated\nin each row of ColToRowMatchIndices. If the column entity is not matched\nany row entity, set -1 in ColToRowMatchIndices.\n\nPlease note that the input DistMat can be LoDTensor (with LoD) or Tensor.\nIf LoDTensor with LoD, the height of ColToRowMatchIndices is batch size.\nIf Tensor, the height of ColToRowMatchIndices is 1.\n\n",
"inputs" : [
{
"name" : "DistMat",
"comment" : "(LoDTensor or Tensor) this input is a 2-D LoDTensor with shape [K, M]. It is pair-wise distance matrix between the entities represented by each row and each column. For example, assumed one entity is A with shape [K], another entity is B with shape [M]. The DistMat[i][j] is the distance between A[i] and B[j]. The bigger the distance is, the better macthing the pairs are. Please note, This tensor can contain LoD information to represent a batch of inputs. One instance of this batch can contain different numbers of entities.",
"duplicable" : 0,
"intermediate" : 0
} ],
"outputs" : [
{
"name" : "ColToRowMatchIndices",
"comment" : "(Tensor) A 2-D Tensor with shape [N, M] in int type. N is the batch size. If ColToRowMatchIndices[i][j] is -1, it means B[j] does not match any entity in i-th instance. Otherwise, it means B[j] is matched to row ColToRowMatchIndices[i][j] in i-th instance. The row number of i-th instance is saved in ColToRowMatchIndices[i][j].",
"duplicable" : 0,
"intermediate" : 0
}, {
"name" : "ColToRowMatchDis",
"comment" : "(Tensor) A 2-D Tensor with shape [N, M] in float type. N is batch size. If ColToRowMatchIndices[i][j] is -1, ColToRowMatchDis[i][j] is also -1.0. Otherwise, assumed ColToRowMatchIndices[i][j] = d, and the row offsets of each instance are called LoD. Then ColToRowMatchDis[i][j] = DistMat[d+LoD[i]][j]",
"duplicable" : 0,
"intermediate" : 0
} ],
"attrs" : [ ]
},{
"type" : "lrn",
"comment" : "\nLocal Response Normalization Operator.\n\nThis operator comes from the paper:\n<<ImageNet Classification with Deep Convolutional Neural Networks>>.\n\nThe original formula is:\n\n$$\nOutput(i, x, y) = Input(i, x, y) / \\left(\nk + \\alpha \\sum\\limits^{\\min(C, c + n/2)}_{j = \\max(0, c - n/2)}\n(Input(j, x, y))^2\n\\right)^{\\beta}\n$$\n\nFunction implementation:\n\nInputs and outpus are in NCHW format, while input.shape.ndims() equals 4.\nAnd dimensions 0 ~ 3 represent batch size, feature maps, rows,\nand columns, respectively.\n\nInput and Output in the formula above is for each map(i) of one image, and\nInput(i, x, y), Output(i, x, y) represents an element in an image.\n\nC is the number of feature maps of one image. n is a hyper-parameter\nconfigured when operator is initialized. The sum in the denominator\nis the sum of the same positions in the neighboring maps.\n\n",
......
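To make the greedy column-to-row matching described in the bipartite_match comment above concrete, here is a minimal NumPy sketch of that greedy scheme for a single instance. It is an illustration of the documented semantics, not the operator's actual C++ implementation, and the function name is hypothetical:
<div class="highlight-python"><div class="highlight"><pre><span></span>import numpy as np

def greedy_bipartite_match(dist):
    # dist: [K, M] distance matrix for one instance (rows vs. columns).
    # Returns (match_indices, match_dist) of length M; unmatched columns
    # keep -1 / -1.0, mirroring ColToRowMatchIndices / ColToRowMatchDis.
    k, m = dist.shape
    match_indices = -np.ones(m, dtype=np.int64)
    match_dist = -np.ones(m, dtype=np.float64)
    remaining = dist.astype(np.float64).copy()
    for _ in range(min(k, m)):
        # Pick the globally largest remaining distance, then retire its
        # row and column so neither is matched twice.
        r, c = np.unravel_index(np.argmax(remaining), remaining.shape)
        match_indices[c] = r
        match_dist[c] = dist[r, c]
        remaining[r, :] = -np.inf
        remaining[:, c] = -np.inf
    return match_indices, match_dist
</pre></div></div>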
The source diff could not be displayed because it is too large. You can view the blob instead.
......@@ -244,14 +244,14 @@
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">fc</code><span class="sig-paren">(</span><em>input</em>, <em>size</em>, <em>num_flatten_dims=1</em>, <em>param_attr=None</em>, <em>bias_attr=None</em>, <em>act=None</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p><strong>Fully Connected Layer</strong></p>
<p>The fully connected layer can take multiple tensors as its inputs. It
creates a variable (one for each input tensor) called weights for each
input tensor, which represents a fully connected weight matrix from
each input unit to each output unit. The fully connected layer
multiplies each input tensor with its corresponding weight to produce
an output Tensor. If multiple input tensors are given, the results of
the multiplications will be summed up. If bias_attr is not None,
a bias variable will be created and added to the output. Finally,
if activation is not None, it will be applied to the output as well.</p>
<p>This process can be formulated as follows:</p>
<div class="math">
\[Out = Act({\sum_{i=0}^{N-1}W_iX_i + b})\]</div>
......@@ -881,51 +881,34 @@ Duplicable: False Optional: False</li>
<h2>transpose<a class="headerlink" href="#transpose" title="永久链接至标题"></a></h2>
<dl class="function">
<dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">transpose</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Transpose Operator.</p>
<p>The input tensor will be permuted according to the axis values given.
This op functions similarly to how numpy.transpose works in Python.</p>
<p>For example:</p>
<blockquote>
<div><div class="highlight-text"><div class="highlight"><pre><span></span>input = numpy.arange(6).reshape((2,3))
the input is:
array([[0, 1, 2],
[3, 4, 5]])
given axis is:
[1, 0]
output = input.transpose(axis)
then the output is:
array([[0, 3],
[1, 4],
[2, 5]])
</pre></div>
</div>
</div></blockquote>
<p>So, given an input tensor of shape (N, C, H, W) and axis {0, 2, 3, 1},
the output tensor shape will be (N, H, W, C).</p>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">transpose</code><span class="sig-paren">(</span><em>x</em>, <em>perm</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p><strong>Transpose Layer</strong></p>
<p>Permute the dimensions of <cite>input</cite> according to <cite>perm</cite>.</p>
<p>The <cite>i</cite>-th dimension of the returned tensor will correspond to the
perm[i]-th dimension of <cite>input</cite>.</p>
<table class="docutils field-list" frame="void" rules="none">
<col class="field-name" />
<col class="field-body" />
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first simple">
<li><strong>x</strong> &#8211; (Tensor) The input tensor; tensors with rank at most 6 are supported.
Duplicable: False Optional: False</li>
<li><strong>axis</strong> (<em>INTS</em>) &#8211; (vector&lt;int&gt;) A list of values whose size should be the same as the input tensor's rank; the tensor's axes are permuted according to the values given</li>
<li><strong>input</strong> (<em>Variable</em>) &#8211; (Tensor) The input tensor to transpose.</li>
<li><strong>perm</strong> (<em>list</em>) &#8211; A permutation of the dimensions of <cite>input</cite>.</li>
</ul>
</td>
</tr>
<tr class="field-even field"><th class="field-name">返回:</th><td class="field-body"><p class="first last">(Tensor)The output tensor</p>
<tr class="field-even field"><th class="field-name">返回:</th><td class="field-body"><p class="first">A transposed Tensor.</p>
</td>
</tr>
<tr class="field-odd field"><th class="field-name">返回类型:</th><td class="field-body"><p class="first last">Variable</p>
</td>
</tr>
</tbody>
</table>
<p class="rubric">Examples</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">x</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">data</span><span class="p">(</span><span class="n">name</span><span class="o">=</span><span class="s1">&#39;x&#39;</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">[</span><span class="mi">5</span><span class="p">,</span> <span class="mi">10</span><span class="p">,</span> <span class="mi">15</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="s1">&#39;float32&#39;</span><span class="p">)</span>
<span class="n">x_transposed</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">transpose</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">perm</span><span class="o">=</span><span class="p">[</span><span class="mi">1</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">2</span><span class="p">])</span>
</pre></div>
</div>
</dd></dl>
</div>
......
This diff has been collapsed.