<p>$$Out = X + Y$$</p>
<p>$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be
smaller than or equal to the dimensions of $X$.</p>
<p>There are two cases for this operator:
1. The shape of $Y$ is the same as that of $X$;
2. The shape of $Y$ is a subset of the shape of $X$.</p>
<p>For case 2:
$Y$ will be broadcast to match the shape of $X$, and axis should be
set to the index of the start dimension for broadcasting $Y$ onto $X$.</p>
<dl class="docutils">
<dt>For example</dt>
<dd><div class="first last highlight-python"><div class="highlight"><pre><span></span><span class="n">shape</span><span class="p">(</span><span class="n">X</span><span class="p">)</span> <span class="o">=</span> <span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">5</span><span class="p">),</span> <span class="n">shape</span><span class="p">(</span><span class="n">Y</span><span class="p">)</span> <span class="o">=</span> <span class="p">(,)</span>
...
...
</div>
</dd>
</dl>
<p>Either of the inputs $X$ and $Y$ or none can carry the LoD (Level of Details)
information. However, the output only shares the LoD information with input $X$.</p>
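<p>The case-2 broadcasting rule above can be sketched with NumPy; this is an illustrative stand-in for the operator's semantics, not the operator's actual implementation, and the helper name is hypothetical:</p>

```python
import numpy as np

def elementwise_add(x, y, axis=-1):
    # Hypothetical sketch of case 2: y's shape is a contiguous slice of
    # x's shape, aligned starting at dimension `axis`.
    if axis == -1:
        # Default: align y's dimensions with the trailing dimensions of x.
        axis = x.ndim - y.ndim
    # Pad y's shape with singleton dimensions so it broadcasts onto x.
    shape = (1,) * axis + y.shape + (1,) * (x.ndim - axis - y.ndim)
    return x + y.reshape(shape)

x = np.zeros((2, 3, 4, 5))
y = np.ones((3, 4))                      # shape(Y) = (3, 4), with axis=1
out = elementwise_add(x, y, axis=1)
print(out.shape)                         # (2, 3, 4, 5)
```

With axis=1, $Y$'s shape (3, 4) is aligned with dimensions 1 and 2 of $X$, and the remaining dimensions are broadcast.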
<p>$$Out = X - Y$$</p>
<p>$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be
smaller than or equal to the dimensions of $X$.</p>
<p>There are two cases for this operator:
1. The shape of $Y$ is the same as that of $X$;
2. The shape of $Y$ is a subset of the shape of $X$.</p>
<p>For case 2:
$Y$ will be broadcast to match the shape of $X$, and axis should be
set to the index of the start dimension for broadcasting $Y$ onto $X$.</p>
<dl class="docutils">
<dt>For example</dt>
<dd><div class="first last highlight-python"><div class="highlight"><pre><span></span><span class="n">shape</span><span class="p">(</span><span class="n">X</span><span class="p">)</span> <span class="o">=</span> <span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">5</span><span class="p">),</span> <span class="n">shape</span><span class="p">(</span><span class="n">Y</span><span class="p">)</span> <span class="o">=</span> <span class="p">(,)</span>
...
...
</div>
</dd>
</dl>
<p>Either of the inputs $X$ and $Y$ or none can carry the LoD (Level of Details)
information. However, the output only shares the LoD information with input $X$.</p>
<p>$$Out = X \odot Y$$</p>
<p>$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be
smaller than or equal to the dimensions of $X$.</p>
<p>There are two cases for this operator:
1. The shape of $Y$ is the same as that of $X$;
2. The shape of $Y$ is a subset of the shape of $X$.</p>
<p>For case 2:
$Y$ will be broadcast to match the shape of $X$, and axis should be
set to the index of the start dimension for broadcasting $Y$ onto $X$.</p>
<dl class="docutils">
<dt>For example</dt>
<dd><div class="first last highlight-python"><div class="highlight"><pre><span></span><span class="n">shape</span><span class="p">(</span><span class="n">X</span><span class="p">)</span> <span class="o">=</span> <span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">5</span><span class="p">),</span> <span class="n">shape</span><span class="p">(</span><span class="n">Y</span><span class="p">)</span> <span class="o">=</span> <span class="p">(,)</span>
...
...
</div>
</dd>
</dl>
<p>Either of the inputs $X$ and $Y$ or none can carry the LoD (Level of Details)
information. However, the output only shares the LoD information with input $X$.</p>
<p>$$Out = X / Y$$</p>
<p>$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be
smaller than or equal to the dimensions of $X$.</p>
<p>There are two cases for this operator:
1. The shape of $Y$ is the same as that of $X$;
2. The shape of $Y$ is a subset of the shape of $X$.</p>
<p>For case 2:
$Y$ will be broadcast to match the shape of $X$, and axis should be
set to the index of the start dimension for broadcasting $Y$ onto $X$.</p>
<dl class="docutils">
<dt>For example</dt>
<dd><div class="first last highlight-python"><div class="highlight"><pre><span></span><span class="n">shape</span><span class="p">(</span><span class="n">X</span><span class="p">)</span> <span class="o">=</span> <span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">5</span><span class="p">),</span> <span class="n">shape</span><span class="p">(</span><span class="n">Y</span><span class="p">)</span> <span class="o">=</span> <span class="p">(,)</span>
...
...
</div>
</dd>
</dl>
<p>Either of the inputs $X$ and $Y$ or none can carry the LoD (Level of Details)
information. However, the output only shares the LoD information with input $X$.</p>
<li><strong>keep_dim</strong> (<em>bool</em>) – Whether to keep the reduced dimension in the
output Tensor. The result tensor will have one fewer dimension
than the <code class="xref py py-attr docutils literal"><span class="pre">input</span></code> unless <code class="xref py py-attr docutils literal"><span class="pre">keep_dim</span></code> is true.</li>
<li><strong>name</strong> (<em>str|None</em>) – A name for this layer (optional). If set to None,
the layer will be named automatically.</li>
</ul>
</td>
</tr>
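<p>The keep_dim behaviour mirrors NumPy's keepdims; a minimal NumPy sketch of the two cases (illustrative only, not the layer's implementation):</p>

```python
import numpy as np

x = np.arange(24, dtype=np.float32).reshape(2, 3, 4)

# keep_dim=False (the default): the reduced dimension disappears.
s0 = x.sum(axis=1)                  # shape (2, 4)

# keep_dim=True: the reduced dimension is kept with size 1,
# which keeps the result broadcast-compatible with the input.
s1 = x.sum(axis=1, keepdims=True)   # shape (2, 1, 4)

print(s0.shape, s1.shape)
```

Keeping the reduced dimension is useful when the sum must later be broadcast back against the original tensor, e.g. for normalization.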
...
...
<h2>reduce_mean<a class="headerlink" href="#reduce-mean" title="Permalink to this headline">¶</a></h2>
<li><strong>keep_dim</strong> (<em>bool</em>) – Whether to keep the reduced dimension in the
output Tensor. The result tensor will have one fewer dimension
than the <code class="xref py py-attr docutils literal"><span class="pre">input</span></code> unless <code class="xref py py-attr docutils literal"><span class="pre">keep_dim</span></code> is true.</li>
<li><strong>name</strong> (<em>str|None</em>) – A name for this layer (optional). If set to None,
the layer will be named automatically.</li>
</ul>
</td>
</tr>
...
...
<h2>reduce_max<a class="headerlink" href="#reduce-max" title="Permalink to this headline">¶</a></h2>
<li><strong>keep_dim</strong> (<em>bool</em>) – Whether to keep the reduced dimension in the
output Tensor. The result tensor will have one fewer dimension
than the <code class="xref py py-attr docutils literal"><span class="pre">input</span></code> unless <code class="xref py py-attr docutils literal"><span class="pre">keep_dim</span></code> is true.</li>
<li><strong>name</strong> (<em>str|None</em>) – A name for this layer (optional). If set to None,
the layer will be named automatically.</li>
</ul>
</td>
</tr>
...
...
<h2>reduce_min<a class="headerlink" href="#reduce-min" title="Permalink to this headline">¶</a></h2>
<li><strong>keep_dim</strong> (<em>bool</em>) – Whether to keep the reduced dimension in the
output Tensor. The result tensor will have one fewer dimension
than the <code class="xref py py-attr docutils literal"><span class="pre">input</span></code> unless <code class="xref py py-attr docutils literal"><span class="pre">keep_dim</span></code> is true.</li>
<li><strong>name</strong> (<em>str|None</em>) – A name for this layer (optional). If set to None,
the layer will be named automatically.</li>
</ul>
</td>
</tr>
...
...
<h2>split<a class="headerlink" href="#split" title="Permalink to this headline">¶</a></h2>
"comment":"\nClip Operator.\n\nThe clip operator limits the value of given input within an interval. The\ninterval is specified with arguments 'min' and 'max':\n\n$$\nOut = \\min(\\max(X, min), max)\n$$\n\n",
"inputs":[
{
"name":"X",
...
...
"attrs":[]
},{
"type":"elementwise_sub",
"comment":"\nLimited Elementwise Sub Operator.\n\nThe equation is:\n\n$$Out = X - Y$$\n\n$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be\nsmaller than or equal to the dimensions of $X$.\n\nThere are two cases for this operator:\n1. The shape of $Y$ is same with $X$;\n2. The shape of $Y$ is a subset of $X$.\n\nFor case 2:\n$Y$ will be broadcasted to match the shape of $X$ and axis should be\nset to index of the start dimension to broadcast $Y$ onto $X$.\n\nFor example\n .. code-block:: python\n\n shape(X) = (2, 3, 4, 5), shape(Y) = (,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (5,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)\n shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1\n shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0\n\nEither of the inputs $X$ and $Y$ or none can carry the LoD (Level of Details)\ninformation. However, the output only shares the LoD information with input $X$.\n\n",
"inputs":[
{
"name":"X",
"comment":"(Tensor), The first input tensor of elementwise op.",
"duplicable":0,
"intermediate":0
},{
"name":"Y",
"comment":"(Tensor), The second input tensor of elementwise op.",
"duplicable":0,
"intermediate":0
}],
"outputs":[
{
"name":"Out",
"comment":"The output of elementwise op.",
"duplicable":0,
"intermediate":0
}],
...
...
{
"name":"axis",
"type":"int",
"comment":"(int, default -1). The start dimension index for broadcasting Y onto X.",
"generated":0
}]
},{
...
...
}]
},{
"type":"elementwise_max",
"comment":"\nLimited Elementwise Max Operator.\n\nThe equation is:\n\n$$Out = max(X, Y)$$\n\n$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be\nsmaller than or equal to the dimensions of $X$.\n\nThere are two cases for this operator:\n1. The shape of $Y$ is same with $X$;\n2. The shape of $Y$ is a subset of $X$.\n\nFor case 2:\n$Y$ will be broadcasted to match the shape of $X$ and axis should be\nset to index of the start dimension to broadcast $Y$ onto $X$.\n\nFor example\n .. code-block:: python\n\n shape(X) = (2, 3, 4, 5), shape(Y) = (,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (5,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)\n shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1\n shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0\n\nEither of the inputs $X$ and $Y$ or none can carry the LoD (Level of Details)\ninformation. However, the output only shares the LoD information with input $X$.\n\n",
"inputs":[
{
"name":"X",
"comment":"(Tensor), The first input tensor of elementwise op.",
"duplicable":0,
"intermediate":0
},{
"name":"Y",
"comment":"(Tensor), The second input tensor of elementwise op.",
"duplicable":0,
"intermediate":0
}],
"outputs":[
{
"name":"Out",
"comment":"The output of elementwise op.",
"duplicable":0,
"intermediate":0
}],
...
...
{
"name":"axis",
"type":"int",
"comment":"(int, default -1). The start dimension index for broadcasting Y onto X.",
"generated":0
}]
},{
...
...
}]
},{
"type":"elementwise_mul",
"comment":"\nLimited Elementwise Mul Operator.\n\nThe equation is:\n\n$$Out = X \\odot\\ Y$$\n\n$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be\nsmaller than or equal to the dimensions of $X$.\n\nThere are two cases for this operator:\n1. The shape of $Y$ is same with $X$;\n2. The shape of $Y$ is a subset of $X$.\n\nFor case 2:\n$Y$ will be broadcasted to match the shape of $X$ and axis should be\nset to index of the start dimension to broadcast $Y$ onto $X$.\n\nFor example\n .. code-block:: python\n\n shape(X) = (2, 3, 4, 5), shape(Y) = (,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (5,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)\n shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1\n shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0\n\nEither of the inputs $X$ and $Y$ or none can carry the LoD (Level of Details)\ninformation. However, the output only shares the LoD information with input $X$.\n\n",
"inputs":[
{
"name":"X",
"comment":"(Tensor), The first input tensor of elementwise op.",
"duplicable":0,
"intermediate":0
},{
"name":"Y",
"comment":"(Tensor), The second input tensor of elementwise op.",
"duplicable":0,
"intermediate":0
}],
"outputs":[
{
"name":"Out",
"comment":"The output of elementwise op.",
"duplicable":0,
"intermediate":0
}],
...
...
{
"name":"axis",
"type":"int",
"comment":"(int, default -1). The start dimension index for broadcasting Y onto X.",
"generated":0
}]
},{
...
...
}]
},{
"type":"expand",
"comment":"\nExpand operator tiles the input by the given times number. You should set the\ntimes number for each dimension by providing attribute 'expand_times'. The rank\nof X should be in [1, 6]. Please note that the size of 'expand_times' must be\nthe same as X's rank. Following is an example:\n\nInput(X) is a 3-D tensor with shape [2, 3, 1]:\n\n        [\n           [[1], [2], [3]],\n           [[4], [5], [6]]\n        ]\n\nAttr(expand_times):  [1, 2, 2]\n\nOutput(Out) is a 3-D tensor with shape [2, 6, 2]:\n\n        [\n            [[1, 1], [2, 2], [3, 3], [1, 1], [2, 2], [3, 3]],\n            [[4, 4], [5, 5], [6, 6], [4, 4], [5, 5], [6, 6]]\n        ]\n\n",
"inputs":[
{
"name":"X",
"comment":"(Tensor, default Tensor<float>). A tensor with rank in [1, 6]. X is the input tensor to be expanded.",
"duplicable":0,
"intermediate":0
}],
"outputs":[
{
"name":"Out",
"comment":"(Tensor, default Tensor<float>). A tensor with rank in [1, 6]. The rank of Output(Out) is the same as that of Input(X). After expanding, the size of each dimension of Output(Out) equals the size of the corresponding dimension of Input(X) multiplied by the corresponding value given by Attr(expand_times).",
"duplicable":0,
"intermediate":0
}],
...
...
}]
},{
"type":"elementwise_min",
"comment":"\nLimited Elementwise Min Operator.\n\nThe equation is:\n\n$$Out = min(X, Y)$$\n\n$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be\nsmaller than or equal to the dimensions of $X$.\n\nThere are two cases for this operator:\n1. The shape of $Y$ is same with $X$;\n2. The shape of $Y$ is a subset of $X$.\n\nFor case 2:\n$Y$ will be broadcasted to match the shape of $X$ and axis should be\nset to index of the start dimension to broadcast $Y$ onto $X$.\n\nFor example\n  .. code-block:: python\n\n    shape(X) = (2, 3, 4, 5), shape(Y) = (,)\n    shape(X) = (2, 3, 4, 5), shape(Y) = (5,)\n    shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)\n    shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1\n    shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0\n\nEither of the inputs $X$ and $Y$ or none can carry the LoD (Level of Details)\ninformation. However, the output only shares the LoD information with input $X$.\n\n",
"inputs":[
{
"name":"X",
"comment":"(Tensor), The first input tensor of elementwise op.",
"duplicable":0,
"intermediate":0
},{
"name":"Y",
"comment":"(Tensor), The second input tensor of elementwise op.",
"duplicable":0,
"intermediate":0
}],
"outputs":[
{
"name":"Out",
"comment":"The output of elementwise op.",
"duplicable":0,
"intermediate":0
}],
...
...
{
"name":"axis",
"type":"int",
"comment":"(int, default -1). The start dimension index for broadcasting Y onto X.",
"generated":0
}]
},{
"type":"elementwise_div",
"comment":"\nLimited Elementwise Div Operator.\n\nThe equation is:\n\n$$Out = X / Y$$\n\n$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be\nsmaller than or equal to the dimensions of $X$.\n\nThere are two cases for this operator:\n1. The shape of $Y$ is same with $X$;\n2. The shape of $Y$ is a subset of $X$.\n\nFor case 2:\n$Y$ will be broadcasted to match the shape of $X$ and axis should be\nset to index of the start dimension to broadcast $Y$ onto $X$.\n\nFor example\n .. code-block:: python\n\n shape(X) = (2, 3, 4, 5), shape(Y) = (,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (5,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)\n shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1\n shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0\n\nEither of the inputs $X$ and $Y$ or none can carry the LoD (Level of Details)\ninformation. However, the output only shares the LoD information with input $X$.\n\n",
"inputs":[
{
"name":"X",
"comment":"(Tensor), The first input tensor of elementwise op.",
"duplicable":0,
"intermediate":0
},{
"name":"Y",
"comment":"(Tensor), The second input tensor of elementwise op.",
"duplicable":0,
"intermediate":0
}],
"outputs":[
{
"name":"Out",
"comment":"The output of elementwise op.",
"duplicable":0,
"intermediate":0
}],
...
...
{
"name":"axis",
"type":"int",
"comment":"(int, default -1). The start dimension index for broadcasting Y onto X.",
"generated":0
}]
},{
"type":"elementwise_add",
"comment":"\nLimited Elementwise Add Operator.\n\nThe equation is:\n\n$$Out = X + Y$$\n\n$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be\nsmaller than or equal to the dimensions of $X$.\n\nThere are two cases for this operator:\n1. The shape of $Y$ is same with $X$;\n2. The shape of $Y$ is a subset of $X$.\n\nFor case 2:\n$Y$ will be broadcasted to match the shape of $X$ and axis should be\nset to index of the start dimension to broadcast $Y$ onto $X$.\n\nFor example\n .. code-block:: python\n\n shape(X) = (2, 3, 4, 5), shape(Y) = (,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (5,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)\n shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1\n shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0\n\nEither of the inputs $X$ and $Y$ or none can carry the LoD (Level of Details)\ninformation. However, the output only shares the LoD information with input $X$.\n\n",
"inputs":[
{
"name":"X",
"comment":"(Tensor), The first input tensor of elementwise op.",
"duplicable":0,
"intermediate":0
},{
"name":"Y",
"comment":"(Tensor), The second input tensor of elementwise op.",
"duplicable":0,
"intermediate":0
}],
"outputs":[
{
"name":"Out",
"comment":"The output of elementwise op.",
"duplicable":0,
"intermediate":0
}],
...
...
{
"name":"axis",
"type":"int",
"comment":"(int, default -1). The start dimension index for broadcasting Y onto X.",
<p>$$Out = X + Y$$</p>
<p>$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be
smaller than or equal to the dimensions of $X$.</p>
<p>There are two cases for this operator:
1. The shape of $Y$ is the same as that of $X$;
2. The shape of $Y$ is a subset of the shape of $X$.</p>
<p>For case 2:
$Y$ will be broadcast to match the shape of $X$, and axis should be
set to the index of the start dimension for broadcasting $Y$ onto $X$.</p>
<dl class="docutils">
<dt>For example</dt>
<dd><div class="first last highlight-python"><div class="highlight"><pre><span></span><span class="n">shape</span><span class="p">(</span><span class="n">X</span><span class="p">)</span> <span class="o">=</span> <span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">5</span><span class="p">),</span> <span class="n">shape</span><span class="p">(</span><span class="n">Y</span><span class="p">)</span> <span class="o">=</span> <span class="p">(,)</span>
...
...
</div>
</dd>
</dl>
<p>Either of the inputs $X$ and $Y$ or none can carry the LoD (Level of Details)
information. However, the output only shares the LoD information with input $X$.</p>
<p>$$Out = X - Y$$</p>
<p>$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be
smaller than or equal to the dimensions of $X$.</p>
<p>There are two cases for this operator:
1. The shape of $Y$ is the same as that of $X$;
2. The shape of $Y$ is a subset of the shape of $X$.</p>
<p>For case 2:
$Y$ will be broadcast to match the shape of $X$, and axis should be
set to the index of the start dimension for broadcasting $Y$ onto $X$.</p>
<dl class="docutils">
<dt>For example</dt>
<dd><div class="first last highlight-python"><div class="highlight"><pre><span></span><span class="n">shape</span><span class="p">(</span><span class="n">X</span><span class="p">)</span> <span class="o">=</span> <span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">5</span><span class="p">),</span> <span class="n">shape</span><span class="p">(</span><span class="n">Y</span><span class="p">)</span> <span class="o">=</span> <span class="p">(,)</span>
</div>
</dd>
</dl>
<p>Either of the inputs $X$ and $Y$, or neither, can carry the LoD (Level of Details)
information. However, the output only shares the LoD information with input $X$.</p>
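The case-2 broadcasting rule described above can be sketched with NumPy, which follows the same alignment idea. This is a hedged analogue, not the operator's actual implementation; the shapes and the `axis` value below are illustrative assumptions (the `(,)` shape in the truncated example is left untouched):

```python
import numpy as np

# Assumed shapes for illustration: X is (2, 3, 4, 5), Y is (3, 4),
# and axis = 1, so Y aligns with dimensions 1 and 2 of X.
X = np.arange(120, dtype=np.float32).reshape(2, 3, 4, 5)
Y = np.ones((3, 4), dtype=np.float32)

axis = 1
# Pad Y's shape with leading/trailing singleton dimensions so that
# NumPy broadcasting reproduces the axis-based alignment rule.
expanded = Y.reshape((1,) * axis + Y.shape + (1,) * (X.ndim - axis - Y.ndim))
out = X - expanded  # elementwise subtraction with Y broadcast onto X

print(out.shape)  # (2, 3, 4, 5)
```

The same reshape-then-broadcast sketch applies to the add, multiply, and divide variants; only the arithmetic operator changes.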
<p>$$Out = X \odot Y$$</p>
<p>$X$ is a tensor of any dimension, and the dimensions of tensor $Y$ must be
smaller than or equal to the dimensions of $X$.</p>
<p>There are two cases for this operator:
1. The shape of $Y$ is the same as the shape of $X$;
2. The shape of $Y$ is a subset of the shape of $X$.</p>
<p>For case 2:
$Y$ is broadcast to match the shape of $X$, and axis should be set to the
index of the starting dimension for broadcasting $Y$ onto $X$.</p>
<dl class="docutils">
<dt>For example</dt>
<dd><div class="first last highlight-python"><div class="highlight"><pre><span></span><span class="n">shape</span><span class="p">(</span><span class="n">X</span><span class="p">)</span><span class="o">=</span><span class="p">(</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">,</span><span class="mi">4</span><span class="p">,</span><span class="mi">5</span><span class="p">),</span><span class="n">shape</span><span class="p">(</span><span class="n">Y</span><span class="p">)</span><span class="o">=</span><span class="p">(,)</span>
</div>
</dd>
</dl>
<p>Either of the inputs $X$ and $Y$, or neither, can carry the LoD (Level of Details)
information. However, the output only shares the LoD information with input $X$.</p>
<p>$$Out = X / Y$$</p>
<p>$X$ is a tensor of any dimension, and the dimensions of tensor $Y$ must be
smaller than or equal to the dimensions of $X$.</p>
<p>There are two cases for this operator:
1. The shape of $Y$ is the same as the shape of $X$;
2. The shape of $Y$ is a subset of the shape of $X$.</p>
<p>For case 2:
$Y$ is broadcast to match the shape of $X$, and axis should be set to the
index of the starting dimension for broadcasting $Y$ onto $X$.</p>
<dl class="docutils">
<dt>For example</dt>
<dd><div class="first last highlight-python"><div class="highlight"><pre><span></span><span class="n">shape</span><span class="p">(</span><span class="n">X</span><span class="p">)</span><span class="o">=</span><span class="p">(</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">,</span><span class="mi">4</span><span class="p">,</span><span class="mi">5</span><span class="p">),</span><span class="n">shape</span><span class="p">(</span><span class="n">Y</span><span class="p">)</span><span class="o">=</span><span class="p">(,)</span>
</div>
</dd>
</dl>
<p>Either of the inputs $X$ and $Y$, or neither, can carry the LoD (Level of Details)
information. However, the output only shares the LoD information with input $X$.</p>
<li><strong>keep_dim</strong> (<em>bool</em>) – Whether to keep the reduced dimension in the
output Tensor. The result tensor will have one fewer dimension
than the <code class="xref py py-attr docutils literal"><span class="pre">input</span></code> unless <code class="xref py py-attr docutils literal"><span class="pre">keep_dim</span></code> is true.</li>
<li><strong>name</strong> (<em>str|None</em>) – A name for this layer (optional). If set to None, the layer
will be named automatically.</li>
</ul>
</td>
</tr>
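The effect of <code class="docutils literal"><span class="pre">keep_dim</span></code> can be illustrated with NumPy's <code class="docutils literal"><span class="pre">keepdims</span></code> flag, which behaves the same way. This is a sketch of the shape semantics only, not the PaddlePaddle API itself:

```python
import numpy as np

x = np.ones((2, 3, 4), dtype=np.float32)

# keep_dim=False: the reduced dimension is removed from the result.
reduced = np.sum(x, axis=1)
print(reduced.shape)  # (2, 4)

# keep_dim=True: the reduced dimension is kept with size 1,
# which preserves the rank of the input tensor.
kept = np.sum(x, axis=1, keepdims=True)
print(kept.shape)  # (2, 1, 4)
```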
<li><strong>keep_dim</strong> (<em>bool</em>) – Whether to keep the reduced dimension in the
output Tensor. The result tensor will have one fewer dimension
than the <code class="xref py py-attr docutils literal"><span class="pre">input</span></code> unless <code class="xref py py-attr docutils literal"><span class="pre">keep_dim</span></code> is true.</li>
<li><strong>name</strong> (<em>str|None</em>) – A name for this layer (optional). If set to None, the layer
will be named automatically.</li>
</ul>
</td>
</tr>
<li><strong>keep_dim</strong> (<em>bool</em>) – Whether to keep the reduced dimension in the
output Tensor. The result tensor will have one fewer dimension
than the <code class="xref py py-attr docutils literal"><span class="pre">input</span></code> unless <code class="xref py py-attr docutils literal"><span class="pre">keep_dim</span></code> is true.</li>
<li><strong>name</strong> (<em>str|None</em>) – A name for this layer (optional). If set to None, the layer
will be named automatically.</li>
</ul>
</td>
</tr>
<li><strong>keep_dim</strong> (<em>bool</em>) – Whether to keep the reduced dimension in the
output Tensor. The result tensor will have one fewer dimension
than the <code class="xref py py-attr docutils literal"><span class="pre">input</span></code> unless <code class="xref py py-attr docutils literal"><span class="pre">keep_dim</span></code> is true.</li>
<li><strong>name</strong> (<em>str|None</em>) – A name for this layer (optional). If set to None, the layer
will be named automatically.</li>
</ul>
</td>
</tr>