Commit f7bd8c60 authored by Travis CI

Deploy to GitHub Pages: 9b1a17a8

Parent e77bb920
...@@ -1265,13 +1265,12 @@ are in NCHW format. Where N is batch size, C is the number of channels, H is the ...@@ -1265,13 +1265,12 @@ are in NCHW format. Where N is batch size, C is the number of channels, H is the
of the feature, and W is the width of the feature. of the feature, and W is the width of the feature.
For details of the convolution layer, please refer to UFLDL&#8217;s <a class="reference external" href="http://ufldl.stanford.edu/tutorial/supervised/FeatureExtractionUsingConvolution/">convolution</a>. For details of the convolution layer, please refer to UFLDL&#8217;s <a class="reference external" href="http://ufldl.stanford.edu/tutorial/supervised/FeatureExtractionUsingConvolution/">convolution</a>.
If bias attribute and activation type are provided, bias is added to the output of the convolution, If bias attribute and activation type are provided, bias is added to the output of the convolution,
and the corresponding activation function is applied to the final result. and the corresponding activation function is applied to the final result.</p>
For each input <span class="math">\(X\)</span>, the equation is:</p> <p>For each input <span class="math">\(X\)</span>, the equation is:</p>
<div class="math"> <div class="math">
\[Out = \sigma (W \ast X + b)\]</div> \[Out = \sigma (W \ast X + b)\]</div>
<p>In the above equation:</p> <p>In the above equation:</p>
<blockquote> <ul class="simple">
<div><ul class="simple">
<li><span class="math">\(X\)</span>: Input value, a tensor with NCHW format.</li> <li><span class="math">\(X\)</span>: Input value, a tensor with NCHW format.</li>
<li><span class="math">\(W\)</span>: Filter value, a tensor with MCHW format.</li> <li><span class="math">\(W\)</span>: Filter value, a tensor with MCHW format.</li>
<li><span class="math">\(\ast\)</span>: Convolution operation.</li> <li><span class="math">\(\ast\)</span>: Convolution operation.</li>
...@@ -1279,20 +1278,21 @@ For each input <span class="math">\(X\)</span>, the equation is:</p> ...@@ -1279,20 +1278,21 @@ For each input <span class="math">\(X\)</span>, the equation is:</p>
<li><span class="math">\(\sigma\)</span>: Activation function.</li> <li><span class="math">\(\sigma\)</span>: Activation function.</li>
<li><span class="math">\(Out\)</span>: Output value, the shape of <span class="math">\(Out\)</span> and <span class="math">\(X\)</span> may be different.</li> <li><span class="math">\(Out\)</span>: Output value, the shape of <span class="math">\(Out\)</span> and <span class="math">\(X\)</span> may be different.</li>
</ul> </ul>
</div></blockquote>
<p class="rubric">Example</p> <p class="rubric">Example</p>
<dl class="docutils"> <ul>
<dt>Input:</dt> <li><p class="first">Input:</p>
<dd><p class="first">Input shape: $(N, C_{in}, H_{in}, W_{in})$</p> <p>Input shape: $(N, C_{in}, H_{in}, W_{in})$</p>
<p class="last">Filter shape: $(C_{out}, C_{in}, H_f, W_f)$</p> <p>Filter shape: $(C_{out}, C_{in}, H_f, W_f)$</p>
</dd> </li>
<dt>Output:</dt> <li><p class="first">Output:
<dd>Output shape: $(N, C_{out}, H_{out}, W_{out})$</dd> Output shape: $(N, C_{out}, H_{out}, W_{out})$</p>
</dl> </li>
</ul>
<p>Where</p> <p>Where</p>
<div class="math"> <div class="math">
\[\begin{split}H_{out}&amp;= \frac{(H_{in} + 2 * paddings[0] - (dilations[0] * (H_f - 1) + 1))}{strides[0]} + 1 \\ \[\begin{split}H_{out}&amp;= \frac{(H_{in} + 2 * paddings[0] - (dilations[0] * (H_f - 1) + 1))}{strides[0]} + 1 \\
W_{out}&amp;= \frac{(W_{in} + 2 * paddings[1] - (dilations[1] * (W_f - 1) + 1))}{strides[1]} + 1\end{split}\]</div> W_{out}&amp;= \frac{(W_{in} + 2 * paddings[1] - (dilations[1] * (W_f - 1) + 1))}{strides[1]} + 1\end{split}\]</div>
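<p>As a quick sanity check of the formula above, here is a minimal sketch in plain Python (the helper name is illustrative and not part of the paddle.v2.fluid API) that evaluates the conv2d output spatial size:</p>
<div class="highlight-python"><div class="highlight"><pre>
# Minimal sketch: evaluate the conv2d output-size formula above.
# Illustrative helper, not part of paddle.v2.fluid.
def conv2d_out_size(in_size, filter_size, padding, stride, dilation=1):
    return (in_size + 2 * padding - (dilation * (filter_size - 1) + 1)) // stride + 1

# Example: a 32x32 feature map with a 3x3 filter, padding 1, stride 1 keeps its size.
assert conv2d_out_size(32, 3, padding=1, stride=1) == 32
</pre></div>
</div>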
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
<col class="field-name" /> <col class="field-name" />
<col class="field-body" /> <col class="field-body" />
...@@ -2067,33 +2067,63 @@ to compute the length.</td> ...@@ -2067,33 +2067,63 @@ to compute the length.</td>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">conv2d_transpose</code><span class="sig-paren">(</span><em>input</em>, <em>num_filters</em>, <em>output_size=None</em>, <em>filter_size=None</em>, <em>padding=None</em>, <em>stride=None</em>, <em>dilation=None</em>, <em>param_attr=None</em>, <em>use_cudnn=True</em>, <em>name=None</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">conv2d_transpose</code><span class="sig-paren">(</span><em>input</em>, <em>num_filters</em>, <em>output_size=None</em>, <em>filter_size=None</em>, <em>padding=None</em>, <em>stride=None</em>, <em>dilation=None</em>, <em>param_attr=None</em>, <em>use_cudnn=True</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>The transpose of the conv2d layer.</p> <dd><p><strong>Convolution2D transpose layer</strong></p>
<p>This layer is also known as the deconvolution layer.</p> <p>The convolution2D transpose layer calculates the output based on the input,
the filter, and the dilations, strides, and paddings. Input(Input) and output(Output)
are in NCHW format, where N is the batch size, C is the number of channels,
H is the height of the feature, and W is the width of the feature.
The parameters (dilations, strides, paddings) each contain two elements, which
represent height and width, respectively. For details of the convolution transpose
layer, please refer to the following explanation and the references <a class="reference external" href="http://www.matthewzeiler.com/wp-content/uploads/2017/07/cvpr2010.pdf">therein</a>.</p>
<p>For each input <span class="math">\(X\)</span>, the equation is:</p>
<div class="math">
\[Out = W \ast X\]</div>
<p>In the above equation:</p>
<ul class="simple">
<li><span class="math">\(X\)</span>: Input value, a tensor with NCHW format.</li>
<li><span class="math">\(W\)</span>: Filter value, a tensor with MCHW format.</li>
<li><span class="math">\(\ast\)</span> : Convolution transpose operation.</li>
<li><span class="math">\(Out\)</span>: Output value, the shape of <span class="math">\(Out\)</span> and <span class="math">\(X\)</span> may be different.</li>
</ul>
<p class="rubric">Example</p>
<ul>
<li><p class="first">Input:</p>
<p>Input shape: $(N, C_{in}, H_{in}, W_{in})$</p>
<p>Filter shape: $(C_{in}, C_{out}, H_f, W_f)$</p>
</li>
<li><p class="first">Output:</p>
<p>Output shape: $(N, C_{out}, H_{out}, W_{out})$</p>
</li>
</ul>
<p>Where</p>
<div class="math">
\[\begin{split}H_{out} &amp;= (H_{in} - 1) * strides[0] - 2 * paddings[0] + dilations[0] * (H_f - 1) + 1 \\
W_{out} &amp;= (W_{in} - 1) * strides[1] - 2 * paddings[1] + dilations[1] * (W_f - 1) + 1\end{split}\]</div>
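<p>To make the shape relation concrete, the following minimal sketch in plain Python (an illustrative helper, not a fluid API) evaluates the transpose formula above; note that it undoes the conv2d size formula whenever the division there is exact:</p>
<div class="highlight-python"><div class="highlight"><pre>
# Minimal sketch: evaluate the conv2d_transpose output-size formula above.
# Illustrative helper, not part of paddle.v2.fluid.
def conv2d_transpose_out_size(in_size, filter_size, padding, stride, dilation=1):
    return (in_size - 1) * stride - 2 * padding + dilation * (filter_size - 1) + 1

# Example: a 32x32 input with a 3x3 filter, stride 1, padding 0 gives a 34x34 output.
assert conv2d_transpose_out_size(32, 3, padding=0, stride=1) == 34
</pre></div>
</div>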
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
<col class="field-name" /> <col class="field-name" />
<col class="field-body" /> <col class="field-body" />
<tbody valign="top"> <tbody valign="top">
<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple"> <tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple">
<li><strong>input</strong> (<em>Variable</em>) &#8211; The input image with [N, C, H, W] format.</li> <li><strong>input</strong> (<em>Variable</em>) &#8211; The input image with [N, C, H, W] format.</li>
<li><strong>num_filters</strong> (<em>int</em>) &#8211; The number of filters. It is the same as the number of <li><strong>num_filters</strong> (<em>int</em>) &#8211; The number of filters. It is the same as the number of
output image channels.</li> output image channels.</li>
<li><strong>output_size</strong> (<em>int|tuple|None</em>) &#8211; The output image size. If output size is a <li><strong>output_size</strong> (<em>int|tuple|None</em>) &#8211; The output image size. If output size is a
tuple, it must contain two integers, (image_H, image_W). This tuple, it must contain two integers, (image_H, image_W). This
parameter only works when filter_size is None.</li> parameter only works when filter_size is None.</li>
<li><strong>filter_size</strong> (<em>int|tuple|None</em>) &#8211; The filter size. If filter_size is a tuple, <li><strong>filter_size</strong> (<em>int|tuple|None</em>) &#8211; The filter size. If filter_size is a tuple,
it must contain two integers, (filter_size_H, filter_size_W). it must contain two integers, (filter_size_H, filter_size_W).
Otherwise, the filter will be a square. If None, the filter size is Otherwise, the filter will be a square. If None, the filter size is
calculated from output_size.</li> calculated from output_size.</li>
<li><strong>padding</strong> (<em>int|tuple</em>) &#8211; The padding size. If padding is a tuple, it must <li><strong>padding</strong> (<em>int|tuple</em>) &#8211; The padding size. If padding is a tuple, it must
contain two integers, (padding_H, padding_W). Otherwise, the contain two integers, (padding_H, padding_W). Otherwise, the
padding_H = padding_W = padding.</li> padding_H = padding_W = padding. Default: padding = 0.</li>
<li><strong>stride</strong> (<em>int|tuple</em>) &#8211; The stride size. If stride is a tuple, it must <li><strong>stride</strong> (<em>int|tuple</em>) &#8211; The stride size. If stride is a tuple, it must
contain two integers, (stride_H, stride_W). Otherwise, the contain two integers, (stride_H, stride_W). Otherwise, the
stride_H = stride_W = stride.</li> stride_H = stride_W = stride. Default: stride = 1.</li>
<li><strong>dilation</strong> (<em>int|tuple</em>) &#8211; The dilation size. If dilation is a tuple, it must <li><strong>dilation</strong> (<em>int|tuple</em>) &#8211; The dilation size. If dilation is a tuple, it must
contain two integers, (dilation_H, dilation_W). Otherwise, the contain two integers, (dilation_H, dilation_W). Otherwise, the
dilation_H = dilation_W = dilation.</li> dilation_H = dilation_W = dilation. Default: dilation = 1.</li>
<li><strong>param_attr</strong> &#8211; Parameter Attribute.</li> <li><strong>param_attr</strong> (<em>ParamAttr</em>) &#8211; The parameters to the Conv2d_transpose Layer. Default: None</li>
<li><strong>use_cudnn</strong> (<em>bool</em>) &#8211; Use cudnn kernel or not, it is valid only when the cudnn <li><strong>use_cudnn</strong> (<em>bool</em>) &#8211; Use cudnn kernel or not, it is valid only when the cudnn
library is installed. Default: True</li> library is installed. Default: True</li>
<li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer <li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer
...@@ -2101,14 +2131,22 @@ will be named automatically.</li> ...@@ -2101,14 +2131,22 @@ will be named automatically.</li>
</ul> </ul>
</td> </td>
</tr> </tr>
<tr class="field-even field"><th class="field-name">Returns:</th><td class="field-body"><p class="first">Output image.</p> <tr class="field-even field"><th class="field-name">Returns:</th><td class="field-body"><p class="first">The tensor variable storing the convolution transpose result.</p>
</td> </td>
</tr> </tr>
<tr class="field-odd field"><th class="field-name">Return type:</th><td class="field-body"><p class="first last">Variable</p> <tr class="field-odd field"><th class="field-name">Return type:</th><td class="field-body"><p class="first">Variable</p>
</td>
</tr>
<tr class="field-even field"><th class="field-name">Raises:</th><td class="field-body"><p class="first last"><code class="xref py py-exc docutils literal"><span class="pre">ValueError</span></code> &#8211; If the shapes of input, filter_size, stride, padding and groups mismatch.</p>
</td> </td>
</tr> </tr>
</tbody> </tbody>
</table> </table>
<p class="rubric">Examples</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">data</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">data</span><span class="p">(</span><span class="n">name</span><span class="o">=</span><span class="s1">&#39;data&#39;</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">[</span><span class="mi">3</span><span class="p">,</span> <span class="mi">32</span><span class="p">,</span> <span class="mi">32</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="s1">&#39;float32&#39;</span><span class="p">)</span>
<span class="n">conv2d_transpose</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">conv2d_transpose</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">data</span><span class="p">,</span> <span class="n">num_filters</span><span class="o">=</span><span class="mi">2</span><span class="p">,</span> <span class="n">filter_size</span><span class="o">=</span><span class="mi">3</span><span class="p">)</span>
</pre></div>
</div>
</dd></dl> </dd></dl>
</div> </div>
......
...@@ -649,7 +649,7 @@ ...@@ -649,7 +649,7 @@
} ] } ]
},{ },{
"type" : "conv3d_transpose", "type" : "conv3d_transpose",
"comment" : "\nConvolution3D Transpose Operator.\n\nThe convolution transpose operation calculates the output based on the input, filter\nand dilations, strides, paddings, groups parameters. The size of each dimension of the\nparameters is checked in the infer-shape.\nInput(Input) and output(Output) are in NCDHW format. Where N is batch size, C is the\nnumber of channels, D is the depth of the feature, H is the height of the feature,\nand W is the width of the feature.\nFilter(Input) is in MCDHW format. Where M is the number of input feature channels,\nC is the number of output feature channels, D is the depth of the filter,H is the\nheight of the filter, and W is the width of the filter.\nParameters(strides, paddings) are three elements. These three elements represent\ndepth, height and width, respectively.\nThe input(X) size and output(Out) size may be different.\n\nExample: \n Input:\n Input shape: $(N, C_{in}, D_{in}, H_{in}, W_{in})$\n Filter shape: $(C_{in}, C_{out}, D_f, H_f, W_f)$\n Output:\n Output shape: $(N, C_{out}, D_{out}, H_{out}, W_{out})$\n Where\n $$\n D_{out} = (D_{in} - 1) * strides[0] - 2 * paddings[0] + D_f \\\\\n H_{out} = (H_{in} - 1) * strides[1] - 2 * paddings[1] + H_f \\\\\n W_{out} = (W_{in} - 1) * strides[2] - 2 * paddings[2] + W_f\n $$\n", "comment" : "\nConvolution3D Transpose Operator.\n\nThe convolution transpose operation calculates the output based on the input, filter\nand dilations, strides, paddings, groups parameters. The size of each dimension of the\nparameters is checked in the infer-shape.\nInput(Input) and output(Output) are in NCDHW format. Where N is batch size, C is the\nnumber of channels, D is the depth of the feature, H is the height of the feature,\nand W is the width of the feature.\nFilter(Input) is in MCDHW format. Where M is the number of input feature channels,\nC is the number of output feature channels, D is the depth of the filter,H is the\nheight of the filter, and W is the width of the filter.\nParameters(strides, paddings) are three elements. These three elements represent\ndepth, height and width, respectively.\nThe input(X) size and output(Out) size may be different.\n\nExample: \n Input:\n Input shape: $(N, C_{in}, D_{in}, H_{in}, W_{in})$\n Filter shape: $(C_{in}, C_{out}, D_f, H_f, W_f)$\n Output:\n Output shape: $(N, C_{out}, D_{out}, H_{out}, W_{out})$\n Where\n $$\n D_{out} = (D_{in} - 1) * strides[0] - 2 * paddings[0] + dilations[0] * (D_f - 1) + 1 \\\\\n H_{out} = (H_{in} - 1) * strides[1] - 2 * paddings[1] + dilations[1] * (H_f - 1) + 1 \\\\\n W_{out} = (W_{in} - 1) * strides[2] - 2 * paddings[2] + dilations[2] * (W_f - 1) + 1\n $$\n",
"inputs" : [ "inputs" : [
{ {
"name" : "Input", "name" : "Input",
...@@ -1805,7 +1805,7 @@ ...@@ -1805,7 +1805,7 @@
"attrs" : [ ] "attrs" : [ ]
},{ },{
"type" : "conv2d_transpose", "type" : "conv2d_transpose",
"comment" : "\nConvolution2D Transpose Operator.\n\nThe convolution transpose operation calculates the output based on the input, filter\nand dilations, strides, paddings, groups parameters. The size of each dimension of the\nparameters is checked in the infer-shape.\nInput(Input) and output(Output) are in NCHW format. Where N is batchsize, C is the\nnumber of channels, H is the height of the feature, and W is the width of the feature.\nFilter(Input) is in MCHW format. Where M is the number of input feature channels,\nC is the number of output feature channels, H is the height of the filter,\nand W is the width of the filter.\nParameters(strides, paddings) are two elements. These two elements represent height\nand width, respectively.\nThe input(X) size and output(Out) size may be different.\n\nExample:\n Input:\n Input shape: $(N, C_{in}, H_{in}, W_{in})$\n Filter shape: $(C_{in}, C_{out}, H_f, W_f)$\n Output:\n Output shape: $(N, C_{out}, H_{out}, W_{out})$\n Where\n $$\n H_{out} = (H_{in} - 1) * strides[0] - 2 * paddings[0] + H_f \\\\\n W_{out} = (W_{in} - 1) * strides[1] - 2 * paddings[1] + W_f\n $$\n", "comment" : "\nConvolution2D Transpose Operator.\n\nThe convolution transpose operation calculates the output based on the input, filter\nand dilations, strides, paddings, groups parameters. The size of each dimension of the\nparameters is checked in the infer-shape.\nInput(Input) and output(Output) are in NCHW format. Where N is batchsize, C is the\nnumber of channels, H is the height of the feature, and W is the width of the feature.\nFilter(Input) is in MCHW format. Where M is the number of input feature channels,\nC is the number of output feature channels, H is the height of the filter,\nand W is the width of the filter.\nParameters(strides, paddings) are two elements. These two elements represent height\nand width, respectively.\nThe input(X) size and output(Out) size may be different.\n\nExample:\n Input:\n Input shape: $(N, C_{in}, H_{in}, W_{in})$\n Filter shape: $(C_{in}, C_{out}, H_f, W_f)$\n Output:\n Output shape: $(N, C_{out}, H_{out}, W_{out})$\n Where\n $$\n H_{out} = (H_{in} - 1) * strides[0] - 2 * paddings[0] + dilations[0] * (H_f - 1) + 1 \\\\\n W_{out} = (W_{in} - 1) * strides[1] - 2 * paddings[1] + dilations[1] * (W_f - 1) + 1\n $$\n",
"inputs" : [ "inputs" : [
{ {
"name" : "Input", "name" : "Input",
......
The source diff is not shown because it is too large. You can view the blob instead.
...@@ -1284,13 +1284,12 @@ are in NCHW format. Where N is batch size, C is the number of channels, H is the ...@@ -1284,13 +1284,12 @@ are in NCHW format. Where N is batch size, C is the number of channels, H is the
of the feature, and W is the width of the feature. of the feature, and W is the width of the feature.
For details of the convolution layer, please refer to UFLDL&#8217;s <a class="reference external" href="http://ufldl.stanford.edu/tutorial/supervised/FeatureExtractionUsingConvolution/">convolution</a>. For details of the convolution layer, please refer to UFLDL&#8217;s <a class="reference external" href="http://ufldl.stanford.edu/tutorial/supervised/FeatureExtractionUsingConvolution/">convolution</a>.
If bias attribute and activation type are provided, bias is added to the output of the convolution, If bias attribute and activation type are provided, bias is added to the output of the convolution,
and the corresponding activation function is applied to the final result. and the corresponding activation function is applied to the final result.</p>
For each input <span class="math">\(X\)</span>, the equation is:</p> <p>For each input <span class="math">\(X\)</span>, the equation is:</p>
<div class="math"> <div class="math">
\[Out = \sigma (W \ast X + b)\]</div> \[Out = \sigma (W \ast X + b)\]</div>
<p>In the above equation:</p> <p>In the above equation:</p>
<blockquote> <ul class="simple">
<div><ul class="simple">
<li><span class="math">\(X\)</span>: Input value, a tensor with NCHW format.</li> <li><span class="math">\(X\)</span>: Input value, a tensor with NCHW format.</li>
<li><span class="math">\(W\)</span>: Filter value, a tensor with MCHW format.</li> <li><span class="math">\(W\)</span>: Filter value, a tensor with MCHW format.</li>
<li><span class="math">\(\ast\)</span>: Convolution operation.</li> <li><span class="math">\(\ast\)</span>: Convolution operation.</li>
...@@ -1298,20 +1297,21 @@ For each input <span class="math">\(X\)</span>, the equation is:</p> ...@@ -1298,20 +1297,21 @@ For each input <span class="math">\(X\)</span>, the equation is:</p>
<li><span class="math">\(\sigma\)</span>: Activation function.</li> <li><span class="math">\(\sigma\)</span>: Activation function.</li>
<li><span class="math">\(Out\)</span>: Output value, the shape of <span class="math">\(Out\)</span> and <span class="math">\(X\)</span> may be different.</li> <li><span class="math">\(Out\)</span>: Output value, the shape of <span class="math">\(Out\)</span> and <span class="math">\(X\)</span> may be different.</li>
</ul> </ul>
</div></blockquote>
<p class="rubric">Example</p> <p class="rubric">Example</p>
<dl class="docutils"> <ul>
<dt>Input:</dt> <li><p class="first">Input:</p>
<dd><p class="first">Input shape: $(N, C_{in}, H_{in}, W_{in})$</p> <p>Input shape: $(N, C_{in}, H_{in}, W_{in})$</p>
<p class="last">Filter shape: $(C_{out}, C_{in}, H_f, W_f)$</p> <p>Filter shape: $(C_{out}, C_{in}, H_f, W_f)$</p>
</dd> </li>
<dt>Output:</dt> <li><p class="first">Output:
<dd>Output shape: $(N, C_{out}, H_{out}, W_{out})$</dd> Output shape: $(N, C_{out}, H_{out}, W_{out})$</p>
</dl> </li>
</ul>
<p>Where</p> <p>Where</p>
<div class="math"> <div class="math">
\[\begin{split}H_{out}&amp;= \frac{(H_{in} + 2 * paddings[0] - (dilations[0] * (H_f - 1) + 1))}{strides[0]} + 1 \\ \[\begin{split}H_{out}&amp;= \frac{(H_{in} + 2 * paddings[0] - (dilations[0] * (H_f - 1) + 1))}{strides[0]} + 1 \\
W_{out}&amp;= \frac{(W_{in} + 2 * paddings[1] - (dilations[1] * (W_f - 1) + 1))}{strides[1]} + 1\end{split}\]</div> W_{out}&amp;= \frac{(W_{in} + 2 * paddings[1] - (dilations[1] * (W_f - 1) + 1))}{strides[1]} + 1\end{split}\]</div>
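<p>For instance, with <span class="math">\(H_{in} = 32\)</span>, a 3x3 filter, padding 1, dilation 1 and stride 1, the formula above gives</p>
<div class="math">
\[H_{out} = \frac{(32 + 2 * 1 - (1 * (3 - 1) + 1))}{1} + 1 = 32\]</div>
<p>so the spatial size is preserved in this configuration.</p>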
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
<col class="field-name" /> <col class="field-name" />
<col class="field-body" /> <col class="field-body" />
...@@ -2086,33 +2086,63 @@ to compute the length.</td> ...@@ -2086,33 +2086,63 @@ to compute the length.</td>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">conv2d_transpose</code><span class="sig-paren">(</span><em>input</em>, <em>num_filters</em>, <em>output_size=None</em>, <em>filter_size=None</em>, <em>padding=None</em>, <em>stride=None</em>, <em>dilation=None</em>, <em>param_attr=None</em>, <em>use_cudnn=True</em>, <em>name=None</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">conv2d_transpose</code><span class="sig-paren">(</span><em>input</em>, <em>num_filters</em>, <em>output_size=None</em>, <em>filter_size=None</em>, <em>padding=None</em>, <em>stride=None</em>, <em>dilation=None</em>, <em>param_attr=None</em>, <em>use_cudnn=True</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>The transpose of the conv2d layer.</p> <dd><p><strong>Convolution2D transpose layer</strong></p>
<p>This layer is also known as the deconvolution layer.</p> <p>The convolution2D transpose layer calculates the output based on the input,
the filter, and the dilations, strides, and paddings. Input(Input) and output(Output)
are in NCHW format, where N is the batch size, C is the number of channels,
H is the height of the feature, and W is the width of the feature.
The parameters (dilations, strides, paddings) each contain two elements, which
represent height and width, respectively. For details of the convolution transpose
layer, please refer to the following explanation and the references <a class="reference external" href="http://www.matthewzeiler.com/wp-content/uploads/2017/07/cvpr2010.pdf">therein</a>.</p>
<p>For each input <span class="math">\(X\)</span>, the equation is:</p>
<div class="math">
\[Out = W \ast X\]</div>
<p>In the above equation:</p>
<ul class="simple">
<li><span class="math">\(X\)</span>: Input value, a tensor with NCHW format.</li>
<li><span class="math">\(W\)</span>: Filter value, a tensor with MCHW format.</li>
<li><span class="math">\(\ast\)</span> : Convolution transpose operation.</li>
<li><span class="math">\(Out\)</span>: Output value, the shape of <span class="math">\(Out\)</span> and <span class="math">\(X\)</span> may be different.</li>
</ul>
<p class="rubric">Example</p>
<ul>
<li><p class="first">Input:</p>
<p>Input shape: $(N, C_{in}, H_{in}, W_{in})$</p>
<p>Filter shape: $(C_{in}, C_{out}, H_f, W_f)$</p>
</li>
<li><p class="first">Output:</p>
<p>Output shape: $(N, C_{out}, H_{out}, W_{out})$</p>
</li>
</ul>
<p>Where</p>
<div class="math">
\[\begin{split}H_{out} &amp;= (H_{in} - 1) * strides[0] - 2 * paddings[0] + dilations[0] * (H_f - 1) + 1 \\
W_{out} &amp;= (W_{in} - 1) * strides[1] - 2 * paddings[1] + dilations[1] * (W_f - 1) + 1\end{split}\]</div>
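<p>Tying this to the example below (a 32x32 input, a 3x3 filter, and the documented defaults stride = 1, padding = 0, dilation = 1), the formula above gives</p>
<div class="math">
\[H_{out} = (32 - 1) * 1 - 2 * 0 + 1 * (3 - 1) + 1 = 34\]</div>
<p>so that example produces a 34x34 output map.</p>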
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
<col class="field-name" /> <col class="field-name" />
<col class="field-body" /> <col class="field-body" />
<tbody valign="top"> <tbody valign="top">
<tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first simple"> <tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first simple">
<li><strong>input</strong> (<em>Variable</em>) &#8211; The input image with [N, C, H, W] format.</li> <li><strong>input</strong> (<em>Variable</em>) &#8211; The input image with [N, C, H, W] format.</li>
<li><strong>num_filters</strong> (<em>int</em>) &#8211; The number of filters. It is the same as the number of <li><strong>num_filters</strong> (<em>int</em>) &#8211; The number of filters. It is the same as the number of
output image channels.</li> output image channels.</li>
<li><strong>output_size</strong> (<em>int|tuple|None</em>) &#8211; The output image size. If output size is a <li><strong>output_size</strong> (<em>int|tuple|None</em>) &#8211; The output image size. If output size is a
tuple, it must contain two integers, (image_H, image_W). This tuple, it must contain two integers, (image_H, image_W). This
parameter only works when filter_size is None.</li> parameter only works when filter_size is None.</li>
<li><strong>filter_size</strong> (<em>int|tuple|None</em>) &#8211; The filter size. If filter_size is a tuple, <li><strong>filter_size</strong> (<em>int|tuple|None</em>) &#8211; The filter size. If filter_size is a tuple,
it must contain two integers, (filter_size_H, filter_size_W). it must contain two integers, (filter_size_H, filter_size_W).
Otherwise, the filter will be a square. If None, the filter size is Otherwise, the filter will be a square. If None, the filter size is
calculated from output_size.</li> calculated from output_size.</li>
<li><strong>padding</strong> (<em>int|tuple</em>) &#8211; The padding size. If padding is a tuple, it must <li><strong>padding</strong> (<em>int|tuple</em>) &#8211; The padding size. If padding is a tuple, it must
contain two integers, (padding_H, padding_W). Otherwise, the contain two integers, (padding_H, padding_W). Otherwise, the
padding_H = padding_W = padding.</li> padding_H = padding_W = padding. Default: padding = 0.</li>
<li><strong>stride</strong> (<em>int|tuple</em>) &#8211; The stride size. If stride is a tuple, it must <li><strong>stride</strong> (<em>int|tuple</em>) &#8211; The stride size. If stride is a tuple, it must
contain two integers, (stride_H, stride_W). Otherwise, the contain two integers, (stride_H, stride_W). Otherwise, the
stride_H = stride_W = stride.</li> stride_H = stride_W = stride. Default: stride = 1.</li>
<li><strong>dilation</strong> (<em>int|tuple</em>) &#8211; The dilation size. If dilation is a tuple, it must <li><strong>dilation</strong> (<em>int|tuple</em>) &#8211; The dilation size. If dilation is a tuple, it must
contain two integers, (dilation_H, dilation_W). Otherwise, the contain two integers, (dilation_H, dilation_W). Otherwise, the
dilation_H = dilation_W = dilation.</li> dilation_H = dilation_W = dilation. Default: dilation = 1.</li>
<li><strong>param_attr</strong> &#8211; Parameter Attribute.</li> <li><strong>param_attr</strong> (<em>ParamAttr</em>) &#8211; The parameters to the Conv2d_transpose Layer. Default: None</li>
<li><strong>use_cudnn</strong> (<em>bool</em>) &#8211; Use cudnn kernel or not, it is valid only when the cudnn <li><strong>use_cudnn</strong> (<em>bool</em>) &#8211; Use cudnn kernel or not, it is valid only when the cudnn
library is installed. Default: True</li> library is installed. Default: True</li>
<li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer <li><strong>name</strong> (<em>str|None</em>) &#8211; A name for this layer (optional). If set to None, the layer
...@@ -2120,14 +2150,22 @@ will be named automatically.</li> ...@@ -2120,14 +2150,22 @@ will be named automatically.</li>
</ul> </ul>
</td> </td>
</tr> </tr>
<tr class="field-even field"><th class="field-name">返回:</th><td class="field-body"><p class="first">Output image.</p> <tr class="field-even field"><th class="field-name">返回:</th><td class="field-body"><p class="first">The tensor variable storing the convolution transpose result.</p>
</td> </td>
</tr> </tr>
<tr class="field-odd field"><th class="field-name">返回类型:</th><td class="field-body"><p class="first last">Variable</p> <tr class="field-odd field"><th class="field-name">返回类型:</th><td class="field-body"><p class="first">Variable</p>
</td>
</tr>
<tr class="field-even field"><th class="field-name">Raises:</th><td class="field-body"><p class="first last"><code class="xref py py-exc docutils literal"><span class="pre">ValueError</span></code> &#8211; If the shapes of input, filter_size, stride, padding and groups mismatch.</p>
</td> </td>
</tr> </tr>
</tbody> </tbody>
</table> </table>
<p class="rubric">Examples</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">data</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">data</span><span class="p">(</span><span class="n">name</span><span class="o">=</span><span class="s1">&#39;data&#39;</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">[</span><span class="mi">3</span><span class="p">,</span> <span class="mi">32</span><span class="p">,</span> <span class="mi">32</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="s1">&#39;float32&#39;</span><span class="p">)</span>
<span class="n">conv2d_transpose</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">conv2d_transpose</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">data</span><span class="p">,</span> <span class="n">num_filters</span><span class="o">=</span><span class="mi">2</span><span class="p">,</span> <span class="n">filter_size</span><span class="o">=</span><span class="mi">3</span><span class="p">)</span>
</pre></div>
</div>
</dd></dl> </dd></dl>
</div> </div>
......
This diff is collapsed.