PaddlePaddle / Paddle
Commit 9666979d (unverified)
Authored on Nov 23, 2022 by ccrrong; committed via GitHub on Nov 23, 2022

move conv2d_transpose and conv3d_transpose (#48198)

Parent: 32462c64
Showing 18 changed files with 804 additions and 798 deletions (+804, -798)
python/paddle/fluid/layers/nn.py (+0, -727)
python/paddle/fluid/tests/unittests/ir/inference/test_mkldnn_conv_bias_fuse_pass.py (+2, -1)
python/paddle/fluid/tests/unittests/ir/inference/test_trt_conv3d_transpose_op.py (+3, -2)
python/paddle/fluid/tests/unittests/ir/inference/test_trt_conv_pass.py (+2, -1)
python/paddle/fluid/tests/unittests/ir/inference/test_trt_conv_quant_dequant_pass.py (+1, -1)
python/paddle/fluid/tests/unittests/mlu/test_conv2d_transposed_op_mlu.py (+13, -13)
python/paddle/fluid/tests/unittests/npu/test_conv2d_transpose_op_npu.py (+7, -7)
python/paddle/fluid/tests/unittests/test_conv2d_transpose_layer.py (+2, -1)
python/paddle/fluid/tests/unittests/test_conv2d_transpose_op.py (+15, -15)
python/paddle/fluid/tests/unittests/test_conv3d_transpose_layer.py (+2, -1)
python/paddle/fluid/tests/unittests/test_conv3d_transpose_part2_op.py (+12, -11)
python/paddle/fluid/tests/unittests/test_conv_transpose_nn_grad.py (+5, -5)
python/paddle/fluid/tests/unittests/test_functional_conv2d_transpose.py (+1, -1)
python/paddle/fluid/tests/unittests/test_functional_conv3d_transpose.py (+2, -2)
python/paddle/fluid/tests/unittests/test_imperative_load_static_param.py (+4, -4)
python/paddle/fluid/tests/unittests/test_layers.py (+3, -3)
python/paddle/static/nn/__init__.py (+2, -2)
python/paddle/static/nn/common.py (+728, -1)
python/paddle/fluid/layers/nn.py
@@ -77,8 +77,6 @@ __all__ = [
     'inplace_abn',
     'instance_norm',
     'data_norm',
-    'conv2d_transpose',
-    'conv3d_transpose',
     'reduce_sum',
     'reduce_mean',
     'reduce_max',
@@ -3811,731 +3809,6 @@ def spectral_norm(weight, dim=0, power_iters=1, eps=1e-12, name=None):
     return out
def conv2d_transpose(
input,
num_filters,
output_size=None,
filter_size=None,
padding=0,
stride=1,
dilation=1,
groups=None,
param_attr=None,
bias_attr=None,
use_cudnn=True,
act=None,
name=None,
data_format='NCHW',
):
r"""
:api_attr: Static Graph
The convolution2D transpose layer calculates the output based on the input,
filter, and dilations, strides, paddings. Input(Input) and output(Output)
are in NCHW or NHWC format. Where N is batch size, C is the number of channels,
H is the height of the feature, and W is the width of the feature.
Parameters (dilations, strides, paddings) are two elements, representing height
and width, respectively. For details of the convolution transpose layer, please
refer to the following explanation and the references
`therein <https://arxiv.org/pdf/1603.07285.pdf>`_.
If a bias attribute and an activation type are provided, bias is added to
the output of the convolution, and the corresponding activation function
is applied to the final result.
For each input :math:`X`, the equation is:
.. math::
Out = \sigma (W \ast X + b)
Where:
* :math:`X`: Input value, a 4-D Tensor with NCHW or NHWC format.
* :math:`W`: Filter value, a 4-D Tensor with MCHW format.
* :math:`\ast`: Convolution operation.
* :math:`b`: Bias value, a 2-D Tensor with shape [M, 1].
* :math:`\sigma`: Activation function.
* :math:`Out`: Output value, a 4-D Tensor with data format 'NCHW' or 'NHWC', the shape of :math:`Out` and :math:`X` may be different.
Example:
- Input:
Input shape: :math:`(N, C_{in}, H_{in}, W_{in})`
Filter shape: :math:`(C_{in}, C_{out}, H_f, W_f)`
- Output:
Output shape: :math:`(N, C_{out}, H_{out}, W_{out})`
Where
.. math::
H^\prime_{out} &= (H_{in} - 1) * strides[0] - pad_height_top - pad_height_bottom + dilations[0] * (H_f - 1) + 1 \\\\
W^\prime_{out} &= (W_{in} - 1) * strides[1] - pad_width_left - pad_width_right + dilations[1] * (W_f - 1) + 1 \\\\
H_{out} &\in [ H^\prime_{out}, H^\prime_{out} + strides[0] ] \\\\
W_{out} &\in [ W^\prime_{out}, W^\prime_{out} + strides[1] ]
Note:
The conv2d_transpose can be seen as the backward of the conv2d. For conv2d,
when stride > 1, conv2d maps multiple input shapes to the same output shape,
so for conv2d_transpose, when stride > 1, an input shape maps to multiple output shapes.
If output_size is None, :math:`H_{out} = H^\prime_{out}, W_{out} = W^\prime_{out}`;
else, the :math:`H_{out}` of the output size must be between :math:`H^\prime_{out}`
and :math:`H^\prime_{out} + strides[0]`, and the :math:`W_{out}` of the output size must be
between :math:`W^\prime_{out}` and :math:`W^\prime_{out} + strides[1]`;
conv2d_transpose can compute the kernel size automatically.
Args:
input(Tensor): 4-D Tensor with [N, C, H, W] or [N, H, W, C] format,
its data type is float32 or float64.
num_filters(int): The number of the filter. It is as same as the output
image channel.
output_size(int|tuple, optional): The output image size. If output size is a
tuple, it must contain two integers, (image_height, image_width). None if use
filter_size, padding, and stride to calculate output_size.
If output_size and filter_size are specified at the same time, they
should follow the formula above. Default: None. output_size and filter_size
should not be None at the same time.
filter_size(int|tuple, optional): The filter size. If filter_size is a tuple,
it must contain two integers, (filter_size_height, filter_size_width).
Otherwise, filter_size_height = filter_size_width = filter_size. None if
use output size to calculate filter_size. Default: None. filter_size and
output_size should not be None at the same time.
stride(int|tuple, optional): The stride size. It means the stride in transposed convolution.
If stride is a tuple, it must contain two integers, (stride_height, stride_width).
Otherwise, stride_height = stride_width = stride. Default: stride = 1.
padding(str|int|list|tuple, optional): The padding size. It means the number of zero-paddings
on both sides for each dimension. If `padding` is a string, either 'VALID' or
'SAME' which is the padding algorithm. If `padding` is a tuple or list,
it could be in three forms: `[pad_height, pad_width]` or
`[pad_height_top, pad_height_bottom, pad_width_left, pad_width_right]`,
and when `data_format` is `"NCHW"`, `padding` can be in the form
`[[0,0], [0,0], [pad_height_top, pad_height_bottom], [pad_width_left, pad_width_right]]`.
when `data_format` is `"NHWC"`, `padding` can be in the form
`[[0,0], [pad_height_top, pad_height_bottom], [pad_width_left, pad_width_right], [0,0]]`.
Default: padding = 0.
dilation(int|tuple, optional): The dilation size. It means the spacing between the kernel points.
If dilation is a tuple, it must contain two integers, (dilation_height, dilation_width).
Otherwise, dilation_height = dilation_width = dilation. Default: dilation = 1.
groups(int, optional): The groups number of the Conv2d transpose layer. Inspired by
grouped convolution in Alex Krizhevsky's Deep CNN paper, in which
when group=2, the first half of the filters is only connected to the
first half of the input channels, while the second half of the
filters is only connected to the second half of the input channels.
Default: groups = 1.
param_attr (ParamAttr, optional): The parameter attribute for learnable parameters/weights
of conv2d_transpose. If it is set to None or one attribute of ParamAttr, conv2d_transpose
will create ParamAttr as param_attr. If the Initializer of the param_attr
is not set, the parameter is initialized with Xavier. Default: None.
bias_attr (ParamAttr|bool, optional): The parameter attribute for the bias of conv2d_transpose.
If it is set to False, no bias will be added to the output units.
If it is set to None or one attribute of ParamAttr, conv2d_transpose
will create ParamAttr as bias_attr. If the Initializer of the bias_attr
is not set, the bias is initialized zero. Default: None.
use_cudnn(bool, optional): Use cudnn kernel or not, it is valid only when the cudnn
library is installed. Default: True.
act (str, optional): Activation type, if it is set to None, activation is not appended.
Default: None.
name(str, optional): For detailed information, please refer
to :ref:`api_guide_Name`. Usually name is no need to set and
None by default.
data_format (str, optional): Specify the data format of the input, and the data format of the output
will be consistent with that of the input. An optional string from: `"NCHW"`, `"NHWC"`.
The default is `"NCHW"`. When it is `"NCHW"`, the data is stored in the order of:
`[batch_size, input_channels, input_height, input_width]`.
Returns:
A Tensor representing the conv2d_transpose, whose
data type is the same with input and shape is (num_batches, channels, out_h,
out_w) or (num_batches, out_h, out_w, channels). If act is None, the tensor
storing the transposed convolution result, and if act is not None, the
tensor storing transposed convolution and non-linearity activation
result.
Raises:
ValueError: If the type of `use_cudnn` is not bool.
ValueError: If `data_format` is not "NCHW" or "NHWC".
ValueError: If `padding` is a string, but not "SAME" or "VALID".
ValueError: If `padding` is a tuple, but the element corresponding to the input's batch size is not 0
or the element corresponding to the input's channel is not 0.
ValueError: If `output_size` and filter_size are None at the same time.
ShapeError: If the input is not 4-D Tensor.
ShapeError: If the input's dimension size and filter's dimension size are not equal.
ShapeError: If the dimension size of input minus the size of `stride` is not 2.
ShapeError: If the number of input channels is not equal to filter's channels.
ShapeError: If the size of `output_size` is not equal to that of `stride`.
Examples:
.. code-block:: python
import paddle
paddle.enable_static()
data = paddle.static.data(name='data', shape=[None, 3, 32, 32], dtype='float32')
conv2d_transpose = paddle.static.nn.conv2d_transpose(input=data, num_filters=2, filter_size=3)
print(conv2d_transpose.shape) # [-1, 2, 34, 34]
"""
assert (
param_attr is not False
), "param_attr should not be False in conv2d_transpose."
if len(input.shape) != 4:
raise ValueError(
"Input size should be 4, "
"but received {}".format(len(input.shape))
)
if data_format not in ['NCHW', 'NHWC']:
raise ValueError(
"Attr(data_format) of Op(fluid.layers.conv2d_transpose) got wrong value: received "
+ data_format
+ " but only NCHW or NHWC supported."
)
input_channel = input.shape[1] if data_format == 'NCHW' else input.shape[-1]
op_type = 'conv2d_transpose'
if (
input_channel == groups
and num_filters == input_channel
and not use_cudnn
):
op_type = 'depthwise_conv2d_transpose'
helper = LayerHelper(op_type, **locals())
if not isinstance(input, Variable):
raise TypeError("Input of conv2d_transpose must be Variable")
stride = utils.convert_to_list(stride, 2, 'stride')
dilation = utils.convert_to_list(dilation, 2, 'dilation')
if not isinstance(use_cudnn, bool):
raise ValueError("use_cudnn should be True or False")
def _update_padding(padding, data_format):
def is_list_or_tuple(ele):
if isinstance(ele, list) or isinstance(ele, tuple):
return True
return False
if is_list_or_tuple(padding) and len(padding) == 4:
if is_list_or_tuple(padding[0]) and (data_format == "NCHW"):
if not (padding[0] == [0, 0] and padding[1] == [0, 0]):
raise ValueError(
"Non-zero padding(%s) in the batch or channel dimensions "
"is not supported." % str(padding)
)
padding = padding[2:4]
padding = [ele for a_list in padding for ele in a_list]
elif is_list_or_tuple(padding[0]) and (data_format == "NHWC"):
if not (padding[0] == [0, 0] and padding[3] == [0, 0]):
raise ValueError(
"Non-zero padding(%s) in the batch or channel dimensions "
"is not supported." % str(padding)
)
padding = padding[1:3]
padding = [ele for a_list in padding for ele in a_list]
padding = utils.convert_to_list(padding, 4, 'padding')
else:
padding = utils.convert_to_list(padding, 2, 'padding')
padding = [padding[0], padding[0], padding[1], padding[1]]
return padding
padding_algorithm = "EXPLICIT"
if isinstance(padding, str):
padding = padding.upper()
if padding not in ["SAME", "VALID"]:
raise ValueError(
"Unknown padding: '%s'. It can only be 'SAME' or 'VALID'."
% str(padding)
)
if padding == "VALID":
padding_algorithm = "VALID"
padding = [0, 0, 0, 0]
elif padding == "SAME":
padding_algorithm = "SAME"
padding = [0, 0, 0, 0]
padding = _update_padding(padding, data_format)
if output_size is None:
output_size = []
elif isinstance(output_size, (list, tuple)):
if utils._contain_var(output_size):
output_size = utils._convert_to_tensor_list(output_size)
else:
output_size = utils.convert_to_list(output_size, 2, 'output_size')
elif isinstance(output_size, int):
output_size = utils.convert_to_list(output_size, 2, 'output_size')
elif isinstance(output_size, Variable):
check_dtype(
output_size.dtype,
'output_size',
['int32', 'int64'],
'conv2d_transpose',
)
if len(output_size.shape) == 1 and (
output_size.shape[0] == 1 or output_size.shape[0] == 2
):
if output_size.shape[0] == 1:
output_size = [output_size, output_size]
else:
raise ValueError("output_size must contain one or two integers.")
else:
raise ValueError(
"output_size should be int, list[int] or tuple[int] or Tensor"
)
if filter_size is None:
if output_size == []:
raise ValueError("output_size must be set when filter_size is None")
if not _non_static_mode():
if isinstance(output_size, Variable) or utils._contain_var(
output_size
):
raise ValueError(
"filter_size should not be None when output_size is Variable or contain Variable in static mode."
)
else:
output_size = utils.convert_shape_to_list(output_size)
if len(output_size) == 1:
output_size = utils.convert_to_list(
output_size[0], 2, 'output_size'
)
h_in = input.shape[2] if data_format == 'NCHW' else input.shape[1]
w_in = input.shape[3] if data_format == 'NCHW' else input.shape[2]
filter_size_h = (
output_size[0]
- (h_in - 1) * stride[0]
+ padding[0]
+ padding[1]
- 1
) // dilation[0] + 1
filter_size_w = (
output_size[1]
- (w_in - 1) * stride[1]
+ padding[2]
+ padding[3]
- 1
) // dilation[1] + 1
filter_size = [filter_size_h, filter_size_w]
else:
filter_size = utils.convert_to_list(
filter_size, 2, 'conv2d_transpose.filter_size'
)
if len(padding) == 4 and utils._is_symmetric_padding(padding, 2):
padding = [padding[0], padding[2]]
if groups is None:
groups = 1
elif groups <= 0:
raise ValueError(
"the groups of input must be greater than 0, "
"but received the groups of input is {}".format(groups)
)
filter_shape = [input_channel, num_filters // groups] + filter_size
img_filter = helper.create_parameter(
dtype=input.dtype, shape=filter_shape, attr=helper.param_attr
)
pre_bias = helper.create_variable_for_type_inference(dtype=input.dtype)
helper.append_op(
type=op_type,
inputs={'Input': [input], 'Filter': [img_filter]},
outputs={'Output': pre_bias},
attrs={
'output_size': output_size,
'strides': stride,
'paddings': padding,
'padding_algorithm': padding_algorithm,
'dilations': dilation,
'groups': groups,
'use_cudnn': use_cudnn,
'data_format': data_format,
},
)
if data_format == 'NCHW':
pre_act = helper.append_bias_op(pre_bias, dim_start=1, dim_end=2)
else:
pre_act = helper.append_bias_op(pre_bias, dim_start=3, dim_end=4)
out = helper.append_activation(pre_act)
return out
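The size relation in the conv2d_transpose docstring above can be checked with plain arithmetic; a minimal sketch follows (no Paddle required; the helper name `deconv_out_dim` is mine, not part of the library):

```python
def deconv_out_dim(in_dim, stride, pad_begin, pad_end, dilation, filter_dim):
    # H'_out = (H_in - 1) * stride - pad_begin - pad_end + dilation * (H_f - 1) + 1
    return (in_dim - 1) * stride - pad_begin - pad_end + dilation * (filter_dim - 1) + 1

# Docstring example: 32x32 input, filter_size=3, default stride=1, padding=0, dilation=1
print(deconv_out_dim(32, 1, 0, 0, 1, 3))  # 34, matching the [-1, 2, 34, 34] shape above
```

With output_size left as None, the layer produces exactly this :math:`H^\prime_{out}` (and the analogous :math:`W^\prime_{out}`).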
def conv3d_transpose(
input,
num_filters,
output_size=None,
filter_size=None,
padding=0,
stride=1,
dilation=1,
groups=None,
param_attr=None,
bias_attr=None,
use_cudnn=True,
act=None,
name=None,
data_format='NCDHW',
):
r"""
:api_attr: Static Graph
The convolution3D transpose layer calculates the output based on the input,
filter, and dilations, strides, paddings. Input(Input) and output(Output)
are in NCDHW or NDHWC format. Where N is batch size, C is the number of channels,
D is the depth of the feature, H is the height of the feature, and W
is the width of the feature. Parameters (dilations, strides, paddings) have
three elements, representing depth, height, and width, respectively.
For details of the convolution transpose layer, please refer to the following
explanation and the references `therein <https://arxiv.org/pdf/1603.07285.pdf>`_.
If a bias attribute and an activation type are provided, bias is added to
the output of the convolution, and the corresponding activation function
is applied to the final result.
For each input :math:`X`, the equation is:
.. math::
Out = \sigma (W \ast X + b)
In the above equation:
* :math:`X`: Input value, a Tensor with NCDHW or NDHWC format.
* :math:`W`: Filter value, a Tensor with MCDHW format.
* :math:`\ast`: Convolution operation.
* :math:`b`: Bias value, a 2-D Tensor with shape [M, 1].
* :math:`\sigma`: Activation function.
* :math:`Out`: Output value, the shape of :math:`Out` and :math:`X` may be different.
Example:
- Input:
Input shape: :math:`(N, C_{in}, D_{in}, H_{in}, W_{in})`
Filter shape: :math:`(C_{in}, C_{out}, D_f, H_f, W_f)`
- Output:
Output shape: :math:`(N, C_{out}, D_{out}, H_{out}, W_{out})`
Where
.. math::
D^\prime_{out} &= (D_{in} - 1) * strides[0] - 2 * paddings[0] + dilations[0] * (D_f - 1) + 1 \\\\
H^\prime_{out} &= (H_{in} - 1) * strides[1] - 2 * paddings[1] + dilations[1] * (H_f - 1) + 1 \\\\
W^\prime_{out} &= (W_{in} - 1) * strides[2] - 2 * paddings[2] + dilations[2] * (W_f - 1) + 1 \\\\
D_{out} &\in [ D^\prime_{out}, D^\prime_{out} + strides[0] ] \\\\
H_{out} &\in [ H^\prime_{out}, H^\prime_{out} + strides[1] ] \\\\
W_{out} &\in [ W^\prime_{out}, W^\prime_{out} + strides[2] ]
Note:
The conv3d_transpose can be seen as the backward of the conv3d. For conv3d,
when stride > 1, conv3d maps multiple input shapes to the same output shape,
so for conv3d_transpose, when stride > 1, an input shape maps to multiple output shapes.
If output_size is None, :math:`D_{out} = D^\prime_{out}, H_{out} = H^\prime_{out}, W_{out} = W^\prime_{out}`;
else, the :math:`D_{out}` of the output size must be between :math:`D^\prime_{out}` and :math:`D^\prime_{out} + strides[0]`,
the :math:`H_{out}` of the output size must be between :math:`H^\prime_{out}` and :math:`H^\prime_{out} + strides[1]`,
and the :math:`W_{out}` of the output size must be between :math:`W^\prime_{out}` and :math:`W^\prime_{out} + strides[2]`;
conv3d_transpose can compute the kernel size automatically.
Args:
input(Tensor): The input is 5-D Tensor with shape [N, C, D, H, W] or [N, D, H, W, C], the data type
of input is float32 or float64.
num_filters(int): The number of the filter. It is as same as the output
image channel.
output_size(int|tuple, optional): The output image size. If output size is a
tuple, it must contain three integers, (image_depth, image_height, image_width). This
parameter only works when filter_size is None. If output_size and filter_size are
specified at the same time, they should follow the formula above. Default: None.
output_size and filter_size should not be None at the same time.
filter_size(int|tuple, optional): The filter size. If filter_size is a tuple,
it must contain three integers, (filter_size_depth, filter_size_height,
filter_size_width). Otherwise, filter_size_depth = filter_size_height = \
filter_size_width = filter_size. None if use output size to
calculate filter_size. Default: None. filter_size and output_size should not be
None at the same time.
padding(int|list|str|tuple, optional): The padding size. The padding argument effectively
adds `dilation * (kernel - 1)` amount of zero-padding on both sides of input. If `padding` is a string,
either 'VALID' or 'SAME' supported, which is the padding algorithm. If `padding`
is a tuple or list, it could be in three forms: `[pad_depth, pad_height, pad_width]` or
`[pad_depth_front, pad_depth_back, pad_height_top, pad_height_bottom, pad_width_left, pad_width_right]`,
and when `data_format` is `'NCDHW'`, `padding` can be in the form
`[[0,0], [0,0], [pad_depth_front, pad_depth_back], [pad_height_top, pad_height_bottom], [pad_width_left, pad_width_right]]`.
when `data_format` is `'NDHWC'`, `padding` can be in the form
`[[0,0], [pad_depth_front, pad_depth_back], [pad_height_top, pad_height_bottom], [pad_width_left, pad_width_right], [0,0]]`.
Default: padding = 0.
stride(int|tuple, optional): The stride size. It means the stride in transposed convolution.
If stride is a tuple, it must contain three integers, (stride_depth, stride_height,
stride_width). Otherwise, stride_depth = stride_height = stride_width = stride.
Default: stride = 1.
dilation(int|tuple, optional): The dilation size. It means the spacing between the kernel points.
If dilation is a tuple, it must contain three integers, (dilation_depth, dilation_height,
dilation_width). Otherwise, dilation_depth = dilation_height = dilation_width = dilation.
Default: dilation = 1.
groups(int, optional): The groups number of the Conv3d transpose layer. Inspired by
grouped convolution in Alex Krizhevsky's Deep CNN paper, in which
when group=2, the first half of the filters is only connected to the
first half of the input channels, while the second half of the
filters is only connected to the second half of the input channels.
Default: groups=1
param_attr (ParamAttr, optional): The parameter attribute for learnable parameters/weights
of conv3d_transpose. If it is set to None or one attribute of ParamAttr, conv3d_transpose
will create ParamAttr as param_attr. If the Initializer of the param_attr
is not set, the parameter is initialized with Xavier. Default: None.
bias_attr (ParamAttr|bool, optional): The parameter attribute for the bias of conv3d_transpose.
If it is set to False, no bias will be added to the output units.
If it is set to None or one attribute of ParamAttr, conv3d_transpose
will create ParamAttr as bias_attr. If the Initializer of the bias_attr
is not set, the bias is initialized zero. Default: None.
use_cudnn(bool, optional): Use cudnn kernel or not, it is valid only when the cudnn
library is installed. Default: True
act (str, optional): Activation type, if it is set to None, activation is not appended.
Default: None.
name(str, optional): For detailed information, please refer
to :ref:`api_guide_Name`. Usually name is no need to set and
None by default.
data_format (str, optional): Specify the data format of the input, and the data format of the output
will be consistent with that of the input. An optional string from: `"NCDHW"`, `"NDHWC"`.
The default is `"NCDHW"`. When it is `"NCDHW"`, the data is stored in the order of:
`[batch_size, input_channels, input_depth, input_height, input_width]`.
Returns:
A Variable holding Tensor representing the conv3d_transpose, whose data
type is the same with input and shape is (num_batches, channels, out_d, out_h,
out_w) or (num_batches, out_d, out_h, out_w, channels). If act is None, the tensor
variable storing the transposed convolution result, and if act is not None, the tensor
variable storing transposed convolution and non-linearity activation result.
Raises:
ValueError: If the type of `use_cudnn` is not bool.
ValueError: If `data_format` is not "NCDHW" or "NDHWC".
ValueError: If `padding` is a string, but not "SAME" or "VALID".
ValueError: If `padding` is a tuple, but the element corresponding to the input's batch size is not 0
or the element corresponding to the input's channel is not 0.
ValueError: If `output_size` and filter_size are None at the same time.
ShapeError: If the input is not 5-D Tensor.
ShapeError: If the input's dimension size and filter's dimension size are not equal.
ShapeError: If the dimension size of input minus the size of `stride` is not 2.
ShapeError: If the number of input channels is not equal to filter's channels.
ShapeError: If the size of `output_size` is not equal to that of `stride`.
Examples:
.. code-block:: python
import paddle
import numpy as np
paddle.enable_static()
data = paddle.static.data(name='data', shape=[None, 3, 12, 32, 32], dtype='float32')
param_attr = paddle.framework.ParamAttr(name='conv3d.weight', initializer=paddle.nn.initializer.XavierNormal(), learning_rate=0.001)
res = paddle.static.nn.conv3d_transpose(input=data, num_filters=2, filter_size=3, act="relu", param_attr=param_attr)
place = paddle.CPUPlace()
exe = paddle.static.Executor(place)
exe.run(paddle.static.default_startup_program())
x = np.random.rand(1, 3, 12, 32, 32).astype("float32")
output = exe.run(feed={"data": x}, fetch_list=[res])
print(output)
"""
assert (
param_attr is not False
), "param_attr should not be False in conv3d_transpose."
if data_format not in ['NCDHW', 'NDHWC']:
raise ValueError(
"Param(data_format) of Op(fluid.layers.conv3d_transpose) got wrong value: received "
+ data_format
+ " but only NCDHW or NDHWC supported."
)
l_type = "conv3d_transpose"
helper = LayerHelper(l_type, **locals())
if not isinstance(input, Variable):
raise TypeError("Input of conv3d_transpose must be Variable")
if len(input.shape) != 5:
raise ValueError(
"Input should be 5D tensor, but received input with the shape of {}".format(
input.shape
)
)
input_channel = (
input.shape[1] if data_format == 'NCDHW' else input.shape[-1]
)
stride = utils.convert_to_list(stride, 3, 'stride')
dilation = utils.convert_to_list(dilation, 3, 'dilation')
if not isinstance(use_cudnn, bool):
raise ValueError("use_cudnn should be True or False")
def _update_padding(padding, data_format):
def is_list_or_tuple(ele):
if isinstance(ele, list) or isinstance(ele, tuple):
return True
return False
if is_list_or_tuple(padding) and len(padding) == 5:
if is_list_or_tuple(padding[0]) and (data_format == "NCDHW"):
if not (padding[0] == [0, 0] and padding[1] == [0, 0]):
raise ValueError(
"Non-zero padding(%s) in the batch or channel dimensions "
"is not supported." % str(padding)
)
padding = padding[2:5]
padding = [ele for a_list in padding for ele in a_list]
elif is_list_or_tuple(padding[0]) and (data_format == "NDHWC"):
if not (padding[0] == [0, 0] and padding[4] == [0, 0]):
raise ValueError(
"Non-zero padding(%s) in the batch or channel dimensions "
"is not supported." % str(padding)
)
padding = padding[1:4]
padding = [ele for a_list in padding for ele in a_list]
padding = utils.convert_to_list(padding, 6, 'padding')
elif is_list_or_tuple(padding) and len(padding) == 6:
padding = utils.convert_to_list(padding, 6, 'padding')
else:
padding = utils.convert_to_list(padding, 3, 'padding')
padding = [
padding[0],
padding[0],
padding[1],
padding[1],
padding[2],
padding[2],
]
return padding
padding_algorithm = "EXPLICIT"
if isinstance(padding, str):
padding = padding.upper()
if padding not in ["SAME", "VALID"]:
raise ValueError(
"Unknown padding: '%s'. It can only be 'SAME' or 'VALID'."
% str(padding)
)
if padding == "VALID":
padding_algorithm = "VALID"
padding = [0, 0, 0, 0, 0, 0]
elif padding == "SAME":
padding_algorithm = "SAME"
padding = [0, 0, 0, 0, 0, 0]
padding = _update_padding(padding, data_format)
if filter_size is None:
if output_size is None:
raise ValueError("output_size must be set when filter_size is None")
if isinstance(output_size, int):
output_size = [output_size, output_size, output_size]
d_in = input.shape[2] if data_format == 'NCDHW' else input.shape[1]
h_in = input.shape[3] if data_format == 'NCDHW' else input.shape[2]
w_in = input.shape[4] if data_format == 'NCDHW' else input.shape[3]
filter_size_d = (
output_size[0]
- (d_in - 1) * stride[0]
+ padding[0]
+ padding[1]
- 1
) // dilation[0] + 1
filter_size_h = (
output_size[1]
- (h_in - 1) * stride[1]
+ padding[2]
+ padding[3]
- 1
) // dilation[1] + 1
filter_size_w = (
output_size[2]
- (w_in - 1) * stride[2]
+ padding[4]
+ padding[5]
- 1
) // dilation[2] + 1
filter_size = [filter_size_d, filter_size_h, filter_size_w]
else:
filter_size = utils.convert_to_list(
filter_size, 3, 'conv3d_transpose.filter_size'
)
if len(padding) == 6 and utils._is_symmetric_padding(padding, 3):
padding = [padding[0], padding[2], padding[4]]
if output_size is None:
output_size = []
elif isinstance(output_size, (list, tuple, int)):
output_size = utils.convert_to_list(output_size, 3, 'output_size')
else:
raise ValueError("output_size should be int, list[int] or tuple[int]")
groups = 1 if groups is None else groups
if groups <= 0:
raise ValueError(
"the groups of conv3d_transpose should be greater than 0. Received groups: {}".format(
groups
)
)
if num_filters % groups != 0:
raise ValueError(
"Attr(num_filters) must be divisible by groups,"
"Received: Attr(num_filters) is {}, the groups is {}".format(
num_filters, groups
)
)
filter_shape = [input_channel, num_filters // groups] + filter_size
img_filter = helper.create_parameter(
dtype=input.dtype, shape=filter_shape, attr=helper.param_attr
)
if data_format == 'NCDHW':
data_format = 'NCHW'
if data_format == 'NDHWC':
data_format = 'NHWC'
pre_bias = helper.create_variable_for_type_inference(dtype=input.dtype)
helper.append_op(
type=l_type,
inputs={'Input': [input], 'Filter': [img_filter]},
outputs={'Output': pre_bias},
attrs={
'output_size': output_size,
'strides': stride,
'paddings': padding,
'padding_algorithm': padding_algorithm,
'dilations': dilation,
'groups': groups,
'use_cudnn': use_cudnn,
'data_format': data_format,
},
)
if data_format == 'NCHW':
pre_act = helper.append_bias_op(pre_bias, dim_start=1, dim_end=2)
else:
pre_act = helper.append_bias_op(pre_bias, dim_start=4, dim_end=5)
out = helper.append_activation(pre_act)
return out
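When `filter_size is None`, both functions above infer the kernel extent by inverting that same size relation; a standalone sketch of the per-dimension computation (the helper name is mine), mirroring the `filter_size_d/h/w` expressions in the source:

```python
def infer_filter_dim(out_dim, in_dim, stride, pad_begin, pad_end, dilation):
    # Invert the transposed-conv size relation to recover the kernel extent:
    # filter = (out - (in - 1) * stride + pad_begin + pad_end - 1) // dilation + 1
    return (out_dim - (in_dim - 1) * stride + pad_begin + pad_end - 1) // dilation + 1

# Round trip with the 2-D docstring example: a 32 -> 34 mapping implies filter size 3
print(infer_filter_dim(34, 32, 1, 0, 0, 1))  # 3
```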
def reduce_sum(input, dim=None, keep_dim=False, name=None):
    """
...
...
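Both transposed-convolution layers removed above canonicalize `padding` to an explicit per-edge list before appending the op. A simplified standalone sketch of that normalization for the 2-D NCHW case (the 'SAME'/'VALID' string forms are handled separately in the source; this helper is an illustration, not the library's `_update_padding`):

```python
def normalize_padding_nchw(padding):
    """Normalize conv2d_transpose-style padding to [top, bottom, left, right]."""
    if isinstance(padding, int):
        return [padding] * 4
    if len(padding) == 4 and isinstance(padding[0], (list, tuple)):
        # Nested NCHW form: [[0,0], [0,0], [top, bottom], [left, right]]
        if padding[0] != [0, 0] or padding[1] != [0, 0]:
            raise ValueError("Non-zero padding in the batch or channel dimensions is not supported.")
        return [p for pair in padding[2:4] for p in pair]
    if len(padding) == 4:
        return list(padding)
    # Two-element form [pad_height, pad_width]
    return [padding[0], padding[0], padding[1], padding[1]]

print(normalize_padding_nchw(1))                                 # [1, 1, 1, 1]
print(normalize_padding_nchw([[0, 0], [0, 0], [1, 2], [3, 4]]))  # [1, 2, 3, 4]
```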
python/paddle/fluid/tests/unittests/ir/inference/test_mkldnn_conv_bias_fuse_pass.py
@@ -15,6 +15,7 @@
 import unittest
 import numpy as np
 from inference_pass_test import InferencePassTest
+import paddle
 import paddle.fluid as fluid
 from paddle.fluid.core import PassVersionChecker
@@ -173,7 +174,7 @@ class ConvTransposeMkldnnFusePassDialtionsGroupsTest(InferencePassTest):
             initializer=fluid.initializer.Xavier(uniform=False),
             learning_rate=0.001,
         )
-        conv_out = fluid.layers.conv2d_transpose(
+        conv_out = paddle.static.nn.conv2d_transpose(
             input=data,
             num_filters=3,
             filter_size=3,
python/paddle/fluid/tests/unittests/ir/inference/test_trt_conv3d_transpose_op.py
...
...
@@ -15,6 +15,7 @@
import
unittest
import
numpy
as
np
from
inference_pass_test
import
InferencePassTest
import
paddle
import
paddle.fluid
as
fluid
import
paddle.fluid.core
as
core
from
paddle.fluid.core
import
PassVersionChecker
...
...
@@ -28,7 +29,7 @@ class TensorRTSubgraphPassConv3dTransposeTest(InferencePassTest):
             data = fluid.data(
                 name="data", shape=[-1, 4, 4, 32, 32], dtype="float32"
             )
-            conv_out = fluid.layers.conv3d_transpose(
+            conv_out = paddle.static.nn.conv3d_transpose(
                 input=data,
                 num_filters=self.conv_num_filters,
                 filter_size=self.conv_filter_size,
...
...
@@ -95,7 +96,7 @@ class DynamicShapeTensorRTSubgraphPassConv3dTransposeTest(InferencePassTest):
             data = fluid.data(
                 name="data", shape=[-1, 6, -1, -1, -1], dtype="float32"
             )
-            conv_out = fluid.layers.conv3d_transpose(
+            conv_out = paddle.static.nn.conv3d_transpose(
                 input=data,
                 num_filters=self.conv_num_filters,
                 filter_size=self.conv_filter_size,
...
...
python/paddle/fluid/tests/unittests/ir/inference/test_trt_conv_pass.py
...
...
@@ -16,6 +16,7 @@ import os
 import unittest
 import numpy as np
 from inference_pass_test import InferencePassTest
+import paddle
 import paddle.fluid as fluid
 import paddle.fluid.core as core
 from paddle.fluid.core import PassVersionChecker
...
...
@@ -109,7 +110,7 @@ class TensorRTSubgraphPassConvTransposeTest(InferencePassTest):
             data = fluid.data(
                 name="data", shape=[-1, 6, 64, 64], dtype="float32"
             )
-            conv_out = fluid.layers.conv2d_transpose(
+            conv_out = paddle.static.nn.conv2d_transpose(
                 input=data,
                 num_filters=self.conv_num_filters,
                 filter_size=self.conv_filter_size,
...
...
python/paddle/fluid/tests/unittests/ir/inference/test_trt_conv_quant_dequant_pass.py
...
...
@@ -237,7 +237,7 @@ class QuantDequantTensorRTSubgraphPassConvTransposeTest(QuantDequantTest):
             data_reshape = paddle.reshape(self.data, shape=[1, 4, 14, 14])
             self.label = fluid.data(name='label', shape=[1, 1], dtype='int64')
             label_shape = paddle.reshape(self.label, shape=[1, 1, 1])
-            conv_out = fluid.layers.conv2d_transpose(
+            conv_out = paddle.static.nn.conv2d_transpose(
                 input=data_reshape,
                 num_filters=self.conv_num_filters,
                 filter_size=self.conv_filter_size,
...
...
python/paddle/fluid/tests/unittests/mlu/test_conv2d_transposed_op_mlu.py
...
...
@@ -499,21 +499,21 @@ class TestConv2DTransposeAPI(unittest.TestCase):
             data2 = fluid.layers.data(
                 name='data2', shape=[5, 5, 3], dtype='float32'
             )
-            out1 = fluid.layers.conv2d_transpose(
+            out1 = paddle.static.nn.conv2d_transpose(
                 input=data1,
                 groups=1,
                 num_filters=6,
                 filter_size=3,
                 data_format='NCHW',
             )
-            out2 = fluid.layers.conv2d_transpose(
+            out2 = paddle.static.nn.conv2d_transpose(
                 input=data2,
                 groups=1,
                 num_filters=6,
                 filter_size=3,
                 data_format='NHWC',
             )
-            out3 = fluid.layers.conv2d_transpose(
+            out3 = paddle.static.nn.conv2d_transpose(
                 input=data1,
                 groups=1,
                 num_filters=6,
...
@@ -521,7 +521,7 @@ class TestConv2DTransposeAPI(unittest.TestCase):
                 padding=[[0, 0], [1, 1], [1, 1], [0, 0]],
                 data_format='NHWC',
             )
-            out4 = fluid.layers.conv2d_transpose(
+            out4 = paddle.static.nn.conv2d_transpose(
                 input=data1,
                 groups=3,
                 num_filters=6,
...
...
@@ -529,7 +529,7 @@ class TestConv2DTransposeAPI(unittest.TestCase):
                 padding=[[0, 0], [0, 0], [2, 1], [0, 0]],
                 data_format='NCHW',
             )
-            out5 = fluid.layers.conv2d_transpose(
+            out5 = paddle.static.nn.conv2d_transpose(
                 input=data2,
                 groups=1,
                 num_filters=6,
...
...
@@ -537,7 +537,7 @@ class TestConv2DTransposeAPI(unittest.TestCase):
                 padding='SAME',
                 data_format='NCHW',
             )
-            out6 = fluid.layers.conv2d_transpose(
+            out6 = paddle.static.nn.conv2d_transpose(
                 input=data1,
                 groups=1,
                 num_filters=6,
...
...
@@ -545,7 +545,7 @@ class TestConv2DTransposeAPI(unittest.TestCase):
                 padding='VALID',
                 data_format='NHWC',
             )
-            out7 = fluid.layers.conv2d_transpose(
+            out7 = paddle.static.nn.conv2d_transpose(
                 input=data1,
                 groups=1,
                 num_filters=6,
...
...
@@ -586,7 +586,7 @@ class TestConv2DTransposeOpException(unittest.TestCase):
             data = fluid.layers.data(name='data', shape=[3, 5, 5], dtype="float32")

             def attr_data_format():
-                out = fluid.layers.conv2d_transpose(
+                out = paddle.static.nn.conv2d_transpose(
                     input=data,
                     groups=1,
                     num_filters=6,
...
...
@@ -597,7 +597,7 @@ class TestConv2DTransposeOpException(unittest.TestCase):
             self.assertRaises(ValueError, attr_data_format)

             def attr_padding_str():
-                out = fluid.layers.conv2d_transpose(
+                out = paddle.static.nn.conv2d_transpose(
                     input=data,
                     groups=1,
                     num_filters=6,
...
...
@@ -608,7 +608,7 @@ class TestConv2DTransposeOpException(unittest.TestCase):
             self.assertRaises(ValueError, attr_padding_str)

             def attr_padding_list():
-                out = fluid.layers.conv2d_transpose(
+                out = paddle.static.nn.conv2d_transpose(
                     input=data,
                     groups=1,
                     num_filters=6,
...
...
@@ -619,7 +619,7 @@ class TestConv2DTransposeOpException(unittest.TestCase):
             self.assertRaises(ValueError, attr_padding_list)

             def attr_padding_with_data_format():
-                out = fluid.layers.conv2d_transpose(
+                out = paddle.static.nn.conv2d_transpose(
                     input=data,
                     groups=1,
                     num_filters=6,
...
...
@@ -635,14 +635,14 @@ class TestConv2DTransposeOpException(unittest.TestCase):
             )

             def error_input_size():
-                out = fluid.layers.conv2d_transpose(
+                out = paddle.static.nn.conv2d_transpose(
                     input=error_input, groups=1, num_filters=6, filter_size=3
                 )

             self.assertRaises(ValueError, error_input_size)

             def error_groups():
-                out = fluid.layers.conv2d_transpose(
+                out = paddle.static.nn.conv2d_transpose(
                     input=data,
                     groups=0,
                     num_filters=6,
...
...
python/paddle/fluid/tests/unittests/npu/test_conv2d_transpose_op_npu.py
...
...
@@ -435,21 +435,21 @@ class TestConv2DTransposeAPI(unittest.TestCase):
             data2 = fluid.layers.data(
                 name='data2', shape=[5, 5, 3], dtype='float32'
             )
-            out1 = fluid.layers.conv2d_transpose(
+            out1 = paddle.static.nn.conv2d_transpose(
                 input=data1,
                 groups=1,
                 num_filters=6,
                 filter_size=3,
                 data_format='NCHW',
             )
-            out2 = fluid.layers.conv2d_transpose(
+            out2 = paddle.static.nn.conv2d_transpose(
                 input=data2,
                 groups=1,
                 num_filters=6,
                 filter_size=3,
                 data_format='NHWC',
             )
-            out3 = fluid.layers.conv2d_transpose(
+            out3 = paddle.static.nn.conv2d_transpose(
                 input=data1,
                 groups=1,
                 num_filters=6,
...
...
@@ -457,7 +457,7 @@ class TestConv2DTransposeAPI(unittest.TestCase):
                 padding=[[0, 0], [1, 1], [1, 1], [0, 0]],
                 data_format='NHWC',
             )
-            out4 = fluid.layers.conv2d_transpose(
+            out4 = paddle.static.nn.conv2d_transpose(
                 input=data1,
                 groups=3,
                 num_filters=6,
...
...
@@ -465,7 +465,7 @@ class TestConv2DTransposeAPI(unittest.TestCase):
                 padding=[[0, 0], [0, 0], [2, 1], [0, 0]],
                 data_format='NCHW',
             )
-            out5 = fluid.layers.conv2d_transpose(
+            out5 = paddle.static.nn.conv2d_transpose(
                 input=data2,
                 groups=1,
                 num_filters=6,
...
...
@@ -473,7 +473,7 @@ class TestConv2DTransposeAPI(unittest.TestCase):
                 padding='SAME',
                 data_format='NCHW',
             )
-            out6 = fluid.layers.conv2d_transpose(
+            out6 = paddle.static.nn.conv2d_transpose(
                 input=data1,
                 groups=1,
                 num_filters=6,
...
...
@@ -481,7 +481,7 @@ class TestConv2DTransposeAPI(unittest.TestCase):
                 padding='VALID',
                 data_format='NHWC',
             )
-            out7 = fluid.layers.conv2d_transpose(
+            out7 = paddle.static.nn.conv2d_transpose(
                 input=data1,
                 groups=1,
                 num_filters=6,
...
...
python/paddle/fluid/tests/unittests/test_conv2d_transpose_layer.py
...
...
@@ -13,6 +13,7 @@
 # limitations under the License.
 import numpy as np
+import paddle
 from paddle import fluid, nn
 import paddle.fluid.dygraph as dg
 import paddle.nn.functional as F
...
...
@@ -104,7 +105,7 @@ class Conv2DTransposeTestCase(unittest.TestCase):
             else:
                 bias_attr = I.NumpyArrayInitializer(self.bias)
-            y_var = fluid.layers.conv2d_transpose(
+            y_var = paddle.static.nn.conv2d_transpose(
                 x_var,
                 self.num_filters,
                 filter_size=self.filter_size,
...
...
python/paddle/fluid/tests/unittests/test_conv2d_transpose_op.py
...
...
@@ -835,21 +835,21 @@ class TestConv2DTransposeAPI(unittest.TestCase):
             data2 = fluid.layers.data(
                 name='data2', shape=[5, 5, 3], dtype='float32'
             )
-            out1 = fluid.layers.conv2d_transpose(
+            out1 = paddle.static.nn.conv2d_transpose(
                 input=data1,
                 groups=1,
                 num_filters=6,
                 filter_size=3,
                 data_format='NCHW',
             )
-            out2 = fluid.layers.conv2d_transpose(
+            out2 = paddle.static.nn.conv2d_transpose(
                 input=data2,
                 groups=1,
                 num_filters=6,
                 filter_size=3,
                 data_format='NHWC',
             )
-            out3 = fluid.layers.conv2d_transpose(
+            out3 = paddle.static.nn.conv2d_transpose(
                 input=data1,
                 groups=1,
                 num_filters=6,
...
...
@@ -857,7 +857,7 @@ class TestConv2DTransposeAPI(unittest.TestCase):
                 padding=[[0, 0], [1, 1], [1, 1], [0, 0]],
                 data_format='NHWC',
             )
-            out4 = fluid.layers.conv2d_transpose(
+            out4 = paddle.static.nn.conv2d_transpose(
                 input=data1,
                 groups=3,
                 num_filters=6,
...
...
@@ -865,7 +865,7 @@ class TestConv2DTransposeAPI(unittest.TestCase):
                 padding=[[0, 0], [0, 0], [2, 1], [0, 0]],
                 data_format='NCHW',
             )
-            out5 = fluid.layers.conv2d_transpose(
+            out5 = paddle.static.nn.conv2d_transpose(
                 input=data2,
                 groups=1,
                 num_filters=6,
...
...
@@ -873,7 +873,7 @@ class TestConv2DTransposeAPI(unittest.TestCase):
                 padding='SAME',
                 data_format='NCHW',
             )
-            out6 = fluid.layers.conv2d_transpose(
+            out6 = paddle.static.nn.conv2d_transpose(
                 input=data1,
                 groups=1,
                 num_filters=6,
...
...
@@ -881,7 +881,7 @@ class TestConv2DTransposeAPI(unittest.TestCase):
                 padding='VALID',
                 data_format='NHWC',
             )
-            out7 = fluid.layers.conv2d_transpose(
+            out7 = paddle.static.nn.conv2d_transpose(
                 input=data1,
                 groups=1,
                 num_filters=6,
...
...
@@ -919,7 +919,7 @@ class TestConv2DTransposeOpException(unittest.TestCase):
             data = fluid.layers.data(name='data', shape=[3, 5, 5], dtype="float32")

             def attr_data_format():
-                out = fluid.layers.conv2d_transpose(
+                out = paddle.static.nn.conv2d_transpose(
                     input=data,
                     groups=1,
                     num_filters=6,
...
...
@@ -930,7 +930,7 @@ class TestConv2DTransposeOpException(unittest.TestCase):
             self.assertRaises(ValueError, attr_data_format)

             def attr_padding_str():
-                out = fluid.layers.conv2d_transpose(
+                out = paddle.static.nn.conv2d_transpose(
                     input=data,
                     groups=1,
                     num_filters=6,
...
...
@@ -941,7 +941,7 @@ class TestConv2DTransposeOpException(unittest.TestCase):
             self.assertRaises(ValueError, attr_padding_str)

             def attr_padding_list():
-                out = fluid.layers.conv2d_transpose(
+                out = paddle.static.nn.conv2d_transpose(
                     input=data,
                     groups=1,
                     num_filters=6,
...
...
@@ -952,7 +952,7 @@ class TestConv2DTransposeOpException(unittest.TestCase):
             self.assertRaises(ValueError, attr_padding_list)

             def attr_padding_with_data_format():
-                out = fluid.layers.conv2d_transpose(
+                out = paddle.static.nn.conv2d_transpose(
                     input=data,
                     groups=1,
                     num_filters=6,
...
...
@@ -968,14 +968,14 @@ class TestConv2DTransposeOpException(unittest.TestCase):
             )

             def error_input_size():
-                out = fluid.layers.conv2d_transpose(
+                out = paddle.static.nn.conv2d_transpose(
                     input=error_input, groups=1, num_filters=6, filter_size=3
                 )

             self.assertRaises(ValueError, error_input_size)

             def error_groups():
-                out = fluid.layers.conv2d_transpose(
+                out = paddle.static.nn.conv2d_transpose(
                     input=data,
                     groups=0,
                     num_filters=6,
...
...
@@ -1064,7 +1064,7 @@ class TestTensorOutputSize3(TestTensorOutputSize1):
     def call_func(self, x):
         w_var = paddle.randn((3, 6, 3, 3), dtype='float32')
         output_size = paddle.assign([17])
-        out = paddle.fluid.layers.conv2d_transpose(
+        out = paddle.static.nn.conv2d_transpose(
             x, num_filters=6, output_size=output_size, filter_size=3, stride=2
         )
         return out
...
...
@@ -1076,7 +1076,7 @@ class TestTensorOutputSize4(TestTensorOutputSize1):
     def call_func(self, x):
         output_size = [17, paddle.assign([17])]
-        out = paddle.fluid.layers.conv2d_transpose(
+        out = paddle.static.nn.conv2d_transpose(
            x, num_filters=6, output_size=output_size, filter_size=3, stride=2
         )
         return out
...
...
python/paddle/fluid/tests/unittests/test_conv3d_transpose_layer.py
...
...
@@ -13,6 +13,7 @@
 # limitations under the License.
 import numpy as np
+import paddle
 from paddle import fluid, nn
 import paddle.fluid.dygraph as dg
 import paddle.nn.functional as F
...
...
@@ -101,7 +102,7 @@ class Conv3DTransposeTestCase(unittest.TestCase):
                 bias_attr = False
             else:
                 bias_attr = I.NumpyArrayInitializer(self.bias)
-            y_var = fluid.layers.conv3d_transpose(
+            y_var = paddle.static.nn.conv3d_transpose(
                 x_var,
                 self.num_filters,
                 filter_size=self.filter_size,
...
...
python/paddle/fluid/tests/unittests/test_conv3d_transpose_part2_op.py
...
...
@@ -15,6 +15,7 @@
 import unittest
 import numpy as np
+import paddle
 import paddle.fluid.core as core
 import paddle.fluid as fluid
 from test_conv3d_transpose_op import TestConv3DTransposeOp
...
...
@@ -91,21 +92,21 @@ class TestConv3DTransposeAPI(unittest.TestCase):
                 name='data2', shape=[5, 5, 5, 3], dtype='float32'
             )
-            out1 = fluid.layers.conv3d_transpose(
+            out1 = paddle.static.nn.conv3d_transpose(
                 input=data1,
                 groups=1,
                 num_filters=6,
                 filter_size=3,
                 data_format='NCDHW',
             )
-            out2 = fluid.layers.conv3d_transpose(
+            out2 = paddle.static.nn.conv3d_transpose(
                 input=data2,
                 groups=1,
                 num_filters=6,
                 filter_size=3,
                 data_format='NDHWC',
             )
-            out3 = fluid.layers.conv3d_transpose(
+            out3 = paddle.static.nn.conv3d_transpose(
                 input=data1,
                 groups=1,
                 num_filters=6,
...
...
@@ -113,7 +114,7 @@ class TestConv3DTransposeAPI(unittest.TestCase):
                 padding=[[0, 0], [0, 0], [1, 1], [0, 0], [1, 1]],
                 data_format='NCDHW',
             )
-            out4 = fluid.layers.conv3d_transpose(
+            out4 = paddle.static.nn.conv3d_transpose(
                 input=data2,
                 groups=3,
                 num_filters=6,
...
...
@@ -121,7 +122,7 @@ class TestConv3DTransposeAPI(unittest.TestCase):
                 padding=[[0, 0], [0, 0], [1, 1], [1, 2], [0, 0]],
                 data_format='NDHWC',
             )
-            out5 = fluid.layers.conv3d_transpose(
+            out5 = paddle.static.nn.conv3d_transpose(
                 input=data2,
                 groups=1,
                 num_filters=6,
...
...
@@ -129,7 +130,7 @@ class TestConv3DTransposeAPI(unittest.TestCase):
                 padding='SAME',
                 data_format='NCDHW',
             )
-            out6 = fluid.layers.conv3d_transpose(
+            out6 = paddle.static.nn.conv3d_transpose(
                 input=data2,
                 groups=1,
                 num_filters=6,
...
...
@@ -137,7 +138,7 @@ class TestConv3DTransposeAPI(unittest.TestCase):
                 padding='VALID',
                 data_format='NDHWC',
             )
-            out7 = fluid.layers.conv3d_transpose(
+            out7 = paddle.static.nn.conv3d_transpose(
                 input=data2,
                 groups=1,
                 num_filters=6,
...
...
@@ -177,7 +178,7 @@ class TestConv3DTransposeOpException(unittest.TestCase):
             )

             def attr_data_format():
-                out = fluid.layers.conv2d_transpose(
+                out = paddle.static.nn.conv2d_transpose(
                     input=data,
                     groups=1,
                     num_filters=6,
...
...
@@ -188,7 +189,7 @@ class TestConv3DTransposeOpException(unittest.TestCase):
             self.assertRaises(ValueError, attr_data_format)

             def attr_padding_str():
-                out = fluid.layers.conv2d_transpose(
+                out = paddle.static.nn.conv2d_transpose(
                     input=data,
                     groups=1,
                     num_filters=6,
...
...
@@ -199,7 +200,7 @@ class TestConv3DTransposeOpException(unittest.TestCase):
             self.assertRaises(ValueError, attr_padding_str)

             def attr_padding_list():
-                out = fluid.layers.conv2d_transpose(
+                out = paddle.static.nn.conv2d_transpose(
                     input=data,
                     groups=1,
                     num_filters=6,
...
...
@@ -210,7 +211,7 @@ class TestConv3DTransposeOpException(unittest.TestCase):
             self.assertRaises(ValueError, attr_padding_list)

             def attr_padding_with_data_format():
-                out = fluid.layers.conv2d_transpose(
+                out = paddle.static.nn.conv2d_transpose(
                     input=data,
                     groups=1,
                     num_filters=6,
...
...
python/paddle/fluid/tests/unittests/test_conv_transpose_nn_grad.py
...
...
@@ -36,7 +36,7 @@ class TestConvTransposeDoubleGradCheck(unittest.TestCase):
         if core.is_compiled_with_rocm():
             dtype = np.float32
         x = layers.data('x', shape, False, dtype)
-        y = layers.conv2d_transpose(
+        y = paddle.static.nn.conv2d_transpose(
             x, 2, filter_size=1, groups=1, bias_attr=False
         )
         x_arr = np.random.uniform(-1, 1, shape).astype(dtype)
...
...
@@ -92,7 +92,7 @@ class TestConvTranspose2DoubleGradCheck_AsyPadding(
         if core.is_compiled_with_rocm():
             dtype = np.float32
         x = layers.data('x', shape, False, dtype)
-        y = layers.conv2d_transpose(
+        y = paddle.static.nn.conv2d_transpose(
             input=x,
             num_filters=2,
             filter_size=1,
...
...
@@ -145,7 +145,7 @@ class TestConvTranspose2DoubleGradCheck_PaddingSAME(
         if core.is_compiled_with_rocm():
             dtype = np.float32
         x = layers.data('x', shape, False, dtype)
-        y = layers.conv2d_transpose(
+        y = paddle.static.nn.conv2d_transpose(
             input=x,
             num_filters=2,
             filter_size=1,
...
...
@@ -198,7 +198,7 @@ class TestConvTranspose2DoubleGradCheck_PaddingVALID(
         if core.is_compiled_with_rocm():
             dtype = np.float32
         x = layers.data('x', shape, False, dtype)
-        y = layers.conv2d_transpose(
+        y = paddle.static.nn.conv2d_transpose(
             input=x,
             num_filters=2,
             filter_size=1,
...
...
@@ -251,7 +251,7 @@ class TestConvTranspose2DoubleGradCheck_ChannelLast(
         if core.is_compiled_with_rocm():
             dtype = np.float32
         x = layers.data('x', shape, False, dtype)
-        y = layers.conv2d_transpose(
+        y = paddle.static.nn.conv2d_transpose(
             input=x,
             num_filters=2,
             filter_size=1,
...
...
python/paddle/fluid/tests/unittests/test_functional_conv2d_transpose.py
...
...
@@ -89,7 +89,7 @@ class TestFunctionalConv2D(TestCase):
                 (-1, self.in_channels, -1, -1),
                 dtype=self.dtype,
             )
-            y = fluid.layers.conv2d_transpose(
+            y = paddle.static.nn.conv2d_transpose(
                 x,
                 self.out_channels,
                 output_size=self.output_size,
...
...
python/paddle/fluid/tests/unittests/test_functional_conv3d_transpose.py
...
...
@@ -89,7 +89,7 @@ class TestFunctionalConv3DTranspose(TestCase):
                 (-1, self.in_channels, -1, -1, -1),
                 dtype=self.dtype,
             )
-            y = fluid.layers.conv3d_transpose(
+            y = paddle.static.nn.conv3d_transpose(
                 x,
                 self.out_channels,
                 output_size=self.output_size,
...
...
@@ -550,7 +550,7 @@ class TestFunctionalConv3DTransposeErrorCase10(TestCase):
         with fluid.unique_name.guard():
             with fluid.program_guard(main, start):
                 x = fluid.data("input", self.input.shape, dtype=paddle.float32)
-                y = fluid.layers.conv3d_transpose(
+                y = paddle.static.nn.conv3d_transpose(
                     x,
                     self.num_filters,
                     self.filter_size,
...
...
python/paddle/fluid/tests/unittests/test_imperative_load_static_param.py
...
...
@@ -103,20 +103,20 @@ class TestDygraphLoadStatic(unittest.TestCase):
                 name="conv2d_trans_in", shape=[None, 10, 10, 10]
             )
-            conv2d_trans_out_1 = fluid.layers.conv2d_transpose(
+            conv2d_trans_out_1 = paddle.static.nn.conv2d_transpose(
                 conv2d_trans_in, num_filters=10, filter_size=5, act="relu"
             )
-            conv2d_trans_out_2 = fluid.layers.conv2d_transpose(
+            conv2d_trans_out_2 = paddle.static.nn.conv2d_transpose(
                 conv2d_trans_in, num_filters=10, filter_size=5, act="relu"
             )
             conv3d_trans_in = fluid.data(
                 name='conv3d_trans_in', shape=[None, 3, 12, 32, 32], dtype='float32'
             )
-            conv3d_trans_out_1 = fluid.layers.conv3d_transpose(
+            conv3d_trans_out_1 = paddle.static.nn.conv3d_transpose(
                 input=conv3d_trans_in, num_filters=2, filter_size=3, act="relu"
             )
-            conv3d_trans_out_2 = fluid.layers.conv3d_transpose(
+            conv3d_trans_out_2 = paddle.static.nn.conv3d_transpose(
                 input=conv3d_trans_in, num_filters=2, filter_size=3, act="relu"
             )
...
...
python/paddle/fluid/tests/unittests/test_layers.py
...
...
@@ -716,7 +716,7 @@ class TestLayer(LayerTest):
         inp_np = np.arange(0, 24).reshape([2, 3, 2, 2]).astype('float32')
         with self.static_graph():
             img = layers.data(name='pixel', shape=[3, 2, 2], dtype='float32')
-            out = layers.conv2d_transpose(
+            out = paddle.static.nn.conv2d_transpose(
                 input=img,
                 num_filters=10,
                 filter_size=27,
...
...
@@ -2270,7 +2270,7 @@ class TestLayer(LayerTest):
         with self.static_graph():
             img = layers.data(name='pixel', shape=[3, 2, 2, 2], dtype='float32')
-            out = layers.conv3d_transpose(
+            out = paddle.static.nn.conv3d_transpose(
                 input=img, num_filters=12, filter_size=12, use_cudnn=False
             )
             static_rlt = self.get_static_graph_result(
...
...
@@ -3062,7 +3062,7 @@ class TestBook(LayerTest):
             fluid.default_main_program(), fluid.default_startup_program()
         ):
             img = self._get_data(name='pixel', shape=[3, 2, 2], dtype='float32')
-            return layers.conv2d_transpose(
+            return paddle.static.nn.conv2d_transpose(
                 input=img, num_filters=10, output_size=28
             )
...
...
python/paddle/static/nn/__init__.py
...
...
@@ -14,15 +14,15 @@
 from .common import fc  # noqa: F401
 from .common import deform_conv2d  # noqa: F401
+from .common import conv2d_transpose  # noqa: F401
+from .common import conv3d_transpose  # noqa: F401
 from ...fluid.layers import batch_norm  # noqa: F401
 from ...fluid.layers import bilinear_tensor_product  # noqa: F401
 from ...fluid.layers import case  # noqa: F401
 from ...fluid.layers import cond  # noqa: F401
 from ...fluid.layers import conv2d  # noqa: F401
-from ...fluid.layers import conv2d_transpose  # noqa: F401
 from ...fluid.layers import conv3d  # noqa: F401
-from ...fluid.layers import conv3d_transpose  # noqa: F401
 from ...fluid.layers import create_parameter  # noqa: F401
 from ...fluid.layers import crf_decoding  # noqa: F401
 from ...fluid.layers import data_norm  # noqa: F401
...
...
python/paddle/static/nn/common.py
...
...
@@ -13,7 +13,9 @@
 # limitations under the License.
 import paddle
-from paddle.fluid.framework import static_only
+from paddle.fluid.framework import static_only, Variable, _non_static_mode
+from paddle.fluid.data_feeder import check_dtype
 from paddle.common_ops_import import (
     check_type,
...
...
@@ -174,6 +176,731 @@ def fc(
    )


def conv2d_transpose(
    input,
    num_filters,
    output_size=None,
    filter_size=None,
    padding=0,
    stride=1,
    dilation=1,
    groups=None,
    param_attr=None,
    bias_attr=None,
    use_cudnn=True,
    act=None,
    name=None,
    data_format='NCHW',
):
    r"""
:api_attr: Static Graph
The convolution2D transpose layer calculates the output based on the input,
filter, and dilations, strides, paddings. Input(Input) and output(Output)
are in NCHW or NHWC format. Where N is batch size, C is the number of channels,
H is the height of the feature, and W is the width of the feature.
Parameters(dilations, strides, paddings) are two elements. These two elements
represent height and width, respectively. The details of convolution transpose
layer, please refer to the following explanation and references
`therein <https://arxiv.org/pdf/1603.07285.pdf>`_.
If bias attribution and activation type are provided, bias is added to
the output of the convolution, and the corresponding activation function
is applied to the final result.
For each input :math:`X`, the equation is:
.. math::
Out = \sigma (W \\ast X + b)
Where:
* :math:`X`: Input value, a 4-D Tensor with NCHW or NHWC format.
* :math:`W`: Filter value, a 4-D Tensor with MCHW format.
* :math:`\\ast`: Convolution operation.
* :math:`b`: Bias value, a 2-D Tensor with shape [M, 1].
* :math:`\\sigma`: Activation function.
* :math:`Out`: Output value, a 4-D Tensor with data format 'NCHW' or 'NHWC', the shape of :math:`Out` and :math:`X` may be different.
Example:
- Input:
Input shape: :math:`(N, C_{in}, H_{in}, W_{in})`
Filter shape: :math:`(C_{in}, C_{out}, H_f, W_f)`
- Output:
Output shape: :math:`(N, C_{out}, H_{out}, W_{out})`
Where
.. math::
H^\prime_{out} &= (H_{in} - 1) * strides[0] - pad_height_top - pad_height_bottom + dilations[0] * (H_f - 1) + 1 \\\\
W^\prime_{out} &= (W_{in} - 1) * strides[1] - pad_width_left - pad_width_right + dilations[1] * (W_f - 1) + 1 \\\\
H_{out} &\in [ H^\prime_{out}, H^\prime_{out} + strides[0] ] \\\\
W_{out} &\in [ W^\prime_{out}, W^\prime_{out} + strides[1] ]
Note:
The conv2d_transpose can be seen as the backward of the conv2d. For conv2d,
when stride > 1, conv2d maps multiple input shape to the same output shape,
so for conv2d_transpose, when stride > 1, input shape maps multiple output shape.
If output_size is None, :math:`H_{out} = H^\prime_{out}, W_{out} = W^\prime_{out}`;
else, the :math:`H_{out}` of the output size must between :math:`H^\prime_{out}`
and :math:`H^\prime_{out} + strides[0]`, and the :math:`W_{out}` of the output size must
between :math:`W^\prime_{out}` and :math:`W^\prime_{out} + strides[1]`,
conv2d_transpose can compute the kernel size automatically.
Args:
input(Tensor): 4-D Tensor with [N, C, H, W] or [N, H, W, C] format,
its data type is float32 or float64.
num_filters(int): The number of the filter. It is as same as the output
image channel.
output_size(int|tuple, optional): The output image size. If output size is a
tuple, it must contain two integers, (image_height, image_width). None if use
filter_size, padding, and stride to calculate output_size.
If output_size and filter_size are specified at the same time, They
should follow the formula above. Default: None. output_size and filter_size
should not be None at the same time.
filter_size(int|tuple, optional): The filter size. If filter_size is a tuple,
it must contain two integers, (filter_size_height, filter_size_width).
Otherwise, filter_size_height = filter_size_width = filter_size. None if
use output size to calculate filter_size. Default: None. filter_size and
output_size should not be None at the same time.
stride(int|tuple, optional): The stride size. It means the stride in transposed convolution.
If stride is a tuple, it must contain two integers, (stride_height, stride_width).
Otherwise, stride_height = stride_width = stride. Default: stride = 1.
padding(str|int|list|tuple, optional): The padding size. It means the number of zero-paddings
on both sides for each dimension. If `padding` is a string, either 'VALID' or
'SAME' which is the padding algorithm. If `padding` is a tuple or list,
it could be in three forms: `[pad_height, pad_width]` or
`[pad_height_top, pad_height_bottom, pad_width_left, pad_width_right]`,
and when `data_format` is `"NCHW"`, `padding` can be in the form
`[[0,0], [0,0], [pad_height_top, pad_height_bottom], [pad_width_left, pad_width_right]]`.
when `data_format` is `"NHWC"`, `padding` can be in the form
`[[0,0], [pad_height_top, pad_height_bottom], [pad_width_left, pad_width_right], [0,0]]`.
Default: padding = 0.
dilation(int|tuple, optional): The dilation size. It means the spacing between the kernel points.
If dilation is a tuple, it must contain two integers, (dilation_height, dilation_width).
Otherwise, dilation_height = dilation_width = dilation. Default: dilation = 1.
groups(int, optional): The groups number of the Conv2d transpose layer. Inspired by
grouped convolution in Alex Krizhevsky's Deep CNN paper, in which
when group=2, the first half of the filters is only connected to the
first half of the input channels, while the second half of the
filters is only connected to the second half of the input channels.
Default: groups = 1.
param_attr (ParamAttr, optional): The parameter attribute for learnable parameters/weights
of conv2d_transpose. If it is set to None or one attribute of ParamAttr, conv2d_transpose
will create ParamAttr as param_attr. If the Initializer of the param_attr
is not set, the parameter is initialized with Xavier. Default: None.
bias_attr (ParamAttr|bool, optional): The parameter attribute for the bias of conv2d_transpose.
If it is set to False, no bias will be added to the output units.
If it is set to None or one attribute of ParamAttr, conv2d_transpose
will create ParamAttr as bias_attr. If the Initializer of the bias_attr
is not set, the bias is initialized zero. Default: None.
use_cudnn(bool, optional): Use cudnn kernel or not, it is valid only when the cudnn
library is installed. Default: True.
act (str, optional): Activation type, if it is set to None, activation is not appended.
Default: None.
name(str, optional): For detailed information, please refer
to :ref:`api_guide_Name`. Usually name is no need to set and
None by default.
data_format (str, optional): Specify the data format of the input, and the data format of the output
will be consistent with that of the input. An optional string from: `"NCHW"`, `"NHWC"`.
The default is `"NCHW"`. When it is `"NCHW"`, the data is stored in the order of:
`[batch_size, input_channels, input_height, input_width]`.
Returns:
A Tensor representing the conv2d_transpose, whose
data type is the same with input and shape is (num_batches, channels, out_h,
out_w) or (num_batches, out_h, out_w, channels). If act is None, the tensor
storing the transposed convolution result, and if act is not None, the
tensor storing transposed convolution and non-linearity activation
result.
Raises:
ValueError: If the type of `use_cudnn` is not bool.
ValueError: If `data_format` is not "NCHW" or "NHWC".
ValueError: If `padding` is a string, but not "SAME" or "VALID".
ValueError: If `padding` is a tuple, but the element corresponding to the input's batch size is not 0
or the element corresponding to the input's channel is not 0.
ValueError: If `output_size` and filter_size are None at the same time.
ShapeError: If the input is not 4-D Tensor.
ShapeError: If the input's dimension size and filter's dimension size not equal.
ShapeError: If the dimension size of input minus the size of `stride` is not 2.
ShapeError: If the number of input channels is not equal to filter's channels.
ShapeError: If the size of `output_size` is not equal to that of `stride`.
Examples:
.. code-block:: python
import paddle
paddle.enable_static()
data = paddle.static.data(name='data', shape=[None, 3, 32, 32], dtype='float32')
conv2d_transpose = paddle.static.nn.conv2d_transpose(input=data, num_filters=2, filter_size=3)
print(conv2d_transpose.shape) # [-1, 2, 34, 34]
"""
    assert (
        param_attr is not False
    ), "param_attr should not be False in conv2d_transpose."

    if len(input.shape) != 4:
        raise ValueError(
            "Input size should be 4, "
            "but received {}".format(len(input.shape))
        )

    if data_format not in ['NCHW', 'NHWC']:
        raise ValueError(
            "Attr(data_format) of Op(paddle.static.nn.layers.conv2d_transpose) got wrong value: received "
            + data_format
            + " but only NCHW or NHWC supported."
        )

    input_channel = input.shape[1] if data_format == 'NCHW' else input.shape[-1]
    op_type = 'conv2d_transpose'
    if (
        input_channel == groups
        and num_filters == input_channel
        and not use_cudnn
    ):
        op_type = 'depthwise_conv2d_transpose'

    helper = LayerHelper(op_type, **locals())
    if not isinstance(input, Variable):
        raise TypeError("Input of conv2d_transpose must be Variable")

    stride = utils.convert_to_list(stride, 2, 'stride')
    dilation = utils.convert_to_list(dilation, 2, 'dilation')

    if not isinstance(use_cudnn, bool):
        raise ValueError("use_cudnn should be True or False")
    def _update_padding(padding, data_format):
        def is_list_or_tuple(ele):
            if isinstance(ele, list) or isinstance(ele, tuple):
                return True
            return False

        if is_list_or_tuple(padding) and len(padding) == 4:
            if is_list_or_tuple(padding[0]) and (data_format == "NCHW"):
                if not (padding[0] == [0, 0] and padding[1] == [0, 0]):
                    raise ValueError(
                        "Non-zero padding(%s) in the batch or channel dimensions "
                        "is not supported." % str(padding)
                    )
                padding = padding[2:4]
                padding = [ele for a_list in padding for ele in a_list]
            elif is_list_or_tuple(padding[0]) and (data_format == "NHWC"):
                if not (padding[0] == [0, 0] and padding[3] == [0, 0]):
                    raise ValueError(
                        "Non-zero padding(%s) in the batch or channel dimensions "
                        "is not supported." % str(padding)
                    )
                padding = padding[1:3]
                padding = [ele for a_list in padding for ele in a_list]
            padding = utils.convert_to_list(padding, 4, 'padding')
        else:
            padding = utils.convert_to_list(padding, 2, 'padding')
            padding = [padding[0], padding[0], padding[1], padding[1]]
        return padding
    padding_algorithm = "EXPLICIT"
    if isinstance(padding, str):
        padding = padding.upper()
        if padding not in ["SAME", "VALID"]:
            raise ValueError(
                "Unknown padding: '%s'. It can only be 'SAME' or 'VALID'."
                % str(padding)
            )
        if padding == "VALID":
            padding_algorithm = "VALID"
            padding = [0, 0, 0, 0]
        elif padding == "SAME":
            padding_algorithm = "SAME"
            padding = [0, 0, 0, 0]

    padding = _update_padding(padding, data_format)
    if output_size is None:
        output_size = []
    elif isinstance(output_size, (list, tuple)):
        if utils._contain_var(output_size):
            output_size = utils._convert_to_tensor_list(output_size)
        else:
            output_size = utils.convert_to_list(output_size, 2, 'output_size')
    elif isinstance(output_size, int):
        output_size = utils.convert_to_list(output_size, 2, 'output_size')
    elif isinstance(output_size, Variable):
        check_dtype(
            output_size.dtype,
            'output_size',
            ['int32', 'int64'],
            'conv2d_transpose',
        )
        if len(output_size.shape) == 1 and (
            output_size.shape[0] == 1 or output_size.shape[0] == 2
        ):
            if output_size.shape[0] == 1:
                output_size = [output_size, output_size]
        else:
            raise ValueError("output_size must contain one or two integers.")
    else:
        raise ValueError(
            "output_size should be int, list[int] or tuple[int] or Tensor"
        )
    if filter_size is None:
        # Note: the original comparison here was `output_size is []`, which is
        # always False because identity is compared against a fresh list
        # literal; an equality check is what was intended.
        if output_size == []:
            raise ValueError("output_size must be set when filter_size is None")
        if not _non_static_mode():
            if isinstance(output_size, Variable) or utils._contain_var(
                output_size
            ):
                raise ValueError(
                    "filter_size should not be None when output_size is Variable or contain Variable in static mode."
                )
        else:
            output_size = utils.convert_shape_to_list(output_size)
            if len(output_size) == 1:
                output_size = utils.convert_to_list(
                    output_size[0], 2, 'output_size'
                )

        h_in = input.shape[2] if data_format == 'NCHW' else input.shape[1]
        w_in = input.shape[3] if data_format == 'NCHW' else input.shape[2]

        filter_size_h = (
            output_size[0]
            - (h_in - 1) * stride[0]
            + padding[0]
            + padding[1]
            - 1
        ) // dilation[0] + 1
        filter_size_w = (
            output_size[1]
            - (w_in - 1) * stride[1]
            + padding[2]
            + padding[3]
            - 1
        ) // dilation[1] + 1
        filter_size = [filter_size_h, filter_size_w]
    else:
        filter_size = utils.convert_to_list(
            filter_size, 2, 'conv2d_transpose.filter_size'
        )
    if len(padding) == 4 and utils._is_symmetric_padding(padding, 2):
        padding = [padding[0], padding[2]]

    if groups is None:
        groups = 1
    elif groups <= 0:
        raise ValueError(
            "the groups of input must be greater than 0, "
            "but received the groups of input is {}".format(groups)
        )

    filter_shape = [input_channel, num_filters // groups] + filter_size

    img_filter = helper.create_parameter(
        dtype=input.dtype, shape=filter_shape, attr=helper.param_attr
    )

    pre_bias = helper.create_variable_for_type_inference(dtype=input.dtype)
    helper.append_op(
        type=op_type,
        inputs={'Input': [input], 'Filter': [img_filter]},
        outputs={'Output': pre_bias},
        attrs={
            'output_size': output_size,
            'strides': stride,
            'paddings': padding,
            'padding_algorithm': padding_algorithm,
            'dilations': dilation,
            'groups': groups,
            'use_cudnn': use_cudnn,
            'data_format': data_format,
        },
    )

    if data_format == 'NCHW':
        pre_act = helper.append_bias_op(pre_bias, dim_start=1, dim_end=2)
    else:
        pre_act = helper.append_bias_op(pre_bias, dim_start=3, dim_end=4)
    out = helper.append_activation(pre_act)
    return out
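When only `output_size` is given, conv2d_transpose infers the kernel size from the transposed-convolution shape relation. The arithmetic can be sketched as a plain-Python helper (the name `infer_transpose_filter_size` is hypothetical, not part of Paddle; it mirrors the `filter_size_h`/`filter_size_w` expressions in the body above under the assumption of explicit per-side paddings):

```python
def infer_transpose_filter_size(
    in_size, out_size, stride, pad_begin, pad_end, dilation
):
    # Inverts out = (in - 1) * stride - pad_begin - pad_end
    #               + dilation * (k - 1) + 1, solving for k.
    return (
        out_size - (in_size - 1) * stride + pad_begin + pad_end - 1
    ) // dilation + 1


# A 32x32 input with stride 1, zero padding, and dilation 1 needs a 3x3
# kernel to produce the 34x34 output shown in the docstring example.
print(infer_transpose_filter_size(32, 34, 1, 0, 0, 1))  # 3
```

With symmetric padding the two per-side terms collapse to `2 * padding`, matching the closed-form :math:`H^\prime_{out}` formula in the docstring.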
def conv3d_transpose(
    input,
    num_filters,
    output_size=None,
    filter_size=None,
    padding=0,
    stride=1,
    dilation=1,
    groups=None,
    param_attr=None,
    bias_attr=None,
    use_cudnn=True,
    act=None,
    name=None,
    data_format='NCDHW',
):
    r"""
:api_attr: Static Graph
The convolution3D transpose layer calculates the output based on the input,
filter, and dilations, strides, paddings. Input(Input) and output(Output)
are in NCDHW or NDHWC format. Where N is batch size, C is the number of channels,
D is the depth of the feature, H is the height of the feature, and W
is the width of the feature. Parameters(dilations, strides, paddings) are
three elements. These three elements represent depth, height and width, respectively.
For the details of the convolution transpose layer, please refer to the following
explanation and the references `therein <https://arxiv.org/pdf/1603.07285.pdf>`_.
If bias attribution and activation type are provided, bias is added to
the output of the convolution, and the corresponding activation function
is applied to the final result.
For each input :math:`X`, the equation is:
.. math::
Out = \sigma (W \ast X + b)
In the above equation:
* :math:`X`: Input value, a Tensor with NCDHW or NDHWC format.
* :math:`W`: Filter value, a Tensor with MCDHW format.
* :math:`\ast`: Convolution operation.
* :math:`b`: Bias value, a 2-D Tensor with shape [M, 1].
* :math:`\sigma`: Activation function.
* :math:`Out`: Output value, the shape of :math:`Out` and :math:`X` may be different.
Example:
- Input:
Input shape: :math:`(N, C_{in}, D_{in}, H_{in}, W_{in})`
Filter shape: :math:`(C_{in}, C_{out}, D_f, H_f, W_f)`
- Output:
Output shape: :math:`(N, C_{out}, D_{out}, H_{out}, W_{out})`
Where
.. math::
D^\prime_{out} &= (D_{in} - 1) * strides[0] - 2 * paddings[0] + dilations[0] * (D_f - 1) + 1 \\\\
H^\prime_{out} &= (H_{in} - 1) * strides[1] - 2 * paddings[1] + dilations[1] * (H_f - 1) + 1 \\\\
W^\prime_{out} &= (W_{in} - 1) * strides[2] - 2 * paddings[2] + dilations[2] * (W_f - 1) + 1 \\\\
D_{out} &\in [ D^\prime_{out}, D^\prime_{out} + strides[0] ] \\\\
H_{out} &\in [ H^\prime_{out}, H^\prime_{out} + strides[1] ] \\\\
W_{out} &\in [ W^\prime_{out}, W^\prime_{out} + strides[2] ]
Note:
The conv3d_transpose can be seen as the backward of the conv3d. For conv3d,
when stride > 1, conv3d maps multiple input shapes to the same output shape,
so for conv3d_transpose, when stride > 1, one input shape maps to multiple output shapes.
If output_size is None, :math:`D_{out} = D^\prime_{out}, H_{out} = H^\prime_{out}, W_{out} = W^\prime_{out}`; else, the :math:`D_{out}` of the output
size must be between :math:`D^\prime_{out}` and :math:`D^\prime_{out} + strides[0]`,
the :math:`H_{out}` of the output size must be between :math:`H^\prime_{out}`
and :math:`H^\prime_{out} + strides[1]`, and the :math:`W_{out}` of the output size must be
between :math:`W^\prime_{out}` and :math:`W^\prime_{out} + strides[2]`;
conv3d_transpose can compute the kernel size automatically.
Args:
input(Tensor): The input is 5-D Tensor with shape [N, C, D, H, W] or [N, D, H, W, C], the data type
of input is float32 or float64.
num_filters(int): The number of the filter. It is as same as the output
image channel.
output_size(int|tuple, optional): The output image size. If output size is a
tuple, it must contain three integers, (image_depth, image_height, image_width). This
parameter only works when filter_size is None. If output_size and filter_size are
specified at the same time, They should follow the formula above. Default: None.
Output_size and filter_size should not be None at the same time.
filter_size(int|tuple, optional): The filter size. If filter_size is a tuple,
it must contain three integers, (filter_size_depth, filter_size_height,
filter_size_width). Otherwise, filter_size_depth = filter_size_height = \
filter_size_width = filter_size. None if use output size to
calculate filter_size. Default: None. filter_size and output_size should not be
None at the same time.
padding(int|list|str|tuple, optional): The padding size. The padding argument effectively
adds `dilation * (kernel - 1)` amount of zero-padding on both sides of input. If `padding` is a string,
either 'VALID' or 'SAME' supported, which is the padding algorithm. If `padding`
is a tuple or list, it could be in three forms: `[pad_depth, pad_height, pad_width]` or
`[pad_depth_front, pad_depth_back, pad_height_top, pad_height_bottom, pad_width_left, pad_width_right]`,
and when `data_format` is `'NCDHW'`, `padding` can be in the form
`[[0,0], [0,0], [pad_depth_front, pad_depth_back], [pad_height_top, pad_height_bottom], [pad_width_left, pad_width_right]]`.
when `data_format` is `'NDHWC'`, `padding` can be in the form
`[[0,0], [pad_depth_front, pad_depth_back], [pad_height_top, pad_height_bottom], [pad_width_left, pad_width_right], [0,0]]`.
Default: padding = 0.
stride(int|tuple, optional): The stride size. It means the stride in transposed convolution.
If stride is a tuple, it must contain three integers, (stride_depth, stride_height,
stride_width). Otherwise, stride_depth = stride_height = stride_width = stride.
Default: stride = 1.
dilation(int|tuple, optional): The dilation size. It means the spacing between the kernel points.
If dilation is a tuple, it must contain three integers, (dilation_depth, dilation_height,
dilation_width). Otherwise, dilation_depth = dilation_height = dilation_width = dilation.
Default: dilation = 1.
groups(int, optional): The groups number of the Conv3d transpose layer. Inspired by
grouped convolution in Alex Krizhevsky's Deep CNN paper, in which
when group=2, the first half of the filters is only connected to the
first half of the input channels, while the second half of the
filters is only connected to the second half of the input channels.
Default: groups=1
param_attr (ParamAttr, optional): The parameter attribute for learnable parameters/weights
of conv3d_transpose. If it is set to None or one attribute of ParamAttr, conv3d_transpose
will create ParamAttr as param_attr. If the Initializer of the param_attr
is not set, the parameter is initialized with Xavier. Default: None.
bias_attr (ParamAttr|bool, optional): The parameter attribute for the bias of conv3d_transpose.
If it is set to False, no bias will be added to the output units.
If it is set to None or one attribute of ParamAttr, conv3d_transpose
will create ParamAttr as bias_attr. If the Initializer of the bias_attr
is not set, the bias is initialized zero. Default: None.
use_cudnn(bool, optional): Use cudnn kernel or not, it is valid only when the cudnn
library is installed. Default: True
act (str, optional): Activation type, if it is set to None, activation is not appended.
Default: None.
name(str, optional): For detailed information, please refer
to :ref:`api_guide_Name`. Usually name is no need to set and
None by default.
data_format (str, optional): Specify the data format of the input, and the data format of the output
will be consistent with that of the input. An optional string from: `"NCDHW"`, `"NDHWC"`.
The default is `"NCDHW"`. When it is `"NCDHW"`, the data is stored in the order of:
`[batch_size, input_channels, input_depth, input_height, input_width]`.
Returns:
A Variable holding Tensor representing the conv3d_transpose, whose data
type is the same with input and shape is (num_batches, channels, out_d, out_h,
out_w) or (num_batches, out_d, out_h, out_w, channels). If act is None, the tensor
variable storing the transposed convolution result, and if act is not None, the tensor
variable storing transposed convolution and non-linearity activation result.
Raises:
ValueError: If the type of `use_cudnn` is not bool.
ValueError: If `data_format` is not "NCDHW" or "NDHWC".
ValueError: If `padding` is a string, but not "SAME" or "VALID".
ValueError: If `padding` is a tuple, but the element corresponding to the input's batch size is not 0
or the element corresponding to the input's channel is not 0.
ValueError: If `output_size` and filter_size are None at the same time.
ShapeError: If the input is not 5-D Tensor.
ShapeError: If the input's dimension size and filter's dimension size not equal.
ShapeError: If the dimension size of input minus the size of `stride` is not 2.
ShapeError: If the number of input channels is not equal to filter's channels.
ShapeError: If the size of `output_size` is not equal to that of `stride`.
Examples:
.. code-block:: python
import paddle
import numpy as np
paddle.enable_static()
data = paddle.static.data(name='data', shape=[None, 3, 12, 32, 32], dtype='float32')
param_attr = paddle.framework.ParamAttr(name='conv3d.weight', initializer=paddle.nn.initializer.XavierNormal(), learning_rate=0.001)
res = paddle.static.nn.conv3d_transpose(input=data, num_filters=2, filter_size=3, act="relu", param_attr=param_attr)
place = paddle.CPUPlace()
exe = paddle.static.Executor(place)
exe.run(paddle.static.default_startup_program())
x = np.random.rand(1, 3, 12, 32, 32).astype("float32")
output = exe.run(feed={"data": x}, fetch_list=[res])
print(output)
"""
    assert (
        param_attr is not False
    ), "param_attr should not be False in conv3d_transpose."
    if data_format not in ['NCDHW', 'NDHWC']:
        raise ValueError(
            "Param(data_format) of Op(paddle.static.nn.conv3d_transpose) got wrong value: received "
            + data_format
            + " but only NCDHW or NDHWC supported."
        )

    l_type = "conv3d_transpose"
    helper = LayerHelper(l_type, **locals())
    if not isinstance(input, Variable):
        raise TypeError("Input of conv3d_transpose must be Variable")
    if len(input.shape) != 5:
        raise ValueError(
            "Input should be 5D tensor, but received input with the shape of {}".format(
                input.shape
            )
        )
    input_channel = (
        input.shape[1] if data_format == 'NCDHW' else input.shape[-1]
    )

    stride = utils.convert_to_list(stride, 3, 'stride')
    dilation = utils.convert_to_list(dilation, 3, 'dilation')

    if not isinstance(use_cudnn, bool):
        raise ValueError("use_cudnn should be True or False")
    def _update_padding(padding, data_format):
        def is_list_or_tuple(ele):
            if isinstance(ele, list) or isinstance(ele, tuple):
                return True
            return False

        if is_list_or_tuple(padding) and len(padding) == 5:
            if is_list_or_tuple(padding[0]) and (data_format == "NCDHW"):
                if not (padding[0] == [0, 0] and padding[1] == [0, 0]):
                    raise ValueError(
                        "Non-zero padding(%s) in the batch or channel dimensions "
                        "is not supported." % str(padding)
                    )
                padding = padding[2:5]
                padding = [ele for a_list in padding for ele in a_list]
            elif is_list_or_tuple(padding[0]) and (data_format == "NDHWC"):
                if not (padding[0] == [0, 0] and padding[4] == [0, 0]):
                    raise ValueError(
                        "Non-zero padding(%s) in the batch or channel dimensions "
                        "is not supported." % str(padding)
                    )
                padding = padding[1:4]
                padding = [ele for a_list in padding for ele in a_list]
            padding = utils.convert_to_list(padding, 6, 'padding')
        elif is_list_or_tuple(padding) and len(padding) == 6:
            padding = utils.convert_to_list(padding, 6, 'padding')
        else:
            padding = utils.convert_to_list(padding, 3, 'padding')
            padding = [
                padding[0],
                padding[0],
                padding[1],
                padding[1],
                padding[2],
                padding[2],
            ]
        return padding
    padding_algorithm = "EXPLICIT"
    if isinstance(padding, str):
        padding = padding.upper()
        if padding not in ["SAME", "VALID"]:
            raise ValueError(
                "Unknown padding: '%s'. It can only be 'SAME' or 'VALID'."
                % str(padding)
            )
        if padding == "VALID":
            padding_algorithm = "VALID"
            padding = [0, 0, 0, 0, 0, 0]
        elif padding == "SAME":
            padding_algorithm = "SAME"
            padding = [0, 0, 0, 0, 0, 0]

    padding = _update_padding(padding, data_format)
    if filter_size is None:
        if output_size is None:
            raise ValueError("output_size must be set when filter_size is None")
        if isinstance(output_size, int):
            output_size = [output_size, output_size, output_size]

        d_in = input.shape[2] if data_format == 'NCDHW' else input.shape[1]
        h_in = input.shape[3] if data_format == 'NCDHW' else input.shape[2]
        w_in = input.shape[4] if data_format == 'NCDHW' else input.shape[3]

        filter_size_d = (
            output_size[0]
            - (d_in - 1) * stride[0]
            + padding[0]
            + padding[1]
            - 1
        ) // dilation[0] + 1
        filter_size_h = (
            output_size[1]
            - (h_in - 1) * stride[1]
            + padding[2]
            + padding[3]
            - 1
        ) // dilation[1] + 1
        filter_size_w = (
            output_size[2]
            - (w_in - 1) * stride[2]
            + padding[4]
            + padding[5]
            - 1
        ) // dilation[2] + 1
        filter_size = [filter_size_d, filter_size_h, filter_size_w]
    else:
        filter_size = utils.convert_to_list(
            filter_size, 3, 'conv3d_transpose.filter_size'
        )
    if len(padding) == 6 and utils._is_symmetric_padding(padding, 3):
        padding = [padding[0], padding[2], padding[4]]

    if output_size is None:
        output_size = []
    elif isinstance(output_size, (list, tuple, int)):
        output_size = utils.convert_to_list(output_size, 3, 'output_size')
    else:
        raise ValueError("output_size should be int, list[int] or tuple[int]")

    groups = 1 if groups is None else groups
    if groups <= 0:
        raise ValueError(
            "the groups of conv3d_transpose should be greater than 0. Received groups: {}".format(
                groups
            )
        )
    if num_filters % groups != 0:
        raise ValueError(
            "Attr(num_filters) must be divisible by groups,"
            "Received: Attr(num_filters) is {}, the groups is {}".format(
                num_filters, groups
            )
        )

    filter_shape = [input_channel, num_filters // groups] + filter_size
    img_filter = helper.create_parameter(
        dtype=input.dtype, shape=filter_shape, attr=helper.param_attr
    )

    if data_format == 'NCDHW':
        data_format = 'NCHW'
    if data_format == 'NDHWC':
        data_format = 'NHWC'

    pre_bias = helper.create_variable_for_type_inference(dtype=input.dtype)
    helper.append_op(
        type=l_type,
        inputs={'Input': [input], 'Filter': [img_filter]},
        outputs={'Output': pre_bias},
        attrs={
            'output_size': output_size,
            'strides': stride,
            'paddings': padding,
            'padding_algorithm': padding_algorithm,
            'dilations': dilation,
            'groups': groups,
            'use_cudnn': use_cudnn,
            'data_format': data_format,
        },
    )

    if data_format == 'NCHW':
        pre_act = helper.append_bias_op(pre_bias, dim_start=1, dim_end=2)
    else:
        pre_act = helper.append_bias_op(pre_bias, dim_start=4, dim_end=5)
    out = helper.append_activation(pre_act)
    return out
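The docstring's shape relation for conv3d_transpose gives a minimal output extent per spatial dimension; any `output_size` within one stride of it is valid. A plain-Python sketch of that formula (the helper name `transpose_out_extent` is hypothetical, not part of Paddle; it generalizes the docstring's symmetric `2 * paddings[i]` term to explicit per-side paddings):

```python
def transpose_out_extent(in_size, stride, pad_begin, pad_end, dilation, k):
    # D'_out = (D_in - 1) * stride - pad_begin - pad_end
    #          + dilation * (D_f - 1) + 1, per the docstring formula.
    return (in_size - 1) * stride - pad_begin - pad_end + dilation * (k - 1) + 1


# Depth 12, stride 1, no padding, kernel 3 -> minimal output depth 14;
# with stride s, any output depth in [14, 14 + s) would be accepted.
print(transpose_out_extent(12, 1, 0, 0, 1, 3))  # 14
```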
def deformable_conv(
    input,
    offset,
    ...