Unverified commit 2ee7a6b0, authored by liuyuhui, committed by GitHub

[paddle v2.0.0rc1: API fixs] assign/conv2d/conv2d_transpose/cast/ParamAttr (#29171)

* fix DLTP-15151, paddle.ParamAttr API

* fix DLTP-15083/DLTP-15274, paddle.nn.functional.assign paddle.cast API

* fix DLTP-15431/DLTP-15432, paddle.static.nn.conv2d paddle.static.nn.conv2d_transpose API

* fix DLTP-15083, paddle.nn.functional.assign API

* fix DLTP-15431/DLTP-15432, paddle.static.nn.conv2d paddle.static.nn.conv2d_transpose API

* support in_dygraph_mode for cast op, test=develop

* fix bug,test=develop

* fix doc

* fix DLTP-15431/DLTP-15432, paddle.static.nn.conv2d paddle.static.nn.conv2d_transpose API
Parent 6cb68886
...
@@ -1403,7 +1403,7 @@ def conv2d(input,
         W_{out}&= \\frac{(W_{in} + 2 * paddings[1] - (dilations[1] * (W_f - 1) + 1))}{strides[1]} + 1
     Args:
-        input (Variable): The input is 4-D Tensor with shape [N, C, H, W], the data type
+        input (Tensor): The input is 4-D Tensor with shape [N, C, H, W], the data type
             of input is float16 or float32 or float64.
         num_filters(int): The number of filter. It is as same as the output
             image channel.
...
@@ -1456,9 +1456,9 @@ def conv2d(input,
             `[batch_size, input_channels, input_height, input_width]`.
     Returns:
-        A Variable holding Tensor representing the conv2d, whose data type is the
-        same with input. If act is None, the tensor variable storing the convolution
-        result, and if act is not None, the tensor variable storing convolution
+        A Tensor representing the conv2d, whose data type is the
+        same with input. If act is None, the tensor storing the convolution
+        result, and if act is not None, the tensor storing convolution
         and non-linearity activation result.
     Raises:
...
@@ -1477,12 +1477,12 @@ def conv2d(input,
     Examples:
         .. code-block:: python
-            import paddle.fluid as fluid
             import paddle
             paddle.enable_static()
-            data = fluid.data(name='data', shape=[None, 3, 32, 32], dtype='float32')
-            conv2d = fluid.layers.conv2d(input=data, num_filters=2, filter_size=3, act="relu")
+            data = paddle.static.data(name='data', shape=[None, 3, 32, 32], dtype='float32')
+            conv2d = paddle.static.nn.conv2d(input=data, num_filters=2, filter_size=3, act="relu")
+            print(conv2d.shape) # [-1, 2, 30, 30]
     """
     check_variable_and_dtype(input, 'input', ['float16', 'float32', 'float64'],
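The shape printed in the updated example follows the conv2d output-size formula quoted earlier in this docstring. A minimal plain-Python check (the helper name is mine, not a Paddle API; floor division matches the default behavior for integer sizes):

```python
def conv2d_out_dim(in_dim, kernel, padding=0, stride=1, dilation=1):
    """Per-dimension conv2d output size, following the docstring formula:
    (in + 2*padding - (dilation*(kernel - 1) + 1)) // stride + 1
    """
    return (in_dim + 2 * padding - (dilation * (kernel - 1) + 1)) // stride + 1

# 32x32 input, 3x3 filter, defaults padding=0/stride=1/dilation=1
# -> 30x30 spatial output, matching the `[-1, 2, 30, 30]` in the example.
print(conv2d_out_dim(32, 3))  # 30
```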
...
@@ -3806,7 +3806,7 @@ def conv2d_transpose(input,
         conv2d_transpose can compute the kernel size automatically.
     Args:
-        input(Variable): 4-D Tensor with [N, C, H, W] or [N, H, W, C] format,
+        input(Tensor): 4-D Tensor with [N, C, H, W] or [N, H, W, C] format,
             its data type is float32 or float64.
         num_filters(int): The number of the filter. It is as same as the output
             image channel.
...
@@ -3824,15 +3824,14 @@ def conv2d_transpose(input,
         stride(int|tuple, optional): The stride size. It means the stride in transposed convolution.
             If stride is a tuple, it must contain two integers, (stride_height, stride_width).
             Otherwise, stride_height = stride_width = stride. Default: stride = 1.
-        padding(int|list|str|tuple, optional): The padding size. The padding argument effectively adds
-            `dilation * (kernel - 1)` amount of zero-padding on both sides of input. If `padding` is a
-            string, either 'VALID' or 'SAME' supported, which is the padding algorithm.
-            If `padding` is a tuple or list, it could be in three forms:
-            `[pad_height, pad_width]` or
-            `[pad_height_top, pad_height_bottom, pad_width_left, pad_width_right]`, and
-            when `data_format` is `'NCHW'`,
-            `padding` can be in the form `[[0,0], [0,0], [pad_height_top, pad_height_bottom], [pad_width_left, pad_width_right]]`.
-            when `data_format` is `'NHWC'`, `padding` can be in the form
+        padding(str|int|list|tuple, optional): The padding size. It means the number of zero-paddings
+            on both sides for each dimension. If `padding` is a string, either 'VALID' or
+            'SAME' which is the padding algorithm. If `padding` is a tuple or list,
+            it could be in three forms: `[pad_height, pad_width]` or
+            `[pad_height_top, pad_height_bottom, pad_width_left, pad_width_right]`,
+            and when `data_format` is `"NCHW"`, `padding` can be in the form
+            `[[0,0], [0,0], [pad_height_top, pad_height_bottom], [pad_width_left, pad_width_right]]`.
+            when `data_format` is `"NHWC"`, `padding` can be in the form
             `[[0,0], [pad_height_top, pad_height_bottom], [pad_width_left, pad_width_right], [0,0]]`.
             Default: padding = 0.
         dilation(int|tuple, optional): The dilation size. It means the spacing between the kernel points.
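The three list/tuple padding forms described above all reduce to one per-side list. A sketch of that normalization (the helper is hypothetical, written only to illustrate the docstring's forms, not Paddle's internal code):

```python
def normalize_padding(padding, data_format="NCHW"):
    """Expand the docstring's list/tuple padding forms into
    [pad_top, pad_bottom, pad_left, pad_right]."""
    p = list(padding)
    if len(p) == 2 and not isinstance(p[0], (list, tuple)):
        # [pad_height, pad_width]: same pad on both sides of each dim
        return [p[0], p[0], p[1], p[1]]
    if len(p) == 4 and not isinstance(p[0], (list, tuple)):
        # already [top, bottom, left, right]
        return p
    # nested per-axis form: pick the two spatial axes by layout
    h, w = (p[2], p[3]) if data_format == "NCHW" else (p[1], p[2])
    return list(h) + list(w)

print(normalize_padding([1, 2]))  # [1, 1, 2, 2]
```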
...
@@ -3870,11 +3869,11 @@ def conv2d_transpose(input,
             `[batch_size, input_channels, input_height, input_width]`.
     Returns:
-        A Variable holding Tensor representing the conv2d_transpose, whose
+        A Tensor representing the conv2d_transpose, whose
         data type is the same with input and shape is (num_batches, channels, out_h,
-        out_w) or (num_batches, out_h, out_w, channels). If act is None, the tensor variable
+        out_w) or (num_batches, out_h, out_w, channels). If act is None, the tensor
         storing the transposed convolution result, and if act is not None, the
-        tensor variable storing transposed convolution and non-linearity activation
+        tensor storing transposed convolution and non-linearity activation
         result.
     Raises:
...
@@ -3893,11 +3892,12 @@ def conv2d_transpose(input,
     Examples:
         .. code-block:: python
-            import paddle.fluid as fluid
             import paddle
             paddle.enable_static()
-            data = fluid.data(name='data', shape=[None, 3, 32, 32], dtype='float32')
-            conv2d_transpose = fluid.layers.conv2d_transpose(input=data, num_filters=2, filter_size=3)
+            data = paddle.static.data(name='data', shape=[None, 3, 32, 32], dtype='float32')
+            conv2d_transpose = paddle.static.nn.conv2d_transpose(input=data, num_filters=2, filter_size=3)
+            print(conv2d_transpose.shape) # [-1, 2, 34, 34]
     """
     assert param_attr is not False, "param_attr should not be False in conv2d_transpose."
     if data_format not in ['NCHW', 'NHWC']:
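The `[-1, 2, 34, 34]` shape in the updated example is consistent with the standard minimum-output-size formula for a transposed convolution. A plain-Python sketch (helper name is mine; this is the lower bound of the output range, which is what the defaults produce here):

```python
def conv2d_transpose_out_dim(in_dim, kernel, padding=0, stride=1, dilation=1):
    """Minimum per-dimension output size of a transposed convolution:
    (in - 1)*stride - 2*padding + dilation*(kernel - 1) + 1
    """
    return (in_dim - 1) * stride - 2 * padding + dilation * (kernel - 1) + 1

# 32x32 input, 3x3 filter, defaults -> 34x34, matching the example.
print(conv2d_transpose_out_dim(32, 3))  # 34
```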
...
...
@@ -203,7 +203,7 @@ def create_global_var(shape,
 def cast(x, dtype):
     """
-    This OP takes in the Variable :attr:`x` with :attr:`x.dtype` and casts it
+    This OP takes in the Tensor :attr:`x` with :attr:`x.dtype` and casts it
     to the output with :attr:`dtype`. It's meaningless if the output dtype
     equals the input dtype, but it's fine if you do so.
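The cast semantics described here (new output tensor with the requested dtype, same values) can be illustrated with NumPy, which follows the same truncate-toward-zero convention for float-to-int casts. This is an analogy only, not Paddle's implementation:

```python
import numpy as np

# Analogous dtype cast with NumPy: a new array of the target dtype
# is produced; float -> int truncates toward zero.
x = np.array([1.7, -2.3], dtype=np.float32)
y = x.astype(np.int32)
print(y)  # [ 1 -2]
```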
...
@@ -545,20 +545,20 @@ def assign(input, output=None):
     The OP copies the :attr:`input` to the :attr:`output`.
     Parameters:
-        input (Variable|numpy.ndarray): A tensor or numpy ndarray, its data type supports
+        input (Tensor|numpy.ndarray): A tensor or numpy ndarray, its data type supports
             float16, float32, float64, int32 and int64.
-        output (Variable, optional): A tensor. If :attr:`output` is None, a new tensor will
+        output (Tensor, optional): A tensor. If :attr:`output` is None, a new tensor will
             be created as :attr:`output`. Default: None.
     Returns:
-        Variable: A tensor with the same shape, data type and value as :attr:`input`.
+        Tensor: A tensor with the same shape, data type and value as :attr:`input`.
     Examples:
         .. code-block:: python
             import paddle
             import numpy as np
-            data = paddle.fill_constant(shape=[3, 2], value=2.5, dtype='float64') # [[2.5, 2.5], [2.5, 2.5], [2.5, 2.5]]
+            data = paddle.full(shape=[3, 2], fill_value=2.5, dtype='float64') # [[2.5, 2.5], [2.5, 2.5], [2.5, 2.5]]
             array = np.array([[1, 1],
                               [3, 4],
                               [1, 3]]).astype(np.int64)
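The contract documented above, a copy with identical shape, dtype, and values, written either into a fresh tensor or into a provided `output`, can be sketched with NumPy. The helper is hypothetical, for illustration of the contract only:

```python
import numpy as np

def assign_like(src, out=None):
    """Copy `src`, preserving shape, dtype and values; write into
    `out` when given, otherwise allocate a new array."""
    if out is None:
        return np.array(src, copy=True)
    out[...] = src
    return out

a = np.full((3, 2), 2.5, dtype=np.float64)
b = assign_like(a)          # fresh copy
print(b.shape, b.dtype)     # (3, 2) float64
```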
...
...
@@ -37,8 +37,8 @@ class ParamAttr(object):
     Note:
         ``gradient_clip`` of ``ParamAttr`` HAS BEEN DEPRECATED since 2.0.
         Please use ``need_clip`` in ``ParamAttr`` to specify the clip scope.
-        There are three clipping strategies: :ref:`api_paddle_nn_GradientClipByGlobalNorm` ,
-        :ref:`api_fluid_clip_GradientClipByNorm` , :ref:`api_fluid_clip_GradientClipByValue` .
+        There are three clipping strategies: :ref:`api_paddle_nn_ClipGradByGlobalNorm` ,
+        :ref:`api_paddle_nn_ClipGradByNorm` , :ref:`api_paddle_nn_ClipGradByValue` .
     Parameters:
         name (str, optional): The parameter's name. Default None, meaning that the name
...
@@ -50,8 +50,8 @@ class ParamAttr(object):
             optimize is the global learning rates times the parameter's learning rate times
             the factor of learning rate scheduler. Default 1.0.
         regularizer (WeightDecayRegularizer, optional): Regularization strategy. There are two method:
-            :ref:`api_fluid_regularizer_L1Decay` , :ref:`api_fluid_regularizer_L2Decay` . If
-            regularizer is also set in ``optimizer`` (such as :ref:`api_fluid_optimizer_SGDOptimizer` ),
+            :ref:`api_paddle_regularizer_L1Decay` , :ref:`api_paddle_regularizer_L2Decay` . If
+            regularizer is also set in ``optimizer`` (such as :ref:`api_paddle_optimizer_SGD` ),
             that regularizer setting in optimizer will be ignored. Default None, meaning there is
             no regularization.
         trainable (bool): Whether this parameter is trainable. Default True.
...
@@ -63,7 +63,6 @@ class ParamAttr(object):
         .. code-block:: python
             import paddle
-            paddle.enable_static()
             weight_attr = paddle.ParamAttr(name="weight",
                                            learning_rate=0.5,
...