Unverified commit 0f53f3d9, authored by liuyuhui, committed by GitHub

[paddle v2.0.0rc1: API fixes] assign/conv2d/conv2d_transpose/cast/ParamAttr (#29397)

* fix bug,test=develop

* fix DLTP-15151, paddle.ParamAttr API

* fix DLTP-15083/DLTP-15274, paddle.nn.functional.assign paddle.cast API

* fix DLTP-15431/DLTP-15432, paddle.static.nn.conv2d paddle.static.nn.conv2d_transpose API

* fix DLTP-15083, paddle.nn.functional.assign API

* fix DLTP-15431/DLTP-15432, paddle.static.nn.conv2d paddle.static.nn.conv2d_transpose API

* support in_dygraph_mode for cast op, test=develop

* fix bug,test=develop

* fix doc

* fix DLTP-15431/DLTP-15432, paddle.static.nn.conv2d paddle.static.nn.conv2d_transpose API, test=document_fix
Parent de3c067a
@@ -1403,7 +1403,7 @@ def conv2d(input,
W_{out}&= \\frac{(W_{in} + 2 * paddings[1] - (dilations[1] * (W_f - 1) + 1))}{strides[1]} + 1
Args:
- input (Variable): The input is 4-D Tensor with shape [N, C, H, W], the data type
+ input (Tensor): The input is 4-D Tensor with shape [N, C, H, W], the data type
of input is float16 or float32 or float64.
num_filters(int): The number of filters. It is the same as the number of
output image channels.
@@ -1456,9 +1456,9 @@ def conv2d(input,
`[batch_size, input_channels, input_height, input_width]`.
Returns:
- A Variable holding Tensor representing the conv2d, whose data type is the
- same with input. If act is None, the tensor variable storing the convolution
- result, and if act is not None, the tensor variable storing convolution
+ A Tensor representing the conv2d, whose data type is the
+ same with input. If act is None, the tensor storing the convolution
+ result, and if act is not None, the tensor storing convolution
and non-linearity activation result.
Raises:
@@ -1477,12 +1477,12 @@ def conv2d(input,
Examples:
.. code-block:: python
- import paddle.fluid as fluid
+ import paddle
+ paddle.enable_static()
- data = fluid.data(name='data', shape=[None, 3, 32, 32], dtype='float32')
- conv2d = fluid.layers.conv2d(input=data, num_filters=2, filter_size=3, act="relu")
+ data = paddle.static.data(name='data', shape=[None, 3, 32, 32], dtype='float32')
+ conv2d = paddle.static.nn.conv2d(input=data, num_filters=2, filter_size=3, act="relu")
print(conv2d.shape) # [-1, 2, 30, 30]
"""
check_variable_and_dtype(input, 'input', ['float16', 'float32', 'float64'],
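As a sanity check on the updated example, the program runs end to end under the 2.0 static-graph API. A minimal sketch, assuming a CPU place and a random feed (neither is part of the patch):

    import numpy as np
    import paddle

    paddle.enable_static()

    # 4-D NCHW input; the batch dimension is left dynamic (None -> -1).
    data = paddle.static.data(name='data', shape=[None, 3, 32, 32], dtype='float32')
    conv2d = paddle.static.nn.conv2d(input=data, num_filters=2, filter_size=3, act="relu")

    exe = paddle.static.Executor(paddle.CPUPlace())
    exe.run(paddle.static.default_startup_program())
    out, = exe.run(feed={'data': np.random.rand(4, 3, 32, 32).astype('float32')},
                   fetch_list=[conv2d])
    print(out.shape)  # (4, 2, 30, 30): (32 - 3) / 1 + 1 = 30 with stride 1, no padding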
@@ -3805,7 +3805,7 @@ def conv2d_transpose(input,
conv2d_transpose can compute the kernel size automatically.
Args:
- input(Variable): 4-D Tensor with [N, C, H, W] or [N, H, W, C] format,
+ input(Tensor): 4-D Tensor with [N, C, H, W] or [N, H, W, C] format,
its data type is float32 or float64.
num_filters(int): The number of filters. It is the same as the number of
output image channels.
@@ -3823,15 +3823,14 @@ def conv2d_transpose(input,
stride(int|tuple, optional): The stride size. It means the stride in transposed convolution.
If stride is a tuple, it must contain two integers, (stride_height, stride_width).
Otherwise, stride_height = stride_width = stride. Default: stride = 1.
- padding(int|list|str|tuple, optional): The padding size. The padding argument effectively adds
- `dilation * (kernel - 1)` amount of zero-padding on both sides of input. If `padding` is a
- string, either 'VALID' or 'SAME' supported, which is the padding algorithm.
- If `padding` is a tuple or list, it could be in three forms:
- `[pad_height, pad_width]` or
- `[pad_height_top, pad_height_bottom, pad_width_left, pad_width_right]`, and
- when `data_format` is `'NCHW'`,
- `padding` can be in the form `[[0,0], [0,0], [pad_height_top, pad_height_bottom], [pad_width_left, pad_width_right]]`.
- when `data_format` is `'NHWC'`, `padding` can be in the form
+ padding(str|int|list|tuple, optional): The padding size. It means the number of zero-paddings
+ on both sides for each dimension. If `padding` is a string, either 'VALID' or
+ 'SAME' which is the padding algorithm. If `padding` is a tuple or list,
+ it could be in three forms: `[pad_height, pad_width]` or
+ `[pad_height_top, pad_height_bottom, pad_width_left, pad_width_right]`,
+ and when `data_format` is `"NCHW"`, `padding` can be in the form
+ `[[0,0], [0,0], [pad_height_top, pad_height_bottom], [pad_width_left, pad_width_right]]`.
+ when `data_format` is `"NHWC"`, `padding` can be in the form
`[[0,0], [pad_height_top, pad_height_bottom], [pad_width_left, pad_width_right], [0,0]]`.
Default: padding = 0.
dilation(int|tuple, optional): The dilation size. It means the spacing between the kernel points.
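The rewritten padding description allows several equivalent spellings. A short sketch of the accepted forms for an NCHW input (tensor and layer names are illustrative):

    import paddle

    paddle.enable_static()
    x = paddle.static.data(name='x', shape=[None, 3, 32, 32], dtype='float32')

    # Four equivalent ways to request one zero-pad on each side of H and W.
    y1 = paddle.static.nn.conv2d_transpose(input=x, num_filters=2, filter_size=3, padding=1)
    y2 = paddle.static.nn.conv2d_transpose(input=x, num_filters=2, filter_size=3, padding=[1, 1])
    y3 = paddle.static.nn.conv2d_transpose(input=x, num_filters=2, filter_size=3, padding=[1, 1, 1, 1])
    y4 = paddle.static.nn.conv2d_transpose(input=x, num_filters=2, filter_size=3,
                                           padding=[[0, 0], [0, 0], [1, 1], [1, 1]])
    # (32 - 1) * 1 - 2 * 1 + (3 - 1) + 1 = 32, so each output is [-1, 2, 32, 32].
    print(y1.shape, y2.shape, y3.shape, y4.shape)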
@@ -3869,11 +3868,11 @@ def conv2d_transpose(input,
`[batch_size, input_channels, input_height, input_width]`.
Returns:
- A Variable holding Tensor representing the conv2d_transpose, whose
+ A Tensor representing the conv2d_transpose, whose
data type is the same with input and shape is (num_batches, channels, out_h,
- out_w) or (num_batches, out_h, out_w, channels). If act is None, the tensor variable
+ out_w) or (num_batches, out_h, out_w, channels). If act is None, the tensor
storing the transposed convolution result, and if act is not None, the
- tensor variable storing transposed convolution and non-linearity activation
+ tensor storing transposed convolution and non-linearity activation
result.
Raises:
@@ -3892,11 +3891,12 @@ def conv2d_transpose(input,
Examples:
.. code-block:: python
- import paddle.fluid as fluid
+ import paddle
+ paddle.enable_static()
- data = fluid.data(name='data', shape=[None, 3, 32, 32], dtype='float32')
- conv2d_transpose = fluid.layers.conv2d_transpose(input=data, num_filters=2, filter_size=3)
+ data = paddle.static.data(name='data', shape=[None, 3, 32, 32], dtype='float32')
+ conv2d_transpose = paddle.static.nn.conv2d_transpose(input=data, num_filters=2, filter_size=3)
print(conv2d_transpose.shape) # [-1, 2, 34, 34]
"""
assert param_attr is not False, "param_attr should not be False in conv2d_transpose."
if data_format not in ['NCHW', 'NHWC']:
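The documented output shape can be verified by executing the updated example. A minimal sketch, with executor setup and a zero-valued feed added for illustration:

    import numpy as np
    import paddle

    paddle.enable_static()

    data = paddle.static.data(name='data', shape=[None, 3, 32, 32], dtype='float32')
    out = paddle.static.nn.conv2d_transpose(input=data, num_filters=2, filter_size=3)

    exe = paddle.static.Executor(paddle.CPUPlace())
    exe.run(paddle.static.default_startup_program())
    result, = exe.run(feed={'data': np.zeros((1, 3, 32, 32), dtype='float32')},
                      fetch_list=[out])
    print(result.shape)  # (1, 2, 34, 34): (32 - 1) * 1 - 0 + (3 - 1) + 1 = 34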
......
@@ -203,7 +203,7 @@ def create_global_var(shape,
def cast(x, dtype):
"""
- This OP takes in the Variable :attr:`x` with :attr:`x.dtype` and casts it
+ This OP takes in the Tensor :attr:`x` with :attr:`x.dtype` and casts it
to the output with :attr:`dtype`. It's meaningless if the output dtype
equals the input dtype, but it's fine if you do so.
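Since the commit also routes cast through in_dygraph_mode, the op can be called imperatively. A minimal dygraph sketch (values are illustrative; the float-to-int conversion is assumed to truncate toward zero):

    import paddle

    # Dygraph is the default mode in paddle 2.0, so no enable_static() is needed.
    x = paddle.to_tensor([1.2, -3.7], dtype='float32')
    y = paddle.cast(x, 'int32')
    print(y.numpy())  # [ 1 -3]
    # Casting to the tensor's own dtype is meaningless but explicitly allowed.
    z = paddle.cast(y, 'int32')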
@@ -539,20 +539,20 @@ def assign(input, output=None):
The OP copies the :attr:`input` to the :attr:`output`.
Parameters:
- input (Variable|numpy.ndarray): A tensor or numpy ndarray, its data type supports
+ input (Tensor|numpy.ndarray): A tensor or numpy ndarray, its data type supports
float16, float32, float64, int32 and int64.
- output (Variable, optional): A tensor. If :attr:`output` is None, a new tensor will
+ output (Tensor, optional): A tensor. If :attr:`output` is None, a new tensor will
be created as :attr:`output`. Default: None.
Returns:
- Variable: A tensor with the same shape, data type and value as :attr:`input`.
+ Tensor: A tensor with the same shape, data type and value as :attr:`input`.
Examples:
.. code-block:: python
import paddle
import numpy as np
- data = paddle.fill_constant(shape=[3, 2], value=2.5, dtype='float64') # [[2.5, 2.5], [2.5, 2.5], [2.5, 2.5]]
+ data = paddle.full(shape=[3, 2], fill_value=2.5, dtype='float64') # [[2.5, 2.5], [2.5, 2.5], [2.5, 2.5]]
array = np.array([[1, 1],
[3, 4],
[1, 3]]).astype(np.int64)
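The updated assign docstring accepts both a Tensor and a numpy ndarray as input. A minimal dygraph sketch along those lines, assuming the 2.0 alias paddle.assign (variable names are illustrative):

    import numpy as np
    import paddle

    data = paddle.full(shape=[3, 2], fill_value=2.5, dtype='float64')
    result1 = paddle.assign(data)      # new Tensor: [[2.5, 2.5], [2.5, 2.5], [2.5, 2.5]]
    array = np.array([[1, 1],
                      [3, 4],
                      [1, 3]]).astype(np.int64)
    result2 = paddle.assign(array)     # ndarray copied into a new Tensor
    print(result1.numpy(), result2.numpy())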
......
@@ -37,8 +37,8 @@ class ParamAttr(object):
Note:
``gradient_clip`` of ``ParamAttr`` HAS BEEN DEPRECATED since 2.0.
Please use ``need_clip`` in ``ParamAttr`` to specify the clip scope.
- There are three clipping strategies: :ref:`api_paddle_nn_GradientClipByGlobalNorm` ,
- :ref:`api_fluid_clip_GradientClipByNorm` , :ref:`api_fluid_clip_GradientClipByValue` .
+ There are three clipping strategies: :ref:`api_paddle_nn_ClipGradByGlobalNorm` ,
+ :ref:`api_paddle_nn_ClipGradByNorm` , :ref:`api_paddle_nn_ClipGradByValue` .
Parameters:
name (str, optional): The parameter's name. Default None, meaning that the name
@@ -50,8 +50,8 @@ class ParamAttr(object):
optimize is the global learning rates times the parameter's learning rate times
the factor of learning rate scheduler. Default 1.0.
regularizer (WeightDecayRegularizer, optional): Regularization strategy. There are two methods:
- :ref:`api_fluid_regularizer_L1Decay` , :ref:`api_fluid_regularizer_L2Decay` . If
- regularizer is also set in ``optimizer`` (such as :ref:`api_fluid_optimizer_SGDOptimizer` ),
+ :ref:`api_paddle_regularizer_L1Decay` , :ref:`api_paddle_regularizer_L2Decay` . If
+ regularizer is also set in ``optimizer`` (such as :ref:`api_paddle_optimizer_SGD` ),
that regularizer setting in optimizer will be ignored. Default None, meaning there is
no regularization.
trainable (bool): Whether this parameter is trainable. Default True.
@@ -63,7 +63,6 @@ class ParamAttr(object):
.. code-block:: python
import paddle
- paddle.enable_static()
weight_attr = paddle.ParamAttr(name="weight",
learning_rate=0.5,
......
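For context on how the re-pointed references fit together, a complete ParamAttr configuration under the 2.0 namespaces might look like the sketch below. The example in the hunk above is truncated after learning_rate; the remaining arguments and the optimizer wiring are illustrative, not part of the patch:

    import paddle

    weight_attr = paddle.ParamAttr(name="weight",
                                   learning_rate=0.5,
                                   regularizer=paddle.regularizer.L2Decay(1.0),
                                   trainable=True,
                                   need_clip=True)
    linear = paddle.nn.Linear(10, 10, weight_attr=weight_attr)

    # Clipping itself is configured on the optimizer; need_clip only marks
    # whether this parameter falls inside the clip scope.
    clip = paddle.nn.ClipGradByGlobalNorm(clip_norm=1.0)
    sgd = paddle.optimizer.SGD(learning_rate=0.1,
                               parameters=linear.parameters(),
                               grad_clip=clip)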