Unverified Commit 2dca718a authored by Liyulingyue, committed by GitHub

fix en docs in fft and io.dataset (#44948)

* irfftn; test = docutment_fix

* fft; test=document_fix

* fft; test=document_fix

* fft; test=document_fix

* subdata; test=document_fix

* adaptive_avg_pool2d; test=document_fix

* adaptive_avg_pool3d; test = document_fix

* ftt; test=document_fix

* ftt; test=document_fix

* AvgPool1D; test=document_fix

* avg_pool1d; test=document_fix

* test=document_fix

* test=document_fix

* test=document_fix

* test=document_fix

* fft; test=document_fix

* emb; test=document_fix

* emb; test=document_fix

* emb;test=document_fix

* fold; test=document_fix

* fold; test=document_fix

* fold; test=document_fix

* fold;test=document_fix

* GELU;test=document_fix

* update irfftn docs;test=document_fix

* Update fft.py

* Update fft.py

* Update common.py

* Update common.py

* Update fft.py

* Update input.py

* Update pooling.py

* dropout2d; test=document_fix

* Fold; test=document_fix

* update fold math;test=document_fix

* Update common.py
Co-authored-by: Ligoml <39876205+Ligoml@users.noreply.github.com>
Parent fa06d9c3
This diff is collapsed.
......@@ -411,7 +411,7 @@ class Subset(Dataset):
indices (sequence): Indices in the whole set selected for subset.
Returns:
Dataset: A Dataset which is the subset of the original dataset.
List[Dataset]: A Dataset which is the subset of the original dataset.
Examples:
......
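A quick sketch of Subset in use; the RangeDataset below is a hypothetical stand-in for any map-style dataset:

.. code-block:: python

    import paddle
    from paddle.io import Dataset, Subset

    # hypothetical toy dataset yielding the integers 0..n-1
    class RangeDataset(Dataset):
        def __init__(self, n):
            self.n = n
        def __getitem__(self, idx):
            return idx
        def __len__(self):
            return self.n

    even = Subset(dataset=RangeDataset(10), indices=[0, 2, 4, 6, 8])
    print(len(even))  # 5
    print(even[1])    # 2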
......@@ -176,7 +176,7 @@ def interpolate(x,
name=None):
"""
This op resizes a batch of images.
This API resizes a batch of images.
The input must be a 3-D Tensor of the shape (num_batches, channels, in_w)
or 4-D (num_batches, channels, in_h, in_w), or a 5-D Tensor of the shape
(num_batches, channels, in_d, in_h, in_w) or (num_batches, in_d, in_h, in_w, channels),
......@@ -341,46 +341,28 @@ def interpolate(x,
A 3-D Tensor of the shape (num_batches, channels, out_w) or (num_batches, out_w, channels),
A 4-D Tensor of the shape (num_batches, channels, out_h, out_w) or (num_batches, out_h, out_w, channels),
or 5-D Tensor of the shape (num_batches, channels, out_d, out_h, out_w) or (num_batches, out_d, out_h, out_w, channels).
Raises:
TypeError: size should be a list or tuple or Tensor.
ValueError: The 'mode' of image_resize can only be 'linear', 'bilinear',
'trilinear', 'bicubic', 'area' or 'nearest' currently.
ValueError: 'linear' only support 3-D tensor.
ValueError: 'bilinear' and 'bicubic' only support 4-D tensor.
ValueError: 'nearest' only support 4-D or 5-D tensor.
ValueError: 'trilinear' only support 5-D tensor.
ValueError: One of size and scale_factor must not be None.
ValueError: size length should be 1 for input 3-D tensor.
ValueError: size length should be 2 for input 4-D tensor.
ValueError: size length should be 3 for input 5-D tensor.
ValueError: scale_factor should be greater than zero.
TypeError: align_corners should be a bool value
ValueError: align_mode can only be '0' or '1'
ValueError: data_format can only be 'NCW', 'NWC', 'NCHW', 'NHWC', 'NCDHW' or 'NDHWC'.
Examples:
.. code-block:: python
import paddle
import numpy as np
import paddle.nn.functional as F
# given out size
input_data = np.random.rand(2,3,6,10).astype("float32")
x = paddle.to_tensor(input_data)
output_1 = F.interpolate(x=x, size=[12,12])
print(output_1.shape)
# [2L, 3L, 12L, 12L]
# given scale
output_2 = F.interpolate(x=x, scale_factor=[2,1])
print(output_2.shape)
# [2L, 3L, 12L, 10L]
# bilinear interp
output_3 = F.interpolate(x=x, scale_factor=[2,1], mode="bilinear")
print(output_2.shape)
# [2L, 3L, 12L, 10L]
import paddle
import paddle.nn.functional as F
input_data = paddle.randn(shape=(2,3,6,10)).astype(paddle.float32)
output_1 = F.interpolate(x=input_data, size=[12,12])
print(output_1.shape)
# [2, 3, 12, 12]
# given scale
output_2 = F.interpolate(x=input_data, scale_factor=[2,1])
print(output_2.shape)
# [2, 3, 12, 10]
# bilinear interp
output_3 = F.interpolate(x=input_data, scale_factor=[2,1], mode="bilinear")
print(output_3.shape)
# [2, 3, 12, 10]
"""
data_format = data_format.upper()
resample = mode.upper()
......@@ -668,7 +650,7 @@ def upsample(x,
data_format='NCHW',
name=None):
"""
This op resizes a batch of images.
This API resizes a batch of images.
The input must be a 3-D Tensor of the shape (num_batches, channels, in_w)
or 4-D (num_batches, channels, in_h, in_w), or a 5-D Tensor of the shape
......@@ -716,6 +698,7 @@ def upsample(x,
Example:
.. code-block:: text
For scale_factor:
if align_corners = True && out_size > 1 :
scale_factor = (in_size-1.0)/(out_size-1.0)
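A quick numeric check of the formula above, with sizes chosen only for illustration:

.. code-block:: python

    in_size, out_size = 6, 12                            # sizes assumed for illustration
    # align_corners = True and out_size > 1 (the branch shown above)
    scale_aligned = (in_size - 1.0) / (out_size - 1.0)   # 5/11 ~= 0.4545
    # otherwise (branch folded in this hunk, assumed to be in_size/out_size)
    scale_plain = float(in_size) / out_size              # 0.5
    print(scale_aligned, scale_plain)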
......@@ -801,23 +784,23 @@ def upsample(x,
Parameters:
x (Tensor): 3-D, 4-D or 5-D Tensor, its data type is float32, float64, or uint8,
its data format is specified by :attr:`data_format`.
size (list|tuple|Tensor|None): Output shape of image resize
size (list|tuple|Tensor|None, optional): Output shape of image resize
layer, the shape is (out_w, ) when input is a 3-D Tensor, the shape is (out_h, out_w)
when input is a 4-D Tensor and is (out_d, out_h, out_w) when input is a 5-D Tensor.
Default: None. If a list/tuple, each element can be an integer or a Tensor of shape: [1].
If a Tensor, it should be a 1-D Tensor.
scale_factor (float|Tensor|list|tuple|None): The multiplier for the input height or width. At
scale_factor (float|Tensor|list|tuple|None, optional): The multiplier for the input height or width. At
least one of :attr:`size` or :attr:`scale_factor` must be set.
And :attr:`size` has a higher priority than :attr:`scale_factor`. Has to match the input size if
it is either a list, a tuple or a Tensor.
Default: None.
mode (str): The resample method. It supports 'linear', 'nearest', 'bilinear',
mode (str, optional): The resample method. It supports 'linear', 'nearest', 'bilinear',
'bicubic' and 'trilinear' currently. Default: 'nearest'
align_corners(bool) : An optional bool, If True, the centers of the 4 corner pixels of the
align_corners(bool, optional): If True, the centers of the 4 corner pixels of the
input and output tensors are aligned, preserving the values at the
corner pixels.
Default: False
align_mode(int) : An optional for linear/bilinear/trilinear interpolation. Refer to the formula in the example above,
align_mode(int, optional): An optional flag for linear/bilinear/trilinear interpolation. Refer to the formula in the example above,
it can be \'0\' for src_idx = scale_factor*(dst_indx+0.5)-0.5 , can be \'1\' for
src_idx = scale_factor*dst_index.
data_format (str, optional): Specify the data format of the input, and the data format of the output
......@@ -832,32 +815,19 @@ def upsample(x,
A 3-D Tensor of the shape (num_batches, channels, out_w) or (num_batches, out_w, channels),
A 4-D Tensor of the shape (num_batches, channels, out_h, out_w) or (num_batches, out_h, out_w, channels),
or 5-D Tensor of the shape (num_batches, channels, out_d, out_h, out_w) or (num_batches, out_d, out_h, out_w, channels).
Raises:
TypeError: size should be a list or tuple or Tensor.
ValueError: The 'mode' of image_resize can only be 'linear', 'bilinear',
'trilinear', 'bicubic', or 'nearest' currently.
ValueError: 'linear' only support 3-D tensor.
ValueError: 'bilinear', 'bicubic' and 'nearest' only support 4-D tensor.
ValueError: 'trilinear' only support 5-D tensor.
ValueError: One of size and scale_factor must not be None.
ValueError: size length should be 1 for input 3-D tensor.
ValueError: size length should be 2 for input 4-D tensor.
ValueError: size length should be 3 for input 5-D tensor.
ValueError: scale_factor should be greater than zero.
TypeError: align_corners should be a bool value
ValueError: align_mode can only be '0' or '1'
ValueError: data_format can only be 'NCW', 'NWC', 'NCHW', 'NHWC', 'NCDHW' or 'NDHWC'.
Examples:
.. code-block:: python
import paddle
import numpy as np
import paddle.nn.functional as F
import paddle
import paddle.nn as nn
input_data = np.random.rand(2,3,6,10).astype("float32")
input = paddle.to_tensor(input_data)
output = F.upsample(x=input, size=[12,12])
print(output.shape)
# [2L, 3L, 12L, 12L]
input_data = paddle.randn(shape=(2,3,6,10)).astype(paddle.float32)
upsample_out = paddle.nn.Upsample(size=[12,12])
output = upsample_out(x=input_data)
print(output.shape)
# [2, 3, 12, 12]
"""
return interpolate(x, size, scale_factor, mode, align_corners, align_mode,
......@@ -884,17 +854,17 @@ def bilinear(x1, x2, weight, bias=None, name=None):
Examples:
.. code-block:: python
import paddle
import numpy
import paddle.nn.functional as F
x1 = numpy.random.random((5, 5)).astype('float32')
x2 = numpy.random.random((5, 4)).astype('float32')
w = numpy.random.random((1000, 5, 4)).astype('float32')
b = numpy.random.random((1, 1000)).astype('float32')
import paddle
import paddle.nn.functional as F
result = F.bilinear(paddle.to_tensor(x1), paddle.to_tensor(x2), paddle.to_tensor(w), paddle.to_tensor(b)) # result shape [5, 1000]
x1 = paddle.randn((5, 5)).astype(paddle.float32)
x2 = paddle.randn((5, 4)).astype(paddle.float32)
w = paddle.randn((1000, 5, 4)).astype(paddle.float32)
b = paddle.randn((1, 1000)).astype(paddle.float32)
result = F.bilinear(x1, x2, w, b)
print(result.shape)
# [5, 1000]
"""
if in_dygraph_mode():
......@@ -933,10 +903,10 @@ def dropout(x,
Args:
x (Tensor): The input tensor. The data type is float32 or float64.
p (float|int): Probability of setting units to zero. Default 0.5.
axis (int|list|tuple): The axis along which the dropout is performed. Default None.
training (bool): A flag indicating whether it is in train phrase or not. Default True.
mode(str): ['upscale_in_train'(default) | 'downscale_in_infer'].
p (float|int, optional): Probability of setting units to zero. Default 0.5.
axis (int|list|tuple, optional): The axis along which the dropout is performed. Default None.
training (bool, optional): A flag indicating whether it is in train phase or not. Default True.
mode(str, optional): ['upscale_in_train'(default) | 'downscale_in_infer'].
1. upscale_in_train(default), upscale the output at training time
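A minimal sketch contrasting the two modes; values are assumed for illustration:

.. code-block:: python

    import paddle
    import paddle.nn.functional as F

    x = paddle.ones([2, 3])
    # upscale_in_train (default): kept units are scaled by 1/(1-p) during
    # training, so inference returns the input unchanged.
    y_train = F.dropout(x, p=0.5, training=True, mode='upscale_in_train')
    # downscale_in_infer: units are kept as-is during training and the whole
    # output is scaled by (1-p) at inference time.
    y_infer = F.dropout(x, p=0.5, training=False, mode='downscale_in_infer')
    print(y_train)  # zeros and 2.0 entries (random mask)
    print(y_infer)  # all entries 0.5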
......@@ -1036,22 +1006,38 @@ def dropout(x,
.. code-block:: python
import paddle
import numpy as np
x = np.array([[1,2,3], [4,5,6]]).astype('float32')
x = paddle.to_tensor(x)
y_train = paddle.nn.functional.dropout(x, 0.5)
y_test = paddle.nn.functional.dropout(x, 0.5, training=False)
y_0 = paddle.nn.functional.dropout(x, axis=0)
y_1 = paddle.nn.functional.dropout(x, axis=1)
y_01 = paddle.nn.functional.dropout(x, axis=[0,1])
print(x)
print(y_train)
print(y_test)
print(y_0)
print(y_1)
print(y_01)
import paddle
x = paddle.to_tensor([[1,2,3], [4,5,6]]).astype(paddle.float32)
y_train = paddle.nn.functional.dropout(x, 0.5)
y_test = paddle.nn.functional.dropout(x, 0.5, training=False)
y_0 = paddle.nn.functional.dropout(x, axis=0)
y_1 = paddle.nn.functional.dropout(x, axis=1)
y_01 = paddle.nn.functional.dropout(x, axis=[0,1])
print(x)
# Tensor(shape=[2, 3], dtype=float32, place=Place(cpu), stop_gradient=True,
# [[1., 2., 3.],
# [4., 5., 6.]])
print(y_train)
# Tensor(shape=[2, 3], dtype=float32, place=Place(cpu), stop_gradient=True,
# [[2. , 0. , 6. ],
# [8. , 0. , 12.]])
print(y_test)
# Tensor(shape=[2, 3], dtype=float32, place=Place(cpu), stop_gradient=True,
# [[1., 2., 3.],
# [4., 5., 6.]])
print(y_0)
# Tensor(shape=[2, 3], dtype=float32, place=Place(cpu), stop_gradient=True,
# [[0. , 0. , 0. ],
# [8. , 10., 12.]])
print(y_1)
# Tensor(shape=[2, 3], dtype=float32, place=Place(cpu), stop_gradient=True,
# [[2. , 0. , 6. ],
# [8. , 0. , 12.]])
print(y_01)
# Tensor(shape=[2, 3], dtype=float32, place=Place(cpu), stop_gradient=True,
# [[0. , 0. , 0. ],
# [8. , 0. , 12.]])
"""
if not isinstance(p, (float, int, Variable)):
......@@ -1199,17 +1185,16 @@ def dropout2d(x, p=0.5, training=True, data_format='NCHW', name=None):
.. code-block:: python
import paddle
import numpy as np
x = np.random.random(size=(2, 3, 4, 5)).astype('float32')
x = paddle.to_tensor(x)
x = paddle.randn(shape=(2, 3, 4, 5)).astype(paddle.float32)
y_train = paddle.nn.functional.dropout2d(x) #train
y_test = paddle.nn.functional.dropout2d(x, training=False) #test
for i in range(2):
for j in range(3):
print(x.numpy()[i,j,:,:])
print(y_train.numpy()[i,j,:,:]) # may all 0
print(y_test.numpy()[i,j,:,:])
print(x[i,j,:,:])
print(y_train[i,j,:,:]) # may all 0
print(y_test[i,j,:,:])
"""
input_shape = x.shape
if len(input_shape) != 4:
......@@ -1252,16 +1237,15 @@ def dropout3d(x, p=0.5, training=True, data_format='NCDHW', name=None):
Examples:
.. code-block:: python
import paddle
import numpy as np
import paddle
x = paddle.randn(shape=(2, 3, 4, 5, 6)).astype(paddle.float32)
y_train = paddle.nn.functional.dropout3d(x) #train
y_test = paddle.nn.functional.dropout3d(x, training=False) #test
print(x[0,0,:,:,:])
print(y_train[0,0,:,:,:]) # may all 0
print(y_test[0,0,:,:,:])
x = np.random.random(size=(2, 3, 4, 5, 6)).astype('float32')
x = paddle.to_tensor(x)
y_train = paddle.nn.functional.dropout3d(x) #train
y_test = paddle.nn.functional.dropout3d(x, training=False) #test
print(x.numpy()[0,0,:,:,:])
print(y_train.numpy()[0,0,:,:,:]) # may all 0
print(y_test.numpy()[0,0,:,:,:])
"""
input_shape = x.shape
......@@ -1301,17 +1285,19 @@ def alpha_dropout(x, p=0.5, training=True, name=None):
Examples:
.. code-block:: python
import paddle
import numpy as np
x = np.array([[-1, 1], [-1, 1]]).astype('float32')
x = paddle.to_tensor(x)
y_train = paddle.nn.functional.alpha_dropout(x, 0.5)
y_test = paddle.nn.functional.alpha_dropout(x, 0.5, training=False)
print(x)
print(y_train)
# [[-0.10721093, 1.6655989 ], [-0.7791938, -0.7791938]] (randomly)
print(y_test)
import paddle
x = paddle.to_tensor([[-1, 1], [-1, 1]]).astype(paddle.float32)
y_train = paddle.nn.functional.alpha_dropout(x, 0.5)
y_test = paddle.nn.functional.alpha_dropout(x, 0.5, training=False)
print(y_train)
# Tensor(shape=[2, 2], dtype=float32, place=Place(cpu), stop_gradient=True,
# [[-0.10721093, -0.77919382],
# [-0.10721093, 1.66559887]]) (randomly)
print(y_test)
# Tensor(shape=[2, 2], dtype=float32, place=Place(cpu), stop_gradient=True,
# [[-1., 1.],
# [-1., 1.]])
"""
if not isinstance(p, (float, int)):
raise TypeError("p argument should be a float or int")
......@@ -2058,7 +2044,7 @@ def fold(x,
name=None):
r"""
This Op is used to combines an array of sliding local blocks into a large containing
Combines an array of sliding local blocks into a large containing
tensor, also known as col2im when operated on a batched 2D image tensor. Fold calculates each
combined value in the resulting large tensor by summing all values from all containing blocks.
......@@ -2067,9 +2053,10 @@ def fold(x,
can be calculated as follows.
.. math::
H_out &= output_size[0]
W_out &= output_size[1]
C_out &= C_in / kernel\_sizes[0] / kernel\_sizes[1]
H_{out} &= output\_size[0] \\
W_{out} &= output\_size[1] \\
C_{out} &= \frac{C_{in}}{kernel\_sizes[0]\times kernel\_sizes[1]} \\
Parameters:
x(Tensor): 3-D Tensor, input tensor of format [N, C, L],
......@@ -2078,17 +2065,17 @@ def fold(x,
or an integer o treated as [o, o].
kernel_sizes(int|list|tuple): The size of convolution kernel, should be [k_h, k_w]
or an integer k treated as [k, k].
strides(int|list|tuple): The strides, should be [stride_h, stride_w]
strides(int|list|tuple, optional): The strides, should be [stride_h, stride_w]
or an integer stride treated as [stride, stride].
By default, strides will be [1, 1].
paddings(int|list|tuple): The paddings of each dimension, should be
paddings(int|list|tuple, optional): The paddings of each dimension, should be
[padding_top, padding_left, padding_bottom, padding_right]
or [padding_h, padding_w] or an integer padding.
If [padding_h, padding_w] was given, it will be expanded to
[padding_h, padding_w, padding_h, padding_w]. If an integer
padding was given, [padding, padding, padding, padding] will
be used. By default, paddings will be [0, 0, 0, 0].
dilations(int|list|tuple): the dilations of convolution kernel, should be
dilations(int|list|tuple, optional): the dilations of convolution kernel, should be
[dilation_h, dilation_w], or an integer dilation treated as
[dilation, dilation]. By default, it will be [1, 1].
name(str, optional): The default value is None.
......
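As a sanity check on the shape formulas above, a minimal F.fold sketch with shapes chosen only for illustration:

.. code-block:: python

    import paddle
    import paddle.nn.functional as F

    # 3 output channels with a 2x2 kernel -> C_in = 3 * 2 * 2 = 12;
    # a 4x5 output with stride 1 gives 3 * 4 = 12 sliding blocks, so L = 12.
    x = paddle.randn((2, 12, 12))
    y = F.fold(x, output_sizes=[4, 5], kernel_sizes=[2, 2])
    print(y.shape)  # [2, 3, 4, 5]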
......@@ -118,17 +118,17 @@ def one_hot(x, num_classes, name=None):
def embedding(x, weight, padding_idx=None, sparse=False, name=None):
r"""
The operator is used to lookup embeddings vector of ids provided by :attr:`x` .
Used to look up the embedding vectors of the ids provided by :attr:`x` .
The shape of output Tensor is generated by appending the last dimension of the input Tensor shape
with embedding size.
**Note:** The id in :attr:`x` must satisfy :math:`0 =< id < weight.shape[0]` ,
otherwise the program will throw an exception and exit.
Note:
The id in :attr:`x` must satisfy :math:`0 <= id < weight.shape[0]` ,
otherwise the program will throw an exception and exit.
.. code-block:: text
Case 1:
x is a Tensor.
padding_idx = -1
x.data = [[1, 3], [2, 4], [4, 127]]
......@@ -151,17 +151,17 @@ def embedding(x, weight, padding_idx=None, sparse=False, name=None):
satisfy :math:`0<= id < weight.shape[0]` .
weight (Tensor): The lookup table weight. A Tensor whose shape has two dimensions, which
indicate the size of the dictionary of embeddings and the size of each embedding vector respectively.
sparse(bool): The flag indicating whether to use sparse update. This parameter only
sparse(bool, optional): The flag indicating whether to use sparse update. This parameter only
affects the performance of the backwards gradient update. It is recommended to set
True because sparse update is faster. But some optimizers do not support sparse update,
such as :ref:`api_paddle_optimizer_adadelta_Adadelta` , :ref:`api_paddle_optimizer_adamax_Adamax` , :ref:`api_paddle_optimizer_lamb_Lamb`.
In these cases, sparse must be False. Default: False.
padding_idx(int|long|None): padding_idx needs to be in the interval [-weight.shape[0], weight.shape[0]).
padding_idx(int|long|None, optional): padding_idx needs to be in the interval [-weight.shape[0], weight.shape[0]).
If :math:`padding\_idx < 0`, the :math:`padding\_idx` will automatically be converted
to :math:`weight.shape[0] + padding\_idx` . It will output all-zero padding data whenever lookup
encounters :math:`padding\_idx` in id. And the padding data will not be updated while training.
If set None, it makes no effect to output. Default: None.
name(str|None): For detailed information, please refer
name(str|None, optional): For detailed information, please refer
to :ref:`api_guide_Name`. Usually name is no need to set and
None by default.
......@@ -171,13 +171,12 @@ def embedding(x, weight, padding_idx=None, sparse=False, name=None):
Examples:
.. code-block:: python
import numpy as np
import paddle
import paddle.nn as nn
x0 = np.arange(3, 6).reshape((3, 1)).astype(np.int64)
w0 = np.full(shape=(10, 3), fill_value=2).astype(np.float32)
x0 = paddle.arange(3, 6).reshape((3, 1)).astype(paddle.int64)
w0 = paddle.full(shape=(10, 3), fill_value=2).astype(paddle.float32)
# x.data = [[3], [4], [5]]
# x.shape = [3, 1]
......
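A hedged sketch of the lookup itself, reusing the x0 and w0 defined in the example above:

.. code-block:: python

    import paddle
    import paddle.nn.functional as F

    x0 = paddle.arange(3, 6).reshape((3, 1)).astype(paddle.int64)
    w0 = paddle.full(shape=(10, 3), fill_value=2).astype(paddle.float32)
    out = F.embedding(x=x0, weight=w0, sparse=False)
    print(out.shape)  # [3, 1, 3]; each vector is w0's row for ids 3..5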
......@@ -192,23 +192,16 @@ def avg_pool1d(x,
Returns:
Tensor: The output tensor of pooling result. The data type is same as input tensor.
Raises:
ValueError: If `padding` is a string, but not "SAME" or "VALID".
ValueError: If `padding` is "VALID", but `ceil_mode` is True.
ValueError: If `padding` is a list or tuple but its length is greater than 1.
ShapeError: If the input is not a 3-D tensor.
ShapeError: If the output's shape calculated is not greater than 0.
Examples:
.. code-block:: python
import paddle
import paddle.nn.functional as F
import numpy as np
import paddle.nn as nn
data = paddle.to_tensor(np.random.uniform(-1, 1, [1, 3, 32]).astype(np.float32))
out = F.avg_pool1d(data, kernel_size=2, stride=2, padding=0)
# out shape: [1, 3, 16]
data = paddle.uniform([1, 3, 32], paddle.float32)
AvgPool1D = nn.AvgPool1D(kernel_size=2, stride=2, padding=0)
pool_out = AvgPool1D(data)
# pool_out shape: [1, 3, 16]
"""
"""NCL to NCHW"""
data_format = "NCHW"
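Since the updated example above exercises the AvgPool1D layer, a minimal sketch of the functional call this docstring documents, with shapes assumed:

.. code-block:: python

    import paddle
    import paddle.nn.functional as F

    data = paddle.uniform([1, 3, 32], paddle.float32)
    out = F.avg_pool1d(data, kernel_size=2, stride=2, padding=0)
    print(out.shape)  # [1, 3, 16]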
......@@ -318,20 +311,14 @@ def avg_pool2d(x,
Returns:
Tensor: The output tensor of pooling result. The data type is same as input tensor.
Raises:
ValueError: If `padding` is a string, but not "SAME" or "VALID".
ValueError: If `padding` is "VALID", but `ceil_mode` is True.
ShapeError: If the output's shape calculated is not greater than 0.
Examples:
.. code-block:: python
import paddle
import paddle.nn.functional as F
import numpy as np
# avg pool2d
x = paddle.to_tensor(np.random.uniform(-1, 1, [1, 3, 32, 32]).astype(np.float32))
x = paddle.uniform([1, 3, 32, 32], paddle.float32)
out = F.avg_pool2d(x,
kernel_size=2,
stride=2, padding=0)
......@@ -446,19 +433,13 @@ def avg_pool3d(x,
Returns:
Tensor: The output tensor of pooling result. The data type is same as input tensor.
Raises:
ValueError: If `padding` is a string, but not "SAME" or "VALID".
ValueError: If `padding` is "VALID", but `ceil_mode` is True.
ShapeError: If the output's shape calculated is not greater than 0.
Examples:
.. code-block:: python
import paddle
import numpy as np
x = paddle.to_tensor(np.random.uniform(-1, 1, [1, 3, 32, 32, 32]).astype(np.float32))
x = paddle.uniform([1, 3, 32, 32, 32], paddle.float32)
# avg pool3d
out = paddle.nn.functional.avg_pool3d(
x,
......@@ -581,9 +562,8 @@ def max_pool1d(x,
import paddle
import paddle.nn.functional as F
import numpy as np
data = paddle.to_tensor(np.random.uniform(-1, 1, [1, 3, 32]).astype(np.float32))
data = paddle.uniform([1, 3, 32], paddle.float32)
pool_out = F.max_pool1d(data, kernel_size=2, stride=2, padding=0)
# pool_out shape: [1, 3, 16]
pool_out, indices = F.max_pool1d(data, kernel_size=2, stride=2, padding=0, return_mask=True)
......@@ -1350,8 +1330,10 @@ def adaptive_avg_pool1d(x, output_size, name=None):
x (Tensor): The input Tensor of pooling, which is a 3-D tensor with shape :math:`[N, C, L]`, where :math:`N` is batch size, :math:`C` is the number of channels and :math:`L` is the length of the feature. The data type is float32 or float64.
output_size (int): The target output size. Its data type must be int.
name (str, optional): For details, please refer to :ref:`api_guide_Name`. Generally, no setting is required. Default: None.
Returns:
Tensor: The result of 1D adaptive average pooling. Its data type is same as input.
Examples:
.. code-block:: python
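The example body is folded in this hunk; a minimal sketch of the call, with shapes assumed:

.. code-block:: python

    import paddle
    import paddle.nn.functional as F

    x = paddle.randn((1, 3, 32))
    out = F.adaptive_avg_pool1d(x, output_size=16)
    print(out.shape)  # [1, 3, 16]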
......@@ -1409,8 +1391,16 @@ def adaptive_avg_pool1d(x, output_size, name=None):
def adaptive_avg_pool2d(x, output_size, data_format='NCHW', name=None):
"""
This API implements adaptive average pooling 2d operation.
See more details in :ref:`api_nn_pooling_AdaptiveAvgPool2d` .
Applies 2D adaptive avg pooling on the input tensor. The h and w dimensions
of the output tensor are determined by the parameter output_size.
For avg adaptive pool2d:
.. math::
hstart &= floor(i * H_{in} / H_{out})
hend &= ceil((i + 1) * H_{in} / H_{out})
wstart &= floor(j * W_{in} / W_{out})
wend &= ceil((j + 1) * W_{in} / W_{out})
Output(i, j) &= \frac{\sum Input[hstart:hend, wstart:wend]}{(hend - hstart) * (wend - wstart)}
Args:
x (Tensor): The input tensor of adaptive avg pool2d operator, which is a 4-D tensor.
......@@ -1426,8 +1416,7 @@ def adaptive_avg_pool2d(x, output_size, data_format='NCHW', name=None):
None by default.
Returns:
Tensor: The output tensor of avg adaptive pool2d result. The data type is same as input tensor.
Raises:
ValueError: If `data_format` is not "NCHW" or "NHWC".
Examples:
.. code-block:: python
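The example body is folded here as well; a minimal sketch matching the formulas above, with shapes assumed:

.. code-block:: python

    import paddle
    import paddle.nn.functional as F

    x = paddle.randn((2, 3, 32, 32))
    out = F.adaptive_avg_pool2d(x, output_size=[3, 3])
    print(out.shape)  # [2, 3, 3, 3]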
......@@ -1515,8 +1504,19 @@ def adaptive_avg_pool2d(x, output_size, data_format='NCHW', name=None):
def adaptive_avg_pool3d(x, output_size, data_format='NCDHW', name=None):
"""
This API implements adaptive average pooling 3d operation.
See more details in :ref:`api_nn_pooling_AdaptiveAvgPool3d` .
This operation applies 3D adaptive avg pooling on the input tensor. The d, h and w dimensions
of the output tensor are determined by the parameter output_size.
For avg adaptive pool3d:
.. math::
dstart &= floor(i * D_{in} / D_{out})
dend &= ceil((i + 1) * D_{in} / D_{out})
hstart &= floor(j * H_{in} / H_{out})
hend &= ceil((j + 1) * H_{in} / H_{out})
wstart &= floor(k * W_{in} / W_{out})
wend &= ceil((k + 1) * W_{in} / W_{out})
Output(i, j, k) &= \frac{\sum Input[dstart:dend, hstart:hend, wstart:wend]}
{(dend - dstart) * (hend - hstart) * (wend - wstart)}
Args:
x (Tensor): The input tensor of adaptive avg pool3d operator, which is a 5-D tensor.
......@@ -1532,8 +1532,7 @@ def adaptive_avg_pool3d(x, output_size, data_format='NCDHW', name=None):
None by default.
Returns:
Tensor: The output tensor of avg adaptive pool3d result. The data type is same as input tensor.
Raises:
ValueError: If `data_format` is not "NCDHW" or "NDHWC".
Examples:
.. code-block:: python
......@@ -1556,12 +1555,10 @@ def adaptive_avg_pool3d(x, output_size, data_format='NCDHW', name=None):
# output[:, :, i, j, k] =
# avg(input[:, :, dstart:dend, hstart: hend, wstart: wend])
import paddle
import numpy as np
input_data = np.random.rand(2, 3, 8, 32, 32)
x = paddle.to_tensor(input_data)
# x.shape is [2, 3, 8, 32, 32]
input_data = paddle.randn(shape=(2, 3, 8, 32, 32))
out = paddle.nn.functional.adaptive_avg_pool3d(
x = x,
x = input_data,
output_size=[3, 3, 3])
# out.shape is [2, 3, 3, 3, 3]
"""
......@@ -1654,9 +1651,8 @@ def adaptive_max_pool1d(x, output_size, return_mask=False, name=None):
#
import paddle
import paddle.nn.functional as F
import numpy as np
data = paddle.to_tensor(np.random.uniform(-1, 1, [1, 3, 32]).astype(np.float32))
data = paddle.uniform([1, 3, 32], paddle.float32)
pool_out = F.adaptive_max_pool1d(data, output_size=16)
# pool_out shape: [1, 3, 16])
pool_out, indices = F.adaptive_max_pool1d(data, output_size=16, return_mask=True)
......@@ -1740,13 +1736,10 @@ def adaptive_max_pool2d(x, output_size, return_mask=False, name=None):
# output[:, :, i, j] = max(input[:, :, hstart: hend, wstart: wend])
#
import paddle
import numpy as np
input_data = np.random.rand(2, 3, 32, 32)
x = paddle.to_tensor(input_data)
# x.shape is [2, 3, 32, 32]
input_data = paddle.randn(shape=(2, 3, 32, 32))
out = paddle.nn.functional.adaptive_max_pool2d(
x = x,
x = input_data,
output_size=[3, 3])
# out.shape is [2, 3, 3, 3]
"""
......@@ -1833,13 +1826,10 @@ def adaptive_max_pool3d(x, output_size, return_mask=False, name=None):
# output[:, :, i, j, k] = max(input[:, :, dstart: dend, hstart: hend, wstart: wend])
#
import paddle
import numpy as np
input_data = np.random.rand(2, 3, 8, 32, 32)
x = paddle.to_tensor(input_data)
# x.shape is [2, 3, 8, 32, 32]
input_data = paddle.randn(shape=(2, 3, 8, 32, 32))
out = paddle.nn.functional.adaptive_max_pool3d(
x = x,
x = input_data,
output_size=[3, 3, 3])
# out.shape is [2, 3, 3, 3, 3]
"""
......
......@@ -140,11 +140,10 @@ class GELU(Layer):
Examples:
.. code-block:: python
import paddle
import numpy as np
x = paddle.to_tensor(np.array([[-1, 0.5],[1, 1.5]]))
x = paddle.to_tensor([[-1, 0.5],[1, 1.5]])
m = paddle.nn.GELU()
out = m(x) # [-0.158655 0.345731 0.841345 1.39979]
......
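GELU also exposes a tanh-based approximation through its approximate flag; a brief editorial sketch:

.. code-block:: python

    import paddle

    x = paddle.to_tensor([[-1.0, 0.5], [1.0, 1.5]])
    m_tanh = paddle.nn.GELU(approximate=True)  # tanh approximation
    out = m_tanh(x)  # close to the exact GELU values above
    print(out)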
......@@ -1279,9 +1279,8 @@ class CosineSimilarity(Layer):
class Embedding(Layer):
r"""
**Embedding Layer**
This interface is used to construct a callable object of the ``Embedding`` class.
Embedding Layer, used to construct a callable object of the ``Embedding`` class.
For specific usage, refer to code examples. It implements the function of the Embedding Layer.
This layer is used to lookup embeddings vector of ids provided by :attr:`x` .
It automatically constructs a 2D embedding matrix based on the
......@@ -1290,8 +1289,9 @@ class Embedding(Layer):
The shape of output Tensor is generated by appending an emb_size dimension to the
last dimension of the input Tensor shape.
**Note:** The id in :attr:`x` must satisfy :math:`0 =< id < num_embeddings` ,
otherwise the program will throw an exception and exit.
Note:
The id in :attr:`x` must satisfy :math:`0 <= id < num_embeddings` ,
otherwise the program will throw an exception and exit.
.. code-block:: text
......@@ -1318,23 +1318,23 @@ class Embedding(Layer):
num_embeddings (int): Just one element which indicates the size
of the dictionary of embeddings.
embedding_dim (int): Just one element which indicates the size of each embedding vector respectively.
padding_idx(int|long|None): padding_idx needs to be in the interval [-num_embeddings, num_embeddings).
padding_idx(int|long|None, optional): padding_idx needs to be in the interval [-num_embeddings, num_embeddings).
If :math:`padding\_idx < 0`, the :math:`padding\_idx` will automatically be converted
to :math:`vocab\_size + padding\_idx` . It will output all-zero padding data whenever lookup
encounters :math:`padding\_idx` in id. And the padding data will not be updated while training.
If set None, it makes no effect to output. Default: None.
sparse(bool): The flag indicating whether to use sparse update. This parameter only
sparse(bool, optional): The flag indicating whether to use sparse update. This parameter only
affects the performance of the backwards gradient update. It is recommended to set
True because sparse update is faster. But some optimizers do not support sparse update,
such as :ref:`api_paddle_optimizer_adadelta_Adadelta` , :ref:`api_paddle_optimizer_adamax_Adamax` , :ref:`api_paddle_optimizer_lamb_Lamb`.
In these cases, sparse must be False. Default: False.
weight_attr(ParamAttr): To specify the weight parameter property. Default: None, which means the
weight_attr(ParamAttr, optional): To specify the weight parameter property. Default: None, which means the
default weight parameter property is used. See usage for details in :ref:`api_ParamAttr` . In addition,
user-defined or pre-trained word vectors can be loaded with the :attr:`param_attr` parameter.
The local word vector needs to be transformed into numpy format, and the shape of local word
vector should be consistent with :attr:`num_embeddings` . Then :ref:`api_initializer_NumpyArrayInitializer`
is used to load custom or pre-trained word vectors. See code example for details.
name(str|None): For detailed information, please refer
name(str|None, optional): For detailed information, please refer
to :ref:`api_guide_Name`. Usually name is no need to set and
None by default.
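A minimal layer-level sketch, with sizes assumed for illustration:

.. code-block:: python

    import paddle

    emb = paddle.nn.Embedding(num_embeddings=10, embedding_dim=3, padding_idx=0)
    ids = paddle.to_tensor([[3], [4], [0]], dtype='int64')
    out = emb(ids)
    print(out.shape)  # [3, 1, 3]; the row for id 0 is all zeros (padding_idx)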
......@@ -1514,7 +1514,7 @@ class Unfold(Layer):
class Fold(Layer):
r"""
This Op is used to combines an array of sliding local blocks into a large containing
Combines an array of sliding local blocks into a large containing
tensor, also known as col2im when operated on a batched 2D image tensor. Fold calculates each
combined value in the resulting large tensor by summing all values from all containing blocks.
......@@ -1523,26 +1523,27 @@ class Fold(Layer):
can be calculated as follows.
.. math::
H_out &= output_size[0]
W_out &= output_size[1]
C_out &= C_in / kernel\_sizes[0] / kernel\_sizes[1]
H_{out} &= output\_size[0] \\
W_{out} &= output\_size[1] \\
C_{out} &= \frac{C_{in}}{kernel\_sizes[0]\times kernel\_sizes[1]} \\
Parameters:
output_sizes(list): The size of the output, should be [output_size_h, output_size_w]
or an integer o treated as [o, o].
kernel_sizes(int|list|tuple): The size of convolution kernel, should be [k_h, k_w]
or an integer k treated as [k, k].
strides(int|list|tuple): The strides, should be [stride_h, stride_w]
strides(int|list|tuple, optional): The strides, should be [stride_h, stride_w]
or an integer stride treated as [stride, stride].
By default, strides will be [1, 1].
paddings(int|list|tuple): The paddings of each dimension, should be
paddings(int|list|tuple, optional): The paddings of each dimension, should be
[padding_top, padding_left, padding_bottom, padding_right]
or [padding_h, padding_w] or an integer padding.
If [padding_h, padding_w] was given, it will be expanded to
[padding_h, padding_w, padding_h, padding_w]. If an integer
padding was given, [padding, padding, padding, padding] will
be used. By default, paddings will be [0, 0, 0, 0].
dilations(int|list|tuple): the dilations of convolution kernel, should be
dilations(int|list|tuple, optional): the dilations of convolution kernel, should be
[dilation_h, dilation_w], or an integer dilation treated as
[dilation, dilation]. By default, it will be [1, 1].
name(str, optional): The default value is None.
......
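And the layer form of fold, mirroring the earlier functional sketch, with shapes assumed:

.. code-block:: python

    import paddle

    fold = paddle.nn.Fold(output_sizes=[4, 5], kernel_sizes=[2, 2])
    x = paddle.randn((2, 12, 12))  # C_in = 3 * 2 * 2, L = 3 * 4 blocks
    y = fold(x)
    print(y.shape)  # [2, 3, 4, 5]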
......@@ -53,22 +53,15 @@ class AvgPool1D(Layer):
name(str, optional): For detailed information, please refer to :ref:`api_guide_Name`.
Usually name is not set and None by default.
Returns:
A callable object of AvgPool1D.
Raises:
ValueError: If `padding` is a string, but not "SAME" or "VALID".
ValueError: If `padding` is "VALID", but `ceil_mode` is True.
ValueError: If `padding` is a list or tuple but its length greater than 1.
ShapeError: If the input is not a 3-D tensor.
ShapeError: If the output's shape calculated is not greater than 0.
Shape:
- x(Tensor): The input tensor of avg pool1d operator, which is a 3-D tensor.
The data type can be float32, float64.
- output(Tensor): The output tensor of avg pool1d operator, which is a 3-D tensor.
The data type is same as input x.
Returns:
A callable object of AvgPool1D.
Examples:
.. code-block:: python
......@@ -164,10 +157,7 @@ class AvgPool2D(Layer):
Returns:
A callable object of AvgPool2D.
Raises:
ValueError: If `padding` is a string, but not "SAME" or "VALID".
ValueError: If `padding` is "VALID", but `ceil_mode` is True.
ShapeError: If the output's shape calculated is not greater than 0.
Examples:
.. code-block:: python
......@@ -255,10 +245,6 @@ class AvgPool3D(Layer):
Returns:
A callable object of AvgPool3D.
Raises:
ValueError: If `padding` is a string, but not "SAME" or "VALID".
ValueError: If `padding` is "VALID", but `ceil_mode` is True.
ShapeError: If the output's shape calculated is not greater than 0.
Shape:
- x(Tensor): The input tensor of avg pool3d operator, which is a 5-D tensor.
......