Unverified · Commit 3900f66c · Authored by ruri, committed by GitHub

[API 2.0] Fix adaptive pooling bug (#26922)

* fix

* fix doc, test=document_fix
Parent 3eacced9
@@ -976,6 +976,7 @@ def adaptive_avg_pool2d(x, output_size, data_format='NCHW', name=None):
     if isinstance(output_size, int):
         output_size = utils.convert_to_list(output_size, 2, 'output_size')
     else:
+        output_size = list(output_size)
         if output_size[0] == None:
             output_size[0] = in_h
         if output_size[1] == None:
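The added `output_size = list(output_size)` line is the actual bug fix: the `else` branch rewrites `None` entries of `output_size` in place, which fails when the caller passes a tuple, since tuples do not support item assignment. A minimal standalone sketch of the failure mode and the fix (plain Python, example sizes, independent of Paddle):

# Standalone sketch of the bug this commit fixes: replacing a None entry
# in output_size fails if output_size is a tuple, because tuples are immutable.
in_h, in_w = 32, 32          # sizes of the input feature map (example values)
output_size = (3, None)      # user passes a tuple with a None entry

try:
    output_size[1] = in_w    # what the old code effectively did
except TypeError as e:
    print("tuple input breaks:", e)   # 'tuple' object does not support item assignment

output_size = list(output_size)       # the fix: work on a mutable copy
if output_size[0] is None:
    output_size[0] = in_h
if output_size[1] is None:
    output_size[1] = in_w
print(output_size)                    # [3, 32]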
@@ -1079,6 +1080,7 @@ def adaptive_avg_pool3d(x, output_size, data_format='NCDHW', name=None):
     if isinstance(output_size, int):
         output_size = utils.convert_to_list(output_size, 3, 'output_size')
     else:
+        output_size = list(output_size)
         if output_size[0] == None:
             output_size[0] = in_l
         if output_size[1] == None:
@@ -1123,8 +1125,7 @@ def adaptive_max_pool1d(x, output_size, return_indices=False, name=None):
             with shape [N, C, L]. The format of input tensor is NCL,
             where N is batch size, C is the number of channels, L is the
             length of the feature. The data type is float32 or float64.
-        output_size (int|list|tuple): The pool kernel size. If pool kernel size is a tuple or list,
-            it must contain one int.
+        output_size (int): The pool kernel size. The value should be an integer.
         return_indices (bool): If true, the index of max pooling point will be returned along
             with outputs. It cannot be set in average pooling type. Default False.
         name(str, optional): For detailed information, please refer
@@ -1134,9 +1135,10 @@ def adaptive_max_pool1d(x, output_size, return_indices=False, name=None):
         Tensor: The output tensor of adaptive pooling result. The data type is same
             as input tensor.
     Raises:
-        ValueError: 'output_size' should be a integer or list or tuple with length as 1.
+        ValueError: 'output_size' should be an integer.
     Examples:
         .. code-block:: python
+
             # max adaptive pool1d
             # suppose input data in shape of [N, C, L], `output_size` is m or [m],
             # output shape is [N, C, m], adaptive pool divide L dimension
@@ -1162,7 +1164,7 @@ def adaptive_max_pool1d(x, output_size, return_indices=False, name=None):
     check_variable_and_dtype(x, 'x', ['float32', 'float64'],
                              'adaptive_max_pool1d')
     _check_input(x, 3)
-    check_type(output_size, 'pool_size', (int), 'adaptive_max_pool1d')
+    check_type(output_size, 'pool_size', int, 'adaptive_max_pool1d')
     check_type(return_indices, 'return_indices', bool, 'adaptive_max_pool1d')
 
     pool_size = [1] + utils.convert_to_list(output_size, 1, 'pool_size')
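For context, here is a hedged usage sketch of the 1D functional API after this change, with `output_size` given as a plain integer as the updated docstring requires. The shapes are illustrative, and `paddle.nn.functional.adaptive_max_pool1d` and `paddle.to_tensor` are assumed to be available as in the 2.0 API:

# Sketch, not part of the diff: adaptive_max_pool1d with an integer output_size.
import numpy as np
import paddle
import paddle.nn.functional as F

paddle.disable_static()
data = paddle.to_tensor(np.random.uniform(-1, 1, [1, 3, 32]).astype(np.float32))

# output_size must be a single int after this change; here L=32 is pooled down to 16.
out = F.adaptive_max_pool1d(data, output_size=16)
print(out.shape)  # [1, 3, 16]

# With return_indices=True, the positions of the maxima are returned as well.
out, indices = F.adaptive_max_pool1d(data, output_size=16, return_indices=True)
print(out.shape, indices.shape)  # [1, 3, 16] [1, 3, 16]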
@@ -1201,15 +1203,19 @@ def adaptive_max_pool2d(x, output_size, return_indices=False, name=None):
     """
     This operation applies a 2D adaptive max pooling on input tensor.
     See more details in :ref:`api_nn_pooling_AdaptiveMaxPool2d` .
+
     Args:
         x (Tensor): The input tensor of adaptive max pool2d operator, which is a 4-D tensor. The data type can be float16, float32, float64, int32 or int64.
         output_size (int|list|tuple): The pool kernel size. If pool kernel size is a tuple or list, it must contain two elements, (H, W). H and W can be either a int, or None which means the size will be the same as that of the input.
         return_indices (bool): If true, the index of max pooling point will be returned along with outputs. Default False.
         name(str, optional): For detailed information, please refer to :ref:`api_guide_Name`. Usually name is no need to set and None by default.
+
     Returns:
         Tensor: The output tensor of adaptive max pool2d result. The data type is same as input tensor.
+
     Examples:
         .. code-block:: python
+
             # max adaptive pool2d
             # suppose input data in the shape of [N, C, H, W], `output_size` is [m, n]
             # output shape is [N, C, m, n], adaptive pool divide H and W dimensions
@@ -1247,6 +1253,7 @@ def adaptive_max_pool2d(x, output_size, return_indices=False, name=None):
     if isinstance(output_size, int):
         output_size = utils.convert_to_list(output_size, 2, 'output_size')
     else:
+        output_size = list(output_size)
         if output_size[0] == None:
             output_size[0] = in_h
         if output_size[1] == None:
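The tuple-to-list conversion is what makes a `None` entry usable in practice: an axis given as `None` keeps the corresponding input size. A hedged sketch of that behaviour for the 2D functional API (shapes illustrative; `paddle.nn.functional.adaptive_max_pool2d` and `paddle.to_tensor` assumed as in the 2.0 API):

# Sketch, not part of the diff: a None entry in output_size keeps that axis at the input size.
import numpy as np
import paddle
import paddle.nn.functional as F

paddle.disable_static()
x = paddle.to_tensor(np.random.uniform(-1, 1, [2, 3, 32, 32]).astype(np.float32))

# Before this fix, passing a tuple such as (3, None) raised a TypeError inside the
# None-handling branch; after it, tuples and lists both work.
out = F.adaptive_max_pool2d(x, output_size=(3, None))
print(out.shape)  # [2, 3, 3, 32] -> H pooled to 3, W kept at the input size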
@@ -1283,15 +1290,19 @@ def adaptive_max_pool3d(x, output_size, return_indices=False, name=None):
     """
     This operation applies a 3D adaptive max pooling on input tensor.
     See more details in :ref:`api_nn_pooling_AdaptiveMaxPool3d` .
+
     Args:
         x (Tensor): The input tensor of adaptive max pool3d operator, which is a 5-D tensor. The data type can be float32, float64.
         output_size (int|list|tuple): The pool kernel size. If pool kernel size is a tuple or list, it must contain three elements, (D, H, W). D, H and W can be either a int, or None which means the size will be the same as that of the input.
         return_indices (bool): If true, the index of max pooling point will be returned along with outputs. Default False.
         name(str, optional): For detailed information, please refer to :ref:`api_guide_Name`. Usually name is no need to set and None by default.
+
     Returns:
         Tensor: The output tensor of adaptive max pool3d result. The data type is same as input tensor.
+
     Examples:
         .. code-block:: python
+
             # adaptive max pool3d
             # suppose input data in the shape of [N, C, D, H, W], `output_size` is [l, m, n]
             # output shape is [N, C, l, m, n], adaptive pool divide D, H and W dimensions
@@ -1333,6 +1344,7 @@ def adaptive_max_pool3d(x, output_size, return_indices=False, name=None):
     if isinstance(output_size, int):
         output_size = utils.convert_to_list(output_size, 3, 'output_size')
     else:
+        output_size = list(output_size)
         if output_size[0] == None:
             output_size[0] = in_l
         if output_size[1] == None:
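The 3D variants behave the same way; a short hedged sketch under the same assumptions (illustrative shapes, 2.0-style functional API):

# Sketch, not part of the diff: 3D adaptive max pooling with a None entry in output_size.
import numpy as np
import paddle
import paddle.nn.functional as F

paddle.disable_static()
x = paddle.to_tensor(np.random.uniform(-1, 1, [2, 3, 8, 32, 32]).astype(np.float32))

out = F.adaptive_max_pool3d(x, output_size=(None, 4, 4))
print(out.shape)  # [2, 3, 8, 4, 4] -> D kept at the input size, H and W pooled to 4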
......
@@ -87,6 +87,7 @@ class AvgPool1d(layers.Layer):
     Examples:
         .. code-block:: python
+
             import paddle
             import paddle.nn as nn
             paddle.disable_static()
@@ -176,6 +177,7 @@ class AvgPool2d(layers.Layer):
         ShapeError: If the output's shape calculated is not greater than 0.
     Examples:
         .. code-block:: python
+
             import paddle
             import paddle.nn as nn
             import numpy as np
@@ -267,6 +269,7 @@ class AvgPool3d(layers.Layer):
     Examples:
         .. code-block:: python
+
             import paddle
             import paddle.nn as nn
             import numpy as np
@@ -457,6 +460,7 @@ class MaxPool2d(layers.Layer):
     Examples:
         .. code-block:: python
+
             import paddle
             import paddle.nn as nn
             import numpy as np
@@ -547,6 +551,7 @@ class MaxPool3d(layers.Layer):
     Examples:
         .. code-block:: python
+
             import paddle
             import paddle.nn as nn
             import numpy as np
@@ -915,8 +920,11 @@ class AdaptiveMaxPool2d(layers.Layer):
     """
     This operation applies 2D adaptive max pooling on input tensor. The h and w dimensions
     of the output tensor are determined by the parameter output_size. The difference between adaptive pooling and pooling is that adaptive pooling focuses on the output size.
+
     For adaptive max pool2d:
+
     .. math::
+
         hstart &= floor(i * H_{in} / H_{out})
         hend &= ceil((i + 1) * H_{in} / H_{out})
         wstart &= floor(j * W_{in} / W_{out})
@@ -936,6 +944,7 @@ class AdaptiveMaxPool2d(layers.Layer):
         A callable object of AdaptiveMaxPool2d.
     Examples:
         .. code-block:: python
+
             # adaptive max pool2d
             # suppose input data in shape of [N, C, H, W], `output_size` is [m, n],
             # output shape is [N, C, m, n], adaptive pool divide H and W dimensions
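The `.. math::` block above defines how each output cell maps back to a window of the input. A small pure-Python sketch of that mapping, independent of Paddle, makes the adaptive division concrete; the helper name `adaptive_regions` is only for illustration:

import math

def adaptive_regions(in_size, out_size):
    # Per the formulas above: output index i covers input positions
    # [floor(i * in/out), ceil((i + 1) * in/out)).
    return [(math.floor(i * in_size / out_size),
             math.ceil((i + 1) * in_size / out_size))
            for i in range(out_size)]

# H_in = 7 pooled adaptively to H_out = 3: window sizes vary (and may overlap)
# so that together the windows always cover the whole input.
print(adaptive_regions(7, 3))  # [(0, 3), (2, 5), (4, 7)]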
@@ -978,8 +987,11 @@ class AdaptiveMaxPool3d(layers.Layer):
     """
     This operation applies 3D adaptive max pooling on input tensor. The d, h and w dimensions
     of the output tensor are determined by the parameter output_size. The difference between adaptive pooling and pooling is that adaptive pooling focuses on the output size.
+
     For adaptive max pool3d:
+
     .. math::
+
         dstart &= floor(i * D_{in} / D_{out})
         dend &= ceil((i + 1) * D_{in} / D_{out})
         hstart &= floor(j * H_{in} / H_{out})
@@ -987,10 +999,9 @@ class AdaptiveMaxPool3d(layers.Layer):
         wstart &= floor(k * W_{in} / W_{out})
         wend &= ceil((k + 1) * W_{in} / W_{out})
         Output(i, j, k) &= max(Input[dstart:dend, hstart:hend, wstart:wend])
+
     Parameters:
-        output_size (int|list|tuple): The pool kernel size. If pool kernel size is a tuple or list,
-            it must contain three elements, (D, H, W). D, H and W can be either a int, or None which means
-            the size will be the same as that of the input.
+        output_size (int|list|tuple): The pool kernel size. If pool kernel size is a tuple or list, it must contain three elements, (D, H, W). D, H and W can be either a int, or None which means the size will be the same as that of the input.
         return_indices (bool): If true, the index of max pooling point will be returned along with outputs. Default False.
         name(str, optional): For detailed information, please refer
             to :ref:`api_guide_Name`. Usually name is no need to set and
@@ -1002,6 +1013,7 @@ class AdaptiveMaxPool3d(layers.Layer):
         A callable object of AdaptiveMaxPool3d.
     Examples:
         .. code-block:: python
+
             # adaptive max pool3d
             # suppose input data in shape of [N, C, D, H, W], `output_size` is [l, m, n],
             # output shape is [N, C, l, m, n], adaptive pool divide D, H and W dimensions
@@ -1028,8 +1040,8 @@ class AdaptiveMaxPool3d(layers.Layer):
             pool = paddle.nn.AdaptiveMaxPool3d(output_size=4)
             out = pool(x)
             # out shape: [2, 3, 4, 4, 4]
-            pool, indices = paddle.nn.AdaptiveMaxPool3d(output_size=3, return_indices=True)
-            out = pool(x)
+            pool = paddle.nn.AdaptiveMaxPool3d(output_size=3, return_indices=True)
+            out, indices = pool(x)
             # out shape: [2, 3, 3, 3, 3], indices shape: [2, 3, 3, 3, 3]
     """
......