This operation applies 2D adaptive max pooling on the input tensor.
See more details in :ref:`api_nn_pooling_AdaptiveMaxPool2d` .
Args:
x (Tensor): The input tensor of the adaptive max pool2d operator, which is a 4-D tensor. The data type can be float16, float32, float64, int32 or int64.
output_size (int|list|tuple): The pool kernel size. If the pool kernel size is a tuple or list, it must contain two elements, (H, W). H and W can each be either an int, or None, which means the size will be the same as that of the input.
return_indices (bool): If true, the indices of the max pooling points will be returned along with the outputs. Default: False.
name(str, optional): For detailed information, please refer to :ref:`api_guide_Name`. Usually name does not need to be set and is None by default.
Returns:
Tensor: The output tensor of the adaptive max pool2d result. The data type is the same as the input tensor.
Examples:
.. code-block:: python
# max adaptive pool2d
# suppose input data in the shape of [N, C, H, W], `output_size` is [m, n]
# output shape is [N, C, m, n], adaptive pooling divides the H and W dimensions
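# A minimal usage sketch, not the original example: it assumes the functional
# entry point paddle.nn.functional.adaptive_max_pool2d and an illustrative
# input shape of [2, 3, 32, 32].
import paddle
import numpy as np
input_data = np.random.rand(2, 3, 32, 32)
x = paddle.to_tensor(input_data)
out = paddle.nn.functional.adaptive_max_pool2d(x=x, output_size=[3, 3])
# out.shape is [2, 3, 3, 3]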
This operation applies 3D adaptive max pooling on the input tensor.
See more details in :ref:`api_nn_pooling_AdaptiveMaxPool3d` .
Args:
x (Tensor): The input tensor of the adaptive max pool3d operator, which is a 5-D tensor. The data type can be float32 or float64.
output_size (int|list|tuple): The pool kernel size. If the pool kernel size is a tuple or list, it must contain three elements, (D, H, W). D, H and W can each be either an int, or None, which means the size will be the same as that of the input.
return_indices (bool): If true, the indices of the max pooling points will be returned along with the outputs. Default: False.
name(str, optional): For detailed information, please refer to :ref:`api_guide_Name`. Usually name does not need to be set and is None by default.
Returns:
Tensor: The output tensor of the adaptive max pool3d result. The data type is the same as the input tensor.
Examples:
.. code-block:: python
# adaptive max pool3d
# suppose input data in the shape of [N, C, D, H, W], `output_size` is [l, m, n]
# output shape is [N, C, l, m, n], adaptive pooling divides the D, H and W dimensions
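# A minimal usage sketch, not the original example: it assumes the functional
# entry point paddle.nn.functional.adaptive_max_pool3d and an illustrative
# input shape of [2, 3, 8, 32, 32].
import paddle
import numpy as np
input_data = np.random.rand(2, 3, 8, 32, 32)
x = paddle.to_tensor(input_data)
out = paddle.nn.functional.adaptive_max_pool3d(x=x, output_size=[3, 3, 3])
# out.shape is [2, 3, 3, 3, 3]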
...
@@ -176,6 +177,7 @@ class AvgPool2d(layers.Layer):
ShapeError: If the calculated output shape is not greater than 0.
Examples:
.. code-block:: python
import paddle
import paddle.nn as nn
import numpy as np
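# A minimal usage sketch, not the original example: the kernel_size/stride
# arguments and the [1, 3, 32, 32] input shape are illustrative assumptions.
data = paddle.to_tensor(np.random.uniform(-1, 1, [1, 3, 32, 32]).astype(np.float32))
pool = nn.AvgPool2d(kernel_size=2, stride=2)
out = pool(data)
# out.shape is [1, 3, 16, 16]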
...
@@ -267,6 +269,7 @@ class AvgPool3d(layers.Layer):
Examples:
.. code-block:: python
import paddle
import paddle.nn as nn
import numpy as np
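# A minimal usage sketch, not the original example: the kernel_size/stride
# arguments and the [1, 2, 3, 32, 32] input shape are illustrative assumptions.
data = paddle.to_tensor(np.random.uniform(-1, 1, [1, 2, 3, 32, 32]).astype(np.float32))
pool = nn.AvgPool3d(kernel_size=2, stride=2)
out = pool(data)
# out.shape is [1, 2, 1, 16, 16]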
...
@@ -457,6 +460,7 @@ class MaxPool2d(layers.Layer):
Examples:
.. code-block:: python
import paddle
import paddle.nn as nn
import numpy as np
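# A minimal usage sketch, not the original example: the kernel_size/stride/
# padding arguments and the [1, 3, 32, 32] input shape are illustrative assumptions.
data = paddle.to_tensor(np.random.uniform(-1, 1, [1, 3, 32, 32]).astype(np.float32))
pool = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)
out = pool(data)
# out.shape is [1, 3, 16, 16]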
...
@@ -547,6 +551,7 @@ class MaxPool3d(layers.Layer):
Examples:
.. code-block:: python
import paddle
import paddle.nn as nn
import numpy as np
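# A minimal usage sketch, not the original example: the kernel_size/stride/
# padding arguments and the [1, 2, 3, 32, 32] input shape are illustrative assumptions.
data = paddle.to_tensor(np.random.uniform(-1, 1, [1, 2, 3, 32, 32]).astype(np.float32))
pool = nn.MaxPool3d(kernel_size=2, stride=2, padding=0)
out = pool(data)
# out.shape is [1, 2, 1, 16, 16]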
...
@@ -916,8 +921,11 @@ class AdaptiveMaxPool2d(layers.Layer):
"""
"""
This operation applies 2D adaptive max pooling on input tensor. The h and w dimensions
This operation applies 2D adaptive max pooling on input tensor. The h and w dimensions
of the output tensor are determined by the parameter output_size. The difference between adaptive pooling and pooling is adaptive one focus on the output size.
of the output tensor are determined by the parameter output_size. The difference between adaptive pooling and pooling is adaptive one focus on the output size.
For adaptive max pool2d:
For adaptive max pool2d:
.. math::
.. math::
hstart &= floor(i * H_{in} / H_{out})
hstart &= floor(i * H_{in} / H_{out})
hend &= ceil((i + 1) * H_{in} / H_{out})
hend &= ceil((i + 1) * H_{in} / H_{out})
wstart &= floor(j * W_{in} / W_{out})
wstart &= floor(j * W_{in} / W_{out})
...
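For example, with :math:`H_{in} = 7` and :math:`H_{out} = 3`, these formulas give pooling windows of :math:`[0, 3)`, :math:`[2, 5)` and :math:`[4, 7)` along the H dimension.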
@@ -937,6 +945,7 @@ class AdaptiveMaxPool2d(layers.Layer):
A callable object of AdaptiveMaxPool2d.
Examples:
.. code-block:: python
# adaptive max pool2d
# suppose input data in shape of [N, C, H, W], `output_size` is [m, n],
# output shape is [N, C, m, n], adaptive pooling divides the H and W dimensions
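# A minimal usage sketch, not the original example: the [2, 3, 32, 32] input
# shape and the output_size value are illustrative assumptions.
import paddle
import numpy as np
x = paddle.to_tensor(np.random.rand(2, 3, 32, 32).astype(np.float32))
pool = paddle.nn.AdaptiveMaxPool2d(output_size=3)
out = pool(x)
# out.shape is [2, 3, 3, 3]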
...
@@ -979,8 +988,11 @@ class AdaptiveMaxPool3d(layers.Layer):
"""
"""
This operation applies 3D adaptive max pooling on input tensor. The h and w dimensions
This operation applies 3D adaptive max pooling on input tensor. The h and w dimensions
of the output tensor are determined by the parameter output_size. The difference between adaptive pooling and pooling is adaptive one focus on the output size.
of the output tensor are determined by the parameter output_size. The difference between adaptive pooling and pooling is adaptive one focus on the output size.
For adaptive max pool3d:
For adaptive max pool3d:
.. math::
.. math::
dstart &= floor(i * D_{in} / D_{out})
dstart &= floor(i * D_{in} / D_{out})
dend &= ceil((i + 1) * D_{in} / D_{out})
dend &= ceil((i + 1) * D_{in} / D_{out})
hstart &= floor(j * H_{in} / H_{out})
hstart &= floor(j * H_{in} / H_{out})
...
@@ -988,10 +1000,9 @@ class AdaptiveMaxPool3d(layers.Layer):
output_size (int|list|tuple): The pool kernel size. If the pool kernel size is a tuple or list, it must contain three elements, (D, H, W). D, H and W can each be either an int, or None, which means the size will be the same as that of the input.
return_indices (bool): If true, the indices of the max pooling points will be returned along with the outputs. Default: False.
name(str, optional): For detailed information, please refer
to :ref:`api_guide_Name`. Usually name does not need to be set and
is None by default.
...
@@ -1003,6 +1014,7 @@ class AdaptiveMaxPool3d(layers.Layer):
A callable object of AdaptiveMaxPool3d.
Examples:
.. code-block:: python
# adaptive max pool3d
# suppose input data in shape of [N, C, D, H, W], `output_size` is [l, m, n],
# output shape is [N, C, l, m, n], adaptive pooling divides the D, H and W dimensions
...
@@ -1029,8 +1041,8 @@ class AdaptiveMaxPool3d(layers.Layer):
pool = paddle.nn.AdaptiveMaxPool3d(output_size=4)
out = pool(x)
# out shape: [2, 3, 4, 4, 4]
pool = paddle.nn.AdaptiveMaxPool3d(output_size=3, return_indices=True)
out, indices = pool(x)
# out shape: [2, 3, 3, 3, 3], indices shape: [2, 3, 3, 3, 3]