Unverified commit b6a26749, authored by huangjun12, committed by GitHub

fix doc of alpha_dropout/dropout/dropout2d/dropout3d/npair_loss (#29136)

* fix en doc, test=document_fix

* add blank after code declare, test=document_fix

* refine doc of dropout, test=document_fix

* refine npair_loss and dropout, test=document_fix
Parent 4096ff94
......@@ -1651,16 +1651,15 @@ from .control_flow import equal
def npair_loss(anchor, positive, labels, l2_reg=0.002):
"""
Npair loss requires paired data. Npair loss has two parts: the first part is L2
regularizer on the embedding vector; the second part is cross entropy loss which
takes the similarity matrix of anchor and positive as logits.
For more information, please refer to:
`Improved Deep Metric Learning with Multi-class N-pair Loss Objective <http://www.nec-labs.com/uploads/images/Department-Images/MediaAnalytics/papers/nips16_npairmetriclearning.pdf>`_
Args:
anchor(Tensor): embedding vector for the anchor image. shape=[batch_size, embedding_dims],
the data type is float32 or float64.
......@@ -1669,11 +1668,12 @@ def npair_loss(anchor, positive, labels, l2_reg=0.002):
labels(Tensor): 1-D tensor. shape=[batch_size], the data type is float32 or float64 or int64.
l2_reg(float32): L2 regularization term on embedding vector, default: 0.002.
Returns:
A Tensor representing the npair loss, the data type is the same as anchor, the shape is [1].
Examples:
.. code-block:: python
import paddle
......@@ -1685,9 +1685,9 @@ def npair_loss(anchor, positive, labels, l2_reg=0.002):
labels = paddle.rand(shape=(18,), dtype=DATATYPE)
npair_loss = paddle.nn.functional.npair_loss(anchor, positive, labels, l2_reg = 0.002)
print(npair_loss)
"""
check_variable_and_dtype(anchor, 'anchor', ['float32', 'float64'],
'npair_loss')
check_variable_and_dtype(positive, 'positive', ['float32', 'float64'],
......
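To make the two parts of the loss concrete, here is a minimal NumPy sketch of the computation described above: an L2 term on the anchor and positive embeddings plus a soft-label cross entropy over the anchor-positive similarity matrix. This is an editor's illustration, not Paddle's implementation; the 0.25 scaling of the L2 term and the exact reductions are assumptions.

.. code-block:: python

    import numpy as np

    def npair_loss_sketch(anchor, positive, labels, l2_reg=0.002):
        # Soft labels: 1 where two samples share a class label, normalized per row.
        same = (labels.reshape(-1, 1) == labels.reshape(1, -1)).astype('float32')
        soft_labels = same / same.sum(axis=1, keepdims=True)
        # Part 1: L2 regularizer on both embedding vectors (the 0.25 factor is an assumption).
        l2loss = l2_reg * 0.25 * (np.mean(np.sum(anchor ** 2, axis=1)) +
                                  np.mean(np.sum(positive ** 2, axis=1)))
        # Part 2: cross entropy with the similarity matrix of anchor and positive as logits.
        logits = anchor @ positive.T                          # shape [batch_size, batch_size]
        logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
        log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        celoss = -np.mean(np.sum(soft_labels * log_softmax, axis=1))
        return l2loss + celoss

    anchor = np.random.rand(18, 6).astype('float32')
    positive = np.random.rand(18, 6).astype('float32')
    labels = np.random.randint(0, 4, size=(18,))
    print(npair_loss_sketch(anchor, positive, labels))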
......@@ -769,7 +769,7 @@ def dropout(x,
p (float | int): Probability of setting units to zero. Default 0.5.
axis (int | list): The axis along which the dropout is performed. Default None.
training (bool): A flag indicating whether it is in train phase or not. Default True.
mode(str): ['upscale_in_train'(default) | 'downscale_in_infer'].
1. upscale_in_train(default), upscale the output at training time
......@@ -785,9 +785,14 @@ def dropout(x,
Returns:
A Tensor representing the dropout, which has the same shape and data type as `x` .
Examples:
We use ``p=0.5`` in the following description for simplicity.
1. When ``axis=None`` , this is the commonly used dropout, which drops each element of x independently at random.
.. code-block:: text
Let's see a simple case when x is a 2d tensor with shape 2*3:
[[1 2 3]
[4 5 6]]
......@@ -813,7 +818,12 @@ def dropout(x,
[[0.5 1. 1.5]
[2. 2.5 3. ]]
2. When ``axis!=None`` , this is useful for dropping whole channels from an image or sequence.
.. code-block:: text
Let's see the simple case when x is a 2d tensor with shape 2*3 again:
[[1 2 3]
[4 5 6]]
......@@ -853,18 +863,15 @@ def dropout(x,
[[0 0 0]
[0 0 0]]
Actually this is not what we want, because all elements may be set to zero.
When x is a 4d tensor with shape `NCHW`, we can set ``axis=[0,1]`` and the dropout will be performed on channels `N` and `C`; `H` and `W` are tied, i.e. paddle.nn.functional.dropout(x, p, axis=[0,1]) . Please refer to ``paddle.nn.functional.dropout2d`` for more details.
Similarly, when x is a 5d tensor with shape `NCDHW`, we can set ``axis=[0,1]`` to perform dropout3d. Please refer to ``paddle.nn.functional.dropout3d`` for more details.
.. code-block:: python
import paddle
import numpy as np
paddle.disable_static()
x = np.array([[1,2,3], [4,5,6]]).astype('float32')
x = paddle.to_tensor(x)
y_train = paddle.nn.functional.dropout(x, 0.5)
......@@ -872,12 +879,12 @@ def dropout(x,
y_0 = paddle.nn.functional.dropout(x, axis=0)
y_1 = paddle.nn.functional.dropout(x, axis=1)
y_01 = paddle.nn.functional.dropout(x, axis=[0,1])
print(x)
print(y_train)
print(y_test)
print(y_0)
print(y_1)
print(y_01)
"""
if not isinstance(p, (float, int)):
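As a quick check of the behavior described above, the following editor's sketch verifies that with the default ``upscale_in_train`` mode the kept entries equal ``x / (1 - p)`` during training, that inference leaves ``x`` untouched, and that ``axis=0`` keeps or zeroes whole rows of a 2-D input. The call to ``paddle.seed`` is only an assumed way to make the illustration reproducible.

.. code-block:: python

    import paddle

    paddle.seed(2020)  # assumed seed, only for reproducibility of the illustration
    x = paddle.to_tensor([[1., 2., 3.], [4., 5., 6.]])
    p = 0.5

    y_train = paddle.nn.functional.dropout(x, p)                  # upscale_in_train (default)
    y_test = paddle.nn.functional.dropout(x, p, training=False)

    # Kept entries are scaled by 1 / (1 - p) at training time ...
    mask = paddle.cast(paddle.not_equal(y_train, paddle.zeros_like(y_train)), 'float32')
    print(paddle.allclose(y_train, mask * x / (1 - p)))
    # ... and with this mode inference returns the input unchanged.
    print(paddle.allclose(y_test, x))

    # With axis=0 the mask is generated along axis 0 only and broadcast over axis 1,
    # so each row is either kept (and scaled) or zeroed as a whole.
    y_0 = paddle.nn.functional.dropout(x, p, axis=0)
    print(y_0)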
......@@ -987,21 +994,19 @@ def dropout2d(x, p=0.5, training=True, data_format='NCHW', name=None):
The data type is float32 or float64.
p (float): Probability of setting units to zero. Default 0.5.
training (bool): A flag indicating whether it is in train phase or not. Default True.
data_format (str, optional): Specify the data format of the input, and the data format of the output will be consistent with that of the input. An optional string from `NCHW` or `NHWC` . The default is `NCHW` . When it is `NCHW` , the data is stored in the order of: [batch_size, input_channels, input_height, input_width].
name (str, optional): Name for the operation (optional, default is None). For more information, please refer to :ref:`api_guide_Name`.
Returns:
A Tensor representing the dropout2d, which has the same shape and data type as `x` .
Examples:
.. code-block:: python
import paddle
import numpy as np
paddle.disable_static()
x = np.random.random(size=(2, 3, 4, 5)).astype('float32')
x = paddle.to_tensor(x)
y_train = paddle.nn.functional.dropout2d(x) #train
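Because dropout2d zeroes entire ``H x W`` feature maps rather than individual elements, a simple way to see this is to count the surviving elements per channel: every ``(N, C)`` map should be either fully zero or fully kept. The sketch below is an editor's illustration with an assumed seed.

.. code-block:: python

    import paddle

    paddle.seed(2020)  # assumed seed, only for reproducibility
    x = paddle.rand([2, 3, 4, 5], dtype='float32')   # NCHW layout
    y = paddle.nn.functional.dropout2d(x, p=0.5)

    # Each (N, C) feature map shares one dropout decision, so the per-channel
    # count of non-zero outputs is either 0 or 4 * 5 = 20.
    nonzero = paddle.cast(paddle.not_equal(y, paddle.zeros_like(y)), 'float32')
    print(paddle.sum(nonzero, axis=[2, 3]))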
......@@ -1044,21 +1049,19 @@ def dropout3d(x, p=0.5, training=True, data_format='NCDHW', name=None):
The data type is float32 or float64.
p (float): Probability of setting units to zero. Default 0.5.
training (bool): A flag indicating whether it is in train phase or not. Default True.
data_format (str, optional): Specify the data format of the input, and the data format of the output will be consistent with that of the input. An optional string from ``NCDHW`` or ``NDHWC``. The default is ``NCDHW`` . When it is ``NCDHW`` , the data is stored in the order of: [batch_size, input_channels, input_depth, input_height, input_width].
name (str, optional): Name for the operation (optional, default is None). For more information, please refer to :ref:`api_guide_Name`.
Returns:
A Tensor representing the dropout3d, which has the same shape and data type as `x` .
Examples:
.. code-block:: python
import paddle
import numpy as np
paddle.disable_static()
x = np.random.random(size=(2, 3, 4, 5, 6)).astype('float32')
x = paddle.to_tensor(x)
y_train = paddle.nn.functional.dropout3d(x) #train
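The data_format argument above only tells dropout3d which axis holds the channels; the operation itself is the same. A hedged sketch follows; the transpose-based comparison and the seed are editor's assumptions, not part of the original example.

.. code-block:: python

    import paddle

    paddle.seed(2020)  # assumed seed, only for reproducibility
    x = paddle.rand([2, 3, 4, 5, 6], dtype='float32')   # NCDHW: channels on axis 1
    y_ncdhw = paddle.nn.functional.dropout3d(x, p=0.5)

    # The same data laid out as NDHWC puts the channels on the last axis,
    # so data_format must be passed accordingly.
    x_ndhwc = paddle.transpose(x, perm=[0, 2, 3, 4, 1])
    y_ndhwc = paddle.nn.functional.dropout3d(x_ndhwc, p=0.5, data_format='NDHWC')

    print(y_ncdhw.shape)   # [2, 3, 4, 5, 6]
    print(y_ndhwc.shape)   # [2, 4, 5, 6, 3]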
......@@ -1105,18 +1108,18 @@ def alpha_dropout(x, p=0.5, training=True, name=None):
Examples:
.. code-block:: python
import paddle
import numpy as np
paddle.disable_static()
x = np.array([[-1, 1], [-1, 1]]).astype('float32')
x = paddle.to_tensor(x)
y_train = paddle.nn.functional.alpha_dropout(x, 0.5)
y_test = paddle.nn.functional.alpha_dropout(x, 0.5, training=False)
print(x)
print(y_train)
# [[-0.10721093, 1.6655989 ], [-0.7791938, -0.7791938]] (randomly)
print(y_test)
"""
if not isinstance(p, (float, int)):
raise TypeError("p argument should be a float or int")
......
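Unlike standard dropout, alpha dropout replaces dropped units with a negative saturation value and then applies an affine correction, so the mean and variance of the input are roughly preserved (which is why the example output above contains repeated non-zero values instead of zeros). A quick empirical check on a large standard-normal sample, with an assumed seed, is sketched below.

.. code-block:: python

    import paddle

    paddle.seed(2020)  # assumed seed, only for reproducibility
    x = paddle.randn([100000], dtype='float32')    # roughly zero mean, unit variance
    y = paddle.nn.functional.alpha_dropout(x, 0.5)

    # Mean and standard deviation stay close to those of the input,
    # which plain dropout would not preserve.
    print(x.mean().numpy(), paddle.std(x).numpy())
    print(y.mean().numpy(), paddle.std(y).numpy())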
......@@ -655,21 +655,22 @@ class Dropout(layers.Layer):
- input: N-D tensor.
- output: N-D tensor, the same shape as input.
Examples:
.. code-block:: python
import paddle
import numpy as np
paddle.disable_static()
x = np.array([[1,2,3], [4,5,6]]).astype('float32')
x = paddle.to_tensor(x)
m = paddle.nn.Dropout(p=0.5)
y_train = m(x)
m.eval() # switch the model to test phase
y_test = m(x)
print(x)
print(y_train)
print(y_test)
"""
def __init__(self, p=0.5, axis=None, mode="upscale_in_train", name=None):
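The layer form is typically used inside a model, where the dropout behavior follows the train/eval switch of the enclosing ``Layer``. A hedged usage sketch follows; the ``Sequential`` composition and the layer sizes are illustrative assumptions, not part of the original example.

.. code-block:: python

    import paddle

    # Illustrative model; sizes and composition are assumptions.
    model = paddle.nn.Sequential(
        paddle.nn.Linear(10, 32),
        paddle.nn.ReLU(),
        paddle.nn.Dropout(p=0.5),
        paddle.nn.Linear(32, 2))

    x = paddle.rand([4, 10], dtype='float32')

    model.train()   # dropout active: roughly half of the hidden activations are zeroed
    y_train = model(x)
    model.eval()    # dropout disabled: the forward pass becomes deterministic
    y_test = model(x)
    print(y_train)
    print(y_test)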
......@@ -705,31 +706,29 @@ class Dropout2D(layers.Layer):
Parameters:
p (float, optional): Probability of setting units to zero. Default: 0.5
data_format (str, optional): Specify the data format of the input, and the data format of the output will be consistent with that of the input. An optional string from `NCHW` or `NHWC`. The default is `NCHW`. When it is `NCHW`, the data is stored in the order of: [batch_size, input_channels, input_height, input_width].
name (str, optional): Name for the operation (optional, default is None). For more information, please refer to :ref:`api_guide_Name`.
Shape:
- input: 4-D tensor.
- output: 4-D tensor, the same shape as input.
Examples:
.. code-block:: python
import paddle
import numpy as np
paddle.disable_static()
x = np.random.random(size=(2, 3, 4, 5)).astype('float32')
x = paddle.to_tensor(x)
m = paddle.nn.Dropout2D(p=0.5)
y_train = m(x)
m.eval() # switch the model to test phase
y_test = m(x)
print(x)
print(y_train)
print(y_test)
"""
def __init__(self, p=0.5, data_format='NCHW', name=None):
......@@ -763,31 +762,29 @@ class Dropout3D(layers.Layer):
Parameters:
p (float | int): Probability of setting units to zero. Default: 0.5
data_format (str, optional): Specify the data format of the input, and the data format of the output will be consistent with that of the input. An optional string from `NCDHW` or `NDHWC`. The default is `NCDHW`. When it is `NCDHW`, the data is stored in the order of: [batch_size, input_channels, input_depth, input_height, input_width].
name (str, optional): Name for the operation (optional, default is None). For more information, please refer to :ref:`api_guide_Name`.
Shape:
- input: 5-D tensor.
- output: 5-D tensor, the same shape as input.
Examples:
.. code-block:: python
import paddle
import numpy as np
paddle.disable_static()
x = np.random.random(size=(2, 3, 4, 5, 6)).astype('float32')
x = paddle.to_tensor(x)
m = paddle.nn.Dropout3D(p=0.5)
y_train = m(x)
m.eval() # switch the model to test phase
y_test = m(x)
print(x)
print(y_train)
print(y_test)
"""
def __init__(self, p=0.5, data_format='NCDHW', name=None):
......@@ -829,20 +826,20 @@ class AlphaDropout(layers.Layer):
Examples:
.. code-block:: python
import paddle
import numpy as np
paddle.disable_static()
x = np.array([[-1, 1], [-1, 1]]).astype('float32')
x = paddle.to_tensor(x)
m = paddle.nn.AlphaDropout(p=0.5)
y_train = m(x)
m.eval() # switch the model to test phase
y_test = m(x)
print(x)
print(y_train)
# [[-0.10721093, 1.6655989 ], [-0.7791938, -0.7791938]] (randomly)
print(y_test)
"""
def __init__(self, p=0.5, name=None):
......