Unverified · Commit 04abcab8 authored by mrcangye, committed by GitHub

fix some doc bug test=document_fix (#45488)

* fix some doc bug test=document_fix

* fix some docs issues, test=document_fix

* beta -> \beta in softplus

* threshold -> \varepsilon in softplus

* parameter name

* delta -> \delta in smooth_l1_loss

* fix some docs test=document_fix

* fix docs test=document_fix

* fix docs && add blank lines test=document_fix

* Update python/paddle/nn/functional/activation.py, test=document_fix

* Update python/paddle/nn/layer/activation.py, test=document_fix
Co-authored-by: SigureMo <sigure.qaq@gmail.com>
Parent c5a1a4b0
@@ -170,9 +170,9 @@ class ActivationOpGrad : public framework::OperatorWithKernel {
 };
 UNUSED constexpr char SigmoidDoc[] = R"DOC(
-Sigmoid Activation Operator
+Sigmoid Activation
-$$out = \\frac{1}{1 + e^{-x}}$$
+$$out = \frac{1}{1 + e^{-x}}$$
 )DOC";
...
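The sigmoid operator documented in this hunk corresponds to the Python-level ``paddle.nn.functional.sigmoid``; a brief usage sketch, with approximate (not captured) output values:

.. code-block:: python

    import paddle
    import paddle.nn.functional as F

    x = paddle.to_tensor([1.0, 2.0, 3.0, 4.0])
    out = F.sigmoid(x)  # approximately [0.7311, 0.8808, 0.9526, 0.9820]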
@@ -949,12 +949,14 @@ def silu(x, name=None):
 silu(x) = \frac{x}{1 + e^{-x}}
+Where :math:`x` is the input Tensor.
 Parameters:
 x (Tensor): The input Tensor with data type float32, float64.
 name (str, optional): For details, please refer to :ref:`api_guide_Name`. Generally, no setting is required. Default: None.
 Returns:
-A Tensor with the same data type and shape as ``x`` .
+A Tensor with the same data type and shape as :attr:`x`.
 Examples:
 .. code-block:: python
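The example block of this docstring is collapsed in the hunk; for reference, a minimal usage sketch of the ``silu`` API documented above, with approximate output values that are derived from the formula rather than captured from a run:

.. code-block:: python

    import paddle
    import paddle.nn.functional as F

    # silu(x) = x * sigmoid(x)
    x = paddle.to_tensor([1.0, 2.0, 3.0, 4.0])
    out = F.silu(x)  # approximately [0.7311, 1.7616, 2.8577, 3.9281]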
@@ -1072,15 +1074,13 @@ def softmax(x, axis=-1, dtype=None, name=None):
 import paddle
 import paddle.nn.functional as F
-import numpy as np
-x = np.array([[[2.0, 3.0, 4.0, 5.0],
+x = paddle.to_tensor([[[2.0, 3.0, 4.0, 5.0],
 [3.0, 4.0, 5.0, 6.0],
 [7.0, 8.0, 8.0, 9.0]],
 [[1.0, 2.0, 3.0, 4.0],
 [5.0, 6.0, 7.0, 8.0],
-[6.0, 7.0, 8.0, 9.0]]], 'float32')
-x = paddle.to_tensor(x)
+[6.0, 7.0, 8.0, 9.0]]],dtype='float32')
 out1 = F.softmax(x)
 out2 = F.softmax(x, dtype='float64')
 # out1's data type is float32; out2's data type is float64
@@ -1167,14 +1167,15 @@ def softplus(x, beta=1, threshold=20, name=None):
 softplus activation
 .. math::
-softplus(x) = \frac{1}{beta} * \log(1 + e^{beta * x}) \\
-\text{For numerical stability, the implementation reverts to the linear function when: beta * x > threshold.}
+softplus(x)=\begin{cases}
+\frac{1}{\beta} * \log(1 + e^{\beta * x}),&x\leqslant\frac{\varepsilon}{\beta};\\
+x,&x>\frac{\varepsilon}{\beta}.
+\end{cases}
 Parameters:
 x (Tensor): The input Tensor with data type float32, float64.
-beta (float, optional): The value of beta for softplus. Default is 1
+beta (float, optional): The value of :math:`\beta` for softplus. Default is 1
-threshold (float, optional): The value of threshold for softplus. Default is 20
+threshold (float, optional): The value of :math:`\varepsilon` for softplus. Default is 20
 name (str, optional): For details, please refer to :ref:`api_guide_Name`. Generally, no setting is required. Default: None.
 Returns:
@@ -1185,9 +1186,8 @@ def softplus(x, beta=1, threshold=20, name=None):
 import paddle
 import paddle.nn.functional as F
-import numpy as np
-x = paddle.to_tensor(np.array([-0.4, -0.2, 0.1, 0.3]))
+x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3], dtype='float32')
 out = F.softplus(x) # [0.513015, 0.598139, 0.744397, 0.854355]
 """
...
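As a sanity check on the piecewise definition above (default :math:`\beta = 1`, :math:`\varepsilon = 20`), a short sketch; the expected values are worked out from the formula rather than copied from a run, and the input of 25.0 is chosen only to exercise the linear branch.

.. code-block:: python

    import paddle
    import paddle.nn.functional as F

    x = paddle.to_tensor([-0.4, 25.0])
    out = F.softplus(x, beta=1, threshold=20)
    # x = -0.4: log(1 + exp(-0.4)) ~= 0.513  (softplus branch, beta * x <= threshold)
    # x = 25.0: linear branch, since beta * x > threshold, so out ~= 25.0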
@@ -993,17 +993,17 @@ def smooth_l1_loss(input, label, reduction='mean', delta=1.0, name=None):
 .. math::
 loss(x,y) = \frac{1}{n}\sum_{i}z_i
-where z_i is given by:
+where :math:`z_i` is given by:
 .. math::
 \mathop{z_i} = \left\{\begin{array}{rcl}
-0.5(x_i - y_i)^2 & & {if |x_i - y_i| < delta} \\
+0.5(x_i - y_i)^2 & & {if |x_i - y_i| < \delta} \\
-delta * |x_i - y_i| - 0.5 * delta^2 & & {otherwise}
+\delta * |x_i - y_i| - 0.5 * \delta^2 & & {otherwise}
 \end{array} \right.
 Parameters:
 input (Tensor): Input tensor, the data type is float32 or float64. Shape is
@@ -1017,12 +1017,11 @@ def smooth_l1_loss(input, label, reduction='mean', delta=1.0, name=None):
 If :attr:`reduction` is ``'sum'``, the reduced sum loss is returned.
 If :attr:`reduction` is ``'none'``, the unreduced loss is returned.
 Default is ``'mean'``.
-delta (float, optional): Specifies the hyperparameter delta to be used.
+delta (float, optional): Specifies the hyperparameter :math:`\delta` to be used.
 The value determines how large the errors need to be to use L1. Errors
 smaller than delta are minimized with L2. Parameter is ignored for
 negative/zero values. Default = 1.0
-name (str, optional): Name for the operation (optional, default is
-None). For more information, please refer to :ref:`api_guide_Name`.
+name (str, optional): For details, please refer to :ref:`api_guide_Name`. Generally, no setting is required. Default: None.
 Returns:
 Tensor, The tensor variable storing the smooth_l1_loss of input and label.
@@ -1031,14 +1030,12 @@ def smooth_l1_loss(input, label, reduction='mean', delta=1.0, name=None):
 .. code-block:: python
 import paddle
-import numpy as np
-input_data = np.random.rand(3,3).astype("float32")
-label_data = np.random.rand(3,3).astype("float32")
-input = paddle.to_tensor(input_data)
-label = paddle.to_tensor(label_data)
+input = paddle.rand([3, 3]).astype('float32')
+label = paddle.rand([3, 3]).astype('float32')
 output = paddle.nn.functional.smooth_l1_loss(input, label)
 print(output)
-# [0.068004]
 """
 check_variable_and_dtype(input, 'input', ['float32', 'float64'],
 'smooth_l1_loss')
...
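A small numeric illustration of the piecewise :math:`z_i` above with the default :math:`\delta = 1.0`; the inputs are chosen to hit both branches, and the expected values in the comments follow the documented formula rather than an actual run.

.. code-block:: python

    import paddle
    import paddle.nn.functional as F

    input = paddle.to_tensor([[0.5, 3.0]])
    label = paddle.to_tensor([[0.0, 0.0]])
    out = F.smooth_l1_loss(input, label, reduction='none')
    # |0.5 - 0.0| < 1.0   -> 0.5 * 0.5**2             = 0.125  (L2 branch)
    # |3.0 - 0.0| >= 1.0  -> 1.0 * 3.0 - 0.5 * 1.0**2 = 2.5    (L1 branch)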
@@ -706,15 +706,15 @@ class LeakyReLU(Layer):
 class Sigmoid(Layer):
-"""
+r"""
 this interface is used to construct a callable object of the ``Sigmoid`` class. This layer calcluate the `sigmoid` of input x.
 .. math::
-Sigmoid(x) = \\frac{1}{1 + e^{-x}}
+sigmoid(x) = \frac{1}{1 + e^{-x}}
 Parameters:
-name (str, optional): Name for the operation (optional, default is None). For more information, please refer to :ref:`api_guide_Name`.
+name (str, optional): For details, please refer to :ref:`api_guide_Name`. Generally, no setting is required. Default: None.
 Shape:
 x: N-D tensor, available dtype is float16, float32, float64.
@@ -726,11 +726,11 @@ class Sigmoid(Layer):
 .. code-block:: python
 import paddle
 m = paddle.nn.Sigmoid()
 x = paddle.to_tensor([1.0, 2.0, 3.0, 4.0])
 out = m(x) # [0.7310586, 0.880797, 0.95257413, 0.98201376]
 """
 def __init__(self, name=None):
@@ -801,15 +801,15 @@ class Softplus(Layer):
 Softplus Activation
 .. math::
-Softplus(x) = \frac{1}{beta} * \log(1 + e^{beta * x}) \\
-\text{For numerical stability, the implementation reverts to the linear function when: beta * x > threshold.}
+softplus(x)=\begin{cases}
+\frac{1}{\beta} * \log(1 + e^{\beta * x}),&x\leqslant\frac{\varepsilon}{\beta};\\
+x,&x>\frac{\varepsilon}{\beta}.
+\end{cases}
 Parameters:
-beta (float, optional): The value of beta for Softplus. Default is 1
+beta (float, optional): The value of :math:`\beta` for Softplus. Default is 1
-threshold (float, optional): The value of threshold for Softplus. Default is 20
+threshold (float, optional): The value of :math:`\varepsilon` for Softplus. Default is 20
-name (str, optional): Name for the operation (optional, default is None).
-For more information, please refer to :ref:`api_guide_Name`.
+name (str, optional): For details, please refer to :ref:`api_guide_Name`. Generally, no setting is required. Default: None.
 Shape:
 - input: Tensor with any shape.
@@ -819,9 +819,8 @@ class Softplus(Layer):
 .. code-block:: python
 import paddle
-import numpy as np
-x = paddle.to_tensor(np.array([-0.4, -0.2, 0.1, 0.3]))
+x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3], dtype='float32')
 m = paddle.nn.Softplus()
 out = m(x) # [0.513015, 0.598139, 0.744397, 0.854355]
 """
@@ -1101,16 +1100,17 @@ class ThresholdedReLU(Layer):
 class Silu(Layer):
-"""
+r"""
-Silu Activation.
+Silu Activation
 .. math::
-Silu(x) = \frac{x}{1 + e^{-x}}
+silu(x) = \frac{x}{1 + \mathrm{e}^{-x}}
+Where :math:`x` is the input Tensor.
 Parameters:
-x (Tensor): The input Tensor with data type float32, or float64.
-name (str, optional): Name for the operation (optional, default is None).
-For more information, please refer to :ref:`api_guide_Name`.
+name (str, optional): For details, please refer to :ref:`api_guide_Name`. Generally, no setting is required. Default: None.
 Shape:
 - input: Tensor with any shape.
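The example block of the ``Silu`` layer is collapsed in this hunk; for completeness, a minimal usage sketch of the layer form, with approximate output values derived from the formula:

.. code-block:: python

    import paddle

    m = paddle.nn.Silu()
    x = paddle.to_tensor([1.0, 2.0, 3.0, 4.0])
    out = m(x)  # approximately [0.7311, 1.7616, 2.8577, 3.9281]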
@@ -1271,15 +1271,13 @@ class Softmax(Layer):
 .. code-block:: python
 import paddle
-import numpy as np
-x = np.array([[[2.0, 3.0, 4.0, 5.0],
+x = paddle.to_tensor([[[2.0, 3.0, 4.0, 5.0],
 [3.0, 4.0, 5.0, 6.0],
 [7.0, 8.0, 8.0, 9.0]],
 [[1.0, 2.0, 3.0, 4.0],
 [5.0, 6.0, 7.0, 8.0],
-[6.0, 7.0, 8.0, 9.0]]], 'float32')
-x = paddle.to_tensor(x)
+[6.0, 7.0, 8.0, 9.0]]], dtype='float32')
 m = paddle.nn.Softmax()
 out = m(x)
 # [[[0.0320586 , 0.08714432, 0.23688282, 0.64391426],
...
@@ -1138,16 +1138,16 @@ class SmoothL1Loss(Layer):
 .. math::
-loss(x,y) = \frac{1}{n}\sum_{i}z_i
+loss(x, y) = \frac{1}{n}\sum_{i}z_i
-where z_i is given by:
+where :math:`z_i` is given by:
 .. math::
 \mathop{z_i} = \left\{\begin{array}{rcl}
-0.5(x_i - y_i)^2 & & {if |x_i - y_i| < delta} \\
+0.5(x_i - y_i)^2 & & {if |x_i - y_i| < \delta} \\
-delta * |x_i - y_i| - 0.5 * delta^2 & & {otherwise}
+\delta * |x_i - y_i| - 0.5 * \delta^2 & & {otherwise}
 \end{array} \right.
 Parameters:
 reduction (str, optional): Indicate how to average the loss by batch_size,
@@ -1156,12 +1156,11 @@ class SmoothL1Loss(Layer):
 If :attr:`reduction` is ``'sum'``, the reduced sum loss is returned.
 If :attr:`reduction` is ``'none'``, the unreduced loss is returned.
 Default is ``'mean'``.
-delta (float, optional): Specifies the hyperparameter delta to be used.
+delta (float, optional): Specifies the hyperparameter :math:`\delta` to be used.
 The value determines how large the errors need to be to use L1. Errors
 smaller than delta are minimized with L2. Parameter is ignored for
-negative/zero values. Default = 1.0
+negative/zero values. Default value is :math:`1.0`.
-name (str, optional): Name for the operation (optional, default is
-None). For more information, please refer to :ref:`api_guide_Name`.
+name (str, optional): For details, please refer to :ref:`api_guide_Name`. Generally, no setting is required. Default: None.
 Call Parameters:
@@ -1179,14 +1178,12 @@ class SmoothL1Loss(Layer):
 .. code-block:: python
 import paddle
-import numpy as np
-input_data = np.random.rand(3,3).astype("float32")
-label_data = np.random.rand(3,3).astype("float32")
-input = paddle.to_tensor(input_data)
-label = paddle.to_tensor(label_data)
+input = paddle.rand([3, 3]).astype("float32")
+label = paddle.rand([3, 3]).astype("float32")
 loss = paddle.nn.SmoothL1Loss()
 output = loss(input, label)
 print(output)
-# [0.049606]
 """
 def __init__(self, reduction='mean', delta=1.0, name=None):
...
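To illustrate the ``reduction`` and ``delta`` parameters described above, a short usage sketch; the parameter values chosen here are illustrative assumptions, and the exact output depends on the random inputs.

.. code-block:: python

    import paddle

    input = paddle.rand([3, 3], dtype='float32')
    label = paddle.rand([3, 3], dtype='float32')
    # Sum the per-element losses instead of averaging, and switch to the
    # L1 branch only for errors larger than 0.5.
    loss = paddle.nn.SmoothL1Loss(reduction='sum', delta=0.5)
    output = loss(input, label)
    print(output)  # a scalar Tensor; value depends on the random inputs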