Unverified · Commit f5bc56c7 authored by Z zhupengyang, committed by GitHub

hardshrink and Hardshrink doc (#2371)

Parent 888fc017
@@ -5,6 +5,7 @@ paddle.nn
.. toctree::
    :maxdepth: 1

    nn/activation.rst
    nn/adaptive_pool2d.rst
    nn/adaptive_pool3d.rst
    nn/add_position_encoding.rst
@@ -60,7 +61,7 @@ paddle.nn
    nn/GradientClipByValue.rst
    nn/grid_sampler.rst
    nn/GroupNorm.rst
    nn/hard_shrink.rst
    nn/hardshrink.rst
    nn/hard_sigmoid.rst
    nn/hard_swish.rst
    nn/hash.rst
......
==========
activation
==========
.. toctree::
    :maxdepth: 1

    activation/Hardshrink.rst
.. THIS FILE IS GENERATED BY `gen_doc.{py|sh}`
!DO NOT EDIT THIS FILE MANUALLY!
.. _api_nn_activation_Hardshrink:
Hardshrink
----------

.. autoclass:: paddle.nn.activation.Hardshrink
    :members:
    :inherited-members:
    :noindex:
.. _api_nn_hard_shrink:
hard_shrink
-------------------------------
:doc_source: paddle.fluid.layers.hard_shrink
.. THIS FILE IS GENERATED BY `gen_doc.{py|sh}`
!DO NOT EDIT THIS FILE MANUALLY!
.. _api_nn_hardshrink:
hardshrink
----------
.. autofunction:: paddle.nn.functional.hardshrink
    :noindex:
@@ -73,7 +73,7 @@ paddle.nn
    nn_cn/GradientClipByValue_cn.rst
    nn_cn/grid_sampler_cn.rst
    nn_cn/GroupNorm_cn.rst
    nn_cn/hard_shrink_cn.rst
    nn_cn/hardshrink_cn.rst
    nn_cn/hard_sigmoid_cn.rst
    nn_cn/hard_swish_cn.rst
    nn_cn/hash_cn.rst
......
@@ -8,5 +8,6 @@ activation
.. toctree::
    :maxdepth: 1

    activation_cn/Hardshrink_cn.rst
    activation_cn/LeakyReLU_cn.rst
    activation_cn/Sigmoid_cn.rst
.. _cn_api_nn_Hardshrink:
Hardshrink
-------------------------------
.. py:class:: paddle.nn.Hardshrink(threshold=0.5, name=None)
Hardshrink activation layer.

.. math::

    Hardshrink(x) =
    \left\{
    \begin{aligned}
    &x, & & if \ x > threshold \\
    &x, & & if \ x < -threshold \\
    &0, & & otherwise
    \end{aligned}
    \right.

where :math:`x` is the input Tensor.
Parameters
::::::::::
    - threshold (float, optional) - The threshold value in the Hardshrink formula. Default: 0.5.
    - name (str, optional) - Name of the operation (optional; default: None). For more information, see :ref:`api_guide_Name`.

Shape:
    - input: Tensor of arbitrary shape.
    - output: Tensor with the same shape as input.
Code Example
::::::::::::

.. code-block:: python

    import paddle
    import numpy as np

    paddle.disable_static()

    x = paddle.to_variable(np.array([-1, 0.3, 2.5]))
    m = paddle.nn.Hardshrink()
    out = m(x) # [-1., 0., 2.5]
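As a sanity check, the piecewise formula above can be reproduced in plain NumPy. The ``np_hardshrink`` helper below is a hypothetical reference, not part of the paddle API:

.. code-block:: python

    import numpy as np

    def np_hardshrink(x, threshold=0.5):
        # Keep x where |x| > threshold (covers both non-zero branches of the
        # formula, since the comparisons are strict); zero everywhere else.
        return np.where(np.abs(x) > threshold, x, 0.0)

    print(np_hardshrink(np.array([-1.0, 0.3, 2.5])))  # [-1.   0.   2.5]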
.. _cn_api_nn_cn_hard_shrink:
hard_shrink
-------------------------------
:doc_source: paddle.fluid.layers.hard_shrink
.. _cn_api_nn_cn_hardshrink:
hardshrink
-------------------------------
.. py:function:: paddle.nn.functional.hardshrink(x, threshold=0.5, name=None)

hardshrink activation layer. It is computed as follows:

.. math::

    hardshrink(x) =
    \left\{
    \begin{aligned}
    &x, & & if \ x > threshold \\
    &x, & & if \ x < -threshold \\
    &0, & & otherwise
    \end{aligned}
    \right.

where :math:`x` is the input Tensor.
Parameters
::::::::::
    - x (Tensor) - The input ``Tensor``, with data type float32 or float64.
    - threshold (float, optional) - The threshold value in the hardshrink formula. Default: 0.5.
    - name (str, optional) - Name of the operation (optional; default: None). For more information, see :ref:`api_guide_Name`.

Returns
::::::::::
    A ``Tensor`` with the same data type and shape as ``x``.
Code Example
::::::::::::

.. code-block:: python

    import paddle
    import paddle.nn.functional as F
    import numpy as np

    paddle.disable_static()

    x = paddle.to_variable(np.array([-1, 0.3, 2.5]))
    out = F.hardshrink(x) # [-1., 0., 2.5]
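The ``threshold`` argument widens or narrows the band that gets zeroed out. A minimal sketch, assuming the same imperative-mode setup as the example above:

.. code-block:: python

    import paddle
    import paddle.nn.functional as F
    import numpy as np

    paddle.disable_static()

    x = paddle.to_variable(np.array([-1, 0.3, 2.5]))
    # With threshold=1.0, every value in [-1, 1] is zeroed (the comparisons
    # are strict), so both -1 and 0.3 map to 0.
    out = F.hardshrink(x, threshold=1.0) # [0., 0., 2.5]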
@@ -2,7 +2,7 @@
softmax
-------------------------------

.. py:class:: paddle.nn.functional.softmax(x, axis=-1, name=None)
.. py:function:: paddle.nn.functional.softmax(x, axis=-1, name=None)

This OP implements the softmax layer. The computation proceeds as follows:
@@ -27,9 +27,9 @@ softmax
- Example 1 (the input has three dimensions; axis = -1 means softmax is applied along the last dimension, i.e. the third)
.. code-block:: python
.. code-block:: text
Input
# input
x.shape = [2, 3, 4]
@@ -42,7 +42,7 @@ softmax
axis = -1
Output
# output
out.shape = [2, 3, 4]
@@ -55,9 +55,9 @@ softmax
- Example 2 (the input has three dimensions; axis = 1 means softmax is applied along the second dimension)
.. code-block:: python
.. code-block:: text
Input
# input
x.shape = [2, 3, 4]
@@ -70,7 +70,7 @@ softmax
axis = 1
Output
# output
out.shape = [2, 3, 4]
@@ -101,7 +101,7 @@ softmax
import paddle.nn.functional as F
import numpy as np
paddle.enable_imperative()
paddle.disable_static()
x = np.array([[[2.0, 3.0, 4.0, 5.0],
[3.0, 4.0, 5.0, 6.0],
@@ -109,7 +109,7 @@ softmax
[[1.0, 2.0, 3.0, 4.0],
[5.0, 6.0, 7.0, 8.0],
[6.0, 7.0, 8.0, 9.0]]], 'float32')
x = paddle.imperative.to_variable(x)
x = paddle.to_variable(x)
out = F.softmax(x)
# [[[0.0320586 , 0.08714432, 0.23688282, 0.64391426],
# [0.0320586 , 0.08714432, 0.23688282, 0.64391426],
......
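As a cross-check of the axis semantics illustrated above, here is a hypothetical NumPy reference for softmax along a chosen axis (illustrative only, not the paddle implementation):

.. code-block:: python

    import numpy as np

    def np_softmax(x, axis=-1):
        # Subtract the max along the axis for numerical stability,
        # then normalize the exponentials so they sum to 1 along that axis.
        e = np.exp(x - np.max(x, axis=axis, keepdims=True))
        return e / np.sum(e, axis=axis, keepdims=True)

    x = np.array([2.0, 3.0, 4.0, 5.0], 'float32')
    print(np_softmax(x))
    # ~ [0.0320586, 0.08714432, 0.23688282, 0.64391426]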