Unverified commit 18056f7d, authored by liuwei1031, committed by GitHub

cherry-pick (#23925): tweak doc of dot and logsumexp, test=release/2.0 (#23999)

Parent: 17ff46b6
@@ -419,11 +419,13 @@ def dot(x, y, name=None):
    Only support 1-d Tensor(vector).

    Parameters:
        x(Variable): 1-D ``Tensor`` or ``LoDTensor``. Its datatype should be ``float32``, ``float64``, ``int32``, ``int64``
        y(Variable): 1-D ``Tensor`` or ``LoDTensor``. Its datatype should be ``float32``, ``float64``, ``int32``, ``int64``
        name(str, optional): Name of the output. Default is None. It's used to print debug info for developers. Details: :ref:`api_guide_Name`

    Returns:
        Variable: The calculated result Tensor/LoDTensor.
    Examples:

    .. code-block:: python
......
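The example body for ``dot`` is collapsed in this view. For reference, here is a minimal dygraph sketch of calling ``paddle.dot`` on two 1-D vectors, assuming the ``fluid`` dygraph API of this release (illustrative only, not taken from the diff):

    .. code-block:: python

        import paddle
        import paddle.fluid as fluid
        import numpy as np

        with fluid.dygraph.guard():
            # dot only supports 1-D Tensors (vectors) here
            x = fluid.dygraph.to_variable(np.random.uniform(0.1, 1, [10]).astype(np.float32))
            y = fluid.dygraph.to_variable(np.random.uniform(1, 3, [10]).astype(np.float32))
            print(paddle.dot(x, y).numpy())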
@@ -1012,13 +1012,13 @@ def addmm(input, x, y, alpha=1.0, beta=1.0, name=None):
def logsumexp(x, dim=None, keepdim=False, out=None, name=None):
    """
    This operator calculates the log of the sum of exponentials of the input Tensor.

    .. math::
        logsumexp(x) = \log \sum \exp(x)

    Parameters:
        x (Variable): Input LoDTensor or Tensor. Must be one of the following types: float32, float64.
        dim (list|int, optional): The dimensions along which the sum is performed. If :attr:`None`,
            sum all elements of :attr:`input` and return a Tensor variable with a single element,
@@ -1027,13 +1027,16 @@ Parameters:
        keepdim (bool, optional): Whether to reserve the reduced dimension in the output Tensor.
            The result tensor will have one fewer dimension than the :attr:`input` unless :attr:`keepdim`
            is true, default value is False.
        out (Variable, optional): Enables the user to explicitly specify an output variable to save the result.
        name (str, optional): The default value is None. Normally there is no need for the user to
            set this property. For more information, please refer to :ref:`api_guide_Name`

    Returns:
        Variable: The calculated result Tensor/LoDTensor.
    Examples:

    .. code-block:: python

        import paddle
        import paddle.fluid as fluid
@@ -1044,6 +1047,17 @@ Examples:
        x = fluid.dygraph.to_variable(np_x)
        print(paddle.logsumexp(x).numpy())
    .. code-block:: python

        import paddle
        import paddle.fluid as fluid
        import numpy as np

        with fluid.dygraph.guard():
            np_x = np.random.uniform(0.1, 1, [2, 3, 4]).astype(np.float32)
            x = fluid.dygraph.to_variable(np_x)
            print(paddle.logsumexp(x, dim=1).numpy())
            print(paddle.logsumexp(x, dim=[0, 2]).numpy())
""" """
op_type = 'logsumexp' op_type = 'logsumexp'
......
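As background on what this operator computes: logsumexp is conventionally evaluated with the max-subtraction trick so that ``exp`` cannot overflow for large inputs. A minimal NumPy reference sketch (illustrative only; ``logsumexp_ref`` is a hypothetical helper, not part of this commit):

    .. code-block:: python

        import numpy as np

        def logsumexp_ref(x, axis=None, keepdims=False):
            # Subtract the per-axis max before exponentiating so exp() cannot
            # overflow; adding it back afterwards leaves the result unchanged.
            m = np.max(x, axis=axis, keepdims=True)
            out = np.log(np.sum(np.exp(x - m), axis=axis, keepdims=True)) + m
            return out if keepdims else np.squeeze(out, axis=axis)

        x = np.random.uniform(0.1, 1, [2, 3, 4]).astype(np.float32)
        print(logsumexp_ref(x, axis=1))  # should match paddle.logsumexp(x, dim=1)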