Unverified commit 8537edaa authored by ReganYue, committed by GitHub

【Fix en docs】 Fix document formatting*test=document_fix (#43996)

* Update creation.py

Fix the Returns formatting issue

* 【Fix en docs】 Fix document formatting

Fix Returns and Return Type formatting issues.

* Fix formatting and resolve inconsistencies between the Chinese and English docs

To resolve the Chinese/English inconsistency, uniformly change the first punctuation mark in the Returns content to a comma, and remove Return Type.

* Fix many docs related to Return Type and Return

* Convert Variable and LoDTensor to Tensor

* Update python/paddle/nn/functional/common.py
Co-authored-by: Nyakku Shigure <sigure.qaq@gmail.com>

* Update detection.py

* Update io.py

* Update io.py

* Update loss.py

* Update loss.py

* Update io.py

* Update io.py

* Update nn.py

* Update nn.py

* Update nn.py

* Update nn.py

* Update nn.py

* Update nn.py

* Update nn.py

* Update nn.py

* Update nn.py

* Update nn.py

* Update sequence_lod.py

* Update sequence_lod.py

* Update sequence_lod.py

* Update sequence_lod.py

* Update sequence_lod.py

* Update sequence_lod.py

* Update sequence_lod.py

* Update sequence_lod.py

* Update sequence_lod.py

* Update sequence_lod.py

* Update sequence_lod.py

* Update metrics.py

* Update metrics.py

* Update metrics.py

* Update metrics.py

* Update metrics.py

* Update nets.py

* Update reader.py

* Update nn.py

* Update io.py

* Update nn.py

* Update io.py

* remove Return Type to Returns

* Update sequence_lod.py

* Update sequence_lod.py

* Update common.py

* Update common.py

* Update extension.py

* Update creation.py

* update*test=document_fix

* Update io.py

* Update manipulation.py

* Update sequence_lod.py

* Update common.py

* Update common.py

* Update common.py

* Update common.py

* update*test=document_fix

* update*test=document_fix

* Update loss.py

* Update loss.py

* update*test=document_fix

* paddle.crop

* for ci;test=document_fix

* update io;test=document_fix

* Update io.py

* update sequence_expand docs;test=document_fix

* update note typos;test=document_fix
Co-authored-by: Nyakku Shigure <sigure.qaq@gmail.com>
Co-authored-by: Ligoml <39876205+Ligoml@users.noreply.github.com>
Parent 8573ca54
......@@ -184,8 +184,6 @@ def is_belong_to_optimizer(var):
@dygraph_not_support
def get_program_parameter(program):
"""
:api_attr: Static Graph
Get all the parameters from Program.
Args:
......@@ -212,8 +210,6 @@ def get_program_parameter(program):
@dygraph_not_support
def get_program_persistable_vars(program):
"""
:api_attr: Static Graph
Get all the persistable vars from Program.
Args:
......@@ -292,9 +288,7 @@ def save_vars(executor,
predicate=None,
filename=None):
"""
:api_attr: Static Graph
This API saves specific variables in the `Program` to files.
Save specific variables in the `Program` to files.
There are two ways to specify the variables to be saved: set variables in
a list and assign it to the `vars`, or use the `predicate` function to select
......@@ -436,9 +430,7 @@ def save_vars(executor,
@dygraph_not_support
def save_params(executor, dirname, main_program=None, filename=None):
"""
:api_attr: Static Graph
This operator saves all parameters from the :code:`main_program` to
Save all parameters from the :code:`main_program` to
the folder :code:`dirname` or file :code:`filename`. You can refer to
:ref:`api_guide_model_save_reader_en` for more details.
......@@ -670,9 +662,7 @@ def _save_distributed_persistables(executor, dirname, main_program):
@dygraph_not_support
def save_persistables(executor, dirname, main_program=None, filename=None):
"""
:api_attr: Static Graph
This operator saves all persistable variables from :code:`main_program` to
Save all persistable variables from :code:`main_program` to
the folder :code:`dirname` or file :code:`filename`. You can refer to
:ref:`api_guide_model_save_reader_en` for more details. And then
saves these persistable variables to the folder :code:`dirname` or file
......@@ -780,9 +770,6 @@ def load_vars(executor,
Returns:
None
Raises:
TypeError: If `main_program` is not an instance of Program nor None.
Examples:
.. code-block:: python
......@@ -1247,8 +1234,6 @@ def save_inference_model(dirname,
program_only=False,
clip_extra=False):
"""
:api_attr: Static Graph
Prune the given `main_program` to build a new program especially for inference,
and then save it and all related parameters to given `dirname` .
If you just want to save parameters of your trained model, please use the
......@@ -1279,7 +1264,7 @@ def save_inference_model(dirname,
params_filename(str, optional): The name of file to save all related parameters.
If it is set None, parameters will be saved
in separate files.
export_for_deployment(bool): If True, programs are modified to only support
export_for_deployment(bool, optional): If True, programs are modified to only support
direct inference deployment. Otherwise,
more information will be stored for flexible
optimization and re-training. Currently, only
......@@ -1290,14 +1275,7 @@ def save_inference_model(dirname,
Default: False.
Returns:
The fetch variables' name list
Return Type:
list
Raises:
ValueError: If `feed_var_names` is not a list of basestring, an exception is thrown.
ValueError: If `target_vars` is not a list of Variable, an exception is thrown.
list, The fetch variables' name list.
Examples:
.. code-block:: python
......@@ -1462,8 +1440,6 @@ def load_inference_model(dirname,
params_filename=None,
pserver_endpoints=None):
"""
:api_attr: Static Graph
Load the inference model from a given directory. By this API, you can get the model
structure(Inference Program) and model parameters. If you just want to load
parameters of the pre-trained model, please use the :ref:`api_fluid_io_load_params` API.
......@@ -1501,8 +1477,6 @@ def load_inference_model(dirname,
``Variable`` (refer to :ref:`api_guide_Program_en`). It contains variables from which
we can get inference results.
Raises:
ValueError: If `dirname` is not a existing directory.
Examples:
.. code-block:: python
......@@ -1659,12 +1633,6 @@ def get_parameter_value_by_name(name, executor, program=None):
Returns:
numpy.array: The parameter's values.
Raises:
TypeError: If given `name` is not an instance of basestring.
TypeError: If the parameter with the given name doesn't exist.
AssertionError: If there is a variable named `name` in the
given program but it is not a Parameter.
Examples:
.. code-block:: python
......@@ -2314,8 +2282,6 @@ def load_program_state(model_path, var_list=None):
@static_only
def set_program_state(program, state_dict):
"""
:api_attr: Static Graph
Set program parameters from ``state_dict``.
An exception will be thrown if the shape or dtype of the parameters does not match.
......
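To ground the ``save_inference_model``/``load_inference_model`` behavior documented above, here is a minimal static-graph sketch. It assumes the legacy ``fluid`` API (with ``paddle.enable_static()`` on 2.x), a toy one-layer network, and a hypothetical output directory ``./infer_model``:

.. code-block:: python

    import paddle
    import paddle.fluid as fluid

    paddle.enable_static()

    # toy network: a single fully-connected layer
    x = fluid.data(name='x', shape=[None, 2], dtype='float32')
    y = fluid.layers.fc(input=x, size=1)

    exe = fluid.Executor(fluid.CPUPlace())
    exe.run(fluid.default_startup_program())

    # prunes the default main program for inference and saves it;
    # per the Returns section above, the fetch variables' name list comes back
    fetch_names = fluid.io.save_inference_model(
        dirname="./infer_model",     # hypothetical directory
        feeded_var_names=['x'],
        target_vars=[y],
        executor=exe)
    print(fetch_names)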
......@@ -311,7 +311,7 @@ def cross_entropy2(input, label, ignore_index=kIgnoreIndex):
def square_error_cost(input, label):
r"""
This op accepts input predictions and target label and returns the
Accept input predictions and target label and return the
squared error cost.
For input predictions and target label, the equation is:
......@@ -325,10 +325,8 @@ def square_error_cost(input, label):
label (Tensor): Label tensor, the data type should be float32.
Returns:
The tensor storing the element-wise squared error \
difference between input and label.
Return type: Tensor.
Tensor, The tensor storing the element-wise squared
error difference between input and label.
Examples:
......
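To make the squared-error equation above concrete, a minimal sketch, assuming the paddle 2.x entry point ``paddle.nn.functional.square_error_cost`` (the counterpart of the fluid op documented here):

.. code-block:: python

    import paddle

    input = paddle.to_tensor([1.0, 2.0, 3.0])
    label = paddle.to_tensor([1.5, 2.0, 2.0])
    # element-wise (input - label)^2 -> [0.25, 0.0, 1.0]
    out = paddle.nn.functional.square_error_cost(input, label)
    print(out.numpy())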
......@@ -9359,14 +9359,14 @@ def crop(x, shape=None, offsets=None, name=None):
Parameters:
x (Variable): Tensor, data type can be float32 or float64.
shape (Variable|list/tuple of integers): The output shape is specified
shape (Variable|list/tuple of integers, optional): The output shape is specified
by `shape`, which can be a Tensor or a list/tuple of integers.
If it is a Tensor, its rank must be the same as `x`, only
its shape will be used, and the value of it will be ignored. This way
is suitable for the case that the output shape may be changed each
iteration. If it is a list/tuple of integers, its length must be the same
as the rank of `x`.
offsets (Variable|list/tuple of integers|None): Specifies the cropping
offsets (Variable|list/tuple of integers|None, optional): Specifies the cropping
offsets at each dimension. It can be a Tensor or a list/tuple
of integers. If it is a Tensor, its rank must be the same as `x`.
This way is suitable for the case that the offsets may be changed
......@@ -9377,13 +9377,7 @@ def crop(x, shape=None, offsets=None, name=None):
None by default.
Returns:
The cropped Tensor, which has the same rank and data type with `x`
Return Type:
Variable
Raises:
ValueError: If shape is not a list, tuple or Variable.
Tensor, The cropped Tensor, which has the same rank and data type as `x`.
Examples:
......@@ -9721,7 +9715,8 @@ def pad2d(input,
name (str, optional) : The default value is None. Normally there is no need for
user to set this property. For more information, please refer to :ref:`api_guide_Name` .
Returns: Tensor, a 4-D Tensor padded according to paddings and mode and data type is same as input.
Returns:
Tensor, a 4-D Tensor padded according to paddings and mode and data type is same as input.
Examples:
.. code-block:: text
......@@ -13282,14 +13277,10 @@ def space_to_depth(x, blocksize, name=None):
to :ref:`api_guide_Name`. Usually the name does not need to be set and \
is None by default.
Returns: The output, which should be 4 dims Tensor or LodTensor, with the shape \
Returns:
Tensor, The output, which should be a 4-D Tensor or LoDTensor, with the shape \
[batch, channel * blocksize * blocksize, height/blocksize, width/blocksize]
Return Type: Variable
Raises:
TypeError: blocksize type must be int64.
Examples:
.. code-block:: python
......
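The space_to_depth shape contract documented above can be checked with plain shape arithmetic; the helper below is illustrative only, not a Paddle API:

.. code-block:: python

    # mirrors the documented mapping:
    # [batch, channel, height, width] ->
    # [batch, channel * blocksize**2, height // blocksize, width // blocksize]
    def space_to_depth_shape(shape, blocksize):
        n, c, h, w = shape
        assert h % blocksize == 0 and w % blocksize == 0
        return [n, c * blocksize * blocksize, h // blocksize, w // blocksize]

    print(space_to_depth_shape([2, 3, 8, 8], 2))  # [2, 12, 4, 4]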
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
......@@ -54,9 +54,9 @@ def sequence_conv(input,
act=None,
name=None):
r"""
:api_attr: Static Graph
**Notes: The Op only receives LoDTensor as input. If your input is Tensor, please use conv2d Op.(fluid.layers.** :ref:`api_fluid_layers_conv2d` ).
Note:
Only receives LoDTensor as input. If your input is Tensor, please use the conv2d op ( :ref:`api_fluid_layers_conv2d` ).
This operator receives input sequences with variable length and other convolutional
configuration parameters(num_filters, filter_size) to apply the convolution operation.
......@@ -179,11 +179,9 @@ def sequence_conv(input,
def sequence_softmax(input, use_cudnn=False, name=None):
r"""
:api_attr: Static Graph
**Note**:
**The input type of the OP must be LoDTensor. For Tensor, use:** :ref:`api_fluid_layers_softmax`
Note:
The input type of the OP must be LoDTensor. For Tensor, use :ref:`api_fluid_layers_softmax` .
A LoD-tensor can be regarded as several sequences, and this op apply softmax algo on each sequence.
The shape of input Tensor can be :math:`[N, 1]` or :math:`[N]`, where :math:`N`
......@@ -264,9 +262,9 @@ def sequence_softmax(input, use_cudnn=False, name=None):
def sequence_pool(input, pool_type, is_test=False, pad_value=0.0):
r"""
:api_attr: Static Graph
**Notes: The Op only receives LoDTensor as input. If your input is Tensor, please use pool2d Op.(fluid.layers.** :ref:`api_fluid_layers_pool2d` ).
Note:
Only receives LoDTensor as input. If your input is Tensor, please use the pool2d op ( :ref:`api_fluid_layers_pool2d` ).
This operator only supports LoDTensor as input. It will apply specified pooling
operation on the input LoDTensor. It pools features of all time-steps of each
......@@ -381,9 +379,9 @@ def sequence_pool(input, pool_type, is_test=False, pad_value=0.0):
@templatedoc()
def sequence_concat(input, name=None):
"""
:api_attr: Static Graph
**Notes: The Op only receives LoDTensor as input. If your input is Tensor, please use concat Op.(fluid.layers.** :ref:`api_fluid_layers_concat` ).
Note:
Only receives LoDTensor as input. If your input is Tensor, please use the concat op ( :ref:`api_fluid_layers_concat` ).
This operator only supports LoDTensor as input. It concatenates the multiple LoDTensor from input by the LoD information,
and outputs the concatenated LoDTensor.
......@@ -445,9 +443,8 @@ def sequence_concat(input, name=None):
def sequence_first_step(input):
"""
:api_attr: Static Graph
This operator only supports LoDTensor as input. Given the input LoDTensor, it will
Only supports LoDTensor as input. Given the input LoDTensor, it will
select first time-step feature of each sequence as output.
.. code-block:: text
......@@ -503,9 +500,8 @@ def sequence_first_step(input):
def sequence_last_step(input):
"""
:api_attr: Static Graph
This operator only supports LoDTensor as input. Given the input LoDTensor, it will
Only supports LoDTensor as input. Given the input LoDTensor, it will
select last time-step feature of each sequence as output.
.. code-block:: text
......@@ -562,7 +558,6 @@ def sequence_last_step(input):
def sequence_slice(input, offset, length, name=None):
"""
:api_attr: Static Graph
**Sequence Slice Layer**
......@@ -653,7 +648,6 @@ def sequence_slice(input, offset, length, name=None):
def sequence_expand(x, y, ref_level=-1, name=None):
r"""
:api_attr: Static Graph
Sequence Expand Layer. This layer will expand the input variable ``x`` \
according to specified level ``ref_level`` lod of ``y``. Please note that \
......@@ -662,11 +656,13 @@ def sequence_expand(x, y, ref_level=-1, name=None):
of ``y``. If the lod level of ``x`` is 0, then the first dim of ``x`` should \
be equal to the size of ``ref_level`` of ``y``. The rank of **x** is at least 2. \
When rank of ``x`` is greater than 2, then it would be viewed as a 2-D tensor.
Note:
Please note that the input ``x`` should be LodTensor or Tensor, \
Please note that the input ``x`` should be LodTensor or Tensor, \
and input ``y`` must be LodTensor.
Following examples will explain how sequence_expand works:
**Following examples will explain how sequence_expand works:**
.. code-block:: text
......@@ -722,12 +718,11 @@ def sequence_expand(x, y, ref_level=-1, name=None):
to :ref:`api_guide_Name`. Usually the name does not need to be set and \
is None by default.
Returns: The expanded variable which is a LoDTensor, with dims ``[N, K]``. \
Returns:
Tensor, The expanded variable which is a LoDTensor, with dims ``[N, K]``. \
``N`` depends on the lod info of ``x`` and ``y``. \
The data type is same as input.
Return Type: Variable
Examples:
.. code-block:: python
......@@ -791,7 +786,6 @@ def sequence_expand(x, y, ref_level=-1, name=None):
def sequence_expand_as(x, y, name=None):
r"""
:api_attr: Static Graph
Sequence Expand As Layer. This OP will expand the input variable ``x`` \
according to the zeroth level lod of ``y``. Current implementation requires \
......@@ -800,7 +794,8 @@ def sequence_expand_as(x, y, name=None):
the expanded LodTensor has the same lod info as ``y``. The expanded result \
has nothing to do with ``x``'s lod, so the lod of Input(X) is not considered.
Please note that the input ``x`` should be LodTensor or Tensor, \
Note:
Please note that the input ``x`` should be LodTensor or Tensor, \
and input ``y`` must be LodTensor.
Following examples will explain how sequence_expand_as works:
......@@ -845,12 +840,11 @@ def sequence_expand_as(x, y, name=None):
to :ref:`api_guide_Name`. Usually the name does not need to be set and \
is None by default.
Returns: The expanded variable which is a LoDTensor with the dims ``[N, K]``. \
Returns:
Tensor, The expanded variable which is a LoDTensor with the dims ``[N, K]``. \
``N`` depends on the lod of ``y``, and the lod level must be 1. \
The data type is same as input.
Return Type: Variable
Examples:
.. code-block:: python
......@@ -913,7 +907,6 @@ def sequence_expand_as(x, y, name=None):
def sequence_pad(x, pad_value, maxlen=None, name=None):
r"""
:api_attr: Static Graph
This layer pads the sequences in the same batch to a common length (according
to ``maxlen``). The padding value is defined by ``pad_value``, and will be
......@@ -921,6 +914,7 @@ def sequence_pad(x, pad_value, maxlen=None, name=None):
the LodTensor ``Out`` is the padded sequences, and LodTensor ``Length`` is
the length information of input sequences. For removing padding data (unpadding operation), See :ref:`api_fluid_layers_sequence_unpad`.
Note:
Please note that the input ``x`` should be LodTensor.
.. code-block:: text
......@@ -978,13 +972,12 @@ def sequence_pad(x, pad_value, maxlen=None, name=None):
to :ref:`api_guide_Name`. Usually the name does not need to be set and \
is None by default.
Returns: A Python tuple (Out, Length): the 1st is a 0 level LodTensor \
Returns:
tuple, A Python tuple (Out, Length): the 1st is a 0 level LodTensor \
``Out``, with the shape ``[batch_size, maxlen, K]``; the second is the original \
sequences length info ``Length``, which should be a 0-level 1D LodTensor. \
The size of ``Length`` is equal to batch size, and the data type is int64.
Return Type: tuple
Examples:
.. code-block:: python
......@@ -1031,13 +1024,11 @@ def sequence_pad(x, pad_value, maxlen=None, name=None):
def sequence_unpad(x, length, name=None):
"""
:api_attr: Static Graph
**Note**:
**The input of the OP is Tensor and the output is LoDTensor. For padding operation, See:** :ref:`api_fluid_layers_sequence_pad`
Note:
The input of the OP is Tensor and the output is LoDTensor. For padding operation, see :ref:`api_fluid_layers_sequence_pad` .
The OP removes the padding data from the input based on the length information and returns a LoDTensor.
Remove the padding data from the input based on the length information and return a LoDTensor.
.. code-block:: text
......@@ -1109,11 +1100,11 @@ def sequence_unpad(x, length, name=None):
def sequence_reshape(input, new_dim):
"""
:api_attr: Static Graph
**Notes: The Op only receives LoDTensor as input. If your input is Tensor, please use reshape Op.(fluid.layers.** :ref:`api_fluid_layers_reshape` ).
Note:
Only receives LoDTensor as input. If your input is Tensor, please use the reshape op ( :ref:`api_fluid_layers_reshape` ).
This operator only supports LoDTensor as input. Given :attr:`new_dim` ,
Only supports LoDTensor as input. Given :attr:`new_dim` ,
it will compute new shape according to original length of each sequence,
original dimensions and :attr:`new_dim` . Then it will output a new LoDTensor
containing :attr:`new_dim` . Currently it only supports 1-level LoDTensor.
......@@ -1172,11 +1163,9 @@ def sequence_reshape(input, new_dim):
def sequence_scatter(input, index, updates, name=None):
"""
:api_attr: Static Graph
**Note**:
**The index and updates parameters of the OP must be LoDTensor.**
Note:
The index and updates parameters of the OP must be LoDTensor.
Plus the updates data to the corresponding input according to the index.
......@@ -1264,7 +1253,6 @@ def sequence_scatter(input, index, updates, name=None):
def sequence_enumerate(input, win_size, pad_value=0, name=None):
r"""
:api_attr: Static Graph
Generate a new sequence for the input index sequence with \
shape ``[d_1, win_size]``, which enumerates all the \
......@@ -1300,12 +1288,11 @@ def sequence_enumerate(input, win_size, pad_value=0, name=None):
to :ref:`api_guide_Name`. Usually the name does not need to be set and \
is None by default.
Returns: The enumerate sequence variable which is a LoDTensor with \
Returns:
Tensor, The enumerate sequence variable which is a LoDTensor with \
shape ``[d_1, win_size]`` and 1-level lod info. \
The data type is same as ``input``.
Return Type: Variable
Examples:
.. code-block:: python
......@@ -1371,12 +1358,11 @@ def sequence_mask(x, maxlen=None, dtype='int64', name=None):
to :ref:`api_guide_Name`. Usually the name does not need to be set and \
is None by default.
Returns: The output sequence mask. Tensor with shape [d_1, d_2, ..., d_n, maxlen] \
and data type of :code:`dtype`. The data type should be bool, float32, float64, int8, \
Returns:
Tensor, The output sequence mask. Tensor with shape [d_1, d_2, ..., d_n, maxlen]
and data type of :code:`dtype`. The data type should be bool, float32, float64, int8,
int32 or int64.
Return Type: Tensor
Examples:
.. code-block:: python
......@@ -1398,9 +1384,10 @@ def sequence_mask(x, maxlen=None, dtype='int64', name=None):
@templatedoc()
def sequence_reverse(x, name=None):
"""
**Notes: The Op only receives LoDTensor as input. If your input is Tensor, please use reverse Op.(fluid.layers.** :ref:`api_fluid_layers_reverse` ).
Note:
Only receives LoDTensor as input. If your input is Tensor, please use the reverse op ( :ref:`api_fluid_layers_reverse` ).
This operator only supports LoDTensor as input. It will reverse each sequence for input LoDTensor.
Only supports LoDTensor as input. It will reverse each sequence for input LoDTensor.
Currently it only supports 1-level LoDTensor. This operator is very useful when building a
reverse :ref:`api_fluid_layers_DynamicRNN` network.
......
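Since every sequence op above only accepts LoDTensor under the static graph, here is a minimal ``sequence_pad`` sketch adapted from the documented usage; the shapes are illustrative assumptions:

.. code-block:: python

    import numpy as np
    import paddle
    import paddle.fluid as fluid

    paddle.enable_static()

    # a 1-level LoDTensor holding variable-length sequences of 5-dim features
    x = fluid.data(name='x', shape=[10, 5], dtype='float32', lod_level=1)
    pad_value = fluid.layers.assign(input=np.array([0.0], dtype=np.float32))
    # Out: padded sequences [batch_size, maxlen, 5];
    # Length: original sequence lengths, dtype int64
    out, length = fluid.layers.sequence_pad(x=x, pad_value=pad_value)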
......@@ -172,7 +172,7 @@ class MetricBase(object):
None
Return types:
None
None
"""
raise NotImplementedError(
......
......@@ -44,7 +44,7 @@ __all__ = []
def unfold(x, kernel_sizes, strides=1, paddings=0, dilations=1, name=None):
r"""
This op returns a col buffer of sliding local blocks of input x, also known
Return a col buffer of sliding local blocks of input x, also known
as im2col for batched 2D image tensors. For each block under the convolution filter,
all element will be rearranged as a column. While the convolution filter sliding over
the input feature map, a series of such columns will be formed.
......@@ -91,15 +91,12 @@ def unfold(x, kernel_sizes, strides=1, paddings=0, dilations=1, name=None):
Returns:
The tensor corresponding to the sliding local blocks.
Tensor, The tensor corresponding to the sliding local blocks.
The output shape is [N, Cout, Lout] as described above.
Cout is the total number of values within each block,
and Lout is the total number of such blocks.
The data type of output is the same as the input :math:`x`.
Return Type:
Tensor
Examples:
.. code-block:: python
......@@ -1321,21 +1318,21 @@ def pad(x, pad, mode='constant', value=0, data_format="NCHW", name=None):
pad_right). 2. If the input dimension is 4, then the pad has the form (pad_left, pad_right,
pad_top, pad_bottom). 3. If the input dimension is 5, then the pad has the form
(pad_left, pad_right, pad_top, pad_bottom, pad_front, pad_back).
mode (str): Four modes: 'constant' (default), 'reflect', 'replicate', 'circular'.
mode (str, optional): Four modes: 'constant' (default), 'reflect', 'replicate', 'circular'.
When in 'constant' mode, this op uses a constant value to pad the input tensor.
When in 'reflect' mode, uses reflection of the input boundaries to pad the input tensor.
When in 'replicate' mode, uses input boundaries to pad the input tensor.
When in 'circular' mode, uses circular input to pad the input tensor.
Default is 'constant'
value (float32): The value to fill the padded areas in 'constant' mode . Default is 0.0
data_format (str): An string from: "NCL", "NLC", NHWC", "NCHW", "NCDHW", "NDHWC". Specify the data format of
value (float32, optional): The value to fill the padded areas in 'constant' mode. Default is 0.0.
data_format (str, optional): A string from: "NCL", "NLC", "NHWC", "NCHW", "NCDHW", "NDHWC". Specify the data format of
the input data.
Default is "NCHW"
name (str, optional) : The default value is None. Normally there is no need for
user to set this property. For more information, please refer to :ref:`api_guide_Name`.
Returns: a Tensor padded according to pad and mode and data type is same as input.
Return Type: Tensor
Returns:
Tensor, a Tensor padded according to pad and mode and data type is same as input.
Examples:
.. code-block:: text
......@@ -1549,12 +1546,13 @@ def zeropad2d(x, padding, data_format="NCHW", name=None):
padding(int | Tensor | List[int] | Tuple[int]): The padding size with data type int.
The input dimension should be 4 and pad has the form (pad_left, pad_right,
pad_top, pad_bottom).
data_format(str): An string from: "NHWC", "NCHW". Specify the data format of
data_format(str, optional): A string from: "NHWC", "NCHW". Specify the data format of
the input data. Default: "NCHW".
name(str, optional): The default value is None. Normally there is no need for user
to set this property.
Returns:Tensor,padded with 0 according to pad and data type is same as input.
Returns:
Tensor, padded with 0 according to pad and data type is same as input.
Examples:
.. code-block:: python
......@@ -1587,11 +1585,11 @@ def cosine_similarity(x1, x2, axis=1, eps=1e-8):
Parameters:
x1 (Tensor): First input. float32/double.
x2 (Tensor): Second input. float32/double.
axis (int): Dimension of vectors to compute cosine similarity. Default is 1.
eps(float): Small value to avoid division by zero. Default is 1e-8.
axis (int, optional): Dimension of vectors to compute cosine similarity. Default is 1.
eps(float, optional): Small value to avoid division by zero. Default is 1e-8.
Returns: a Tensor representing cosine similarity between x1 and x2 along axis.
Return Type: Tensor
Returns:
Tensor, a Tensor representing cosine similarity between x1 and x2 along axis.
Examples:
.. code-block:: text
......@@ -1614,16 +1612,14 @@ def cosine_similarity(x1, x2, axis=1, eps=1e-8):
import paddle
import paddle.nn as nn
import numpy as np
np.random.seed(0)
x1 = np.random.rand(2,3)
x2 = np.random.rand(2,3)
x1 = paddle.to_tensor(x1)
x2 = paddle.to_tensor(x2)
paddle.seed(1)
x1 = paddle.randn(shape=[2, 3])
x2 = paddle.randn(shape=[2, 3])
result = paddle.nn.functional.cosine_similarity(x1, x2, axis=0)
print(result)
# [0.99806249 0.9817672 0.94987036]
# [0.97689527, 0.99996042, -0.55138415]
"""
w12 = sum(paddle.multiply(x1, x2), axis=axis)
......
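For the ``unfold`` entry above, the documented [N, Cout, Lout] output shape is easy to verify; a minimal sketch with an assumed 4x4 input and a 2x2 block:

.. code-block:: python

    import paddle

    x = paddle.randn([1, 3, 4, 4])  # N, C, H, W
    cols = paddle.nn.functional.unfold(x, kernel_sizes=[2, 2])
    # Cout = C * 2 * 2 = 12; Lout = 3 * 3 = 9 sliding positions
    print(cols.shape)  # [1, 12, 9]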
......@@ -191,12 +191,11 @@ def sequence_mask(x, maxlen=None, dtype='int64', name=None):
to :ref:`api_guide_Name`. Usually the name does not need to be set and \
is None by default.
Returns: The output sequence mask. Tensor with shape [d_1, d_2, ..., d_n, maxlen] \
Returns:
Tensor, The output sequence mask. Tensor with shape [d_1, d_2, ..., d_n, maxlen] \
and data type of :code:`dtype`. The data type should be bool, float32, float64, int8, \
int32 or int64.
Return Type: Tensor
Examples:
.. code-block:: python
......
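A minimal dygraph sketch of the ``sequence_mask`` semantics documented above (mask[i, j] = (j < x[i])), assuming the 2.x entry point ``paddle.nn.functional.sequence_mask``:

.. code-block:: python

    import paddle

    lengths = paddle.to_tensor([2, 4, 3])
    mask = paddle.nn.functional.sequence_mask(lengths, maxlen=5)
    print(mask.numpy())
    # [[1 1 0 0 0]
    #  [1 1 1 1 0]
    #  [1 1 1 0 0]]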
......@@ -396,10 +396,8 @@ def square_error_cost(input, label):
label (Tensor): Label tensor, the data type should be float32.
Returns:
The tensor storing the element-wise squared error \
difference between input and label.
Return type: Tensor.
Tensor, The tensor storing the element-wise squared error
difference between input and label.
Examples:
......@@ -981,7 +979,7 @@ def hsigmoid_loss(input,
def smooth_l1_loss(input, label, reduction='mean', delta=1.0, name=None):
r"""
This operator calculates smooth_l1_loss. Creates a criterion that uses a squared
Calculate smooth_l1_loss. Creates a criterion that uses a squared
term if the absolute element-wise error falls below 1 and an L1 term otherwise.
In some cases it can prevent exploding gradients and it is more robust and less
sensitivity to outliers. Also known as the Huber loss:
......@@ -1020,9 +1018,7 @@ def smooth_l1_loss(input, label, reduction='mean', delta=1.0, name=None):
None). For more information, please refer to :ref:`api_guide_Name`.
Returns:
The tensor variable storing the smooth_l1_loss of input and label.
Return type: Tensor.
Tensor, The tensor variable storing the smooth_l1_loss of input and label.
Examples:
.. code-block:: python
......@@ -1081,7 +1077,7 @@ def margin_ranking_loss(input,
name=None):
r"""
This op the calcluate the margin rank loss between the input, other and label, use the math function as follows.
Calculate the margin rank loss between the input, other and label, using the math function as follows.
.. math::
margin\_rank\_loss = max(0, -label * (input - other) + margin)
......@@ -1106,7 +1102,8 @@ def margin_ranking_loss(input,
reduction (str, optional): Indicate the reduction to apply to the loss; the candidates are ``'none'``, ``'mean'``, ``'sum'``. If :attr:`reduction` is ``'none'``, the unreduced loss is returned; if :attr:`reduction` is ``'mean'``, the reduced mean loss is returned; if :attr:`reduction` is ``'sum'``, the reduced sum loss is returned. Default is ``'mean'``.
name (str, optional): Name for the operation (optional, default is None). For more information, please refer to :ref:`api_guide_Name`.
Returns: Tensor, if :attr:`reduction` is ``'mean'`` or ``'sum'``, the out shape is :math:`[1]`, otherwise the shape is the same as `input` .The same dtype as input tensor.
Returns:
Tensor, if :attr:`reduction` is ``'mean'`` or ``'sum'``, the output shape is :math:`[1]`; otherwise the shape is the same as `input`. The dtype is the same as the input tensor.
Examples:
......@@ -1540,7 +1537,7 @@ def kl_div(input, label, reduction='mean', name=None):
def mse_loss(input, label, reduction='mean', name=None):
r"""
This op accepts input predications and label and returns the mean square error.
Accept input predictions and label and return the mean square error.
If :attr:`reduction` is set to ``'none'``, loss is calculated as:
......@@ -1570,9 +1567,7 @@ def mse_loss(input, label, reduction='mean', name=None):
Returns:
Tensor: The tensor tensor storing the mean square error difference of input and label.
Return type: Tensor.
Tensor, The tensor storing the mean square error difference of input and label.
Examples:
......
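The ``margin_ranking_loss`` formula above, max(0, -label * (input - other) + margin), can be checked by hand; a minimal sketch with ``reduction='none'`` (``margin`` defaults to 0):

.. code-block:: python

    import paddle

    input = paddle.to_tensor([1.0, 2.0])
    other = paddle.to_tensor([2.0, 1.0])
    label = paddle.to_tensor([1.0, 1.0])  # +1: input should rank higher than other
    # per element: max(0, -1*(1-2)+0) = 1 and max(0, -1*(2-1)+0) = 0
    loss = paddle.nn.functional.margin_ranking_loss(
        input, other, label, reduction='none')
    print(loss.numpy())  # [1., 0.]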
......@@ -1177,16 +1177,16 @@ class SmoothL1Loss(Layer):
None). For more information, please refer to :ref:`api_guide_Name`.
Call Parameters:
input (Tensor): Input tensor, the data type is float32 or float64. Shape is
(N, C), where C is number of classes, and if shape is more than 2D, this
is (N, C, D1, D2,..., Dk), k >= 1.
label (Tensor): Label tensor, the data type is float32 or float64. The shape of label
is the same as the shape of input.
Returns:
The tensor storing the smooth_l1_loss of input and label.
input (Tensor): Input tensor, the data type is float32 or float64. Shape is (N, C),
where C is number of classes, and if shape is more than 2D,
this is (N, C, D1, D2,..., Dk), k >= 1.
label (Tensor): Label tensor, the data type is float32 or float64.
The shape of label is the same as the shape of input.
Return type: Tensor.
Returns:
Tensor, The tensor storing the smooth_l1_loss of input and label.
Examples:
.. code-block:: python
......
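The Call Parameters above follow the usual layer-call pattern; a minimal sketch using the ``paddle.nn.SmoothL1Loss`` layer with random inputs:

.. code-block:: python

    import paddle

    loss_fn = paddle.nn.SmoothL1Loss(reduction='mean')
    input = paddle.rand([3, 3])
    label = paddle.rand([3, 3])
    loss = loss_fn(input, label)  # scalar tensor holding the mean smooth-L1 loss
    print(loss)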
......@@ -469,8 +469,11 @@ def save_inference_model(path_prefix, feed_vars, fetch_vars, executor,
executor(Executor): The executor that saves the inference model. You can refer
to :ref:`api_guide_executor_en` for more details.
kwargs: Supported keys include 'program' and 'clip_extra'. Note that kwargs is mainly used for backward compatibility.
- program(Program): specify a program if you don't want to use default main program.
- clip_extra(bool): set to True if you want to clip extra information for every operator.
- program(Program): specify a program if you don't want to use default main program.
- clip_extra(bool): set to True if you want to clip extra information for every operator.
Returns:
None
......
......@@ -1628,7 +1628,8 @@ def clone(x, name=None):
x (Tensor): The input Tensor.
name(str, optional): For details, please refer to :ref:`api_guide_Name`. Generally, no setting is required. Default: None.
Returns: A Tensor copied from ``input`` .
Returns:
Tensor, A Tensor copied from ``input``.
Examples:
.. code-block:: python
......@@ -1661,7 +1662,7 @@ def _memcpy(input, place=None, output=None):
be created as :attr:`output`. Default: None.
Returns:
Tensor: A tensor with the same shape, data type and value as :attr:`input`.
Tensor, A tensor with the same shape, data type and value as :attr:`input`.
Examples:
.. code-block:: python
......
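A minimal sketch of the ``clone`` semantics documented above; unlike plain assignment, ``paddle.clone`` copies the data while staying in the autograd graph:

.. code-block:: python

    import paddle

    x = paddle.ones([2])
    x.stop_gradient = False
    y = paddle.clone(x)    # data copy that still receives gradients
    z = (y * 3.0).sum()
    z.backward()
    print(x.grad)          # gradients flow back through the clone: [3., 3.]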
......@@ -599,7 +599,7 @@ def crop(x, shape=None, offsets=None, name=None):
Parameters:
x (Tensor): 1-D to 6-D Tensor, the data type is float32, float64, int32 or int64.
shape (list|tuple|Tensor): The output shape is specified
shape (list|tuple|Tensor, optional): The output shape is specified
by `shape`. Its data type is int32. If a list/tuple, its length must be
the same as the dimension size of `x`. If a Tensor, it should be a 1-D Tensor.
When it is a list, each element can be an integer or a Tensor of shape: [1].
......
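A minimal sketch of the ``paddle.crop`` parameters documented above, taking a 2x2 window out of a 3x3 Tensor:

.. code-block:: python

    import paddle

    x = paddle.to_tensor([[1, 2, 3],
                          [4, 5, 6],
                          [7, 8, 9]])
    # keep 2 rows and 2 columns, starting at row 0, column 1
    out = paddle.crop(x, shape=[2, 2], offsets=[0, 1])
    print(out.numpy())  # [[2 3]
                        #  [5 6]]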
......@@ -139,7 +139,8 @@ def setup(**attr):
compiler using dict type with ``{'cxx': [...], 'nvcc': [...]}`` . Default is None.
**attr(dict, optional): Specify other arguments same as ``setuptools.setup`` .
Returns: None
Returns:
None
"""
cmdclass = attr.get('cmdclass', {})
......
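A minimal ``setup.py`` sketch for the ``setup`` helper documented above, assuming the ``paddle.utils.cpp_extension`` API; ``relu_op.cc`` is a hypothetical source file:

.. code-block:: python

    from paddle.utils.cpp_extension import CppExtension, setup

    setup(
        name='custom_relu',  # package name to install, chosen for illustration
        ext_modules=CppExtension(sources=['relu_op.cc']),  # hypothetical source
    )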