Unverified · Commit 1bbc9b64 · Authored by: LoneRanger · Committed by: GitHub

Fix docs(91-100) (#49109)

* Revise the description of the linspace parameter "stop"

* Updated the English docs of conv2d_transpose: 1. broken formula 2. incorrect parameter descriptions 3. formula notes for padding=SAME and VALID; test=docs_preview

* Fixed
1. the dead api_fluid_layers_conv2d link in the English Note,
2. the incorrect padding_start parameter in the English parameter list,
3. the dead bias_attr and param_attr links in the parameters,
4. missing `optional` markers on parameters

* Updated the Return description and the parameters of the English docs for paddle.static.auc

* Update python/paddle/tensor/creation.py
Co-authored-by: Ligoml <39876205+Ligoml@users.noreply.github.com>

* Updated the ignore_value description of shard_index

* Revised the formula description of paddle.static.nn.conv3d_transpose in the English docs

* add py_func COPY-FROM label; test=document_fix

* Update python/paddle/tensor/manipulation.py
Co-authored-by: Ligoml <39876205+Ligoml@users.noreply.github.com>

* formula; test=document_fix

* formula; test=document_fix

* formula; test=document_fix
Co-authored-by: NLigoml <39876205+Ligoml@users.noreply.github.com>
Parent e073313d
@@ -63,7 +63,7 @@ def sequence_conv(
     r"""
     Note:
-        Only receives LoDTensor as input. If your input is Tensor, please use conv2d Op.(fluid.layers.** :ref:`api_fluid_layers_conv2d` ).
+        Only receives LoDTensor as input. If your input is Tensor, please use conv2d Op.(fluid.layers.** :ref:`api_paddle_nn_functional_conv2d` ).

     This operator receives input sequences with variable length and other convolutional
     configuration parameters(num_filters, filter_size) to apply the convolution operation.
@@ -114,29 +114,29 @@ def sequence_conv(
             and K is hidden_size of input. Only lod_level of 1 is supported. The data type should be float32 or
             float64.
         num_filters (int): the number of filters.
-        filter_size (int): the height of filter. Specified filter width is not supported, the width is
+        filter_size (int, optional): the height of filter. Specified filter width is not supported, the width is
             hidden_size by default. Default: 3.
-        filter_stride (int): stride of the filter. Currently only supports :attr:`stride` = 1.
-        padding (bool): the parameter :attr:`padding` take no effect and will be discarded in the
+        filter_stride (int, optional): stride of the filter. Currently only supports :attr:`stride` = 1.
+        padding (bool, optional): the parameter :attr:`padding` takes no effect and will be discarded in the
             future. Currently, it will always pad input to make sure the length of the output is
             the same as input whether :attr:`padding` is set true or false. Because the length of
             input sequence may be shorter than :attr:`filter\_size`, which will cause the convolution
             result to not be computed correctly. These padding data will not be trainable or updated
             while training. Default: True.
-        padding_start (int): It is used to indicate the start index for padding the input
+        padding_start (int, optional): It is used to indicate the start index for padding the input
             sequence, which can be negative. The negative number means to pad
             :attr:`|padding_start|` time-steps of all-zero data at the beginning of each instance.
             The positive number means to skip :attr:`padding_start` time-steps of each instance,
             and it will pad :math:`filter\_size + padding\_start - 1` time-steps of all-zero data
             at the end of the sequence to ensure that the output is the same length as the input.
-            If set None, the same length :math:`\\frac{filter\_size}{2}` of data will be filled
+            If set None, the same length :math:`\frac{filter\_size}{2}` of data will be filled
             on both sides of the sequence. If set 0, the length of :math:`filter\_size - 1` data
             is padded at the end of each input sequence. Default: None.
-        bias_attr (ParamAttr): To specify the bias parameter property. Default: None, which means the
-            default bias parameter property is used. See usage for details in :ref:`api_fluid_ParamAttr` .
-        param_attr (ParamAttr): To specify the weight parameter property. Default: None, which means the
-            default weight parameter property is used. See usage for details in :ref:`api_fluid_ParamAttr` .
-        act (str): Activation to be applied to the output of this layer, such as tanh, softmax,
+        bias_attr (ParamAttr, optional): To specify the bias parameter property. Default: None, which means the
+            default bias parameter property is used. See usage for details in :ref:`api_paddle_ParamAttr` .
+        param_attr (ParamAttr, optional): To specify the weight parameter property. Default: None, which means the
+            default weight parameter property is used. See usage for details in :ref:`api_paddle_ParamAttr` .
+        act (str, optional): Activation to be applied to the output of this layer, such as tanh, softmax,
             sigmoid, relu. For more information, please refer to :ref:`api_guide_activations_en` . Default: None.
         name (str, optional): The default value is None. Normally there is no need for user to set this property.
             For more information, please refer to :ref:`api_guide_Name` .
@@ -1302,9 +1302,9 @@ def sequence_enumerate(input, win_size, pad_value=0, name=None):
     r"""
     Generate a new sequence for the input index sequence with \
     shape ``[d_1, win_size]``, which enumerates all the \
     sub-sequences with length ``win_size`` of the input with \
     shape ``[d_1, 1]``, and padded by ``pad_value`` if necessary in generation.

     Please note that the `input` must be LodTensor.
...
@@ -188,7 +188,6 @@ def instance_norm(
     input, epsilon=1e-05, param_attr=None, bias_attr=None, name=None
 ):
     r"""
-    :api_attr: Static Graph

     **Instance Normalization Layer**
@@ -390,7 +389,6 @@ def data_norm(
     enable_scale_and_shift=False,
 ):
     r"""
-    :api_attr: Static Graph

     **Data Normalization Layer**
@@ -589,7 +587,6 @@ def group_norm(
     name=None,
 ):
     """
-    :api_attr: Static Graph

     **Group Normalization Layer**
@@ -1021,7 +1018,6 @@ def conv3d(
     data_format="NCDHW",
 ):
     r"""
-    :api_attr: Static Graph

     The convolution3D layer calculates the output based on the input, filter
     and strides, paddings, dilations, groups parameters. Input(Input) and
@@ -1125,19 +1121,6 @@ def conv3d(
             convolution result, and if act is not None, the tensor variable storing
             convolution and non-linearity activation result.

-    Raises:
-        ValueError: If the type of `use_cudnn` is not bool.
-        ValueError: If `data_format` is not "NCDHW" or "NDHWC".
-        ValueError: If the channel dimmention of the input is less than or equal to zero.
-        ValueError: If `padding` is a string, but not "SAME" or "VALID".
-        ValueError: If `padding` is a tuple, but the element corresponding to the input's batch size is not 0
-            or the element corresponding to the input's channel is not 0.
-        ShapeError: If the input is not 5-D Tensor.
-        ShapeError: If the input's dimension size and filter's dimension size not equal.
-        ShapeError: If the dimension size of input minus the size of `stride` is not 2.
-        ShapeError: If the number of input channels is not equal to filter's channels * groups.
-        ShapeError: If the number of output channels is not be divided by groups.
-
     Examples:
         .. code-block:: python
@@ -1330,7 +1313,6 @@ def conv2d_transpose(
     data_format='NCHW',
 ):
     r"""
-    :api_attr: Static Graph

     The convolution2D transpose layer calculates the output based on the input,
     filter, and dilations, strides, paddings. Input(Input) and output(Output)
@@ -1375,24 +1357,38 @@ def conv2d_transpose(
         .. math::

-           H^\prime_{out} &= (H_{in} - 1) * strides[0] - pad_height_top - pad_height_bottom + dilations[0] * (H_f - 1) + 1 \\
-           W^\prime_{out} &= (W_{in} - 1) * strides[1] - pad_width_left - pad_width_right + dilations[1] * (W_f - 1) + 1 \\
+           H^\prime_{out} &= (H_{in} - 1) * strides[0] - 2 * paddings[0] + dilations[0] * (H_f - 1) + 1 \\
+           W^\prime_{out} &= (W_{in} - 1) * strides[1] - 2 * paddings[1] + dilations[1] * (W_f - 1) + 1 \\
            H_{out} &\in [ H^\prime_{out}, H^\prime_{out} + strides[0] ] \\
            W_{out} &\in [ W^\prime_{out}, W^\prime_{out} + strides[1] ]

-    Note:
-        The conv2d_transpose can be seen as the backward of the conv2d. For conv2d,
-        when stride > 1, conv2d maps multiple input shape to the same output shape,
-        so for conv2d_transpose, when stride > 1, input shape maps multiple output shape.
-        If output_size is None, :math:`H_{out} = H^\prime_{out}, W_{out} = W^\prime_{out}`;
-        else, the :math:`H_{out}` of the output size must between :math:`H^\prime_{out}`
-        and :math:`H^\prime_{out} + strides[0]`, and the :math:`W_{out}` of the output size must
-        between :math:`W^\prime_{out}` and :math:`W^\prime_{out} + strides[1]`,
-        conv2d_transpose can compute the kernel size automatically.
+    If `padding` = `"SAME"`:
+
+    .. math::
+
+        H^\prime_{out} &= \frac{(H_{in} + stride[0] - 1)}{stride[0]} \\
+        W^\prime_{out} &= \frac{(W_{in} + stride[1] - 1)}{stride[1]}
+
+    If `padding` = `"VALID"`:
+
+    .. math::
+
+        H^\prime_{out} &= (H_{in} - 1) * strides[0] + dilations[0] * (H_f - 1) + 1 \\
+        W^\prime_{out} &= (W_{in} - 1) * strides[1] + dilations[1] * (W_f - 1) + 1
+
+    If output_size is None, :math:`H_{out} = H^\prime_{out}, W_{out} = W^\prime_{out}`;
+    else, the :math:`H_{out}` of the output size must be between :math:`H^\prime_{out}`
+    and :math:`H^\prime_{out} + strides[0]`, and the :math:`W_{out}` of the output size
+    must be between :math:`W^\prime_{out}` and :math:`W^\prime_{out} + strides[1]`.
+
+    Since transposed convolution can be treated as the inverse of convolution, and since, by the
+    input-output formula for convolution, differently sized input feature maps may map to the same
+    sized output feature map, the output size of a transposed convolution is not unique for a fixed
+    input size. If `output_size` is specified, `conv2d_transpose` can compute the kernel size
+    automatically.
     Args:
-        input(Tensor): 4-D Tensor with [N, C, H, W] or [N, H, W, C] format,
-            its data type is float32 or float64.
+        input(Tensor): 4-D Tensor with [N, C, H, W] or [N, H, W, C] format where N is the batch_size,
+            C is the input_channels, H is the input_height and W is the input_width.
+            Its data type is float32 or float64.
         num_filters(int): The number of the filter. It is as same as the output
             image channel.
         output_size(int|tuple, optional): The output image size. If output size is a
@@ -1406,26 +1402,23 @@ def conv2d_transpose(
             Otherwise, filter_size_height = filter_size_width = filter_size. None if
             use output size to calculate filter_size. Default: None. filter_size and
             output_size should not be None at the same time.
-        stride(int|tuple, optional): The stride size. It means the stride in transposed convolution.
-            If stride is a tuple, it must contain two integers, (stride_height, stride_width).
-            Otherwise, stride_height = stride_width = stride. Default: stride = 1.
         padding(str|int|list|tuple, optional): The padding size. It means the number of zero-paddings
             on both sides for each dimension. If `padding` is a string, either 'VALID' or
             'SAME' which is the padding algorithm. If `padding` is a tuple or list,
-            it could be in three forms: `[pad_height, pad_width]` or
-            `[pad_height_top, pad_height_bottom, pad_width_left, pad_width_right]`,
-            and when `data_format` is `"NCHW"`, `padding` can be in the form
+            it could be in three forms:
+            (1) Contains 4 binary groups: when `data_format` is `"NCHW"`, `padding` can be in the form
             `[[0,0], [0,0], [pad_height_top, pad_height_bottom], [pad_width_left, pad_width_right]]`.
             when `data_format` is `"NHWC"`, `padding` can be in the form
             `[[0,0], [pad_height_top, pad_height_bottom], [pad_width_left, pad_width_right], [0,0]]`.
-            Default: padding = 0.
+            (2) Contains 4 integer values: `[pad_height_top, pad_height_bottom, pad_width_left, pad_width_right]`.
+            (3) Contains 2 integer values: `[pad_height, pad_width]`, in this case, `padding_height_top = padding_height_bottom = padding_height`,
+            `padding_width_left = padding_width_right = padding_width`. If an integer, `padding_height = padding_width = padding`. Default: padding = 0.
+        stride(int|tuple, optional): The stride size. It means the stride in transposed convolution.
+            If stride is a tuple, it must contain two integers, (stride_height, stride_width).
+            Otherwise, stride_height = stride_width = stride. Default: stride = 1.
         dilation(int|tuple, optional): The dilation size. It means the spacing between the kernel points.
             If dilation is a tuple, it must contain two integers, (dilation_height, dilation_width).
             Otherwise, dilation_height = dilation_width = dilation. Default: dilation = 1.
-        filter_size(int|tuple, optional): The filter size. If filter_size is a tuple,
-            it must contain two integers, (filter_size_height, filter_size_width).
-            Otherwise, filter_size_height = filter_size_width = filter_size. None if
-            use output size to calculate filter_size. Default: None.
         groups(int, optional): The groups number of the Conv2d transpose layer. Inspired by
             grouped convolution in Alex Krizhevsky's Deep CNN paper, in which
             when group=2, the first half of the filters is only connected to the
@@ -1436,11 +1429,10 @@ def conv2d_transpose(
             of conv2d_transpose. If it is set to None or one attribute of ParamAttr, conv2d_transpose
             will create ParamAttr as param_attr. If the Initializer of the param_attr
             is not set, the parameter is initialized with Xavier. Default: None.
-        bias_attr (ParamAttr|bool, optional): The parameter attribute for the bias of conv2d_transpose.
-            If it is set to False, no bias will be added to the output units.
-            If it is set to None or one attribute of ParamAttr, conv2d_transpose
-            will create ParamAttr as bias_attr. If the Initializer of the bias_attr
-            is not set, the bias is initialized zero. Default: None.
+        bias_attr (ParamAttr|bool, optional): Specifies the object for the bias parameter attribute.
+            The default value is None, which means that the default bias parameter attribute is used.
+            For detailed information, please refer to :ref:`paramattr`.
+            The default bias initialisation for the conv2d_transpose operator is 0.0.
         use_cudnn(bool, optional): Use cudnn kernel or not, it is valid only when the cudnn
             library is installed. Default: True.
         act (str, optional): Activation type, if it is set to None, activation is not appended.
@@ -1692,7 +1684,6 @@ def conv3d_transpose(
     data_format='NCDHW',
 ):
     r"""
-    :api_attr: Static Graph

     The convolution3D transpose layer calculates the output based on the input,
     filter, and dilations, strides, paddings. Input(Input) and output(Output)
@@ -1744,17 +1735,33 @@ def conv3d_transpose(
            H_{out} &\in [ H^\prime_{out}, H^\prime_{out} + strides[1] ] \\
            W_{out} &\in [ W^\prime_{out}, W^\prime_{out} + strides[2] ]

-    Note:
-        The conv3d_transpose can be seen as the backward of the conv3d. For conv3d,
-        when stride > 1, conv3d maps multiple input shape to the same output shape,
-        so for conv3d_transpose, when stride > 1, input shape maps multiple output shape.
-        If output_size is None, :math:`D_{out} = D^\prime_{out}, H_{out} =
-        H^\prime_{out}, W_{out} = W^\prime_{out}`; else, the :math:`D_{out}` of the output
-        size must between :math:`D^\prime_{out}` and :math:`D^\prime_{out} + strides[0]`,
-        the :math:`H_{out}` of the output size must between :math:`H^\prime_{out}`
-        and :math:`H^\prime_{out} + strides[1]`, and the :math:`W_{out}` of the output size must
-        between :math:`W^\prime_{out}` and :math:`W^\prime_{out} + strides[2]`,
-        conv3d_transpose can compute the kernel size automatically.
+    If `padding` = `"SAME"`:
+
+    .. math::
+
+        D^\prime_{out} &= \frac{(D_{in} + stride[0] - 1)}{stride[0]} \\
+        H^\prime_{out} &= \frac{(H_{in} + stride[1] - 1)}{stride[1]} \\
+        W^\prime_{out} &= \frac{(W_{in} + stride[2] - 1)}{stride[2]}
+
+    If `padding` = `"VALID"`:
+
+    .. math::
+
+        D^\prime_{out} &= (D_{in} - 1) * strides[0] + dilations[0] * (D_f - 1) + 1 \\
+        H^\prime_{out} &= (H_{in} - 1) * strides[1] + dilations[1] * (H_f - 1) + 1 \\
+        W^\prime_{out} &= (W_{in} - 1) * strides[2] + dilations[2] * (W_f - 1) + 1
+
+    If `output_size` is None, :math:`D_{out} = D^\prime_{out}, H_{out} = H^\prime_{out}, W_{out} = W^\prime_{out}`;
+    else, the specified `output_size_depth` (the depth of the output feature map) :math:`D_{out}`
+    must be between :math:`D^\prime_{out}` and :math:`D^\prime_{out} + strides[0]` (not including
+    :math:`D^\prime_{out} + strides[0]`), the specified `output_size_height` (the height of the
+    output feature map) :math:`H_{out}` must be between :math:`H^\prime_{out}` and
+    :math:`H^\prime_{out} + strides[1]` (not including :math:`H^\prime_{out} + strides[1]`), and
+    the specified `output_size_width` (the width of the output feature map) :math:`W_{out}` must be
+    between :math:`W^\prime_{out}` and :math:`W^\prime_{out} + strides[2]` (not including
+    :math:`W^\prime_{out} + strides[2]`).
+
+    Since transposed convolution can be treated as the inverse of convolution, and since, by the
+    input-output formula for convolution, differently sized input feature maps may map to the same
+    sized output feature map, the output size of a transposed convolution is not unique for a fixed
+    input size. If `output_size` is specified, `conv3d_transpose` can compute the kernel size
+    automatically.
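The half-open range constraint on `output_size` described above can be expressed as a short pure-Python check (no Paddle required; the helper name `valid_output_size` is ours):

```python
def valid_output_size(out, out_prime, stride):
    # Per the note above, a requested size must lie in [out_prime, out_prime + stride),
    # i.e. including out_prime but excluding out_prime + stride.
    return out_prime <= out < out_prime + stride

# With the VALID formula for D_in = 4, stride 2, dilation 1, 3x3x3 kernel:
d_prime = (4 - 1) * 2 + 1 * (3 - 1) + 1   # 9
print(valid_output_size(9, d_prime, 2))    # True:  9 is in [9, 11)
print(valid_output_size(11, d_prime, 2))   # False: 11 is not in [9, 11)
```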
     Args:
         input(Tensor): The input is 5-D Tensor with shape [N, C, D, H, W] or [N, D, H, W, C], the data type
@@ -1824,19 +1831,6 @@ def conv3d_transpose(
             variable storing the transposed convolution result, and if act is not None, the tensor
             variable storing transposed convolution and non-linearity activation result.

-    Raises:
-        ValueError: If the type of `use_cudnn` is not bool.
-        ValueError: If `data_format` is not "NCDHW" or "NDHWC".
-        ValueError: If `padding` is a string, but not "SAME" or "VALID".
-        ValueError: If `padding` is a tuple, but the element corresponding to the input's batch size is not 0
-            or the element corresponding to the input's channel is not 0.
-        ValueError: If `output_size` and filter_size are None at the same time.
-        ShapeError: If the input is not 5-D Tensor.
-        ShapeError: If the input's dimension size and filter's dimension size not equal.
-        ShapeError: If the dimension size of input minus the size of `stride` is not 2.
-        ShapeError: If the number of input channels is not equal to filter's channels.
-        ShapeError: If the size of `output_size` is not equal to that of `stride`.
-
     Examples:
         .. code-block:: python
@@ -3051,6 +3045,7 @@ def py_func(func, x, out, backward_func=None, skip_vars_in_backward_input=None):
         ``x`` will be inferred automatically.

         This API can also be used to debug the neural network by setting the ``func``
         as a function that only print variables.

     Args:
         func (callable): The forward function of the registered OP. When the network
             is running, the forward output ``out`` will be calculated according to this
@@ -3074,10 +3069,15 @@ def py_func(func, x, out, backward_func=None, skip_vars_in_backward_input=None):
             that no tensors need to be removed from ``x`` and ``out``. If it is not None,
             these tensors will not be the input of ``backward_func``. This parameter is only
             useful when ``backward_func`` is not None.

     Returns:
-        Tensor|tuple(Tensor)|list[Tensor]: The output ``out`` of the forward function ``func``.
+        Tensor|tuple(Tensor)|list[Tensor], The output ``out`` of the forward function ``func``.

     Examples:
         .. code-block:: python
+            :name: code-example1

             # example 1:
             import paddle
             import numpy as np
@@ -3123,7 +3123,10 @@ def py_func(func, x, out, backward_func=None, skip_vars_in_backward_input=None):
                     feed={'x':input1, 'y':input2},
                     fetch_list=[res.name])
                 print(out)

         .. code-block:: python
+            :name: code-example2

             # example 2:
             # This example shows how to turn Tensor into numpy array and
             # use numpy API to register an Python OP
...
@@ -154,19 +154,25 @@ def auc(
         label(Tensor): A 2D int Tensor indicating the label of the training
             data. The height is batch size and width is always 1.
             A Tensor with type int32,int64.
-        curve(str): Curve type, can be 'ROC' or 'PR'. Default 'ROC'.
-        num_thresholds(int): The number of thresholds to use when discretizing
+        curve(str, optional): Curve type, can be 'ROC' or 'PR'. Default 'ROC'.
+        num_thresholds(int, optional): The number of thresholds to use when discretizing
             the roc curve. Default 4095.
-        topk(int): only topk number of prediction output will be used for auc.
-        slide_steps: when calc batch auc, we can not only use step currently but the previous steps can be used. slide_steps=1 means use the current step, slide_steps=3 means use current step and the previous second steps, slide_steps=0 use all of the steps.
-        ins_tag_weight(Tensor): A 2D int Tensor indicating the data's tag weight, 1 means real data, 0 means fake data. Default None, and it will be assigned to a tensor of value 1.
+        topk(int, optional): only topk number of prediction output will be used for auc.
+        slide_steps(int, optional): when computing batch auc, not only the current step but also previous steps can be used. slide_steps=1 means use the current step only, slide_steps=3 means use the current step and the previous two steps, slide_steps=0 means use all steps.
+        ins_tag_weight(Tensor, optional): A 2D int Tensor indicating the data's tag weight, 1 means real data, 0 means fake data. Default None, and it will be assigned to a tensor of value 1.
             A Tensor with type float32,float64.

     Returns:
-        Tensor: A tuple representing the current AUC.
-        The return tuple is auc_out, batch_auc_out, [
-        batch_stat_pos, batch_stat_neg, stat_pos, stat_neg ]
-        Data type is Tensor, supporting float32, float64.
+        Tensor: A tuple representing the current AUC. Data type is Tensor, supporting float32, float64.
+        The return tuple is auc_out, batch_auc_out, [batch_stat_pos, batch_stat_neg, stat_pos, stat_neg].
+        auc_out: the result of the accuracy rate
+        batch_auc_out: the result of the batch accuracy
+        batch_stat_pos: the statistic value for label=1 at the time of batch calculation
+        batch_stat_neg: the statistic value for label=0 at the time of batch calculation
+        stat_pos: the statistic for label=1 at the time of calculation
+        stat_neg: the statistic for label=0 at the time of calculation

     Examples:
         .. code-block:: python
...
@@ -272,12 +272,12 @@ def create_tensor(dtype, name=None, persistable=False):
 def linspace(start, stop, num, dtype=None, name=None):
     r"""
-    Return fixed number of evenly spaced values within a given interval.
+    Return fixed number of evenly spaced values within a given interval. Note: no gradient calculation is performed.

     Args:
         start(int|float|Tensor): The input :attr:`start` is start of range. It is a int, float, \
             or a 0-D Tensor with data type int32, int64, float32 or float64.
-        stop(int|float|Tensor): The input :attr:`stop` is start variable of range. It is a int, float, \
+        stop(int|float|Tensor): The input :attr:`stop` is end of range. It is a int, float, \
             or a 0-D Tensor with data type int32, int64, float32 or float64.
         num(int|Tensor): The input :attr:`num` is given num of the sequence. It is an int, \
             or a 0-D Tensor with data type int32.
...
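The "evenly spaced values within a given interval" behavior that this hunk documents can be modeled in a few lines of pure Python (a sketch of the semantics only, not Paddle's implementation; the helper name is ours):

```python
def linspace_list(start, stop, num):
    # num evenly spaced values over [start, stop], both endpoints included
    if num == 1:
        return [float(start)]
    step = (stop - start) / (num - 1)
    return [start + i * step for i in range(num)]

print(linspace_list(0, 10, 5))  # [0.0, 2.5, 5.0, 7.5, 10.0]
```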
@@ -682,7 +682,7 @@ def shard_index(input, index_num, nshards, shard_id, ignore_value=-1):
         index_num (int): An integer represents the integer above the maximum value of `input`.
         nshards (int): The number of shards.
         shard_id (int): The index of the current shard.
-        ignore_value (int): An integer value out of sharded index range.
+        ignore_value (int, optional): An integer value out of sharded index range. The default value is -1.

     Returns:
         Tensor.
...
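A rough pure-Python model of how `ignore_value` comes into play, under our reading of the parameters above (assuming each shard owns a contiguous range of `ceil(index_num / nshards)` indices; the helper name and the exact sharding rule are assumptions, not taken from this diff):

```python
def shard_index_ref(indices, index_num, nshards, shard_id, ignore_value=-1):
    # Assumed semantics: indices inside this shard's range are rebased to the
    # shard-local offset; indices outside the range become ignore_value.
    shard_size = (index_num + nshards - 1) // nshards
    lo, hi = shard_id * shard_size, (shard_id + 1) * shard_size
    return [v - lo if lo <= v < hi else ignore_value for v in indices]

print(shard_index_ref([1, 6, 12], index_num=16, nshards=2, shard_id=0))
# [1, 6, -1] under these assumptions: 12 falls outside shard 0's range [0, 8)
```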