Unverified commit ed2bb051, authored by Nyakku Shigure, committed by GitHub

[CodeStyle][W191][E101] remove tabs in python files (#46288)

Parent 330b1a0a
@@ -473,7 +473,6 @@ def edit_distance(input,
     Returns:
         Tuple:
             distance(Tensor): edit distance result, its data type is float32, and its shape is (batch_size, 1).
             sequence_num(Tensor): sequence number, its data type is float32, and its shape is (1,).
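For reference (not part of the diff), the distance documented above is the standard Levenshtein edit distance. A minimal pure-Python sketch of that computation, assuming unit costs for insertion, deletion, and substitution:

```python
def levenshtein(a, b):
    # dp[i][j] is the minimum number of edits turning a[:i] into b[:j].
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i
    for j in range(len(b) + 1):
        dp[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1]

print(levenshtein("kitten", "sitting"))  # 3
```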
@@ -3266,8 +3265,8 @@ def triplet_margin_with_distance_loss(input,
         distance_function (callable, optional): Quantifies the distance between two tensors. if not specified, 2 norm functions will be used.
-        margin (float, optional):Default: :math:`1`.A nonnegative margin representing the minimum difference
-            between the positive and negative distances required for the loss to be 0.
+        margin (float, optional): A nonnegative margin representing the minimum difference
+            between the positive and negative distances required for the loss to be 0. Default value is :math:`1`.
         swap (bool, optional):The distance swap changes the negative distance to the swap distance (distance between positive samples
             and negative samples) if swap distance smaller than negative distance. Default: ``False``.
......
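For reference (not part of the diff), a hedged usage sketch of the API whose docstring is touched above; the keyword names follow the docstring, while the `paddle.nn.functional` import path and exact defaults are assumed from Paddle 2.x and may differ by version:

```python
import paddle
import paddle.nn.functional as F

anchor = paddle.rand([4, 32])     # anchor embeddings
positive = paddle.rand([4, 32])   # embeddings of matching samples
negative = paddle.rand([4, 32])   # embeddings of non-matching samples

# With no distance_function given, the 2-norm is used; margin defaults to 1 per the docstring.
loss = F.triplet_margin_with_distance_loss(anchor, positive, negative,
                                           margin=1.0, swap=False)
print(loss)
```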
@@ -210,7 +210,7 @@ Where `H` means height of feature map, `W` means width of feature map.
             If it is set to None or one attribute of ParamAttr, instance_norm
             will create ParamAttr as bias_attr, the name of bias can be set in ParamAttr.
             If the Initializer of the bias_attr is not set, the bias is initialized zero.
-            If it is set to False, will not create bias_attr. Default: None. `
+            If it is set to False, will not create bias_attr. Default: None.
         data_format(str, optional): Specify the input data format, could be "NCHW". Default: NCHW.
         name(str, optional): Name for the InstanceNorm, default is None. For more information, please refer to :ref:`api_guide_Name`..
......
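For reference (not part of the diff), a minimal sketch of the documented `bias_attr=False` and `data_format="NCHW"` options, assuming the 2D layer variant `paddle.nn.InstanceNorm2D`:

```python
import paddle

x = paddle.rand([2, 3, 8, 8])                     # NCHW input
norm = paddle.nn.InstanceNorm2D(num_features=3,
                                bias_attr=False,  # do not create a learnable bias, per the docstring
                                data_format="NCHW")
print(norm(x).shape)                              # [2, 3, 8, 8]
```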
@@ -45,18 +45,18 @@ class Adagrad(Optimizer):
             It can be a float value or a ``Variable`` with a float type.
         epsilon (float, optional): A small float value for numerical stability.
             The default value is 1e-06.
-        parameters (list|tuple, optional): List/Tuple of ``Tensor`` to update to minimize ``loss``. \
-            This parameter is required in dygraph mode. And you can specify different options for \
-            different parameter groups such as the learning rate, weight decay, etc, \
-            then the parameters are list of dict. Note that the learning_rate in paramter groups \
-            represents the scale of base learning_rate. \
+        parameters (list|tuple, optional): List/Tuple of ``Tensor`` to update to minimize ``loss``.
+            This parameter is required in dygraph mode. And you can specify different options for
+            different parameter groups such as the learning rate, weight decay, etc,
+            then the parameters are list of dict. Note that the learning_rate in paramter groups
+            represents the scale of base learning_rate.
             The default value is None in static mode, at this time all parameters will be updated.
-        weight_decay (float|WeightDecayRegularizer, optional): The strategy of regularization. \
-            It canbe a float value as coeff of L2 regularization or \
+        weight_decay (float|WeightDecayRegularizer, optional): The strategy of regularization.
+            It canbe a float value as coeff of L2 regularization or
             :ref:`api_paddle_regularizer_L1Decay`, :ref:`api_paddle_regularizer_L2Decay`.
-            If a parameter has set regularizer using :ref:`api_paddle_fluid_param_attr_aramAttr` already, \
-            the regularization setting here in optimizer will be ignored for this parameter. \
-            Otherwise, the regularization setting here in optimizer will take effect. \
+            If a parameter has set regularizer using :ref:`api_paddle_fluid_param_attr_aramAttr` already,
+            the regularization setting here in optimizer will be ignored for this parameter.
+            Otherwise, the regularization setting here in optimizer will take effect.
             Default None, meaning there is no regularization.
         grad_clip (GradientClipBase, optional): Gradient cliping strategy, it's an instance of
             some derived class of ``GradientClipBase`` . There are three cliping strategies,
......
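For reference (not part of the diff), a minimal dygraph sketch of the Adagrad parameters documented above, where a float `weight_decay` acts as the L2 regularization coefficient:

```python
import paddle

linear = paddle.nn.Linear(10, 1)
opt = paddle.optimizer.Adagrad(learning_rate=0.1,
                               parameters=linear.parameters(),
                               weight_decay=0.01)  # float coeff -> L2 regularization
loss = linear(paddle.rand([4, 10])).mean()
loss.backward()
opt.step()
opt.clear_grad()
```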
@@ -67,18 +67,18 @@ class Adam(Optimizer):
         epsilon (float|Tensor, optional): A small float value for numerical stability.
             It should be a float number or a Tensor with shape [1] and data type as float32.
             The default value is 1e-08.
-        parameters (list|tuple, optional): List/Tuple of ``Tensor`` to update to minimize ``loss``. \
-            This parameter is required in dygraph mode. And you can specify different options for \
-            different parameter groups such as the learning rate, weight decay, etc, \
-            then the parameters are list of dict. Note that the learning_rate in paramter groups \
-            represents the scale of base learning_rate. \
+        parameters (list|tuple, optional): List/Tuple of ``Tensor`` to update to minimize ``loss``.
+            This parameter is required in dygraph mode. And you can specify different options for
+            different parameter groups such as the learning rate, weight decay, etc,
+            then the parameters are list of dict. Note that the learning_rate in paramter groups
+            represents the scale of base learning_rate.
             The default value is None in static mode, at this time all parameters will be updated.
-        weight_decay (float|WeightDecayRegularizer, optional): The strategy of regularization. \
-            It canbe a float value as coeff of L2 regularization or \
+        weight_decay (float|WeightDecayRegularizer, optional): The strategy of regularization.
+            It canbe a float value as coeff of L2 regularization or
             :ref:`api_fluid_regularizer_L1Decay`, :ref:`api_fluid_regularizer_L2Decay`.
-            If a parameter has set regularizer using :ref:`api_fluid_ParamAttr` already, \
-            the regularization setting here in optimizer will be ignored for this parameter. \
-            Otherwise, the regularization setting here in optimizer will take effect. \
+            If a parameter has set regularizer using :ref:`api_fluid_ParamAttr` already,
+            the regularization setting here in optimizer will be ignored for this parameter.
+            Otherwise, the regularization setting here in optimizer will take effect.
             Default None, meaning there is no regularization.
         grad_clip (GradientClipBase, optional): Gradient cliping strategy, it's an instance of
             some derived class of ``GradientClipBase`` . There are three cliping strategies
......
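For reference (not part of the diff), a sketch of the "list of dict" parameter groups the Adam docstring describes, where the per-group `learning_rate` scales the base learning rate; the `'params'` key follows Paddle's parameter-group convention and is an assumption worth checking against your Paddle version:

```python
import paddle

fc1 = paddle.nn.Linear(10, 10)
fc2 = paddle.nn.Linear(10, 1)
# The per-group 'learning_rate' scales the base value, so fc2's effective lr is 0.001 * 0.1.
opt = paddle.optimizer.Adam(
    learning_rate=0.001,
    parameters=[
        {'params': fc1.parameters()},
        {'params': fc2.parameters(), 'learning_rate': 0.1, 'weight_decay': 0.01},
    ])
```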
@@ -57,18 +57,18 @@ class Adamax(Optimizer):
             The default value is 0.999.
         epsilon (float, optional): A small float value for numerical stability.
             The default value is 1e-08.
-        parameters (list|tuple, optional): List/Tuple of ``Tensor`` to update to minimize ``loss``. \
-            This parameter is required in dygraph mode. And you can specify different options for \
-            different parameter groups such as the learning rate, weight decay, etc, \
-            then the parameters are list of dict. Note that the learning_rate in paramter groups \
-            represents the scale of base learning_rate. \
+        parameters (list|tuple, optional): List/Tuple of ``Tensor`` to update to minimize ``loss``.
+            This parameter is required in dygraph mode. And you can specify different options for
+            different parameter groups such as the learning rate, weight decay, etc,
+            then the parameters are list of dict. Note that the learning_rate in paramter groups
+            represents the scale of base learning_rate.
             The default value is None in static mode, at this time all parameters will be updated.
-        weight_decay (float|WeightDecayRegularizer, optional): The strategy of regularization. \
-            It canbe a float value as coeff of L2 regularization or \
+        weight_decay (float|WeightDecayRegularizer, optional): The strategy of regularization.
+            It canbe a float value as coeff of L2 regularization or
             :ref:`api_fluid_regularizer_L1Decay`, :ref:`api_fluid_regularizer_L2Decay`.
-            If a parameter has set regularizer using :ref:`api_fluid_ParamAttr` already, \
-            the regularization setting here in optimizer will be ignored for this parameter. \
-            Otherwise, the regularization setting here in optimizer will take effect. \
+            If a parameter has set regularizer using :ref:`api_fluid_ParamAttr` already,
+            the regularization setting here in optimizer will be ignored for this parameter.
+            Otherwise, the regularization setting here in optimizer will take effect.
             Default None, meaning there is no regularization.
         grad_clip (GradientClipBase, optional): Gradient cliping strategy, it's an instance of
             some derived class of ``GradientClipBase`` . There are three cliping strategies
......
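For reference (not part of the diff), a minimal dygraph sketch of the Adamax parameters documented above, using the docstring's default beta and epsilon values:

```python
import paddle

linear = paddle.nn.Linear(10, 1)
opt = paddle.optimizer.Adamax(learning_rate=0.002,
                              beta1=0.9,
                              beta2=0.999,
                              epsilon=1e-08,
                              parameters=linear.parameters())
loss = linear(paddle.rand([4, 10])).mean()
loss.backward()
opt.step()
opt.clear_grad()
```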
@@ -54,11 +54,11 @@ class AdamW(Optimizer):
     Args:
         learning_rate (float|LRScheduler, optional): The learning rate used to update ``Parameter``.
             It can be a float value or a LRScheduler. The default value is 0.001.
-        parameters (list|tuple, optional): List/Tuple of ``Tensor`` names to update to minimize ``loss``. \
-            This parameter is required in dygraph mode. And you can specify different options for \
-            different parameter groups such as the learning rate, weight decay, etc, \
-            then the parameters are list of dict. Note that the learning_rate in paramter groups \
-            represents the scale of base learning_rate. \
+        parameters (list|tuple, optional): List/Tuple of ``Tensor`` names to update to minimize ``loss``.
+            This parameter is required in dygraph mode. And you can specify different options for
+            different parameter groups such as the learning rate, weight decay, etc,
+            then the parameters are list of dict. Note that the learning_rate in paramter groups
+            represents the scale of base learning_rate.
             The default value is None in static mode, at this time all parameters will be updated.
         beta1 (float|Tensor, optional): The exponential decay rate for the 1st moment estimates.
             It should be a float number or a Tensor with shape [1] and data type as float32.
......
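For reference (not part of the diff), a minimal sketch of constructing AdamW with decoupled weight decay and a gradient-clipping strategy, as the docstring's `grad_clip` argument describes:

```python
import paddle

linear = paddle.nn.Linear(10, 1)
clip = paddle.nn.ClipGradByGlobalNorm(clip_norm=1.0)   # one of the GradientClipBase strategies
opt = paddle.optimizer.AdamW(learning_rate=0.001,
                             parameters=linear.parameters(),
                             weight_decay=0.01,
                             grad_clip=clip)
```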
@@ -63,12 +63,6 @@ def rank(input):
 def shape(input):
     """
-    :alias_main: paddle.shape
-    :alias: paddle.shape,paddle.tensor.shape,paddle.tensor.attribute.shape
-    :old_api: paddle.fluid.layers.shape
-    **Shape Layer**
     Get the shape of the input.
     .. code-block:: text
......
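For reference (not part of the diff), a quick sketch of the public `paddle.shape` op documented above, contrasted with the plain `.shape` attribute:

```python
import paddle

x = paddle.rand([2, 3, 4])
print(paddle.shape(x))   # a Tensor holding the shape values [2, 3, 4]
print(x.shape)           # a Python list [2, 3, 4] in dygraph mode
```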
@@ -416,12 +416,6 @@ def transpose(x, perm, name=None):
 def unstack(x, axis=0, num=None):
     """
-    :alias_main: paddle.unstack
-    :alias: paddle.unstack,paddle.tensor.unstack,paddle.tensor.manipulation.unstack
-    :old_api: paddle.fluid.layers.unstack
-    **UnStack Layer**
     This layer unstacks input Tensor :code:`x` into several Tensors along :code:`axis`.
     If :code:`axis` < 0, it would be replaced with :code:`axis+rank(x)`.
......
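For reference (not part of the diff), a quick sketch of `paddle.unstack` with both a positive and a negative axis, matching the docstring's axis rule above:

```python
import paddle

x = paddle.rand([3, 4, 5])
ys = paddle.unstack(x, axis=0)    # list of 3 Tensors, each of shape [4, 5]
print(len(ys), ys[0].shape)
# A negative axis is replaced with axis + rank(x), so axis=-1 unstacks the last dim.
zs = paddle.unstack(x, axis=-1)   # list of 5 Tensors, each of shape [3, 4]
print(len(zs), zs[0].shape)
```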