Unverified commit 966f042d, authored by sunzhongkai588, committed by GitHub

Fix warning (#34875)

* fix warning error , test=document_fix

* fix warning error , test=document_fix

* fix warning error , test=document_fix

* fix warning error , test=document_fix

* fix warning error , test=document_fix

* fix warning error , test=document_fix

* fix warning error , test=document_fix
Parent d8bfe83d
@@ -502,17 +502,17 @@ def xpu_places(device_ids=None):
"""
**Note**:
For multi-card tasks, please use `FLAGS_selected_xpus` environment variable to set the visible XPU device.
This function creates a list of :code:`paddle.XPUPlace` objects.
If :code:`device_ids` is None, the environment variable
:code:`FLAGS_selected_xpus` is checked first. For example, if
:code:`FLAGS_selected_xpus=0,1,2`, the returned list would
be [paddle.XPUPlace(0), paddle.XPUPlace(1), paddle.XPUPlace(2)].
If :code:`FLAGS_selected_xpus` is not set, all visible
XPU places would be returned.
If :code:`device_ids` is not None, it should be the device
ids of XPUs. For example, if :code:`device_ids=[0,1,2]`,
the returned list would be
[paddle.XPUPlace(0), paddle.XPUPlace(1), paddle.XPUPlace(2)].
Parameters:
device_ids (list or tuple of int, optional): list of XPU device ids.
@@ -520,6 +520,7 @@ def xpu_places(device_ids=None):
list of paddle.XPUPlace: Created XPU place list.
Examples:
.. code-block:: python
# required: xpu
import paddle
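    # A hedged sketch of how the elided example might continue; it assumes
    # an XPU build of Paddle and uses the public alias paddle.static.xpu_places.
    paddle.enable_static()
    # result follows FLAGS_selected_xpus, or all visible devices if unset
    xpu_places = paddle.static.xpu_places()
    # or pass explicit device ids
    xpu_places = paddle.static.xpu_places(device_ids=[0, 1, 2])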
@@ -864,6 +865,7 @@ class Variable(object):
new_variable = cur_block.create_var(name="X",
shape=[-1, 23, 48],
dtype='float32')
In `Dygraph <../../user_guides/howto/dygraph/DyGraph.html>`_ Mode:
.. code-block:: python
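    # A hedged sketch of the elided dygraph example, mirroring the
    # static-graph snippet above; fluid.dygraph.to_variable wraps a
    # numpy array as a Variable in dygraph mode.
    import paddle.fluid as fluid
    import numpy as np

    with fluid.dygraph.guard():
        new_variable = fluid.dygraph.to_variable(np.arange(10))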
@@ -1395,8 +1397,8 @@ class Variable(object):
Indicates the name of the gradient Variable of the current Variable.
**Notes: This is a read-only property. It simply returns the name of the
gradient Variable following a naming convention but doesn't guarantee
the gradient exists.**
Examples:
.. code-block:: python
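    # A hedged sketch of the elided example: grad_name only applies the
    # naming convention; the gradient Variable itself may not exist.
    import paddle
    paddle.enable_static()
    x = paddle.static.data(name="x", shape=[-1, 23, 48], dtype="float32")
    print(x.grad_name)  # typically prints "x@GRAD"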
@@ -4103,15 +4103,15 @@ def conv3d_transpose(input,
.. math::
Out = \sigma (W \ast X + b)
In the above equation:
* :math:`X`: Input value, a Tensor with NCDHW or NDHWC format.
* :math:`W`: Filter value, a Tensor with MCDHW format.
* :math:`\ast`: Convolution operation.
* :math:`b`: Bias value, a 2-D Tensor with shape [M, 1].
* :math:`\sigma`: Activation function.
* :math:`Out`: Output value, the shape of :math:`Out` and :math:`X` may be different.
Example:
@@ -4166,9 +4166,9 @@ def conv3d_transpose(input,
calculate filter_size. Default: None. filter_size and output_size should not be
None at the same time.
padding(int|list|str|tuple, optional): The padding size. The padding argument effectively
adds `dilation * (kernel - 1)` amount of zero-padding on both sides of the input. If `padding` is a string,
either 'VALID' or 'SAME' is supported, which is the padding algorithm. If `padding`
is a tuple or list, it could be in three forms: `[pad_depth, pad_height, pad_width]` or
`[pad_depth_front, pad_depth_back, pad_height_top, pad_height_bottom, pad_width_left, pad_width_right]`,
and when `data_format` is `'NCDHW'`, `padding` can be in the form
`[[0,0], [0,0], [pad_depth_front, pad_depth_back], [pad_height_top, pad_height_bottom], [pad_width_left, pad_width_right]]`.
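A hedged sketch of the padding forms described above, using the static-graph `fluid.layers.conv3d_transpose` API (the shape values are illustrative):

.. code-block:: python

    import paddle
    import paddle.fluid as fluid
    paddle.enable_static()

    data = fluid.data(name='data', shape=[None, 3, 12, 32, 32], dtype='float32')
    # string form: a padding algorithm
    out1 = fluid.layers.conv3d_transpose(data, num_filters=2, filter_size=3, padding='SAME')
    # [pad_depth, pad_height, pad_width] form
    out2 = fluid.layers.conv3d_transpose(data, num_filters=2, filter_size=3, padding=[1, 1, 1])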
@@ -9780,7 +9780,7 @@ def prelu(x, mode, param_attr=None, name=None):
prelu activation.
.. math::
prelu(x) = max(0, x) + \alpha * min(0, x)
There are three modes for the activation:
@@ -9791,13 +9791,17 @@ def prelu(x, mode, param_attr=None, name=None):
element: All elements do not share alpha. Each element has its own alpha.
Parameters:
x (Tensor): The input Tensor or LoDTensor with data type float32.
mode (str): The mode for weight sharing.
param_attr (ParamAttr|None, optional): The parameter attribute for the learnable \
    weight (alpha), it can be created by ParamAttr. None by default. \
    For detailed information, please refer to :ref:`api_fluid_ParamAttr`.
name (str, optional): Name for the operation (optional, default is None). \
    For more information, please refer to :ref:`api_guide_Name`.
Returns:
Tensor: A tensor with the same shape and data type as x.
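A hedged usage sketch of the three modes described above (static graph; the shapes are illustrative):

.. code-block:: python

    import paddle
    import paddle.fluid as fluid
    paddle.enable_static()

    x = fluid.data(name="x", shape=[None, 3, 4, 4], dtype="float32")
    y_all = fluid.layers.prelu(x, mode='all')          # one alpha shared by all elements
    y_channel = fluid.layers.prelu(x, mode='channel')  # one alpha per channel
    y_element = fluid.layers.prelu(x, mode='element')  # one alpha per element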
@@ -909,14 +909,13 @@ def sequence_pad(x, pad_value, maxlen=None, name=None):
r"""
:api_attr: Static Graph
This layer pads the sequences in the same batch to a common length (according
to ``maxlen``). The padding value is defined by ``pad_value``, and will be
appended to the tail of sequences. The result is a Python tuple ``(Out, Length)``:
the LodTensor ``Out`` is the padded sequences, and LodTensor ``Length`` is
the length information of input sequences. For removing padding data (unpadding operation), see :ref:`api_fluid_layers_sequence_unpad`.
Please note that the input ``x`` should be LodTensor.
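A hedged usage sketch of the padding described above (``lod_level=1`` marks the LodTensor input; the text illustration follows below):

.. code-block:: python

    import numpy
    import paddle
    import paddle.fluid as fluid
    paddle.enable_static()

    x = fluid.data(name='x', shape=[10, 5], dtype='float32', lod_level=1)
    pad_value = fluid.layers.assign(input=numpy.array([0.0], dtype=numpy.float32))
    out, length = fluid.layers.sequence_pad(x=x, pad_value=pad_value)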
.. code-block:: text
@@ -597,10 +597,11 @@ class ModelCheckpoint(Callback):
class LRScheduler(Callback):
"""Lr scheduler callback function
Args:
by_step(bool, optional): whether to update learning rate scheduler
    by step. Default: True.
by_epoch(bool, optional): whether to update learning rate scheduler
    by epoch. Default: False.
Examples:
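A hedged sketch of attaching this callback to ``paddle.Model.fit`` (the model and dataset setup are assumed, not shown):

.. code-block:: python

    import paddle

    # assumes `model` is a paddle.Model compiled with an optimizer whose
    # learning rate is a paddle.optimizer.lr.LRScheduler, and that
    # `train_dataset` is a paddle.io.Dataset
    callback = paddle.callbacks.LRScheduler(by_step=True, by_epoch=False)
    # model.fit(train_dataset, batch_size=64, callbacks=callback)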
@@ -688,6 +689,7 @@ class LRScheduler(Callback):
class EarlyStopping(Callback):
"""Stop training when the given monitor stopped improving during evaluation
by setting `model.stop_training=True`.
Args:
monitor(str): Quantity to be monitored. Default: 'loss'.
mode(str|None): Mode should be one of 'auto', 'min' or 'max'. In 'min'
@@ -172,22 +172,25 @@ def list(repo_dir, source='github', force_reload=False):
List all entrypoints available in `github` hubconf.
Args:
repo_dir(str): github or local path.
    github path (str): a str with format "repo_owner/repo_name[:tag_name]" with an optional
        tag/branch. The default branch is `main` if not specified.
    local path (str): local repo path.
source (str): `github` | `gitee` | `local`, default is `github`.
force_reload (bool, optional): whether to discard the existing cache and force a fresh download, default is `False`.
Returns:
entrypoints: a list of available entrypoint names
Example:
.. code-block:: python

    import paddle

    paddle.hub.list('lyuwenyu/paddlehub_demo:main', source='github', force_reload=False)
"""
if source not in ('github', 'gitee', 'local'):
raise ValueError(
@@ -213,22 +216,26 @@ def help(repo_dir, model, source='github', force_reload=False):
Show help information for a model
Args:
repo_dir(str): github or local path.
    github path (str): a str with format "repo_owner/repo_name[:tag_name]" with an optional
        tag/branch. The default branch is `main` if not specified.
    local path (str): local repo path.
model (str): model name.
source (str): `github` | `gitee` | `local`, default is `github`.
force_reload (bool, optional): default is `False`.
Return:
docs
Example:
.. code-block:: python

    import paddle

    paddle.hub.help('lyuwenyu/paddlehub_demo:main', model='MM', source='github')
"""
if source not in ('github', 'gitee', 'local'):
raise ValueError(
@@ -251,21 +258,25 @@ def load(repo_dir, model, source='github', force_reload=False, **kwargs):
Load model
Args:
repo_dir(str): github or local path.
    github path (str): a str with format "repo_owner/repo_name[:tag_name]" with an optional
        tag/branch. The default branch is `main` if not specified.
    local path (str): local repo path.
model (str): model name.
source (str): `github` | `gitee` | `local`, default is `github`.
force_reload (bool, optional): default is `False`.
**kwargs: parameters passed to the model
Return:
paddle model
Example:
.. code-block:: python

    import paddle

    paddle.hub.load('lyuwenyu/paddlehub_demo:main', model='MM', source='github')
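A hedged sketch of forwarding ``**kwargs`` to the loaded model (``out_channels`` is a hypothetical constructor argument, not part of the demo repo):

.. code-block:: python

    # `out_channels` is hypothetical; pass whatever the entrypoint accepts
    model = paddle.hub.load('lyuwenyu/paddlehub_demo:main', model='MM',
                            source='github', out_channels=8)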
"""
if source not in ('github', 'gitee', 'local'):
raise ValueError(
@@ -62,43 +62,43 @@ class RMSProp(Optimizer):
w & = w - v(w, t)
where :math:`\rho` is a hyperparameter and typical values are 0.9, 0.95
and so on. :math:`\beta` is the momentum term. :math:`\epsilon` is a
smoothing term to avoid division by zero, usually set somewhere in range
from 1e-4 to 1e-8.
Parameters:
learning_rate (float|LRScheduler): The learning rate used to update ``Parameter``.
It can be a float value or a LRScheduler.
rho(float): rho is :math:`\rho` in the equation, default is 0.95.
epsilon(float): :math:`\epsilon` in the equation is a smoothing term to
    avoid division by zero, default is 1e-6.
momentum(float): :math:`\beta` in the equation is the momentum term,
    default is 0.0.
centered(bool): If True, gradients are normalized by the estimated variance of
    the gradient; if False, by the uncentered second moment. Setting this to
    True may help with training, but is slightly more expensive in terms of
    computation and memory. Defaults to False.
parameters (list|tuple, optional): List/Tuple of ``Tensor`` to update to minimize ``loss``.
    This parameter is required in dygraph mode. You can specify different options for
    different parameter groups, such as the learning rate and weight decay; in that case
    the parameters are a list of dicts. Note that the learning_rate in parameter groups
    represents the scale of the base learning_rate.
    The default value is None in static mode, in which case all parameters will be updated.
weight_decay (float|WeightDecayRegularizer, optional): The strategy of regularization.
    It can be a float value as the coeff of L2 regularization or
    :ref:`api_fluid_regularizer_L1Decay`, :ref:`api_fluid_regularizer_L2Decay`.
    If a parameter has already set a regularizer using :ref:`api_fluid_ParamAttr`,
    the regularization setting here in the optimizer will be ignored for this parameter.
    Otherwise, the regularization setting here in the optimizer will take effect.
    Default None, meaning there is no regularization.
grad_clip (GradientClipBase, optional): Gradient clipping strategy, it's an instance of
    some derived class of ``GradientClipBase`` . There are three clipping strategies
    ( :ref:`api_fluid_clip_GradientClipByGlobalNorm` , :ref:`api_fluid_clip_GradientClipByNorm` ,
    :ref:`api_fluid_clip_GradientClipByValue` ). Default None, meaning there is no gradient clipping.
name (str, optional): This parameter is used by developers to print debugging information.
    For details, please refer to :ref:`api_guide_Name`. Default is None.
Raises:
ValueError: If learning_rate, rho, epsilon, momentum are None.
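A hedged usage sketch of the optimizer described above (dygraph mode):

.. code-block:: python

    import paddle

    linear = paddle.nn.Linear(10, 10)
    inp = paddle.rand([10, 10], dtype="float32")
    out = linear(inp)
    loss = paddle.mean(out)
    rmsprop = paddle.optimizer.RMSProp(learning_rate=0.1,
                                       parameters=linear.parameters())
    loss.backward()
    rmsprop.step()
    rmsprop.clear_grad()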
@@ -891,21 +891,24 @@ def bmm(x, y, name=None):
Tensor: The product Tensor.
Examples:
.. code-block:: python

    import paddle

    # In imperative mode:
    # size x: (2, 2, 3) and y: (2, 3, 2)
    x = paddle.to_tensor([[[1.0, 1.0, 1.0],
                           [2.0, 2.0, 2.0]],
                          [[3.0, 3.0, 3.0],
                           [4.0, 4.0, 4.0]]])
    y = paddle.to_tensor([[[1.0, 1.0],[2.0, 2.0],[3.0, 3.0]],
                          [[4.0, 4.0],[5.0, 5.0],[6.0, 6.0]]])
    out = paddle.bmm(x, y)
    # output size: (2, 2, 2)
    # output value:
    # [[[6.0, 6.0],[12.0, 12.0]],[[45.0, 45.0],[60.0, 60.0]]]
    out_np = out.numpy()
"""
x_shape = x.shape
y_shape = y.shape
@@ -1348,8 +1348,10 @@ def scatter(x, index, updates, overwrite=True, name=None):
index (Tensor): The index 1-D Tensor. Data type can be int32, int64. The length of index cannot exceed updates's length, and the value in index cannot exceed input's length.
updates (Tensor): update input with updates parameter based on index. shape should be the same as input, and dim value with dim > 1 should be the same as input.
overwrite (bool): The mode for updating the output when there are duplicate indices.
    If True, use the overwrite mode to update the output of the same index;
    if False, use the accumulate mode to update the output of the same index. Default value is True
    (see the sketch after this parameter list).
name(str, optional): The default value is None. Normally there is no need for user to set this property. For more information, please refer to :ref:`api_guide_Name` .
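A hedged sketch of the overwrite/accumulate modes described above (the commented outputs follow those semantics):

.. code-block:: python

    import paddle

    x = paddle.to_tensor([[1., 1.], [2., 2.], [3., 3.]])
    index = paddle.to_tensor([2, 1, 0, 1], dtype='int64')
    updates = paddle.to_tensor([[1., 1.], [2., 2.], [3., 3.], [4., 4.]])

    # overwrite=True: a later update to the same index replaces the earlier one
    out = paddle.scatter(x, index, updates, overwrite=True)
    # out: [[3., 3.], [4., 4.], [1., 1.]]

    # overwrite=False: updates to the same index accumulate
    out = paddle.scatter(x, index, updates, overwrite=False)
    # out: [[3., 3.], [6., 6.], [1., 1.]]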
Returns:
@@ -104,8 +104,7 @@ def yolo_loss(x,
Final loss will be represented as follows.
$$
loss = (loss_{xy} + loss_{wh}) * weight_{box} + loss_{conf} + loss_{class}
$$
When :attr:`use_label_smooth` is set to :attr:`True`, the classification
@@ -659,8 +658,8 @@ class DeformConv2D(Layer):
.. math::
H_{out}&= \frac{(H_{in} + 2 * paddings[0] - (dilations[0] * (H_f - 1) + 1))}{strides[0]} + 1 \\
W_{out}&= \frac{(W_{in} + 2 * paddings[1] - (dilations[1] * (W_f - 1) + 1))}{strides[1]} + 1
Parameters:
@@ -687,7 +686,7 @@ class DeformConv2D(Layer):
of conv2d. If it is set to None or one attribute of ParamAttr, conv2d
will create ParamAttr as param_attr. If it is set to None, the parameter
is initialized with :math:`Normal(0.0, std)`, and the :math:`std` is
:math:`(\frac{2.0 }{filter\_elem\_num})^{0.5}`. The default value is None.
bias_attr(ParamAttr|bool, optional): The parameter attribute for the bias of conv2d.
If it is set to False, no bias will be added to the output units.
If it is set to None or one attribute of ParamAttr, conv2d
@@ -701,10 +700,14 @@ class DeformConv2D(Layer):
- offset: :math:`(N, 2 * H_f * W_f, H_{out}, W_{out})`
- mask: :math:`(N, H_f * W_f, H_{out}, W_{out})`
- output: :math:`(N, C_{out}, H_{out}, W_{out})`
Where
.. math::
H_{out}&= \frac{(H_{in} + 2 * paddings[0] - (dilations[0] * (kernel\_size[0] - 1) + 1))}{strides[0]} + 1 \\
W_{out}&= \frac{(W_{in} + 2 * paddings[1] - (dilations[1] * (kernel\_size[1] - 1) + 1))}{strides[1]} + 1
Examples:
.. code-block:: python
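    # A hedged sketch of the elided example: deformable conv v2
    # (modulated, with mask); the shapes follow the formulas above.
    import paddle

    kh, kw = 3, 3
    x = paddle.rand((8, 1, 28, 28))
    # offset carries 2 * kh * kw channels, mask carries kh * kw channels
    offset = paddle.rand((8, 2 * kh * kw, 26, 26))
    mask = paddle.rand((8, kh * kw, 26, 26))
    deform_conv = paddle.vision.ops.DeformConv2D(
        in_channels=1, out_channels=16, kernel_size=[kh, kw])
    out = deform_conv(x, offset, mask)
    print(out.shape)  # expected: [8, 16, 26, 26]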