Unverified commit 966f042d, authored by sunzhongkai588, committed by GitHub

Fix warning (#34875)

* fix warning error , test=document_fix

* fix warning error , test=document_fix

* fix warning error , test=document_fix

* fix warning error , test=document_fix

* fix warning error , test=document_fix

* fix warning error , test=document_fix

* fix warning error , test=document_fix
Parent d8bfe83d
@@ -502,17 +502,17 @@ def xpu_places(device_ids=None):
"""
**Note**:
    For multi-card tasks, please use `FLAGS_selected_xpus` environment variable to set the visible XPU device.

This function creates a list of :code:`paddle.XPUPlace` objects.
If :code:`device_ids` is None, the environment variable
:code:`FLAGS_selected_xpus` would be checked first. For example, if
:code:`FLAGS_selected_xpus=0,1,2`, the returned list would
be [paddle.XPUPlace(0), paddle.XPUPlace(1), paddle.XPUPlace(2)].
If :code:`FLAGS_selected_xpus` is not set, all visible
XPU places would be returned.
If :code:`device_ids` is not None, it should be the device
ids of XPUs. For example, if :code:`device_ids=[0,1,2]`,
the returned list would be
[paddle.XPUPlace(0), paddle.XPUPlace(1), paddle.XPUPlace(2)].

Parameters:
device_ids (list or tuple of int, optional): list of XPU device ids.
@@ -520,6 +520,7 @@ def xpu_places(device_ids=None):
list of paddle.XPUPlace: Created XPU place list.

Examples:
.. code-block:: python

    # required: xpu

    import paddle
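(The diff truncates the example here. As a hedged sketch of the documented behavior, assuming an XPU-enabled PaddlePaddle 2.x build where this function is exposed as `paddle.static.xpu_places`:)

.. code-block:: python

    import paddle
    paddle.enable_static()

    # With FLAGS_selected_xpus=0,1 set in the environment, this returns
    # [paddle.XPUPlace(0), paddle.XPUPlace(1)].
    places = paddle.static.xpu_places()

    # An explicit device_ids list takes precedence over the environment variable.
    places = paddle.static.xpu_places(device_ids=[0, 1])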
@@ -864,6 +865,7 @@ class Variable(object):
new_variable = cur_block.create_var(name="X",
                                    shape=[-1, 23, 48],
                                    dtype='float32')

In `Dygraph <../../user_guides/howto/dygraph/DyGraph.html>`_ Mode:

.. code-block:: python
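(The dygraph example body is elided by the diff. A minimal sketch of creating a Variable in dygraph mode, using the `fluid.dygraph` API of this era; the data is hypothetical:)

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        # In dygraph mode a Variable is created directly from numpy data.
        new_variable = fluid.dygraph.to_variable(np.arange(10))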
@@ -1395,8 +1397,8 @@ class Variable(object):
Indicating name of the gradient Variable of current Variable.

**Notes: This is a read-only property. It simply returns the name of
the gradient Variable derived from a naming convention, but doesn't guarantee
that the gradient exists.**

Examples:
.. code-block:: python
...
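(A hedged illustration of that naming convention; the ``@GRAD`` suffix below is an assumption about the internal naming scheme, not a guarantee that the gradient variable exists:)

.. code-block:: python

    import paddle
    paddle.enable_static()

    x = paddle.static.data(name="x", shape=[-1, 23, 48], dtype="float32")
    # The property only derives a name; it does not create the gradient.
    print(x.grad_name)  # e.g. "x@GRAD"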
@@ -4103,15 +4103,15 @@ def conv3d_transpose(input,
.. math::

    Out = \sigma (W \ast X + b)

In the above equation:

* :math:`X`: Input value, a Tensor with NCDHW or NDHWC format.
* :math:`W`: Filter value, a Tensor with MCDHW format.
* :math:`\ast`: Convolution operation.
* :math:`b`: Bias value, a 2-D Tensor with shape [M, 1].
* :math:`\sigma`: Activation function.
* :math:`Out`: Output value, the shape of :math:`Out` and :math:`X` may be different.

Example:
@@ -4166,9 +4166,9 @@ def conv3d_transpose(input,
calculate filter_size. Default: None. filter_size and output_size should not be
None at the same time.
padding(int|list|str|tuple, optional): The padding size. The padding argument effectively
adds `dilation * (kernel - 1)` amount of zero-padding on both sides of input. If `padding` is a string,
either 'VALID' or 'SAME' is supported, which is the padding algorithm. If `padding`
is a tuple or list, it could be in three forms: `[pad_depth, pad_height, pad_width]` or
`[pad_depth_front, pad_depth_back, pad_height_top, pad_height_bottom, pad_width_left, pad_width_right]`,
and when `data_format` is `'NCDHW'`, `padding` can be in the form
`[[0,0], [0,0], [pad_depth_front, pad_depth_back], [pad_height_top, pad_height_bottom], [pad_width_left, pad_width_right]]`.
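(A minimal static-graph sketch of the shape arithmetic above, assuming the 2.x alias `paddle.static.nn.conv3d_transpose`; the sizes are hypothetical:)

.. code-block:: python

    import paddle
    paddle.enable_static()

    # NCDHW input: 3 channels, 12x12x12 volume.
    data = paddle.static.data(name='data', shape=[-1, 3, 12, 12, 12], dtype='float32')
    # With stride 1, padding 0 and a 3x3x3 filter:
    # D_out = (D_in - 1) * stride - 2 * padding + dilation * (filter - 1) + 1 = 11 + 3 = 14.
    out = paddle.static.nn.conv3d_transpose(input=data, num_filters=16, filter_size=3)
    print(out.shape)  # (-1, 16, 14, 14, 14)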
@@ -9780,7 +9780,7 @@ def prelu(x, mode, param_attr=None, name=None):
prelu activation.

.. math::

    prelu(x) = max(0, x) + \alpha * min(0, x)

There are three modes for the activation:

@@ -9791,13 +9791,17 @@ def prelu(x, mode, param_attr=None, name=None):
element: All elements do not share alpha. Each element has its own alpha.

Parameters:
x (Tensor): The input Tensor or LoDTensor with data type float32.

mode (str): The mode for weight sharing.

param_attr (ParamAttr|None, optional): The parameter attribute for the learnable \
weight (alpha), which can be created by ParamAttr. None by default. \
For detailed information, please refer to :ref:`api_fluid_ParamAttr`.

name (str, optional): Name for the operation (optional, default is None). \
For more information, please refer to :ref:`api_guide_Name`.

Returns:
Tensor: A tensor with the same shape and data type as x.
...
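(A hedged sketch of the three sharing modes, assuming the static-graph entry point `paddle.static.nn.prelu`:)

.. code-block:: python

    import paddle
    paddle.enable_static()

    x = paddle.static.data(name="x", shape=[-1, 5, 10, 10], dtype="float32")
    # 'all': one alpha for the whole input;
    # 'channel': one alpha per channel (5 here);
    # 'element': one alpha per element.
    out = paddle.static.nn.prelu(x, mode='channel')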
@@ -909,14 +909,13 @@ def sequence_pad(x, pad_value, maxlen=None, name=None):
r"""
:api_attr: Static Graph

This layer pads the sequences in the same batch to a common length (according
to ``maxlen``). The padding value is defined by ``pad_value``, and will be
appended to the tail of sequences. The result is a Python tuple ``(Out, Length)``:
the LoDTensor ``Out`` is the padded sequences, and LoDTensor ``Length`` is
the length information of input sequences. For removing padding data (unpadding
operation), see :ref:`api_fluid_layers_sequence_unpad`.

Please note that the input ``x`` should be a LoDTensor.

.. code-block:: text
...
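(A minimal sketch of padding a batch of variable-length sequences with the fluid static-graph API; the zero pad value is chosen for illustration:)

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid

    # lod_level=1 marks x as a LoDTensor holding variable-length sequences.
    x = fluid.data(name='x', shape=[10, 5], dtype='float32', lod_level=1)
    pad_value = fluid.layers.assign(input=np.array([0.0], dtype=np.float32))
    # Out: the padded sequences; Length: the original length of each sequence.
    out, length = fluid.layers.sequence_pad(x=x, pad_value=pad_value)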
@@ -597,10 +597,11 @@ class ModelCheckpoint(Callback):
class LRScheduler(Callback):
"""Lr scheduler callback function

Args:
by_step(bool, optional): whether to update the learning rate scheduler
by step. Default: True.
by_epoch(bool, optional): whether to update the learning rate scheduler
by epoch. Default: False.

Examples:
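(The example body is elided by the diff. A hedged sketch with the high-level Model API; the model and learning-rate schedule below are hypothetical:)

.. code-block:: python

    import paddle

    model = paddle.Model(paddle.vision.models.LeNet())
    scheduler = paddle.optimizer.lr.PiecewiseDecay(boundaries=[5, 8], values=[0.5, 0.1, 0.01])
    optim = paddle.optimizer.Adam(learning_rate=scheduler, parameters=model.parameters())
    model.prepare(optimizer=optim, loss=paddle.nn.CrossEntropyLoss())

    # Step the scheduler once per epoch instead of once per batch.
    callback = paddle.callbacks.LRScheduler(by_step=False, by_epoch=True)
    # model.fit(train_dataset, epochs=10, callbacks=callback)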
@@ -688,6 +689,7 @@ class LRScheduler(Callback):
class EarlyStopping(Callback):
"""Stop training when the given monitor stopped improving during evaluation
by setting `model.stop_training=True`.

Args:
monitor(str): Quantity to be monitored. Default: 'loss'.
mode(str|None): Mode should be one of 'auto', 'min' or 'max'. In 'min'
...
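(A hedged usage sketch; `patience` is an assumed additional argument of this callback, not shown in the diff:)

.. code-block:: python

    import paddle

    # Stop training once evaluation loss has not improved for 2 evaluations.
    callback = paddle.callbacks.EarlyStopping(monitor='loss', mode='min', patience=2)
    # model.fit(train_dataset, eval_dataset, epochs=20, callbacks=[callback])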
@@ -172,22 +172,25 @@ def list(repo_dir, source='github', force_reload=False):
List all entrypoints available in `github` hubconf.

Args:
repo_dir(str): github or local path.

    github path (str): a str with format "repo_owner/repo_name[:tag_name]" with an optional
    tag/branch. The default branch is `main` if not specified.

    local path (str): local repo path.

source (str): `github` | `gitee` | `local`, default is `github`.
force_reload (bool, optional): whether to discard the existing cache and force a fresh download, default is `False`.

Returns:
entrypoints: a list of available entrypoint names

Example:
.. code-block:: python

    import paddle

    paddle.hub.list('lyuwenyu/paddlehub_demo:main', source='github', force_reload=False)

"""
if source not in ('github', 'gitee', 'local'):
raise ValueError(
@@ -213,22 +216,26 @@ def help(repo_dir, model, source='github', force_reload=False):
Show help information of model

Args:
repo_dir(str): github or local path.

    github path (str): a str with format "repo_owner/repo_name[:tag_name]" with an optional
    tag/branch. The default branch is `main` if not specified.

    local path (str): local repo path.

model (str): model name.
source (str): `github` | `gitee` | `local`, default is `github`.
force_reload (bool, optional): default is `False`.

Return:
docs

Example:
.. code-block:: python

    import paddle

    paddle.hub.help('lyuwenyu/paddlehub_demo:main', model='MM', source='github')

"""
if source not in ('github', 'gitee', 'local'):
raise ValueError(
@@ -251,21 +258,25 @@ def load(repo_dir, model, source='github', force_reload=False, **kwargs):
Load model

Args:
repo_dir(str): github or local path.

    github path (str): a str with format "repo_owner/repo_name[:tag_name]" with an optional
    tag/branch. The default branch is `main` if not specified.

    local path (str): local repo path.

model (str): model name.
source (str): `github` | `gitee` | `local`, default is `github`.
force_reload (bool, optional): default is `False`.
**kwargs: parameters used for the model

Return:
paddle model

Example:
.. code-block:: python

    import paddle

    paddle.hub.load('lyuwenyu/paddlehub_demo:main', model='MM', source='github')

"""
if source not in ('github', 'gitee', 'local'):
raise ValueError(
...
@@ -62,43 +62,43 @@ class RMSProp(Optimizer):
w & = w - v(w, t)

where, :math:`\rho` is a hyperparameter and typical values are 0.9, 0.95
and so on. :math:`\beta` is the momentum term. :math:`\epsilon` is a
smoothing term to avoid division by zero, usually set somewhere in the range
from 1e-4 to 1e-8.

Parameters:
learning_rate (float|LRScheduler): The learning rate used to update ``Parameter``.
It can be a float value or a LRScheduler.
rho(float): rho is :math:`\rho` in the equation, default is 0.95.
epsilon(float): :math:`\epsilon` in the equation is a smoothing term to
avoid division by zero, default is 1e-6.
momentum(float): :math:`\beta` in the equation is the momentum term,
default is 0.0.
centered(bool): If True, gradients are normalized by the estimated variance of
the gradient; if False, by the uncentered second moment. Setting this to
True may help with training, but is slightly more expensive in terms of
computation and memory. Defaults to False.
parameters (list|tuple, optional): List/Tuple of ``Tensor`` to update to minimize ``loss``.
This parameter is required in dygraph mode. And you can specify different options for
different parameter groups such as the learning rate, weight decay, etc.,
then the parameters are a list of dicts. Note that the learning_rate in parameter groups
represents the scale of the base learning_rate.
The default value is None in static mode, at this time all parameters will be updated.
weight_decay (float|WeightDecayRegularizer, optional): The strategy of regularization.
It can be a float value as the coeff of L2 regularization or
:ref:`api_fluid_regularizer_L1Decay`, :ref:`api_fluid_regularizer_L2Decay`.
If a parameter has set a regularizer using :ref:`api_fluid_ParamAttr` already,
the regularization setting here in the optimizer will be ignored for this parameter.
Otherwise, the regularization setting here in the optimizer will take effect.
Default None, meaning there is no regularization.
grad_clip (GradientClipBase, optional): Gradient clipping strategy, it's an instance of
some derived class of ``GradientClipBase``. There are three clipping strategies
( :ref:`api_fluid_clip_GradientClipByGlobalNorm` , :ref:`api_fluid_clip_GradientClipByNorm` ,
:ref:`api_fluid_clip_GradientClipByValue` ). Default None, meaning there is no gradient clipping.
name (str, optional): This parameter is used by developers to print debugging information.
For details, please refer to :ref:`api_guide_Name`. Default is None.

Raises:
ValueError: If learning_rate, rho, epsilon, momentum are None.
...
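(A minimal dygraph sketch wiring the documented parameters together; the toy layer and data are hypothetical:)

.. code-block:: python

    import paddle

    linear = paddle.nn.Linear(10, 10)
    inp = paddle.rand([10, 10], dtype="float32")
    loss = paddle.mean(linear(inp))

    rmsprop = paddle.optimizer.RMSProp(learning_rate=0.1,
                                       rho=0.95,      # rho in the equations
                                       epsilon=1e-6,  # smoothing term
                                       momentum=0.0,  # the momentum term beta
                                       parameters=linear.parameters())
    loss.backward()
    rmsprop.step()
    rmsprop.clear_grad()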
@@ -891,21 +891,24 @@ def bmm(x, y, name=None):
Tensor: The product Tensor.

Examples:
.. code-block:: python

    import paddle

    # In imperative mode:
    # size x: (2, 2, 3) and y: (2, 3, 2)
    x = paddle.to_tensor([[[1.0, 1.0, 1.0],
                           [2.0, 2.0, 2.0]],
                          [[3.0, 3.0, 3.0],
                           [4.0, 4.0, 4.0]]])
    y = paddle.to_tensor([[[1.0, 1.0],[2.0, 2.0],[3.0, 3.0]],
                          [[4.0, 4.0],[5.0, 5.0],[6.0, 6.0]]])
    out = paddle.bmm(x, y)
    # output size: (2, 2, 2)
    # output value:
    # [[[6.0, 6.0],[12.0, 12.0]],[[45.0, 45.0],[60.0, 60.0]]]
    out_np = out.numpy()
"""
x_shape = x.shape
y_shape = y.shape
...
@@ -1348,8 +1348,10 @@ def scatter(x, index, updates, overwrite=True, name=None):
index (Tensor): The index 1-D Tensor. Data type can be int32, int64. The length of index cannot exceed updates' length, and the value in index cannot exceed input's length.
updates (Tensor): Update input with updates parameter based on index. Shape should be the same as input, and dim value with dim > 1 should be the same as input.
overwrite (bool): The mode for updating the output when there are same indices.

    If True, use the overwrite mode to update the output of the same index;
    if False, use the accumulate mode to update the output of the same index. Default value is True.

name(str, optional): The default value is None. Normally there is no need for the user to set this property. For more information, please refer to :ref:`api_guide_Name` .

Returns:
...
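(A short sketch contrasting the two modes on repeated indices; the values are hypothetical:)

.. code-block:: python

    import paddle

    x = paddle.to_tensor([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
    index = paddle.to_tensor([2, 1, 0, 1])
    updates = paddle.to_tensor([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, 4.0]])

    # overwrite=True: a later update to the same index overwrites the earlier one.
    out = paddle.scatter(x, index, updates, overwrite=True)
    # [[3., 3.], [4., 4.], [1., 1.]]

    # overwrite=False: updates to the same index accumulate (row 1: 2 + 4 = 6).
    out = paddle.scatter(x, index, updates, overwrite=False)
    # [[3., 3.], [6., 6.], [1., 1.]]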
@@ -104,8 +104,7 @@ def yolo_loss(x,
Final loss will be represented as follows.

$$
loss = (loss_{xy} + loss_{wh}) * weight_{box} + loss_{conf} + loss_{class}
$$

When :attr:`use_label_smooth` is set to :attr:`True`, the classification
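(A hedged sketch of calling this operator via `paddle.vision.ops.yolo_loss`; the tensor sizes and anchors below are hypothetical, chosen so the channel count equals mask_number * (5 + class_num) = 2 * (5 + 2) = 14:)

.. code-block:: python

    import paddle

    x = paddle.rand([2, 14, 8, 8]).astype('float32')
    gt_box = paddle.rand([2, 10, 4]).astype('float32')
    gt_label = paddle.randint(0, 2, [2, 10]).astype('int32')

    loss = paddle.vision.ops.yolo_loss(x,
                                       gt_box=gt_box,
                                       gt_label=gt_label,
                                       anchors=[10, 13, 16, 30],
                                       anchor_mask=[0, 1],
                                       class_num=2,
                                       ignore_thresh=0.7,
                                       downsample_ratio=8,
                                       use_label_smooth=True)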
@@ -659,8 +658,8 @@ class DeformConv2D(Layer):
.. math::

    H_{out}&= \frac{(H_{in} + 2 * paddings[0] - (dilations[0] * (H_f - 1) + 1))}{strides[0]} + 1 \\
    W_{out}&= \frac{(W_{in} + 2 * paddings[1] - (dilations[1] * (W_f - 1) + 1))}{strides[1]} + 1

Parameters:
@@ -687,7 +686,7 @@ class DeformConv2D(Layer):
of conv2d. If it is set to None or one attribute of ParamAttr, conv2d
will create ParamAttr as param_attr. If it is set to None, the parameter
is initialized with :math:`Normal(0.0, std)`, and the :math:`std` is
:math:`(\frac{2.0 }{filter\_elem\_num})^{0.5}`. The default value is None.
bias_attr(ParamAttr|bool, optional): The parameter attribute for the bias of conv2d.
If it is set to False, no bias will be added to the output units.
If it is set to None or one attribute of ParamAttr, conv2d
@@ -701,10 +700,14 @@ class DeformConv2D(Layer):
- offset: :math:`(N, 2 * H_f * W_f, H_{out}, W_{out})`
- mask: :math:`(N, H_f * W_f, H_{out}, W_{out})`
- output: :math:`(N, C_{out}, H_{out}, W_{out})`

Where

.. math::

    H_{out}&= \frac{(H_{in} + 2 * paddings[0] - (dilations[0] * (kernel\_size[0] - 1) + 1))}{strides[0]} + 1 \\
    W_{out}&= \frac{(W_{in} + 2 * paddings[1] - (dilations[1] * (kernel\_size[1] - 1) + 1))}{strides[1]} + 1

Examples:
.. code-block:: python
...
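(The example body is elided by the diff. A minimal sketch exercising the shape formulas above, assuming the `paddle.vision.ops.DeformConv2D` layer; the sizes are hypothetical:)

.. code-block:: python

    import paddle

    # Input (N, C_in, H_in, W_in) = (8, 1, 28, 28), kernel H_f = W_f = 3.
    input = paddle.rand([8, 1, 28, 28])
    # offset: (N, 2*H_f*W_f, H_out, W_out); mask: (N, H_f*W_f, H_out, W_out),
    # where H_out = (28 + 0 - 3) / 1 + 1 = 26 with stride 1 and no padding.
    offset = paddle.rand([8, 2 * 3 * 3, 26, 26])
    mask = paddle.rand([8, 3 * 3, 26, 26])

    deform_conv = paddle.vision.ops.DeformConv2D(in_channels=1, out_channels=16, kernel_size=[3, 3])
    out = deform_conv(input, offset, mask)
    print(out.shape)  # [8, 16, 26, 26]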