Unverified · Commit 6dd9dd39 authored by Yilingyelu, committed by GitHub

fix en docs of some Apis (gradients, scope_guard, cuda_places, name_scope, device_guard, load_program_state, scale, ParamAttr and WeightNormParamAttr) (#41604)

* Update scope_guard; test=document_fix

* gradients; test=document_fix

* gradients; test=document_fix

* name_scope; test=document_fix

* cpu_places; test=document_fix

* WeightNormParamAttr; test=document_fix

* cuda_places; test=document_fix

* load_program_state; test=document_fix

* device_guard; test=document_fix

* device_guard; test=document_fix

* ParamAttr; test=document_fix

* scale; test=document_fix

* scale; test=document_fix

* update code example;test=document_fix
Co-authored-by: Chen Long <1300851984@qq.com>
Parent 41852264
@@ -2021,7 +2021,6 @@ def calc_gradient(targets, inputs, target_gradients=None, no_grad_set=None):
 @framework.static_only
 def gradients(targets, inputs, target_gradients=None, no_grad_set=None):
     """
-    :api_attr: Static Graph
     Backpropagate the gradients of targets to inputs.
@@ -2042,8 +2041,9 @@ def gradients(targets, inputs, target_gradients=None, no_grad_set=None):
             will be None.
     Examples:
         .. code-block:: python
+            :name: code-example
             import paddle
             import paddle.nn.functional as F
...
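For context, a minimal static-graph sketch of the `gradients` API documented above; the network here (a single `relu`) is an illustrative assumption, not the example from the diff:

```python
import paddle
import paddle.nn.functional as F

paddle.enable_static()
main_prog = paddle.static.Program()
with paddle.static.program_guard(main_prog):
    x = paddle.static.data(name='x', shape=[None, 2, 8, 8], dtype='float32')
    x.stop_gradient = False  # data vars stop gradients by default
    y = F.relu(x)
    # Backpropagate y to x; returns a list of gradient Variables,
    # with None for any input unreachable from the targets.
    grads = paddle.static.gradients(targets=[y], inputs=[x])
    print(grads)
```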
@@ -75,7 +75,6 @@ def _switch_scope(scope):
 @signature_safe_contextmanager
 def scope_guard(scope):
     """
-    :api_attr: Static Graph
     This function switches scope through python `with` statement.
     Scope records the mapping between variable names and variables ( :ref:`api_guide_Variable` ),
@@ -94,6 +93,7 @@ def scope_guard(scope):
         None
     Examples:
         .. code-block:: python
+
             import paddle
...
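A short sketch of the `scope_guard` behavior described above (the variable name `data` is illustrative):

```python
import numpy as np
import paddle

paddle.enable_static()
new_scope = paddle.static.Scope()
# Inside the guard, global_scope() resolves to new_scope, so the
# variable "data" is created there rather than in the default scope.
with paddle.static.scope_guard(new_scope):
    paddle.static.global_scope().var("data").get_tensor().set(
        np.ones((2, 2), dtype=np.float32), paddle.CPUPlace())
print(np.array(new_scope.find_var("data").get_tensor()))  # [[1. 1.] [1. 1.]]
```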
@@ -729,7 +729,7 @@ def is_compiled_with_rocm():
 def cuda_places(device_ids=None):
     """
-    **Note**:
+    Note:
         For multi-card tasks, please use `FLAGS_selected_gpus` environment variable to set the visible GPU device.
         The next version will fix the problem with `CUDA_VISIBLE_DEVICES` environment variable.
@@ -754,6 +754,7 @@ def cuda_places(device_ids=None):
         list of paddle.CUDAPlace: Created GPU place list.
     Examples:
         .. code-block:: python
+
             import paddle
@@ -874,6 +875,7 @@ def cpu_places(device_count=None):
         list of paddle.CPUPlace: Created list of CPU places.
     Examples:
         .. code-block:: python
+
             import paddle
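The two place helpers touched above, in one hedged sketch (the GPU branch assumes a CUDA build with at least one visible device):

```python
import paddle

paddle.enable_static()
# One CPUPlace per requested device; defaults to the core count
# (or the CPU_NUM environment variable) when device_count is omitted.
print(paddle.static.cpu_places(device_count=2))  # [CPUPlace, CPUPlace]

if paddle.device.is_compiled_with_cuda():
    # device_ids selects specific cards; None falls back to
    # FLAGS_selected_gpus or all visible devices.
    print(paddle.static.cuda_places(device_ids=[0]))  # [CUDAPlace(0)]
```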
@@ -993,7 +995,6 @@ _name_scope = NameScope()
 @signature_safe_contextmanager
 def name_scope(prefix=None):
     """
-    :api_attr: Static Graph
     Generate hierarchical name prefix for the operators in Static Graph.
@@ -1006,6 +1007,7 @@ def name_scope(prefix=None):
         prefix(str, optional): prefix. Default is none.
     Examples:
         .. code-block:: python
+
             import paddle
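A minimal sketch of `name_scope` as documented above; the scope names `s1`/`s2` are illustrative:

```python
import paddle

paddle.enable_static()
# Operators built under the guards get hierarchical name prefixes
# (e.g. /s1/, /s1/s2/) for debugging and visualization; the
# computation itself is unchanged.
with paddle.static.name_scope("s1"):
    a = paddle.static.data(name='data', shape=[None, 1], dtype='int32')
    b = a + 1
    with paddle.static.name_scope("s2"):
        c = b * 2
```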
@@ -6916,8 +6918,9 @@ def switch_device(device):
 @signature_safe_contextmanager
 def device_guard(device=None):
     """
-    **Notes**:
-        **The API only supports static mode.**
+    Note:
+        The API only supports static mode.
+
     A context manager that specifies the device on which the OP will be placed.
@@ -6931,8 +6934,10 @@ def device_guard(device=None):
         assigned devices.
     Examples:
         .. code-block:: python
+
+            # required: gpu
             import paddle
             paddle.enable_static()
...
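A runnable sketch of `device_guard` matching the note above (falls back to CPU when no CUDA build is present):

```python
import paddle

paddle.enable_static()
use_gpu = paddle.device.is_compiled_with_cuda()
place = paddle.CUDAPlace(0) if use_gpu else paddle.CPUPlace()
exe = paddle.static.Executor(place)

data = paddle.full(shape=[1, 3, 8, 8], fill_value=0.5, dtype='float32')
with paddle.static.device_guard("cpu"):
    # Pinned to CPU even when the executor place is GPU.
    shape = paddle.shape(data)
with paddle.static.device_guard("gpu" if use_gpu else "cpu"):
    out = paddle.reshape(data, shape=shape)

exe.run(paddle.static.default_startup_program())
result, = exe.run(fetch_list=[out])
print(result.shape)  # (1, 3, 8, 8)
```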
@@ -2154,7 +2154,6 @@ def load(program, model_path, executor=None, var_list=None):
 def load_program_state(model_path, var_list=None):
     """
-    :api_attr: Static Graph
     Load program state from local file
@@ -2169,6 +2168,7 @@ def load_program_state(model_path, var_list=None):
         state_dict(dict): the dict store Parameter and optimizer information
     Examples:
         .. code-block:: python
+
             import paddle
...
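A save/load round trip showing where `load_program_state` fits (the path `./temp_model` and the single `fc` layer are assumptions for the sketch):

```python
import paddle

paddle.enable_static()
x = paddle.static.data(name="x", shape=[10, 10], dtype='float32')
y = paddle.static.nn.fc(x, 10)

exe = paddle.static.Executor(paddle.CPUPlace())
exe.run(paddle.static.default_startup_program())
prog = paddle.static.default_main_program()

paddle.static.save(prog, "./temp_model")
# Returns the dict of Parameter/optimizer tensors described above.
state_dict = paddle.static.load_program_state("./temp_model")
paddle.static.set_program_state(prog, state_dict)
```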
@@ -11850,8 +11850,7 @@ def _elementwise_op(helper):
 def scale(x, scale=1.0, bias=0.0, bias_after_scale=True, act=None, name=None):
     """
-    Scale operator.
     Putting scale and bias to the input Tensor as following:
     ``bias_after_scale`` is True:
@@ -11876,6 +11875,7 @@ def scale(x, scale=1.0, bias=0.0, bias_after_scale=True, act=None, name=None):
         Tensor: Output tensor of scale operator, with shape and data type same as input.
     Examples:
         .. code-block:: python
+
             # scale as a float32 number
...
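The two `scale` formulas in a quick sketch (dygraph mode, illustrative values):

```python
import paddle

x = paddle.to_tensor([[1.0, 2.0], [3.0, 4.0]])

# bias_after_scale=True (default): out = scale * x + bias
print(paddle.scale(x, scale=2.0, bias=1.0))  # [[3., 5.], [7., 9.]]

# bias_after_scale=False: out = scale * (x + bias)
print(paddle.scale(x, scale=2.0, bias=1.0, bias_after_scale=False))  # [[4., 6.], [8., 10.]]
```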
@@ -30,16 +30,17 @@ __all__ = [
 class ParamAttr(object):
     """
+    Create a object to represent the attribute of parameter. The attributes are:
+    name, initializer, learning rate, regularizer, trainable, gradient clip,
+    and model average.
+
     Note:
         ``gradient_clip`` of ``ParamAttr`` HAS BEEN DEPRECATED since 2.0.
         Please use ``need_clip`` in ``ParamAttr`` to speficiy the clip scope.
         There are three clipping strategies: :ref:`api_paddle_nn_ClipGradByGlobalNorm` ,
         :ref:`api_paddle_nn_ClipGradByNorm` , :ref:`api_paddle_nn_ClipGradByValue` .
-    Create a object to represent the attribute of parameter. The attributes are:
-    name, initializer, learning rate, regularizer, trainable, gradient clip,
-    and model average.
     Parameters:
         name (str, optional): The parameter's name. Default None, meaning that the name
             would be created automatically.
@@ -63,6 +64,7 @@ class ParamAttr(object):
         ParamAttr Object.
     Examples:
         .. code-block:: python
+
             import paddle
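A minimal `ParamAttr` sketch tying the listed attributes together (names and values are illustrative):

```python
import paddle

weight_attr = paddle.ParamAttr(
    name="fc_weight",                             # explicit parameter name
    learning_rate=0.5,                            # per-parameter LR multiplier
    regularizer=paddle.regularizer.L2Decay(1.0),  # weight decay
    trainable=True,
    need_clip=True)                               # replaces deprecated gradient_clip
linear = paddle.nn.Linear(3, 4, weight_attr=weight_attr)
print(linear.weight.name)  # fc_weight
```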
@@ -213,24 +215,22 @@ class ParamAttr(object):
 class WeightNormParamAttr(ParamAttr):
     r"""
-    :api_attr: Static Graph
-
     Note:
         Please use 'paddle.nn.utils.weight_norm' in dygraph mode.
+    Note:
+        ``gradient_clip`` of ``ParamAttr`` HAS BEEN DEPRECATED since 2.0.
+        Please use ``need_clip`` in ``ParamAttr`` to speficiy the clip scope.
+        There are three clipping strategies: :ref:`api_paddle_nn_ClipGradByGlobalNorm` ,
+        :ref:`api_paddle_nn_ClipGradByNorm` , :ref:`api_paddle_nn_ClipGradByValue` .
     Parameter of weight Norm. Weight Norm is a reparameterization of the weight vectors
     in a neural network that decouples the magnitude of those weight vectors from
     their direction. Weight Norm has been implemented as discussed in this
     paper: `Weight Normalization: A Simple Reparameterization to Accelerate
     Training of Deep Neural Networks
     <https://arxiv.org/pdf/1602.07868.pdf>`_.
-    Note:
-        ``gradient_clip`` of ``ParamAttr`` HAS BEEN DEPRECATED since 2.0.
-        Please use ``need_clip`` in ``ParamAttr`` to speficiy the clip scope.
-        There are three clipping strategies: :ref:`api_paddle_nn_ClipGradByGlobalNorm` ,
-        :ref:`api_paddle_nn_ClipGradByNorm` , :ref:`api_paddle_nn_ClipGradByValue` .
     Args:
         dim(int, optional): Dimension over which to compute the norm. Dim is a non-negative
@@ -258,6 +258,7 @@ class WeightNormParamAttr(ParamAttr):
         need_clip (bool, optional): Whether the parameter gradient need to be cliped in optimizer. Default is True.
     Examples:
         .. code-block:: python
+
             import paddle
...
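And a static-graph sketch of `WeightNormParamAttr` as the weight attribute of an `fc` layer (values are illustrative; static mode only, as noted above):

```python
import paddle

paddle.enable_static()
data = paddle.static.data(name="data", shape=[3, 32, 32], dtype="float32")
fc = paddle.static.nn.fc(
    x=data,
    size=1000,
    weight_attr=paddle.static.WeightNormParamAttr(
        dim=None,  # norm computed over all dimensions
        name='weight_norm_param',
        initializer=paddle.nn.initializer.Constant(1.0),
        learning_rate=1.0,
        regularizer=paddle.regularizer.L2Decay(0.1),
        trainable=True,
        do_model_average=False,
        need_clip=True))
```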