Unverified commit 8cf00664 authored by Candy2Tang, committed by GitHub

[xdoctest][task 111] Reformat example code with google style in python/paddle/optimizer/rmsprop.py (#56227)

* [xdoctest][task 111] test=docs_preview

* fix whitespace test=docs_preview
Parent 480fbc44
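The commit converts the docstring examples to the google-style `>>>`/`...` doctest format that Paddle's xdoctest tooling checks. As a minimal sketch of that convention (using the stdlib `doctest` module rather than xdoctest, and a hypothetical `scale_lr` helper), statements begin with `>>> `, continuation lines of a multi-line statement begin with `... `, and the line after a statement is compared against its actual output:

```python
import doctest

def scale_lr(base_lr, scale):
    """Return a scaled learning rate.

    The example below uses the ``>>>``/``...`` style this commit
    converts the RMSProp docstring to: ``>>>`` starts a statement,
    ``...`` continues it, and the next line is the expected output.

    Examples:
        >>> scale_lr(0.5,
        ...          0.5)
        0.25
    """
    return base_lr * scale

# Run every ``>>>`` example in this module's docstrings.
results = doctest.testmod()
print(results.failed)  # 0: the example's output matched
```

xdoctest accepts the same syntax but is more lenient about whitespace and directives, which is why bulk-reformatting commits like this one standardize on it.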
@@ -86,18 +86,18 @@ class RMSProp(Optimizer):
         parameters (list|tuple, optional): List/Tuple of ``Tensor`` to update to minimize ``loss``.
             This parameter is required in dygraph mode. And you can specify different options for
             different parameter groups such as the learning rate, weight decay, etc,
-            then the parameters are list of dict. Note that the learning_rate in paramter groups
+            then the parameters are list of dict. Note that the learning_rate in parameter groups
             represents the scale of base learning_rate.
             The default value is None in static graph mode, at this time all parameters will be updated.
         weight_decay (float|WeightDecayRegularizer, optional): The strategy of regularization.
-            It canbe a float value as coeff of L2 regularization or \
+            It can be a float value as coeff of L2 regularization or \
             :ref:`api_fluid_regularizer_L1Decay`, :ref:`api_fluid_regularizer_L2Decay`.
             If a parameter has set regularizer using :ref:`api_fluid_ParamAttr` already,
             the regularization setting here in optimizer will be ignored for this parameter.
             Otherwise, the regularization setting here in optimizer will take effect.
             Default None, meaning there is no regularization.
-        grad_clip (GradientClipBase, optional): Gradient cliping strategy, it's an instance of
-            some derived class of ``GradientClipBase`` . There are three cliping strategies
+        grad_clip (GradientClipBase, optional): Gradient clipping strategy, it's an instance of
+            some derived class of ``GradientClipBase`` . There are three clipping strategies
             ( :ref:`api_fluid_clip_GradientClipByGlobalNorm` , :ref:`api_fluid_clip_GradientClipByNorm` ,
             :ref:`api_fluid_clip_GradientClipByValue` ). Default None, meaning there is no gradient clipping.
         name (str, optional): This parameter is used by developers to print debugging information.
@@ -106,40 +106,41 @@ class RMSProp(Optimizer):
     Examples:
         .. code-block:: python

-            import paddle
-            inp = paddle.rand([10,10], dtype="float32")
-            linear = paddle.nn.Linear(10, 10)
-            out = linear(inp)
-            loss = paddle.mean(out)
-            rmsprop = paddle.optimizer.RMSProp(learning_rate=0.1,
-                                               parameters=linear.parameters(),
-                                               weight_decay=0.01)
-            out.backward()
-            rmsprop.step()
-            rmsprop.clear_grad()
-            #Note that the learning_rate of linear_2 is 0.01.
-            linear_1 = paddle.nn.Linear(10, 10)
-            linear_2 = paddle.nn.Linear(10, 10)
-            inp = paddle.uniform(shape=[10, 10], min=-0.1, max=0.1)
-            out = linear_1(inp)
-            out = linear_2(out)
-            loss = paddle.mean(out)
-            rmsprop = paddle.optimizer.RMSProp(
-                learning_rate=0.1,
-                parameters=[{
-                    'params': linear_1.parameters()
-                }, {
-                    'params': linear_2.parameters(),
-                    'weight_decay': 0.001,
-                    'learning_rate': 0.1
-                }],
-                weight_decay=0.01)
-            out.backward()
-            rmsprop.step()
-            rmsprop.clear_grad()
+            >>> import paddle
+            >>> inp = paddle.rand([10,10], dtype="float32")
+            >>> linear = paddle.nn.Linear(10, 10)
+            >>> out = linear(inp)
+            >>> loss = paddle.mean(out)
+            >>> rmsprop = paddle.optimizer.RMSProp(learning_rate=0.1,
+            ...                                    parameters=linear.parameters(),
+            ...                                    weight_decay=0.01)
+            >>> out.backward()
+            >>> rmsprop.step()
+            >>> rmsprop.clear_grad()
+            >>> # Note that the learning_rate of linear_2 is 0.01.
+            >>> linear_1 = paddle.nn.Linear(10, 10)
+            >>> linear_2 = paddle.nn.Linear(10, 10)
+            >>> inp = paddle.uniform(shape=[10, 10], min=-0.1, max=0.1)
+            >>> out = linear_1(inp)
+            >>> out = linear_2(out)
+            >>> loss = paddle.mean(out)
+            >>> rmsprop = paddle.optimizer.RMSProp(
+            ...     learning_rate=0.1,
+            ...     parameters=[{
+            ...         'params': linear_1.parameters()
+            ...     }, {
+            ...         'params': linear_2.parameters(),
+            ...         'weight_decay': 0.001,
+            ...         'learning_rate': 0.1
+            ...     }],
+            ...     weight_decay=0.01
+            ... )
+            >>> out.backward()
+            >>> rmsprop.step()
+            >>> rmsprop.clear_grad()

     """
     _momentum_acc_str = "momentum"
......
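The docstring above notes that a group's `learning_rate` is a *scale* applied to the optimizer's base learning rate, not a replacement for it; that is why the example's comment says linear_2 trains at 0.01 (base 0.1 × group scale 0.1). A minimal pure-Python sketch of that composition (hypothetical `effective_lr` helper, not Paddle's implementation):

```python
# Sketch of how per-group learning rates compose with the base rate:
# each group's 'learning_rate' entry multiplies the base learning_rate,
# and groups without the entry fall back to the base rate unchanged.
def effective_lr(base_lr, groups):
    """Return {group_name: effective learning rate} for each group."""
    return {
        g["name"]: base_lr * g.get("learning_rate", 1.0)
        for g in groups
    }

groups = [
    {"name": "linear_1"},                        # no scale: uses the base rate
    {"name": "linear_2", "learning_rate": 0.1},  # scaled: 0.1 * 0.1 = 0.01
]
rates = effective_lr(0.1, groups)
print(rates["linear_1"])  # 0.1
```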