Unverified Commit 66597bac authored by Candy2Tang, committed by GitHub

[xdoctest][task 110] Reformat example code with google style in python/paddle/optimizer/momentum.py (#56226)

* [xdoctest][task 110] test=docs_preview

* fix whitespace test=docs_preview
Parent 8cf00664
@@ -54,18 +54,18 @@ class Momentum(Optimizer):
parameters (list|tuple, optional): List|Tuple of ``Tensor`` to update to minimize ``loss``. \
This parameter is required in dygraph mode. And you can specify different options for \
different parameter groups such as the learning rate, weight decay, etc, \
-            then the parameters are list of dict. Note that the learning_rate in paramter groups \
+            then the parameters are list of dict. Note that the learning_rate in parameter groups \
represents the scale of base learning_rate. \
The default value is None in static graph mode, at this time all parameters will be updated.
weight_decay (float|WeightDecayRegularizer, optional): The strategy of regularization. \
-            It canbe a float value as coeff of L2 regularization or \
+            It can be a float value as coeff of L2 regularization or \
:ref:`api_fluid_regularizer_L1Decay`, :ref:`api_fluid_regularizer_L2Decay`.
If a parameter has set regularizer using :ref:`api_fluid_ParamAttr` already, \
the regularization setting here in optimizer will be ignored for this parameter. \
Otherwise, the regularization setting here in optimizer will take effect. \
Default None, meaning there is no regularization.
-        grad_clip (GradientClipBase, optional): Gradient cliping strategy, it's an instance of
-            some derived class of ``GradientClipBase`` . There are three cliping strategies
+        grad_clip (GradientClipBase, optional): Gradient clipping strategy, it's an instance of
+            some derived class of ``GradientClipBase`` . There are three clipping strategies
( :ref:`api_fluid_clip_GradientClipByGlobalNorm` , :ref:`api_fluid_clip_GradientClipByNorm` ,
:ref:`api_fluid_clip_GradientClipByValue` ). Default None, meaning there is no gradient clipping.
multi_precision (bool, optional): Whether to use multi-precision during weight updating. Default is false.
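An aside, not part of the diff: the ``weight_decay`` and ``grad_clip`` options described in the docstring above are easiest to see together in a small runnable sketch. This assumes the Paddle 2.x names ``paddle.regularizer.L2Decay`` and ``paddle.nn.ClipGradByGlobalNorm``, the current counterparts of the ``fluid`` references in this docstring:

    import paddle

    linear = paddle.nn.Linear(10, 10)
    inp = paddle.uniform([10, 10], dtype="float32", min=-0.1, max=0.1)
    loss = paddle.mean(linear(inp))

    # weight_decay accepts either a float L2 coefficient or a regularizer
    # instance; grad_clip takes one of the three clipping strategies.
    momentum = paddle.optimizer.Momentum(
        learning_rate=0.1,
        parameters=linear.parameters(),
        weight_decay=paddle.regularizer.L2Decay(0.01),
        grad_clip=paddle.nn.ClipGradByGlobalNorm(clip_norm=1.0),
    )

    loss.backward()
    momentum.step()
    momentum.clear_grad()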
@@ -79,41 +79,42 @@ class Momentum(Optimizer):
Examples:
.. code-block:: python
-            import paddle
-            inp = paddle.uniform([10, 10], dtype="float32", min=-0.1, max=0.1)
-            linear = paddle.nn.Linear(10, 10)
-            inp = paddle.to_tensor(inp)
-            out = linear(inp)
-            loss = paddle.mean(out)
-            beta1 = paddle.to_tensor([0.9], dtype="float32")
-            beta2 = paddle.to_tensor([0.99], dtype="float32")
-            momentum = paddle.optimizer.Momentum(learning_rate=0.1, parameters=linear.parameters(), weight_decay=0.01)
-            back = out.backward()
-            momentum.step()
-            momentum.clear_grad()
-            #Note that the learning_rate of linear_2 is 0.01.
-            linear_1 = paddle.nn.Linear(10, 10)
-            linear_2 = paddle.nn.Linear(10, 10)
-            inp = paddle.uniform(shape=[10, 10], min=-0.1, max=0.1)
-            out = linear_1(inp)
-            out = linear_2(out)
-            loss = paddle.mean(out)
-            momentum = paddle.optimizer.Momentum(
-                learning_rate=0.1,
-                parameters=[{
-                    'params': linear_1.parameters()
-                }, {
-                    'params': linear_2.parameters(),
-                    'weight_decay': 0.001,
-                    'learning_rate': 0.1
-                }],
-                weight_decay=0.01,
-                momentum=0.9)
-            out.backward()
-            momentum.step()
-            momentum.clear_grad()
+            >>> import paddle
+            >>> inp = paddle.uniform([10, 10], dtype="float32", min=-0.1, max=0.1)
+            >>> linear = paddle.nn.Linear(10, 10)
+            >>> inp = paddle.to_tensor(inp)
+            >>> out = linear(inp)
+            >>> loss = paddle.mean(out)
+            >>> beta1 = paddle.to_tensor([0.9], dtype="float32")
+            >>> beta2 = paddle.to_tensor([0.99], dtype="float32")
+            >>> momentum = paddle.optimizer.Momentum(learning_rate=0.1, parameters=linear.parameters(), weight_decay=0.01)
+            >>> back = out.backward()
+            >>> momentum.step()
+            >>> momentum.clear_grad()
+            >>> # Note that the learning_rate of linear_2 is 0.01.
+            >>> linear_1 = paddle.nn.Linear(10, 10)
+            >>> linear_2 = paddle.nn.Linear(10, 10)
+            >>> inp = paddle.uniform(shape=[10, 10], min=-0.1, max=0.1)
+            >>> out = linear_1(inp)
+            >>> out = linear_2(out)
+            >>> loss = paddle.mean(out)
+            >>> momentum = paddle.optimizer.Momentum(
+            ...     learning_rate=0.1,
+            ...     parameters=[{
+            ...         'params': linear_1.parameters()
+            ...     }, {
+            ...         'params': linear_2.parameters(),
+            ...         'weight_decay': 0.001,
+            ...         'learning_rate': 0.1
+            ...     }],
+            ...     weight_decay=0.01,
+            ...     momentum=0.9
+            ... )
+            >>> out.backward()
+            >>> momentum.step()
+            >>> momentum.clear_grad()
"""
_velocity_acc_str = "velocity"
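Since the point of this change is to make the example collectible by xdoctest, here is a sketch of how one might check the reformatted block locally. It assumes a standard xdoctest install; the exact invocation PaddlePaddle's CI uses may differ:

    import xdoctest

    # Collect and run every google-style ">>>" example found in the file.
    xdoctest.doctest_module("python/paddle/optimizer/momentum.py", command="all")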