Unverified · Commit 2545db89 authored by Zhou Wei, committed by GitHub

add doc of new API:StepDecay, API:MultiStepDecay,test=develop (#2266)

Parent fc39f229
@@ -15,6 +15,7 @@ fluid.dygraph
dygraph_cn/Conv2DTranspose_cn.rst
dygraph_cn/Conv3D_cn.rst
dygraph_cn/Conv3DTranspose_cn.rst
+dygraph_cn/CosineAnnealingDecay_cn.rst
dygraph_cn/CosineDecay_cn.rst
dygraph_cn/declarative_cn.rst
dygraph_cn/Dropout_cn.rst
@@ -27,11 +28,13 @@ fluid.dygraph
dygraph_cn/guard_cn.rst
dygraph_cn/InstanceNorm_cn.rst
dygraph_cn/InverseTimeDecay_cn.rst
+dygraph_cn/LambdaDecay_cn.rst
dygraph_cn/Layer_cn.rst
dygraph_cn/LayerList_cn.rst
dygraph_cn/LayerNorm_cn.rst
dygraph_cn/Linear_cn.rst
dygraph_cn/load_dygraph_cn.rst
+dygraph_cn/MultiStepDecay_cn.rst
dygraph_cn/NaturalExpDecay_cn.rst
dygraph_cn/NCE_cn.rst
dygraph_cn/NoamDecay_cn.rst
@@ -48,6 +51,7 @@ fluid.dygraph
dygraph_cn/save_dygraph_cn.rst
dygraph_cn/Sequential_cn.rst
dygraph_cn/SpectralNorm_cn.rst
+dygraph_cn/StepDecay_cn.rst
dygraph_cn/to_variable_cn.rst
dygraph_cn/TracedLayer_cn.rst
dygraph_cn/Tracer_cn.rst
......

.. _cn_api_fluid_dygraph_MultiStepDecay:

MultiStepDecay
-------------------------------

.. py:class:: paddle.fluid.dygraph.MultiStepDecay(learning_rate, milestones, decay_rate=0.1)

:api_attr: imperative programming mode (dynamic graph)

This API provides ``MultiStep`` learning rate decay: each time one of the given milestone epochs is reached, the learning rate is multiplied by ``decay_rate``.

The decay schedule can be described as follows:

.. code-block:: text
    learning_rate = 0.5
    milestones = [30, 50]
    decay_rate = 0.1

    if epoch < 30:
        learning_rate = 0.5
    elif epoch < 50:
        learning_rate = 0.05
    else:
        learning_rate = 0.005
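
Equivalently, the decayed rate is the initial rate multiplied by ``decay_rate`` once for every milestone already passed. A minimal closed-form sketch in plain Python (for illustration only, not the fluid implementation):

.. code-block:: python

    import bisect

    # Closed-form MultiStepDecay: decay once per milestone already reached.
    def multi_step_lr(learning_rate, milestones, decay_rate, epoch):
        return learning_rate * decay_rate ** bisect.bisect_right(milestones, epoch)

    print(multi_step_lr(0.5, [30, 50], 0.1, 29))  # 0.5
    print(multi_step_lr(0.5, [30, 50], 0.1, 30))  # 0.05 (up to float rounding)
    print(multi_step_lr(0.5, [30, 50], 0.1, 50))  # 0.005 (up to float rounding)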

Parameters:
    - **learning_rate** (float|int) - the initial learning rate, as a Python float or int.
    - **milestones** (tuple|list) - a list or tuple of epoch indices, which must be increasing; the learning rate is decayed each time one of them is reached.
    - **decay_rate** (float, optional) - the decay rate of the learning rate: ``new_lr = origin_lr * decay_rate``. It should be less than 1.0. Default: 0.1.

Returns: None

**Code example**:

.. code-block:: python
    import paddle.fluid as fluid
    import numpy as np

    with fluid.dygraph.guard():
        x = np.random.uniform(-1, 1, [10, 10]).astype("float32")
        linear = fluid.dygraph.Linear(10, 10)
        input = fluid.dygraph.to_variable(x)
        scheduler = fluid.dygraph.MultiStepDecay(0.5, milestones=[3, 5])
        adam = fluid.optimizer.Adam(learning_rate=scheduler, parameter_list=linear.parameters())

        for epoch in range(6):
            for batch_id in range(5):
                out = linear(input)
                loss = fluid.layers.reduce_mean(out)
                adam.minimize(loss)
            # Advance the epoch count; the new rate takes effect at the next minimize call.
            scheduler.epoch()
            print("epoch:{}, current lr is {}".format(epoch, adam.current_step_lr()))
            # epoch:0, current lr is 0.5
            # epoch:1, current lr is 0.5
            # epoch:2, current lr is 0.5
            # epoch:3, current lr is 0.05
            # epoch:4, current lr is 0.05
            # epoch:5, current lr is 0.005

.. py:method:: epoch(epoch=None)

Adjusts the learning rate according to the given epoch. The adjusted learning rate takes effect at the next call to ``optimizer.minimize``.

Parameters:
    - **epoch** (int|float, optional) - the current epoch number. Default: None; in that case the epoch count is accumulated automatically, increasing by one per call.

Returns: None

**Code example**:

Refer to the example code above.
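
When the epoch number is passed explicitly, for example to realign the scheduler after restoring a checkpoint, the rate is recomputed for that epoch. A minimal sketch (the checkpoint logic and the ``start_epoch`` value are hypothetical, not part of this API):

.. code-block:: python

    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        scheduler = fluid.dygraph.MultiStepDecay(0.5, milestones=[3, 5])
        start_epoch = 4  # assumed to have been read back from a checkpoint
        scheduler.epoch(start_epoch)
        # With milestones=[3, 5], epoch 4 lies past the first milestone,
        # so the rate becomes 0.5 * 0.1 = 0.05 at the next minimize call.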

.. _cn_api_fluid_dygraph_StepDecay:

StepDecay
-------------------------------

.. py:class:: paddle.fluid.dygraph.StepDecay(learning_rate, step_size, decay_rate=0.1)

:api_attr: imperative programming mode (dynamic graph)

This API provides step learning rate decay: every ``step_size`` epochs, the learning rate is decayed by ``decay_rate``.

The decay schedule can be described as follows:

.. code-block:: text
    learning_rate = 0.5
    step_size = 30
    decay_rate = 0.1

    learning_rate = 0.5     if epoch < 30
    learning_rate = 0.05    if 30 <= epoch < 60
    learning_rate = 0.005   if 60 <= epoch < 90
    ...

Parameters:
    - **learning_rate** (float|int) - the initial learning rate, as a Python float or int.
    - **step_size** (int) - the number of epochs between two successive decays of the learning rate.
    - **decay_rate** (float, optional) - the decay rate of the learning rate: ``new_lr = origin_lr * decay_rate``. It should be less than 1.0. Default: 0.1.

Returns: None

**Code example**:

.. code-block:: python
    import paddle.fluid as fluid
    import numpy as np

    with fluid.dygraph.guard():
        x = np.random.uniform(-1, 1, [10, 10]).astype("float32")
        linear = fluid.dygraph.Linear(10, 10)
        input = fluid.dygraph.to_variable(x)
        scheduler = fluid.dygraph.StepDecay(0.5, step_size=3)
        adam = fluid.optimizer.Adam(learning_rate=scheduler, parameter_list=linear.parameters())

        for epoch in range(9):
            for batch_id in range(5):
                out = linear(input)
                loss = fluid.layers.reduce_mean(out)
                adam.minimize(loss)
            # Advance the epoch count; the new rate takes effect at the next minimize call.
            scheduler.epoch()
            print("epoch:{}, current lr is {}".format(epoch, adam.current_step_lr()))
            # epoch:0, current lr is 0.5
            # epoch:1, current lr is 0.5
            # epoch:2, current lr is 0.5
            # epoch:3, current lr is 0.05
            # epoch:4, current lr is 0.05
            # epoch:5, current lr is 0.05
            # epoch:6, current lr is 0.005
            # epoch:7, current lr is 0.005
            # epoch:8, current lr is 0.005
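
The values printed above follow the closed form ``lr = learning_rate * decay_rate ** (epoch // step_size)``. A minimal sketch in plain Python (for illustration only, not the fluid implementation):

.. code-block:: python

    # Closed-form StepDecay: decay once per completed step_size interval.
    def step_decay_lr(learning_rate, step_size, decay_rate, epoch):
        return learning_rate * decay_rate ** (epoch // step_size)

    for epoch in range(9):
        print(epoch, step_decay_lr(0.5, 3, 0.1, epoch))
    # epochs 0-2 -> 0.5, epochs 3-5 -> 0.05, epochs 6-8 -> 0.005 (up to float rounding)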

.. py:method:: epoch(epoch=None)

Adjusts the learning rate according to the given epoch. The adjusted learning rate takes effect at the next call to ``optimizer.minimize``.

Parameters:
    - **epoch** (int|float, optional) - the current epoch number. Default: None; in that case the epoch count is accumulated automatically, increasing by one per call.

Returns: None

**Code example**:

Refer to the example code above.
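
As with ``MultiStepDecay``, the epoch number can also be supplied explicitly on each iteration instead of relying on automatic accumulation. A minimal sketch (the training body is elided; the ``epoch + 1`` convention is assumed here to mirror the automatic count):

.. code-block:: python

    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        scheduler = fluid.dygraph.StepDecay(0.5, step_size=3)
        for epoch in range(9):
            # ... run the training batches for this epoch ...
            scheduler.epoch(epoch + 1)  # set the epoch count explicitly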