.. _cn_api_fluid_dygraph_MultiStepDecay:

MultiStepDecay
-------------------------------


.. py:class:: paddle.fluid.dygraph.MultiStepDecay(learning_rate, milestones, decay_rate=0.1)

:api_attr: Imperative programming mode (dynamic graph)


This API provides the ``MultiStep`` learning rate decay strategy: the learning rate is multiplied by ``decay_rate`` each time the epoch count reaches one of the given ``milestones``.

The decay schedule can be described as follows:

.. code-block:: text

    learning_rate = 0.5
    milestones = [30, 50]
    decay_rate = 0.1
    if epoch < 30:
        learning_rate = 0.5
    elif epoch < 50:
        learning_rate = 0.05
    else:
        learning_rate = 0.005
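
Equivalently, the learning rate for any epoch can be computed in closed form by counting how many milestones have already been passed. The following sketch is only for illustration (``multi_step_lr`` is a hypothetical helper, not part of the Paddle API):

.. code-block:: python

    import bisect

    def multi_step_lr(epoch, base_lr=0.5, milestones=(30, 50), decay_rate=0.1):
        # The number of milestones already reached determines the decay exponent.
        return base_lr * decay_rate ** bisect.bisect_right(milestones, epoch)

    print(multi_step_lr(10), multi_step_lr(30), multi_step_lr(50))
    # approximately: 0.5 0.05 0.005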

Parameters:
    - **learning_rate** (float|int) - The initial learning rate. It can be a Python float or int.
    - **milestones** (tuple|list) - A list or tuple of epoch numbers at which the learning rate is decayed. The values must be increasing.
    - **decay_rate** (float, optional) - The decay rate of the learning rate: ``new_lr = origin_lr * decay_rate``. It should be less than 1.0. Default: 0.1.

Returns: None

**Code example**:

    .. code-block:: python
        
        import paddle.fluid as fluid
        import numpy as np

        with fluid.dygraph.guard():
            x = np.random.uniform(-1, 1, [10, 10]).astype("float32")
            linear = fluid.dygraph.Linear(10, 10)
            input = fluid.dygraph.to_variable(x)
            # Decay the learning rate by 0.1 (the default decay_rate) at epochs 3 and 5.
            scheduler = fluid.dygraph.MultiStepDecay(0.5, milestones=[3, 5])
            adam = fluid.optimizer.Adam(learning_rate=scheduler, parameter_list=linear.parameters())
            for epoch in range(6):
                for batch_id in range(5):
                    out = linear(input)
                    loss = fluid.layers.reduce_mean(out)
                    adam.minimize(loss)
                # Notify the scheduler that one epoch has finished.
                scheduler.epoch()
                print("epoch:{}, current lr is {}".format(epoch, adam.current_step_lr()))
                # epoch:0, current lr is 0.5
                # epoch:1, current lr is 0.5
                # epoch:2, current lr is 0.5
                # epoch:3, current lr is 0.05
                # epoch:4, current lr is 0.05
                # epoch:5, current lr is 0.005

.. py:method:: epoch(epoch=None)

Adjust the learning rate according to the current epoch number. The adjusted learning rate will take effect the next time ``optimizer.minimize`` is called.

Parameters:
  - **epoch** (int|float, optional) - The current epoch number. Default: None, in which case the epoch number is accumulated automatically.

Returns: None


**Code example**:

    Please refer to the example code above.
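
    In addition, ``epoch`` can be set explicitly, for example when resuming training from a checkpoint. A minimal sketch, assuming the ``scheduler`` created in the example above (``milestones=[3, 5]``, ``decay_rate=0.1``):

    .. code-block:: python

        # Jump the scheduler to a specific epoch; the adjusted learning rate
        # takes effect the next time optimizer.minimize is called.
        scheduler.epoch(4)   # 3 <= 4 < 5, so the learning rate becomes 0.5 * 0.1 = 0.05
        scheduler.epoch()    # no argument: the internal epoch number is accumulated automatically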