Unverified commit c86e8d14 authored by Zhou Wei, committed by GitHub

add doc of optimizer.set_lr, test=develop (#2251)

add doc of optimizer.set_lr
Parent a64ab913
...@@ -101,6 +101,49 @@ The Adadelta optimizer. For details, refer to the paper `ADADELTA: AN ADAPTIVE LEARNING
optimizer.minimize(out)
optimizer.clear_gradients()
.. py:method:: set_lr(value)

**Note:**

**1. This API only takes effect in** `Dygraph <../../user_guides/howto/dygraph/DyGraph.html>`_ **mode.**

Manually sets the learning rate of the current ``optimizer``. When a LearningRateDecay schedule is in use, this API cannot be used to set the learning rate manually, because the two would conflict.

Parameters:
    value (float|Variable) - The learning rate value to set.

Returns: None

**Code Example**

.. code-block:: python

    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        linear = fluid.dygraph.nn.Linear(10, 10)
        adam = fluid.optimizer.Adam(0.1, parameter_list=linear.parameters())

        # Set the learning rate manually with a Python float
        lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
        for i in range(5):
            adam.set_lr(lr_list[i])
            print("current lr is {}".format(adam.current_step_lr()))
        # Output:
        # current lr is 0.2
        # current lr is 0.3
        # current lr is 0.4
        # current lr is 0.5
        # current lr is 0.6

        # Set the learning rate with a framework Variable
        lr_var = fluid.layers.create_global_var(shape=[1], value=0.7, dtype='float32')
        adam.set_lr(lr_var)
        print("current lr is {}".format(adam.current_step_lr()))
        # Output:
        # current lr is 0.7
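The conflict with LearningRateDecay mentioned above can be illustrated with a minimal sketch. This is a hedged example, not part of the official documentation: it assumes that calling ``set_lr`` while a LearningRateDecay schedule (here ``fluid.dygraph.ExponentialDecay``, used only as one example of such a schedule) controls the learning rate is rejected by the framework; the exact exception type is not specified here, so the sketch catches a generic ``Exception``.

.. code-block:: python

    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        linear = fluid.dygraph.nn.Linear(10, 10)
        # The learning rate is driven by a LearningRateDecay schedule
        # (ExponentialDecay is just one example of such a schedule).
        adam = fluid.optimizer.Adam(
            learning_rate=fluid.dygraph.ExponentialDecay(
                learning_rate=0.1, decay_steps=1000, decay_rate=0.5),
            parameter_list=linear.parameters())

        try:
            # Assumption: set_lr is expected to be rejected here, because the
            # schedule and the manually set value would conflict.
            adam.set_lr(0.01)
        except Exception as e:
            print("set_lr was rejected: {}".format(e))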
.. py:method:: current_step_lr()
......
...@@ -120,6 +120,49 @@ The Adaptive Gradient optimizer (Adagrad for short) can
optimizer.minimize(out)
optimizer.clear_gradients()
.. py:method:: set_lr(value)

**Note:**

**1. This API only takes effect in** `Dygraph <../../user_guides/howto/dygraph/DyGraph.html>`_ **mode.**

Manually sets the learning rate of the current ``optimizer``. When a LearningRateDecay schedule is in use, this API cannot be used to set the learning rate manually, because the two would conflict.

Parameters:
    value (float|Variable) - The learning rate value to set.

Returns: None

**Code Example**

.. code-block:: python

    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        linear = fluid.dygraph.nn.Linear(10, 10)
        adam = fluid.optimizer.Adam(0.1, parameter_list=linear.parameters())

        # Set the learning rate manually with a Python float
        lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
        for i in range(5):
            adam.set_lr(lr_list[i])
            print("current lr is {}".format(adam.current_step_lr()))
        # Output:
        # current lr is 0.2
        # current lr is 0.3
        # current lr is 0.4
        # current lr is 0.5
        # current lr is 0.6

        # Set the learning rate with a framework Variable
        lr_var = fluid.layers.create_global_var(shape=[1], value=0.7, dtype='float32')
        adam.set_lr(lr_var)
        print("current lr is {}".format(adam.current_step_lr()))
        # Output:
        # current lr is 0.7
.. py:method:: current_step_lr()
......
...@@ -194,6 +194,49 @@ The Adam optimizer comes from Section 2 of the `Adam paper <https://arxiv.org/abs/1412.6980>`_
optimizer.minimize(out)
optimizer.clear_gradients()
.. py:method:: set_lr(value)

**Note:**

**1. This API only takes effect in** `Dygraph <../../user_guides/howto/dygraph/DyGraph.html>`_ **mode.**

Manually sets the learning rate of the current ``optimizer``. When a LearningRateDecay schedule is in use, this API cannot be used to set the learning rate manually, because the two would conflict.

Parameters:
    value (float|Variable) - The learning rate value to set.

Returns: None

**Code Example**

.. code-block:: python

    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        linear = fluid.dygraph.nn.Linear(10, 10)
        adam = fluid.optimizer.Adam(0.1, parameter_list=linear.parameters())

        # Set the learning rate manually with a Python float
        lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
        for i in range(5):
            adam.set_lr(lr_list[i])
            print("current lr is {}".format(adam.current_step_lr()))
        # Output:
        # current lr is 0.2
        # current lr is 0.3
        # current lr is 0.4
        # current lr is 0.5
        # current lr is 0.6

        # Set the learning rate with a framework Variable
        lr_var = fluid.layers.create_global_var(shape=[1], value=0.7, dtype='float32')
        adam.set_lr(lr_var)
        print("current lr is {}".format(adam.current_step_lr()))
        # Output:
        # current lr is 0.7
.. py:method:: current_step_lr()
......
...@@ -134,6 +134,49 @@ The Adamax optimizer follows Section 7 of the `Adam paper <https://arxiv.org/abs/1412.6980>`_
optimizer.minimize(out)
optimizer.clear_gradients()
.. py:method:: set_lr(value)

**Note:**

**1. This API only takes effect in** `Dygraph <../../user_guides/howto/dygraph/DyGraph.html>`_ **mode.**

Manually sets the learning rate of the current ``optimizer``. When a LearningRateDecay schedule is in use, this API cannot be used to set the learning rate manually, because the two would conflict.

Parameters:
    value (float|Variable) - The learning rate value to set.

Returns: None

**Code Example**

.. code-block:: python

    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        linear = fluid.dygraph.nn.Linear(10, 10)
        adam = fluid.optimizer.Adam(0.1, parameter_list=linear.parameters())

        # Set the learning rate manually with a Python float
        lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
        for i in range(5):
            adam.set_lr(lr_list[i])
            print("current lr is {}".format(adam.current_step_lr()))
        # Output:
        # current lr is 0.2
        # current lr is 0.3
        # current lr is 0.4
        # current lr is 0.5
        # current lr is 0.6

        # Set the learning rate with a framework Variable
        lr_var = fluid.layers.create_global_var(shape=[1], value=0.7, dtype='float32')
        adam.set_lr(lr_var)
        print("current lr is {}".format(adam.current_step_lr()))
        # Output:
        # current lr is 0.7
.. py:method:: current_step_lr()
......
...@@ -114,6 +114,49 @@ The Decayed Adagrad optimizer can be regarded as a decay-rate variant of `Adagrad <http:/
optimizer.minimize(out)
optimizer.clear_gradients()
.. py:method:: set_lr(value)

**Note:**

**1. This API only takes effect in** `Dygraph <../../user_guides/howto/dygraph/DyGraph.html>`_ **mode.**

Manually sets the learning rate of the current ``optimizer``. When a LearningRateDecay schedule is in use, this API cannot be used to set the learning rate manually, because the two would conflict.

Parameters:
    value (float|Variable) - The learning rate value to set.

Returns: None

**Code Example**

.. code-block:: python

    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        linear = fluid.dygraph.nn.Linear(10, 10)
        adam = fluid.optimizer.Adam(0.1, parameter_list=linear.parameters())

        # Set the learning rate manually with a Python float
        lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
        for i in range(5):
            adam.set_lr(lr_list[i])
            print("current lr is {}".format(adam.current_step_lr()))
        # Output:
        # current lr is 0.2
        # current lr is 0.3
        # current lr is 0.4
        # current lr is 0.5
        # current lr is 0.6

        # Set the learning rate with a framework Variable
        lr_var = fluid.layers.create_global_var(shape=[1], value=0.7, dtype='float32')
        adam.set_lr(lr_var)
        print("current lr is {}".format(adam.current_step_lr()))
        # Output:
        # current lr is 0.7
.. py:method:: current_step_lr()
......
...@@ -125,6 +125,48 @@ Original FTRL paper: ( `https://www.eecs.tufts.edu/~dsculley/papers/ad-click-predi
optimizer.minimize(out)
optimizer.clear_gradients()
.. py:method:: set_lr(value)

**Note:**

**1. This API only takes effect in** `Dygraph <../../user_guides/howto/dygraph/DyGraph.html>`_ **mode.**

Manually sets the learning rate of the current ``optimizer``. When a LearningRateDecay schedule is in use, this API cannot be used to set the learning rate manually, because the two would conflict.

Parameters:
    value (float|Variable) - The learning rate value to set.

Returns: None

**Code Example**

.. code-block:: python

    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        linear = fluid.dygraph.nn.Linear(10, 10)
        adam = fluid.optimizer.Adam(0.1, parameter_list=linear.parameters())

        # Set the learning rate manually with a Python float
        lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
        for i in range(5):
            adam.set_lr(lr_list[i])
            print("current lr is {}".format(adam.current_step_lr()))
        # Output:
        # current lr is 0.2
        # current lr is 0.3
        # current lr is 0.4
        # current lr is 0.5
        # current lr is 0.6

        # Set the learning rate with a framework Variable
        lr_var = fluid.layers.create_global_var(shape=[1], value=0.7, dtype='float32')
        adam.set_lr(lr_var)
        print("current lr is {}".format(adam.current_step_lr()))
        # Output:
        # current lr is 0.7
.. py:method:: current_step_lr()
......
...@@ -131,6 +131,49 @@ Deep Learning: Training BERT in 76 minutes <https://arxiv.org/pdf/1904.00962.pdf
optimizer.minimize(out)
optimizer.clear_gradients()
.. py:method:: set_lr(value)

**Note:**

**1. This API only takes effect in** `Dygraph <../../user_guides/howto/dygraph/DyGraph.html>`_ **mode.**

Manually sets the learning rate of the current ``optimizer``. When a LearningRateDecay schedule is in use, this API cannot be used to set the learning rate manually, because the two would conflict.

Parameters:
    value (float|Variable) - The learning rate value to set.

Returns: None

**Code Example**

.. code-block:: python

    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        linear = fluid.dygraph.nn.Linear(10, 10)
        adam = fluid.optimizer.Adam(0.1, parameter_list=linear.parameters())

        # Set the learning rate manually with a Python float
        lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
        for i in range(5):
            adam.set_lr(lr_list[i])
            print("current lr is {}".format(adam.current_step_lr()))
        # Output:
        # current lr is 0.2
        # current lr is 0.3
        # current lr is 0.4
        # current lr is 0.5
        # current lr is 0.6

        # Set the learning rate with a framework Variable
        lr_var = fluid.layers.create_global_var(shape=[1], value=0.7, dtype='float32')
        adam.set_lr(lr_var)
        print("current lr is {}".format(adam.current_step_lr()))
        # Output:
        # current lr is 0.7
.. py:method:: current_step_lr()
......
...@@ -101,6 +101,49 @@ LarsMomentumOptimizer
optimizer.minimize(out)
optimizer.clear_gradients()
.. py:method:: set_lr(value)

**Note:**

**1. This API only takes effect in** `Dygraph <../../user_guides/howto/dygraph/DyGraph.html>`_ **mode.**

Manually sets the learning rate of the current ``optimizer``. When a LearningRateDecay schedule is in use, this API cannot be used to set the learning rate manually, because the two would conflict.

Parameters:
    value (float|Variable) - The learning rate value to set.

Returns: None

**Code Example**

.. code-block:: python

    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        linear = fluid.dygraph.nn.Linear(10, 10)
        adam = fluid.optimizer.Adam(0.1, parameter_list=linear.parameters())

        # Set the learning rate manually with a Python float
        lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
        for i in range(5):
            adam.set_lr(lr_list[i])
            print("current lr is {}".format(adam.current_step_lr()))
        # Output:
        # current lr is 0.2
        # current lr is 0.3
        # current lr is 0.4
        # current lr is 0.5
        # current lr is 0.6

        # Set the learning rate with a framework Variable
        lr_var = fluid.layers.create_global_var(shape=[1], value=0.7, dtype='float32')
        adam.set_lr(lr_var)
        print("current lr is {}".format(adam.current_step_lr()))
        # Output:
        # current lr is 0.7
.. py:method:: current_step_lr()
......
...@@ -134,6 +134,50 @@ MomentumOptimizer
optimizer.clear_gradients()
.. py:method:: set_lr(value)

**Note:**

**1. This API only takes effect in** `Dygraph <../../user_guides/howto/dygraph/DyGraph.html>`_ **mode.**

Manually sets the learning rate of the current ``optimizer``. When a LearningRateDecay schedule is in use, this API cannot be used to set the learning rate manually, because the two would conflict.

Parameters:
    value (float|Variable) - The learning rate value to set.

Returns: None

**Code Example**

.. code-block:: python

    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        linear = fluid.dygraph.nn.Linear(10, 10)
        adam = fluid.optimizer.Adam(0.1, parameter_list=linear.parameters())

        # Set the learning rate manually with a Python float
        lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
        for i in range(5):
            adam.set_lr(lr_list[i])
            print("current lr is {}".format(adam.current_step_lr()))
        # Output:
        # current lr is 0.2
        # current lr is 0.3
        # current lr is 0.4
        # current lr is 0.5
        # current lr is 0.6

        # Set the learning rate with a framework Variable
        lr_var = fluid.layers.create_global_var(shape=[1], value=0.7, dtype='float32')
        adam.set_lr(lr_var)
        print("current lr is {}".format(adam.current_step_lr()))
        # Output:
        # current lr is 0.7
.. py:method:: current_step_lr()

**Note:**
......
...@@ -151,6 +151,48 @@ RMSPropOptimizer
optimizer.minimize(out)
optimizer.clear_gradients()
.. py:method:: set_lr(value)

**Note:**

**1. This API only takes effect in** `Dygraph <../../user_guides/howto/dygraph/DyGraph.html>`_ **mode.**

Manually sets the learning rate of the current ``optimizer``. When a LearningRateDecay schedule is in use, this API cannot be used to set the learning rate manually, because the two would conflict.

Parameters:
    value (float|Variable) - The learning rate value to set.

Returns: None

**Code Example**

.. code-block:: python

    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        linear = fluid.dygraph.nn.Linear(10, 10)
        adam = fluid.optimizer.Adam(0.1, parameter_list=linear.parameters())

        # Set the learning rate manually with a Python float
        lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
        for i in range(5):
            adam.set_lr(lr_list[i])
            print("current lr is {}".format(adam.current_step_lr()))
        # Output:
        # current lr is 0.2
        # current lr is 0.3
        # current lr is 0.4
        # current lr is 0.5
        # current lr is 0.6

        # Set the learning rate with a framework Variable
        lr_var = fluid.layers.create_global_var(shape=[1], value=0.7, dtype='float32')
        adam.set_lr(lr_var)
        print("current lr is {}".format(adam.current_step_lr()))
        # Output:
        # current lr is 0.7
.. py:method:: current_step_lr()
......
...@@ -127,6 +127,48 @@ SGDOptimizer
optimizer.minimize(out)
optimizer.clear_gradients()
.. py:method:: set_lr(value)

**Note:**

**1. This API only takes effect in** `Dygraph <../../user_guides/howto/dygraph/DyGraph.html>`_ **mode.**

Manually sets the learning rate of the current ``optimizer``. When a LearningRateDecay schedule is in use, this API cannot be used to set the learning rate manually, because the two would conflict.

Parameters:
    value (float|Variable) - The learning rate value to set.

Returns: None

**Code Example**

.. code-block:: python

    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        linear = fluid.dygraph.nn.Linear(10, 10)
        adam = fluid.optimizer.Adam(0.1, parameter_list=linear.parameters())

        # Set the learning rate manually with a Python float
        lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
        for i in range(5):
            adam.set_lr(lr_list[i])
            print("current lr is {}".format(adam.current_step_lr()))
        # Output:
        # current lr is 0.2
        # current lr is 0.3
        # current lr is 0.4
        # current lr is 0.5
        # current lr is 0.6

        # Set the learning rate with a framework Variable
        lr_var = fluid.layers.create_global_var(shape=[1], value=0.7, dtype='float32')
        adam.set_lr(lr_var)
        print("current lr is {}".format(adam.current_step_lr()))
        # Output:
        # current lr is 0.7
.. py:method:: current_step_lr()
......