Commit 23c32aa8 (unverified)
Authored by zhouweiwei2014 on Jul 27, 2021; committed by GitHub on Jul 27, 2021
Parent: 81fe3ac9

add args check for learning rate scheduler API (#34394)
Showing 1 changed file with 12 additions and 4 deletions.

python/paddle/optimizer/lr.py (+12, -4)
@@ -570,7 +570,7 @@ class PolynomialDecay(LRScheduler):
     Args:
         learning_rate (float): The initial learning rate. It is a python float number.
-        decay_steps(int): The decay step size. It determines the decay cycle.
+        decay_steps(int): The decay step size. It determines the decay cycle. It must be a positive integer.
         end_lr(float, optional): The minimum final learning rate. Default: 0.0001.
         power(float, optional): Power of polynomial. Default: 1.0.
         cycle(bool, optional): Whether the learning rate rises again. If True, then the learning rate will rise when it decrease
@@ -639,6 +639,8 @@ class PolynomialDecay(LRScheduler):
                  cycle=False,
                  last_epoch=-1,
                  verbose=False):
+        assert decay_steps > 0 and isinstance(
+            decay_steps, int), " 'decay_steps' must be a positive integer."
         self.decay_steps = decay_steps
         self.end_lr = end_lr
         self.power = power
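A minimal sketch of how the new check surfaces to callers, assuming a Paddle build that includes this commit (PolynomialDecay and its signature are from the public paddle.optimizer.lr API):

import paddle

# Valid: decay_steps is a positive int, so construction succeeds.
sched = paddle.optimizer.lr.PolynomialDecay(learning_rate=0.5, decay_steps=20)

# Invalid: zero (or a non-int) now fails fast at construction time.
try:
    paddle.optimizer.lr.PolynomialDecay(learning_rate=0.5, decay_steps=0)
except AssertionError as e:
    print(e)  # prints: 'decay_steps' must be a positive integer.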
@@ -688,7 +690,7 @@ class LinearWarmup(LRScheduler):
     Args:
         learning_rate (float|LRScheduler): The learning rate after warm-up. It is a python float number or any subclass of ``LRScheduler`` .
-        warmup_steps (int): total steps of warm up.
+        warmup_steps (int): total steps of warm up. It must be a positive integer.
         start_lr (float): Initial learning rate of warm up.
         end_lr (float): Final learning rate of warm up.
         last_epoch (int, optional): The index of last epoch. Can be set to restart training. Default: -1, means initial learning rate.
@@ -763,6 +765,8 @@ class LinearWarmup(LRScheduler):
                 "the type of learning_rate should be [int, float or LRScheduler], the current type is {}".
                 format(learning_rate))
         self.learning_rate = learning_rate
+        assert warmup_steps > 0 and isinstance(
+            warmup_steps, int), " 'warmup_steps' must be a positive integer."
         self.warmup_steps = warmup_steps
         self.start_lr = start_lr
         self.end_lr = end_lr
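The same pattern applies to LinearWarmup; a short sketch under the same assumption (the constructor arguments shown are those documented above):

import paddle

# warmup_steps must now be a positive int; start_lr/end_lr are unchanged.
warmup = paddle.optimizer.lr.LinearWarmup(
    learning_rate=0.5, warmup_steps=20, start_lr=0.0, end_lr=0.5)

try:
    paddle.optimizer.lr.LinearWarmup(
        learning_rate=0.5, warmup_steps=-1, start_lr=0.0, end_lr=0.5)
except AssertionError as e:
    print(e)  # prints: 'warmup_steps' must be a positive integer.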
@@ -1010,7 +1014,7 @@ class StepDecay(LRScheduler):
     Args:
         learning_rate (float): The initial learning rate. It is a python float number.
-        step_size (int): the interval to update.
+        step_size (int): the interval to update. It must be a positive integer.
         gamma (float, optional): The Ratio that the learning rate will be reduced. ``new_lr = origin_lr * gamma`` .
             It should be less than 1.0. Default: 0.1.
         last_epoch (int, optional): The index of last epoch. Can be set to restart training. Default: -1, means initial learning rate.
@@ -1083,6 +1087,8 @@ class StepDecay(LRScheduler):
         if gamma >= 1.0:
             raise ValueError('gamma should be < 1.0.')
+        assert step_size > 0 and isinstance(
+            step_size, int), " 'step_size' must be a positive integer."
         self.step_size = step_size
         self.gamma = gamma
         super(StepDecay, self).__init__(learning_rate, last_epoch, verbose)
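For StepDecay, the new assert sits alongside the pre-existing gamma validation; a sketch under the same assumption. Note that because the check is `step_size > 0 and isinstance(step_size, int)`, a positive float also fails:

import paddle

# step_size joins the existing gamma < 1.0 validation.
sched = paddle.optimizer.lr.StepDecay(learning_rate=0.5, step_size=5, gamma=0.8)

try:
    paddle.optimizer.lr.StepDecay(learning_rate=0.5, step_size=2.5, gamma=0.8)
except AssertionError as e:
    print(e)  # prints: 'step_size' must be a positive integer.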
@@ -1415,7 +1421,7 @@ class CosineAnnealingDecay(LRScheduler):
     Args:
         learning_rate (float): The initial learning rate, that is :math:`\eta_{max}` . It can be set to python float or int number.
-        T_max (int): Maximum number of iterations. It is half of the decay cycle of learning rate.
+        T_max (int): Maximum number of iterations. It is half of the decay cycle of learning rate. It must be a positive integer.
         eta_min (float|int, optional): Minimum learning rate, that is :math:`\eta_{min}` . Default: 0.
         last_epoch (int, optional): The index of last epoch. Can be set to restart training. Default: -1, means initial learning rate.
         verbose (bool, optional): If ``True``, prints a message to stdout for each update. Default: ``False`` .
@@ -1487,6 +1493,8 @@ class CosineAnnealingDecay(LRScheduler):
             raise TypeError(
                 "The type of 'eta_min' in 'CosineAnnealingDecay' must be 'float, int', but received %s."
                 % type(eta_min))
+        assert T_max > 0 and isinstance(
+            T_max, int), " 'T_max' must be a positive integer."
         self.T_max = T_max
         self.eta_min = float(eta_min)
         super(CosineAnnealingDecay, self).__init__(learning_rate, last_epoch,
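Finally, CosineAnnealingDecay gets the same fail-fast behavior; a sketch under the same assumption as the examples above:

import paddle

sched = paddle.optimizer.lr.CosineAnnealingDecay(learning_rate=0.5, T_max=10)

try:
    paddle.optimizer.lr.CosineAnnealingDecay(learning_rate=0.5, T_max=0)
except AssertionError as e:
    print(e)  # prints: 'T_max' must be a positive integer.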