PaddlePaddle / PaddleClas
Commit 74fa0cc2
Commit 74fa0cc2
Authored Feb 17, 2023 by tianyi1997
Committed by HydrogenSulfate on Feb 28, 2023
Modify docstring
Parent: fad8563e
Showing 2 changed files with 14 additions and 14 deletions (+14 −14)
ppcls/arch/gears/metabnneck.py  (+3 −3)
ppcls/optimizer/learning_rate.py  (+11 −11)
ppcls/arch/gears/metabnneck.py

@@ -99,9 +99,9 @@ class MetaBNNeck(nn.Layer):
     def setup_opt(self, opt):
         """
-        enable_inside_update: enable inside updating for `gate` in MetaBIN
-        lr_gate: learning rate of `gate` during meta-train phase
-        bn_mode: control the running stats & updating of BN
+        Arg:
+            opt (dict): Optional setting to change the behavior of MetaBIN during training.
+            It includes three settings which are `enable_inside_update`, `lr_gate` and `bn_mode`.
         """
         self.check_opt(opt)
         self.opt = copy.deepcopy(opt)
 ...
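The `setup_opt` method shown in the hunk above validates the `opt` dict and then deep-copies it. A minimal runnable sketch of that behavior follows; the class name, `_VALID_KEYS`, and the body of `check_opt` are assumptions for illustration, not the actual PaddleClas implementation:

```python
import copy


class MetaBNNeckSketch:
    """Sketch of the `setup_opt` behavior in the diff above (illustrative only)."""

    # The three settings named in the docstring; the validation logic is assumed.
    _VALID_KEYS = {"enable_inside_update", "lr_gate", "bn_mode"}

    def check_opt(self, opt):
        # Assumed validation: require a dict and reject unknown keys.
        if not isinstance(opt, dict):
            raise TypeError("opt must be a dict")
        unknown = set(opt) - self._VALID_KEYS
        if unknown:
            raise ValueError(f"unknown opt keys: {unknown}")

    def setup_opt(self, opt):
        self.check_opt(opt)
        # Deep copy, as in the diff: later mutation of the caller's dict
        # cannot change the stored settings.
        self.opt = copy.deepcopy(opt)


neck = MetaBNNeckSketch()
opt = {"enable_inside_update": True, "lr_gate": 0.01, "bn_mode": "general"}
neck.setup_opt(opt)
opt["lr_gate"] = 0.5  # caller mutates its dict; the stored copy is unaffected
```

The deep copy matters because `opt` is swapped between meta-train and meta-test phases during training, and a shared reference would let one phase's settings leak into the other.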
ppcls/optimizer/learning_rate.py

@@ -257,31 +257,31 @@ class Cyclic(LRBase):
"""Cyclic learning rate decay
"""Cyclic learning rate decay
Args:
Args:
epochs (int): Total epoch(s)
epochs (int): Total epoch(s)
.
step_each_epoch (int): Number of iterations within an epoch
step_each_epoch (int): Number of iterations within an epoch
.
base_learning_rate (float): Initial learning rate, which is the lower boundary in the cycle. The paper recommends
base_learning_rate (float): Initial learning rate, which is the lower boundary in the cycle. The paper recommends
that set the base_learning_rate to 1/3 or 1/4 of max_learning_rate.
that set the base_learning_rate to 1/3 or 1/4 of max_learning_rate.
max_learning_rate (float): Maximum learning rate in the cycle. It defines the cycle amplitude as above.
max_learning_rate (float): Maximum learning rate in the cycle. It defines the cycle amplitude as above.
Since there is some scaling operation during process of learning rate adjustment,
Since there is some scaling operation during process of learning rate adjustment,
max_learning_rate may not actually be reached.
max_learning_rate may not actually be reached.
warmup_epoch (int): Number of warmup epoch(s)
warmup_epoch (int): Number of warmup epoch(s)
.
warmup_start_lr (float): Start learning rate within warmup
warmup_start_lr (float): Start learning rate within warmup
.
step_size_up (int): Number of training steps, which is used to increase learning rate in a cycle.
step_size_up (int): Number of training steps, which is used to increase learning rate in a cycle.
The step size of one cycle will be defined by step_size_up + step_size_down. According to the paper, step
The step size of one cycle will be defined by step_size_up + step_size_down. According to the paper, step
size should be set as at least 3 or 4 times steps in one epoch.
size should be set as at least 3 or 4 times steps in one epoch.
step_size_down (int, optional): Number of training steps, which is used to decrease learning rate in a cycle.
step_size_down (int, optional): Number of training steps, which is used to decrease learning rate in a cycle.
If not specified, it's value will initialize to `` step_size_up `` . Default: None
If not specified, it's value will initialize to `` step_size_up `` . Default: None
.
mode (str, optional): One of 'triangular', 'triangular2' or 'exp_range'.
mode (str, optional): One of 'triangular', 'triangular2' or 'exp_range'.
If scale_fn is specified, this argument will be ignored. Default: 'triangular'
If scale_fn is specified, this argument will be ignored. Default: 'triangular'
.
exp_gamma (float): Constant in 'exp_range' scaling function: exp_gamma**iterations. Used only when mode = 'exp_range'. Default: 1.0
exp_gamma (float): Constant in 'exp_range' scaling function: exp_gamma**iterations. Used only when mode = 'exp_range'. Default: 1.0
.
scale_fn (function, optional): A custom scaling function, which is used to replace three build-in methods.
scale_fn (function, optional): A custom scaling function, which is used to replace three build-in methods.
It should only have one argument. For all x >= 0, 0 <= scale_fn(x) <= 1.
It should only have one argument. For all x >= 0, 0 <= scale_fn(x) <= 1.
If specified, then 'mode' will be ignored. Default: None
If specified, then 'mode' will be ignored. Default: None
.
scale_mode (str, optional): One of 'cycle' or 'iterations'. Defines whether scale_fn is evaluated on cycle
scale_mode (str, optional): One of 'cycle' or 'iterations'. Defines whether scale_fn is evaluated on cycle
number or cycle iterations (total iterations since start of training). Default: 'cycle'
number or cycle iterations (total iterations since start of training). Default: 'cycle'
.
last_epoch (int, optional): The index of last epoch. Can be set to restart training. Default: -1, means initial learning rate.
last_epoch (int, optional): The index of last epoch. Can be set to restart training. Default: -1, means initial learning rate.
by_epoch (bool): Learning rate decays by epoch when by_epoch is True, else by iter
by_epoch (bool): Learning rate decays by epoch when by_epoch is True, else by iter
.
verbose: (bool, optional): If True, prints a message to stdout for each update. Defaults to False
verbose: (bool, optional): If True, prints a message to stdout for each update. Defaults to False
.
"""
"""
    def __init__(self,
        ...
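The schedule documented above can be sketched in pure Python. This is an illustrative re-implementation of the triangular policy from Smith's CLR paper, following the parameter semantics in the docstring (`step_size_down` defaulting to `step_size_up`, the 'triangular2' and 'exp_range' scalings); the function name and structure are assumptions, not the actual PaddleClas or Paddle code:

```python
import math


def cyclic_lr(iteration, base_lr, max_lr, step_size_up,
              step_size_down=None, mode="triangular", exp_gamma=1.0):
    """Sketch of a cyclic LR schedule per the docstring above (illustrative)."""
    if step_size_down is None:
        step_size_down = step_size_up  # docstring: initializes to step_size_up
    total = step_size_up + step_size_down  # one full cycle

    cycle = math.floor(1 + iteration / total)  # 1-based cycle index
    x = iteration % total
    if x <= step_size_up:
        scale = x / step_size_up            # rising edge: base_lr -> max_lr
    else:
        scale = (total - x) / step_size_down  # falling edge: max_lr -> base_lr

    lr = base_lr + (max_lr - base_lr) * scale
    if mode == "triangular2":
        # halve the amplitude each cycle
        lr = base_lr + (lr - base_lr) / (2 ** (cycle - 1))
    elif mode == "exp_range":
        # decay the amplitude by exp_gamma**iterations
        lr = base_lr + (lr - base_lr) * (exp_gamma ** iteration)
    return lr


# The LR climbs from base_lr to max_lr over step_size_up iterations,
# then falls back over step_size_down, and the cycle repeats.
lrs = [cyclic_lr(i, 0.0, 1.0, step_size_up=100) for i in (0, 50, 100, 150, 200)]
```

This matches the docstring's note that 'triangular2' and 'exp_range' may keep the schedule from actually reaching `max_learning_rate` after the first cycle, since both shrink the cycle amplitude over time.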