Commit 74fa0cc2 authored by tianyi1997, committed by HydrogenSulfate

Modify docstring

Parent fad8563e
@@ -99,9 +99,9 @@ class MetaBNNeck(nn.Layer):
     def setup_opt(self, opt):
         """
-        enable_inside_update: enable inside updating for `gate` in MetaBIN
-        lr_gate: learning rate of `gate` during meta-train phase
-        bn_mode: control the running stats & updating of BN
+        Args:
+            opt (dict): Optional settings to change the behavior of MetaBIN during training.
+                It includes three settings: `enable_inside_update`, `lr_gate` and `bn_mode`.
         """
         self.check_opt(opt)
         self.opt = copy.deepcopy(opt)
......
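For readers unfamiliar with this API, a hypothetical call to `setup_opt` might look like the sketch below. The key names follow the docstring above; the concrete values and the `neck` variable are illustrative assumptions, not code from this commit.

```python
# Hypothetical usage of setup_opt; the values shown are illustrative assumptions.
meta_train_opt = {
    "enable_inside_update": True,  # allow `gate` to be updated inside MetaBIN
    "lr_gate": 0.01,               # learning rate for `gate` during meta-train
    "bn_mode": "general",          # how BN handles its running stats and updates
}
neck.setup_opt(meta_train_opt)     # `neck` is an already-constructed MetaBNNeck
```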
@@ -257,31 +257,31 @@ class Cyclic(LRBase):
"""Cyclic learning rate decay
Args:
epochs (int): Total epoch(s)
step_each_epoch (int): Number of iterations within an epoch
epochs (int): Total epoch(s).
step_each_epoch (int): Number of iterations within an epoch.
base_learning_rate (float): Initial learning rate, which is the lower boundary in the cycle. The paper recommends
that set the base_learning_rate to 1/3 or 1/4 of max_learning_rate.
max_learning_rate (float): Maximum learning rate in the cycle. It defines the cycle amplitude as above.
Since there is some scaling operation during process of learning rate adjustment,
max_learning_rate may not actually be reached.
warmup_epoch (int): Number of warmup epoch(s)
warmup_start_lr (float): Start learning rate within warmup
warmup_epoch (int): Number of warmup epoch(s).
warmup_start_lr (float): Start learning rate within warmup.
step_size_up (int): Number of training steps, which is used to increase learning rate in a cycle.
The step size of one cycle will be defined by step_size_up + step_size_down. According to the paper, step
size should be set as at least 3 or 4 times steps in one epoch.
step_size_down (int, optional): Number of training steps, which is used to decrease learning rate in a cycle.
If not specified, it's value will initialize to `` step_size_up `` . Default: None
If not specified, it's value will initialize to `` step_size_up `` . Default: None.
mode (str, optional): One of 'triangular', 'triangular2' or 'exp_range'.
If scale_fn is specified, this argument will be ignored. Default: 'triangular'
exp_gamma (float): Constant in 'exp_range' scaling function: exp_gamma**iterations. Used only when mode = 'exp_range'. Default: 1.0
If scale_fn is specified, this argument will be ignored. Default: 'triangular'.
exp_gamma (float): Constant in 'exp_range' scaling function: exp_gamma**iterations. Used only when mode = 'exp_range'. Default: 1.0.
scale_fn (function, optional): A custom scaling function, which is used to replace three build-in methods.
It should only have one argument. For all x >= 0, 0 <= scale_fn(x) <= 1.
If specified, then 'mode' will be ignored. Default: None
If specified, then 'mode' will be ignored. Default: None.
scale_mode (str, optional): One of 'cycle' or 'iterations'. Defines whether scale_fn is evaluated on cycle
number or cycle iterations (total iterations since start of training). Default: 'cycle'
number or cycle iterations (total iterations since start of training). Default: 'cycle'.
last_epoch (int, optional): The index of last epoch. Can be set to restart training. Default: -1, means initial learning rate.
by_epoch (bool): Learning rate decays by epoch when by_epoch is True, else by iter
verbose: (bool, optional): If True, prints a message to stdout for each update. Defaults to False
by_epoch (bool): Learning rate decays by epoch when by_epoch is True, else by iter.
verbose: (bool, optional): If True, prints a message to stdout for each update. Defaults to False.
"""
def __init__(self,
......
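To make the schedule above concrete, here is a minimal, self-contained sketch of the 'triangular' mode that the docstring describes (warmup and the 'triangular2'/'exp_range' scalings are omitted). The function name, structure, and numbers are illustrative, not code from this repository.

```python
def triangular_lr(iteration, base_learning_rate, max_learning_rate,
                  step_size_up, step_size_down=None):
    """Triangular cyclic LR: rise for step_size_up steps, fall for step_size_down."""
    if step_size_down is None:
        step_size_down = step_size_up           # mirrors the documented default
    cycle_size = step_size_up + step_size_down  # length of one full cycle
    pos = iteration % cycle_size                # position inside the current cycle
    if pos < step_size_up:
        scale = pos / step_size_up                           # rising edge: 0 -> 1
    else:
        scale = 1.0 - (pos - step_size_up) / step_size_down  # falling edge: 1 -> 0
    return base_learning_rate + (max_learning_rate - base_learning_rate) * scale

# Example: base LR 0.025 (~1/4 of the max, as the paper recommends), max LR 0.1,
# assuming 100 steps per epoch and a half-cycle of 4 epochs (400 steps).
lrs = [triangular_lr(i, 0.025, 0.1, step_size_up=400) for i in range(1200)]
```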