Commit 4708b081 · PaddleDetection
Authored by Feng Ni on Aug 23, 2022
Committed by GitHub on Aug 23, 2022
fix iters less than batchsize in warmup (#6724)
Parent: e55e4194
Showing 6 changed files with 18 additions and 31 deletions (+18 -31):
configs/mot/fairmot/_base_/optimizer_30e_momentum.yml   +2 -1
configs/mot/jde/_base_/optimizer_30e.yml                +2 -1
configs/mot/jde/_base_/optimizer_60e.yml                +2 -1
configs/mot/mcfairmot/mcfairmot_hrnetv2_w18_dlafpn_30e_1088x608_visdrone_vehicle_bytetracker.yml   +2 -1
ppdet/engine/trainer.py                                 +4 -0
ppdet/optimizer/optimizer.py                            +6 -27
configs/mot/fairmot/_base_/optimizer_30e_momentum.yml
@@ -7,8 +7,9 @@ LearningRate:
     gamma: 0.1
     milestones: [15, 22]
     use_warmup: True
-  - !BurninWarmup
+  - !ExpWarmup
     steps: 1000
+    power: 4
 
 OptimizerBuilder:
   optimizer:
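In all four configs the change is the same: the deleted !BurninWarmup scheduler is replaced by !ExpWarmup with power: 4, which produces the same quartic ramp (i / warmup_steps)**4 while relying on the step clamp added in ppdet/optimizer/optimizer.py below. A minimal sketch in plain Python (function names are illustrative, not part of PaddleDetection) comparing the two formulas taken from the diffs on this page:

# Plain-Python sketch (illustrative names, no PaddleDetection imports) comparing
# the removed BurninWarmup ramp with ExpWarmup at power=4, using the formulas
# shown in the ppdet/optimizer/optimizer.py diff further down this page.

def burnin_factors(steps, step_per_epoch):
    # old behaviour: the ramp length is capped by one epoch, so a tiny dataset
    # (step_per_epoch == 0) makes `burnin` zero and the division below fails
    burnin = min(steps, step_per_epoch)
    return [(i * 1.0 / burnin)**4 for i in range(burnin + 1)]

def exp_factors(steps, power=4):
    # new behaviour: fixed step count, clamped to at least 1
    steps = max(steps, 1)
    return [(i / float(steps))**power for i in range(steps + 1)]

# When the loader provides enough steps the two ramps coincide ...
assert burnin_factors(1000, 5000) == exp_factors(1000, power=4)
# ... but only ExpWarmup survives a dataset smaller than the batch size.
print(exp_factors(0, power=4))   # [0.0, 1.0]; burnin_factors(1000, 0) would divide by zero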
configs/mot/jde/_base_/optimizer_30e.yml
@@ -7,8 +7,9 @@ LearningRate:
     gamma: 0.1
     milestones: [15, 22]
     use_warmup: True
-  - !BurninWarmup
+  - !ExpWarmup
     steps: 1000
+    power: 4
 
 OptimizerBuilder:
   optimizer:
configs/mot/jde/_base_/optimizer_60e.yml
@@ -7,8 +7,9 @@ LearningRate:
     gamma: 0.1
     milestones: [30, 44]
     use_warmup: True
-  - !BurninWarmup
+  - !ExpWarmup
     steps: 1000
+    power: 4
 
 OptimizerBuilder:
   optimizer:
configs/mot/mcfairmot/mcfairmot_hrnetv2_w18_dlafpn_30e_1088x608_visdrone_vehicle_bytetracker.yml
@@ -63,8 +63,9 @@ LearningRate:
     gamma: 0.1
     milestones: [15, 22]
     use_warmup: True
-  - !BurninWarmup
+  - !ExpWarmup
     steps: 1000
+    power: 4
 
 OptimizerBuilder:
   optimizer:
ppdet/engine/trainer.py
@@ -150,6 +150,10 @@ class Trainer(object):
         # build optimizer in train mode
         if self.mode == 'train':
             steps_per_epoch = len(self.loader)
+            if steps_per_epoch < 1:
+                logger.warning(
+                    "Samples in dataset are less than batch_size, please set smaller batch_size in TrainReader."
+                )
             self.lr = create('LearningRate')(steps_per_epoch)
             self.optimizer = create('OptimizerBuilder')(self.lr, self.model)
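The trainer change only adds a warning: when the dataset holds fewer samples than batch_size and the reader drops the incomplete batch, len(self.loader) evaluates to 0, so every per-step LR schedule would be built on zero steps per epoch. A standalone sketch of that condition (hypothetical helper, not the Trainer API; the drop_last behaviour is an assumption about the reader):

# Standalone sketch of the condition the new warning guards against
# (hypothetical helper; PaddleDetection builds the loader via create(...)).
import logging

logger = logging.getLogger("trainer_sketch")

def steps_per_epoch(num_samples, batch_size, drop_last=True):
    steps = num_samples // batch_size if drop_last else -(-num_samples // batch_size)
    if steps < 1:
        logger.warning("Samples in dataset are less than batch_size, "
                       "please set smaller batch_size in TrainReader.")
    return steps

# e.g. 6 training images with batch_size 8 and drop_last=True -> 0 steps,
# which is the "iters less than batchsize" case this commit targets.
print(steps_per_epoch(6, 8))   # logs the warning, prints 0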
ppdet/optimizer/optimizer.py
@@ -176,6 +176,7 @@ class LinearWarmup(object):
         value = []
         warmup_steps = self.epochs * step_per_epoch \
             if self.epochs is not None else self.steps
+        warmup_steps = max(warmup_steps, 1)
         for i in range(warmup_steps + 1):
             if warmup_steps > 0:
                 alpha = i / warmup_steps
@@ -187,31 +188,6 @@ class LinearWarmup(object):
         return boundary, value
 
 
-@serializable
-class BurninWarmup(object):
-    """
-    Warm up learning rate in burnin mode
-    Args:
-        steps (int): warm up steps
-    """
-
-    def __init__(self, steps=1000):
-        super(BurninWarmup, self).__init__()
-        self.steps = steps
-
-    def __call__(self, base_lr, step_per_epoch):
-        boundary = []
-        value = []
-        burnin = min(self.steps, step_per_epoch)
-        for i in range(burnin + 1):
-            factor = (i * 1.0 / burnin)**4
-            lr = base_lr * factor
-            value.append(lr)
-            if i > 0:
-                boundary.append(i)
-        return boundary, value
-
-
 @serializable
 class ExpWarmup(object):
     """
@@ -220,19 +196,22 @@ class ExpWarmup(object):
         steps (int): warm up steps.
         epochs (int|None): use epochs as warm up steps, the priority
             of `epochs` is higher than `steps`. Default: None.
+        power (int): Exponential coefficient. Default: 2.
     """
 
-    def __init__(self, steps=5, epochs=None):
+    def __init__(self, steps=1000, epochs=None, power=2):
         super(ExpWarmup, self).__init__()
         self.steps = steps
         self.epochs = epochs
+        self.power = power
 
     def __call__(self, base_lr, step_per_epoch):
         boundary = []
         value = []
         warmup_steps = self.epochs * step_per_epoch if self.epochs is not None else self.steps
+        warmup_steps = max(warmup_steps, 1)
         for i in range(warmup_steps + 1):
-            factor = (i / float(warmup_steps))**2
+            factor = (i / float(warmup_steps))**self.power
             value.append(base_lr * factor)
             if i > 0:
                 boundary.append(i)
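The optimizer change removes BurninWarmup entirely, folds its quartic curve into ExpWarmup via the new power argument, and clamps warmup_steps to at least 1 in both LinearWarmup and ExpWarmup. A standalone re-implementation of the new ExpWarmup ramp (mirroring the diff above, not imported from ppdet) showing that a zero-step epoch now degenerates gracefully:

# Standalone re-implementation of the new ExpWarmup ramp (mirrors the diff
# above; not an import of ppdet.optimizer).
def exp_warmup(base_lr, step_per_epoch, steps=1000, epochs=None, power=2):
    boundary, value = [], []
    warmup_steps = epochs * step_per_epoch if epochs is not None else steps
    warmup_steps = max(warmup_steps, 1)   # the new guard: never a zero-length ramp
    for i in range(warmup_steps + 1):
        value.append(base_lr * (i / float(warmup_steps))**power)
        if i > 0:
            boundary.append(i)
    return boundary, value

# epochs=1 with step_per_epoch=0 (dataset smaller than batch_size) now yields a
# one-step warmup instead of a ZeroDivisionError.
print(exp_warmup(0.01, step_per_epoch=0, epochs=1))   # ([1], [0.0, 0.01])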