PaddlePaddle / PaddleX
Commit 2279d7c4
Authored Jun 01, 2020 by sunyanfang01

add fasterrcnn loss

Parent: 2f89e761

1 changed file with 11 additions and 33 deletions (+11, -33)

paddlex/cv/models/faster_rcnn.py
@@ -139,51 +139,29 @@ class FasterRCNN(BaseAPI):
         outputs = model.build_net(inputs)
         return inputs, outputs

-    # def default_optimizer(self, learning_rate, warmup_steps, warmup_start_lr,
-    #                       lr_decay_epochs, lr_decay_gamma,
-    #                       num_steps_each_epoch):
-    #     if warmup_steps > lr_decay_epochs[0] * num_steps_each_epoch:
-    #         raise Exception("warmup_steps should less than {}".format(
-    #             lr_decay_epochs[0] * num_steps_each_epoch))
-    #     boundaries = [b * num_steps_each_epoch for b in lr_decay_epochs]
-    #     values = [(lr_decay_gamma**i) * learning_rate
-    #               for i in range(len(lr_decay_epochs) + 1)]
-    #     lr_decay = fluid.layers.piecewise_decay(
-    #         boundaries=boundaries, values=values)
-    #     lr_warmup = fluid.layers.linear_lr_warmup(
-    #         learning_rate=lr_decay,
-    #         warmup_steps=warmup_steps,
-    #         start_lr=warmup_start_lr,
-    #         end_lr=learning_rate)
-    #     optimizer = fluid.optimizer.Momentum(
-    #         learning_rate=lr_warmup,
-    #         momentum=0.9,
-    #         regularization=fluid.regularizer.L2Decay(1e-04))
-    #     return optimizer
     def default_optimizer(self, learning_rate, warmup_steps, warmup_start_lr,
                           lr_decay_epochs, lr_decay_gamma,
                           num_steps_each_epoch):
-        # if warmup_steps > lr_decay_epochs[0] * num_steps_each_epoch:
-        #     raise Exception("warmup_steps should less than {}".format(
-        #         lr_decay_epochs[0] * num_steps_each_epoch))
+        if warmup_steps > lr_decay_epochs[0] * num_steps_each_epoch:
+            raise Exception("warmup_steps should less than {}".format(
+                lr_decay_epochs[0] * num_steps_each_epoch))
         boundaries = [b * num_steps_each_epoch for b in lr_decay_epochs]
         values = [(lr_decay_gamma**i) * learning_rate
                   for i in range(len(lr_decay_epochs) + 1)]
         lr_decay = fluid.layers.piecewise_decay(
             boundaries=boundaries, values=values)
-        #lr_warmup = fluid.layers.linear_lr_warmup(
-        #    learning_rate=lr_decay,
-        #    warmup_steps=warmup_steps,
-        #    start_lr=warmup_start_lr,
-        #    end_lr=learning_rate)
-        optimizer = fluid.optimizer.Momentum(
-            #learning_rate=lr_warmup,
+        lr_warmup = fluid.layers.linear_lr_warmup(
+            learning_rate=lr_decay,
+            warmup_steps=warmup_steps,
+            start_lr=warmup_start_lr,
+            end_lr=learning_rate)
+        optimizer = fluid.optimizer.Momentum(
+            learning_rate=lr_warmup,
             momentum=0.9,
-            regularization=fluid.regularizer.L2DecayRegularizer(1e-04))
+            regularization=fluid.regularizer.L2Decay(1e-04))
         return optimizer

     def train(self,
               num_epochs,
               train_dataset,
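The schedule this diff enables wraps a piecewise step decay inside a linear warmup: the rate climbs linearly from warmup_start_lr to learning_rate over warmup_steps, then is multiplied by lr_decay_gamma at each boundary epoch. A minimal pure-Python sketch of that behavior, independent of the fluid API — lr_at_step is a hypothetical helper for illustration, not part of PaddleX:

```python
def lr_at_step(step, learning_rate, warmup_steps, warmup_start_lr,
               lr_decay_epochs, lr_decay_gamma, num_steps_each_epoch):
    """Learning rate at a global step: linear warmup, then piecewise decay."""
    # Linear warmup from warmup_start_lr up to learning_rate over warmup_steps.
    if step < warmup_steps:
        frac = step / float(warmup_steps)
        return warmup_start_lr + (learning_rate - warmup_start_lr) * frac
    # Piecewise decay: one factor of lr_decay_gamma per boundary already passed.
    boundaries = [e * num_steps_each_epoch for e in lr_decay_epochs]
    n_passed = sum(1 for b in boundaries if step >= b)
    return learning_rate * (lr_decay_gamma ** n_passed)
```

This also shows why the guard in the diff matters: warmup must finish before the first decay boundary (lr_decay_epochs[0] * num_steps_each_epoch), otherwise the warmed-up rate would overshoot a schedule that has already decayed.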