weixin_41840029 / PaddleOCR (forked from PaddlePaddle / PaddleOCR)

Commit 77703f37 (unverified), authored by Double_V on April 29, 2022; committed via GitHub on April 29, 2022.
Merge pull request #6103 from LDOUBLEV/dygraph
fix det cml + pact + distribute training bug
Parents: 21a0efea, 6c193662
Showing 1 changed file with 24 additions and 9 deletions:

ppocr/optimizer/optimizer.py (+24, −9)
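The change applies one pattern across all five optimizer wrappers: instead of handing `model.parameters()` straight to the Paddle optimizer, it first builds a `train_params` list that keeps only parameters whose `trainable` flag is `True`, so frozen branches (e.g. a fixed teacher during CML distillation, or layers pinned by PACT quantization) are excluded from the update. A minimal sketch of that filter using stand-in parameter objects (`FakeParam` is hypothetical, not Paddle's real API):

```python
from dataclasses import dataclass


@dataclass
class FakeParam:
    """Stand-in for a framework parameter; only the fields the fix touches."""
    name: str
    trainable: bool


def select_train_params(params):
    # Mirror the fix: keep only parameters the optimizer should update.
    return [p for p in params if p.trainable is True]


params = [
    FakeParam("conv1.weight", True),
    FakeParam("teacher.weight", False),  # frozen teacher branch
    FakeParam("fc.bias", True),
]
train_params = select_train_params(params)
print([p.name for p in train_params])  # ['conv1.weight', 'fc.bias']
```

Passing the filtered list matters because some optimizers raise or mis-synchronize in distributed training when handed parameters that never receive gradients.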
--- a/ppocr/optimizer/optimizer.py
+++ b/ppocr/optimizer/optimizer.py
@@ -43,12 +43,15 @@ class Momentum(object):
         self.grad_clip = grad_clip
 
     def __call__(self, model):
+        train_params = [
+            param for param in model.parameters() if param.trainable is True
+        ]
         opt = optim.Momentum(
             learning_rate=self.learning_rate,
             momentum=self.momentum,
             weight_decay=self.weight_decay,
             grad_clip=self.grad_clip,
-            parameters=model.parameters())
+            parameters=train_params)
         return opt
@@ -76,6 +79,9 @@ class Adam(object):
         self.lazy_mode = lazy_mode
 
     def __call__(self, model):
+        train_params = [
+            param for param in model.parameters() if param.trainable is True
+        ]
         opt = optim.Adam(
             learning_rate=self.learning_rate,
             beta1=self.beta1,
@@ -85,7 +91,7 @@ class Adam(object):
             grad_clip=self.grad_clip,
             name=self.name,
             lazy_mode=self.lazy_mode,
-            parameters=model.parameters())
+            parameters=train_params)
         return opt
@@ -118,6 +124,9 @@ class RMSProp(object):
         self.grad_clip = grad_clip
 
     def __call__(self, model):
+        train_params = [
+            param for param in model.parameters() if param.trainable is True
+        ]
         opt = optim.RMSProp(
             learning_rate=self.learning_rate,
             momentum=self.momentum,
@@ -125,7 +134,7 @@ class RMSProp(object):
             epsilon=self.epsilon,
             weight_decay=self.weight_decay,
             grad_clip=self.grad_clip,
-            parameters=model.parameters())
+            parameters=train_params)
         return opt
@@ -149,6 +158,9 @@ class Adadelta(object):
         self.name = name
 
     def __call__(self, model):
+        train_params = [
+            param for param in model.parameters() if param.trainable is True
+        ]
         opt = optim.Adadelta(
             learning_rate=self.learning_rate,
             epsilon=self.epsilon,
@@ -156,7 +168,7 @@ class Adadelta(object):
             weight_decay=self.weight_decay,
             grad_clip=self.grad_clip,
             name=self.name,
-            parameters=model.parameters())
+            parameters=train_params)
         return opt
@@ -190,17 +202,20 @@ class AdamW(object):
         self.one_dim_param_no_weight_decay = one_dim_param_no_weight_decay
 
     def __call__(self, model):
-        parameters = model.parameters()
+        parameters = [
+            param for param in model.parameters() if param.trainable is True
+        ]
 
         self.no_weight_decay_param_name_list = [
             p.name for n, p in model.named_parameters()
             if any(nd in n for nd in self.no_weight_decay_name_list)
         ]
 
         if self.one_dim_param_no_weight_decay:
             self.no_weight_decay_param_name_list += [
                 p.name for n, p in model.named_parameters() if len(p.shape) == 1
             ]
 
         opt = optim.AdamW(
             learning_rate=self.learning_rate,
             beta1=self.beta1,
@@ -216,4 +231,4 @@ class AdamW(object):
         return opt
 
     def _apply_decay_param_fun(self, name):
-        return name not in self.no_weight_decay_param_name_list
\ No newline at end of file
+        return name not in self.no_weight_decay_param_name_list
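The AdamW hunks also show how the wrapper decides which parameters get weight decay: `_apply_decay_param_fun` returns `True` only for parameter names absent from `no_weight_decay_param_name_list`, which is built from name substrings plus (optionally) every 1-D parameter. A self-contained sketch of that logic — `P` and `NoDecayFilter` are stand-ins for illustration, not Paddle classes:

```python
from collections import namedtuple

# Stand-in parameter object; Paddle's real Parameter carries more fields.
P = namedtuple("P", ["name", "shape"])


class NoDecayFilter:
    """Sketch of the AdamW wrapper's name-based weight-decay filter."""

    def __init__(self, no_weight_decay_name_list,
                 one_dim_param_no_weight_decay, named_params):
        # Skip decay for any parameter whose name contains a listed substring.
        self.no_weight_decay_param_name_list = [
            p.name for n, p in named_params
            if any(nd in n for nd in no_weight_decay_name_list)
        ]
        # Optionally skip decay for all 1-D parameters (biases, norm scales).
        if one_dim_param_no_weight_decay:
            self.no_weight_decay_param_name_list += [
                p.name for n, p in named_params if len(p.shape) == 1
            ]

    def _apply_decay_param_fun(self, name):
        # Called per parameter; True means "apply weight decay to this one".
        return name not in self.no_weight_decay_param_name_list


named = [
    ("backbone.conv.weight", P("backbone.conv.weight", (64, 3, 3, 3))),
    ("head.norm.bias", P("head.norm.bias", (64,))),
]
f = NoDecayFilter(["norm"], True, named)
print(f._apply_decay_param_fun("backbone.conv.weight"))  # True
print(f._apply_decay_param_fun("head.norm.bias"))        # False
```

In Paddle, a callable of this shape is what gets passed as `apply_decay_param_fun` when constructing `optim.AdamW`, so decay is suppressed per parameter by name.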