PaddlePaddle / PaddleOCR

Commit a2cdabd0
Authored on Apr 29, 2022 by LDOUBLEV
fix det cml + pact + distribute training bug

Parent commit: 6dff6d97
Showing 2 changed files with 30 additions and 15 deletions:

    deploy/slim/quantization/quant.py    +6   -6
    ppocr/optimizer/optimizer.py         +24  -9

deploy/slim/quantization/quant.py

@@ -161,12 +161,6 @@ def main(config, device, logger, vdl_writer):
     if config["Global"]["pretrained_model"] is not None:
         pre_best_model_dict = load_model(config, model)
 
-    quanter = QAT(config=quant_config, act_preprocess=PACT)
-    quanter.quantize(model)
-
-    if config['Global']['distributed']:
-        model = paddle.DataParallel(model)
-
     # build loss
     loss_class = build_loss(config['Loss'])

@@ -181,6 +175,12 @@ def main(config, device, logger, vdl_writer):
     if config["Global"]["checkpoints"] is not None:
         pre_best_model_dict = load_model(config, model, optimizer)
 
+    quanter = QAT(config=quant_config, act_preprocess=PACT)
+    quanter.quantize(model)
+
+    if config['Global']['distributed']:
+        model = paddle.DataParallel(model)
+
     # build metric
     eval_class = build_metric(config['Metric'])
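
Taken together, the two hunks move fake-quant insertion (QAT with PACT activation preprocessing) and the paddle.DataParallel wrap so that they run only after pretrained weights or a checkpoint have been restored onto the plain model. Below is a minimal sketch of that ordering, assuming PaddleSlim is installed; the toy model, quant_config values and checkpoint path are illustrative rather than PaddleOCR's, and the PACT preprocessing layer (defined inside quant.py) is omitted.

import paddle
import paddle.nn as nn
from paddleslim.dygraph.quant import QAT

# Illustrative QAT settings; keys follow PaddleSlim's quant config schema.
quant_config = {
    "weight_quantize_type": "channel_wise_abs_max",
    "activation_quantize_type": "moving_average_abs_max",
    "weight_bits": 8,
    "activation_bits": 8,
    "quantizable_layer_type": ["Conv2D", "Linear"],
}

# Toy detection-style backbone standing in for the real det model.
model = nn.Sequential(nn.Conv2D(3, 8, 3, padding=1), nn.ReLU())
paddle.save(model.state_dict(), "det_ckpt.pdparams")

# 1) Restore pretrained weights / checkpoints onto the plain model first,
#    mirroring quant.py's load_model(config, model[, optimizer]) calls.
model.set_state_dict(paddle.load("det_ckpt.pdparams"))

# 2) Only then insert the fake-quant ops (quant.py additionally passes
#    act_preprocess=PACT).
quanter = QAT(config=quant_config)
quanter.quantize(model)

# 3) Wrap for multi-GPU training last, as the second hunk now does.
if paddle.distributed.get_world_size() > 1:
    model = paddle.DataParallel(model)

With this ordering, every rank quantizes and wraps an identical, fully initialized network instead of loading weights into an already modified module tree.
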
ppocr/optimizer/optimizer.py

@@ -43,12 +43,15 @@ class Momentum(object):
         self.grad_clip = grad_clip
 
     def __call__(self, model):
+        train_params = [
+            param for param in model.parameters() if param.trainable is True
+        ]
         opt = optim.Momentum(
             learning_rate=self.learning_rate,
             momentum=self.momentum,
             weight_decay=self.weight_decay,
             grad_clip=self.grad_clip,
-            parameters=model.parameters())
+            parameters=train_params)
         return opt
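
The new train_params list restricts the optimizer to parameters whose trainable flag is set; the identical filter is repeated for Adam, RMSProp and Adadelta in the hunks below, and AdamW applies it to its parameters list. A self-contained sketch of the effect, using an illustrative toy model in which one layer is frozen (roughly what happens to a fixed teacher branch in CML distillation):

import paddle.nn as nn
import paddle.optimizer as optim

# Toy two-layer model; freeze the first layer.
frozen = nn.Linear(4, 4)
head = nn.Linear(4, 2)
model = nn.Sequential(frozen, head)
for p in frozen.parameters():
    p.stop_gradient = True  # frozen params now report trainable == False

# The filter added by this commit: only trainable parameters reach the optimizer.
train_params = [
    param for param in model.parameters() if param.trainable is True
]
opt = optim.Momentum(learning_rate=0.001, momentum=0.9, parameters=train_params)
print(len(train_params), "of", len(model.parameters()), "parameters will be updated")
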

@@ -76,6 +79,9 @@ class Adam(object):
         self.lazy_mode = lazy_mode
 
     def __call__(self, model):
+        train_params = [
+            param for param in model.parameters() if param.trainable is True
+        ]
         opt = optim.Adam(
             learning_rate=self.learning_rate,
             beta1=self.beta1,

@@ -85,7 +91,7 @@ class Adam(object):
             grad_clip=self.grad_clip,
             name=self.name,
             lazy_mode=self.lazy_mode,
-            parameters=model.parameters())
+            parameters=train_params)
         return opt

@@ -118,6 +124,9 @@ class RMSProp(object):
         self.grad_clip = grad_clip
 
     def __call__(self, model):
+        train_params = [
+            param for param in model.parameters() if param.trainable is True
+        ]
         opt = optim.RMSProp(
             learning_rate=self.learning_rate,
             momentum=self.momentum,

@@ -125,7 +134,7 @@ class RMSProp(object):
             epsilon=self.epsilon,
             weight_decay=self.weight_decay,
             grad_clip=self.grad_clip,
-            parameters=model.parameters())
+            parameters=train_params)
         return opt

@@ -149,6 +158,9 @@ class Adadelta(object):
         self.name = name
 
     def __call__(self, model):
+        train_params = [
+            param for param in model.parameters() if param.trainable is True
+        ]
         opt = optim.Adadelta(
             learning_rate=self.learning_rate,
             epsilon=self.epsilon,

@@ -156,7 +168,7 @@ class Adadelta(object):
             weight_decay=self.weight_decay,
             grad_clip=self.grad_clip,
             name=self.name,
-            parameters=model.parameters())
+            parameters=train_params)
         return opt

@@ -190,17 +202,20 @@ class AdamW(object):
         self.one_dim_param_no_weight_decay = one_dim_param_no_weight_decay
 
     def __call__(self, model):
-        parameters = model.parameters()
+        parameters = [
+            param for param in model.parameters() if param.trainable is True
+        ]
 
         self.no_weight_decay_param_name_list = [
-            p.name for n, p in model.named_parameters() if any(nd in n for nd in self.no_weight_decay_name_list)
+            p.name for n, p in model.named_parameters()
+            if any(nd in n for nd in self.no_weight_decay_name_list)
         ]
 
         if self.one_dim_param_no_weight_decay:
             self.no_weight_decay_param_name_list += [
-                p.name for n, p in model.named_parameters() if len(p.shape) == 1
+                p.name for n, p in model.named_parameters()
+                if len(p.shape) == 1
             ]
 
         opt = optim.AdamW(
             learning_rate=self.learning_rate,
             beta1=self.beta1,

@@ -216,4 +231,4 @@ class AdamW(object):
         return opt
 
     def _apply_decay_param_fun(self, name):
-        return name not in self.no_weight_decay_param_name_list
\ No newline at end of file
+        return name not in self.no_weight_decay_param_name_list
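
For context on the last hunk: _apply_decay_param_fun is the callback this class wires into paddle.optimizer.AdamW's apply_decay_param_fun argument, so parameters whose names land in no_weight_decay_param_name_list are excluded from weight decay. A small, self-contained sketch of that mechanism; the toy model and hyperparameter values are illustrative:

import paddle.nn as nn
import paddle.optimizer as optim

model = nn.Sequential(nn.Linear(8, 8), nn.LayerNorm(8))

# Same rule of thumb as one_dim_param_no_weight_decay: 1-D parameters
# (biases, norm scales) are exempt from weight decay.
no_decay_names = [p.name for p in model.parameters() if len(p.shape) == 1]

opt = optim.AdamW(
    learning_rate=1e-3,
    weight_decay=0.01,
    parameters=[p for p in model.parameters() if p.trainable is True],
    # Return True to apply decay to a parameter; mirrors _apply_decay_param_fun.
    apply_decay_param_fun=lambda name: name not in no_decay_names)
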