PaddlePaddle / PaddleClas · Commit cc230f01
Commit cc230f01 (unverified), authored Oct 09, 2021 by u010070587, committed by GitHub on Oct 09, 2021.
Merge pull request #1277 from TingquanGao/dev/fix_opt
fix: support static graph
Parents: 27be97d5, 7dcb2d4f
Showing 4 changed files with 28 additions and 9 deletions (+28, -9):
deploy/python/preprocess.py (+1, -1)
ppcls/data/preprocess/ops/operators.py (+1, -1)
ppcls/optimizer/__init__.py (+2, -1)
ppcls/optimizer/optimizer.py (+24, -6)
deploy/python/preprocess.py

```diff
@@ -79,7 +79,7 @@ class UnifiedResize(object):
             if isinstance(interpolation, str):
                 interpolation = _cv2_interp_from_str[interpolation.lower()]
             # compatible with opencv < version 4.4.0
-            elif not interpolation:
+            elif interpolation is None:
                 interpolation = cv2.INTER_LINEAR
             self.resize_func = partial(cv2.resize, interpolation=interpolation)
         elif backend.lower() == "pil":
```
ppcls/data/preprocess/ops/operators.py

```diff
@@ -60,7 +60,7 @@ class UnifiedResize(object):
             if isinstance(interpolation, str):
                 interpolation = _cv2_interp_from_str[interpolation.lower()]
             # compatible with opencv < version 4.4.0
-            elif not interpolation:
+            elif interpolation is None:
                 interpolation = cv2.INTER_LINEAR
             self.resize_func = partial(cv2.resize, interpolation=interpolation)
         elif backend.lower() == "pil":
```
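Both files get the same one-line fix, and it is worth spelling out why `not interpolation` was wrong: OpenCV interpolation flags are plain integers, and `cv2.INTER_NEAREST` is `0`, so the old truthiness test treated an explicit nearest-neighbor request as "nothing passed" and silently replaced it with `cv2.INTER_LINEAR`. A minimal sketch of the difference (assuming only that `cv2` is installed):

```python
import cv2

# OpenCV interpolation flags are ints; INTER_NEAREST is 0 and therefore falsy.
assert cv2.INTER_NEAREST == 0

interpolation = cv2.INTER_NEAREST

# Old check: `not 0` is True, so an explicit INTER_NEAREST was overridden.
if not interpolation:
    interpolation = cv2.INTER_LINEAR
print(interpolation)  # 1 (INTER_LINEAR) -- the caller's choice was lost

interpolation = cv2.INTER_NEAREST

# New check: only the "nothing passed" case (None) gets the default.
if interpolation is None:
    interpolation = cv2.INTER_LINEAR
print(interpolation)  # 0 (INTER_NEAREST) -- preserved
```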
ppcls/optimizer/__init__.py

```diff
@@ -41,7 +41,8 @@ def build_lr_scheduler(lr_config, epochs, step_each_epoch):
     return lr
 
 
-def build_optimizer(config, epochs, step_each_epoch, model_list):
+# model_list is None in static graph
+def build_optimizer(config, epochs, step_each_epoch, model_list=None):
     config = copy.deepcopy(config)
     # step1 build lr
     lr = build_lr_scheduler(config.pop('lr'), epochs, step_each_epoch)
```
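The default `model_list=None` is what lets the static-graph entry point build an optimizer before any `nn.Layer` exists. A sketch of the two call patterns this enables; the config keys and values here are illustrative stand-ins for a PaddleClas YAML "Optimizer" section, `model` is an assumed already-constructed dygraph model, and the `(optimizer, lr_scheduler)` return pair mirrors the surrounding PaddleClas code:

```python
# Illustrative optimizer config; keys mirror PaddleClas YAML, values are made up.
opt_config = {
    "name": "Momentum",
    "momentum": 0.9,
    "lr": {"name": "Cosine", "learning_rate": 0.1},
    "regularizer": {"name": "L2", "coeff": 1e-4},
}

# Dynamic graph: pass the model(s); parameters are collected via m.parameters().
optimizer, lr_scheduler = build_optimizer(
    opt_config, epochs=120, step_each_epoch=500, model_list=[model])

# Static graph: no layers exist yet, so model_list is omitted (None). The
# underlying paddle optimizer is then created with parameters=None and picks
# up the program's trainable variables when minimize(loss) is called.
optimizer, lr_scheduler = build_optimizer(
    opt_config, epochs=120, step_each_epoch=500)
```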
ppcls/optimizer/optimizer.py

```diff
@@ -18,6 +18,8 @@ from __future__ import print_function
 
 from paddle import optimizer as optim
 
+from ppcls.utils import logger
+
 
 class Momentum(object):
     """
@@ -43,7 +45,9 @@ class Momentum(object):
         self.multi_precision = multi_precision
 
     def __call__(self, model_list):
-        parameters = sum([m.parameters() for m in model_list], [])
+        # model_list is None in static graph
+        parameters = sum([m.parameters() for m in model_list],
+                         []) if model_list else None
         opt = optim.Momentum(
             learning_rate=self.learning_rate,
             momentum=self.momentum,
```
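The replacement line does two things at once: `sum(list_of_lists, [])` flattens the per-model parameter lists into one list, and the trailing conditional maps an absent `model_list` to `parameters=None`, which Paddle's optimizers accept in static mode. A self-contained toy version with a stand-in class instead of a real `nn.Layer` (the Adam and RMSProp hunks below receive the identical change):

```python
class ToyModel:
    """Stand-in for paddle.nn.Layer: only .parameters() matters here."""

    def __init__(self, params):
        self._params = params

    def parameters(self):
        return self._params

# Dynamic graph: flatten the per-model parameter lists into one list.
model_list = [ToyModel(["w1", "b1"]), ToyModel(["w2"])]
parameters = sum([m.parameters() for m in model_list],
                 []) if model_list else None
print(parameters)  # ['w1', 'b1', 'w2']

# Static graph: model_list is None, so the conditional short-circuits and
# parameters stays None without the sum ever being evaluated.
model_list = None
parameters = sum([m.parameters() for m in model_list],
                 []) if model_list else None
print(parameters)  # None
```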
```diff
@@ -79,7 +83,9 @@ class Adam(object):
         self.multi_precision = multi_precision
 
     def __call__(self, model_list):
-        parameters = sum([m.parameters() for m in model_list], [])
+        # model_list is None in static graph
+        parameters = sum([m.parameters() for m in model_list],
+                         []) if model_list else None
         opt = optim.Adam(
             learning_rate=self.learning_rate,
             beta1=self.beta1,
@@ -123,7 +129,9 @@ class RMSProp(object):
         self.grad_clip = grad_clip
 
     def __call__(self, model_list):
-        parameters = sum([m.parameters() for m in model_list], [])
+        # model_list is None in static graph
+        parameters = sum([m.parameters() for m in model_list],
+                         []) if model_list else None
         opt = optim.RMSProp(
             learning_rate=self.learning_rate,
             momentum=self.momentum,
@@ -160,18 +168,28 @@ class AdamW(object):
         self.one_dim_param_no_weight_decay = one_dim_param_no_weight_decay
 
     def __call__(self, model_list):
-        parameters = sum([m.parameters() for m in model_list], [])
+        # model_list is None in static graph
+        parameters = sum([m.parameters() for m in model_list],
+                         []) if model_list else None
+
+        # TODO(gaotingquan): model_list is None when in static graph, "no_weight_decay" not work.
+        if model_list is None:
+            if self.one_dim_param_no_weight_decay or len(
+                    self.no_weight_decay_name_list) != 0:
+                msg = "\"AdamW\" does not support setting \"no_weight_decay\" in static graph. Please use dynamic graph."
+                logger.error(Exception(msg))
+                raise Exception(msg)
 
         self.no_weight_decay_param_name_list = [
             p.name for model in model_list for n, p in model.named_parameters()
             if any(nd in n for nd in self.no_weight_decay_name_list)
-        ]
+        ] if model_list else []
 
         if self.one_dim_param_no_weight_decay:
             self.no_weight_decay_param_name_list += [
                 p.name for model in model_list
                 for n, p in model.named_parameters() if len(p.shape) == 1
-            ]
+            ] if model_list else []
 
         opt = optim.AdamW(
             learning_rate=self.learning_rate,
```
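AdamW needs more than the `parameters` fallback because its `no_weight_decay` machinery filters parameters by name via `model.named_parameters()`, which only exists on dygraph layers; hence the explicit error in static mode. To make the filtering concrete, here is a toy run of the first list comprehension with hypothetical names (the `_Param` and `ToyModel` classes are stand-ins, not PaddleClas code):

```python
class _Param:
    def __init__(self, name, shape):
        self.name = name      # unique framework-level name
        self.shape = shape

class ToyModel:
    def named_parameters(self):
        # (structural name, parameter) pairs, as paddle.nn.Layer yields them
        return [("conv.weight", _Param("conv2d_0.w_0", [64, 3, 3, 3])),
                ("bn.bias", _Param("batch_norm_0.b_0", [64]))]

model_list = [ToyModel()]
no_weight_decay_name_list = ["bn"]  # exempt anything whose name contains "bn"

# Parameters whose structural name matches any listed substring are exempted
# from weight decay, collected by their unique framework-level .name:
no_weight_decay_param_name_list = [
    p.name for model in model_list for n, p in model.named_parameters()
    if any(nd in n for nd in no_weight_decay_name_list)
] if model_list else []
print(no_weight_decay_param_name_list)  # ['batch_norm_0.b_0']

# one_dim_param_no_weight_decay additionally exempts 1-D parameters (biases,
# norm scales), matched by len(p.shape) == 1 in the second comprehension.
```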