PaddlePaddle / PaddleHub
Commit a75f8359
Authored April 22, 2019 by wuzewu

add L2SP strategy

Parent: ff5dd783
Showing 2 changed files with 31 additions and 33 deletions (+31 -33):

paddlehub/finetune/strategy.py   +30 -32
paddlehub/version.py             +1 -1
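The strategy this commit adds is named after L2-SP regularization, which penalizes the squared distance between the fine-tuned weights and their pretrained starting point, roughly coeff/2 * ||w - w0||^2, instead of ordinary weight decay's coeff/2 * ||w||^2. A minimal NumPy sketch of the two penalty gradients, with illustrative names and values that are not taken from this commit:

```python
import numpy as np

def l2_decay_grad(w, coeff=1e-3):
    # plain L2 weight decay: penalty 0.5 * coeff * ||w||^2, gradient coeff * w
    return coeff * w

def l2sp_decay_grad(w, w0, coeff=1e-3):
    # L2-SP: penalty 0.5 * coeff * ||w - w0||^2, gradient coeff * (w - w0),
    # which pulls the weights back toward the pretrained starting point w0
    return coeff * (w - w0)

w0 = np.array([0.5, -1.0, 2.0])   # pretrained weights (starting point)
w = np.array([0.7, -0.8, 1.5])    # current fine-tuned weights
print(l2_decay_grad(w))           # pushes w toward zero
print(l2sp_decay_grad(w, w0))     # pushes w back toward w0
```

The practical effect is that pretrained parameters are regularized toward their initialization rather than toward zero, which is the behavior the new regularizer attaches to pretrained parameters in the diff below.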
paddlehub/finetune/strategy.py
@@ -22,6 +22,7 @@ import multiprocessing

```diff
 import paddle.fluid as fluid

 from paddlehub.finetune.optimization import adam_weight_decay_optimization
+from paddlehub.finetune.regularizer import L2SPDecayRegularizer


 def get_pretrained_parameter(main_program, start_program):
```
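The hunk only shows the signature of `get_pretrained_parameter(main_program, start_program)`, not its body. A plausible reading, purely as an assumption, is that it collects the persistable variables that the startup (pretrained) program initializes and the main program still holds; a minimal sketch under that assumption:

```python
# Hypothetical sketch; the real helper in paddlehub/finetune/strategy.py may differ.
def get_pretrained_parameter(main_program, start_program):
    pretrained_parameters = []
    global_block = main_program.global_block()
    for op in start_program.global_block().ops:
        for var_name in op.output_arg_names:
            var = global_block.vars.get(var_name)
            # keep variables initialized by the startup program that are
            # still persistable parameters of the main program
            if var is not None and var.persistable:
                pretrained_parameters.append(var)
    return pretrained_parameters
```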
@@ -42,8 +43,6 @@ class DefaultStrategy(object):

```python
    def __init__(self, learning_rate=1e-4, optimizer_name="adam"):
        self.learning_rate = learning_rate
        self._optimizer_name = optimizer_name

    def execute(self, loss):
        if self._optimizer_name.lower() == "sgd":
            self.optimizer = fluid.optimizer.SGD(
                learning_rate=self.learning_rate)
```
@@ -75,6 +74,7 @@ class DefaultStrategy(object):

```python
            self.optimizer = fluid.optimizer.Adam(
                learning_rate=self.learning_rate)

    def execute(self, loss):
        if self.optimizer is not None:
            self.optimizer.minimize(loss)
        else:
```
@@ -153,37 +153,35 @@ class DefaultFinetuneStrategy(DefaultStrategy):

```python
        self.regularization_coeff = regularization_coeff

    def execute(self, loss):
        if self._optimizer_name.lower() == "sgd":
            self.optimizer = fluid.optimizer.SGD(
                learning_rate=self.learning_rate)
        elif self._optimizer_name.lower() == "adagrad":
            self.optimizer = fluid.optimizer.Adagrad(
                learning_rate=self.learning_rate)
        elif self._optimizer_name.lower() == "adamax":
            self.optimizer = fluid.optimizer.Adamax(
                learning_rate=self.learning_rate)
        elif self._optimizer_name.lower() == "decayedadagrad":
            self.optimizer = fluid.optimizer.DecayedAdagrad(
                learning_rate=self.learning_rate)
        elif self._optimizer_name.lower() == "ftrl":
            self.optimizer = fluid.optimizer.Ftrl(
                learning_rate=self.learning_rate)
        elif self._optimizer_name.lower() == "larsmomentum":
            self.optimizer = fluid.optimizer.LarsMomentum(
                learning_rate=self.learning_rate)
        elif self._optimizer_name.lower() == "momentum":
            self.optimizer = fluid.optimizer.Momentum(
                learning_rate=self.learning_rate)
        elif self._optimizer_name.lower() == "decayedadagrad":
            self.optimizer = fluid.optimizer.DecayedAdagrad(
                learning_rate=self.learning_rate)
        elif self._optimizer_name.lower() == "rmsprop":
            self.optimizer = fluid.optimizer.RMSPropOptimizer(
                learning_rate=self.learning_rate)

        # get pretrained parameters
        program = loss.block.program
        global_block = program.global_block()
        pretrained_params = get_pretrained_parameter(
            program, fluid.default_startup_program())

        # set parameter attrs
        for index, param in enumerate(pretrained_params):
            param.regularizer = fluid.regularizer.L2Decay(
                regularization_coeff=self.regularization_coeff)

        if self.optimizer is not None:
            self.optimizer.minimize(loss)
        else:
            self.optimizer = fluid.optimizer.Adam(
                learning_rate=self.learning_rate)
            raise ValueError("DefaultFinetuneStrategy's optimizer is None")


class L2SPFinetuneStrategy(DefaultStrategy):
    def __init__(self,
                 learning_rate=1e-4,
                 optimizer_name="adam",
                 regularization_coeff=1e-3):
        super(L2SPFinetuneStrategy, self).__init__(
            learning_rate=learning_rate, optimizer_name=optimizer_name)
        self.learning_rate = learning_rate
        self._optimizer_name = optimizer_name
        self.regularization_coeff = regularization_coeff

    def execute(self, loss):
        # get pretrained parameters
        program = loss.block.program
        global_block = program.global_block()
```
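The optimizer selection in this hunk is a long `elif` chain keyed on `_optimizer_name` (the `decayedadagrad` branch appears twice). A table-driven dispatch is an equivalent, more compact alternative; the sketch below is illustrative only and is not part of the commit:

```python
import paddle.fluid as fluid

# Hypothetical alternative to the elif chain. LarsMomentum and Momentum are
# omitted because they also require a momentum argument.
_OPTIMIZERS = {
    "sgd": fluid.optimizer.SGD,
    "adagrad": fluid.optimizer.Adagrad,
    "adamax": fluid.optimizer.Adamax,
    "decayedadagrad": fluid.optimizer.DecayedAdagrad,
    "ftrl": fluid.optimizer.Ftrl,
    "rmsprop": fluid.optimizer.RMSPropOptimizer,
    "adam": fluid.optimizer.Adam,
}

def build_optimizer(name, learning_rate):
    # fall back to Adam for unknown names, mirroring the Adam fallback in the diff
    cls = _OPTIMIZERS.get(name.lower(), fluid.optimizer.Adam)
    return cls(learning_rate=learning_rate)
```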
@@ -192,7 +190,7 @@ class DefaultFinetuneStrategy(DefaultStrategy):

```diff
 
         # set parameter attrs
         for index, param in enumerate(pretrained_params):
-            param.regularizer = fluid.regularizer.L2Decay(
+            param.regularizer = L2SPDecayRegularizer(
                 regularization_coeff=self.regularization_coeff)
 
         if self.optimizer is not None:
```
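Taken together, the commit adds an `L2SPFinetuneStrategy` that mirrors `DefaultFinetuneStrategy` but attaches `L2SPDecayRegularizer` to the pretrained parameters instead of plain `L2Decay`. A hedged sketch of how such a strategy might be plugged into a PaddleHub fine-tune run; the `RunConfig` arguments are illustrative and not taken from this commit:

```python
import paddlehub as hub
from paddlehub.finetune.strategy import L2SPFinetuneStrategy

# Build the new strategy with the defaults shown in the diff.
strategy = L2SPFinetuneStrategy(
    learning_rate=1e-4,
    optimizer_name="adam",
    regularization_coeff=1e-3)

# Assumed usage: PaddleHub fine-tune configs of this era accept a strategy
# object; the exact RunConfig signature may differ in version 0.4.8.beta.
config = hub.RunConfig(
    use_cuda=False,
    num_epoch=3,
    batch_size=32,
    strategy=strategy)
```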
paddlehub/version.py
@@ -12,5 +12,5 @@

```diff
 # See the License for the specific language governing permissions and
 # limitations under the License.
 """ PaddleHub version string """
-hub_version = "0.4.6.beta"
+hub_version = "0.4.8.beta"
 module_proto_version = "1.0.0"
```