PaddlePaddle / PaddleHub
Commit c1a52d0e (unverified)
Authored on Apr 18, 2019 by Zeyu Chen; committed via GitHub on Apr 18, 2019

Merge pull request #22 from Steffy-zxf/add-optimizer

Add more optimizer

Parents: 783cacdc 9b3630f0
Showing 2 changed files with 49 additions and 1 deletion (+49 -1)

paddlehub/finetune/config.py    +1 -1
paddlehub/finetune/strategy.py  +48 -0
paddlehub/finetune/config.py

@@ -33,7 +33,7 @@ class RunConfig(object):
                  use_cuda=False,
                  checkpoint_dir=None,
                  num_epoch=10,
-                 batch_size=None,
+                 batch_size=8,
                  enable_memory_optim=True,
                  strategy=None):
         """ Construct finetune Config """
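The one-line change above gives batch_size a concrete default (8) instead of None. A minimal sketch of how that surfaces to callers follows; it assumes the PaddleHub 1.x convention of re-exporting RunConfig as hub.RunConfig, which is an assumption of this sketch and not shown on this page:

# Minimal sketch, assuming PaddleHub 1.x where RunConfig is available
# as hub.RunConfig (assumed alias; the class itself is in the diff).
import paddlehub as hub

config = hub.RunConfig(
    use_cuda=False,
    num_epoch=10,
    enable_memory_optim=True)  # batch_size deliberately omitted

# After this commit, an omitted batch_size defaults to 8 rather than
# None, so downstream code no longer has to special-case a missing value.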
paddlehub/finetune/strategy.py

@@ -47,6 +47,30 @@ class DefaultStrategy(object):
         if self._optimizer_name.lower() == "sgd":
             self.optimizer = fluid.optimizer.SGD(
                 learning_rate=self.learning_rate)
+        elif self._optimizer_name.lower() == "adagrad":
+            self.optimizer = fluid.optimizer.Adagrad(
+                learning_rate=self.learning_rate)
+        elif self._optimizer_name.lower() == "adamax":
+            self.optimizer = fluid.optimizer.Adamax(
+                learning_rate=self.learning_rate)
+        elif self._optimizer_name.lower() == "decayedadagrad":
+            self.optimizer = fluid.optimizer.DecayedAdagrad(
+                learning_rate=self.learning_rate)
+        elif self._optimizer_name.lower() == "ftrl":
+            self.optimizer = fluid.optimizer.Ftrl(
+                learning_rate=self.learning_rate)
+        elif self._optimizer_name.lower() == "larsmomentum":
+            self.optimizer = fluid.optimizer.LarsMomentum(
+                learning_rate=self.learning_rate)
+        elif self._optimizer_name.lower() == "momentum":
+            self.optimizer = fluid.optimizer.Momentum(
+                learning_rate=self.learning_rate)
+        elif self._optimizer_name.lower() == "decayedadagrad":
+            self.optimizer = fluid.optimizer.DecayedAdagrad(
+                learning_rate=self.learning_rate)
+        elif self._optimizer_name.lower() == "rmsprop":
+            self.optimizer = fluid.optimizer.RMSPropOptimizer(
+                learning_rate=self.learning_rate)
         else:
             self.optimizer = fluid.optimizer.Adam(
                 learning_rate=self.learning_rate)

@@ -132,6 +156,30 @@ class DefaultFinetuneStrategy(DefaultStrategy):
         if self._optimizer_name.lower() == "sgd":
             self.optimizer = fluid.optimizer.SGD(
                 learning_rate=self.learning_rate)
+        elif self._optimizer_name.lower() == "adagrad":
+            self.optimizer = fluid.optimizer.Adagrad(
+                learning_rate=self.learning_rate)
+        elif self._optimizer_name.lower() == "adamax":
+            self.optimizer = fluid.optimizer.Adamax(
+                learning_rate=self.learning_rate)
+        elif self._optimizer_name.lower() == "decayedadagrad":
+            self.optimizer = fluid.optimizer.DecayedAdagrad(
+                learning_rate=self.learning_rate)
+        elif self._optimizer_name.lower() == "ftrl":
+            self.optimizer = fluid.optimizer.Ftrl(
+                learning_rate=self.learning_rate)
+        elif self._optimizer_name.lower() == "larsmomentum":
+            self.optimizer = fluid.optimizer.LarsMomentum(
+                learning_rate=self.learning_rate)
+        elif self._optimizer_name.lower() == "momentum":
+            self.optimizer = fluid.optimizer.Momentum(
+                learning_rate=self.learning_rate)
+        elif self._optimizer_name.lower() == "decayedadagrad":
+            self.optimizer = fluid.optimizer.DecayedAdagrad(
+                learning_rate=self.learning_rate)
+        elif self._optimizer_name.lower() == "rmsprop":
+            self.optimizer = fluid.optimizer.RMSPropOptimizer(
+                learning_rate=self.learning_rate)
         else:
             self.optimizer = fluid.optimizer.Adam(
                 learning_rate=self.learning_rate)
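Both hunks add the same elif chain to DefaultStrategy and DefaultFinetuneStrategy: the configured optimizer name is lower-cased and dispatched to the matching fluid.optimizer class, with Adam as the fallback for any unrecognized name. Note that the "decayedadagrad" branch appears twice in each hunk; the second occurrence is unreachable. A hypothetical usage sketch follows; the DefaultStrategy constructor keywords are assumptions inferred from the attributes the diff references (learning_rate, _optimizer_name) and are not confirmed by this page:

# Hypothetical sketch: selecting one of the newly supported optimizers.
# Assumes DefaultStrategy accepts learning_rate and optimizer_name
# keyword arguments (inferred from the diff, not confirmed here).
from paddlehub.finetune.strategy import DefaultStrategy

strategy = DefaultStrategy(learning_rate=1e-3, optimizer_name="RMSProp")

# The dispatch lower-cases the name, so "RMSProp" matches the "rmsprop"
# branch and maps to fluid.optimizer.RMSPropOptimizer; a name outside
# the chain (e.g. "adadelta") silently falls back to fluid.optimizer.Adam.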