Commit 6bdd362c

Merge pull request #1436 from luotao1/opt

add optimizer in v2

Authored by Tao Luo on Feb 27, 2017; committed via GitHub on Feb 27, 2017.
Parents: dae21d9b, 51de2ded

Showing 1 changed file with 52 additions and 3 deletions:

python/paddle/v2/optimizer.py (+52, -3)
```diff
@@ -3,7 +3,10 @@ import paddle.trainer_config_helpers.optimizers as v1_optimizers
 import paddle.trainer_config_helpers.config_parser_utils as config_parser_utils
 import paddle.v2
 
-__all__ = ['Adam', 'Adamax']
+__all__ = [
+    'Momentum', 'Adam', 'Adamax', 'AdaGrad', 'DecayedAdaGrad', 'AdaDelta',
+    'RMSProp', 'ModelAverage', 'L2Regularization'
+]
 
 
 class Optimizer(object):
```
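This first hunk only widens the module's export list. Once it lands, the seven new names are reachable with an ordinary import; a minimal sketch of what that enables, assuming the package is installed and importable:

```python
# Before this commit only Adam and Adamax were listed in __all__;
# a star import now also brings in the seven new names.
from paddle.v2.optimizer import *  # noqa: F401,F403

for name in ('Momentum', 'AdaGrad', 'DecayedAdaGrad',
             'AdaDelta', 'RMSProp', 'ModelAverage', 'L2Regularization'):
    assert name in dir(), name
```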
```diff
@@ -38,6 +41,14 @@ class Optimizer(object):
             pass_num)
 
 
+class Momentum(Optimizer):
+    def __init__(self, momentum=None, sparse=False, **kwargs):
+        learning_method = v1_optimizers.MomentumOptimizer(
+            momentum=None, sparse=False)
+        super(Momentum, self).__init__(
+            learning_method=learning_method, **kwargs)
+
+
 class Adam(Optimizer):
     def __init__(self, beta1=0.9, beta2=0.999, epsilon=1e-8, **kwargs):
         learning_method = v1_optimizers.AdamOptimizer(
```
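One thing worth flagging in the hunk above: the new `Momentum` wrapper passes the literals `momentum=None, sparse=False` to `v1_optimizers.MomentumOptimizer` instead of forwarding its own constructor arguments, so a call like `Momentum(momentum=0.9)` would be silently ignored. A minimal sketch of the presumably intended forwarding, following the pattern the other wrappers in this file use:

```python
class Momentum(Optimizer):
    def __init__(self, momentum=None, sparse=False, **kwargs):
        # Forward the caller's values; the merged hunk hardcodes
        # momentum=None, sparse=False here, which drops user settings.
        learning_method = v1_optimizers.MomentumOptimizer(
            momentum=momentum, sparse=sparse)
        super(Momentum, self).__init__(
            learning_method=learning_method, **kwargs)
```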
```diff
@@ -52,7 +63,45 @@ class Adamax(Optimizer):
         super(Adamax, self).__init__(
             learning_method=learning_method, **kwargs)
 
 
+class AdaGrad(Optimizer):
+    def __init__(self, **kwargs):
+        learning_method = v1_optimizers.AdaGradOptimizer()
+        super(AdaGrad, self).__init__(
+            learning_method=learning_method, **kwargs)
+
+
+class DecayedAdaGrad(Optimizer):
+    def __init__(self, rho=0.95, epsilon=1e-06, **kwargs):
+        learning_method = v1_optimizers.DecayedAdaGradOptimizer(
+            rho=rho, epsilon=epsilon)
+        super(DecayedAdaGrad, self).__init__(
+            learning_method=learning_method, **kwargs)
+
+
+class AdaDelta(Optimizer):
+    def __init__(self, rho=0.95, epsilon=1e-06, **kwargs):
+        learning_method = v1_optimizers.AdaDeltaOptimizer(
+            rho=rho, epsilon=epsilon)
+        super(AdaDelta, self).__init__(
+            learning_method=learning_method, **kwargs)
+
+
+class RMSProp(Optimizer):
+    def __init__(self, rho=0.95, epsilon=1e-6, **kwargs):
+        learning_method = v1_optimizers.RMSPropOptimizer(
+            rho=rho, epsilon=epsilon)
+        super(RMSProp, self).__init__(
+            learning_method=learning_method, **kwargs)
+
+
+ModelAverage = v1_optimizers.ModelAverage
+L2Regularization = v1_optimizers.L2Regularization
+
 if __name__ == '__main__':
     swig_api.initPaddle('--use_gpu=false')
-    opt = paddle.v2.optimizer.Adam()
-    print opt.enable_types()
+    for opt in [
+            Momentum(), Adam(), Adamax(), AdaGrad(), DecayedAdaGrad(),
+            AdaDelta(), RMSProp(), Adam(
+                model_average=ModelAverage(average_window=0.5),
+                regularization=L2Regularization(rate=0.5),
+                gradient_clipping_threshold=25)
+    ]:
+        print opt, opt.enable_types()
```
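For context, here is roughly how the expanded API would be used from a v2 training script, mirroring the smoke test in the `__main__` block above. This assumes paddle.v2's package init exposes the optimizer module, as the original `paddle.v2.optimizer.Adam()` call suggests, and the numeric values are illustrative only, taken from that test rather than recommended settings:

```python
import paddle.v2 as paddle

# An Adam optimizer with the auxiliary knobs exercised by the smoke test:
# model averaging, L2 weight decay, and gradient clipping.
optimizer = paddle.optimizer.Adam(
    beta1=0.9,
    beta2=0.999,
    epsilon=1e-8,
    model_average=paddle.optimizer.ModelAverage(average_window=0.5),
    regularization=paddle.optimizer.L2Regularization(rate=0.5),
    gradient_clipping_threshold=25)

print optimizer.enable_types()  # same check as the smoke test (Python 2)
```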