Commit 7e526a6c
Authored on March 20, 2018 by whs; committed by GitHub on March 20, 2018.

Merge pull request #9213 from wanghaoshuang/adadelta

Add python wrapper for Adadelta optimizer

Parents: 381c6a02, 89c9f79c
Showing 2 changed files with 93 additions and 1 deletion (+93, -1):

doc/v2/api/fluid/optimizer.rst    (+7, -0)
python/paddle/fluid/optimizer.py  (+86, -1)
doc/v2/api/fluid/optimizer.rst @ 7e526a6c

@@ -47,3 +47,10 @@ DecayedAdagrad
     :members:
     :noindex:
 
+Adadelta
+--------------
+
+.. autoclass:: paddle.fluid.optimizer.AdadeltaOptimizer
+    :members:
+    :noindex:
+
python/paddle/fluid/optimizer.py @ 7e526a6c

@@ -24,7 +24,9 @@ from layer_helper import LayerHelper
 from regularizer import append_regularization_ops
 from clip import append_gradient_clip_ops, error_clip_callback
 
-__all__ = ['SGD', 'Momentum', 'Adagrad', 'Adam', 'Adamax', 'DecayedAdagrad']
+__all__ = [
+    'SGD', 'Momentum', 'Adagrad', 'Adam', 'Adamax', 'DecayedAdagrad', 'Adadelta'
+]
 
 
 class Optimizer(object):
@@ -580,6 +582,88 @@ class DecayedAdagradOptimizer(Optimizer):
         return decayed_adagrad_op
 
 
+class AdadeltaOptimizer(Optimizer):
+    """
+    **Adadelta Optimizer**
+
+    Simple Adadelta optimizer with average squared grad state and
+    average squared update state.
+    For details of Adadelta, please refer to
+    `ADADELTA: AN ADAPTIVE LEARNING RATE METHOD
+    <http://www.matthewzeiler.com/pubs/googleTR2012/googleTR2012.pdf>`_.
+
+    .. math::
+
+        E(g_t^2) &= \\rho * E(g_{t-1}^2) + (1-\\rho) * g^2 \\\\
+        learning\\_rate &= sqrt( ( E(dx_{t-1}^2) + \\epsilon ) / ( \\
+                          E(g_t^2) + \\epsilon ) ) \\\\
+        E(dx_t^2) &= \\rho * E(dx_{t-1}^2) + (1-\\rho) * (-g*learning\\_rate)^2
+
+    Args:
+        learning_rate(float): global learning rate
+        rho(float): rho in equation
+        epsilon(float): epsilon in equation
+
+    Examples:
+        .. code-block:: python
+
+            optimizer = fluid.optimizer.Adadelta(
+                learning_rate=0.0003, epsilon=1.0e-6, rho=0.95)
+            _, params_grads = optimizer.minimize(cost)
+    """
+
+    _avg_squared_grad_acc_str = "_avg_squared_grad"
+    _avg_squared_update_acc_str = "_avg_squared_update"
+
+    def __init__(self, learning_rate, epsilon=1.0e-6, rho=0.95, **kwargs):
+        if learning_rate is None:
+            raise ValueError("learning_rate is not set.")
+        if epsilon is None:
+            raise ValueError("epsilon is not set.")
+        if rho is None:
+            raise ValueError("rho is not set.")
+        super(AdadeltaOptimizer, self).__init__(
+            learning_rate=learning_rate, **kwargs)
+        self.type = "adadelta"
+        self._epsilon = epsilon
+        self._rho = rho
+
+    def _create_accumulators(self, block, parameters):
+        if not isinstance(block, framework.Block):
+            raise TypeError("block is not instance of framework.Block.")
+
+        for p in parameters:
+            self._add_accumulator(self._avg_squared_grad_acc_str, p)
+            self._add_accumulator(self._avg_squared_update_acc_str, p)
+
+    def _append_optimize_op(self, block, param_and_grad):
+        if not isinstance(block, framework.Block):
+            raise TypeError("block is not instance of framework.Block.")
+
+        avg_squared_grad_acc = self._get_accumulator(
+            self._avg_squared_grad_acc_str, param_and_grad[0])
+        avg_squared_update_acc = self._get_accumulator(
+            self._avg_squared_update_acc_str, param_and_grad[0])
+
+        # Create the adadelta optimizer op
+        adadelta_op = block.append_op(
+            type=self.type,
+            inputs={
+                "Param": param_and_grad[0],
+                "Grad": param_and_grad[1],
+                "AvgSquaredGrad": avg_squared_grad_acc,
+                "AvgSquaredUpdate": avg_squared_update_acc
+            },
+            outputs={
+                "ParamOut": param_and_grad[0],
+                "AvgSquaredGradOut": avg_squared_grad_acc,
+                "AvgSquaredUpdateOut": avg_squared_update_acc
+            },
+            attrs={"epsilon": self._epsilon,
+                   "rho": self._rho})
+
+        return adadelta_op
+
+
 # We short the class name, since users will use the optimizer with the package
 # name. The sample code:
 #

@@ -594,3 +678,4 @@ Adagrad = AdagradOptimizer
 Adam = AdamOptimizer
 Adamax = AdamaxOptimizer
 DecayedAdagrad = DecayedAdagradOptimizer
+Adadelta = AdadeltaOptimizer
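The escaped LaTeX in the new docstring can be hard to read, so below is a plain-NumPy transcription of its three update equations for a single parameter tensor. This is only an illustrative sketch of the math: adadelta_step and its variable names are not Paddle APIs, and the wrapper's global learning_rate argument does not appear in the equations above.

import numpy as np

# Illustrative sketch (not a Paddle API): a NumPy transcription of the
# docstring's Adadelta equations for one parameter tensor.
def adadelta_step(param, grad, avg_sq_grad, avg_sq_update,
                  rho=0.95, epsilon=1.0e-6):
    # E(g_t^2) = rho * E(g_{t-1}^2) + (1 - rho) * g^2
    avg_sq_grad = rho * avg_sq_grad + (1.0 - rho) * grad ** 2
    # learning_rate = sqrt((E(dx_{t-1}^2) + epsilon) / (E(g_t^2) + epsilon))
    rate = np.sqrt((avg_sq_update + epsilon) / (avg_sq_grad + epsilon))
    update = -grad * rate
    # E(dx_t^2) = rho * E(dx_{t-1}^2) + (1 - rho) * update^2
    avg_sq_update = rho * avg_sq_update + (1.0 - rho) * update ** 2
    return param + update, avg_sq_grad, avg_sq_update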
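For reference, a minimal end-to-end sketch of how the wrapper added by this commit could be used in a Fluid program. The toy regression network (x, y, y_predict, cost) is hypothetical; only the optimizer calls follow the docstring example above.

import paddle.fluid as fluid

# Hypothetical toy network; only the optimizer usage below comes from the
# docstring example added in this commit.
x = fluid.layers.data(name='x', shape=[13], dtype='float32')
y = fluid.layers.data(name='y', shape=[1], dtype='float32')
y_predict = fluid.layers.fc(input=x, size=1)
cost = fluid.layers.mean(
    fluid.layers.square_error_cost(input=y_predict, label=y))

# 'Adadelta' is the short alias this commit adds to __all__ and to the
# alias list at the bottom of optimizer.py (Adadelta = AdadeltaOptimizer).
optimizer = fluid.optimizer.Adadelta(
    learning_rate=0.0003, epsilon=1.0e-6, rho=0.95)
_, params_grads = optimizer.minimize(cost)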