机器未来 / Paddle (forked from PaddlePaddle / Paddle)

Commit 7c59ac48
Authored March 20, 2018 by wanghaoshuang

Refine doc and use 'raise' instead of assert

Parent: de2d7299
Showing 2 changed files with 43 additions and 6 deletions.
doc/v2/api/fluid/optimizer.rst    +7 −0
python/paddle/fluid/optimizer.py  +36 −6
doc/v2/api/fluid/optimizer.rst

@@ -47,3 +47,10 @@ DecayedAdagrad
     :members:
     :noindex:
 
+Adadelta
+--------------
+
+.. autoclass:: paddle.fluid.optimizer.AdadeltaOptimizer
+    :members:
+    :noindex:
+
python/paddle/fluid/optimizer.py

@@ -583,16 +583,44 @@ class DecayedAdagradOptimizer(Optimizer):
 
 class AdadeltaOptimizer(Optimizer):
-    """Simple Adadelta optimizer with average squared grad state and
+    """
+    **Adadelta Optimizer**
+
+    Simple Adadelta optimizer with average squared grad state and
     average squared update state.
+    The details of adadelta please refer to this
+    `ADADELTA: AN ADAPTIVE LEARNING RATE METHOD
+    <http://www.matthewzeiler.com/pubs/googleTR2012/googleTR2012.pdf>`_.
+
+    .. math::
+
+        E(g_t^2) &= \\rho * E(g_{t-1}^2) + (1-\\rho) * g^2 \\\\
+        learning\\_rate &= sqrt( ( E(dx_{t-1}^2) + \\epsilon ) / ( \\
+                          E(g_t^2) + \\epsilon ) ) \\\\
+        E(dx_t^2) &= \\rho * E(dx_{t-1}^2) + (1-\\rho) * (-g*learning\\_rate)^2
+
+    Args:
+        learning_rate(float): global learning rate
+        rho(float): rho in equation
+        epsilon(float): epsilon in equation
+
+    Examples:
+        .. code-block:: python
+
+            optimizer = fluid.optimizer.Adadelta(
+                learning_rate=0.0003, epsilon=1.0e-6, rho=0.95)
+            _, params_grads = optimizer.minimize(cost)
     """
 
     _avg_squared_grad_acc_str = "_avg_squared_grad"
     _avg_squared_update_acc_str = "_avg_squared_update"
 
     def __init__(self, learning_rate, epsilon=1.0e-6, rho=0.95, **kwargs):
-        assert learning_rate is not None
-        assert epsilon is not None
-        assert rho is not None
+        if learning_rate is None:
+            raise ValueError("learning_rate is not set.")
+        if epsilon is None:
+            raise ValueError("epsilon is not set.")
+        if rho is None:
+            raise ValueError("rho is not set.")
         super(AdadeltaOptimizer, self).__init__(
             learning_rate=learning_rate, **kwargs)
         self.type = "adadelta"
@@ -600,14 +628,16 @@ class AdadeltaOptimizer(Optimizer):
         self._rho = rho
 
     def _create_accumulators(self, block, parameters):
-        assert isinstance(block, framework.Block)
+        if not isinstance(block, framework.Block):
+            raise TypeError("block is not instance of framework.Block.")
 
         for p in parameters:
             self._add_accumulator(self._avg_squared_grad_acc_str, p)
             self._add_accumulator(self._avg_squared_update_acc_str, p)
 
     def _append_optimize_op(self, block, param_and_grad):
-        assert isinstance(block, framework.Block)
+        if not isinstance(block, framework.Block):
+            raise TypeError("block is not instance of framework.Block.")
 
         avg_squared_grad_acc = self._get_accumulator(
             self._avg_squared_grad_acc_str, param_and_grad[0])
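For readers who want to sanity-check the equations added to the docstring, the three recurrences can be written out in plain NumPy. This is only an illustrative sketch of the math; the names adadelta_step, avg_squared_grad and avg_squared_update are mine, not the operator's internals.

import numpy as np

def adadelta_step(param, grad, avg_squared_grad, avg_squared_update,
                  rho=0.95, epsilon=1.0e-6):
    # E(g_t^2) = rho * E(g_{t-1}^2) + (1 - rho) * g^2
    avg_squared_grad = rho * avg_squared_grad + (1 - rho) * grad ** 2
    # learning_rate = sqrt((E(dx_{t-1}^2) + epsilon) / (E(g_t^2) + epsilon))
    learning_rate = np.sqrt((avg_squared_update + epsilon) /
                            (avg_squared_grad + epsilon))
    # The applied step is -g * learning_rate, element-wise.
    update = -learning_rate * grad
    # E(dx_t^2) = rho * E(dx_{t-1}^2) + (1 - rho) * (-g * learning_rate)^2
    avg_squared_update = rho * avg_squared_update + (1 - rho) * update ** 2
    return param + update, avg_squared_grad, avg_squared_update

Each parameter carries its own pair of running averages, which is what the _avg_squared_grad and _avg_squared_update accumulators registered per parameter in _create_accumulators above correspond to.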
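A note on the "raise instead of assert" half of the commit title, offered as my own reading rather than anything stated in the commit: assert statements are stripped when Python runs with optimizations enabled (python -O) and can only raise AssertionError, whereas an explicit raise keeps the check active in every mode and gives the caller a specific exception type and message (ValueError / TypeError in the diff above). A minimal sketch of the pattern, with hypothetical names:

def set_learning_rate(learning_rate):
    # assert learning_rate is not None   # would be skipped under `python -O`
    if learning_rate is None:            # always checked, explicit error type
        raise ValueError("learning_rate is not set.")
    return learning_rate

set_learning_rate(0.0003)   # returns 0.0003
# set_learning_rate(None)   # raises ValueError("learning_rate is not set.")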