Commit 5db10743
Authored on Dec 11, 2017 by Travis CI

Deploy to GitHub Pages: 0ca62744

Parent: 8bcf7dab
Showing 4 changed files with 16 additions and 16 deletions (+16 -16)
develop/doc/api/v2/fluid/optimizer.html     +7 -7
develop/doc/searchindex.js                  +1 -1
develop/doc_cn/api/v2/fluid/optimizer.html  +7 -7
develop/doc_cn/searchindex.js               +1 -1
develop/doc/api/v2/fluid/optimizer.html
@@ -216,7 +216,7 @@
 <h2>Optimizer<a class="headerlink" href="#id1" title="Permalink to this headline">¶</a></h2>
 <dl class="class">
-<dt><em class="property">class </em><code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">Optimizer</code><span class="sig-paren">(</span><em>global_step=None</em><span class="sig-paren">)</span></dt>
+<dt><em class="property">class </em><code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">Optimizer</code><span class="sig-paren">(</span><em>global_step=None</em>, <em>regularization=None</em><span class="sig-paren">)</span></dt>
 <dd><p>Optimizer Base class.</p>
 <p>Define the common interface of an optimizer. User should not use this class directly,
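The only functional change in this hunk is the new regularization keyword on the Optimizer base class. A minimal usage sketch follows, assuming the 2017-era paddle.v2.fluid package layout and that fluid.regularizer.L2DecayRegularizer is the contemporaneous weight-decay helper; its name and signature are assumptions, not shown in this diff.

import paddle.v2.fluid as fluid

# Optimizer itself is a base class and is not instantiated directly;
# concrete optimizers forward extra keywords, including regularization,
# up to Optimizer.__init__.
sgd = fluid.optimizer.SGDOptimizer(
    learning_rate=0.01,
    # Assumed helper: applies L2 weight decay to trainable parameters.
    regularization=fluid.regularizer.L2DecayRegularizer(
        regularization_coeff=1e-4))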
@@ -264,7 +264,7 @@ their internal state.
 <h2>SGDOptimizer<a class="headerlink" href="#sgdoptimizer" title="Permalink to this headline">¶</a></h2>
 <dl class="class">
-<dt><em class="property">class </em><code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">SGDOptimizer</code><span class="sig-paren">(</span><em>learning_rate</em>, <em>global_step=None</em><span class="sig-paren">)</span></dt>
+<dt><em class="property">class </em><code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">SGDOptimizer</code><span class="sig-paren">(</span><em>learning_rate</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
 <dd><p>Simple SGD optimizer without any state.</p>
 </dd></dl>
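SGDOptimizer now takes **kwargs in place of an explicit global_step=None, so existing keyword calls stay source-compatible while new base-class options such as regularization pass through. A small sketch under the same package assumption as above:

import paddle.v2.fluid as fluid

# Both calls are accepted after this change: global_step is no longer
# a named parameter of SGDOptimizer, but **kwargs forwards it (and any
# other base-class option) to Optimizer.__init__.
sgd_old_style = fluid.optimizer.SGDOptimizer(learning_rate=0.01,
                                             global_step=None)
sgd_new_style = fluid.optimizer.SGDOptimizer(learning_rate=0.01)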
@@ -273,7 +273,7 @@ their internal state.
 <h2>MomentumOptimizer<a class="headerlink" href="#momentumoptimizer" title="Permalink to this headline">¶</a></h2>
 <dl class="class">
-<dt><em class="property">class </em><code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">MomentumOptimizer</code><span class="sig-paren">(</span><em>learning_rate</em>, <em>momentum</em>, <em>use_nesterov=False</em>, <em>global_step=None</em><span class="sig-paren">)</span></dt>
+<dt><em class="property">class </em><code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">MomentumOptimizer</code><span class="sig-paren">(</span><em>learning_rate</em>, <em>momentum</em>, <em>use_nesterov=False</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
 <dd><p>Simple Momentum optimizer with velocity state</p>
 </dd></dl>
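For reference, a constructor call matching the documented signature; the "velocity state" the docstring mentions is the usual per-parameter momentum accumulator. A sketch under the same assumptions:

import paddle.v2.fluid as fluid

# Keeps one velocity accumulator per parameter; in the common
# formulation: v = momentum * v + grad, param -= learning_rate * v.
momentum = fluid.optimizer.MomentumOptimizer(
    learning_rate=0.01,
    momentum=0.9,
    use_nesterov=True)  # switch to the Nesterov accelerated variant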
@@ -282,7 +282,7 @@ their internal state.
 <h2>AdagradOptimizer<a class="headerlink" href="#adagradoptimizer" title="Permalink to this headline">¶</a></h2>
 <dl class="class">
-<dt><em class="property">class </em><code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">AdagradOptimizer</code><span class="sig-paren">(</span><em>learning_rate</em>, <em>epsilon=1e-06</em>, <em>global_step=None</em><span class="sig-paren">)</span></dt>
+<dt><em class="property">class </em><code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">AdagradOptimizer</code><span class="sig-paren">(</span><em>learning_rate</em>, <em>epsilon=1e-06</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
 <dd><p>Simple Adagrad optimizer with moment state</p>
 </dd></dl>
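A matching constructor call; epsilon guards the division in the standard Adagrad update, where squared gradients accumulate into the "moment state". Sketch under the same assumptions:

import paddle.v2.fluid as fluid

# Per-parameter squared-gradient accumulator, in the standard form:
#   moment += grad**2
#   param  -= learning_rate * grad / (sqrt(moment) + epsilon)
adagrad = fluid.optimizer.AdagradOptimizer(
    learning_rate=0.1,
    epsilon=1e-06)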
@@ -291,7 +291,7 @@ their internal state.
 <h2>AdamOptimizer<a class="headerlink" href="#adamoptimizer" title="Permalink to this headline">¶</a></h2>
 <dl class="class">
-<dt><em class="property">class </em><code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">AdamOptimizer</code><span class="sig-paren">(</span><em>learning_rate=0.001</em>, <em>beta1=0.9</em>, <em>beta2=0.999</em>, <em>epsilon=1e-08</em>, <em>global_step=None</em><span class="sig-paren">)</span></dt>
+<dt><em class="property">class </em><code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">AdamOptimizer</code><span class="sig-paren">(</span><em>learning_rate=0.001</em>, <em>beta1=0.9</em>, <em>beta2=0.999</em>, <em>epsilon=1e-08</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
 <dd><p>Implements the Adam Optimizer</p>
 </dd></dl>
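The defaults shown (beta1=0.9, beta2=0.999, epsilon=1e-08) are the standard Adam values from Kingma & Ba. A construction sketch under the same assumptions; avg_cost is a hypothetical loss variable built elsewhere in a fluid program:

import paddle.v2.fluid as fluid

# Adam keeps first and second moment estimates per parameter:
#   m = beta1 * m + (1 - beta1) * grad
#   v = beta2 * v + (1 - beta2) * grad**2
# with bias correction before the update step.
adam = fluid.optimizer.AdamOptimizer(learning_rate=0.001,
                                     beta1=0.9, beta2=0.999,
                                     epsilon=1e-08)
# adam.minimize(avg_cost)  # hypothetical: appends optimize ops for the loss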
@@ -300,7 +300,7 @@ their internal state.
 <h2>AdamaxOptimizer<a class="headerlink" href="#adamaxoptimizer" title="Permalink to this headline">¶</a></h2>
 <dl class="class">
-<dt><em class="property">class </em><code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">AdamaxOptimizer</code><span class="sig-paren">(</span><em>learning_rate=0.001</em>, <em>beta1=0.9</em>, <em>beta2=0.999</em>, <em>epsilon=1e-08</em>, <em>global_step=None</em><span class="sig-paren">)</span></dt>
+<dt><em class="property">class </em><code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">AdamaxOptimizer</code><span class="sig-paren">(</span><em>learning_rate=0.001</em>, <em>beta1=0.9</em>, <em>beta2=0.999</em>, <em>epsilon=1e-08</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
 <dd><p>Implements the Adamax Optimizer</p>
 </dd></dl>
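Adamax is the infinity-norm variant of Adam, which is why it shares Adam's argument list. Constructor sketch under the same assumptions:

import paddle.v2.fluid as fluid

# Same hyperparameters as Adam; the second accumulator uses the
# infinity norm instead of squared gradients:
#   u = max(beta2 * u, abs(grad))
adamax = fluid.optimizer.AdamaxOptimizer(learning_rate=0.001,
                                         beta1=0.9, beta2=0.999,
                                         epsilon=1e-08)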
@@ -309,7 +309,7 @@ their internal state.
 <h2>DecayedAdagradOptimizer<a class="headerlink" href="#decayedadagradoptimizer" title="Permalink to this headline">¶</a></h2>
 <dl class="class">
-<dt><em class="property">class </em><code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">DecayedAdagradOptimizer</code><span class="sig-paren">(</span><em>learning_rate</em>, <em>decay=0.95</em>, <em>epsilon=1e-06</em>, <em>global_step=None</em><span class="sig-paren">)</span></dt>
+<dt><em class="property">class </em><code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">DecayedAdagradOptimizer</code><span class="sig-paren">(</span><em>learning_rate</em>, <em>decay=0.95</em>, <em>epsilon=1e-06</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
 <dd><p>Simple Decayed Adagrad optimizer with moment state</p>
 </dd></dl>
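Decayed Adagrad differs from plain Adagrad in letting the squared-gradient accumulator decay, so old gradients fade instead of accumulating forever. Sketch under the same assumptions:

import paddle.v2.fluid as fluid

# Decaying accumulator, in the usual form:
#   moment = decay * moment + (1 - decay) * grad**2
#   param -= learning_rate * grad / (sqrt(moment) + epsilon)
decayed = fluid.optimizer.DecayedAdagradOptimizer(
    learning_rate=0.01,
    decay=0.95,
    epsilon=1e-06)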
develop/doc/searchindex.js
Source diff not shown because it is too large; the blob can be viewed instead.
develop/doc_cn/api/v2/fluid/optimizer.html
@@ -210,7 +210,7 @@
 <h2>Optimizer<a class="headerlink" href="#id1" title="永久链接至标题">¶</a></h2>
 <dl class="class">
-<dt><em class="property">class </em><code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">Optimizer</code><span class="sig-paren">(</span><em>global_step=None</em><span class="sig-paren">)</span></dt>
+<dt><em class="property">class </em><code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">Optimizer</code><span class="sig-paren">(</span><em>global_step=None</em>, <em>regularization=None</em><span class="sig-paren">)</span></dt>
 <dd><p>Optimizer Base class.</p>
 <p>Define the common interface of an optimizer. User should not use this class directly,
@@ -258,7 +258,7 @@ their internal state.
 <h2>SGDOptimizer<a class="headerlink" href="#sgdoptimizer" title="永久链接至标题">¶</a></h2>
 <dl class="class">
-<dt><em class="property">class </em><code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">SGDOptimizer</code><span class="sig-paren">(</span><em>learning_rate</em>, <em>global_step=None</em><span class="sig-paren">)</span></dt>
+<dt><em class="property">class </em><code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">SGDOptimizer</code><span class="sig-paren">(</span><em>learning_rate</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
 <dd><p>Simple SGD optimizer without any state.</p>
 </dd></dl>
@@ -267,7 +267,7 @@ their internal state.
 <h2>MomentumOptimizer<a class="headerlink" href="#momentumoptimizer" title="永久链接至标题">¶</a></h2>
 <dl class="class">
-<dt><em class="property">class </em><code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">MomentumOptimizer</code><span class="sig-paren">(</span><em>learning_rate</em>, <em>momentum</em>, <em>use_nesterov=False</em>, <em>global_step=None</em><span class="sig-paren">)</span></dt>
+<dt><em class="property">class </em><code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">MomentumOptimizer</code><span class="sig-paren">(</span><em>learning_rate</em>, <em>momentum</em>, <em>use_nesterov=False</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
 <dd><p>Simple Momentum optimizer with velocity state</p>
 </dd></dl>
@@ -276,7 +276,7 @@ their internal state.
 <h2>AdagradOptimizer<a class="headerlink" href="#adagradoptimizer" title="永久链接至标题">¶</a></h2>
 <dl class="class">
-<dt><em class="property">class </em><code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">AdagradOptimizer</code><span class="sig-paren">(</span><em>learning_rate</em>, <em>epsilon=1e-06</em>, <em>global_step=None</em><span class="sig-paren">)</span></dt>
+<dt><em class="property">class </em><code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">AdagradOptimizer</code><span class="sig-paren">(</span><em>learning_rate</em>, <em>epsilon=1e-06</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
 <dd><p>Simple Adagrad optimizer with moment state</p>
 </dd></dl>
@@ -285,7 +285,7 @@ their internal state.
 <h2>AdamOptimizer<a class="headerlink" href="#adamoptimizer" title="永久链接至标题">¶</a></h2>
 <dl class="class">
-<dt><em class="property">class </em><code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">AdamOptimizer</code><span class="sig-paren">(</span><em>learning_rate=0.001</em>, <em>beta1=0.9</em>, <em>beta2=0.999</em>, <em>epsilon=1e-08</em>, <em>global_step=None</em><span class="sig-paren">)</span></dt>
+<dt><em class="property">class </em><code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">AdamOptimizer</code><span class="sig-paren">(</span><em>learning_rate=0.001</em>, <em>beta1=0.9</em>, <em>beta2=0.999</em>, <em>epsilon=1e-08</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
 <dd><p>Implements the Adam Optimizer</p>
 </dd></dl>
@@ -294,7 +294,7 @@ their internal state.
 <h2>AdamaxOptimizer<a class="headerlink" href="#adamaxoptimizer" title="永久链接至标题">¶</a></h2>
 <dl class="class">
-<dt><em class="property">class </em><code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">AdamaxOptimizer</code><span class="sig-paren">(</span><em>learning_rate=0.001</em>, <em>beta1=0.9</em>, <em>beta2=0.999</em>, <em>epsilon=1e-08</em>, <em>global_step=None</em><span class="sig-paren">)</span></dt>
+<dt><em class="property">class </em><code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">AdamaxOptimizer</code><span class="sig-paren">(</span><em>learning_rate=0.001</em>, <em>beta1=0.9</em>, <em>beta2=0.999</em>, <em>epsilon=1e-08</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
 <dd><p>Implements the Adamax Optimizer</p>
 </dd></dl>
@@ -303,7 +303,7 @@ their internal state.
 <h2>DecayedAdagradOptimizer<a class="headerlink" href="#decayedadagradoptimizer" title="永久链接至标题">¶</a></h2>
 <dl class="class">
-<dt><em class="property">class </em><code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">DecayedAdagradOptimizer</code><span class="sig-paren">(</span><em>learning_rate</em>, <em>decay=0.95</em>, <em>epsilon=1e-06</em>, <em>global_step=None</em><span class="sig-paren">)</span></dt>
+<dt><em class="property">class </em><code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">DecayedAdagradOptimizer</code><span class="sig-paren">(</span><em>learning_rate</em>, <em>decay=0.95</em>, <em>epsilon=1e-06</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
 <dd><p>Simple Decayed Adagrad optimizer with moment state</p>
 </dd></dl>
develop/doc_cn/searchindex.js
This diff is collapsed (too large to display inline).