Crayon鑫 / Paddle (forked from PaddlePaddle / Paddle)
Commit 8ea490f1 (unverified)
Authored on Oct 11, 2019 by wopeizl; committed via GitHub on Oct 11, 2019

cherry-pick optimize the api test=develop test=release/1.6 (#20350)

Parent: ed0c721b
Showing 2 changed files with 65 additions and 60 deletions:

    python/paddle/fluid/layers/tensor.py  (+20, -19)
    python/paddle/fluid/optimizer.py      (+45, -41)
python/paddle/fluid/layers/tensor.py

@@ -65,24 +65,26 @@ def create_parameter(shape,
                      is_bias=False,
                      default_initializer=None):
     """
-    Create a parameter. The parameter is a learnable variable, which can have
+    This function creates a parameter. The parameter is a learnable variable, which can have
     gradient, and can be optimized.

     NOTE: this is a very low-level API. This API is useful when you create
     operator by your self. instead of using layers.

-    Args:
-        shape(list[int]): shape of the parameter
-        dtype(string): element type of the parameter
-        attr(ParamAttr): attributes of the parameter
-        is_bias(bool): This can affect which default initializer is chosen
+    Parameters:
+        shape (list of int): Shape of the parameter
+        dtype (str): Data type of the parameter
+        name (str, optional): For detailed information, please refer to
+            :ref:`api_guide_Name` . Usually name is no need to set and None by default.
+        attr (ParamAttr, optional): Attributes of the parameter
+        is_bias (bool, optional): This can affect which default initializer is chosen
             when default_initializer is None. If is_bias,
             initializer.Constant(0.0) will be used. Otherwise,
             Xavier() will be used.
-        default_initializer(Initializer): initializer for the parameter
+        default_initializer (Initializer, optional): Initializer for the parameter

     Returns:
-        the created parameter.
+        The created parameter.

     Examples:
         .. code-block:: python
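The `is_bias` behavior this docstring describes (Constant(0.0) for biases when no `default_initializer` is given, Xavier otherwise) can be sketched in plain Python. This is an illustrative stand-in, not Paddle's actual initializer machinery; the helper name and flat-list representation are assumptions made for the sketch.

```python
import math
import random

def init_parameter(shape, is_bias=False, default_initializer=None):
    """Illustrative sketch of the initializer choice in create_parameter.

    With no default_initializer, biases are filled with Constant(0.0)
    and weights with a Xavier (Glorot) uniform distribution.
    """
    n = 1
    for d in shape:
        n *= d
    if default_initializer is not None:
        # default_initializer stands in for an Initializer; here it is
        # just a zero-argument callable producing one value.
        return [default_initializer() for _ in range(n)]
    if is_bias:
        return [0.0] * n  # initializer.Constant(0.0)
    # Xavier(): uniform in [-limit, limit], limit = sqrt(6 / (fan_in + fan_out))
    fan_in, fan_out = shape[0], shape[-1]
    limit = math.sqrt(6.0 / (fan_in + fan_out))
    return [random.uniform(-limit, limit) for _ in range(n)]

bias = init_parameter([4], is_bias=True)   # all zeros
weight = init_parameter([784, 10])         # Xavier-uniform values
```

The flat list is only for brevity; the real API returns a shaped `Variable`.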
@@ -105,23 +107,22 @@ def create_global_var(shape,
                       force_cpu=False,
                       name=None):
     """
-    Create a new tensor variable with value in the global block(block 0).
+    This function creates a new tensor variable with value in the global block(block 0).

-    Args:
-        shape(list[int]): shape of the variable
-        value(float): the value of the variable. The new created
+    Parameters:
+        shape (list of int): Shape of the variable
+        value (float): The value of the variable. The new created
             variable will be filled with it.
-        dtype(string): data type of the variable
-        persistable(bool): if this variable is persistable.
+        dtype (str): Data type of the variable
+        persistable (bool, optional): If this variable is persistable.
             Default: False
-        force_cpu(bool): force this variable to be on CPU.
+        force_cpu (bool, optional): Force this variable to be on CPU.
             Default: False
-        name(str|None): The name of the variable. If set to None the variable
-            name will be generated automatically.
-            Default: None
+        name (str, optional): For detailed information, please refer to
+            :ref:`api_guide_Name` . Usually name is no need to set and None by default.

     Returns:
-        Variable: the created Variable
+        Variable: The created Variable

     Examples:
         .. code-block:: python
python/paddle/fluid/optimizer.py
@@ -695,12 +695,13 @@ class SGDOptimizer(Optimizer):
         param\_out = param - learning\_rate * grad

-    Args:
-        learning_rate (float|Variable): the learning rate used to update parameters. \
+    Parameters:
+        learning_rate (float|Variable): The learning rate used to update parameters. \
             Can be a float value or a Variable with one float value as data element.
-        regularization: A Regularizer, such as
-                        fluid.regularizer.L2DecayRegularizer.
-        name: A optional name prefix.
+        regularization: A Regularizer, such as :ref:`api_fluid_regularizer_L2DecayRegularizer`. \
+            Optional, default is None.
+        name (str, optional): This parameter is used by developers to print debugging information. \
+            For details, please refer to :ref:`api_guide_Name`. Default is None.

     Examples:
         .. code-block:: python
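The update rule in this docstring, `param_out = param - learning_rate * grad`, is simple enough to state as runnable plain Python. This is a sketch of the arithmetic only, not Paddle's `sgd` op:

```python
def sgd_step(param, grad, learning_rate):
    """One SGD step: param_out = param - learning_rate * grad."""
    return [p - learning_rate * g for p, g in zip(param, grad)]

new_param = sgd_step([1.0, 2.0], [0.5, -0.5], learning_rate=0.1)
# new_param ≈ [0.95, 2.05]
```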
@@ -778,14 +779,15 @@ class MomentumOptimizer(Optimizer):
         &\quad param = param - learning\_rate * velocity

-    Args:
-        learning_rate (float|Variable): the learning rate used to update parameters. \
+    Parameters:
+        learning_rate (float|Variable): The learning rate used to update parameters. \
             Can be a float value or a Variable with one float value as data element.
-        momentum (float): momentum factor
-        use_nesterov (bool): enables Nesterov momentum
-        regularization: A Regularizer, such as
-                        fluid.regularizer.L2DecayRegularizer.
-        name: A optional name prefix.
+        momentum (float): Momentum factor
+        use_nesterov (bool, optional): Enables Nesterov momentum, default is false.
+        regularization: A Regularizer, such as :ref:`api_fluid_regularizer_L2DecayRegularizer`. \
+            Optional, default is None.
+        name (str, optional): This parameter is used by developers to print debugging information. \
+            For details, please refer to :ref:`api_guide_Name`. Default is None.

     Examples:
         .. code-block:: python
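The momentum recurrences this docstring references can be sketched the same way. The Nesterov branch below follows the commonly documented form `param - lr * (grad + mu * velocity)`; treat the exact correspondence to Paddle's `momentum` op as an assumption of this sketch:

```python
def momentum_step(param, grad, velocity, lr, mu, use_nesterov=False):
    """One momentum step:
         velocity = mu * velocity + grad
         param    = param - lr * velocity                 (classic)
         param    = param - lr * (grad + mu * velocity)   (Nesterov, assumed form)
    """
    new_v = [mu * v + g for v, g in zip(velocity, grad)]
    if use_nesterov:
        new_p = [p - lr * (g + mu * v)
                 for p, g, v in zip(param, grad, new_v)]
    else:
        new_p = [p - lr * v for p, v in zip(param, new_v)]
    return new_p, new_v

p, v = momentum_step([1.0], [1.0], [0.0], lr=0.1, mu=0.9)
# p ≈ [0.9], v ≈ [1.0]
```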
@@ -1142,16 +1144,16 @@ class LarsMomentumOptimizer(Optimizer):
         & param = param - velocity

-    Args:
-        learning_rate (float|Variable): the learning rate used to update parameters. \
-            Can be a float value or a Variable with one float value as data element.
+    Parameters:
+        learning_rate (float|Variable): The learning rate used to update parameters. \
+            Can be a float value or a Variable with one float value as data element. \
         momentum (float): momentum factor
-        lars_coeff (float): defines how much we trust the layer to change its weights.
-        lars_weight_decay (float): weight decay coefficient for decaying using LARS.
-        regularization: A Regularizer, such as
-                        fluid.regularizer.L2DecayRegularizer.
-        name: A optional name prefix.
+        lars_coeff (float): Defines how much we trust the layer to change its weights.
+        lars_weight_decay (float): Weight decay coefficient for decaying using LARS.
+        regularization: A Regularizer, such as :ref:`api_fluid_regularizer_L2DecayRegularizer`.
+            Optional, default is None.
+        name (str, optional): This parameter is used by developers to print debugging information. \
+            For details, please refer to :ref:`api_guide_Name`. Default is None.

     Examples:
         .. code-block:: python
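The `lars_coeff` / `lars_weight_decay` parameters implement the layer-wise trust ratio of LARS. The sketch below follows the standard LARS recurrence (You et al.); the exact form inside Paddle's `lars_momentum` op may differ slightly, so read this as an assumed illustration of what the coefficients do:

```python
import math

def lars_step(param, grad, velocity, lr, mu, lars_coeff, lars_weight_decay):
    """One LARS momentum step over a single layer (standard recurrence):
         local_lr = lr * lars_coeff * ||param|| /
                    (||grad|| + lars_weight_decay * ||param||)
         velocity = mu * velocity + local_lr * (grad + lars_weight_decay * param)
         param    = param - velocity
    """
    p_norm = math.sqrt(sum(p * p for p in param))
    g_norm = math.sqrt(sum(g * g for g in grad))
    # lars_coeff scales how much the layer norm is trusted; the ratio
    # rescales the global lr per layer.
    local_lr = lr * lars_coeff * p_norm / (g_norm + lars_weight_decay * p_norm)
    new_v = [mu * v + local_lr * (g + lars_weight_decay * p)
             for v, g, p in zip(velocity, grad, param)]
    new_p = [p - v for p, v in zip(param, new_v)]
    return new_p, new_v
```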
@@ -2015,20 +2017,21 @@ class RMSPropOptimizer(Optimizer):
     from 1e-4 to 1e-8.

-    Args:
-        learning_rate(float): global learning rate.
-        rho(float): rho is :math: `\\rho` in equation, set 0.95 by default.
+    Parameters:
+        learning_rate(float): Global learning rate.
+        rho(float): rho is :math: `\\rho` in equation, default is 0.95.
         epsilon(float): :math: `\\epsilon` in equation is smoothing term to
-            avoid division by zero, set 1e-6 by default.
+            avoid division by zero, default is 1e-6.
         momentum(float): :math:`\\beta` in equation is the momentum term,
-            set 0.0 by default.
+            default is 0.0.
         centered(bool): If True, gradients are normalized by the estimated variance of
             the gradient; if False, by the uncentered second moment. Setting this to
             True may help with training, but is slightly more expensive in terms of
             computation and memory. Defaults to False.
-        regularization: A Regularizer, such as
-                        fluid.regularizer.L2DecayRegularizer.
-        name: A optional name prefix.
+        regularization: A Regularizer, such as :ref:`api_fluid_regularizer_L2DecayRegularizer`. \
+            Optional, default is None.
+        name (str, optional): This parameter is used by developers to print debugging information. \
+            For details, please refer to :ref:`api_guide_Name`. Default is None.

     Raises:
         ValueError: If learning_rate, rho, epsilon, momentum are None.
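The `rho`, `epsilon`, and `momentum` parameters documented here plug into the uncentered RMSProp recurrences. A minimal plain-Python sketch of those recurrences (not Paddle's `rmsprop` op, and omitting the `centered` variant):

```python
import math

def rmsprop_step(param, grad, mean_square, velocity, lr,
                 rho=0.95, epsilon=1e-6, momentum=0.0):
    """One uncentered RMSProp step with momentum:
         r = rho * r + (1 - rho) * grad^2
         v = momentum * v + lr * grad / sqrt(r + epsilon)
         param = param - v
    """
    new_ms = [rho * r + (1.0 - rho) * g * g
              for r, g in zip(mean_square, grad)]
    new_v = [momentum * v + lr * g / math.sqrt(r + epsilon)
             for v, g, r in zip(velocity, grad, new_ms)]
    new_p = [p - v for p, v in zip(param, new_v)]
    return new_p, new_ms, new_v
```

With `momentum=0.0` (the default above) this reduces to classic RMSProp; `epsilon` keeps the division stable when the running mean square is near zero.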
@@ -2180,14 +2183,15 @@ class FtrlOptimizer(Optimizer):
         &squared\_accum += grad^2

-    Args:
-        learning_rate (float|Variable): global learning rate.
-        l1 (float): L1 regularization strength.
-        l2 (float): L2 regularization strength.
-        lr_power (float): Learning Rate Power.
-        regularization: A Regularizer, such as
-                        fluid.regularizer.L2DecayRegularizer.
-        name: A optional name prefix.
+    Parameters:
+        learning_rate (float|Variable): Global learning rate.
+        l1 (float): L1 regularization strength, default is 0.0.
+        l2 (float): L2 regularization strength, default is 0.0.
+        lr_power (float): Learning Rate Power, default is -0.5.
+        regularization: A Regularizer, such as :ref:`api_fluid_regularizer_L2DecayRegularizer`. \
+            Optional, default is None.
+        name (str, optional): This parameter is used by developers to print debugging information. \
+            For details, please refer to :ref:`api_guide_Name`. Default is None.

     Raises:
         ValueError: If learning_rate, rho, epsilon, momentum are None.
@@ -2220,7 +2224,7 @@ class FtrlOptimizer(Optimizer):
             for data in train_reader():
                 exe.run(main, feed=feeder.feed(data), fetch_list=fetch_list)

-    Notes:
+    NOTE:
        Currently, FtrlOptimizer doesn't support sparse parameter optimization.
     """
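For the FtrlOptimizer hunk above, the `l1`/`l2`/`lr_power` parameters and the `squared_accum += grad^2` line correspond to the FTRL-Proximal per-coordinate update (McMahan et al.). The sketch below assumes the default `lr_power = -0.5` and is an illustration of that published update, not a transcription of Paddle's `ftrl` op:

```python
import math

def ftrl_step(w, z, n, grad, lr, l1, l2):
    """One FTRL-Proximal step per coordinate, assuming lr_power = -0.5.
    w: weights, z: linear accumulator, n: squared-gradient accumulator."""
    new_w, new_z, new_n = [], [], []
    for wi, zi, ni, g in zip(w, z, n, grad):
        sigma = (math.sqrt(ni + g * g) - math.sqrt(ni)) / lr
        zi = zi + g - sigma * wi
        ni = ni + g * g  # squared_accum += grad^2
        if abs(zi) <= l1:
            wi = 0.0     # L1 shrinks small coordinates to exactly zero
        else:
            wi = -(zi - math.copysign(l1, zi)) / (math.sqrt(ni) / lr + l2)
        new_w.append(wi)
        new_z.append(zi)
        new_n.append(ni)
    return new_w, new_z, new_n
```

With `l1 = l2 = 0` the first step reduces to a plain adaptive gradient step, which makes the accumulator logic easy to check by hand.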
登录