Unverified commit 8ea490f1, authored by wopeizl, committed by GitHub

cherry-pick optimize the api test=develop test=release/1.6 (#20350)

Parent ed0c721b
@@ -65,24 +65,26 @@ def create_parameter(shape,
                     is_bias=False,
                     default_initializer=None):
    """
    This function creates a parameter. The parameter is a learnable variable, which can have
    gradient, and can be optimized.

    NOTE: this is a very low-level API. This API is useful when you create
    operators yourself instead of using layers.

    Parameters:
        shape (list of int): Shape of the parameter
        dtype (str): Data type of the parameter
        name (str, optional): For detailed information, please refer to
            :ref:`api_guide_Name`. Usually the name does not need to be set, and it is None by default.
        attr (ParamAttr, optional): Attributes of the parameter
        is_bias (bool, optional): This can affect which default initializer is chosen
            when default_initializer is None. If is_bias is True,
            initializer.Constant(0.0) will be used. Otherwise,
            Xavier() will be used.
        default_initializer (Initializer, optional): Initializer for the parameter

    Returns:
        The created parameter.

    Examples:
        .. code-block:: python
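The example body under this hunk is elided by the diff. A minimal usage sketch of the signature documented above (the shape and dtype values are illustrative, not from the commit):

```python
import paddle.fluid as fluid

# Create a trainable 784x200 weight matrix; with is_bias=False and no
# default_initializer, the Xavier initializer is chosen, per the docstring.
w = fluid.layers.create_parameter(shape=[784, 200],
                                  dtype='float32',
                                  is_bias=False)
```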
@@ -105,23 +107,22 @@ def create_global_var(shape,
                      force_cpu=False,
                      name=None):
    """
    This function creates a new tensor variable with value in the global block (block 0).

    Parameters:
        shape (list of int): Shape of the variable
        value (float): The value of the variable. The newly created
            variable will be filled with it.
        dtype (str): Data type of the variable
        persistable (bool, optional): Whether this variable is persistable.
            Default: False
        force_cpu (bool, optional): Force this variable to be on CPU.
            Default: False
        name (str, optional): For detailed information, please refer to
            :ref:`api_guide_Name`. Usually the name does not need to be set, and it is None by default.

    Returns:
        Variable: The created Variable

    Examples:
        .. code-block:: python
...
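The in-tree example is again truncated by the hunk; a hedged sketch of the call (the `global_step` name is purely illustrative):

```python
import paddle.fluid as fluid

# A persistable scalar counter kept on CPU; persistable=True means the
# variable survives across executor runs instead of being garbage-collected.
step = fluid.layers.create_global_var(shape=[1],
                                      value=0.0,
                                      dtype='float32',
                                      persistable=True,
                                      force_cpu=True,
                                      name='global_step')
```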
@@ -695,12 +695,13 @@ class SGDOptimizer(Optimizer):
        param\_out = param - learning\_rate * grad

    Parameters:
        learning_rate (float|Variable): The learning rate used to update parameters. \
            Can be a float value or a Variable with one float value as data element.
        regularization: A Regularizer, such as :ref:`api_fluid_regularizer_L2DecayRegularizer`. \
            Optional, default is None.
        name (str, optional): This parameter is used by developers to print debugging information. \
            For details, please refer to :ref:`api_guide_Name`. Default is None.

    Examples:
        .. code-block:: python
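A minimal sketch of the update rule in use (the network definition is illustrative, not from the commit):

```python
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[4], dtype='float32')
y = fluid.layers.data(name='y', shape=[1], dtype='float32')
pred = fluid.layers.fc(input=x, size=1)
loss = fluid.layers.mean(fluid.layers.square_error_cost(input=pred, label=y))

# minimize() appends the backward pass and the param = param - lr * grad
# update ops to the default main program.
sgd = fluid.optimizer.SGDOptimizer(learning_rate=0.01)
sgd.minimize(loss)
```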
@@ -778,14 +779,15 @@ class MomentumOptimizer(Optimizer):
        &\quad param = param - learning\_rate * velocity

    Parameters:
        learning_rate (float|Variable): The learning rate used to update parameters. \
            Can be a float value or a Variable with one float value as data element.
        momentum (float): Momentum factor
        use_nesterov (bool, optional): Enables Nesterov momentum, default is False.
        regularization: A Regularizer, such as :ref:`api_fluid_regularizer_L2DecayRegularizer`. \
            Optional, default is None.
        name (str, optional): This parameter is used by developers to print debugging information. \
            For details, please refer to :ref:`api_guide_Name`. Default is None.

    Examples:
        .. code-block:: python
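A hedged sketch with the parameters documented above (the tiny network is illustrative):

```python
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[4], dtype='float32')
loss = fluid.layers.mean(fluid.layers.fc(input=x, size=1))

# velocity = mu * velocity + grad; param -= lr * velocity (see equation above)
opt = fluid.optimizer.MomentumOptimizer(learning_rate=0.01,
                                        momentum=0.9,
                                        use_nesterov=False)
opt.minimize(loss)
```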
@@ -1142,16 +1144,16 @@ class LarsMomentumOptimizer(Optimizer):
        & param = param - velocity

    Parameters:
        learning_rate (float|Variable): The learning rate used to update parameters. \
            Can be a float value or a Variable with one float value as data element.
        momentum (float): Momentum factor
        lars_coeff (float): Defines how much we trust the layer to change its weights.
        lars_weight_decay (float): Weight decay coefficient for decaying using LARS.
        regularization: A Regularizer, such as :ref:`api_fluid_regularizer_L2DecayRegularizer`. \
            Optional, default is None.
        name (str, optional): This parameter is used by developers to print debugging information. \
            For details, please refer to :ref:`api_guide_Name`. Default is None.

    Examples:
        .. code-block:: python
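A minimal sketch using the parameter names from the docstring above (the coefficient values are illustrative, not defaults confirmed by this diff):

```python
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[4], dtype='float32')
loss = fluid.layers.mean(fluid.layers.fc(input=x, size=1))

# lars_coeff scales the layer-wise trust ratio; lars_weight_decay is the
# LARS-specific decay applied inside the velocity update above.
opt = fluid.optimizer.LarsMomentumOptimizer(learning_rate=0.001,
                                            momentum=0.9,
                                            lars_coeff=0.001,
                                            lars_weight_decay=0.0005)
opt.minimize(loss)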
@@ -2015,20 +2017,21 @@ class RMSPropOptimizer(Optimizer):
        from 1e-4 to 1e-8.

    Parameters:
        learning_rate (float): Global learning rate.
        rho (float): rho is :math:`\\rho` in the equation, default is 0.95.
        epsilon (float): :math:`\\epsilon` in the equation is a smoothing term to
            avoid division by zero, default is 1e-6.
        momentum (float): :math:`\\beta` in the equation is the momentum term,
            default is 0.0.
        centered (bool): If True, gradients are normalized by the estimated variance of
            the gradient; if False, by the uncentered second moment. Setting this to
            True may help with training, but is slightly more expensive in terms of
            computation and memory. Defaults to False.
        regularization: A Regularizer, such as :ref:`api_fluid_regularizer_L2DecayRegularizer`. \
            Optional, default is None.
        name (str, optional): This parameter is used by developers to print debugging information. \
            For details, please refer to :ref:`api_guide_Name`. Default is None.

    Raises:
        ValueError: If learning_rate, rho, epsilon, momentum are None.
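A hedged usage sketch with the documented keyword arguments (values are illustrative):

```python
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[4], dtype='float32')
loss = fluid.layers.mean(fluid.layers.fc(input=x, size=1))

# centered=True normalizes by the estimated variance of the gradient, at the
# cost of extra state and compute, as the parameter list above describes.
opt = fluid.optimizer.RMSPropOptimizer(learning_rate=0.01,
                                       rho=0.95,
                                       epsilon=1e-6,
                                       momentum=0.0,
                                       centered=True)
opt.minimize(loss)
```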
@@ -2180,14 +2183,15 @@ class FtrlOptimizer(Optimizer):
        &squared\_accum += grad^2

    Parameters:
        learning_rate (float|Variable): Global learning rate.
        l1 (float): L1 regularization strength, default is 0.0.
        l2 (float): L2 regularization strength, default is 0.0.
        lr_power (float): Learning Rate Power, default is -0.5.
        regularization: A Regularizer, such as :ref:`api_fluid_regularizer_L2DecayRegularizer`. \
            Optional, default is None.
        name (str, optional): This parameter is used by developers to print debugging information. \
            For details, please refer to :ref:`api_guide_Name`. Default is None.

    Raises:
        ValueError: If learning_rate, l1, l2 or lr_power is None.
@@ -2220,7 +2224,7 @@ class FtrlOptimizer(Optimizer):
            for data in train_reader():
                exe.run(main, feed=feeder.feed(data), fetch_list=fetch_list)

    NOTE:
        Currently, FtrlOptimizer doesn't support sparse parameter optimization.
    """
...
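A minimal sketch with the parameters documented in this hunk (values taken from the stated defaults; the network is illustrative):

```python
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[4], dtype='float32')
loss = fluid.layers.mean(fluid.layers.fc(input=x, size=1))

# lr_power = -0.5 gives the classic FTRL per-coordinate learning-rate
# schedule derived from the accumulator equations above.
opt = fluid.optimizer.FtrlOptimizer(learning_rate=0.1,
                                    l1=0.0,
                                    l2=0.0,
                                    lr_power=-0.5)
opt.minimize(loss)
```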