Commit 7e01966c authored by JuncaiPeng

Change docs of WeightNormParamAttr to dygraph model, test=develop, test=document_fix

Parent c143326d
@@ -222,23 +222,24 @@ class WeightNormParamAttr(ParamAttr):
     Args:
-        dim(int): Dimension over which to compute the norm. Dim is a non-negative
+        dim(int, optional): Dimension over which to compute the norm. Dim is a non-negative
             number which is less than the rank of weight Tensor. For example, dim can
             be chosen from 0, 1, 2, 3 for convolution whose weight shape is [cout, cin, kh, kw]
             and rank is 4. Default None, meaning that all elements will be normalized.
         name(str, optional): The parameter's name. Default None, meaning that the name would
             be created automatically. Please refer to :ref:`api_guide_Name` for more details.
-        initializer(Initializer): The method to initialize this parameter, such as
-            ``initializer = fluid.initializer.ConstantInitializer(1.0)``. Default None,
+        initializer(Initializer, optional): The method to initialize this parameter, such as
+            ``initializer = paddle.nn.initializer.Constant(1.0)``. Default None,
             meaning that the weight parameter is initialized by Xavier initializer, and
             the bias parameter is initialized by 0.
-        learning_rate(float32): The parameter's learning rate when
+        learning_rate(float32, optional): The parameter's learning rate when
             optimizer is :math:`global\_lr * parameter\_lr * scheduler\_factor`.
             Default 1.0.
-        regularizer (WeightDecayRegularizer, optional): Regularization strategy. There are two methods:
-            :ref:`api_fluid_regularizer_L1Decay` , :ref:`api_fluid_regularizer_L2Decay` . If regularizer
-            is also set in ``optimizer`` (such as :ref:`api_fluid_optimizer_SGDOptimizer` ), that regularizer
-            setting in optimizer will be ignored. Default None, meaning there is no regularization.
+        regularizer (WeightDecayRegularizer, optional): Regularization strategy. There are
+            two methods: :ref:`api_paddle_regularizer_L1Decay` , :ref:`api_paddle_regularizer_L2Decay` .
+            If regularizer is also set in ``optimizer``
+            (such as :ref:`api_paddle_optimizer_SGD` ), that regularizer setting in
+            optimizer will be ignored. Default None, meaning there is no regularization.
         trainable(bool, optional): Whether this parameter is trainable. Default True.
         do_model_average(bool, optional): Whether this parameter should do model average.
             Default False.
@@ -246,18 +247,22 @@ class WeightNormParamAttr(ParamAttr):
     Examples:
         .. code-block:: python
-            import paddle.fluid as fluid
-            data = fluid.layers.data(name="data", shape=[3, 32, 32], dtype="float32")
-            fc = fluid.layers.fc(input=data,
-                                 size=1000,
-                                 param_attr=fluid.WeightNormParamAttr(
-                                     dim=None,
-                                     name='weight_norm_param',
-                                     initializer=fluid.initializer.ConstantInitializer(1.0),
-                                     learning_rate=1.0,
-                                     regularizer=fluid.regularizer.L2DecayRegularizer(regularization_coeff=0.1),
-                                     trainable=True,
-                                     do_model_average=False))
+            import paddle
+
+            paddle.enable_static()
+
+            data = paddle.static.data(name="data", shape=[3, 32, 32], dtype="float32")
+
+            fc = paddle.static.nn.fc(input=data,
+                                     size=1000,
+                                     param_attr=paddle.static.WeightNormParamAttr(
+                                         dim=None,
+                                         name='weight_norm_param',
+                                         initializer=paddle.nn.initializer.Constant(1.0),
+                                         learning_rate=1.0,
+                                         regularizer=paddle.regularizer.L2Decay(0.1),
+                                         trainable=True,
+                                         do_model_average=False))
     """
     # List to record the parameters reparameterized by weight normalization.
......
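For intuition about the `dim` argument documented above: weight normalization reparameterizes a weight tensor as w = g * v / ||v||, where the L2 norm is computed over all axes except `dim` (and over the whole tensor when `dim` is None). The following NumPy sketch is a hypothetical illustration of that semantics, not the Paddle implementation; the `weight_norm` helper and its argument names are made up for this example.

```python
import numpy as np

# Hypothetical sketch (not the Paddle API): weight normalization
# reparameterizes a weight as w = g * v / ||v||. The L2 norm is taken
# over every axis except `dim`; dim=None normalizes the tensor as a
# whole, matching the docstring's default.
def weight_norm(v, g, dim=None):
    if dim is None:
        norm = np.sqrt((v ** 2).sum())
    else:
        # Norm over all axes except `dim`, keeping dims so the result
        # broadcasts back against v's shape.
        axes = tuple(i for i in range(v.ndim) if i != dim)
        norm = np.sqrt((v ** 2).sum(axis=axes, keepdims=True))
    return g * v / norm

# Conv-style weight [cout, cin, kh, kw]; dim=0 yields one norm per
# output filter, as in the docstring's [cout, cin, kh, kw] example.
v = np.random.randn(8, 3, 3, 3)
g = np.full((8, 1, 1, 1), 2.0)
w = weight_norm(v, g, dim=0)
per_filter = np.sqrt((w ** 2).sum(axis=(1, 2, 3)))  # each entry equals g
```

With `dim=0`, every output filter `w[i]` ends up with L2 norm exactly `g[i]`, so the magnitude of each filter is controlled by `g` alone while `v` only determines its direction.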