Unverified · Commit 93b90e7e authored by: J juncaipeng, committed by: GitHub

Modify L1DecayRegularizer, L1Decay, L2DecayRegularizer and L2Decay (#1217)

* modify L1DecayRegularizer, L1Decay, L2DecayRegularizer and L2Decay, test=develop

* add black lines, test=develop
Parent ff209496
@@ -5,22 +5,22 @@ L1DecayRegularizer

.. py:class:: paddle.fluid.regularizer.L1DecayRegularizer(regularization_coeff=0.0)

L1DecayRegularizer implements L1 weight decay regularization. It is used during model training and makes the weight matrix sparse.

Concretely, the L1 weight decay regularization term is computed as:

.. math::

    \\L1WeightDecay = reg\_coeff * sign(parameter)\\

Parameters:
    - **regularization_coeff** (float) – The L1 regularization coefficient. Default: 0.0.

**Code example**

.. code-block:: python

    import paddle.fluid as fluid

    main_prog = fluid.Program()
    startup_prog = fluid.Program()
    with fluid.program_guard(main_prog, startup_prog):
@@ -35,11 +35,3 @@ L1 regularization will sparsify the weight matrix.
            regularization=fluid.regularizer.L1DecayRegularizer(
                regularization_coeff=0.1))
        optimizer.minimize(avg_loss)
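The L1 decay formula above can be sketched in a few lines of plain Python. This is only an illustration of the math, not Paddle's actual implementation, and `l1_weight_decay` is a hypothetical helper name:

```python
import math

# Illustrative sketch of the formula above (NOT Paddle's implementation):
# L1WeightDecay = reg_coeff * sign(parameter), applied elementwise.
def l1_weight_decay(parameter, reg_coeff=0.0):
    # sign(0) is taken as 0, so zero weights receive no decay.
    return [0.0 if p == 0 else reg_coeff * math.copysign(1.0, p)
            for p in parameter]

print(l1_weight_decay([-0.5, 0.0, 2.0], reg_coeff=0.1))  # [-0.1, 0.0, 0.1]
```

Because the decay term depends only on the sign of each weight, every nonzero weight is pushed toward zero at the same constant rate, which is why L1 decay tends to drive small weights exactly to zero and produce sparse weight matrices.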
.. _cn_api_fluid_regularizer_L1Decay:

L1Decay
-------------------------------

.. py:attribute:: paddle.fluid.regularizer.L1Decay(regularization_coeff=0.0)

``L1Decay`` is an alias of ``L1DecayRegularizer``.

L1Decay implements L1 weight decay regularization. It is used during model training and makes the weight matrix sparse.

Concretely, the L1 weight decay regularization term is computed as:

.. math::

    \\L1WeightDecay = reg\_coeff * sign(parameter)\\

Parameters:
    - **regularization_coeff** (float) – The L1 regularization coefficient. Default: 0.0.

**Code example**

.. code-block:: python

    import paddle.fluid as fluid

    main_prog = fluid.Program()
    startup_prog = fluid.Program()
    with fluid.program_guard(main_prog, startup_prog):
        data = fluid.layers.data(name='image', shape=[3, 28, 28], dtype='float32')
        label = fluid.layers.data(name='label', shape=[1], dtype='int64')
        hidden = fluid.layers.fc(input=data, size=128, act='relu')
        prediction = fluid.layers.fc(input=hidden, size=10, act='softmax')
        loss = fluid.layers.cross_entropy(input=prediction, label=label)
        avg_loss = fluid.layers.mean(loss)

        optimizer = fluid.optimizer.Adagrad(
            learning_rate=1e-4,
            regularization=fluid.regularizer.L1Decay(
                regularization_coeff=0.1))
        optimizer.minimize(avg_loss)
@@ -5,21 +5,22 @@ L2DecayRegularizer

.. py:class:: paddle.fluid.regularizer.L2DecayRegularizer(regularization_coeff=0.0)

L2DecayRegularizer implements L2 weight decay regularization. It is used during model training and helps prevent the model from overfitting the training data.

Concretely, the L2 weight decay regularization term is computed as:

.. math::

    \\L2WeightDecay = reg\_coeff * parameter\\

Parameters:
    - **regularization_coeff** (float) – The regularization coefficient. Default: 0.0.

**Code example**

.. code-block:: python

    import paddle.fluid as fluid

    main_prog = fluid.Program()
    startup_prog = fluid.Program()
    with fluid.program_guard(main_prog, startup_prog):
@@ -34,10 +35,3 @@ L2DecayRegularizer
            regularization=fluid.regularizer.L2DecayRegularizer(
                regularization_coeff=0.1))
        optimizer.minimize(avg_loss)
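The L2 decay formula above can likewise be sketched in plain Python. Again, this is only an illustration of the math, not Paddle's actual implementation, and `l2_weight_decay` is a hypothetical helper name:

```python
# Illustrative sketch of the formula above (NOT Paddle's implementation):
# L2WeightDecay = reg_coeff * parameter, applied elementwise.
def l2_weight_decay(parameter, reg_coeff=0.0):
    return [reg_coeff * p for p in parameter]

# Unlike L1 decay, the term is proportional to the weight itself: large
# weights shrink fastest, and all weights are pulled smoothly toward zero
# without being driven exactly to zero (no sparsity).
print(l2_weight_decay([-0.5, 0.0, 2.0], reg_coeff=0.1))
```

This proportional shrinkage is why L2 decay discourages large individual weights and thereby reduces overfitting, while L1 decay instead encourages sparsity.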
@@ -5,9 +5,38 @@ L2Decay

.. py:attribute:: paddle.fluid.regularizer.L2Decay

``L2Decay`` is an alias of ``L2DecayRegularizer``.

L2Decay implements L2 weight decay regularization. It is used during model training and helps prevent the model from overfitting the training data.

Concretely, the L2 weight decay regularization term is computed as:

.. math::

    \\L2WeightDecay = reg\_coeff * parameter\\

Parameters:
    - **regularization_coeff** (float) – The regularization coefficient. Default: 0.0.

**Code example**

.. code-block:: python

    import paddle.fluid as fluid

    main_prog = fluid.Program()
    startup_prog = fluid.Program()
    with fluid.program_guard(main_prog, startup_prog):
        data = fluid.layers.data(name='image', shape=[3, 28, 28], dtype='float32')
        label = fluid.layers.data(name='label', shape=[1], dtype='int64')
        hidden = fluid.layers.fc(input=data, size=128, act='relu')
        prediction = fluid.layers.fc(input=hidden, size=10, act='softmax')
        loss = fluid.layers.cross_entropy(input=prediction, label=label)
        avg_loss = fluid.layers.mean(loss)

        optimizer = fluid.optimizer.Adagrad(
            learning_rate=1e-4,
            regularization=fluid.regularizer.L2Decay(
                regularization_coeff=0.1))
        optimizer.minimize(avg_loss)