Unverified commit ce7da936, authored by zhang wenhui, committed by GitHub

Fix norm cn (#2532)

* fix AdamOptimizer, test=develop, test=document_fix

* fix AdamOptimizer, test=develop, test=document_fix

* fix AdamOptimizer, test=develop, test=document_fix

* fix AdamOptimizer, test=develop, test=document_fix

* fix python code

* fix norm bug cn doc, test=develop

* fix norm bug cn doc, test=develop
Parent e9280064
@@ -70,5 +70,5 @@ BatchNorm1d
 batch_norm = paddle.nn.BatchNorm1d(1)
 batch_norm_out = batch_norm(x)
-print(batch_norm_out.numpy)
+print(batch_norm_out.numpy())
@@ -70,5 +70,5 @@ BatchNorm2d
 batch_norm = paddle.nn.BatchNorm2d(1)
 batch_norm_out = batch_norm(x)
-print(batch_norm_out.numpy)
+print(batch_norm_out.numpy())
@@ -70,5 +70,5 @@ BatchNorm3d
 batch_norm = paddle.nn.BatchNorm3d(1)
 batch_norm_out = batch_norm(x)
-print(batch_norm_out.numpy)
+print(batch_norm_out.numpy())
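Every change in the three BatchNorm hunks above swaps print(batch_norm_out.numpy) for print(batch_norm_out.numpy()): without the parentheses Python prints the bound method object instead of the normalized values. A minimal, self-contained sketch of the corrected BatchNorm1d example, assuming the 2.0-era dygraph API names used in this doc (paddle.to_tensor, paddle.nn.BatchNorm1d) and an illustrative input shape:

    import numpy as np
    import paddle

    np.random.seed(123)
    # BatchNorm1d expects input of shape (batch, num_features) or (batch, num_features, length).
    x_data = np.random.random(size=(2, 1, 3)).astype('float32')
    x = paddle.to_tensor(x_data)

    batch_norm = paddle.nn.BatchNorm1d(1)  # num_features matches the channel dimension of x
    batch_norm_out = batch_norm(x)

    # .numpy() with parentheses returns the ndarray holding the normalized values.
    print(batch_norm_out.numpy())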
@@ -3,15 +3,15 @@
 GroupNorm
 -------------------------------
-.. py:class:: paddle.nn.GroupNorm(num_channels, num_groups, epsilon=1e-05, weight_attr=None, bias_attr=None, data_layout='NCHW', name=None)
+.. py:class:: paddle.nn.GroupNorm(num_groups, num_channels, epsilon=1e-05, weight_attr=None, bias_attr=None, data_layout='NCHW', name=None)
 **Group Normalization Layer**
 This interface is used to build a callable object of the ``GroupNorm`` class; see the ``code example`` below for usage. It implements the Group Normalization layer. For more details, please refer to `Group Normalization <https://arxiv.org/abs/1803.08494>`_ .
 Parameters:
-    - **num_channels** (int) - The number of channels of the input.
     - **num_groups** (int) - The number of ``group`` s that the channels are divided into.
+    - **num_channels** (int) - The number of channels of the input.
     - **epsilon** (float, optional) - A small value added to the variance to prevent division by zero. Default: 1e-05.
     - **weight_attr** (ParamAttr|bool, optional) - The parameter attribute for the learnable scale. If set to False, the parameter is not learned. Default: None, meaning the default weight parameter attribute is used. See :ref:`cn_api_fluid_ParamAttr` for details.
     - **bias_attr** (ParamAttr|bool, optional) - The parameter attribute for the learnable bias. If set to False, the parameter is not learned. Default: None, meaning the default bias parameter attribute is used. See :ref:`cn_api_fluid_ParamAttr` for details.
@@ -35,7 +35,7 @@ GroupNorm
 np.random.seed(123)
 x_data = np.random.random(size=(2, 6, 2, 2)).astype('float32')
 x = paddle.to_tensor(x_data)
-group_norm = paddle.nn.GroupNorm(num_channels=3, num_groups=6)
+group_norm = paddle.nn.GroupNorm(num_channels=6, num_groups=6)
 group_norm_out = group_norm(x)
-print(group_norm_out.numpy)
+print(group_norm_out.numpy())
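The corrected example passes num_channels=6 because the input built above has six channels; num_channels must equal the channel dimension of the input, and num_groups must divide it evenly. A minimal sketch of the fixed example, assuming the constructor accepts these keyword arguments in line with the new signature documented above:

    import numpy as np
    import paddle

    np.random.seed(123)
    # Six-channel NCHW input; num_channels below must match this dimension.
    x_data = np.random.random(size=(2, 6, 2, 2)).astype('float32')
    x = paddle.to_tensor(x_data)

    # num_groups must evenly divide num_channels (here: 6 groups of 1 channel each).
    group_norm = paddle.nn.GroupNorm(num_groups=6, num_channels=6)
    group_norm_out = group_norm(x)
    print(group_norm_out.numpy())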
@@ -56,5 +56,5 @@ Note:
 instance_norm = paddle.nn.InstanceNorm1d(2)
 instance_norm_out = instance_norm(x)
-print(instance_norm_out.numpy)
+print(instance_norm_out.numpy())
@@ -55,6 +55,6 @@ Note:
 instance_norm = paddle.nn.InstanceNorm2d(2)
 instance_norm_out = instance_norm(x)
-print(instance_norm_out.numpy)
+print(instance_norm_out.numpy())
@@ -54,5 +54,5 @@ Note:
 instance_norm = paddle.nn.InstanceNorm3d(2)
 instance_norm_out = instance_norm(x)
-print(instance_norm_out.numpy)
+print(instance_norm_out.numpy())
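The three InstanceNorm hunks apply the same .numpy() fix. A minimal sketch of the InstanceNorm2d variant, assuming an NCHW input whose channel count matches the num_features argument (the input shape is illustrative):

    import numpy as np
    import paddle

    np.random.seed(123)
    # Shape (batch, channels, H, W); InstanceNorm2d normalizes each (H, W) plane per sample and per channel.
    x_data = np.random.random(size=(2, 2, 2, 3)).astype('float32')
    x = paddle.to_tensor(x_data)

    instance_norm = paddle.nn.InstanceNorm2d(2)  # num_features = number of channels
    instance_norm_out = instance_norm(x)
    print(instance_norm_out.numpy())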
@@ -50,5 +50,5 @@ LayerNorm
 layer_norm = paddle.nn.LayerNorm(x_data.shape[1:])
 layer_norm_out = layer_norm(x)
-print(layer_norm_out.numpy)
+print(layer_norm_out.numpy())
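LayerNorm takes the shape to normalize over rather than a channel count; the example passes x_data.shape[1:] so every non-batch dimension is normalized. A self-contained sketch of the corrected snippet, with an assumed illustrative input shape:

    import numpy as np
    import paddle

    np.random.seed(123)
    x_data = np.random.random(size=(2, 2, 2, 3)).astype('float32')
    x = paddle.to_tensor(x_data)

    # normalized_shape = (2, 2, 3): every dimension after the batch dimension.
    layer_norm = paddle.nn.LayerNorm(x_data.shape[1:])
    layer_norm_out = layer_norm(x)
    print(layer_norm_out.numpy())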
@@ -44,4 +44,4 @@ batch_norm
 w = paddle.to_tensor(weight_data)
 b = paddle.to_tensor(bias_data)
 batch_norm_out = paddle.nn.functional.batch_norm(x, rm, rv, w, b)
-print(batch_norm_out)
+print(batch_norm_out.numpy())
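The functional form takes the running statistics and affine parameters explicitly, so the full example needs a per-channel tensor for each of them; the fix also prints .numpy() so the values are shown rather than the Tensor repr. A minimal sketch, assuming one input channel and the positional order (x, running_mean, running_var, weight, bias) shown in the diff:

    import numpy as np
    import paddle

    np.random.seed(123)
    x_data = np.random.random(size=(2, 1, 2, 3)).astype('float32')
    # One value per channel for the running statistics and the affine parameters.
    running_mean = np.random.random(size=1).astype('float32')
    running_variance = np.random.random(size=1).astype('float32')
    weight_data = np.random.random(size=1).astype('float32')
    bias_data = np.random.random(size=1).astype('float32')

    x = paddle.to_tensor(x_data)
    rm = paddle.to_tensor(running_mean)
    rv = paddle.to_tensor(running_variance)
    w = paddle.to_tensor(weight_data)
    b = paddle.to_tensor(bias_data)

    batch_norm_out = paddle.nn.functional.batch_norm(x, rm, rv, w, b)
    print(batch_norm_out.numpy())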
@@ -44,4 +44,4 @@ instance_norm
 w = paddle.to_tensor(weight_data)
 b = paddle.to_tensor(bias_data)
 instance_norm_out = paddle.nn.functional.instance_norm(x, rm, rv, w, b)
-print(instance_norm_out)
+print(instance_norm_out.numpy())
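paddle.nn.functional.instance_norm follows the same pattern; a brief sketch assuming the same positional order (x, running_mean, running_var, weight, bias) used in the doc example and a two-channel input:

    import numpy as np
    import paddle

    np.random.seed(123)
    x_data = np.random.random(size=(2, 2, 2, 3)).astype('float32')
    x = paddle.to_tensor(x_data)

    # Per-channel tensors; the channel dimension is 2 here.
    rm = paddle.to_tensor(np.random.random(size=2).astype('float32'))
    rv = paddle.to_tensor(np.random.random(size=2).astype('float32'))
    w = paddle.to_tensor(np.random.random(size=2).astype('float32'))
    b = paddle.to_tensor(np.random.random(size=2).astype('float32'))

    instance_norm_out = paddle.nn.functional.instance_norm(x, rm, rv, w, b)
    print(instance_norm_out.numpy())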
@@ -33,5 +33,5 @@ layer_norm
 x = paddle.to_tensor(x_data)
 layer_norm_out = paddle.nn.functional.layer_norm(x, x.shape[1:])
-print(layer_norm_out.numpy)
+print(layer_norm_out.numpy())
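Finally, a minimal sketch of the corrected functional.layer_norm example, assuming the call signature layer_norm(x, normalized_shape) shown in the diff and an illustrative input shape:

    import numpy as np
    import paddle

    np.random.seed(123)
    x_data = np.random.random(size=(2, 2, 2, 3)).astype('float32')
    x = paddle.to_tensor(x_data)

    # Normalize over every dimension after the batch dimension.
    layer_norm_out = paddle.nn.functional.layer_norm(x, x.shape[1:])
    print(layer_norm_out.numpy())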