BaiXuePrincess / Paddle (forked from PaddlePaddle / Paddle)

Commit 56890dc7 (unverified), authored Aug 19, 2020 by ceci3, committed by GitHub on Aug 19, 2020.
Add SyncBatchNorm (#26032)
* add SyncBatchNorm,test=develop
Parent: f3ea6156
Showing 10 changed files with 409 additions and 2 deletions (+409 −2).
paddle/fluid/pybind/op_function_generator.cc                                   +4    -0
python/paddle/fluid/dygraph/nn.py                                              +215  -1
python/paddle/fluid/tests/unittests/CMakeLists.txt                             +1    -0
python/paddle/fluid/tests/unittests/parallel_dygraph_sync_batch_norm.py        +108  -0
python/paddle/fluid/tests/unittests/test_layers.py                             +18   -0
python/paddle/fluid/tests/unittests/test_parallel_dygraph_sync_batch_norm.py   +40   -0
python/paddle/fluid/tests/unittests/test_sync_batch_norm_op.py                 +18   -0
python/paddle/nn/__init__.py                                                   +1    -0
python/paddle/nn/layer/__init__.py                                             +1    -0
python/paddle/nn/layer/norm.py                                                 +3    -1
paddle/fluid/pybind/op_function_generator.cc

@@ -57,6 +57,9 @@ std::map<std::string, std::set<std::string>> op_outs_map = {
     {"batch_norm",
      {"Y", "MeanOut", "VarianceOut", "SavedMean", "SavedVariance",
       "ReserveSpace"}},
+    {"sync_batch_norm",
+     {"Y", "MeanOut", "VarianceOut", "SavedMean", "SavedVariance",
+      "ReserveSpace"}},
 };
 // NOTE(zhiqiu): Commonly, the outputs in auto-generated OP function are
@@ -76,6 +79,7 @@ std::map<std::string, std::set<std::string>> op_passing_outs_map = {
      {"ParamOut", "Moment1Out", "Moment2Out", "Beta1PowOut", "Beta2PowOut"}},
     {"momentum", {"ParamOut", "VelocityOut"}},
     {"batch_norm", {"MeanOut", "VarianceOut"}},
+    {"sync_batch_norm", {"MeanOut", "VarianceOut"}},
     {"accuracy", {"Correct", "Total"}},
     {"fill_constant", {"Out"}},
     {"matmul", {"Out"}},
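For orientation: adding `sync_batch_norm` to `op_outs_map` makes the auto-generated imperative function return all six op outputs, and adding it to `op_passing_outs_map` makes `MeanOut`/`VarianceOut` caller-provided variables that the op updates in place. A minimal sketch of the resulting dygraph call, mirroring the `forward` implementation further below (tensor names here are illustrative, not part of the commit):

# Sketch of the generated imperative call (x, scale, bias, mean, variance
# are illustrative dygraph tensors). mean/variance are passed again as
# MeanOut/VarianceOut, so the running statistics are updated in place
# instead of being returned as fresh tensors.
y, _, _, _, _, _ = core.ops.sync_batch_norm(
    x, scale, bias, mean, variance,  # X, Scale, Bias, Mean, Variance
    mean, variance,                  # MeanOut, VarianceOut (in-place)
    "momentum", 0.9, "epsilon", 1e-5, "is_test", False,
    "data_layout", "NCHW", "use_mkldnn", False, "fuse_with_relu", False,
    "use_global_stats", False, "trainable_statistics", False)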
python/paddle/fluid/dygraph/nn.py

@@ -35,7 +35,7 @@ __all__ = [
     'Conv2D', 'Conv3D', 'Pool2D', 'Linear', 'BatchNorm', 'Dropout', 'Embedding',
     'GRUUnit', 'InstanceNorm', 'LayerNorm', 'NCE', 'PRelu',
     'BilinearTensorProduct', 'Conv2DTranspose', 'Conv3DTranspose', 'GroupNorm',
-    'SpectralNorm', 'TreeConv', 'Flatten'
+    'SpectralNorm', 'TreeConv', 'Flatten', 'SyncBatchNorm'
 ]
@@ -3202,6 +3202,220 @@ class TreeConv(layers.Layer):
        return self._helper.append_activation(pre_activation, act=self._act)
class SyncBatchNorm(layers.Layer):
    r"""
    This interface is used to construct a callable object of the ``SyncBatchNorm`` class.
    It implements the function of the Cross-GPU Synchronized Batch Normalization Layer, and can
    be used as a normalizer function for other operations, such as conv2d and fully connected
    operations.
    The data is normalized by the mean and variance of the channel based on the whole mini-batch,
    which includes the data on all GPUs.
    Refer to `Batch Normalization: Accelerating Deep Network Training by Reducing
    Internal Covariate Shift <https://arxiv.org/pdf/1502.03167.pdf>`_
    for more details.

    When the model is in training mode, :math:`\mu_{\beta}` and :math:`\sigma_{\beta}^{2}`
    are the statistics of the whole mini-batch data on all GPUs, calculated as follows:

    .. math::

        \mu_{\beta} &\gets \frac{1}{m} \sum_{i=1}^{m} x_i
            \qquad &// \text{mini-batch mean} \\
        \sigma_{\beta}^{2} &\gets \frac{1}{m} \sum_{i=1}^{m}(x_i - \mu_{\beta})^2
            \qquad &// \text{mini-batch variance} \\

    - :math:`x` : whole mini-batch data on all GPUs
    - :math:`m` : the size of the whole mini-batch data

    When the model is in evaluation mode, :math:`\mu_{\beta}` and :math:`\sigma_{\beta}^{2}`
    are global statistics (moving_mean and moving_variance, usually obtained from a
    pre-trained model). The global statistics are calculated as follows:

    .. math::

        moving\_mean &= moving\_mean * momentum + \mu_{\beta} * (1. - momentum)
            \qquad &// \text{global mean} \\
        moving\_variance &= moving\_variance * momentum + \sigma_{\beta}^{2} * (1. - momentum)
            \qquad &// \text{global variance} \\

    The formula of normalization is as follows:

    .. math::

        \hat{x_i} &\gets \frac{x_i - \mu_\beta}{\sqrt{\sigma_{\beta}^{2} + \epsilon}}
            \qquad &// \text{normalize} \\
        y_i &\gets \gamma \hat{x_i} + \beta
            \qquad &// \text{scale and shift}

    - :math:`\epsilon` : a small value added to the variance to prevent division by zero
    - :math:`\gamma` : trainable scale parameter vector
    - :math:`\beta` : trainable shift parameter vector

    Parameters:
        num_features(int): Indicates the number of channels of the input ``Tensor``.
        epsilon(float, optional): The small value added to the variance to prevent division by zero. Default: 1e-5.
        momentum(float, optional): The value used for the moving_mean and moving_var computation. Default: 0.9.
        weight_attr(ParamAttr|bool, optional): The parameter attribute for the Parameter `scale`
            of this layer. If it is set to None or one attribute of ParamAttr, this layer
            will create ParamAttr as param_attr. If the Initializer of the param_attr
            is not set, the parameter is initialized with Xavier. If it is set to False,
            this layer will not have a trainable scale parameter. Default: None.
        bias_attr(ParamAttr|bool, optional): The parameter attribute for the bias of this layer.
            If it is set to None or one attribute of ParamAttr, this layer
            will create ParamAttr as bias_attr. If the Initializer of the bias_attr
            is not set, the bias is initialized to zero. If it is set to False, this layer will not
            have a trainable bias parameter. Default: None.
        track_running_stats(bool, optional): Whether to compute global stats, which include the running mean and
            the running variance. Default: True.

    Returns:
        None

    Examples:
        .. code-block:: python

            import paddle
            import paddle.nn as nn
            import numpy as np

            x = np.array([[[[0.3, 0.4], [0.3, 0.07]], [[0.83, 0.37], [0.18, 0.93]]]]).astype('float32')
            paddle.disable_static()
            x = paddle.to_tensor(x)
            if paddle.fluid.is_compiled_with_cuda():
                sync_batch_norm = nn.SyncBatchNorm(2)
                hidden1 = sync_batch_norm(x)
                print(hidden1.numpy())
                # [[[[ 0.26824948,  1.0936325 ], [ 0.26824948, -1.6301316 ]],
                #   [[ 0.8095662 , -0.665287  ], [-1.2744656 ,  1.1301866 ]]]]
    """
    def __init__(self,
                 num_features,
                 epsilon=1e-05,
                 momentum=0.9,
                 track_running_stats=True,
                 weight_attr=None,
                 bias_attr=None,
                 data_format='NCHW',
                 name=None):
        super(SyncBatchNorm, self).__init__()
        self._weight_attr = weight_attr
        self._bias_attr = bias_attr
        self._num_features = num_features
        self._data_layout = data_format
        self._momentum = momentum
        self._epsilon = epsilon
        self._track_running_stats = track_running_stats

        if self._track_running_stats == False:
            logging.warn(
                "moving mean and moving variance will be calculated whether `track_running_stats` is set to `True` or `False`, we will fix it in the next version."
            )

        param_shape = [self._num_features]

        # create parameter
        if weight_attr == False:
            self.weight = self.create_parameter(
                attr=None, shape=param_shape, default_initializer=Constant(1.0))
            self.weight.stop_gradient = True
        else:
            self.weight = self.create_parameter(
                attr=self._weight_attr,
                shape=param_shape,
                default_initializer=Constant(1.0))
            self.weight.stop_gradient = self._weight_attr != None and self._weight_attr.learning_rate == 0.

        if bias_attr == False:
            self.bias = self.create_parameter(
                attr=None,
                shape=param_shape,
                default_initializer=Constant(0.0),
                is_bias=True)
            self.bias.stop_gradient = True
        else:
            self.bias = self.create_parameter(
                attr=self._bias_attr, shape=param_shape, is_bias=True)
            self.bias.stop_gradient = self._bias_attr != None and self._bias_attr.learning_rate == 0.

        # moving mean and variance: updated by the op itself, never trained
        self._mean = self.create_parameter(
            attr=ParamAttr(
                name=None,
                initializer=Constant(0.0),
                trainable=False,
                do_model_average=True),
            shape=param_shape,
            dtype=self._dtype)
        self._mean.stop_gradient = True

        self._variance = self.create_parameter(
            attr=ParamAttr(
                name=None,
                initializer=Constant(1.0),
                trainable=False,
                do_model_average=True),
            shape=param_shape,
            dtype=self._dtype)
        self._variance.stop_gradient = True
    def forward(self, x):
        # create output
        # mean and mean_out share the same memory
        mean_out = self._mean
        # variance and variance_out share the same memory
        variance_out = self._variance

        # train mode: use mini-batch stats; eval mode: use global stats
        if in_dygraph_mode():
            attrs = ("momentum", self._momentum, "epsilon", self._epsilon,
                     "is_test", not self.training, "data_layout",
                     self._data_layout, "use_mkldnn", False, "fuse_with_relu",
                     False, "use_global_stats", not self.training,
                     'trainable_statistics', False)
            sync_batch_norm_out, _, _, _, _, _ = core.ops.sync_batch_norm(
                x, self.weight, self.bias, self._mean, self._variance, mean_out,
                variance_out, *attrs)

            return sync_batch_norm_out

        check_variable_and_dtype(x, 'input', ['float16', 'float32', 'float64'],
                                 'BatchNorm')

        attrs = {
            "momentum": self._momentum,
            "epsilon": self._epsilon,
            "is_test": not self.training,
            "data_layout": self._data_layout,
            "use_mkldnn": False,
            "fuse_with_relu": False,
            "use_global_stats": not self.training,
            "trainable_statistics": False,
        }

        inputs = {
            "X": [x],
            "Scale": [self.weight],
            "Bias": [self.bias],
            "Mean": [self._mean],
            "Variance": [self._variance]
        }

        saved_mean = self._helper.create_variable_for_type_inference(
            dtype=self._dtype, stop_gradient=True)
        saved_variance = self._helper.create_variable_for_type_inference(
            dtype=self._dtype, stop_gradient=True)
        sync_batch_norm_out = self._helper.create_variable_for_type_inference(
            self._dtype)

        outputs = {
            "Y": [sync_batch_norm_out],
            "MeanOut": [mean_out],
            "VarianceOut": [variance_out],
            "SavedMean": [saved_mean],
            "SavedVariance": [saved_variance]
        }

        self._helper.append_op(
            type="sync_batch_norm", inputs=inputs, outputs=outputs, attrs=attrs)
        return sync_batch_norm_out
class Flatten(layers.Layer):
    """
    :alias_main: paddle.nn.Flatten
...
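As a quick sanity check on the docstring example above: on a single device, SyncBatchNorm reduces to ordinary batch normalization, so the printed values can be reproduced with plain numpy (per-channel biased mean/variance over the whole batch, epsilon = 1e-5). This standalone sketch is not part of the commit:

import numpy as np

x = np.array([[[[0.3, 0.4], [0.3, 0.07]],
               [[0.83, 0.37], [0.18, 0.93]]]]).astype('float32')

# Normalize each channel with statistics taken over the (N, H, W) axes.
mean = x.mean(axis=(0, 2, 3), keepdims=True)
var = x.var(axis=(0, 2, 3), keepdims=True)
y = (x - mean) / np.sqrt(var + 1e-5)

print(y)  # matches the docstring output to float32 precision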
python/paddle/fluid/tests/unittests/CMakeLists.txt

@@ -106,6 +106,7 @@ if (NOT ${WITH_GPU})
     list(REMOVE_ITEM TEST_OPS test_parallel_dygraph_se_resnext)
     LIST(REMOVE_ITEM TEST_OPS test_parallel_dygraph_sparse_embedding)
     LIST(REMOVE_ITEM TEST_OPS test_parallel_dygraph_transformer)
+    LIST(REMOVE_ITEM TEST_OPS test_parallel_dygraph_sync_batch_norm)
     LIST(REMOVE_ITEM TEST_OPS test_imperative_auto_mixed_precision)
 elseif(${CUDNN_VERSION} VERSION_LESS 7100)
     LIST(REMOVE_ITEM TEST_OPS test_conv2d_fusion_op)
python/paddle/fluid/tests/unittests/parallel_dygraph_sync_batch_norm.py (new file, mode 100644)

# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import print_function

import os
import contextlib
import unittest
import numpy as np
import six
import pickle

import paddle
import paddle.fluid as fluid
import paddle.fluid.dygraph as dygraph
from paddle.fluid import core
from paddle.fluid.optimizer import SGDOptimizer
from paddle.nn import Conv2D, Pool2D, Linear, SyncBatchNorm
from paddle.fluid.dygraph.base import to_variable

from test_dist_base import runtime_main, TestParallelDyGraphRunnerBase


class TestLayer(fluid.dygraph.Layer):
    def __init__(self,
                 num_channels,
                 num_filters,
                 filter_size,
                 stride=1,
                 groups=1,
                 act=None):
        super(TestLayer, self).__init__()

        self._conv = Conv2D(
            num_channels=num_channels,
            num_filters=num_filters,
            filter_size=filter_size,
            stride=stride,
            padding=(filter_size - 1) // 2,
            groups=groups,
            act=None,
            bias_attr=False)

        self._sync_batch_norm = SyncBatchNorm(num_filters)

        self._conv2 = Conv2D(
            num_channels=num_filters,
            num_filters=num_filters,
            filter_size=filter_size,
            stride=stride,
            padding=(filter_size - 1) // 2,
            groups=groups,
            act=None,
            bias_attr=False)

        self._sync_batch_norm2 = SyncBatchNorm(
            num_filters,
            weight_attr=False,
            bias_attr=False,
            track_running_stats=False)

    def forward(self, inputs):
        y = self._conv(inputs)
        y = self._sync_batch_norm(y)
        y = self._conv2(y)
        y = self._sync_batch_norm2(y)
        return y


class TestSyncBatchNorm(TestParallelDyGraphRunnerBase):
    def get_model(self):
        model = TestLayer(3, 64, 7)
        train_reader = paddle.batch(
            paddle.dataset.flowers.test(use_xmap=False),
            batch_size=32,
            drop_last=True)
        opt = fluid.optimizer.Adam(
            learning_rate=1e-3, parameter_list=model.parameters())
        return model, train_reader, opt

    def run_one_loop(self, model, opt, data):
        batch_size = len(data)
        dy_x_data = np.array(
            [x[0].reshape(3, 224, 224) for x in data]).astype('float32')
        img = to_variable(dy_x_data)
        img.stop_gradient = False

        out = model(img)
        out = fluid.layers.mean(out)
        return out


if __name__ == "__main__":
    runtime_main(TestSyncBatchNorm)
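Outside the test_dist_base harness, a model like `TestLayer` above would typically be exercised in parallel dygraph mode with the fluid-era parallel API. The sketch below follows the standard `prepare_context`/`DataParallel` pattern documented for that release; the loop body and tensor shapes are illustrative, not part of the commit:

import numpy as np
import paddle.fluid as fluid

# Usually launched with one process per GPU, e.g.:
#   python -m paddle.distributed.launch --selected_gpus=0,1 this_script.py
place = fluid.CUDAPlace(fluid.dygraph.parallel.Env().dev_id)
with fluid.dygraph.guard(place):
    strategy = fluid.dygraph.parallel.prepare_context()  # set up NCCL
    model = fluid.dygraph.parallel.DataParallel(TestLayer(3, 64, 7), strategy)
    opt = fluid.optimizer.Adam(
        learning_rate=1e-3, parameter_list=model.parameters())

    x = fluid.dygraph.to_variable(
        np.random.random([2, 3, 224, 224]).astype('float32'))
    loss = fluid.layers.mean(model(x))  # SyncBatchNorm syncs stats across GPUs
    loss = model.scale_loss(loss)       # scale loss for gradient averaging
    loss.backward()
    model.apply_collective_grads()      # all-reduce gradients
    opt.minimize(loss)
    model.clear_gradients()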
python/paddle/fluid/tests/unittests/test_layers.py

@@ -283,6 +283,24 @@ class TestLayer(LayerTest):
         with self.assertRaises(ValueError):
             lm(base.to_variable(inp))

+    def test_SyncBatchNorm(self):
+        if core.is_compiled_with_cuda():
+            with self.static_graph():
+                t = layers.data(name='t', shape=[-1, 3, 5, 5], dtype='float32')
+                my_sync_bn = nn.SyncBatchNorm(3)
+                ret = my_sync_bn(t)
+                static_ret = self.get_static_graph_result(
+                    feed={'t': np.ones(
+                        [3, 3, 5, 5], dtype='float32')},
+                    fetch_list=[ret])[0]
+
+            with self.dynamic_graph():
+                t = np.ones([3, 3, 5, 5], dtype='float32')
+                my_syncbn = paddle.nn.SyncBatchNorm(3)
+                dy_ret = my_syncbn(base.to_variable(t))
+                dy_ret_value = dy_ret.numpy()
+            self.assertTrue(np.array_equal(static_ret, dy_ret_value))
+
     def test_relu(self):
         with self.static_graph():
             t = layers.data(name='t', shape=[3, 3], dtype='float32')
...
python/paddle/fluid/tests/unittests/test_parallel_dygraph_sync_batch_norm.py (new file, mode 100644)

# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import print_function

import unittest
from test_dist_base import TestDistBase
import paddle.fluid as fluid
import os

flag_name = os.path.splitext(__file__)[0]


class TestParallelDygraphMnist(TestDistBase):
    def _setup_config(self):
        self._sync_mode = False
        self._nccl2_mode = True
        self._dygraph = False  #True

    def test_mnist(self):
        if fluid.core.is_compiled_with_cuda():
            self.check_with_place(
                "parallel_dygraph_sync_batch_norm.py",
                delta=1e-5,
                check_error_log=True,
                log_name=flag_name)


if __name__ == "__main__":
    unittest.main()
python/paddle/fluid/tests/unittests/test_sync_batch_norm_op.py

@@ -25,6 +25,7 @@ import six
 import paddle.fluid.core as core
 import paddle.fluid as fluid
 from paddle.fluid import compiler
+from paddle.fluid import Program, program_guard
 from op_test import OpTest, _set_use_system_allocator
@@ -202,5 +203,22 @@ class TestFP16SyncBatchNormOpTraining(TestSyncBatchNormOpTraining):
         self.atol = 1e-2

+class TestDygraphSyncBatchNormAPIError(unittest.TestCase):
+    def test_errors(self):
+        if not core.is_compiled_with_cuda():
+            return
+
+        with program_guard(Program(), Program()):
+            my_sync_batch_norm = fluid.dygraph.SyncBatchNorm(10)
+            # a LoDTensor is not a Variable, so calling the layer must raise TypeError
+            x1 = fluid.create_lod_tensor(
+                np.array([-1, 3, 5, 5]), [[1, 1, 1, 1]], fluid.CUDAPlace(0))
+            self.assertRaises(TypeError, my_sync_batch_norm, x1)
+
+            # the input dtype of SyncBatchNorm must be float16, float32 or float64;
+            # float16 can only be used on GPU places
+            x2 = fluid.layers.data(name='x2', shape=[3, 4, 5, 6], dtype="int32")
+            self.assertRaises(TypeError, my_sync_batch_norm, x2)
+
 if __name__ == '__main__':
     unittest.main()
python/paddle/nn/__init__.py

@@ -92,6 +92,7 @@ from .layer.loss import BCELoss #DEFINE_ALIAS
 from .layer.loss import KLDivLoss #DEFINE_ALIAS
 from .layer.loss import MarginRankingLoss #DEFINE_ALIAS
 from .layer.norm import BatchNorm #DEFINE_ALIAS
+from .layer.norm import SyncBatchNorm #DEFINE_ALIAS
 from .layer.norm import GroupNorm #DEFINE_ALIAS
 from .layer.norm import LayerNorm #DEFINE_ALIAS
 from .layer.norm import SpectralNorm #DEFINE_ALIAS
...
python/paddle/nn/layer/__init__.py

@@ -65,6 +65,7 @@ from .loss import BCELoss #DEFINE_ALIAS
 from .loss import KLDivLoss #DEFINE_ALIAS
 from .loss import MarginRankingLoss #DEFINE_ALIAS
 from .norm import BatchNorm #DEFINE_ALIAS
+from .norm import SyncBatchNorm #DEFINE_ALIAS
 from .norm import GroupNorm #DEFINE_ALIAS
 from .norm import LayerNorm #DEFINE_ALIAS
 from .norm import SpectralNorm #DEFINE_ALIAS
...
python/paddle/nn/layer/norm.py

@@ -20,7 +20,9 @@ from ...fluid.dygraph import BatchNorm #DEFINE_ALIAS
 from ...fluid.dygraph import GroupNorm #DEFINE_ALIAS
 from ...fluid.dygraph import LayerNorm #DEFINE_ALIAS
 from ...fluid.dygraph import SpectralNorm #DEFINE_ALIAS
+from ...fluid.dygraph import SyncBatchNorm #DEFINE_ALIAS

 __all__ = [
-    'BatchNorm', 'GroupNorm', 'LayerNorm', 'SpectralNorm', 'InstanceNorm'
+    'BatchNorm', 'GroupNorm', 'LayerNorm', 'SpectralNorm', 'InstanceNorm',
+    'SyncBatchNorm'
 ]
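Together, the last three changes expose the new layer through both the public `paddle.nn` namespace and the legacy `fluid.dygraph` one. A quick identity check, assuming a build of this commit:

import paddle
import paddle.fluid as fluid

# All of these names refer to the same class after this commit:
assert paddle.nn.SyncBatchNorm is fluid.dygraph.SyncBatchNorm
assert paddle.nn.layer.norm.SyncBatchNorm is fluid.dygraph.SyncBatchNorm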