<li><strong>name</strong> (<em>basestring</em>) – The name of this layer. It is optional.</li>
<li><strong>input</strong> (<em>paddle.v2.config_base.Layer</em>) – The input of this layer. A layer with linear activation is preferred,
because batch_normalization applies an activation of its own.</li>
<li><strong>batch_norm_type</strong> (<em>None | string</em><em>, </em><em>None</em><em> or </em><em>"batch_norm"</em><em> or </em><em>"cudnn_batch_norm"</em><em>
or </em><em>"mkldnn_batch_norm"</em>) – There are three implementations: batch_norm, mkldnn_batch_norm
and cudnn_batch_norm. batch_norm supports CPU, MKLDNN and GPU.
cudnn_batch_norm requires cuDNN version v4 or later (>=v4),
but it is faster and needs less memory than batch_norm.
mkldnn_batch_norm requires use_mkldnn to be enabled.
By default (None), the implementation is selected
automatically: cudnn_batch_norm for GPU, mkldnn_batch_norm
for MKLDNN and batch_norm for CPU. Otherwise, the batch
norm type is selected based on the specified value.
If you use cudnn_batch_norm, we suggest you use the
latest version, such as v5.1. See the usage sketch
after this list.</li>
<li><strong>act</strong> (<em>paddle.v2.activation.Base</em>) – Activation type. Relu is preferred, because batch
normalization normalizes the input to be near zero.</li>
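<p>A minimal usage sketch of these parameters, assuming the paddle.v2 layer API; the layer names and sizes below are illustrative, not part of this reference:</p>
<pre>
import paddle.v2 as paddle

# Hypothetical 784-dimensional input; the name and size are illustrative.
img = paddle.layer.data(
    name='image', type=paddle.data_type.dense_vector(784))

# Use linear activation on the layer feeding batch_norm, since
# batch_norm applies its own activation (act) afterwards.
hidden = paddle.layer.fc(
    input=img, size=128, act=paddle.activation.Linear())

# batch_norm_type=None (the default) picks the implementation
# automatically: cudnn_batch_norm on GPU, mkldnn_batch_norm when
# use_mkldnn is enabled, and batch_norm on CPU.
bn = paddle.layer.batch_norm(
    name='img_bn', input=hidden, act=paddle.activation.Relu())
</pre>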