Commit ecf559aa authored by Travis CI

Deploy to GitHub Pages: b68f2d20

Parent a4b6c6ae
......@@ -866,14 +866,17 @@ y_i &\gets \gamma \hat{x_i} + \beta \qquad &//\ scale\ and\ shift\end{sp
<li><strong>name</strong> (<em>basestring</em>) &#8211; The name of this layer. It is optional.</li>
<li><strong>input</strong> (<em>paddle.v2.config_base.Layer</em>) &#8211; batch normalization input. A linear activation is preferred,
because there is an activation inside batch_normalization.</li>
<li><strong>batch_norm_type</strong> (<em>None | string</em><em>, </em><em>None</em><em> or </em><em>&quot;batch_norm&quot;</em><em> or </em><em>&quot;cudnn_batch_norm&quot;</em>) &#8211; We have batch_norm and cudnn_batch_norm. batch_norm
supports both CPU and GPU. cudnn_batch_norm requires
cuDNN version greater than or equal to v4 (&gt;=v4). But
cudnn_batch_norm is faster and needs less memory
than batch_norm. By default (None), we will
automatically select cudnn_batch_norm for GPU and
batch_norm for CPU. Otherwise, the batch norm type
is selected based on the specified type. If you use cudnn_batch_norm,
<li><strong>batch_norm_type</strong> (<em>None | string</em><em>, </em><em>None</em><em> or </em><em>&quot;batch_norm&quot;</em><em> or </em><em>&quot;cudnn_batch_norm&quot;</em><em>
or </em><em>&quot;mkldnn_batch_norm&quot;</em>) &#8211; We have batch_norm, mkldnn_batch_norm and cudnn_batch_norm.
batch_norm supports CPU, MKLDNN and GPU. cudnn_batch_norm
requires cuDNN version greater than or equal to v4 (&gt;=v4).
But cudnn_batch_norm is faster and needs less
memory than batch_norm. mkldnn_batch_norm requires
use_mkldnn to be enabled. By default (None), we will
automatically select cudnn_batch_norm for GPU,
mkldnn_batch_norm for MKLDNN and batch_norm for CPU.
Otherwise, the batch norm type is selected based on the
specified type. If you use cudnn_batch_norm,
we suggest you use the latest version, such as v5.1.</li>
<li><strong>act</strong> (<em>paddle.v2.activation.Base</em>) &#8211; Activation type. relu is preferred, because batch
normalization will normalize the input near zero.</li>
......
......@@ -880,14 +880,17 @@ y_i &\gets \gamma \hat{x_i} + \beta \qquad &//\ scale\ and\ shift\end{sp
<li><strong>name</strong> (<em>basestring</em>) &#8211; The name of this layer. It is optional.</li>
<li><strong>input</strong> (<em>paddle.v2.config_base.Layer</em>) &#8211; batch normalization input. A linear activation is preferred,
because there is an activation inside batch_normalization.</li>
<li><strong>batch_norm_type</strong> (<em>None | string</em><em>, </em><em>None</em><em> or </em><em>&quot;batch_norm&quot;</em><em> or </em><em>&quot;cudnn_batch_norm&quot;</em>) &#8211; We have batch_norm and cudnn_batch_norm. batch_norm
supports both CPU and GPU. cudnn_batch_norm requires
cuDNN version greater than or equal to v4 (&gt;=v4). But
cudnn_batch_norm is faster and needs less memory
than batch_norm. By default (None), we will
automatically select cudnn_batch_norm for GPU and
batch_norm for CPU. Otherwise, the batch norm type
is selected based on the specified type. If you use cudnn_batch_norm,
<li><strong>batch_norm_type</strong> (<em>None | string</em><em>, </em><em>None</em><em> or </em><em>&quot;batch_norm&quot;</em><em> or </em><em>&quot;cudnn_batch_norm&quot;</em><em>
or </em><em>&quot;mkldnn_batch_norm&quot;</em>) &#8211; We have batch_norm, mkldnn_batch_norm and cudnn_batch_norm.
batch_norm supports CPU, MKLDNN and GPU. cudnn_batch_norm
requires cuDNN version greater than or equal to v4 (&gt;=v4).
But cudnn_batch_norm is faster and needs less
memory than batch_norm. mkldnn_batch_norm requires
use_mkldnn to be enabled. By default (None), we will
automatically select cudnn_batch_norm for GPU,
mkldnn_batch_norm for MKLDNN and batch_norm for CPU.
Otherwise, the batch norm type is selected based on the
specified type. If you use cudnn_batch_norm,
we suggest you use the latest version, such as v5.1.</li>
<li><strong>act</strong> (<em>paddle.v2.activation.Base</em>) &#8211; Activation type. relu is preferred, because batch
normalization will normalize the input near zero.</li>
......
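As a usage illustration (not part of this commit), below is a minimal paddle.v2 sketch of the batch normalization layer whose documentation is changed in the diff above. Only the name, input, batch_norm_type and act parameters come from the documented signature; the surrounding data and fc layers are assumed purely for context.

```python
import paddle.v2 as paddle

paddle.init(use_gpu=False, trainer_count=1)

# Hypothetical input and hidden layer, just to give batch_norm something to consume.
img = paddle.layer.data(
    name='image', type=paddle.data_type.dense_vector(784))

# Keep this layer's activation linear: batch_norm applies its own activation.
hidden = paddle.layer.fc(
    input=img, size=256, act=paddle.activation.Linear())

# batch_norm_type=None lets Paddle choose automatically: cudnn_batch_norm on GPU,
# mkldnn_batch_norm when use_mkldnn is enabled, and batch_norm on CPU.
# Passing "batch_norm", "cudnn_batch_norm" or "mkldnn_batch_norm" forces the choice.
bn = paddle.layer.batch_norm(
    name='hidden_bn',
    input=hidden,
    batch_norm_type=None,
    act=paddle.activation.Relu())
```

Leaving batch_norm_type as None keeps the same network configuration usable across CPU, MKLDNN and GPU builds, which is what the default selection logic described above is for.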