Commit 0af1c4a9 authored by guosheng

Follow comments and refine annotations on ScaleShiftLayer

Parent 83abbce8
...@@ -17,15 +17,15 @@ limitations under the License. */
 namespace paddle {

 /**
- * A layer applies a slope and an intercept to the input element-wise for
- * scaling and shifting. Noting that this layer is trainable which differs
- * from the SlopeInterceptLayer.
+ * A layer applies a linear transformation to each element in each row of
+ * the input matrix. For each element, the layer first re-scales it and then
+ * adds a bias to it.
  *
  * \f[
  *    y = wx + b
  * \f]
  *
- * Here, w is scale and b is offset, which are scalars and trainable.
+ * Here, w is the scale and b is the bias. Both w and b are trainable scalars.
  *
  */
......
...@@ -6219,9 +6219,13 @@ def kmax_sequence_score_layer(input, name=None, beam_size=1):
 @wrap_bias_attr_default()
 def scale_shift_layer(input, name=None, param_attr=None, bias_attr=None):
     """
-    A layer applies a slope and an intercept to the input element-wise for
-    scaling and shifting. Noting that this layer is trainable which differs
-    from the slope_intercept_layer.
+    A layer applies a linear transformation to each element in each row of
+    the input matrix. For each element, the layer first re-scales it and then
+    adds a bias to it.
+
+    This layer is very similar to the SlopeInterceptLayer, except that the
+    scale and bias are trainable.

     .. math::

        y = w * x + b
......
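The forward computation described above can be sketched with NumPy. This is a minimal illustration of the formula, not the Paddle implementation: `scale_shift` is a hypothetical helper, and in the real layer `w` and `b` are learned parameters rather than fixed arguments.

```python
import numpy as np

def scale_shift(x, w, b):
    # Apply the scale w and bias b element-wise to each row of x:
    # y = w * x + b, with w and b scalars (trainable in the real layer).
    return w * x + b

x = np.array([[1.0, 2.0],
              [3.0, 4.0]])
y = scale_shift(x, w=0.5, b=1.0)
print(y)  # [[1.5 2. ]
          #  [2.5 3. ]]
```

Because `w` and `b` are scalars, broadcasting applies the same transformation to every element regardless of the input's shape, which is what distinguishes this layer from a full per-feature affine transform.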