Commit 0af1c4a9 authored by guosheng

Follow comments and refine annotations on ScaleShiftLayer

Parent 83abbce8
@@ -17,15 +17,15 @@ limitations under the License. */
 namespace paddle {
 /**
- * A layer applies a slope and an intercept to the input element-wise for
- * scaling and shifting. Noting that this layer is trainable which differs
- * from the SlopeInterceptLayer.
+ * A layer applies a linear transformation to each element in each row of
+ * the input matrix. For each element, the layer first re-scales it and then
+ * adds a bias to it.
  *
  * \f[
  *    y = wx + b
  * \f]
  *
- * Here, w is scale and b is offset, which are scalars and trainable.
+ * Here, w is the scale and b is the bias. Both w and b are trainable scalars.
  *
  */
@@ -6219,9 +6219,13 @@ def kmax_sequence_score_layer(input, name=None, beam_size=1):
 @wrap_bias_attr_default()
 def scale_shift_layer(input, name=None, param_attr=None, bias_attr=None):
     """
-    A layer applies a slope and an intercept to the input element-wise for
-    scaling and shifting. Noting that this layer is trainable which differs
-    from the slope_intercept_layer.
+    A layer applies a linear transformation to each element in each row of
+    the input matrix. For each element, the layer first re-scales it and then
+    adds a bias to it.
+    This layer is very like the SlopeInterceptLayer, except the scale and
+    bias are trainable.

     .. math::

         y = w * x + b
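
For reference, a minimal usage sketch of the layer documented above. This is an illustration rather than part of the commit; it assumes the legacy paddle.trainer_config_helpers API, and the data_layer name and size are placeholders.

.. code-block:: python

    from paddle.trainer_config_helpers import data_layer, scale_shift_layer

    data = data_layer(name='data', size=100)

    # y = w * x + b, where the scalar scale w and bias b are both trainable
    scale_shift = scale_shift_layer(input=data)

    # scale only: passing bias_attr=False omits the trainable bias term
    scale_only = scale_shift_layer(input=data, bias_attr=False)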