Commit 29f9935d authored by Megvii Engine Team

fix(imperative/python): add layer_norm doc and rm useless param

GitOrigin-RevId: 1b15db621ec12c60c6c59f3b3e983c9f81907079
Parent: 30095514
@@ -1084,12 +1084,18 @@ def layer_norm(
     weight: Optional[Tensor] = None,
     bias: Optional[Tensor] = None,
     eps: float = 1e-5,
-    eps_mode="additive",
 ):
-    assert eps_mode.lower() in {"max", "additive"}, "unknown eps_mode: {}".format(
-        eps_mode
-    )
+    r"""Applies layer normalization to the input. Supports tensors of any shape as input.
+    Reference: https://arxiv.org/pdf/1803.08494.pdf.
+    Args:
+        inp: input tensor.
+        normalized_shape: the shape over which the input is normalized.
+        affine: whether to use weight and bias.
+        weight: must not be None when affine is true.
+        bias: must not be None when affine is true.
+        eps: a value added to the denominator for numerical stability. Default: 1e-5
+    """
     if amp._enabled:
         inp, weight, bias = cast_tensors(inp, weight, bias, promote=True)
...
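The hunk above removes the unused eps_mode parameter and documents the remaining signature: layer_norm(inp, normalized_shape, affine, weight=None, bias=None, eps=1e-5). Below is a minimal usage sketch, assuming the function is reachable as megengine.functional.nn.layer_norm and that normalized_shape accepts a tuple; the tensor shapes are illustrative and not part of this commit.

import numpy as np
import megengine as mge
import megengine.functional as F

# Input of shape (batch, channels, features); normalize over the last axis.
x = mge.tensor(np.random.randn(2, 4, 8).astype("float32"))

# Without learnable parameters: affine=False, weight and bias stay None.
y = F.nn.layer_norm(x, normalized_shape=(8,), affine=False)

# With affine=True the docstring requires weight and bias; their shapes
# should match normalized_shape (assumed here).
w = mge.tensor(np.ones((8,), dtype="float32"))
b = mge.tensor(np.zeros((8,), dtype="float32"))
y_affine = F.nn.layer_norm(x, normalized_shape=(8,), affine=True, weight=w, bias=b)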