MegEngine 天元 / MegEngine
Commit aa632305
Authored Feb 28, 2022 by Megvii Engine Team

docs(functional): replace loss function testcode with doctest format

GitOrigin-RevId: 98224f0e5f04a2a54eb3ddbb0c011bcb20c7f2e0
Parent: e8d0f9db
Showing 1 changed file with 76 additions and 103 deletions.

imperative/python/megengine/functional/loss.py (+76, -103)
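For context on the change: the old examples used Sphinx's `.. testcode::` / `.. testoutput::` directives, while the new ones use interactive doctest lines (`>>>`) with expected output inline, which can be checked both by Sphinx and by the stdlib `doctest` module. A minimal sketch of the new style (the `add_one` function below is purely illustrative, not from the commit):

    def add_one(x):
        """Adds 1 to ``x``.

        Examples:
            >>> add_one(41)
            42
        """
        return x + 1

    if __name__ == "__main__":
        import doctest
        doctest.testmod()  # verifies the >>> example against its expected output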
@@ -66,29 +66,27 @@ def l1_loss(pred: Tensor, label: Tensor, reduction: str = "mean") -> Tensor:
     Args:
         pred: predicted result from model.
         label: ground truth to compare.
-        reduction: the reduction to apply to the output: 'none' | 'mean' | 'sum'.
-            Default: 'mean'
+        reduction: the reduction to apply to the output: 'none' | 'mean' | 'sum'.
 
     Returns:
         loss value.
 
-    Examples:
-
-        .. testcode::
-
-            import numpy as np
-            import megengine as mge
-            import megengine.functional as F
-
-            ipt = mge.tensor(np.array([3, 3, 3, 3]).astype(np.float32))
-            tgt = mge.tensor(np.array([2, 8, 6, 1]).astype(np.float32))
-            loss = F.nn.l1_loss(ipt, tgt)
-            print(loss.numpy())
-
     Shape:
         * ``pred``: :math:`(N, *)` where :math:`*` means any number of additional
           dimensions.
         * ``label``: :math:`(N, *)`. Same shape as ``pred``.
 
-    Outputs:
-
-        .. testoutput::
-
-            2.75
+    Examples:
+        >>> pred = Tensor([3, 3, 3, 3])
+        >>> label = Tensor([2, 8, 6, 1])
+        >>> F.nn.l1_loss(pred, label)
+        Tensor(2.75, device=xpux:0)
+        >>> F.nn.l1_loss(pred, label, reduction="none")
+        Tensor([1 5 3 2], dtype=int32, device=xpux:0)
+        >>> F.nn.l1_loss(pred, label, reduction="sum")
+        Tensor(11, dtype=int32, device=xpux:0)
     """
     diff = pred - label
     return abs(diff)
@@ -118,34 +116,27 @@ def square_loss(pred: Tensor, label: Tensor, reduction: str = "mean") -> Tensor:
     Args:
         pred: predicted result from model.
         label: ground truth to compare.
-        reduction: the reduction to apply to the output: 'none' | 'mean' | 'sum'.
-            Default: 'mean'
+        reduction: the reduction to apply to the output: 'none' | 'mean' | 'sum'.
 
     Returns:
         loss value.
 
     Shape:
-        * pred: :math:`(N, *)` where :math:`*` means any number of additional
+        * ``pred``: :math:`(N, *)` where :math:`*` means any number of additional
           dimensions.
-        * label: :math:`(N, *)`. Same shape as ``pred``.
+        * ``label``: :math:`(N, *)`. Same shape as ``pred``.
 
     Examples:
-        .. testcode::
-
-            import numpy as np
-            import megengine as mge
-            import megengine.functional as F
-
-            ipt = mge.tensor(np.array([3, 3, 3, 3]).astype(np.float32))
-            tgt = mge.tensor(np.array([2, 8, 6, 1]).astype(np.float32))
-            loss = F.nn.square_loss(ipt, tgt)
-            print(loss.numpy())
-
-    Outputs:
-
-        .. testoutput::
-
-            9.75
+        >>> pred = Tensor([3, 3, 3, 3])
+        >>> label = Tensor([2, 8, 6, 1])
+        >>> F.nn.square_loss(pred, label)
+        Tensor(9.75, device=xpux:0)
+        >>> F.nn.square_loss(pred, label, reduction="none")
+        Tensor([ 1. 25.  9.  4.], device=xpux:0)
+        >>> F.nn.square_loss(pred, label, reduction="sum")
+        Tensor(39.0, device=xpux:0)
     """
     diff = pred - label
     return diff ** 2
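The square_loss values follow the same pattern with a squared difference; again an illustrative NumPy check, not part of the commit:

    import numpy as np

    pred = np.array([3, 3, 3, 3], dtype=np.float32)
    label = np.array([2, 8, 6, 1], dtype=np.float32)

    elementwise = (pred - label) ** 2    # [ 1. 25.  9.  4.] -> reduction="none"
    print(elementwise.mean())            # 9.75              -> reduction="mean"
    print(elementwise.sum())             # 39.0              -> reduction="sum"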
@@ -162,11 +153,6 @@ def cross_entropy(
 ) -> Tensor:
     r"""Computes the multi-class cross entropy loss (using logits by default).
 
-    By default(``with_logitis`` is True), ``pred`` is assumed to be logits,
-    class probabilities are given by softmax.
-
-    It has better numerical stability compared with sequential calls to :func:`~.softmax` and :func:`~.cross_entropy`.
-
     When using label smoothing, the label distribution is as follows:
 
     .. math:: y^{LS}_{k}=y_{k}\left(1-\alpha\right)+\alpha/K
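Plugging numbers into the label-smoothing formula above: with K = 4 classes, alpha = 0.1, and a one-hot target on class 1, each entry becomes y_k(1 - alpha) + alpha/K. An illustrative check, not part of the commit:

    import numpy as np

    alpha, K = 0.1, 4
    y = np.array([0.0, 1.0, 0.0, 0.0])   # one-hot target
    y_ls = y * (1 - alpha) + alpha / K
    print(y_ls)                          # [0.025 0.925 0.025 0.025]
    print(y_ls.sum())                    # ~1.0 -- still a valid distribution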
@@ -175,36 +161,39 @@ def cross_entropy(
     k is the index of label distribution. :math:`\alpha` is ``label_smooth`` and :math:`K` is the number of classes.
 
     Args:
-        pred: input tensor representing the predicted probability.
+        pred: input tensor representing the predicted value.
         label: input tensor representing the classification label.
         axis: an axis along which softmax will be applied. Default: 1
         with_logits: whether to apply softmax first. Default: True
         label_smooth: a label smoothing of parameter that can re-distribute target distribution. Default: 0
-        reduction: the reduction to apply to the output: 'none' | 'mean' | 'sum'.
-            Default: 'mean'
+        reduction: the reduction to apply to the output: 'none' | 'mean' | 'sum'.
 
     Returns:
         loss value.
 
-    Examples:
-
-        .. testcode::
-
-            import numpy as np
-            from megengine import tensor
-            import megengine.functional as F
-
-            data_shape = (1, 2)
-            label_shape = (1, )
-            pred = tensor(np.array([0, 0], dtype=np.float32).reshape(data_shape))
-            label = tensor(np.ones(label_shape, dtype=np.int32))
-            loss = F.nn.cross_entropy(pred, label)
-            print(loss.numpy().round(decimals=4))
-
-    Outputs:
-
-        .. testoutput::
-
-            0.6931
+    By default(``with_logitis`` is True), ``pred`` is assumed to be logits,
+    class probabilities are given by softmax.
+    It has better numerical stability compared with sequential calls to
+    :func:`~.softmax` and :func:`~.cross_entropy`.
+
+    Examples:
+        >>> pred = Tensor([[0., 1.], [0.3, 0.7], [0.7, 0.3]])
+        >>> label = Tensor([1., 1., 1.])
+        >>> F.nn.cross_entropy(pred, label)  # doctest: +SKIP
+        Tensor(0.57976407, device=xpux:0)
+        >>> F.nn.cross_entropy(pred, label, reduction="none")
+        Tensor([0.3133 0.513  0.913 ], device=xpux:0)
+
+    If the ``pred`` value has been probabilities, set ``with_logits`` to False:
+
+        >>> pred = Tensor([[0., 1.], [0.3, 0.7], [0.7, 0.3]])
+        >>> label = Tensor([1., 1., 1.])
+        >>> F.nn.cross_entropy(pred, label, with_logits=False)  # doctest: +SKIP
+        Tensor(0.5202159, device=xpux:0)
+        >>> F.nn.cross_entropy(pred, label, with_logits=False, reduction="none")
+        Tensor([0.     0.3567 1.204 ], device=xpux:0)
     """
     n0 = pred.ndim
     n1 = label.ndim
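The new cross_entropy doctest values can be reproduced by hand: with logits, each row's loss is -log(softmax(row)[class]); with with_logits=False, it is simply -log(row[class]). An illustrative NumPy check, not part of the commit:

    import numpy as np

    pred = np.array([[0.0, 1.0], [0.3, 0.7], [0.7, 0.3]])
    cls = 1                                    # all labels are class 1

    # with_logits=True: softmax first, then negative log-likelihood
    probs = np.exp(pred) / np.exp(pred).sum(axis=1, keepdims=True)
    nll = -np.log(probs[:, cls])
    print(nll.round(4))                        # [0.3133 0.513  0.913 ]
    print(nll.mean())                          # ~0.5798

    # with_logits=False: pred rows are already probabilities
    print(-np.log(pred[:, cls]).round(4))      # [0.     0.3567 1.204 ]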
@@ -234,36 +223,38 @@ def binary_cross_entropy(
 ) -> Tensor:
     r"""Computes the binary cross entropy loss (using logits by default).
 
-    By default(``with_logitis`` is True), ``pred`` is assumed to be logits,
-    class probabilities are given by sigmoid.
-
     Args:
         pred: `(N, *)`, where `*` means any number of additional dimensions.
         label: `(N, *)`, same shape as the input.
         with_logits: bool, whether to apply sigmoid first. Default: True
-        reduction: the reduction to apply to the output: 'none' | 'mean' | 'sum'.
-            Default: 'mean'
+        reduction: the reduction to apply to the output: 'none' | 'mean' | 'sum'.
 
     Returns:
         loss value.
 
-    Examples:
-
-        .. testcode::
-
-            import numpy as np
-            from megengine import tensor
-            import megengine.functional as F
-
-            pred = tensor(np.array([0, 0], dtype=np.float32).reshape(1, 2))
-            label = tensor(np.ones((1, 2), dtype=np.float32))
-            loss = F.nn.binary_cross_entropy(pred, label)
-            print(loss.numpy().round(decimals=4))
-
-    Outputs:
-
-        .. testoutput::
-
-            0.6931
+    By default(``with_logitis`` is True), ``pred`` is assumed to be logits,
+    class probabilities are given by softmax.
+    It has better numerical stability compared with sequential calls to
+    :func:`~.sigmoid` and :func:`~.binary_cross_entropy`.
+
+    Examples:
+        >>> pred = Tensor([0.9, 0.7, 0.3])
+        >>> label = Tensor([1., 1., 1.])
+        >>> F.nn.binary_cross_entropy(pred, label)
+        Tensor(0.4328984, device=xpux:0)
+        >>> F.nn.binary_cross_entropy(pred, label, reduction="none")
+        Tensor([0.3412 0.4032 0.5544], device=xpux:0)
+
+    If the ``pred`` value has been probabilities, set ``with_logits`` to False:
+
+        >>> pred = Tensor([0.9, 0.7, 0.3])
+        >>> label = Tensor([1., 1., 1.])
+        >>> F.nn.binary_cross_entropy(pred, label, with_logits=False)
+        Tensor(0.5553361, device=xpux:0)
+        >>> F.nn.binary_cross_entropy(pred, label, with_logits=False, reduction="none")
+        Tensor([0.1054 0.3567 1.204 ], device=xpux:0)
     """
     if not with_logits:
         return -(label * log(pred) + (1 - label) * log(1 - pred))
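Similarly for binary_cross_entropy with all-ones labels: with logits the per-element loss reduces to softplus(-pred) = log(1 + exp(-pred)); with probabilities it is -log(pred). An illustrative check, not part of the commit:

    import numpy as np

    pred = np.array([0.9, 0.7, 0.3])

    # with_logits=True and label=1: loss = log(1 + exp(-pred))
    with_logits = np.log1p(np.exp(-pred))
    print(with_logits.round(4))          # [0.3412 0.4032 0.5544]
    print(with_logits.mean())            # ~0.4329

    # with_logits=False and label=1: loss = -log(pred)
    print(-np.log(pred).round(4))        # [0.1054 0.3567 1.204 ]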
@@ -292,22 +283,15 @@ def hinge_loss(
         loss value.
 
     Examples:
-        .. testcode::
-
-            from megengine import tensor
-            import megengine.functional as F
-
-            pred = tensor([[0.5, -0.5, 0.1], [-0.6, 0.7, 0.8]], dtype="float32")
-            label = tensor([[1, -1, -1], [-1, 1, 1]], dtype="float32")
-            loss = F.nn.hinge_loss(pred, label)
-            print(loss.numpy())
-
-    Outputs:
-
-        .. testoutput::
-
-            1.5
+        >>> pred = Tensor([[0.5, -0.5, 0.1], [-0.6, 0.7, 0.8]])
+        >>> label = Tensor([[1, -1, -1], [-1, 1, 1]])
+        >>> F.nn.hinge_loss(pred, label)
+        Tensor(1.5, device=xpux:0)
+        >>> F.nn.hinge_loss(pred, label, reduction="none")
+        Tensor([2.1 0.9], device=xpux:0)
+        >>> F.nn.hinge_loss(pred, label, reduction="sum")
+        Tensor(3.0, device=xpux:0)
     """
     norm = norm.upper()
     assert norm in ["L1", "L2"], "norm must be L1 or L2"
@@ -381,23 +365,12 @@ def ctc_loss(
     Examples:
-        .. testcode::
-
-            from megengine import tensor
-            import megengine.functional as F
-
-            pred = tensor([[[0.0614, 0.9386],[0.8812, 0.1188]],[[0.699, 0.301 ],[0.2572, 0.7428]]])
-            pred_length = tensor([2,2])
-            label = tensor([1,1])
-            label_lengths = tensor([1,1])
-            loss = F.nn.ctc_loss(pred, pred_length, label, label_lengths)
-            print(loss.numpy())
-
-    Outputs:
-
-        .. testoutput::
-
-            0.1504417
+        >>> pred = Tensor([[[0.0614, 0.9386],[0.8812, 0.1188]],[[0.699, 0.301 ],[0.2572, 0.7428]]])
+        >>> pred_lengths = Tensor([2, 2])
+        >>> label = Tensor([1, 1])
+        >>> label_lengths = Tensor([1, 1])
+        >>> F.nn.ctc_loss(pred, pred_lengths, label, label_lengths)
+        Tensor(0.1504417, device=xpux:0)
     """
     T, N, C = pred.shape
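Since T = 2 here, the CTC value can be verified by brute-force enumeration: assuming blank index 0, per-frame probabilities shaped (T, N, C), and target sequence [1], the valid length-2 alignments are (1,1), (blank,1), and (1,blank). An illustrative check under those assumptions, not part of the commit:

    import numpy as np

    # pred[t, n, c]: probability of class c at time t for batch item n
    pred = np.array([[[0.0614, 0.9386], [0.8812, 0.1188]],
                     [[0.699,  0.301 ], [0.2572, 0.7428]]])

    losses = []
    for n in range(pred.shape[1]):
        p = pred[:, n, :]                # (T, C) for one batch item
        # alignments collapsing to the label [1]: (1,1), (blank,1), (1,blank)
        total = p[0, 1] * p[1, 1] + p[0, 0] * p[1, 1] + p[0, 1] * p[1, 0]
        losses.append(-np.log(total))
    print(np.mean(losses))               # ~0.1504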
登录