Metrics¶
NOTE: metrics in this module expect the predictions and ground truth to have the same dimensions for regression and binary classification problems: \((N_{samples}, 1)\). For multiclass classification problems, the ground truth is expected to be a 1D tensor with the corresponding classes. See the Examples below.
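For reference, a minimal sketch of the expected input shapes (the tensors below are illustrative only):
import torch

# Regression / binary classification: predictions and ground truth share
# the shape (N_samples, 1)
y_true_binary = torch.tensor([0, 1, 0, 1]).view(-1, 1)          # shape (4, 1)
y_pred_binary = torch.tensor([0.3, 0.2, 0.6, 0.7]).view(-1, 1)  # shape (4, 1)

# Multiclass classification: ground truth is a 1D tensor of class indices
# and the predictions carry one score per class
y_true_multi = torch.tensor([0, 1, 2])  # shape (3,)
y_pred_multi = torch.rand(3, 3)         # shape (3, n_classes)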
We have added the possibility of using the metrics available in the torchmetrics library. Note that this library is still in its early versions and therefore this option should be used with caution. To use torchmetrics, simply import the metrics and use them as you would any of the pytorch-widedeep metrics described below.
from torchmetrics import Accuracy, Precision

from pytorch_widedeep import Trainer

# 'model' is assumed to be an already defined WideDeep model
accuracy = Accuracy(average=None, num_classes=2)
precision = Precision(average='micro', num_classes=2)
trainer = Trainer(model, objective="binary", metrics=[accuracy, precision])
A functioning example of pytorch-widedeep using torchmetrics can be found in the Examples folder.
NOTE: the forward method for all metrics in this module takes two tensors, y_pred and y_true (in that order). Therefore, we do not include the method in the documentation.
Accuracy ¶
Accuracy(top_k=1)
Bases: Metric
Class to calculate the accuracy for both binary and categorical problems
Parameters:
- top_k (int) – Accuracy will be computed using the top k most likely classes in multiclass problems.
Examples:
>>> import torch
>>>
>>> from pytorch_widedeep.metrics import Accuracy
>>>
>>> acc = Accuracy()
>>> y_true = torch.tensor([0, 1, 0, 1]).view(-1, 1)
>>> y_pred = torch.tensor([[0.3, 0.2, 0.6, 0.7]]).view(-1, 1)
>>> acc(y_pred, y_true)
array(0.5)
>>>
>>> acc = Accuracy(top_k=2)
>>> y_true = torch.tensor([0, 1, 2])
>>> y_pred = torch.tensor([[0.3, 0.5, 0.2], [0.1, 0.1, 0.8], [0.1, 0.5, 0.4]])
>>> acc(y_pred, y_true)
array(0.66666667)
reset ¶
reset()
resets counters to 0
Precision ¶
Precision(average=True)
Bases: Metric
Class to calculate the precision for both binary and categorical problems
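For the binary case, precision is the standard ratio of true positives to predicted positives:
\[\text{precision} = \frac{TP}{TP + FP}\]
For multiclass problems with average=True it is computed per class and the unweighted (macro) mean is reported.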
Parameters:
- average (bool) – This applies only to multiclass problems. If True, calculates precision for each label and finds their unweighted mean.
Examples:
>>> import torch
>>>
>>> from pytorch_widedeep.metrics import Precision
>>>
>>> prec = Precision()
>>> y_true = torch.tensor([0, 1, 0, 1]).view(-1, 1)
>>> y_pred = torch.tensor([[0.3, 0.2, 0.6, 0.7]]).view(-1, 1)
>>> prec(y_pred, y_true)
array(0.5)
>>>
>>> prec = Precision(average=True)
>>> y_true = torch.tensor([0, 1, 2])
>>> y_pred = torch.tensor([[0.7, 0.1, 0.2], [0.1, 0.1, 0.8], [0.1, 0.5, 0.4]])
>>> prec(y_pred, y_true)
array(0.33333334)
reset ¶
reset()
resets counters to 0
Recall ¶
Recall(average=True)
Bases: Metric
Class to calculate the recall for both binary and categorical problems
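For the binary case, recall is the standard ratio of true positives to actual positives:
\[\text{recall} = \frac{TP}{TP + FN}\]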
Parameters:
- average (bool) – This applies only to multiclass problems. If True, calculates recall for each label and finds their unweighted mean.
Examples:
>>> import torch
>>>
>>> from pytorch_widedeep.metrics import Recall
>>>
>>> rec = Recall()
>>> y_true = torch.tensor([0, 1, 0, 1]).view(-1, 1)
>>> y_pred = torch.tensor([[0.3, 0.2, 0.6, 0.7]]).view(-1, 1)
>>> rec(y_pred, y_true)
array(0.5)
>>>
>>> rec = Recall(average=True)
>>> y_true = torch.tensor([0, 1, 2])
>>> y_pred = torch.tensor([[0.7, 0.1, 0.2], [0.1, 0.1, 0.8], [0.1, 0.5, 0.4]])
>>> rec(y_pred, y_true)
array(0.33333334)
reset ¶
reset()
resets counters to 0
FBetaScore ¶
FBetaScore(beta, average=True)
Bases: Metric
Class to calculate the fbeta score for both binary and categorical problems
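The F-beta score is the weighted harmonic mean of precision and recall, with beta controlling how much weight recall receives relative to precision:
\[F_\beta = (1 + \beta^2)\cdot\frac{\text{precision}\cdot\text{recall}}{\beta^{2}\cdot\text{precision} + \text{recall}}\]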
Parameters:
- beta (int) – Coefficient to control the balance between precision and recall.
- average (bool) – This applies only to multiclass problems. If True, calculates fbeta for each label and finds their unweighted mean.
Examples:
>>> import torch
>>>
>>> from pytorch_widedeep.metrics import FBetaScore
>>>
>>> fbeta = FBetaScore(beta=2)
>>> y_true = torch.tensor([0, 1, 0, 1]).view(-1, 1)
>>> y_pred = torch.tensor([[0.3, 0.2, 0.6, 0.7]]).view(-1, 1)
>>> fbeta(y_pred, y_true)
array(0.5)
>>>
>>> fbeta = FBetaScore(beta=2)
>>> y_true = torch.tensor([0, 1, 2])
>>> y_pred = torch.tensor([[0.7, 0.1, 0.2], [0.1, 0.1, 0.8], [0.1, 0.5, 0.4]])
>>> fbeta(y_pred, y_true)
array(0.33333334)
reset ¶
reset()
resets precision and recall
F1Score ¶
F1Score(average=True)
Bases: Metric
Class to calculate the f1 score for both binary and categorical problems
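The F1 score is the harmonic mean of precision and recall, i.e. the F-beta score with \(\beta = 1\):
\[F_1 = 2\cdot\frac{\text{precision}\cdot\text{recall}}{\text{precision} + \text{recall}}\]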
Parameters:
- average (bool) – This applies only to multiclass problems. If True, calculates f1 for each label and finds their unweighted mean.
Examples:
>>> import torch
>>>
>>> from pytorch_widedeep.metrics import F1Score
>>>
>>> f1 = F1Score()
>>> y_true = torch.tensor([0, 1, 0, 1]).view(-1, 1)
>>> y_pred = torch.tensor([[0.3, 0.2, 0.6, 0.7]]).view(-1, 1)
>>> f1(y_pred, y_true)
array(0.5)
>>>
>>> f1 = F1Score()
>>> y_true = torch.tensor([0, 1, 2])
>>> y_pred = torch.tensor([[0.7, 0.1, 0.2], [0.1, 0.1, 0.8], [0.1, 0.5, 0.4]])
>>> f1(y_pred, y_true)
array(0.33333334)
reset ¶
reset()
resets counters to 0
R2Score ¶
R2Score()
Bases: Metric
Calculates R-Squared, the coefficient of determination:
\[R^2 = 1 - \frac{\sum_{j=1}^{n}(\hat{y_j} - y_j)^2}{\sum_{j=1}^{n}(\hat{y_j} - \bar{y})^2}\]
where \(\hat{y_j}\) is the ground truth, \(y_j\) is the predicted value and \(\bar{y}\) is the mean of the ground truth.
Examples:
>>> import torch
>>>
>>> from pytorch_widedeep.metrics import R2Score
>>>
>>> r2 = R2Score()
>>> y_true = torch.tensor([3, -0.5, 2, 7]).view(-1, 1)
>>> y_pred = torch.tensor([2.5, 0.0, 2, 8]).view(-1, 1)
>>> r2(y_pred, y_true)
array(0.94860814)
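As a quick sanity check, the same value can be reproduced directly from the definition above (a minimal sketch, not part of the library API):
import torch

y_true = torch.tensor([3, -0.5, 2, 7])
y_pred = torch.tensor([2.5, 0.0, 2, 8])

ss_res = ((y_true - y_pred) ** 2).sum()         # residual sum of squares: 1.5
ss_tot = ((y_true - y_true.mean()) ** 2).sum()  # total sum of squares: 29.1875
r2 = 1 - ss_res / ss_tot                        # ~0.9486, matching the metric above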
reset ¶
reset()
resets counters to 0