F1 score loss implementation problem
Created by: nuptxxp
I implemented an F1 loss function in Paddle:
def _compute_loss(self, dec_output):
    tp = fluid.layers.sum(fluid.layers.cast(self.label * dec_output, dtype="float32"))
    tn = fluid.layers.sum(fluid.layers.cast((1 - self.label) * (1 - dec_output), dtype="float32"))
    fp = fluid.layers.sum(fluid.layers.cast((1 - self.label) * dec_output, dtype="float32"))
    fn = fluid.layers.sum(fluid.layers.cast(self.label * (1 - dec_output), dtype="float32"))
    print("shape:loss", tp.shape, tn.shape, fp.shape, fn.shape)
    p = tp / (tp + fp + 1e-07)
    r = tp / (tp + fn + 1e-07)
    f1 = 2 * p * r / (p + r + 1e-07)
    print("f1_shape ", f1.shape)
    print("mean_shape ", fluid.layers.mean(f1))
    print("loss_shape", 1 - fluid.layers.mean(f1))
    return 1 - fluid.layers.mean(f1), dec_output, self.label
The printed shapes are as follows:
shape:loss (-1L, -1L, 3L) (-1L, -1L, 3L) (-1L, -1L, 3L) (-1L, -1L, 3L)
f1_shape (-1L, -1L, 3L)
mean_shape name: "mean_0.tmp_0"
type {
type: LOD_TENSOR
lod_tensor {
tensor {
data_type: FP32
dims: 1
}
}
}
persistable: false
loss_shape name: "tmp_25"
type {
type: LOD_TENSOR
lod_tensor {
tensor {
data_type: FP32
dims: 1
}
lod_level: 0
}
}
When I actually train with this loss, the results are very strange. Could you help me figure out why? The same loss function implemented in Keras works fine.
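For reference, the Keras version is roughly the sketch below (not my exact code; the function name f1_loss and the keras.backend import are only illustrative). In Keras, K.sum called without an axis argument reduces over all elements:

import keras.backend as K

def f1_loss(y_true, y_pred):
    # K.sum with no axis reduces over every element, so tp/tn/fp/fn are scalars here.
    tp = K.sum(K.cast(y_true * y_pred, "float32"))
    tn = K.sum(K.cast((1 - y_true) * (1 - y_pred), "float32"))
    fp = K.sum(K.cast((1 - y_true) * y_pred, "float32"))
    fn = K.sum(K.cast(y_true * (1 - y_pred), "float32"))
    p = tp / (tp + fp + 1e-07)
    r = tp / (tp + fn + 1e-07)
    f1 = 2 * p * r / (p + r + 1e-07)
    return 1 - f1

The point of the comparison is just that in the Keras version the intermediate sums are scalars, whereas the shapes printed above stay (-1, -1, 3) in the Paddle version.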