"* reduction=\"none\" in binary_cross_entropy_with_logits gives the same result for the test case at hand as the author's implementation: https://pytorch.org/docs/stable/generated/torch.nn.functional.binary_cross_entropy_with_logits.html\n",
" * however, Keras BCE uses AUTO reduction by default (https://www.tensorflow.org/api_docs/python/tf/keras/losses/BinaryCrossentropy), which changes behaviour depending on the usage context: https://www.tensorflow.org/api_docs/python/tf/keras/losses/Reduction\n",
"* Keras also returns a flattened BCE result\n",
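"\n",
"A quick sketch of the equivalence (the example tensors are made up for illustration): with reduction=\"none\", PyTorch returns the per-sample losses that Keras expects, and averaging them reproduces the default reduction=\"mean\":\n",
"\n",
"```python\n",
"import torch\n",
"import torch.nn.functional as F\n",
"\n",
"logits = torch.tensor([0.5, -1.0, 2.0])\n",
"targets = torch.tensor([1.0, 0.0, 1.0])\n",
"# per-sample losses, no reduction applied\n",
"per_sample = F.binary_cross_entropy_with_logits(logits, targets, reduction=\"none\")\n",
"# averaging the per-sample losses matches the default reduction=\"mean\"\n",
"assert torch.isclose(per_sample.mean(), F.binary_cross_entropy_with_logits(logits, targets))\n",
"```\n",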
"* Keras expects the loss per sample and averages it in the backend: https://keras.io/api/losses/"
]
},
{
"cell_type": "code",
"execution_count": 83,
"id": "9864678c",
"metadata": {},
"outputs": [],
...
...
"import torch.nn.functional as F\n",
"from pytorch_widedeep.wdtypes import *\n",
"\n",
"\n",
"def _predict_ziln(preds: Tensor) -> Tensor:\n",
" \"\"\"Calculates predicted mean of zero inflated lognormal logits.\n",