Commit dff1930a
Authored on Nov 26, 2021 by Pavol Mulinka

added activation to head

Parent: 9d2f25a7
Showing 2 changed files with 20 additions and 6 deletions (+20 −6):

pytorch_widedeep/losses.py (+3 −3)
pytorch_widedeep/models/wide_deep.py (+17 −3)
pytorch_widedeep/losses.py

```diff
@@ -15,13 +15,13 @@ class TweedieLoss(nn.Module):
     <https://arxiv.org/abs/1811.10192>`
     """

-    def __init__():
+    def __init__(self):
         super().__init__()

     def forward(self, input: Tensor, target: Tensor, p=1.5) -> Tensor:
-        loss = -y * torch.pow(y_hat, 1 - p) / (1 - p) + \
-            torch.pow(y_hat, 2 - p) / (2 - p)
+        loss = -target * torch.pow(input, 1 - p) / (1 - p) + \
+            torch.pow(input, 2 - p) / (2 - p)
         return torch.mean(loss)
```
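The corrected hunk also explains why this commit adds an optional head activation in `wide_deep.py`: the Tweedie deviance takes fractional powers of the prediction, so `input` must be strictly positive. A minimal self-contained sketch of the patched loss (re-typed here for illustration, not imported from the library):

```python
import torch
from torch import nn, Tensor


class TweedieLoss(nn.Module):
    # Illustration of the patched loss; the real class lives in
    # pytorch_widedeep/losses.py. With p=1.5 each element is
    # 2 * target / sqrt(input) + 2 * sqrt(input).
    def __init__(self):
        super().__init__()

    def forward(self, input: Tensor, target: Tensor, p=1.5) -> Tensor:
        loss = -target * torch.pow(input, 1 - p) / (1 - p) + \
            torch.pow(input, 2 - p) / (2 - p)
        return torch.mean(loss)


# Predictions must be strictly positive for the fractional powers to be finite
y_pred = torch.tensor([1.0, 2.0, 3.0])
y_true = torch.tensor([0.0, 2.0, 4.0])
loss = TweedieLoss()(y_pred, y_true)
print(loss.item())  # ≈ 5.2466
```

A negative or zero prediction would produce `nan`/`inf` here, which is exactly the case the new `head_activation_last` flag guards against.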
pytorch_widedeep/models/wide_deep.py

```diff
@@ -16,7 +16,7 @@ import torch
 import torch.nn as nn

 from pytorch_widedeep.wdtypes import *  # noqa: F403
-from pytorch_widedeep.models.tab_mlp import MLP
+from pytorch_widedeep.models.tab_mlp import MLP, get_activation_fn
 from pytorch_widedeep.models.tabnet.tab_net import TabNetPredLayer

 warnings.filterwarnings("default", category=UserWarning)
@@ -87,6 +87,10 @@ class WideDeep(nn.Module):
         the order of the operations in the dense layer. If ``True``:
         ``[LIN -> ACT -> BN -> DP]``. If ``False``: ``[BN -> DP -> LIN ->
         ACT]``
+    head_activation_last: bool, default=False
+        Whether the final layer has an activation function. Important if you
+        are using loss functions with non-negative input restrictions, e.g.
+        RMSLE, or if you know your predictions are limited to [0, inf)
     pred_dim: int, default = 1
         Size of the final wide and deep output layer containing the
         predictions. `1` for regression and binary classification or number
@@ -131,6 +135,7 @@ class WideDeep(nn.Module):
         head_batchnorm: bool = False,
         head_batchnorm_last: bool = False,
         head_linear_first: bool = False,
+        head_activation_last: bool = False,
         pred_dim: int = 1,
     ):
         super(WideDeep, self).__init__()
@@ -154,6 +159,8 @@ class WideDeep(nn.Module):
         self.deeptext = deeptext
         self.deepimage = deepimage
         self.deephead = deephead
+        # to check when loss function is applied
+        self.head_activation_last = head_activation_last

         if self.deeptabular is not None:
             self.is_tabnet = deeptabular.__class__.__name__ == "TabNet"
@@ -206,12 +213,15 @@ class WideDeep(nn.Module):
                 head_batchnorm_last,
                 head_linear_first,
             )
             self.deephead.add_module(
                 "head_out", nn.Linear(head_hidden_dims[-1], self.pred_dim)
             )
+            if self.head_activation_last:
+                self.deephead.add_module("head_act", get_activation_fn(head_activation))

-    def _add_pred_layer(self):
+    def _add_pred_layer(self, head_activation):
         if self.deeptabular is not None:
             if self.is_tabnet:
                 self.deeptabular = nn.Sequential(
@@ -231,6 +241,10 @@ class WideDeep(nn.Module):
             self.deepimage = nn.Sequential(
                 self.deepimage, nn.Linear(self.deepimage.output_dim, self.pred_dim)
             )
+        if self.head_activation_last:
+            self.deephead.add_module("head_act", get_activation_fn(head_activation))

     def _forward_wide(self, X):
         if self.wide is not None:
```
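The effect of the new `head_activation_last` flag can be sketched outside the library. `get_activation_fn` below is a hypothetical stand-in for `pytorch_widedeep.models.tab_mlp.get_activation_fn` (assumed to map an activation name to an `nn.Module`); the `add_module` pattern mirrors the commit:

```python
import torch
from torch import nn


# Hypothetical stand-in for pytorch_widedeep.models.tab_mlp.get_activation_fn;
# the real helper resolves an activation name to an nn.Module.
def get_activation_fn(activation: str) -> nn.Module:
    return {"relu": nn.ReLU(), "softplus": nn.Softplus(), "sigmoid": nn.Sigmoid()}[activation]


head_hidden_dims = [16, 8]
pred_dim = 1

# A toy custom head, built the way WideDeep assembles deephead
deephead = nn.Sequential(nn.Linear(head_hidden_dims[0], head_hidden_dims[1]), nn.ReLU())
deephead.add_module("head_out", nn.Linear(head_hidden_dims[-1], pred_dim))

head_activation_last = True
if head_activation_last:
    # Softplus keeps predictions in (0, inf), as RMSLE/Tweedie-style losses require
    deephead.add_module("head_act", get_activation_fn("softplus"))

out = deephead(torch.randn(4, head_hidden_dims[0]))
```

With the flag off, the head ends in a bare `nn.Linear` and can emit negative values; with it on, every prediction is strictly positive.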