Greenplum / Pytorch Widedeep

Commit 748f3330
Authored Sep 25, 2019 by jrzaurin
removing attention layer. Will be implemented in future releases
Parent: a658304e
1 changed file with 1 addition and 20 deletions (+1, -20):

pytorch_widedeep/models/deep_text.py
@@ -10,15 +10,12 @@ from ..wdtypes import *
 class DeepText(nn.Module):
     def __init__(self, vocab_size:int, embedding_dim:int, hidden_dim:int, n_layers:int,
                  rnn_dropout:float, spatial_dropout:float, padding_idx:int, output_dim:int,
-                 attention:bool=False, bidirectional:bool=False,
+                 bidirectional:bool=False,
                  embedding_matrix:Optional[np.ndarray]=None):
         super(DeepText, self).__init__()
         """
         Standard Text Classifier/Regressor with a stack of RNNs.
         """
         self.bidirectional = bidirectional
-        self.attention = attention
         self.spatial_dropout = spatial_dropout
         self.embedding_dropout = nn.Dropout2d(spatial_dropout)
         self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=padding_idx)
@@ -33,20 +30,6 @@ class DeepText(nn.Module):
         input_dim = hidden_dim*2 if bidirectional else hidden_dim
         self.dtlinear = nn.Linear(input_dim, output_dim)

-    def attention_net(self, output:Tensor, hidden:Tensor) -> Tensor:
-        """
-        Attention through Soft alignment Score between output and last hidden.
-        Read here (and references therein) for more details:
-        https://machinelearningmastery.com/how-does-attention-work-in-encoder-decoder-recurrent-neural-networks/
-        code from here (there are more sophisticated approaches but these will do):
-        https://github.com/prakashpandey9/Text-Classification-Pytorch/blob/master/models/LSTM_Attn.py
-        """
-        attn_weights = torch.bmm(output, hidden.unsqueeze(2)).squeeze(2)
-        attn_weights = F.softmax(attn_weights, 1)
-        new_hidden = torch.bmm(output.transpose(1,2), attn_weights.unsqueeze(2)).squeeze(2)
-        return new_hidden
-
     def forward(self, X:Tensor)->Tensor:
         embedded = self.embedding(X)
@@ -62,7 +45,5 @@ class DeepText(nn.Module):
             last_h = torch.cat((h[-2],h[-1]), dim=1)
         else:
             last_h = h[-1]
-        if self.attention:
-            last_h = self.attention_net(o, last_h)
         out = self.dtlinear(last_h)
         return out
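For reference, the deleted attention_net implements a dot-product soft alignment score: each RNN output timestep is scored against the final hidden state, the scores are softmax-normalized, and the attention-weighted sum of the outputs replaces the plain last hidden state. Below is a minimal, self-contained sketch of the removed logic; the tensor shapes in the comments are illustrative, not taken from the library.

import torch
import torch.nn.functional as F

def attention_net(output: torch.Tensor, hidden: torch.Tensor) -> torch.Tensor:
    # output: (batch, seq_len, hidden_dim) -- RNN outputs for every timestep
    # hidden: (batch, hidden_dim)          -- the last hidden state
    # Dot-product score of each timestep against the last hidden state.
    attn_weights = torch.bmm(output, hidden.unsqueeze(2)).squeeze(2)  # (batch, seq_len)
    attn_weights = F.softmax(attn_weights, 1)
    # Attention-weighted sum of the outputs.
    new_hidden = torch.bmm(output.transpose(1, 2),
                           attn_weights.unsqueeze(2)).squeeze(2)      # (batch, hidden_dim)
    return new_hidden

# Shape check with random tensors (batch=4, seq_len=7, hidden_dim=32):
assert attention_net(torch.randn(4, 7, 32), torch.randn(4, 32)).shape == (4, 32)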