Greenplum / Annotated Deep Learning Paper Implementations
Commit 895cad46
Authored on July 25, 2021 by Varuna Jayasiri

link fixes

Parent: ce23bc10
Showing 4 changed files with 8 additions and 8 deletions (+8 -8)
docs/graphs/gat/readme.html      +2 -2
docs/graphs/gatv2/readme.html    +2 -2
labml_nn/graphs/gat/readme.md    +2 -2
labml_nn/graphs/gatv2/readme.md  +2 -2
docs/graphs/gat/readme.html
@@ -67,7 +67,7 @@
 <div class='section-link'>
     <a href='#section-0'>#</a>
 </div>
-<h1><a href="https://nn.labml.ai/graph/gat/index.html">Graph Attention Networks (GAT)</a></h1>
+<h1><a href="https://nn.labml.ai/graphs/gat/index.html">Graph Attention Networks (GAT)</a></h1>
 <p>This is a <a href="https://pytorch.org">PyTorch</a> implementation of the paper
 <a href="https://arxiv.org/abs/1710.10903">Graph Attention Networks</a>.</p>
 <p>GATs work on graph data.
@@ -79,7 +79,7 @@ GAT consists of graph attention layers stacked on top of each other.
 Each graph attention layer gets node embeddings as inputs and outputs transformed embeddings.
 The node embeddings pay attention to the embeddings of other nodes it’s connected to.
 The details of graph attention layers are included alongside the implementation.</p>
-<p>Here is <a href="https://nn.labml.ai/graph/gat/experiment.html">the training code</a> for training
+<p>Here is <a href="https://nn.labml.ai/graphs/gat/experiment.html">the training code</a> for training
 a two-layer GAT on Cora dataset.</p>
 <p><a href="https://app.labml.ai/run/d6c636cadf3511eba2f1e707f612f95d"><img alt="View Run" src="https://img.shields.io/badge/labml-experiment-brightgreen" /></a></p>
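The readme text in this diff summarizes how GAT works: each graph attention layer maps node embeddings to transformed embeddings, with every node attending only to the nodes it is connected to. For orientation, a minimal single-head sketch of such a layer follows; it is not the repository's implementation, and the class name, the dense boolean adjacency (with self-loops), and the single-head setup are assumptions made for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MiniGATLayer(nn.Module):
    """Minimal single-head graph attention layer (sketch, not labml_nn's code)."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features, bias=False)
        # Attention vector scoring the concatenation [W h_i || W h_j].
        self.attn = nn.Linear(2 * out_features, 1, bias=False)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (n, in_features); adj: (n, n) boolean adjacency, self-loops included.
        g = self.linear(h)                                  # (n, out_features)
        n = g.size(0)
        g_i = g.unsqueeze(1).expand(n, n, -1)               # query node i
        g_j = g.unsqueeze(0).expand(n, n, -1)               # attended node j
        e = F.leaky_relu(self.attn(torch.cat([g_i, g_j], dim=-1)),
                         negative_slope=0.2).squeeze(-1)    # (n, n) raw scores
        e = e.masked_fill(~adj, float('-inf'))              # neighbours only
        a = torch.softmax(e, dim=1)                         # attention per node
        return a @ g                                        # transformed embeddings

Multi-head attention and dropout, which the annotated implementation covers, are omitted here; the point is only the attend-to-neighbours masking the readme describes.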
docs/graphs/gatv2/readme.html
@@ -67,7 +67,7 @@
 <div class='section-link'>
     <a href='#section-0'>#</a>
 </div>
-<h1><a href="https://nn.labml.ai/graph/gatv2/index.html">Graph Attention Networks v2 (GATv2)</a></h1>
+<h1><a href="https://nn.labml.ai/graphs/gatv2/index.html">Graph Attention Networks v2 (GATv2)</a></h1>
 <p>This is a <a href="https://pytorch.org">PyTorch</a> implementation of the GATv2 opeartor from the paper
 <a href="https://arxiv.org/abs/2105.14491">How Attentive are Graph Attention Networks?</a>.</p>
 <p>GATv2s work on graph data.
@@ -78,7 +78,7 @@ connect the papers.</p>
 since the linear layers in the standard GAT are applied right after each other, the ranking
 of attended nodes is unconditioned on the query node.
 In contrast, in GATv2, every node can attend to any other node.</p>
-<p>Here is <a href="https://nn.labml.ai/graph/gatv2/experiment.html">the training code</a> for training
+<p>Here is <a href="https://nn.labml.ai/graphs/gatv2/experiment.html">the training code</a> for training
 a two-layer GAT on Cora dataset.</p>
 <p><a href="https://app.labml.ai/run/8e27ad82ed2611ebabb691fb2028a868"><img alt="View Run" src="https://img.shields.io/badge/labml-experiment-brightgreen" /></a></p>
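The hunk above quotes the key observation behind GATv2: because standard GAT applies its two linear maps (the weight matrix and the attention vector) right after each other with no nonlinearity in between, the ranking of attended nodes is static, unconditioned on the query node. A hedged sketch of just the per-edge score computation, with illustrative parameter shapes (in GAT, W: (out, in) and a: (2*out,); in GATv2, W: (out, 2*in) and a: (out,)):

import torch
import torch.nn.functional as F

def gat_score(a, W, h_i, h_j):
    # GAT: the LeakyReLU comes after both linear maps, so a effectively fuses
    # with W and the ranking over j is the same for every query node i.
    return F.leaky_relu(a @ torch.cat([W @ h_i, W @ h_j]), negative_slope=0.2)

def gatv2_score(a, W, h_i, h_j):
    # GATv2: the nonlinearity sits between the two linear maps, so the score
    # is a genuinely joint function of (h_i, h_j); this is the "dynamic
    # attention" the paper argues for.
    return a @ F.leaky_relu(W @ torch.cat([h_i, h_j]), negative_slope=0.2)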
labml_nn/graphs/gat/readme.md
-# [Graph Attention Networks (GAT)](https://nn.labml.ai/graph/gat/index.html)
+# [Graph Attention Networks (GAT)](https://nn.labml.ai/graphs/gat/index.html)
 This is a [PyTorch](https://pytorch.org) implementation of the paper
 [Graph Attention Networks](https://arxiv.org/abs/1710.10903).
@@ -14,7 +14,7 @@ Each graph attention layer gets node embeddings as inputs and outputs transforme
 The node embeddings pay attention to the embeddings of other nodes it's connected to.
 The details of graph attention layers are included alongside the implementation.
-Here is [the training code](https://nn.labml.ai/graph/gat/experiment.html) for training
+Here is [the training code](https://nn.labml.ai/graphs/gat/experiment.html) for training
 a two-layer GAT on Cora dataset.
 [![View Run](https://img.shields.io/badge/labml-experiment-brightgreen)](https://app.labml.ai/run/d6c636cadf3511eba2f1e707f612f95d)
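Both readmes end by pointing at training code for a two-layer GAT on the Cora dataset. As a rough sketch of how two attention layers compose for Cora's node classification, reusing the MiniGATLayer sketch from earlier on this page (the hidden size is an arbitrary assumption; 1433 input features and 7 classes are Cora's standard dimensions):

import torch.nn as nn

class TwoLayerGAT(nn.Module):
    # Sketch only: single-head layers and no dropout, unlike typical GAT setups.
    def __init__(self, in_features: int = 1433, hidden: int = 64, n_classes: int = 7):
        super().__init__()
        self.layer1 = MiniGATLayer(in_features, hidden)   # MiniGATLayer: sketch above
        self.layer2 = MiniGATLayer(hidden, n_classes)
        self.activation = nn.ELU()

    def forward(self, x, adj):
        # Returns per-node class logits; train with cross-entropy on labelled nodes.
        return self.layer2(self.activation(self.layer1(x, adj)), adj)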
labml_nn/graphs/gatv2/readme.md
-# [Graph Attention Networks v2 (GATv2)](https://nn.labml.ai/graph/gatv2/index.html)
+# [Graph Attention Networks v2 (GATv2)](https://nn.labml.ai/graphs/gatv2/index.html)
 This is a [PyTorch](https://pytorch.org) implementation of the GATv2 opeartor from the paper
 [How Attentive are Graph Attention Networks?](https://arxiv.org/abs/2105.14491).
@@ -13,7 +13,7 @@ since the linear layers in the standard GAT are applied right after each other,
 of attended nodes is unconditioned on the query node.
 In contrast, in GATv2, every node can attend to any other node.
-Here is [the training code](https://nn.labml.ai/graph/gatv2/experiment.html) for training
+Here is [the training code](https://nn.labml.ai/graphs/gatv2/experiment.html) for training
 a two-layer GAT on Cora dataset.
 [![View Run](https://img.shields.io/badge/labml-experiment-brightgreen)](https://app.labml.ai/run/8e27ad82ed2611ebabb691fb2028a868)