Commit 895cad46 authored by V Varuna Jayasiri

link fixes

Parent ce23bc10
......@@ -67,7 +67,7 @@
<div class='section-link'>
<a href='#section-0'>#</a>
</div>
<h1><a href="https://nn.labml.ai/graph/gat/index.html">Graph Attention Networks (GAT)</a></h1>
<h1><a href="https://nn.labml.ai/graphs/gat/index.html">Graph Attention Networks (GAT)</a></h1>
<p>This is a <a href="https://pytorch.org">PyTorch</a> implementation of the paper
<a href="https://arxiv.org/abs/1710.10903">Graph Attention Networks</a>.</p>
<p>GATs work on graph data.
......@@ -79,7 +79,7 @@ GAT consists of graph attention layers stacked on top of each other.
Each graph attention layer gets node embeddings as inputs and outputs transformed embeddings.
Each node embedding pays attention to the embeddings of the other nodes it&rsquo;s connected to.
The details of graph attention layers are included alongside the implementation.</p>
<p>Here is <a href="https://nn.labml.ai/graph/gat/experiment.html">the training code</a> for training
<p>Here is <a href="https://nn.labml.ai/graphs/gat/experiment.html">the training code</a> for training
a two-layer GAT on the Cora dataset.</p>
<p><a href="https://app.labml.ai/run/d6c636cadf3511eba2f1e707f612f95d"><img alt="View Run" src="https://img.shields.io/badge/labml-experiment-brightgreen" /></a></p>
</div>
......
......@@ -67,7 +67,7 @@
<div class='section-link'>
<a href='#section-0'>#</a>
</div>
<h1><a href="https://nn.labml.ai/graph/gatv2/index.html">Graph Attention Networks v2 (GATv2)</a></h1>
<h1><a href="https://nn.labml.ai/graphs/gatv2/index.html">Graph Attention Networks v2 (GATv2)</a></h1>
<p>This is a <a href="https://pytorch.org">PyTorch</a> implementation of the GATv2 operator from the paper
<a href="https://arxiv.org/abs/2105.14491">How Attentive are Graph Attention Networks?</a>.</p>
<p>GATv2s work on graph data.
......@@ -78,7 +78,7 @@ connect the papers.</p>
since the linear layers in the standard GAT are applied right after each other, the ranking
of attended nodes is unconditioned on the query node.
In contrast, in GATv2, every node can attend to any other node.</p>
<p>Here is <a href="https://nn.labml.ai/graph/gatv2/experiment.html">the training code</a> for training
<p>Here is <a href="https://nn.labml.ai/graphs/gatv2/experiment.html">the training code</a> for training
a two-layer GAT on the Cora dataset.</p>
<p><a href="https://app.labml.ai/run/8e27ad82ed2611ebabb691fb2028a868"><img alt="View Run" src="https://img.shields.io/badge/labml-experiment-brightgreen" /></a></p>
</div>
......
# [Graph Attention Networks (GAT)](https://nn.labml.ai/graph/gat/index.html)
# [Graph Attention Networks (GAT)](https://nn.labml.ai/graphs/gat/index.html)
This is a [PyTorch](https://pytorch.org) implementation of the paper
[Graph Attention Networks](https://arxiv.org/abs/1710.10903).
......@@ -14,7 +14,7 @@ Each graph attention layer gets node embeddings as inputs and outputs transforme
Each node embedding pays attention to the embeddings of the other nodes it's connected to.
The details of graph attention layers are included alongside the implementation.
Here is [the training code](https://nn.labml.ai/graph/gat/experiment.html) for training
Here is [the training code](https://nn.labml.ai/graphs/gat/experiment.html) for training
a two-layer GAT on the Cora dataset.
[![View Run](https://img.shields.io/badge/labml-experiment-brightgreen)](https://app.labml.ai/run/d6c636cadf3511eba2f1e707f612f95d)
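
To make the layer description above concrete, here is a minimal single-head graph attention layer sketched in PyTorch. This is not the repository's implementation (follow the index link above for that); the class name and shapes are assumptions, and multi-head attention, dropout, and bias terms are left out.

```python
# Minimal single-head graph attention layer (a sketch, not the labml implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphAttentionLayer(nn.Module):
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # Shared linear transformation applied to every node embedding
        self.linear = nn.Linear(in_features, out_features, bias=False)
        # Scores a concatenated (query, neighbour) pair of transformed embeddings
        self.attn = nn.Linear(2 * out_features, 1, bias=False)
        self.activation = nn.LeakyReLU(negative_slope=0.2)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: [n_nodes, in_features]; adj: [n_nodes, n_nodes] boolean adjacency,
        # assumed to include self-loops so every node has at least one neighbour.
        g = self.linear(h)                          # [n_nodes, out_features]
        n = g.shape[0]
        g_i = g.unsqueeze(1).expand(n, n, -1)       # query node i
        g_j = g.unsqueeze(0).expand(n, n, -1)       # neighbour node j
        e = self.activation(self.attn(torch.cat([g_i, g_j], dim=-1))).squeeze(-1)
        e = e.masked_fill(~adj, float('-inf'))      # only attend to connected nodes
        a = F.softmax(e, dim=-1)                    # normalise over each node's neighbours
        return a @ g                                # weighted sum of neighbour embeddings
```

A two-layer GAT, as in the linked experiment, stacks two such layers with a non-linearity in between.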
# [Graph Attention Networks v2 (GATv2)](https://nn.labml.ai/graph/gatv2/index.html)
# [Graph Attention Networks v2 (GATv2)](https://nn.labml.ai/graphs/gatv2/index.html)
This is a [PyTorch](https://pytorch.org) implementation of the GATv2 operator from the paper
[How Attentive are Graph Attention Networks?](https://arxiv.org/abs/2105.14491).
......@@ -13,7 +13,7 @@ since the linear layers in the standard GAT are applied right after each other,
of attended nodes is unconditioned on the query node.
In contrast, in GATv2, every node can attend to any other node.
Here is [the training code](https://nn.labml.ai/graph/gatv2/experiment.html) for training
Here is [the training code](https://nn.labml.ai/graphs/gatv2/experiment.html) for training
a two-layer GAT on the Cora dataset.
[![View Run](https://img.shields.io/badge/labml-experiment-brightgreen)](https://app.labml.ai/run/8e27ad82ed2611ebabb691fb2028a868)
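
The point of the GATv2 change is easiest to see in code: the attention vector is applied after the non-linearity, so a neighbour's score depends on the query node instead of producing one static ranking. Below is a hedged sketch of only that scoring step, again with assumed names and a single head, not the repository's code.

```python
# GATv2-style attention scoring (a sketch, not the labml implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class GATv2AttentionLayer(nn.Module):
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # Separate transformations for the query node i and the neighbour node j
        self.linear_l = nn.Linear(in_features, out_features, bias=False)
        self.linear_r = nn.Linear(in_features, out_features, bias=False)
        # Unlike GAT, the attention vector is applied *after* the non-linearity
        self.attn = nn.Linear(out_features, 1, bias=False)
        self.activation = nn.LeakyReLU(negative_slope=0.2)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: [n_nodes, in_features]; adj: [n_nodes, n_nodes] boolean adjacency with self-loops
        g_l = self.linear_l(h).unsqueeze(1)          # query term,     [n, 1, out]
        g_r = self.linear_r(h).unsqueeze(0)          # neighbour term, [1, n, out]
        e = self.attn(self.activation(g_l + g_r)).squeeze(-1)   # [n, n], conditioned on query
        e = e.masked_fill(~adj, float('-inf'))       # only attend to connected nodes
        a = F.softmax(e, dim=-1)
        return a @ self.linear_r(h)                  # weighted sum of transformed neighbours
```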