Commit 8ee70da1 authored by Varuna Jayasiri

feedback link

Parent 3ede1c34
@@ -22,6 +22,7 @@ and
 [relative multi-headed attention](https://lab-ml.com/labml_nn/transformers/relative_mha.html).
 * [kNN-LM: Generalization through Memorization](https://lab-ml.com/labml_nn/transformers/knn)
+* [Feedback Transformer](https://lab-ml.com/labml_nn/transformers/feedback)
 #### ✨ [Recurrent Highway Networks](https://lab-ml.com/labml_nn/recurrent_highway_networks)
......
"""
---
title: HyperNetworks
title: HyperNetworks - HyperLSTM
summary: A PyTorch implementation/tutorial of HyperLSTM introduced in paper HyperNetworks.
---
# HyperNetworks
# HyperNetworks - HyperLSTM
We have implemented HyperLSTM introduced in paper
[HyperNetworks](https://arxiv.org/abs/1609.09106), with annotations.
......
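For context on the HyperLSTM hunk above: the core HyperNetworks idea is that a small auxiliary network generates, or row-wise rescales, the main LSTM's weights at every time step. Below is a minimal sketch of the weight-scaling variant, assuming PyTorch; the name `HyperModulatedLinear` and all shapes are illustrative, not the repo's API.

```python
import torch
import torch.nn as nn

class HyperModulatedLinear(nn.Module):
    """A linear layer whose output rows are rescaled by a vector generated
    from a hypernetwork embedding z (the weight-scaling trick HyperLSTM
    uses to adapt the main LSTM's weights at every time step)."""

    def __init__(self, in_features: int, out_features: int, z_size: int):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features, bias=False)
        # Projects the hyper embedding z to one scale factor per output row
        self.z_proj = nn.Linear(z_size, out_features, bias=False)

    def forward(self, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # d(z) ⊙ (W x): row-wise rescaling of the main weight matrix,
        # so the effective weights change with z at each step
        return self.z_proj(z) * self.linear(x)

# In a full HyperLSTM, a small auxiliary LSTM consumes [x_t; h_{t-1}] and
# emits z at each step; each of the main cell's four gates would use a
# layer like this for both its input and recurrent projections.
layer = HyperModulatedLinear(in_features=32, out_features=64, z_size=4)
x, z = torch.randn(8, 32), torch.randn(8, 4)
out = layer(x, z)  # shape: [8, 64]
```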
@@ -21,7 +21,12 @@ and derivatives and enhancements of it.
 ## [kNN-LM](knn)
 This is an implementation of the paper
-[Generalization through Memorization: Nearest Neighbor Language Models](https://arxiv.org/abs/1911.00172).
+[Generalization through Memorization: Nearest Neighbor Language Models](https://arxiv.org/abs/1911.00172).
+
+## [Feedback Transformer](feedback)
+
+This is an implementation of the paper
+[Accessing Higher-level Representations in Sequential Transformers with Feedback Memory](https://arxiv.org/abs/2002.09402).
 """
 from .configs import TransformerConfigs
......
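Since the hunk above only links the kNN-LM paper, here is a minimal sketch of its core step: interpolating the language model's next-token distribution with a distribution over nearest neighbors retrieved from a datastore of (context representation, next token) pairs. The function name and the `lambda_`/`k` defaults are illustrative assumptions, not the repo's API.

```python
import torch
import torch.nn.functional as F

def knn_lm_interpolate(lm_logits: torch.Tensor, query: torch.Tensor,
                       keys: torch.Tensor, values: torch.Tensor,
                       lambda_: float = 0.25, k: int = 8) -> torch.Tensor:
    """Blend the LM's next-token distribution with a kNN distribution
    built from a datastore of (context representation, next token) pairs.
    lm_logits: [vocab]; query: [d]; keys: [n, d]; values: [n] (long)."""
    vocab = lm_logits.size(-1)
    # Retrieve the k nearest datastore entries by L2 distance
    dists = torch.cdist(query[None, :], keys)[0]      # [n]
    knn_dist, knn_idx = dists.topk(k, largest=False)
    # Softmax over negative distances gives the neighbor weights
    w = F.softmax(-knn_dist, dim=-1)                  # [k]
    # Accumulate neighbor weights onto their recorded next tokens
    p_knn = torch.zeros(vocab).scatter_add_(0, values[knn_idx], w)
    # Final distribution: lambda * p_kNN + (1 - lambda) * p_LM
    return lambda_ * p_knn + (1 - lambda_) * F.softmax(lm_logits, dim=-1)
```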
@@ -23,6 +23,7 @@ and
 [relative multi-headed attention](https://lab-ml.com/labml_nn/transformers/relative_mha.html).
 * [kNN-LM: Generalization through Memorization](https://lab-ml.com/labml_nn/transformers/knn)
+* [Feedback Transformer](https://lab-ml.com/labml_nn/transformers/feedback)
 #### ✨ [Recurrent Highway Networks](https://lab-ml.com/labml_nn/recurrent_highway_networks)
......
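The Feedback Transformer linked in these hunks replaces per-layer attention over the past with attention over a single shared memory per time step, formed as a learned softmax-weighted sum of all layers' outputs at that step. A minimal sketch of that memory computation, assuming PyTorch; the class name is hypothetical.

```python
import torch
import torch.nn as nn

class FeedbackMemory(nn.Module):
    """Collapse the outputs of all layers at one time step into a single
    shared memory vector via learned softmax weights; layers at later
    steps attend over these vectors instead of their own layer's history."""

    def __init__(self, n_layers: int):
        super().__init__()
        # One learnable mixing weight per layer
        self.layer_weights = nn.Parameter(torch.zeros(n_layers))

    def forward(self, layer_outputs: torch.Tensor) -> torch.Tensor:
        # layer_outputs: [n_layers, batch, d_model] for a single step t
        w = torch.softmax(self.layer_weights, dim=0)
        # Weighted sum across layers -> memory m_t: [batch, d_model]
        return torch.einsum('l,lbd->bd', w, layer_outputs)
```

Because m_t depends on every layer's output at step t, tokens must be produced one step at a time; that sequential decoding is the trade-off the paper accepts for giving lower layers access to higher-level state.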