**Experiments and comparison with `LightGBM`**: [TabularDL vs LightGBM](https://github.com/jrzaurin/tabulardl-benchmark)

**Slack**: if you want to contribute or just want to chat with us, join [Slack](https://join.slack.com/t/pytorch-widedeep/shared_invite/zt-soss7stf-iXpVuLeKZz8lGTnxxtHtTw)
### Introduction
...
5.``FT-Transformer``: or Feature Tokenizer transformer. This is a relatively small
variation of the ``TabTransformer``. The variation itself was first
introduced in the ``SAINT`` paper, but the name "``FT-Transformer``" was first
used in
[Revisiting Deep Learning Models for Tabular Data](https://arxiv.org/abs/2106.11959).
When using the ``FT-Transformer`` each continuous feature is "embedded"
(i.e. goes through a 1-layer MLP, with or without activation function) and
then passed through the attention blocks along with the categorical features.
This is available in ``pytorch-widedeep``'s ``TabTransformer`` by setting the
parameter ``embed_continuous = True``.
6.``SAINT``: Details on SAINT can be found in:
...
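To make the continuous-feature "embedding" of the ``FT-Transformer`` item above concrete: each scalar feature gets its own learned 1-layer linear map into the embedding dimension, optionally followed by an activation. The sketch below uses plain `numpy` with illustrative names and shapes; it is not the ``pytorch-widedeep`` API.

```python
# Illustrative sketch of a "feature tokenizer" for continuous features:
# each scalar x_i is mapped to x_i * W_i + b_i, where W_i and b_i are
# learned per-feature vectors of size embed_dim. All names here are
# hypothetical, chosen for the example only.
import numpy as np

rng = np.random.default_rng(0)

n_cont, embed_dim = 3, 8                  # 3 continuous features, 8-dim embeddings
W = rng.normal(size=(n_cont, embed_dim))  # one weight vector per feature
b = rng.normal(size=(n_cont, embed_dim))  # one bias vector per feature

def embed_continuous(x, activation=None):
    """x: (batch, n_cont) -> (batch, n_cont, embed_dim)."""
    out = x[:, :, None] * W[None, :, :] + b[None, :, :]
    return activation(out) if activation is not None else out

x = rng.normal(size=(4, n_cont))          # a batch of 4 rows
tokens = embed_continuous(x, activation=np.tanh)
print(tokens.shape)  # (4, 3, 8)
```

The resulting `(batch, n_cont, embed_dim)` tensor is what can then be concatenated with the categorical embeddings and fed through the attention blocks.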
pip install -e .
```
**Important note for Mac users**: at the time of writing the latest `torch`
release is `1.9`. Some past [issues](https://stackoverflow.com/questions/64772335/pytorch-w-parallelnative-cpp206)