The ``Tabformer`` family, i.e. Transformers for Tabular data:
4. **TabTransformer**: Details on the TabTransformer can be found in
[TabTransformer: Tabular Data Modeling Using Contextual Embeddings](https://arxiv.org/pdf/2012.06678.pdf).
...
...
on the FastFormer can be found in
the Perceiver can be found in
[Perceiver: General Perception with Iterative Attention](https://arxiv.org/abs/2103.03206)
And probabilistic DL models for tabular data based on
[Weight Uncertainty in Neural Networks](https://arxiv.org/abs/1505.05424):
9. **BayesianWide**: Probabilistic adaptation of the `Wide` model.
10. **BayesianTabMlp**: Probabilistic adaptation of the `TabMlp` model.
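The Bayesian models follow the "Bayes by Backprop" approach from the paper linked above: each weight is represented by a learned distribution rather than a point estimate, and every forward pass samples concrete weights from it. Below is a minimal, library-free sketch of that sampling step (this is illustrative code, not `pytorch-widedeep` internals; all names here are made up for the example):

```python
# Illustrative sketch of the "Bayes by Backprop" weight-sampling idea:
# a weight is parameterised by (mu, rho) instead of a single value.
import math
import random

def softplus(rho):
    # sigma must be positive, so it is parameterised as softplus(rho)
    return math.log1p(math.exp(rho))

def sample_weight(mu, rho, rng):
    # Reparameterisation trick: w = mu + sigma * eps, with eps ~ N(0, 1)
    eps = rng.gauss(0.0, 1.0)
    return mu + softplus(rho) * eps

rng = random.Random(0)
mu, rho = 0.5, -3.0  # learned parameters of this weight's posterior
samples = [sample_weight(mu, rho, rng) for _ in range(10_000)]
mean = sum(samples) / len(samples)
print(round(mean, 2))  # sample mean is close to mu
```

In the actual models the sampled weights feed an otherwise standard layer, and the training loss adds a KL term that pulls the weight posteriors towards a prior, as described in the paper.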
Note that while there are scientific publications for the TabTransformer,
SAINT and FT-Transformer, the TabFastFormer and TabPerceiver are our own
adaptations of those algorithms for tabular data.
For details on these models (and all the other models in the library for the
different data modes) and their corresponding options please see the examples
in the Examples folder and the documentation.
### Installation
...
...
cd pytorch-widedeep
pip install -e .
```
**Important note for Mac users**: Since `python
3.8`, [the `multiprocessing` library start method changed from `'fork'` to `'spawn'`](https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods), which affects the data loaders.
For the time being, `pytorch-widedeep` sets `num_workers` to 0 when running
on a Mac with Python 3.8+.
Note that this issue does not affect Linux users.
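The start-method difference behind this note can be inspected directly with the standard library. The snippet below (illustrative only, not `pytorch-widedeep` internals) shows how the default method is queried and how a `'spawn'` context can be requested explicitly:

```python
# Inspect the multiprocessing start method: 'fork' is the Linux default,
# while macOS (Python 3.8+) and Windows default to 'spawn'.
import multiprocessing as mp
import sys

method = mp.get_start_method()
print(sys.platform, method)

# A library can pin the behaviour across platforms by asking for an
# explicit context instead of relying on the platform default:
ctx = mp.get_context("spawn")
print(ctx.get_start_method())  # -> 'spawn'
```

With `'spawn'`, each worker re-imports the main module instead of inheriting the parent's memory, which is why data-loader workers behave differently on Mac than on Linux.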
### Quick start
Binary classification with the [adult
...
...
using `Wide` and `DeepDense` and default settings.
Building a wide (linear) and deep model with ``pytorch-widedeep``:
```python
import pandas as pd
import numpy as np
import torch
...
...
from pytorch_widedeep import Trainer