From c287c870eb3e22f1a05753c821daaa715b1ed917 Mon Sep 17 00:00:00 2001
From: jrzaurin
Date: Wed, 20 Oct 2021 10:14:14 +0200
Subject: [PATCH] added ray to the docs dependencies which hopefully will fix docs issues

---
 docs/callbacks.rst | 4 +--
 docs/examples.rst | 1 +
 docs/index.rst | 76 ++++++++++++++++++++++++++-----------
 docs/installation.rst | 3 +-
 docs/requirements.txt | 3 +-
 5 files changed, 54 insertions(+), 33 deletions(-)

diff --git a/docs/callbacks.rst b/docs/callbacks.rst
index 2ba4753..e8e65c4 100644
--- a/docs/callbacks.rst
+++ b/docs/callbacks.rst
@@ -1,8 +1,8 @@
Callbacks
=========

-Here are the 4 callbacks available in ``pytorch-widedepp``: ``History``,
-``LRHistory``, ``ModelCheckpoint`` and ``EarlyStopping``.
+Here are the 5 callbacks available in ``pytorch-widedeep``: ``History``,
+``LRHistory``, ``ModelCheckpoint``, ``EarlyStopping`` and ``RayTuneReporter``.

.. note:: ``History`` runs by default, so it should not be passed to the ``Trainer``
diff --git a/docs/examples.rst b/docs/examples.rst
index 3af3f85..ce5a802 100644
--- a/docs/examples.rst
+++ b/docs/examples.rst
@@ -17,3 +17,4 @@ them to address different problems
* `Using Custom DataLoaders and Torchmetrics `__
* `The Transformer Family `__
* `Extracting Embeddings `__
+* `HyperParameter Tuning With RayTune `__
diff --git a/docs/index.rst b/docs/index.rst
index 67f70f6..5153aaf 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -33,11 +33,11 @@ Introduction
`_.

In general terms, ``pytorch-widedeep`` is a package to use deep learning with
-tabular data. In particular, is intended to facilitate the combination of text
-and images with corresponding tabular data using wide and deep models. With
-that in mind there are a number of architectures that can be implemented with
-just a few lines of code. The main components of those architectures are shown
-in the Figure below:
+tabular and multimodal data. In particular, it is intended to facilitate the
+combination of text and images with corresponding tabular data using wide and
+deep models. With that in mind there are a number of architectures that can
+be implemented with just a few lines of code. The main components of those
+architectures are shown in the Figure below:

.. image:: figures/widedeep_arch.png
   :width: 700px
@@ -88,29 +88,52 @@
into:

It is important to emphasize that **each individual component, wide,
-deeptabular, deeptext and deepimage, can be used independently** and in
-isolation. For example, one could use only ``wide``, which is in simply a
-linear model. In fact, one of the most interesting offerings of
-``pytorch-widedeep`` is the ``deeptabular`` component. Currently,
-``pytorch-widedeep`` offers 4 models for that component:
-
-1. ``TabMlp``: this is almost identical to the `tabular
-model `_ in the fantastic
-`fastai `_ library, and consists simply in embeddings
-representing the categorical features, concatenated with the continuous
-features, and passed then through a MLP.
-
-2. ``TabRenset``: This is similar to the previous model but the embeddings are
+deeptabular, deeptext and deepimage, can be used independently and in
+isolation**. For example, one could use only ``wide``, which is simply a
+linear model. In fact, one of the most interesting functionalities in
+``pytorch-widedeep`` is the use of the ``deeptabular`` component on its
+own, i.e. what one might normally refer to as Deep Learning for Tabular Data.
+Currently, ``pytorch-widedeep`` offers the following different models for
+that component:
+
+
+1. **TabMlp**: a simple MLP that receives embeddings representing the
+categorical features, concatenated with the continuous features.
+
+2. **TabResnet**: similar to the previous model but the embeddings are
passed through a series of ResNet blocks built with dense layers.

-3. ``Tabnet``: Details on TabNet can be found in: `TabNet: Attentive
-Interpretable Tabular Learning `_.
+3. **TabNet**: details on TabNet can be found in `TabNet: Attentive
+Interpretable Tabular Learning `_.

-4. ``TabTransformer``: Details on the TabTransformer can be found in:
+And the ``Tabformer`` family, i.e. Transformers for Tabular data:
+
+4. **TabTransformer**: details on the TabTransformer can be found in
`TabTransformer: Tabular Data Modeling Using Contextual Embeddings
`_.

-For details on these 4 models and their options please see the examples in the
+5. **SAINT**: details on SAINT can be found in `SAINT: Improved Neural
+Networks for Tabular Data via Row Attention and Contrastive Pre-Training
+`_.
+
+6. **FT-Transformer**: details on the FT-Transformer can be found in
+`Revisiting Deep Learning Models for Tabular Data
+`_.
+
+7. **TabFastFormer**: adaptation of the FastFormer for tabular data. Details
+on the FastFormer can be found in `FastFormers: Highly Efficient Transformer
+Models for Natural Language Understanding
+`_.
+
+8. **TabPerceiver**: adaptation of the Perceiver for tabular data. Details on
+the Perceiver can be found in `Perceiver: General Perception with Iterative
+Attention `_.
+
+Note that while there are scientific publications for the TabTransformer,
+SAINT and FT-Transformer, the TabFastFormer and TabPerceiver are our own
+adaptations of those algorithms for tabular data.
+
+For details on these models and their options please see the examples in the
Examples folder and the documentation.

Finally, while I recommend using the ``wide`` and ``deeptabular`` models in
@@ -120,13 +143,8 @@ possible as long as the the custom models have an attribute called
``output_dim`` with the size of the last layer of activations, so that
``WideDeep`` can be constructed. Again, examples on how to use custom
components can be found in the Examples folder. Just in case
-``pytorch-widedeep`` includes standard text (stack of LSTMs) and image
-(pre-trained ResNets or stack of CNNs) models.
-
-References
----------
-[1] Heng-Tze Cheng, et al. 2016. Wide & Deep Learning for Recommender Systems.
-`arXiv:1606.07792 `_.
+``pytorch-widedeep`` includes standard text (stack of LSTMs or GRUs) and
+image (pre-trained ResNets or stack of CNNs) models.

Indices and tables
==================
diff --git a/docs/installation.rst b/docs/installation.rst
index 11500c0..a4e2ad8 100644
--- a/docs/installation.rst
+++ b/docs/installation.rst
@@ -41,4 +41,5 @@ Dependencies
* torchvision
* einops
* wrapt
-* torchmetrics
\ No newline at end of file
+* torchmetrics
+* ray[tune]
diff --git a/docs/requirements.txt b/docs/requirements.txt
index bab1972..4fde482 100644
--- a/docs/requirements.txt
+++ b/docs/requirements.txt
@@ -17,4 +17,5 @@ torch
torchvision
einops
wrapt
-torchmetrics
\ No newline at end of file
+torchmetrics
+ray[tune]
\ No newline at end of file
--
GitLab
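
For quick orientation (this sketch is not part of the patch above), the snippet below shows how the pieces referenced in the touched docs — the ``wide`` and ``deeptabular`` (here ``TabMlp``) components, the ``WideDeep`` wrapper, the ``Trainer`` and its callbacks — are typically wired together. It is a minimal example assuming the public API around the time of this commit; argument and attribute names such as ``embed_input`` / ``embeddings_input`` or ``mlp_hidden_dims`` may differ between ``pytorch-widedeep`` versions and should be treated as assumptions rather than a definitive reference.

.. code-block:: python

    import numpy as np
    import pandas as pd

    from pytorch_widedeep import Trainer
    from pytorch_widedeep.preprocessing import WidePreprocessor, TabPreprocessor
    from pytorch_widedeep.models import Wide, TabMlp, WideDeep
    from pytorch_widedeep.callbacks import EarlyStopping

    # toy data: two categorical columns, one continuous column, binary target
    df = pd.DataFrame(
        {
            "color": ["r", "b", "g", "b"] * 25,
            "size": ["s", "m", "l", "m"] * 25,
            "age": np.random.rand(100) * 50,
            "target": np.random.randint(0, 2, 100),
        }
    )

    # the wide component is a linear model over label-encoded categorical features
    wide_preprocessor = WidePreprocessor(wide_cols=["color", "size"])
    X_wide = wide_preprocessor.fit_transform(df)

    # the deeptabular component (TabMlp) receives categorical embeddings
    # concatenated with the continuous features
    tab_preprocessor = TabPreprocessor(
        embed_cols=[("color", 8), ("size", 8)], continuous_cols=["age"]
    )
    X_tab = tab_preprocessor.fit_transform(df)

    wide = Wide(wide_dim=np.unique(X_wide).shape[0], pred_dim=1)
    deeptabular = TabMlp(
        column_idx=tab_preprocessor.column_idx,
        embed_input=tab_preprocessor.embeddings_input,  # attribute name may vary by version
        continuous_cols=["age"],
        mlp_hidden_dims=[64, 32],
    )
    model = WideDeep(wide=wide, deeptabular=deeptabular)

    trainer = Trainer(
        model,
        objective="binary",
        callbacks=[EarlyStopping(patience=3)],  # History runs by default
    )
    trainer.fit(
        X_wide=X_wide,
        X_tab=X_tab,
        target=df["target"].values,
        val_split=0.2,  # gives EarlyStopping a validation loss to monitor
        n_epochs=5,
        batch_size=32,
    )

The ``RayTuneReporter`` callback documented in callbacks.rst is meant to be appended to the same ``callbacks`` list, but only when the ``fit`` call runs inside a Ray Tune trainable, which is why it is omitted from this standalone sketch.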