Commit 7be73d04 authored by: J jrzaurin

Adjusted docs and README. Bump to version 1

Parent: 0070c739
@@ -4,6 +4,7 @@ python:
 - "3.6"
 - "3.7"
 - "3.8"
+- "3.9"
 matrix:
   fast_finish: true
   include:
...
@@ -9,7 +9,7 @@
 [![Maintenance](https://img.shields.io/badge/Maintained%3F-yes-green.svg)](https://github.com/jrzaurin/pytorch-widedeep/graphs/commit-activity)
 [![contributions welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat)](https://github.com/jrzaurin/pytorch-widedeep/issues)
 [![codecov](https://codecov.io/gh/jrzaurin/pytorch-widedeep/branch/master/graph/badge.svg)](https://codecov.io/gh/jrzaurin/pytorch-widedeep)
-[![Python 3.6 3.7 3.8](https://img.shields.io/badge/python-3.6%20%7C%203.7%20%7C%203.8-blue.svg)](https://www.python.org/)
+[![Python 3.6 3.7 3.8 3.9](https://img.shields.io/badge/python-3.6%20%7C%203.7%20%7C%203.8%20%7C%203.9-blue.svg)](https://www.python.org/)
 # pytorch-widedeep
@@ -24,8 +24,7 @@ using wide and deep models.
 ### Introduction
-`pytorch-widedeep` is based on Google's Wide and Deep Algorithm, [Wide & Deep
-Learning for Recommender Systems](https://arxiv.org/abs/1606.07792).
+``pytorch-widedeep`` is based on Google's [Wide and Deep Algorithm](https://arxiv.org/abs/1606.07792).
 In general terms, `pytorch-widedeep` is a package to use deep learning with
 tabular data. In particular, it is intended to facilitate the combination of text
@@ -86,7 +85,7 @@ It is important to emphasize that **each individual component, `wide`,
 isolation. For example, one could use only `wide`, which is simply a linear
 model. In fact, one of the most interesting functionalities
 in ``pytorch-widedeep`` is the ``deeptabular`` component. Currently,
-``pytorch-widedeep`` offers 3 models for that component:
+``pytorch-widedeep`` offers 4 models for that component:
 1. ``TabMlp``: this is almost identical to the [tabular
 model](https://docs.fast.ai/tutorial.tabular.html) in the fantastic
@@ -144,20 +143,20 @@ cd pytorch-widedeep
 pip install -e .
 ```
-**Important note for Mac users**: at the time of writing (Feb-2021) the latest
-`torch` release is `1.7.1`. This release has some
-[issues](https://stackoverflow.com/questions/64772335/pytorch-w-parallelnative-cpp206)
-when running on Mac and the data-loaders will not run in parallel. In
-addition, since `python 3.8`, [the `multiprocessing` library start method
-changed from `'fork'` to
-`'spawn'`](https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods).
-This also affects the data-loaders (for any `torch` version) and they will not
-run in parallel. Therefore, for Mac users I recommend using `python 3.6` or
-`3.7` and `torch <= 1.6` (with the corresponding, consistent version of
-`torchvision`, e.g. `0.7.0` for `torch 1.6`). I do not want to force this
-versioning in the `setup.py` file since I expect that all these issues are
-fixed in the future. Therefore, after installing `pytorch-widedeep` via pip or
-directly from github, downgrade `torch` and `torchvision` manually:
+**Important note for Mac users**: at the time of writing (June-2021) the
+latest `torch` release is `1.9`. Some past
+[issues](https://stackoverflow.com/questions/64772335/pytorch-w-parallelnative-cpp206)
+when running on Mac, present in previous versions, persist in this release and
+the data-loaders will not run in parallel. In addition, since `python 3.8`,
+[the `multiprocessing` library start method changed from `'fork'` to
+`'spawn'`](https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods).
+This also affects the data-loaders (for any `torch` version) and they will
+not run in parallel. Therefore, for Mac users I recommend using `python 3.6`
+or `3.7` and `torch <= 1.6` (with the corresponding, consistent version of
+`torchvision`, e.g. `0.7.0` for `torch 1.6`). I do not want to force this
+versioning in the `setup.py` file since I expect that all these issues will be
+fixed in the future. Therefore, after installing `pytorch-widedeep` via pip
+or directly from GitHub, downgrade `torch` and `torchvision` manually:
 ```bash
 pip install pytorch-widedeep
...
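The start-method change mentioned in the note above can be checked directly: since Python 3.8 the default `multiprocessing` start method on macOS is `'spawn'` rather than `'fork'`, which is what prevents the data-loader workers from running in parallel. A minimal, platform-agnostic sketch:

```python
import multiprocessing as mp
import sys

# 'spawn' is available on every platform; 'fork' only on Unix.
available = mp.get_all_start_methods()

# Query the interpreter's default start method. Since Python 3.8 this
# is 'spawn' on macOS (it used to be 'fork'), which is the behaviour
# that affects PyTorch DataLoader workers described in the README note.
default = mp.get_start_method(allow_none=False)

print(f"Python {sys.version_info.major}.{sys.version_info.minor} "
      f"on {sys.platform}: default start method = {default!r}, "
      f"available = {available}")
```

On macOS with Python >= 3.8 this prints `default start method = 'spawn'`; on Linux it still prints `'fork'`.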
-0.4.8
\ No newline at end of file
+1.0.0
\ No newline at end of file
@@ -13,3 +13,4 @@ them to address different problems
 * `Regression with Images and Text <https://github.com/jrzaurin/pytorch-widedeep/blob/master/examples/05_Regression_with_Images_and_Text.ipynb>`__
 * `FineTune routines <https://github.com/jrzaurin/pytorch-widedeep/blob/master/examples/06_FineTune_and_WarmUp_Model_Components.ipynb>`__
 * `Custom Components <https://github.com/jrzaurin/pytorch-widedeep/blob/master/examples/07_Custom_Components.ipynb>`__
+* `Save and Load Model and Artifacts <https://github.com/jrzaurin/pytorch-widedeep/blob/master/examples/08_save_and_load_model_and_artifacts.ipynb>`__
@@ -90,7 +90,7 @@ deeptabular, deeptext and deepimage, can be used independently** and in
 isolation. For example, one could use only ``wide``, which is simply a
 linear model. In fact, one of the most interesting offerings of
 ``pytorch-widedeep`` is the ``deeptabular`` component. Currently,
-``pytorch-widedeep`` offers 3 models for that component:
+``pytorch-widedeep`` offers 4 models for that component:
 1. ``TabMlp``: this is almost identical to the `tabular
 model <https://docs.fast.ai/tutorial.tabular.html>`_ in the fantastic
@@ -101,12 +101,14 @@ features, and then passed through an MLP.
 2. ``TabResnet``: This is similar to the previous model but the embeddings are
 passed through a series of ResNet blocks built with dense layers.
-3. ``TabTransformer``: Details on the TabTransformer can be found in:
-`TabTransformer: Tabular Data Modeling Using Contextual
-Embeddings <https://arxiv.org/pdf/2012.06678.pdf>`_.
+3. ``TabNet``: Details on TabNet can be found in: `TabNet: Attentive
+Interpretable Tabular Learning <https://arxiv.org/abs/1908.07442>`_.
+4. ``TabTransformer``: Details on the TabTransformer can be found in:
+`TabTransformer: Tabular Data Modeling Using Contextual Embeddings
+<https://arxiv.org/pdf/2012.06678.pdf>`_.
-For details on these 3 models and their options please see the examples in the
+For details on these 4 models and their options please see the examples in the
 Examples folder and the documentation.
 Finally, while I recommend using the ``wide`` and ``deeptabular`` models in
...
@@ -4,7 +4,7 @@
 [![Maintenance](https://img.shields.io/badge/Maintained%3F-yes-green.svg)](https://github.com/jrzaurin/pytorch-widedeep/graphs/commit-activity)
 [![contributions welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat)](https://github.com/jrzaurin/pytorch-widedeep/issues)
 [![codecov](https://codecov.io/gh/jrzaurin/pytorch-widedeep/branch/master/graph/badge.svg)](https://codecov.io/gh/jrzaurin/pytorch-widedeep)
-[![Python 3.6 3.7 3.8](https://img.shields.io/badge/python-3.6%20%7C%203.7%20%7C%203.8-blue.svg)](https://www.python.org/)
+[![Python 3.6 3.7 3.8 3.9](https://img.shields.io/badge/python-3.6%20%7C%203.7%20%7C%203.8%20%7C%203.9-blue.svg)](https://www.python.org/)
 # pytorch-widedeep
@@ -19,8 +19,7 @@ using wide and deep models.
 ### Introduction
-`pytorch-widedeep` is based on Google's Wide and Deep Algorithm, [Wide & Deep
-Learning for Recommender Systems](https://arxiv.org/abs/1606.07792).
+``pytorch-widedeep`` is based on Google's [Wide and Deep Algorithm](https://arxiv.org/abs/1606.07792).
 In general terms, `pytorch-widedeep` is a package to use deep learning with
 tabular data. In particular, it is intended to facilitate the combination of text
@@ -56,20 +55,20 @@ cd pytorch-widedeep
 pip install -e .
 ```
-**Important note for Mac users**: at the time of writing (Dec-2020) the latest
-`torch` release is `1.7`. This release has some
-[issues](https://stackoverflow.com/questions/64772335/pytorch-w-parallelnative-cpp206)
-when running on Mac and the data-loaders will not run in parallel. In
-addition, since `python 3.8`, [the `multiprocessing` library start method
-changed from `'fork'` to
-`'spawn'`](https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods).
-This also affects the data-loaders (for any `torch` version) and they will not
-run in parallel. Therefore, for Mac users I recommend using `python 3.6` or
-`3.7` and `torch <= 1.6` (with the corresponding, consistent version of
-`torchvision`, e.g. `0.7.0` for `torch 1.6`). I do not want to force this
-versioning in the `setup.py` file since I expect that all these issues are
-fixed in the future. Therefore, after installing `pytorch-widedeep` via pip or
-directly from github, downgrade `torch` and `torchvision` manually:
+**Important note for Mac users**: at the time of writing (June-2021) the
+latest `torch` release is `1.9`. Some past
+[issues](https://stackoverflow.com/questions/64772335/pytorch-w-parallelnative-cpp206)
+when running on Mac, present in previous versions, persist in this release and
+the data-loaders will not run in parallel. In addition, since `python 3.8`,
+[the `multiprocessing` library start method changed from `'fork'` to
+`'spawn'`](https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods).
+This also affects the data-loaders (for any `torch` version) and they will
+not run in parallel. Therefore, for Mac users I recommend using `python 3.6`
+or `3.7` and `torch <= 1.6` (with the corresponding, consistent version of
+`torchvision`, e.g. `0.7.0` for `torch 1.6`). I do not want to force this
+versioning in the `setup.py` file since I expect that all these issues will be
+fixed in the future. Therefore, after installing `pytorch-widedeep` via pip
+or directly from GitHub, downgrade `torch` and `torchvision` manually:
 ```bash
 pip install pytorch-widedeep
...
@@ -36,7 +36,7 @@ class TabPreprocessor(BasePreprocessor):
 the possibility of normalising the input continuous features via a
 ``BatchNorm`` or a ``LayerNorm`` layer. See
 :class:`pytorch_widedeep.models`
-auto_embed_dim: bool
+auto_embed_dim: bool, default = True
 Boolean indicating whether the embedding dimensions will be
 automatically defined via fastai's rule of thumb:
 :math:`min(600, int(1.6 \times n_{cat}^{0.56}))`
...
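The fastai rule of thumb quoted in the docstring above is easy to reproduce in plain Python. A minimal sketch (`embed_dim` is an illustrative helper name, not pytorch-widedeep's actual API):

```python
def embed_dim(n_cat: int) -> int:
    """fastai's rule of thumb for an embedding dimension, given the
    cardinality n_cat of a categorical column:
        min(600, int(1.6 * n_cat ** 0.56))
    The dimension grows sublinearly with cardinality and is capped at 600.
    """
    return min(600, int(1.6 * n_cat ** 0.56))

# Low-cardinality columns get tiny embeddings; the cap only bites
# for very high-cardinality columns (e.g. user or item ids).
for n in (2, 100, 1_000_000):
    print(f"n_cat={n:>9} -> embedding dim {embed_dim(n)}")
```

This prints dimensions 2, 21, and 600 respectively, showing both the sublinear growth and the cap at 600.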
-__version__ = "0.4.8"
+__version__ = "1.0.0"