Commit 7be73d04 authored by J jrzaurin

Adjusted docs and README. Bump to version 1

Parent 0070c739
......@@ -4,6 +4,7 @@ python:
- "3.6"
- "3.7"
- "3.8"
- "3.9"
matrix:
fast_finish: true
include:
......
......@@ -9,7 +9,7 @@
[![Maintenance](https://img.shields.io/badge/Maintained%3F-yes-green.svg)](https://github.com/jrzaurin/pytorch-widedeep/graphs/commit-activity)
[![contributions welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat)](https://github.com/jrzaurin/pytorch-widedeep/issues)
[![codecov](https://codecov.io/gh/jrzaurin/pytorch-widedeep/branch/master/graph/badge.svg)](https://codecov.io/gh/jrzaurin/pytorch-widedeep)
[![Python 3.6 3.7 3.8](https://img.shields.io/badge/python-3.6%20%7C%203.7%20%7C%203.8-blue.svg)](https://www.python.org/)
[![Python 3.6 3.7 3.8 3.9](https://img.shields.io/badge/python-3.6%20%7C%203.7%20%7C%203.8%20%7C%203.9-blue.svg)](https://www.python.org/)
# pytorch-widedeep
......@@ -24,8 +24,7 @@ using wide and deep models.
### Introduction
`pytorch-widedeep` is based on Google's Wide and Deep Algorithm, [Wide & Deep
Learning for Recommender Systems](https://arxiv.org/abs/1606.07792).
``pytorch-widedeep`` is based on Google's [Wide and Deep Algorithm](https://arxiv.org/abs/1606.07792)
In general terms, `pytorch-widedeep` is a package to use deep learning with
tabular data. In particular, it is intended to facilitate the combination of text
......@@ -86,7 +85,7 @@ It is important to emphasize that **each individual component, `wide`,
isolation. For example, one could use only `wide`, which is simply a linear
model. In fact, one of the most interesting functionalities
in ``pytorch-widedeep`` is the ``deeptabular`` component. Currently,
``pytorch-widedeep`` offers 3 models for that component:
``pytorch-widedeep`` offers 4 models for that component:
1. ``TabMlp``: this is almost identical to the [tabular
model](https://docs.fast.ai/tutorial.tabular.html) in the fantastic
......@@ -144,20 +143,20 @@ cd pytorch-widedeep
pip install -e .
```
**Important note for Mac users**: at the time of writing (Feb-2021) the latest
`torch` release is `1.7.1`. This release has some
**Important note for Mac users**: at the time of writing (June-2021) the
latest `torch` release is `1.9`. Some past
[issues](https://stackoverflow.com/questions/64772335/pytorch-w-parallelnative-cpp206)
when running on Mac and the data-loaders will not run in parallel. In
addition, since `python 3.8`, [the `multiprocessing` library start method
changed from `'fork'` to
when running on Mac, present in previous versions, persist in this release and
the data-loaders will not run in parallel. In addition, since `python 3.8`,
[the `multiprocessing` library start method changed from `'fork'` to
`'spawn'`](https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods).
This also affects the data-loaders (for any `torch` version) and they will not
run in parallel. Therefore, for Mac users I recommend using `python 3.6` or
`3.7` and `torch <= 1.6` (with the corresponding, consistent version of
This also affects the data-loaders (for any `torch` version) and they will
not run in parallel. Therefore, for Mac users I recommend using `python 3.6`
or `3.7` and `torch <= 1.6` (with the corresponding, consistent version of
`torchvision`, e.g. `0.7.0` for `torch 1.6`). I do not want to force this
versioning in the `setup.py` file since I expect that all these issues are
fixed in the future. Therefore, after installing `pytorch-widedeep` via pip or
directly from github, downgrade `torch` and `torchvision` manually:
fixed in the future. Therefore, after installing `pytorch-widedeep` via pip
or directly from github, downgrade `torch` and `torchvision` manually:
```bash
pip install pytorch-widedeep
......
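The fork-vs-spawn behaviour mentioned above can be inspected directly with the standard library. A minimal sketch (the default method shown depends on your platform and Python version; `torch` data-loader workers are governed by the active method in the same way):

```python
import multiprocessing as mp

# The default start method is platform dependent: 'fork' on Linux,
# 'spawn' on macOS since Python 3.8 (and always on Windows).
default = mp.get_start_method()
print(f"default start method: {default}")

# A fresh context lets you pick a method explicitly without mutating
# the global default.
ctx = mp.get_context("spawn")
print(f"explicit context method: {ctx.get_start_method()}")
```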
0.4.8
\ No newline at end of file
1.0.0
\ No newline at end of file
......@@ -13,3 +13,4 @@ them to address different problems
* `Regression with Images and Text <https://github.com/jrzaurin/pytorch-widedeep/blob/master/examples/05_Regression_with_Images_and_Text.ipynb>`__
* `FineTune routines <https://github.com/jrzaurin/pytorch-widedeep/blob/master/examples/06_FineTune_and_WarmUp_Model_Components.ipynb>`__
* `Custom Components <https://github.com/jrzaurin/pytorch-widedeep/blob/master/examples/07_Custom_Components.ipynb>`__
* `Save and Load Model and Artifacts <https://github.com/jrzaurin/pytorch-widedeep/blob/master/examples/08_save_and_load_model_and_artifacts.ipynb>`__
......@@ -90,7 +90,7 @@ deeptabular, deeptext and deepimage, can be used independently** and in
isolation. For example, one could use only ``wide``, which is simply a
linear model. In fact, one of the most interesting offerings of
``pytorch-widedeep`` is the ``deeptabular`` component. Currently,
``pytorch-widedeep`` offers 3 models for that component:
``pytorch-widedeep`` offers 4 models for that component:
1. ``TabMlp``: this is almost identical to the `tabular
model <https://docs.fast.ai/tutorial.tabular.html>`_ in the fantastic
......@@ -101,12 +101,14 @@ features, and passed then through a MLP.
2. ``TabResnet``: This is similar to the previous model but the embeddings are
passed through a series of ResNet blocks built with dense layers.
3. ``TabTransformer``: Details on the TabTransformer can be found in:
`TabTransformer: Tabular Data Modeling Using Contextual
Embeddings <https://arxiv.org/pdf/2012.06678.pdf>`_.
3. ``TabNet``: Details on TabNet can be found in: `TabNet: Attentive
Interpretable Tabular Learning <https://arxiv.org/abs/1908.07442>`_.
4. ``TabTransformer``: Details on the TabTransformer can be found in:
`TabTransformer: Tabular Data Modeling Using Contextual Embeddings
<https://arxiv.org/pdf/2012.06678.pdf>`_.
For details on these 3 models and their options please see the examples in the
For details on these 4 models and their options please see the examples in the
Examples folder and the documentation.
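Whichever of these components are used, the final prediction is an additive combination of their outputs passed through the task's activation. A toy, dependency-free sketch of that wide-plus-deep combination (an illustration of the idea only, not the library's API; all weights here are made up):

```python
import math

def wide(x_onehot, w):
    """Wide component: a plain linear model over one-hot (cross-product) features."""
    return sum(wi * xi for wi, xi in zip(w, x_onehot))

def deep(x_cont, w1, w2):
    """Tiny 'deep' component: one hidden ReLU layer over continuous features."""
    hidden = [max(0.0, sum(wij * xj for wij, xj in zip(row, x_cont))) for row in w1]
    return sum(w2i * hi for w2i, hi in zip(w2, hidden))

def wide_deep_predict(x_onehot, x_cont, w_wide, w1, w2):
    """Wide & Deep for binary classification: sigmoid(wide(x) + deep(x))."""
    logit = wide(x_onehot, w_wide) + deep(x_cont, w1, w2)
    return 1.0 / (1.0 + math.exp(-logit))

p = wide_deep_predict(
    x_onehot=[1, 0, 1],
    x_cont=[0.5, -1.2],
    w_wide=[0.3, -0.1, 0.2],
    w1=[[0.4, 0.1], [-0.2, 0.3]],
    w2=[0.6, -0.5],
)
print(round(p, 3))  # -> 0.634
```

The library's components follow the same additive pattern, each contributing its own logit before the final activation.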
Finally, while I recommend using the ``wide`` and ``deeptabular`` models in
......
......@@ -4,7 +4,7 @@
[![Maintenance](https://img.shields.io/badge/Maintained%3F-yes-green.svg)](https://github.com/jrzaurin/pytorch-widedeep/graphs/commit-activity)
[![contributions welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat)](https://github.com/jrzaurin/pytorch-widedeep/issues)
[![codecov](https://codecov.io/gh/jrzaurin/pytorch-widedeep/branch/master/graph/badge.svg)](https://codecov.io/gh/jrzaurin/pytorch-widedeep)
[![Python 3.6 3.7 3.8](https://img.shields.io/badge/python-3.6%20%7C%203.7%20%7C%203.8-blue.svg)](https://www.python.org/)
[![Python 3.6 3.7 3.8 3.9](https://img.shields.io/badge/python-3.6%20%7C%203.7%20%7C%203.8%20%7C%203.9-blue.svg)](https://www.python.org/)
# pytorch-widedeep
......@@ -19,8 +19,7 @@ using wide and deep models.
### Introduction
`pytorch-widedeep` is based on Google's Wide and Deep Algorithm, [Wide & Deep
Learning for Recommender Systems](https://arxiv.org/abs/1606.07792).
``pytorch-widedeep`` is based on Google's [Wide and Deep Algorithm](https://arxiv.org/abs/1606.07792)
In general terms, `pytorch-widedeep` is a package to use deep learning with
tabular data. In particular, it is intended to facilitate the combination of text
......@@ -56,20 +55,20 @@ cd pytorch-widedeep
pip install -e .
```
**Important note for Mac users**: at the time of writing (Dec-2020) the latest
`torch` release is `1.7`. This release has some
**Important note for Mac users**: at the time of writing (June-2021) the
latest `torch` release is `1.9`. Some past
[issues](https://stackoverflow.com/questions/64772335/pytorch-w-parallelnative-cpp206)
when running on Mac and the data-loaders will not run in parallel. In
addition, since `python 3.8`, [the `multiprocessing` library start method
changed from `'fork'` to
when running on Mac, present in previous versions, persist in this release and
the data-loaders will not run in parallel. In addition, since `python 3.8`,
[the `multiprocessing` library start method changed from `'fork'` to
`'spawn'`](https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods).
This also affects the data-loaders (for any `torch` version) and they will not
run in parallel. Therefore, for Mac users I recommend using `python 3.6` or
`3.7` and `torch <= 1.6` (with the corresponding, consistent version of
This also affects the data-loaders (for any `torch` version) and they will
not run in parallel. Therefore, for Mac users I recommend using `python 3.6`
or `3.7` and `torch <= 1.6` (with the corresponding, consistent version of
`torchvision`, e.g. `0.7.0` for `torch 1.6`). I do not want to force this
versioning in the `setup.py` file since I expect that all these issues are
fixed in the future. Therefore, after installing `pytorch-widedeep` via pip or
directly from github, downgrade `torch` and `torchvision` manually:
fixed in the future. Therefore, after installing `pytorch-widedeep` via pip
or directly from github, downgrade `torch` and `torchvision` manually:
```bash
pip install pytorch-widedeep
......
......@@ -36,7 +36,7 @@ class TabPreprocessor(BasePreprocessor):
the possibility of normalising the input continuous features via a
``BatchNorm`` or a ``LayerNorm`` layer. see
:class:`pytorch_widedeep.models`
auto_embed_dim: bool
auto_embed_dim: bool, default = True
Boolean indicating whether the embedding dimensions will be
automatically defined via fastai's rule of thumb:
:math:`min(600, int(1.6 \times n_{cat}^{0.56}))`
......
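The rule of thumb quoted in the docstring above can be sketched as a small helper (`embed_dim_rule` is a hypothetical name for illustration, not part of the library's API):

```python
def embed_dim_rule(n_cat: int) -> int:
    """fastai-style rule of thumb for the embedding dimension of a
    categorical column with n_cat unique values:
    min(600, int(1.6 * n_cat ** 0.56))."""
    return min(600, int(1.6 * n_cat ** 0.56))

print(embed_dim_rule(10))          # 5: small cardinality -> small embedding
print(embed_dim_rule(1000))        # 76
print(embed_dim_rule(10_000_000))  # 600: very high cardinality is capped
```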
__version__ = "0.4.8"
__version__ = "1.0.0"