<p align="center">
  <img width="300" src="docs/figures/widedeep_logo.png">
</p>

[![Build Status](https://travis-ci.org/jrzaurin/pytorch-widedeep.svg?branch=master)](https://travis-ci.org/jrzaurin/pytorch-widedeep)
[![Documentation Status](https://readthedocs.org/projects/pytorch-widedeep/badge/?version=latest)](https://pytorch-widedeep.readthedocs.io/en/latest/?badge=latest)
[![PyPI version](https://badge.fury.io/py/pytorch-widedeep.svg)](https://badge.fury.io/py/pytorch-widedeep)
[![Maintenance](https://img.shields.io/badge/Maintained%3F-yes-green.svg)](https://github.com/jrzaurin/pytorch-widedeep/graphs/commit-activity)
[![contributions welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat)](https://github.com/jrzaurin/pytorch-widedeep/issues)
[![codecov](https://codecov.io/gh/jrzaurin/pytorch-widedeep/branch/master/graph/badge.svg)](https://codecov.io/gh/jrzaurin/pytorch-widedeep)
[![Python 3.6 3.7 3.8](https://img.shields.io/badge/python-3.6%20%7C%203.7%20%7C%203.8-blue.svg)](https://www.python.org/)

# pytorch-widedeep

A flexible package to combine tabular data with text and images using wide and
deep models.

**Documentation:** [https://pytorch-widedeep.readthedocs.io](https://pytorch-widedeep.readthedocs.io/en/latest/index.html)

**Companion posts:** [infinitoml](https://jrzaurin.github.io/infinitoml/)

### Introduction

`pytorch-widedeep` is based on Google's Wide and Deep Algorithm. Details of
the original algorithm can be found
[here](https://www.tensorflow.org/tutorials/wide_and_deep), and the nice
research paper can be found [here](https://arxiv.org/abs/1606.07792).

In general terms, `pytorch-widedeep` is a package to use deep learning with
tabular data. In particular, it is intended to facilitate the combination of text
and images with corresponding tabular data using wide and deep models. With
that in mind, there are two architectures that can be implemented with just a
few lines of code.

### Architectures

**Architecture 1**:

<p align="center">
  <img width="750" src="docs/figures/architecture_1.png">
</p>

Architecture 1 combines the `Wide` linear model with the outputs from the
`DeepDense` or `DeepDenseResnet`, `DeepText` and `DeepImage` components,
connected to a final output neuron or neurons, depending on whether we are
performing a binary classification or regression, or a multi-class
classification. The components within the faded-pink rectangles are
concatenated.

In math terms, and following the notation in the
[paper](https://arxiv.org/abs/1606.07792), Architecture 1 can be formulated
as:

<p align="center">
  <img width="500" src="docs/figures/architecture_1_math.png">
</p>

Where *'W'* are the weight matrices applied to the wide model and to the final
activations of the deep models, *'a'* are these final activations, and
&phi;(x) are the cross product transformations of the original features *'x'*.
In case you are wondering what *"cross product transformations"* are, here is
a quote taken directly from the paper: *"For binary features, a cross-product
transformation (e.g., “AND(gender=female, language=en)”) is 1 if and only if
the constituent features (“gender=female” and “language=en”) are all 1, and 0
otherwise".*
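
As a toy illustration (the feature names here are hypothetical and only meant
to mirror the quote above), a cross-product transformation over two binary
features can be computed as:

```python
import pandas as pd

# two binary features and their cross-product transformation
df = pd.DataFrame({"gender=female": [1, 0, 1], "language=en": [1, 1, 0]})
# the crossed feature is 1 if and only if all constituent features are 1
df["AND(gender=female, language=en)"] = df["gender=female"] * df["language=en"]
```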

**Architecture 2**

<p align="center">
  <img width="750" src="docs/figures/architecture_2.png">
</p>

Architecture 2 combines the `Wide` linear model with the Deep components of
the model connected to the output neuron(s), after the different Deep
components have themselves been combined through an FC-Head (which I refer to
as `deephead`).
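
As a minimal sketch of passing a custom FC-Head (this assumes the `deephead`
argument of `WideDeep`; `wide` and `deepdense` are built as in the Quick start
section below, and the input size of `96` is illustrative):

```python
import torch.nn as nn

# a custom FC-Head; its input size must match the concatenated output
# dimension of the deep components it combines
deephead = nn.Sequential(nn.Linear(96, 48), nn.ReLU(), nn.Linear(48, 24))
# model = WideDeep(wide=wide, deepdense=deepdense, deephead=deephead)
```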

In math terms, and following the notation in the
[paper](https://arxiv.org/abs/1606.07792), Architecture 2 can be formulated
as:

<p align="center">
  <img width="300" src="docs/figures/architecture_2_math.png">
</p>

Note that each individual component, `wide`, `deepdense` (either `DeepDense`
or `DeepDenseResnet`), `deeptext` and `deepimage`, can be used independently
and in isolation. For example, one could use only `wide`, which is simply a
linear model.

On the other hand, while I recommend using the `Wide` and `DeepDense` (or
`DeepDenseResnet`) classes in `pytorch-widedeep` to build the `wide` and
`deepdense` components, it is very likely that users will want to use their
own models for the `deeptext` and `deepimage` components. That is perfectly
possible as long as the custom models have an attribute called `output_dim`
with the size of the last layer of activations, so that `WideDeep` can be
constructed.
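
For example, here is a minimal sketch of a custom `deeptext` component. The
architecture itself is made up for illustration; the only requirement imposed
by the library is the `output_dim` attribute:

```python
import torch.nn as nn


class MyDeepText(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.rnn = nn.LSTM(embed_dim, 64, batch_first=True)
        # WideDeep reads this attribute to wire the component into the model
        self.output_dim = 64

    def forward(self, X):
        # return the last hidden state as the final layer of activations
        _, (h, _) = self.rnn(self.embed(X.long()))
        return h[-1]
```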

`pytorch-widedeep` includes standard text (stack of LSTMs) and image
(pre-trained ResNets or stack of CNNs) models.
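
As a quick sketch of how these built-in components can be instantiated (the
parameter values below are illustrative and the exact signatures may differ
between versions; check the docs):

```python
from pytorch_widedeep.models import DeepText, DeepImage

# a stack of LSTMs on top of token embeddings, and a pre-trained ResNet
deeptext = DeepText(vocab_size=4000, embed_dim=32)
deepimage = DeepImage(pretrained=True)
```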

See the examples folder or the docs for more information.

### Installation

Install using pip:

```bash
pip install pytorch-widedeep
```

Or install directly from GitHub:

```bash
pip install git+https://github.com/jrzaurin/pytorch-widedeep.git
```

#### Developer Install

```bash
# Clone the repository
git clone https://github.com/jrzaurin/pytorch-widedeep
cd pytorch-widedeep

# Install in dev mode
pip install -e .
```

**Important note for Mac users**: at the time of writing (Dec-2020) the latest
`torch` release is `1.7`. This release has some
[issues](https://stackoverflow.com/questions/64772335/pytorch-w-parallelnative-cpp206)
when running on Mac and the data-loaders will not run in parallel. In
addition, since `python 3.8`, [the `multiprocessing` library start method
changed from `'fork'` to
`'spawn'`](https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods).
This also affects the data-loaders (for any `torch` version) and they will not
run in parallel. Therefore, for Mac users I recommend using `python 3.6` or
`3.7` and `torch <= 1.6` (with the corresponding, consistent version of
`torchvision`, e.g. `0.7.0` for `torch 1.6`). I do not want to force this
versioning in the `setup.py` file since I expect that all these issues will be
fixed in the future. Therefore, after installing `pytorch-widedeep` via pip or
directly from GitHub, downgrade `torch` and `torchvision` manually:

```bash
pip install pytorch-widedeep
pip install torch==1.6.0 torchvision==0.7.0
```

None of these issues affect Linux users.
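
If you want to check which start method your interpreter uses, a minimal
check (`'spawn'` is the default on Mac since `python 3.8`, `'fork'` on Linux):

```python
import multiprocessing as mp

print(mp.get_start_method())
```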

### Quick start

Binary classification with the [adult
dataset](https://www.kaggle.com/wenruliu/adult-income-dataset)
using `Wide` and `DeepDense` and default settings.

```python
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split

from pytorch_widedeep.preprocessing import WidePreprocessor, DensePreprocessor
from pytorch_widedeep.models import Wide, DeepDense, WideDeep
from pytorch_widedeep.metrics import Accuracy

# these next 4 lines are not directly related to pytorch-widedeep. I assume
# you have downloaded the dataset and placed it in a dir called data/adult/
df = pd.read_csv("data/adult/adult.csv.zip")
df["income_label"] = (df["income"].apply(lambda x: ">50K" in x)).astype(int)
df.drop("income", axis=1, inplace=True)
df_train, df_test = train_test_split(df, test_size=0.2, stratify=df.income_label)

# prepare wide, crossed, embedding and continuous columns
wide_cols = [
    "education",
    "relationship",
    "workclass",
    "occupation",
    "native-country",
    "gender",
]
cross_cols = [("education", "occupation"), ("native-country", "occupation")]
embed_cols = [
    ("education", 16),
    ("workclass", 16),
    ("occupation", 16),
    ("native-country", 32),
]
cont_cols = ["age", "hours-per-week"]
target_col = "income_label"

# target
target = df_train[target_col].values

# wide
preprocess_wide = WidePreprocessor(wide_cols=wide_cols, crossed_cols=cross_cols)
X_wide = preprocess_wide.fit_transform(df_train)
wide = Wide(wide_dim=np.unique(X_wide).shape[0], pred_dim=1)

# deepdense
preprocess_deep = DensePreprocessor(embed_cols=embed_cols, continuous_cols=cont_cols)
X_deep = preprocess_deep.fit_transform(df_train)
deepdense = DeepDense(
    hidden_layers=[64, 32],
    column_idx=preprocess_deep.column_idx,
    embed_input=preprocess_deep.embeddings_input,
    continuous_cols=cont_cols,
)
# # To use DeepDenseResnet as the deepdense component simply:
# from pytorch_widedeep.models import DeepDenseResnet
# deepdense = DeepDenseResnet(
#     blocks=[64, 32],
#     column_idx=preprocess_deep.column_idx,
#     embed_input=preprocess_deep.embeddings_input,
#     continuous_cols=cont_cols,
# )

# build, compile and fit
model = WideDeep(wide=wide, deepdense=deepdense)
model.compile(method="binary", metrics=[Accuracy])
model.fit(
    X_wide=X_wide,
    X_deep=X_deep,
    target=target,
    n_epochs=5,
    batch_size=256,
    val_split=0.1,
)

# predict
X_wide_te = preprocess_wide.transform(df_test)
X_deep_te = preprocess_deep.transform(df_test)
preds = model.predict(X_wide=X_wide_te, X_deep=X_deep_te)

# # save and load (requires `import torch`)
# torch.save(model, "model_weights/model.t")
# model = torch.load("model_weights/model.t")

# # or via state dictionaries
# torch.save(model.state_dict(), PATH)
# model = WideDeep(*args)
# model.load_state_dict(torch.load(PATH))
```

Of course, one can do much more, such as using different initializations,
optimizers or learning rate schedulers for each component of the overall
model, adding FC-Heads to the Text and Image components, using the [Focal
Loss](https://arxiv.org/abs/1708.02002), warming up individual components
before joint training, etc. See the `examples` or the `docs` folders for a
better understanding of the content of the package and its functionalities.
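
As a sketch of what per-component optimizers can look like, assuming the
dictionary-based `compile` API described in the docs (exact argument names may
vary between versions), and reusing `model` and `Accuracy` from the Quick
start above:

```python
import torch

# one optimizer per component, keyed by component name
wide_opt = torch.optim.Adam(model.wide.parameters())
deep_opt = torch.optim.Adam(model.deepdense.parameters())
model.compile(
    method="binary",
    optimizers={"wide": wide_opt, "deepdense": deep_opt},
    metrics=[Accuracy],
)
```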

### Testing

```bash
pytest tests
```

### Acknowledgments

This library borrows from a series of other libraries, so I think it is only
fair to mention them here in the README (specific mentions are also included
in the code).

The `Callbacks` and `Initializers` structure and code are inspired by the
[`torchsample`](https://github.com/ncullen93/torchsample) library, which is
itself partially inspired by [`Keras`](https://keras.io/).

The `TextProcessor` class in this library uses
[`fastai`](https://docs.fast.ai/text.transform.html#BaseTokenizer.tokenizer)'s
`Tokenizer` and `Vocab`. The code at `utils.fastai_transforms` is a minor
adaptation of their code so that it functions within this library. In my
experience, their `Tokenizer` is best in class.

The `ImageProcessor` class in this library uses code from the fantastic [Deep
Learning for Computer
Vision](https://www.pyimagesearch.com/deep-learning-computer-vision-python-book/)
(DL4CV) book by Adrian Rosebrock.