<p align="center">
  <img width="450" src="docs/figures/widedeep_logo.png">
</p>

[![Build Status](https://travis-ci.org/jrzaurin/pytorch-widedeep.svg?branch=master)](https://travis-ci.org/jrzaurin/pytorch-widedeep)
[![Documentation Status](https://readthedocs.org/projects/pytorch-widedeep/badge/?version=latest)](https://pytorch-widedeep.readthedocs.io/en/latest/?badge=latest)
[![PyPI version](https://badge.fury.io/py/pytorch-widedeep.svg)](https://badge.fury.io/py/pytorch-widedeep)
[![Maintenance](https://img.shields.io/badge/Maintained%3F-yes-green.svg)](https://github.com/jrzaurin/pytorch-widedeep/graphs/commit-activity)
[![contributions welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat)](https://github.com/jrzaurin/pytorch-widedeep/issues)

Platform | Version Support
---------|:---------------
OSX      | [![Python 3.6 3.7](https://img.shields.io/badge/python-3.6%20%7C%203.7-blue.svg)](https://www.python.org/)
Linux    | [![Python 3.6 3.7 3.8](https://img.shields.io/badge/python-3.6%20%7C%203.7%20%7C%203.8-blue.svg)](https://www.python.org/)

# pytorch-widedeep

A flexible package to combine tabular data with text and images using wide and
deep models.

**Documentation:** [https://pytorch-widedeep.readthedocs.io](https://pytorch-widedeep.readthedocs.io/en/latest/index.html)

### Introduction

`pytorch-widedeep` is based on Google's Wide and Deep Algorithm. Details of
the original algorithm can be found
[here](https://www.tensorflow.org/tutorials/wide_and_deep), and the nice
research paper can be found [here](https://arxiv.org/abs/1606.07792).

In general terms, `pytorch-widedeep` is a package to use deep learning with
tabular data. In particular, it is intended to facilitate the combination of text
and images with corresponding tabular data using wide and deep models. With
that in mind, there are two architectures that can be implemented with just a
few lines of code.

### Architectures

**Architecture 1**:

<p align="center">
  <img width="600" src="docs/figures/architecture_1.png">
</p>

Architecture 1 combines the `Wide` linear model with the outputs from the
`DeepDense`, `DeepText` and `DeepImage` components connected to a final output
neuron or neurons, depending on whether we are performing a binary
classification or regression, or a multi-class classification. The components
within the faded-pink rectangles are concatenated.

In math terms, and following the notation in the
[paper](https://arxiv.org/abs/1606.07792), Architecture 1 can be formulated
as:

<p align="center">
  <img width="500" src="docs/figures/architecture_1_math.png">
</p>


Where *'W'* are the weight matrices applied to the wide model and to the final
activations of the deep models, *'a'* are these final activations, and
&phi;(x) are the cross product transformations of the original features *'x'*.
In case you are wondering what *"cross-product transformations"* are, here is
a quote taken directly from the paper: *"For binary features, a cross-product
transformation (e.g., “AND(gender=female, language=en)”) is 1 if and only if
the constituent features (“gender=female” and “language=en”) are all 1, and 0
otherwise".*
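
As a toy illustration (this is not the library's internal implementation;
within `pytorch-widedeep` the crossed columns are handled for you by
`WidePreprocessor`), such a feature could be computed with pandas as follows:

```python
import pandas as pd

toy_df = pd.DataFrame(
    {"gender": ["female", "male", "female"], "language": ["en", "en", "es"]}
)
# AND(gender=female, language=en) is 1 only when both constituent
# conditions hold, and 0 otherwise
toy_df["gender_AND_language"] = (
    (toy_df["gender"] == "female") & (toy_df["language"] == "en")
).astype(int)
```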


**Architecture 2**

<p align="center">
  <img width="600" src="docs/figures/architecture_2.png">
</p>

Architecture 2 combines the `Wide` linear model with the Deep components of
the model connected to the output neuron(s), after the different Deep
components have been themselves combined through an FC-Head (which I refer to
as `deephead`).

In math terms, and following the notation in the
[paper](https://arxiv.org/abs/1606.07792), Architecture 2 can be formulated
as:

<p align="center">
  <img width="300" src="docs/figures/architecture_2_math.png">
</p>
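
For illustration, a custom `deephead` might be passed when building the model
as in the sketch below. The head and its dimensions are made up, `wide`,
`deepdense` and `deeptext` are assumed to have been built as in the Quick
start example further down, and the exact contract of the `deephead` argument
(e.g. whether it should include the final output layer) is described in the
docs:

```python
import torch.nn as nn

from pytorch_widedeep.models import WideDeep

# hypothetical FC-Head: its input size must match the concatenated
# output dimensions of the deep components it combines (128 is a
# made-up figure)
deephead = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))

model = WideDeep(
    wide=wide, deepdense=deepdense, deeptext=deeptext, deephead=deephead
)
```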

When using `pytorch-widedeep`, the assumption is that the so-called `Wide` and
`DeepDense` components in the figures are **always** present, while `DeepText`
and `DeepImage` are optional. `pytorch-widedeep` includes standard text (stack
of LSTMs) and image (pre-trained ResNets or stack of CNNs) models. However,
the user can use any custom model as long as it has an attribute called
`output_dim` with the size of the last layer of activations, so that
`WideDeep` can be constructed. See the examples folder or the docs for more
information.
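
For instance, a minimal sketch of a custom text component (the module and its
dimensions are made up for illustration; the only requirement mentioned above
is the `output_dim` attribute) could look like this:

```python
import torch
import torch.nn as nn


class MyDeepText(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=64, hidden_dim=32):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # required by WideDeep: the size of the last layer of activations
        self.output_dim = hidden_dim

    def forward(self, X):
        embedded = self.embedding(X.long())
        _, h = self.rnn(embedded)
        return h.squeeze(0)
```

which could then be passed as, e.g., `WideDeep(wide=wide,
deepdense=deepdense, deeptext=MyDeepText())`.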


### Installation

Install using pip:

```bash
pip install pytorch-widedeep
```

Or install directly from GitHub:

```bash
pip install git+https://github.com/jrzaurin/pytorch-widedeep.git
```

#### Developer Install

```bash
# Clone the repository
git clone https://github.com/jrzaurin/pytorch-widedeep
cd pytorch-widedeep

# Install in dev mode
pip install -e .
```

### Quick start

Binary classification with the [adult
dataset](https://www.kaggle.com/wenruliu/adult-income-dataset)
using `Wide` and `DeepDense` and default settings.

```python
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split

from pytorch_widedeep.preprocessing import WidePreprocessor, DensePreprocessor
from pytorch_widedeep.models import Wide, DeepDense, WideDeep
from pytorch_widedeep.metrics import Accuracy

# these next 4 lines are not directly related to pytorch-widedeep. I assume
# you have downloaded the dataset and placed it in a dir called data/adult/
df = pd.read_csv("data/adult/adult.csv.zip")
df["income_label"] = (df["income"].apply(lambda x: ">50K" in x)).astype(int)
df.drop("income", axis=1, inplace=True)
df_train, df_test = train_test_split(df, test_size=0.2, stratify=df.income_label)

# prepare wide, crossed, embedding and continuous columns
wide_cols = [
    "education",
    "relationship",
    "workclass",
    "occupation",
    "native-country",
    "gender",
]
cross_cols = [("education", "occupation"), ("native-country", "occupation")]
embed_cols = [
    ("education", 16),
    ("workclass", 16),
    ("occupation", 16),
    ("native-country", 32),
]
cont_cols = ["age", "hours-per-week"]
target_col = "income_label"

# target
target = df_train[target_col].values

# wide
preprocess_wide = WidePreprocessor(wide_cols=wide_cols, crossed_cols=cross_cols)
X_wide = preprocess_wide.fit_transform(df_train)
wide = Wide(wide_dim=np.unique(X_wide).shape[0], pred_dim=1)

# deepdense
preprocess_deep = DensePreprocessor(embed_cols=embed_cols, continuous_cols=cont_cols)
X_deep = preprocess_deep.fit_transform(df_train)
deepdense = DeepDense(
    hidden_layers=[64, 32],
    deep_column_idx=preprocess_deep.deep_column_idx,
    embed_input=preprocess_deep.embeddings_input,
    continuous_cols=cont_cols,
)

# build, compile and fit
model = WideDeep(wide=wide, deepdense=deepdense)
model.compile(method="binary", metrics=[Accuracy])
model.fit(
    X_wide=X_wide,
    X_deep=X_deep,
    target=target,
    n_epochs=5,
    batch_size=256,
    val_split=0.1,
)

# predict
X_wide_te = preprocess_wide.transform(df_test)
X_deep_te = preprocess_deep.transform(df_test)
preds = model.predict(X_wide=X_wide_te, X_deep=X_deep_te)
```

Of course, one can do much more, such as using different initializations,
optimizers or learning rate schedulers for each component of the overall
model, adding FC-Heads to the Text and Image components, using the [Focal
Loss](https://arxiv.org/abs/1708.02002), warming up individual components
before joint training, etc. See the `examples` or the `docs` folders for a
better understanding of the content of the package and its functionalities.
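
As a brief, hedged illustration of one of these options, per-component
optimizers can be passed to `compile` as a dict keyed by component name (a
sketch; see the docs for the exact interface):

```python
import torch

# assuming `model` was built as in the Quick start above, with a
# `wide` and a `deepdense` component
wide_opt = torch.optim.Adam(model.wide.parameters(), lr=0.01)
deep_opt = torch.optim.Adam(model.deepdense.parameters(), lr=0.001)

model.compile(
    method="binary",
    optimizers={"wide": wide_opt, "deepdense": deep_opt},
    metrics=[Accuracy],
)
```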

### Testing

```bash
pytest tests
```

### Acknowledgments

This library borrows from a series of other libraries, so I think it is only
fair to mention them here in the README (specific mentions are also included
in the code).

The `Callbacks` and `Initializers` structure and code are inspired by the
[`torchsample`](https://github.com/ncullen93/torchsample) library, which is
itself partially inspired by [`Keras`](https://keras.io/).

The `TextProcessor` class in this library uses
[`fastai`](https://docs.fast.ai/text.transform.html#BaseTokenizer.tokenizer)'s
`Tokenizer` and `Vocab`. The code at `utils.fastai_transforms` is a minor
adaptation of their code so that it functions within this library. In my
experience their `Tokenizer` is the best in class.

The `ImageProcessor` class in this library uses code from the fantastic [Deep
Learning for Computer
Vision](https://www.pyimagesearch.com/deep-learning-computer-vision-python-book/)
(DL4CV) book by Adrian Rosebrock.