<p align="center">
  <img width="450" src="docs/figures/widedeep_logo.png">
</p>

[![Build Status](https://travis-ci.org/jrzaurin/pytorch-widedeep.svg?branch=master)](https://travis-ci.org/jrzaurin/pytorch-widedeep)
[![Documentation Status](https://readthedocs.org/projects/pytorch-widedeep/badge/?version=latest)](https://pytorch-widedeep.readthedocs.io/en/latest/?badge=latest)

# pytorch-widedeep

A flexible package to combine tabular data with text and images using wide and
deep models.

**Documentation:** [https://pytorch-widedeep.readthedocs.io](https://pytorch-widedeep.readthedocs.io/en/latest/index.html)

### Introduction

`pytorch-widedeep` is based on Google's Wide and Deep Algorithm. Details of
the original algorithm can be found
[here](https://www.tensorflow.org/tutorials/wide_and_deep), and the nice
research paper can be found [here](https://arxiv.org/abs/1606.07792).

In general terms, `pytorch-widedeep` is a package to use deep learning with
tabular data. In particular, it is intended to facilitate the combination of text
and images with corresponding tabular data using wide and deep models. With
that in mind, there are two architectures that can be implemented with just a
few lines of code.

### Architectures

**Architecture 1**:

<p align="center">
  <img width="600" src="docs/figures/architecture_1.png">
</p>

Architecture 1 combines the `Wide` one-hot encoded features with the outputs
from the `DeepDense`, `DeepText` and `DeepImage` components, all connected to
a final output neuron or neurons, depending on whether we are performing
binary classification, regression, or multi-class classification. The
components within the faded-pink rectangles are concatenated.

In math terms, and following the notation in the
[paper](https://arxiv.org/abs/1606.07792), Architecture 1 can be formulated
as:

<p align="center">
  <img width="500" src="docs/figures/architecture_1_math.png">
</p>
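
In case the image above does not render, here is the same formula written out
in plain LaTeX (a sketch following the paper's notation):

```latex
pred = \sigma\left( W_{wide}^{T}[x, \phi(x)] + W_{deepdense}^{T} a_{deepdense}
     + W_{deeptext}^{T} a_{deeptext} + W_{deepimage}^{T} a_{deepimage} + b \right)
```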

Where *'W'* are the weight matrices applied to the wide model and to the final
activations of the deep models, *'a'* are these final activations, and
&phi;(x) are the cross-product transformations of the original features *'x'*.
In case you are wondering what *"cross-product transformations"* are, here is
a quote taken directly from the paper: *"For binary features, a cross-product
transformation (e.g., “AND(gender=female, language=en)”) is 1 if and only if
the constituent features (“gender=female” and “language=en”) are all 1, and 0
otherwise".*
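
For instance, a minimal illustration of a single cross-product feature
(hypothetical values, for illustration only):

```python
# "AND(gender=female, language=en)" is the product of the two binary features
gender_female, language_en = 1, 1
cross_product = gender_female * language_en  # 1 iff both features are 1
```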

**Architecture 2**

<p align="center">
  <img width="600" src="docs/figures/architecture_2.png">
</p>

Architecture 2 combines the `Wide` one-hot encoded features with the Deep
components of the model connected to the output neuron(s), after the different
Deep components have themselves been combined through an FC-Head (which I
refer to as `deephead`).

In math terms, and following the notation in the
[paper](https://arxiv.org/abs/1606.07792), Architecture 2 can be formulated
as:

<p align="center">
  <img width="300" src="docs/figures/architecture_2_math.png">
</p>
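
As before, a plain LaTeX sketch in case the image does not render: the final
activations of the FC-Head join the wide component, i.e. (roughly):

```latex
pred = \sigma\left( W_{wide}^{T}[x, \phi(x)] + W_{deephead}^{T} a_{deephead} + b \right)
```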

When using `pytorch-widedeep`, the assumption is that the so-called `Wide` and
`DeepDense` components in the figures are **always** present, while `DeepText`
and `DeepImage` are optional. `pytorch-widedeep` includes standard text (stack
of LSTMs) and image (pre-trained ResNets or stack of CNNs) models. However,
the user can use any custom model as long as it has an attribute called
`output_dim` with the size of the last layer of activations, so that
`WideDeep` can be constructed. See the examples folder for more information.
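
As a sketch of what such a custom component might look like (the class below
is hypothetical and not part of the package), the only requirement imposed by
`WideDeep` is the `output_dim` attribute:

```python
import torch
from torch import nn


class MyDeepText(nn.Module):
    """A hypothetical custom text component."""

    def __init__(self, vocab_size: int, embed_dim: int = 100, hidden_dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # size of the last layer of activations, required by WideDeep
        self.output_dim = hidden_dim

    def forward(self, X: torch.Tensor) -> torch.Tensor:
        _, h = self.rnn(self.embed(X.long()))
        return h[-1]  # (batch_size, output_dim)
```

Such a module could then be passed as, e.g., `WideDeep(wide=wide,
deepdense=deepdense, deeptext=MyDeepText(vocab_size=2000))`.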


### Installation

Install using pip:

```bash
pip install pytorch-widedeep
```

Or install directly from GitHub:

```bash
pip install git+https://github.com/jrzaurin/pytorch-widedeep.git
```

#### Developer Install

```bash
# Clone the repository
git clone https://github.com/jrzaurin/pytorch-widedeep
cd pytorch-widedeep

# Install in dev mode
pip install -e .
```

### Examples

There are a number of notebooks in the `examples` folder plus some additional
files. These notebooks cover most of the utilities of this package and can
also act as documentation. In case GitHub does not render the
notebooks, or renders them with some parts missing, they are saved as markdown
files in the `docs` folder.

### Quick start

Binary classification with the [adult
dataset](https://www.kaggle.com/wenruliu/adult-income-dataset)
using `Wide` and `DeepDense` and default settings.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

from pytorch_widedeep.preprocessing import WidePreprocessor, DensePreprocessor
from pytorch_widedeep.models import Wide, DeepDense, WideDeep
from pytorch_widedeep.metrics import Accuracy

# these next 4 lines are not directly related to pytorch-widedeep. I assume
# you have downloaded the dataset and placed it in a dir called data/adult/
df = pd.read_csv("data/adult/adult.csv.zip")
df["income_label"] = (df["income"].apply(lambda x: ">50K" in x)).astype(int)
df.drop("income", axis=1, inplace=True)
df_train, df_test = train_test_split(df, test_size=0.2, stratify=df.income_label)

# prepare wide, crossed, embedding and continuous columns
wide_cols = [
    "education",
    "relationship",
    "workclass",
    "occupation",
    "native-country",
    "gender",
]
cross_cols = [("education", "occupation"), ("native-country", "occupation")]
embed_cols = [
    ("education", 16),
    ("workclass", 16),
    ("occupation", 16),
    ("native-country", 32),
]
cont_cols = ["age", "hours-per-week"]
target_col = "income_label"

# target
target = df_train[target_col].values

# wide
preprocess_wide = WidePreprocessor(wide_cols=wide_cols, crossed_cols=cross_cols)
X_wide = preprocess_wide.fit_transform(df_train)
wide = Wide(wide_dim=X_wide.shape[1], pred_dim=1)

# deepdense
preprocess_deep = DensePreprocessor(embed_cols=embed_cols, continuous_cols=cont_cols)
X_deep = preprocess_deep.fit_transform(df_train)
deepdense = DeepDense(
    hidden_layers=[64, 32],
    deep_column_idx=preprocess_deep.deep_column_idx,
    embed_input=preprocess_deep.embeddings_input,
    continuous_cols=cont_cols,
)

# build, compile and fit
model = WideDeep(wide=wide, deepdense=deepdense)
model.compile(method="binary", metrics=[Accuracy])
model.fit(
    X_wide=X_wide,
    X_deep=X_deep,
    target=target,
    n_epochs=5,
    batch_size=256,
    val_split=0.1,
)

# predict
X_wide_te = preprocess_wide.transform(df_test)
X_deep_te = preprocess_deep.transform(df_test)
preds = model.predict(X_wide=X_wide_te, X_deep=X_deep_te)
```

Of course, one can do much more, such as using different initializations,
optimizers or learning rate schedulers for each component of the overall
model, adding FC-Heads to the Text and Image components, using the [Focal
Loss](https://arxiv.org/abs/1708.02002), warming up individual components
before joint training, etc. See the `examples` or the `docs` folders for a
better understanding of the content of the package and its functionalities.
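
For example, per-component optimizers, learning rate schedulers and
initializers can be passed to `compile` as dictionaries keyed by component
name. A minimal sketch, assuming the model defined in the Quick start above
(check the docs for the exact signatures):

```python
import torch

from pytorch_widedeep.initializers import KaimingNormal
from pytorch_widedeep.callbacks import EarlyStopping

# one optimizer/scheduler per component; the keys must match the
# components passed to WideDeep
wide_opt = torch.optim.Adam(model.wide.parameters(), lr=0.01)
deep_opt = torch.optim.Adam(model.deepdense.parameters(), lr=0.001)
wide_sch = torch.optim.lr_scheduler.StepLR(wide_opt, step_size=3)
deep_sch = torch.optim.lr_scheduler.StepLR(deep_opt, step_size=5)

model.compile(
    method="binary",
    optimizers={"wide": wide_opt, "deepdense": deep_opt},
    lr_schedulers={"wide": wide_sch, "deepdense": deep_sch},
    initializers={"wide": KaimingNormal, "deepdense": KaimingNormal},
    callbacks=[EarlyStopping(patience=5)],
    metrics=[Accuracy],
)
```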

### Testing

```bash
pytest tests
```

### Acknowledgments

This library borrows from a series of other libraries, so I think it is just
fair to mention them here in the README (specific mentions are also included
in the code).

The `Callbacks` and `Initializers` structure and code is inspired by the
[`torchsample`](https://github.com/ncullen93/torchsample) library, which is
itself partially inspired by [`Keras`](https://keras.io/).

The `TextProcessor` class in this library uses
[`fastai`](https://docs.fast.ai/text.transform.html#BaseTokenizer.tokenizer)'s
`Tokenizer` and `Vocab`. The code at `utils.fastai_transforms` is a minor
adaptation of their code so that it functions within this library. In my
experience their `Tokenizer` is the best in class.

The `ImageProcessor` class in this library uses code from the fantastic [Deep
Learning for Computer
Vision](https://www.pyimagesearch.com/deep-learning-computer-vision-python-book/)
(DL4CV) book by Adrian Rosebrock.