<p align="center">
  <img width="450" src="docs/figures/widedeep_logo.png">
</p>

[![Build Status](https://travis-ci.org/jrzaurin/pytorch-widedeep.svg?branch=master)](https://travis-ci.org/jrzaurin/pytorch-widedeep)
[![Documentation Status](https://readthedocs.org/projects/pytorch-widedeep/badge/?version=latest)](https://pytorch-widedeep.readthedocs.io/en/latest/?badge=latest)
[![PyPI version](https://badge.fury.io/py/pytorch-widedeep.svg)](https://badge.fury.io/py/pytorch-widedeep)
[![Maintenance](https://img.shields.io/badge/Maintained%3F-yes-green.svg)](https://github.com/jrzaurin/pytorch-widedeep/graphs/commit-activity)
[![contributions welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat)](https://github.com/jrzaurin/pytorch-widedeep/issues)

Platform | Version Support
---------|:---------------
OSX      | [![Python 3.6 3.7](https://img.shields.io/badge/python-3.6%20%7C%203.7-blue.svg)](https://www.python.org/)
Linux    | [![Python 3.6 3.7 3.8](https://img.shields.io/badge/python-3.6%20%7C%203.7%20%7C%203.8-blue.svg)](https://www.python.org/)

# pytorch-widedeep

A flexible package to combine tabular data with text and images using wide and
deep models.

**Documentation:** [https://pytorch-widedeep.readthedocs.io](https://pytorch-widedeep.readthedocs.io/en/latest/index.html)

### Introduction

`pytorch-widedeep` is based on Google's Wide and Deep Algorithm. Details of
the original algorithm can be found
[here](https://www.tensorflow.org/tutorials/wide_and_deep), and the nice
research paper can be found [here](https://arxiv.org/abs/1606.07792).

In general terms, `pytorch-widedeep` is a package to use deep learning with
tabular data. In particular, it is intended to facilitate the combination of
text and images with corresponding tabular data using wide and deep models.
With that in mind, there are two architectures that can be implemented with
just a few lines of code.

### Architectures

**Architecture 1**:

<p align="center">
  <img width="600" src="docs/figures/architecture_1.png">
</p>

Architecture 1 combines the `Wide` (linear) model with the outputs from the
`DeepDense` or `DeepDenseResnet`, `DeepText` and `DeepImage` components,
connected to a final output neuron or neurons, depending on whether we are
performing a binary classification or regression, or a multi-class
classification. The components within the faded-pink rectangles are
concatenated.

In math terms, and following the notation in the
[paper](https://arxiv.org/abs/1606.07792), Architecture 1 can be formulated
as:

<p align="center">
  <img width="500" src="docs/figures/architecture_1_math.png">
</p>

Where *'W'* are the weight matrices applied to the wide model and to the final
activations of the deep models, *'a'* are these final activations, and
&phi;(x) are the cross-product transformations of the original features *'x'*.
In case you are wondering what *"cross-product transformations"* are, here is
a quote taken directly from the paper: *"For binary features, a cross-product
transformation (e.g., “AND(gender=female, language=en)”) is 1 if and only if
the constituent features (“gender=female” and “language=en”) are all 1, and 0
otherwise".*
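
For illustration only (this is not part of the package's API), here is a
minimal pandas sketch of such a cross-product feature, using the column and
value names from the quote above:

```python
import pandas as pd

df = pd.DataFrame(
    {"gender": ["female", "female", "male"], "language": ["en", "es", "en"]}
)
# AND(gender=female, language=en): 1 only when both constituent features hold
df["gender_AND_language"] = (
    (df["gender"] == "female") & (df["language"] == "en")
).astype(int)
print(df["gender_AND_language"].tolist())  # [1, 0, 0]
```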

**Architecture 2**

<p align="center">
  <img width="600" src="docs/figures/architecture_2.png">
</p>

Architecture 2 combines the `Wide` (linear) model with the Deep components of
the model, connected to the output neuron(s) after the different Deep
components have been themselves combined through an FC-Head (which I refer to
as `deephead`).

In math terms, and following the notation in the
[paper](https://arxiv.org/abs/1606.07792), Architecture 2 can be formulated
as:

<p align="center">
  <img width="300" src="docs/figures/architecture_2_math.png">
</p>
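
As a minimal sketch (assuming `wide`, `deepdense` and `deeptext` have already
been built, as in the Quick start below), Architecture 2 amounts to passing a
custom `deephead` module to `WideDeep`; the layer sizes here are made up for
illustration:

```python
import torch.nn as nn

from pytorch_widedeep.models import WideDeep

# hypothetical FC-Head: its input size must match the concatenated output
# dims of the deep components (e.g. 64 from deepdense + 64 from deeptext)
deephead = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
model = WideDeep(
    wide=wide, deepdense=deepdense, deeptext=deeptext, deephead=deephead
)
```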

When using `pytorch-widedeep`, the assumption is that the so-called `Wide` and
deep dense (either `DeepDense` or `DeepDenseResnet`; see the documentation
and examples folder for more details) components in the figures are
**always** present, while `DeepText` and `DeepImage` are optional.
`pytorch-widedeep` includes standard text (stack of LSTMs) and image
(pre-trained ResNets or stack of CNNs) models. However, the user can use any
custom model as long as it has an attribute called `output_dim` with the size
of the last layer of activations, so that `WideDeep` can be constructed. See
the examples folder or the docs for more information.
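
For example, a custom text component could be as simple as the sketch below;
the class name and sizes are hypothetical, and the only contract stated above
is the `output_dim` attribute:

```python
import torch.nn as nn


class MyCustomText(nn.Module):
    # toy text model, made up for illustration
    def __init__(self, vocab_size: int, embed_dim: int = 32, hidden_dim: int = 16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # WideDeep reads this attribute to size the final layer(s)
        self.output_dim = hidden_dim

    def forward(self, X):
        out, _ = self.rnn(self.embed(X))
        return out[:, -1]
```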

### Installation

Install using pip:

```bash
pip install pytorch-widedeep
```

Or install directly from GitHub:

```bash
pip install git+https://github.com/jrzaurin/pytorch-widedeep.git
```

#### Developer Install

```bash
# Clone the repository
git clone https://github.com/jrzaurin/pytorch-widedeep
cd pytorch-widedeep

# Install in dev mode
pip install -e .
```

### Quick start

Binary classification with the [adult
dataset](https://www.kaggle.com/wenruliu/adult-income-dataset)
using `Wide` and `DeepDense` and default settings.

```python
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split

from pytorch_widedeep.preprocessing import WidePreprocessor, DensePreprocessor
from pytorch_widedeep.models import Wide, DeepDense, WideDeep
from pytorch_widedeep.metrics import Accuracy

# these next 4 lines are not directly related to pytorch-widedeep. I assume
# you have downloaded the dataset and placed it in a dir called data/adult/
df = pd.read_csv("data/adult/adult.csv.zip")
df["income_label"] = (df["income"].apply(lambda x: ">50K" in x)).astype(int)
df.drop("income", axis=1, inplace=True)
df_train, df_test = train_test_split(df, test_size=0.2, stratify=df.income_label)

# prepare wide, crossed, embedding and continuous columns
wide_cols = [
    "education",
    "relationship",
    "workclass",
    "occupation",
    "native-country",
    "gender",
]
cross_cols = [("education", "occupation"), ("native-country", "occupation")]
embed_cols = [
    ("education", 16),
    ("workclass", 16),
    ("occupation", 16),
    ("native-country", 32),
]
cont_cols = ["age", "hours-per-week"]
target_col = "income_label"

# target
target = df_train[target_col].values

# wide
preprocess_wide = WidePreprocessor(wide_cols=wide_cols, crossed_cols=cross_cols)
X_wide = preprocess_wide.fit_transform(df_train)
wide = Wide(wide_dim=np.unique(X_wide).shape[0], pred_dim=1)

# deepdense
preprocess_deep = DensePreprocessor(embed_cols=embed_cols, continuous_cols=cont_cols)
X_deep = preprocess_deep.fit_transform(df_train)
deepdense = DeepDense(
    hidden_layers=[64, 32],
    deep_column_idx=preprocess_deep.deep_column_idx,
    embed_input=preprocess_deep.embeddings_input,
    continuous_cols=cont_cols,
)

# build, compile and fit
model = WideDeep(wide=wide, deepdense=deepdense)
model.compile(method="binary", metrics=[Accuracy])
model.fit(
    X_wide=X_wide,
    X_deep=X_deep,
    target=target,
    n_epochs=5,
    batch_size=256,
    val_split=0.1,
)

# predict
X_wide_te = preprocess_wide.transform(df_test)
X_deep_te = preprocess_deep.transform(df_test)
preds = model.predict(X_wide=X_wide_te, X_deep=X_deep_te)
```

Of course, one can do much more, such as using different initializations,
optimizers or learning rate schedulers for each component of the overall
model, adding FC-Heads to the Text and Image components, using the [Focal
Loss](https://arxiv.org/abs/1708.02002), or warming up individual components
before joint training. See the `examples` or the `docs` folders for a better
understanding of the content of the package and its functionalities.
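
For instance, a sketch of using one optimizer per component might look as
follows; the `optimizers` dict keys and the `compile` signature shown here
are assumptions based on my reading of the docs, so check the `examples`
folder before relying on them:

```python
import torch

# one optimizer per model component, keyed by component name (assumed API)
optimizers = {
    "wide": torch.optim.Adam(model.wide.parameters(), lr=0.01),
    "deepdense": torch.optim.Adam(model.deepdense.parameters(), lr=0.001),
}
model.compile(method="binary", metrics=[Accuracy], optimizers=optimizers)
```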

### Testing

```bash
pytest tests
```

### Acknowledgments

This library borrows from a series of other libraries, so I think it is just
fair to mention them here in the README (specific mentions are also included
in the code).

The `Callbacks` and `Initializers` structure and code are inspired by the
[`torchsample`](https://github.com/ncullen93/torchsample) library, which is
itself partially inspired by [`Keras`](https://keras.io/).

The `TextProcessor` class in this library uses
[`fastai`](https://docs.fast.ai/text.transform.html#BaseTokenizer.tokenizer)'s
`Tokenizer` and `Vocab`. The code at `utils.fastai_transforms` is a minor
adaptation of their code so that it functions within this library. In my
experience their `Tokenizer` is the best in class.

The `ImageProcessor` class in this library uses code from the fantastic [Deep
Learning for Computer
Vision](https://www.pyimagesearch.com/deep-learning-computer-vision-python-book/)
(DL4CV) book by Adrian Rosebrock.