<p align="center">
  <img width="450" src="docs/figures/widedeep_logo.png">
</p>

[![Build Status](https://travis-ci.org/jrzaurin/pytorch-widedeep.svg?branch=master)](https://travis-ci.org/jrzaurin/pytorch-widedeep)

# pytorch-widedeep

A flexible package to combine tabular data with text and images using wide and
deep models.

### Introduction

`pytorch-widedeep` is based on Google's Wide and Deep Algorithm. Details of
the original algorithm can be found
[here](https://www.tensorflow.org/tutorials/wide_and_deep), and the nice
research paper can be found [here](https://arxiv.org/abs/1606.07792).

In general terms, `pytorch-widedeep` is a package to use deep learning with
tabular data. In particular, it is intended to facilitate the combination of text
and images with corresponding tabular data using wide and deep models. With
that in mind there are two architectures that can be implemented with just a
few lines of code.

### Architectures

**Architecture 1**:

<p align="center">
  <img width="600" src="docs/figures/architecture_1.png">
</p>

Architecture 1 combines the `Wide` one-hot encoded features with the outputs
from the `DeepDense`, `DeepText` and `DeepImage` components, all connected to
a final output neuron (or neurons, in the case of multi-class
classification). The components within the faded-pink rectangles are
concatenated.

In math terms, and following the notation in the [paper](https://arxiv.org/abs/1606.07792), Architecture 1 can be formulated as:

<p align="center">
  <img width="500" src="docs/figures/architecture_1_math.png">
</p>

Where *'W'* are the weight matrices applied to the wide model and to the final
activations of the deep models, *'a'* are these final activations, and
&phi;(x) are the cross-product transformations of the original features *'x'*.
In case you are wondering what *"cross-product transformations"* are, here is
a quote taken directly from the paper: *"For binary features, a cross-product
transformation (e.g., “AND(gender=female, language=en)”) is 1 if and only if
the constituent features (“gender=female” and “language=en”) are all 1, and 0
otherwise".*

**Architecture 2**:

<p align="center">
  <img width="600" src="docs/figures/architecture_2.png">
</p>

Architecture 2 combines the `Wide` one-hot encoded features with the Deep
components of the model, which are connected to the output neuron(s) after
being combined through an FC-Head (which I refer to as `deephead`).

In math terms, and following the notation in the
[paper](https://arxiv.org/abs/1606.07792), Architecture 2 can be formulated
as:

<p align="center">
  <img width="300" src="docs/figures/architecture_2_math.png">
</p>
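
As a hedged sketch of how such a custom head might be passed to the model
(the `deephead` argument follows the name used above; the components and
dimensions are illustrative and assumed to be already built):

```python
import torch.nn as nn

# a custom FC-Head receiving the concatenated activations of the deep
# components; 64 + 32 + 512 is an illustrative sum of their output dims
deephead = nn.Sequential(
    nn.Linear(64 + 32 + 512, 128),
    nn.ReLU(),
    nn.Linear(128, 64),
)

model = WideDeep(
    wide=wide,
    deepdense=deepdense,
    deeptext=deeptext,
    deepimage=deepimage,
    deephead=deephead,
)
```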

When using `pytorch-widedeep`, the assumption is that the so-called `Wide` and
`DeepDense` components in the figures are **always** present, while `DeepText`
and `DeepImage` are optional. `pytorch-widedeep` includes standard text (stack
of LSTMs) and image (pre-trained ResNets or stack of CNNs) models. However,
the user can use any custom model as long as it has an attribute called
`output_dim` with the size of the last layer of activations, so that
`WideDeep` can be constructed. See the examples folder for more information.
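
For example, here is a minimal sketch of a custom text component (names and
dimensions are illustrative; the only requirement, as mentioned above, is the
`output_dim` attribute):

```python
import torch
import torch.nn as nn

class MyDeepText(nn.Module):
    # a toy custom text component: any nn.Module is valid as long as it
    # exposes `output_dim`, the size of its last layer of activations
    def __init__(self, vocab_size: int, embed_dim: int = 100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.rnn = nn.GRU(embed_dim, 64, batch_first=True)
        self.output_dim = 64  # required so WideDeep can be constructed

    def forward(self, X: torch.Tensor) -> torch.Tensor:
        embedded = self.embed(X.long())
        _, h = self.rnn(embedded)
        return h[-1]  # (batch_size, output_dim)
```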


### Installation

Install using pip:

```bash
pip install pytorch-widedeep
```

Or install directly from GitHub:

```bash
pip install git+https://github.com/jrzaurin/pytorch-widedeep.git
```

#### Developer Install

```bash
# Clone the repository
git clone https://github.com/jrzaurin/pytorch-widedeep
cd pytorch-widedeep

# Install in dev mode
pip install -e .
```

### Examples

There are a number of notebooks in the `examples` folder plus some additional
files. These notebooks cover most of the utilities of this package and can
also act as documentation. In case GitHub does not render the notebooks, or
renders them with some parts missing, they are also saved as markdown files in
the `docs` folder.

### Quick start

Binary classification with the [adult
dataset](https://www.kaggle.com/wenruliu/adult-income-dataset/downloads/adult.csv/2)
using `Wide` and `DeepDense` and default settings.

```python
import pandas as pd
from pytorch_widedeep.preprocessing import WidePreprocessor, DeepPreprocessor
from pytorch_widedeep.models import Wide, DeepDense, WideDeep
from pytorch_widedeep.metrics import BinaryAccuracy

# these next 3 lines are not directly related to pytorch-widedeep. I assume
# you have downloaded the dataset and placed it in a dir called data/adult/
df = pd.read_csv('data/adult/adult.csv.zip')
df['income_label'] = (df["income"].apply(lambda x: ">50K" in x)).astype(int)
df.drop('income', axis=1, inplace=True)

# prepare wide, crossed, embedding and continuous columns
wide_cols = ['education', 'relationship', 'workclass', 'occupation', 'native-country', 'gender']
cross_cols = [('education', 'occupation'), ('native-country', 'occupation')]
embed_cols = [('education', 16), ('workclass', 16), ('occupation', 16), ('native-country', 32)]
cont_cols = ["age", "hours-per-week"]
target_col = 'income_label'

# target
target = df[target_col].values

# wide
preprocess_wide = WidePreprocessor(wide_cols=wide_cols, crossed_cols=cross_cols)
X_wide = preprocess_wide.fit_transform(df)
wide = Wide(wide_dim=X_wide.shape[1], output_dim=1)

# deepdense
preprocess_deep = DeepPreprocessor(embed_cols=embed_cols, continuous_cols=cont_cols)
X_deep = preprocess_deep.fit_transform(df)
deepdense = DeepDense(hidden_layers=[64,32],
                      deep_column_idx=preprocess_deep.deep_column_idx,
                      embed_input=preprocess_deep.embeddings_input,
                      continuous_cols=cont_cols)

# build, compile, fit and predict
model = WideDeep(wide=wide, deepdense=deepdense)
model.compile(method='binary', metrics=[BinaryAccuracy])
model.fit(X_wide=X_wide, X_deep=X_deep, target=target, n_epochs=5, batch_size=256, val_split=0.2)
# to predict on new data, transform it first with the fitted preprocessors
# (df_test here is a hypothetical dataframe with the same columns as df)
X_wide_te = preprocess_wide.transform(df_test)
X_deep_te = preprocess_deep.transform(df_test)
model.predict(X_wide=X_wide_te, X_deep=X_deep_te)
```

Of course, one can do much more, such as using different initializations,
optimizers or learning rate schedulers for each component of the overall
model; adding FC-Heads to the Text and Image components; using the [Focal
Loss](https://arxiv.org/abs/1708.02002); or warming up individual components
before joint training. See the `examples` or the `docs` folders for a better
understanding of the content of the package and its functionalities.
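
For instance, here is a hedged sketch of per-component optimizers (the dict
keys mirror the component names in the quick start; the exact argument names
may vary between versions, so check the docs):

```python
import torch

# one optimizer per model component, keyed by component name (a sketch,
# not the definitive API)
optimizers = {
    'wide': torch.optim.Adam(model.wide.parameters(), lr=0.01),
    'deepdense': torch.optim.Adam(model.deepdense.parameters(), lr=0.001),
}
model.compile(method='binary', optimizers=optimizers, metrics=[BinaryAccuracy])
model.fit(X_wide=X_wide, X_deep=X_deep, target=target, n_epochs=5, batch_size=256)
```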

### Testing

```bash
pytest tests
```

### Acknowledgments

This library borrows from a series of other libraries, so I think it is just
fair to mention them here in the README (specific mentions are also included
in the code).

The `Callbacks` and `Initializers` structure and code are inspired by the
[`torchsample`](https://github.com/ncullen93/torchsample) library, which is in
itself partially inspired by [`Keras`](https://keras.io/).

The `TextProcessor` class in this library uses
[`fastai`](https://docs.fast.ai/text.transform.html#BaseTokenizer.tokenizer)'s
`Tokenizer` and `Vocab`. The code at `utils.fastai_transforms` is a minor
adaptation of their code so that it functions within this library. In my
experience their `Tokenizer` is the best in class.

The `ImageProcessor` class in this library uses code from the fantastic [Deep
Learning for Computer
Vision](https://www.pyimagesearch.com/deep-learning-computer-vision-python-book/)
(DL4CV) book by Adrian Rosebrock.