"""
During the development of the package I realised that there is a typing
inconsistency. The input components of a Wide and Deep model are of type
nn.Module. These change type internally to nn.Sequential. While nn.Sequential
is an instance of nn.Module, the opposite is, of course, not true. This does
not affect any functionality of the package, but it is something that needs
fixing. However, while the fix is simple (simply define new attributes that
are the nn.Sequential objects), its implications are quite wide within the
package (it involves changing a number of tests and tutorials). Therefore, I
will introduce that fix when I do a major release. For now, we live with it.
"""

import warnings

import torch
import torch.nn as nn

from pytorch_widedeep.wdtypes import *  # noqa: F403
from pytorch_widedeep.models.tab_mlp import MLP, get_activation_fn
from pytorch_widedeep.models.tabnet.tab_net import TabNetPredLayer

warnings.filterwarnings("default", category=UserWarning)

use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")


class WideDeep(nn.Module):
    r"""Main collector class that combines all ``wide``, ``deeptabular``
    (which can be a number of architectures), ``deeptext`` and
    ``deepimage`` models.

    There are two options to combine these models that correspond to the
    two main architectures that ``pytorch-widedeep`` can build.

        - Directly connecting the output of the model components to an output neuron(s).

        - Adding a `Fully-Connected Head` (FC-Head) on top of the deep models.
          This FC-Head will combine the output from the ``deeptabular``, ``deeptext`` and
          ``deepimage`` components and will then be connected to the output neuron(s).

    Parameters
    ----------
    wide: ``nn.Module``, Optional, default = None
        ``Wide`` model. I recommend using the ``Wide`` class in this
        package. However, it is possible to use a custom model as long as
        it is consistent with the required architecture, see
        :class:`pytorch_widedeep.models.wide.Wide`
    deeptabular: ``nn.Module``, Optional, default = None
        currently ``pytorch-widedeep`` implements a number of possible
        architectures for the ``deeptabular`` component. See the documentation
        of the package. I recommend using the ``deeptabular`` components in
        this package. However, it is possible to use a custom model as long
        as it is consistent with the required architecture.
    deeptext: ``nn.Module``, Optional, default = None
        Model for the text input. Must be an object of class ``DeepText``
        or a custom model, as long as it is consistent with the required
        architecture. See
        :class:`pytorch_widedeep.models.deep_text.DeepText`
    deepimage: ``nn.Module``, Optional, default = None
        Model for the images input. Must be an object of class
        ``DeepImage`` or a custom model, as long as it is consistent with
        the required architecture. See
        :class:`pytorch_widedeep.models.deep_image.DeepImage`
    deephead: ``nn.Module``, Optional, default = None
        Custom model by the user that will receive the output of the deep
        component. Typically a FC-Head (MLP). Like the rest of the deep
        components, it must have an ``output_dim`` attribute with the size
        of its last layer of activations
    head_hidden_dims: List, Optional, default = None
        Alternatively, the ``head_hidden_dims`` param can be used to
        specify the sizes of the stacked dense layers in the fc-head, e.g.:
        ``[128, 64]``. Use ``deephead`` or ``head_hidden_dims``, but not
        both.
    head_dropout: float, default = 0.1
        If ``head_hidden_dims`` is not None, dropout between the layers in
        ``head_hidden_dims``
    head_activation: str, default = "relu"
        If ``head_hidden_dims`` is not None, activation function of the head
        layers. One of ``tanh``, ``relu``, ``gelu`` or ``leaky_relu``
    head_batchnorm: bool, default = False
        If ``head_hidden_dims`` is not None, specifies if batch
        normalization should be included in the head layers
    head_batchnorm_last: bool, default = False
        If ``head_hidden_dims`` is not None, boolean indicating whether or
        not to apply batch normalization to the last of the dense layers
    head_linear_first: bool, default = False
        If ``head_hidden_dims`` is not None, boolean indicating the order
        of the operations in the dense layer. If ``True``:
        ``[LIN -> ACT -> BN -> DP]``. If ``False``: ``[BN -> DP -> LIN ->
        ACT]``
    head_activation_last: bool, default = False
        Whether or not the final layer has an activation function. Important
        if you are using loss functions with non-negative input restrictions,
        e.g. RMSLE, or if you know your predictions are limited to [0, inf)
    pred_dim: int, default = 1
        Size of the final wide and deep output layer containing the
        predictions. `1` for regression and binary classification or number
        of classes for multiclass classification.

    Examples
    --------

    >>> from pytorch_widedeep.models import TabResnet, DeepImage, DeepText, Wide, WideDeep
    >>> embed_input = [(u, i, j) for u, i, j in zip(["a", "b", "c"], [4] * 3, [8] * 3)]
    >>> column_idx = {k: v for v, k in enumerate(["a", "b", "c"])}
    >>> wide = Wide(10, 1)
    >>> deeptabular = TabResnet(blocks_dims=[8, 4], column_idx=column_idx, embed_input=embed_input)
    >>> deeptext = DeepText(vocab_size=10, embed_dim=4, padding_idx=0)
    >>> deepimage = DeepImage(pretrained=False)
    >>> model = WideDeep(wide=wide, deeptabular=deeptabular, deeptext=deeptext, deepimage=deepimage)
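
    Alternatively, and as a minimal sketch, the FC-Head can be specified via
    ``head_hidden_dims`` instead of passing a custom ``deephead``
    (``model_with_head`` is just an illustrative name):

    >>> model_with_head = WideDeep(wide=wide, deeptabular=deeptabular, head_hidden_dims=[128, 64])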


    .. note:: While I recommend using the ``wide`` and ``deeptabular`` components
        within this package when building the corresponding model components,
        it is very likely that the user will want to use custom text and image
        models. That is perfectly possible. Simply build them and pass them
        as the corresponding parameters. Note that the custom models MUST
        return a last layer of activations (i.e. not the final prediction) so
        that these activations are collected by ``WideDeep`` and combined
        accordingly. In addition, the models MUST also contain an attribute
        ``output_dim`` with the size of these last layers of activations. See
        for example :class:`pytorch_widedeep.models.tab_mlp.TabMlp`
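
        As a minimal, illustrative sketch (``MyCustomText`` is a hypothetical
        name, not part of the package), such a custom component could look
        like:

        >>> import torch.nn as nn
        >>> class MyCustomText(nn.Module):
        ...     def __init__(self, vocab_size: int = 10, embed_dim: int = 4):
        ...         super().__init__()
        ...         self.embed = nn.EmbeddingBag(vocab_size, embed_dim)
        ...         self.output_dim = embed_dim  # required by ``WideDeep``
        ...     def forward(self, X):
        ...         # return the last layer of activations, not predictions
        ...         return self.embed(X)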

    """

    def __init__(
        self,
        wide: Optional[nn.Module] = None,
        deeptabular: Optional[nn.Module] = None,
        deeptext: Optional[nn.Module] = None,
        deepimage: Optional[nn.Module] = None,
        deephead: Optional[nn.Module] = None,
        head_hidden_dims: Optional[List[int]] = None,
        head_activation: str = "relu",
        head_dropout: float = 0.1,
        head_batchnorm: bool = False,
        head_batchnorm_last: bool = False,
        head_linear_first: bool = False,
        head_activation_last: bool = False,
        pred_dim: int = 1,
    ):
        super(WideDeep, self).__init__()

        self._check_model_components(
            wide,
            deeptabular,
            deeptext,
            deepimage,
            deephead,
            head_hidden_dims,
            pred_dim,
        )

        # required as attribute just in case we pass a deephead
        self.pred_dim = pred_dim

        # The main 5 components of the wide and deep ensemble
        self.wide = wide
        self.deeptabular = deeptabular
        self.deeptext = deeptext
        self.deepimage = deepimage
        self.deephead = deephead
        # stored so it can be checked when the loss function is applied
        self.head_activation_last = head_activation_last

        if self.deeptabular is not None:
            self.is_tabnet = deeptabular.__class__.__name__ == "TabNet"
        else:
            self.is_tabnet = False

        if self.deephead is None:
            if head_hidden_dims is not None:
                self._build_deephead(
                    head_hidden_dims,
                    head_activation,
                    head_dropout,
                    head_batchnorm,
                    head_batchnorm_last,
                    head_linear_first,
                )
            else:
                self._add_pred_layer(head_activation)
        else:
            # if a custom deephead is passed, add the final prediction layer
            # here so that no untrained linear layer has to be created on the
            # fly in the forward pass (assumes 'deephead' has an 'output_dim'
            # attribute, as the other deep components do)
            self.deephead = nn.Sequential(
                self.deephead, nn.Linear(self.deephead.output_dim, self.pred_dim)
            )

    def forward(self, X: Dict[str, Tensor]):
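        # X is a dict with one entry per model component; expected keys are
        # "wide", "deeptabular", "deeptext" and "deepimage"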
        wide_out = self._forward_wide(X)
        if self.deephead is not None:
            return self._forward_deephead(X, wide_out)
        else:
            return self._forward_deep(X, wide_out)

    def _build_deephead(
        self,
        head_hidden_dims,
        head_activation,
        head_dropout,
        head_batchnorm,
        head_batchnorm_last,
        head_linear_first,
    ):
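        # the input dim of the fc-head is the sum of the output dims of the
        # deep components, since their activations are concatenated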
        deep_dim = 0
        if self.deeptabular is not None:
            deep_dim += self.deeptabular.output_dim
        if self.deeptext is not None:
            deep_dim += self.deeptext.output_dim
        if self.deepimage is not None:
            deep_dim += self.deepimage.output_dim

        head_hidden_dims = [deep_dim] + head_hidden_dims
        self.deephead = MLP(
            head_hidden_dims,
            head_activation,
            head_dropout,
            head_batchnorm,
            head_batchnorm_last,
            head_linear_first,
        )
        self.deephead.add_module(
            "head_out", nn.Linear(head_hidden_dims[-1], self.pred_dim)
        )
        if self.head_activation_last:
            self.deephead.add_module(
                "head_act", get_activation_fn(head_activation)
            )

    def _add_pred_layer(self, head_activation):
        if self.deeptabular is not None:
            if self.is_tabnet:
                self.deeptabular = nn.Sequential(
                    self.deeptabular,
                    TabNetPredLayer(self.deeptabular.output_dim, self.pred_dim),
                )
            else:
                self.deeptabular = nn.Sequential(
                    self.deeptabular,
                    nn.Linear(self.deeptabular.output_dim, self.pred_dim),
                )
        if self.deeptext is not None:
            self.deeptext = nn.Sequential(
                self.deeptext, nn.Linear(self.deeptext.output_dim, self.pred_dim)
            )
        if self.deepimage is not None:
            self.deepimage = nn.Sequential(
                self.deepimage, nn.Linear(self.deepimage.output_dim, self.pred_dim)
            )
        if self.head_activation_last:
            # there is no fc-head in this case ('self.deephead' is None and
            # cannot be used), so the activation is appended to each
            # component's prediction layer instead
            act_fn = get_activation_fn(head_activation)
            if self.deeptabular is not None and not self.is_tabnet:
                self.deeptabular.add_module("pred_act", act_fn)
            if self.deeptext is not None:
                self.deeptext.add_module("pred_act", act_fn)
            if self.deepimage is not None:
                self.deepimage.add_module("pred_act", act_fn)

    def _forward_wide(self, X):
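        # if there is no wide component, start from a zero tensor of shape
        # (batch_size, pred_dim) to which the deep outputs are added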
        if self.wide is not None:
            out = self.wide(X["wide"])
        else:
            batch_size = X[list(X.keys())[0]].size(0)
            out = torch.zeros(batch_size, self.pred_dim).to(device)

        return out

    def _forward_deephead(self, X, wide_out):
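        # the activations of all deep components are concatenated and passed
        # through the fc-head; TabNet additionally returns its sparsity
        # regularization term, M_loss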
        if self.deeptabular is not None:
            if self.is_tabnet:
                tab_out = self.deeptabular(X["deeptabular"])
                deepside, M_loss = tab_out[0], tab_out[1]
            else:
                deepside = self.deeptabular(X["deeptabular"])
        else:
            deepside = torch.FloatTensor().to(device)
        if self.deeptext is not None:
            deepside = torch.cat([deepside, self.deeptext(X["deeptext"])], dim=1)
        if self.deepimage is not None:
            deepside = torch.cat([deepside, self.deepimage(X["deepimage"])], dim=1)

        # the final prediction layer is already part of 'self.deephead' (added
        # in '_build_deephead', or in '__init__' when a custom deephead is
        # passed), so its output can be added to the wide output directly
        deephead_out = self.deephead(deepside)

        if self.is_tabnet:
            res = (wide_out.add_(deephead_out), M_loss)
        else:
            res = wide_out.add_(deephead_out)

        return res

    def _forward_deep(self, X, wide_out):
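        # each deep component ends in its own prediction layer of size
        # 'pred_dim' (see '_add_pred_layer'), so the component outputs are
        # simply summed with the wide output: the additive wide & deep
        # combination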
        if self.deeptabular is not None:
            if self.is_tabnet:
                tab_out, M_loss = self.deeptabular(X["deeptabular"])
                wide_out.add_(tab_out)
            else:
                wide_out.add_(self.deeptabular(X["deeptabular"]))
        if self.deeptext is not None:
            wide_out.add_(self.deeptext(X["deeptext"]))
        if self.deepimage is not None:
            wide_out.add_(self.deepimage(X["deepimage"]))

        if self.is_tabnet:
            res = (wide_out, M_loss)
        else:
            res = wide_out

        return res

    @staticmethod  # noqa: C901
    def _check_model_components(
        wide,
        deeptabular,
        deeptext,
        deepimage,
        deephead,
        head_hidden_dims,
        pred_dim,
    ):

        if wide is not None:
            assert wide.wide_linear.weight.size(1) == pred_dim, (
                "the 'pred_dim' of the wide component ({}) must be equal to the 'pred_dim' "
                "of the deep component and the overall model itself ({})".format(
                    wide.wide_linear.weight.size(1), pred_dim
                )
            )
        if deeptabular is not None and not hasattr(deeptabular, "output_dim"):
            raise AttributeError(
                "deeptabular model must have an 'output_dim' attribute. "
                "See pytorch_widedeep.models.tab_mlp.TabMlp"
            )
        if deeptabular is not None:
            is_tabnet = deeptabular.__class__.__name__ == "TabNet"
            has_wide_text_or_image = (
                wide is not None or deeptext is not None or deepimage is not None
            )
            if is_tabnet and has_wide_text_or_image:
                warnings.warn(
                    "'WideDeep' is a model comprised of multiple components and the 'deeptabular'"
                    " component is 'TabNet'. We recommend using 'TabNet' in isolation."
                    " The reasons are: i) 'TabNet' uses sparse regularization, which partially loses"
                    " its purpose when used in combination with other components."
                    " If you still want to use a multiple component model with 'TabNet',"
                    " consider setting 'lambda_sparse' to 0 during training. ii) The feature"
                    " importances will be computed only for TabNet, but the model will comprise multiple"
                    " components. Therefore, such importances will partially lose their 'meaning'.",
                    UserWarning,
                )
        if deeptext is not None and not hasattr(deeptext, "output_dim"):
            raise AttributeError(
                "deeptext model must have an 'output_dim' attribute. "
                "See pytorch_widedeep.models.deep_text.DeepText"
            )
        if deepimage is not None and not hasattr(deepimage, "output_dim"):
            raise AttributeError(
                "deepimage model must have an 'output_dim' attribute. "
                "See pytorch_widedeep.models.deep_image.DeepImage"
            )
        if deephead is not None and head_hidden_dims is not None:
            raise ValueError(
                "both 'deephead' and 'head_hidden_dims' are not None. Use one or the other, but not both"
            )
        if (
            head_hidden_dims is not None
            and not deeptabular
            and not deeptext
            and not deepimage
        ):
            raise ValueError(
                "if 'head_hidden_dims' is not None, at least one deep component must be used"
            )
        if deephead is not None:
            # 'output_dim' is needed to add the final prediction layer on top
            # of a custom deephead (see '__init__')
            if not hasattr(deephead, "output_dim"):
                raise AttributeError(
                    "deephead model must have an 'output_dim' attribute"
                )
            deephead_inp_feat = next(deephead.parameters()).size(1)
            output_dim = 0
            if deeptabular is not None:
                output_dim += deeptabular.output_dim
            if deeptext is not None:
                output_dim += deeptext.output_dim
            if deepimage is not None:
                output_dim += deepimage.output_dim
            assert deephead_inp_feat == output_dim, (
                "if a custom 'deephead' is used its input features ({}) must be equal to "
                "the output features of the deep component ({})".format(
                    deephead_inp_feat, output_dim
                )
            )