Position-wise Feed-Forward Network (FFN)

The FFN consists of two fully connected layers. The number of dimensions in the hidden layer, $d_{ff}$, is generally set to around four times that of the token embedding, $d_{model}$. So it is sometimes also called the expand-and-contract network.

There is an activation at the hidden layer, which is usually set to the ReLU (Rectified Linear Unit) activation, $\max(0, x)$.

That is, the FFN function is
$$FFN(x, W_1, W_2, b_1, b_2) = \max(0, x W_1 + b_1) W_2 + b_2$$
where $W_1$, $W_2$, $b_1$ and $b_2$ are learnable parameters.
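As a minimal sketch of this formula in plain PyTorch (the sizes $d_{model} = 512$ and $d_{ff} = 2048$ are the ones used in the original Transformer; the tensor names here are ours, not part of the implementation below):

```python
import torch

# Illustrative sizes: d_model = 512, d_ff = 2048 (original Transformer)
d_model, d_ff = 512, 2048
x = torch.randn(10, d_model)              # a batch of 10 token embeddings

W1 = torch.randn(d_model, d_ff) * 0.02    # expand: d_model -> d_ff
b1 = torch.zeros(d_ff)
W2 = torch.randn(d_ff, d_model) * 0.02    # contract: d_ff -> d_model
b2 = torch.zeros(d_model)

hidden = torch.clamp(x @ W1 + b1, min=0.0)   # max(0, x W_1 + b_1)
out = hidden @ W2 + b2                       # (...) W_2 + b_2
assert out.shape == (10, d_model)
```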

Sometimes the GELU (Gaussian Error Linear Unit) activation is also used instead of ReLU:
$$x \Phi(x)$$
where $\Phi(x) = P(X \le x)$ for $X \sim \mathcal{N}(0,1)$.
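As a quick sanity check (not part of the module below), $x \Phi(x)$ written out with the error function matches PyTorch's built-in GELU:

```python
import math

import torch
import torch.nn.functional as F

x = torch.linspace(-3.0, 3.0, steps=7)

# Phi(x): standard normal CDF, written with the error function
phi = 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))
gelu_manual = x * phi

# PyTorch's built-in GELU uses the same exact (erf-based) form by default
assert torch.allclose(gelu_manual, F.gelu(x), atol=1e-6)
```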

Gated Linear Units

This is a generic implementation that supports different variants, including Gated Linear Units (GLU). We have also implemented experiments on these variants.
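For reference, with a sigmoid activation the gating computes $\sigma(x W_1 + b_1) \otimes (x V + c)$: one linear projection element-wise gates another. A minimal sketch of just that step (layer names chosen to mirror the module below; the sizes are illustrative):

```python
import torch
from torch import nn

d_model, d_ff = 512, 2048             # illustrative sizes
x = torch.randn(10, d_model)

layer1 = nn.Linear(d_model, d_ff)     # x W_1 + b_1
linear_v = nn.Linear(d_model, d_ff)   # x V + c

# Element-wise gating: sigma(x W_1 + b_1) * (x V + c)
gated = torch.sigmoid(layer1(x)) * linear_v(x)   # shape (10, d_ff)
```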

import torch
from torch import nn as nn

from labml_helpers.module import Module

FFN module

class FeedForward(Module):
  • d_model is the number of features in a token embedding
  • d_ff is the number of features in the hidden layer of the FFN
  • dropout is dropout probability for the hidden layer
  • is_gated specifies whether the hidden layer is gated
  • bias1 specifies whether the first fully connected layer should have a learnable bias
  • bias2 specifies whether the second fully connected layer should have a learnable bias
  • bias_gate specifies whether the fully connected layer for the gate should have a learnable bias
    def __init__(self, d_model: int, d_ff: int,
                 dropout: float = 0.1,
                 activation=nn.ReLU(),
                 is_gated: bool = False,
                 bias1: bool = True,
                 bias2: bool = True,
                 bias_gate: bool = True):
        super().__init__()

Layer one parameterized by weight $W_1$ and bias $b_1$

        self.layer1 = nn.Linear(d_model, d_ff, bias=bias1)

Layer two parameterized by weight $W_2$ and bias $b_2$

        self.layer2 = nn.Linear(d_ff, d_model, bias=bias2)

Hidden layer dropout

        self.dropout = nn.Dropout(dropout)

Activation function $f$

        self.activation = activation

Whether there is a gate

        self.is_gated = is_gated
        if is_gated:

If there is a gate, this is the linear layer that transforms the input to be multiplied by the gate; it is parameterized by weight $V$ and bias $c$

            self.linear_v = nn.Linear(d_model, d_ff, bias=bias_gate)

    def __call__(self, x: torch.Tensor):

$f(x W_1 + b_1)$

        g = self.activation(self.layer1(x))

If gated, $f(x W_1 + b_1) \otimes (x V + c)$

        if self.is_gated:
            x = g * self.linear_v(x)

Otherwise

        else:
            x = g

Apply dropout

        x = self.dropout(x)

$(f(x W_1 + b_1) \otimes (x V + c)) W_2 + b_2$ or $f(x W_1 + b_1) W_2 + b_2$ depending on whether it is gated

        return self.layer2(x)
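As a usage sketch (the sizes, sequence shape and the particular gated configuration below are illustrative choices, not prescribed by the module): the same class gives the standard ReLU FFN, and gated variants by switching the activation and the gating/bias flags.

```python
import torch
from torch import nn

x = torch.randn(10, 16, 512)          # (batch, seq_len, d_model)

# Standard position-wise FFN with ReLU
ffn = FeedForward(d_model=512, d_ff=2048)
y = ffn(x)                            # shape (10, 16, 512)

# A gated variant in the spirit of the GLU-variants paper:
# SiLU/Swish activation, gating enabled, biases disabled
gated_ffn = FeedForward(d_model=512, d_ff=2048,
                        activation=nn.SiLU(),
                        is_gated=True,
                        bias1=False, bias2=False, bias_gate=False)
z = gated_ffn(x)                      # shape (10, 16, 512)
```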