Deep Q Network (DQN) Model


import torch
from torch import nn

from labml_helpers.module import Module

Dueling Network ⚔️ Model for $Q$ Values

We are using a dueling network to calculate Q-values. The intuition behind the dueling network architecture is that in most states the action doesn't matter, while in some states the action is significant. The dueling network allows this to be represented very well.

So we create two networks for $V$ and $A$ and get $Q$ from them. We share the initial layers of the $V$ and $A$ networks.
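The two are combined as follows (this is the same formula implemented in forward below):

$Q(s, a) = V(s) + \Big(A(s, a) - \frac{1}{|\mathcal{A}|} \sum_{a' \in \mathcal{A}} A(s, a')\Big)$

The mean advantage is subtracted because $Q$ alone does not determine $V$ and $A$ uniquely (a constant can be shifted between them without changing $Q$); centering the advantages at zero makes the decomposition identifiable.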

class Model(Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(

The first convolution layer takes an $84\times84$ frame and produces a $20\times20$ frame
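As a sanity check, with the standard output-size formula for unpadded convolutions, $\lfloor (\text{in} - \text{kernel}) / \text{stride} \rfloor + 1$: here $\lfloor (84 - 8) / 4 \rfloor + 1 = 20$.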

            nn.Conv2d(in_channels=4, out_channels=32, kernel_size=8, stride=4),
            nn.ReLU(),

The second convolution layer takes a $20\times20$ frame and produces a $9\times9$ frame
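By the same size formula: $\lfloor (20 - 4) / 2 \rfloor + 1 = 9$.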

            nn.Conv2d(in_channels=32, out_channels=64, kernel_size=4, stride=2),
            nn.ReLU(),

The third convolution layer takes a $9\times9$ frame and produces a $7\times7$ frame
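By the same size formula: $\lfloor (9 - 3) / 1 \rfloor + 1 = 7$.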

            nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1),
            nn.ReLU(),
        )

A fully connected layer takes the flattened output of the third convolution layer ($7 \times 7 \times 64 = 3136$ features) and outputs $512$ features

        self.lin = nn.Linear(in_features=7 * 7 * 64, out_features=512)
        self.activation = nn.ReLU()

This head gives the state value $V$

        self.state_value = nn.Sequential(
            nn.Linear(in_features=512, out_features=256),
            nn.ReLU(),
            nn.Linear(in_features=256, out_features=1),
        )

This head gives the action values $A$, one for each of the $4$ actions in the environment's action space

        self.action_value = nn.Sequential(
            nn.Linear(in_features=512, out_features=256),
            nn.ReLU(),
            nn.Linear(in_features=256, out_features=4),
        )

    def forward(self, obs: torch.Tensor):

Convolution

        h = self.conv(obs)

Reshape for the linear layers, flattening each $7 \times 7 \times 64$ feature map into a vector of $3136$ values

        h = h.reshape((-1, 7 * 7 * 64))

Linear layer

        h = self.activation(self.lin(h))

$A$

        action_value = self.action_value(h)

$V$

        state_value = self.state_value(h)

$A(s, a) - \frac{1}{|\mathcal{A}|} \sum_{a' \in \mathcal{A}} A(s, a')$

        action_score_centered = action_value - action_value.mean(dim=-1, keepdim=True)
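For a made-up numeric example: if the advantage head outputs $[1, 2, 3, 6]$ for one state, the mean is $3$, so the centered scores are $[-2, -1, 0, 3]$.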

$Q(s, a) = V(s) + \Big(A(s, a) - \frac{1}{|\mathcal{A}|} \sum_{a' \in \mathcal{A}} A(s, a')\Big)$

        q = state_value + action_score_centered

        return q
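A minimal usage sketch (not part of the original file; the batch size and random input are just for illustration): run a dummy batch of observations, each made of $4$ stacked $84\times84$ frames, and check that we get one Q-value per action.

import torch

model = Model()
# A batch of 8 observations, each with 4 stacked 84x84 frames
obs = torch.rand(8, 4, 84, 84)
q = model(obs)
# One Q value per action: torch.Size([8, 4])
print(q.shape)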