This tutorial shows how to use PyTorch to train a Deep Q Learning (DQN) agent on the CartPole-v0 task from the [OpenAI Gym](https://gym.openai.com/).
**Task**
The agent has to decide between two actions - moving the cart left or right - so that the pole attached to it stays upright. You can find an official leaderboard with various algorithms and visualizations at the [Gym website](https://gym.openai.com/envs/CartPole-v0).
As the agent observes the current state of the environment and chooses an action, the environment _transitions_ to a new state, and also returns a reward that indicates the consequences of the action. In this task, rewards are +1 for every incremental timestep and the environment terminates if the pole falls over too far or the cart moves more than 2.4 units away from center. This means better performing scenarios will run for a longer duration, accumulating a larger return.
The CartPole task is designed so that the inputs to the agent are 4 real values representing the environment state (position, velocity, etc.). However, neural networks can solve the task purely by looking at the scene, so we’ll use a patch of the screen centered on the cart as an input. Because of this, our results aren’t directly comparable to the ones from the official leaderboard - our task is much harder. Unfortunately this does slow down the training, because we have to render all the frames.
Strictly speaking, we will present the state as the difference between the current screen patch and the previous one. This will allow the agent to take the velocity of the pole into account from one image.
**Packages**
First, let’s import the needed packages. We need [gym](https://gym.openai.com/docs) for the environment (install it with `pip install gym`). We’ll also use the following from PyTorch: neural networks (`torch.nn`), optimization (`torch.optim`), automatic differentiation, and utilities for vision tasks (`torchvision`).
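A typical import cell for this tutorial might look roughly like the following. This is a sketch, not the exact cell: the environment name and the IPython detection mirror the usual pattern for this task, and `env` is the gym environment used by the later snippets.

```py
import gym
import math
import random
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from collections import namedtuple
from itertools import count

import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torchvision.transforms as T

# Create the CartPole environment used throughout the tutorial.
env = gym.make('CartPole-v0').unwrapped

# Set up matplotlib and detect whether we are in an IPython/Jupyter session.
is_ipython = 'inline' in matplotlib.get_backend()
if is_ipython:
    from IPython import display
plt.ion()
```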
We’ll be using experience replay memory for training our DQN. It stores the transitions that the agent observes, allowing us to reuse this data later. By sampling from it randomly, the transitions that build up a batch are decorrelated. It has been shown that this greatly stabilizes and improves the DQN training procedure.
* `Transition` - a named tuple representing a single transition in our environment. It essentially maps (state, action) pairs to their (next_state, reward) result, with the state being the screen difference image as described later on.
* `ReplayMemory` - a cyclic buffer of bounded size that holds the transitions observed recently. It also implements a `.sample()` method for selecting a random batch of transitions for training (see the sketch after this list).
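As referenced above, here is a minimal sketch of these two utilities. It follows the common DQN replay-buffer pattern; treat it as illustrative rather than the tutorial's exact code.

```py
import random
from collections import namedtuple

Transition = namedtuple('Transition',
                        ('state', 'action', 'next_state', 'reward'))

class ReplayMemory(object):

    def __init__(self, capacity):
        self.capacity = capacity
        self.memory = []
        self.position = 0

    def push(self, *args):
        """Save a transition, overwriting the oldest entry once full."""
        if len(self.memory) < self.capacity:
            self.memory.append(None)
        self.memory[self.position] = Transition(*args)
        self.position = (self.position + 1) % self.capacity

    def sample(self, batch_size):
        # Uniform random sampling decorrelates the transitions in a batch.
        return random.sample(self.memory, batch_size)

    def __len__(self):
        return len(self.memory)
```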
Now, let’s define our model. But first, let’s quickly recap what a DQN is.
## DQN algorithm
Our environment is deterministic, so all equations presented here are also formulated deterministically for the sake of simplicity. In the reinforcement learning literature, they would also contain expectations over stochastic transitions in the environment.
Our aim will be to train a policy that tries to maximize the discounted, cumulative reward `\(R_{t_0} = \sum_{t=t_0}^{\infty} \gamma^{t - t_0} r_t\)`, where `\(R_{t_0}\)` is also known as the _return_. The discount, `\(\gamma\)`, should be a constant between `\(0\)` and `\(1\)` that ensures the sum converges. It makes rewards from the uncertain far future less important for our agent than the ones in the near future that it can be fairly confident about.
The main idea behind Q-learning is that if we had a function `\(Q^*: State \times Action \rightarrow \mathbb{R}\)`, that could tell us what our return would be, if we were to take an action in a given state, then we could easily construct a policy that maximizes our rewards:
`\[\pi^*(s) = \arg\!\max_a \ Q^*(s, a)\]`
However, we don’t know everything about the world, so we don’t have access to `\(Q^*\)`. But, since neural networks are universal function approximators, we can simply create one and train it to resemble `\(Q^*\)`.
For our training update rule, we’ll use the fact that every `\(Q\)` function for some policy obeys the Bellman equation:

`\[Q^{\pi}(s, a) = r + \gamma Q^{\pi}(s', \pi(s'))\]`

The difference between the two sides of the equality is known as the temporal difference error, `\(\delta\)`:
`\[\delta = Q(s, a) - (r + \gamma \max_a Q(s', a))\]`
To minimise this error, we will use the [Huber loss](https://en.wikipedia.org/wiki/Huber_loss). The Huber loss acts like the mean squared error when the error is small, but like the mean absolute error when the error is large - this makes it more robust to outliers when the estimates of `\(Q\)` are very noisy. We calculate this over a batch of transitions, `\(B\)`, sampled from the replay memory:
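With the default threshold of `\(1\)` (which is what `F.smooth_l1_loss` uses), the batched objective can be written as follows; this is the standard statement of the Huber loss rather than anything specific to this tutorial:

`\[\mathcal{L} = \frac{1}{|B|} \sum_{(s, a, s', r) \in B} \mathcal{L}(\delta), \qquad \mathcal{L}(\delta) = \begin{cases} \frac{1}{2}\delta^2 & \text{for } |\delta| \le 1, \\ |\delta| - \frac{1}{2} & \text{otherwise.} \end{cases}\]`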
Our model will be a convolutional neural network that takes in the difference between the current and previous screen patches. It has two outputs, representing `\(Q(s, \mathrm{left})\)` and `\(Q(s, \mathrm{right})\)` (where `\(s\)` is the input to the network). In effect, the network is trying to predict the _expected return_ of taking each action given the current input.
```py
class DQN(nn.Module):

    ...

    # Called with either one element to determine next action, or a batch
    # during optimization. Returns tensor([[left0exp,right0exp]...]).
    def forward(self, x):
        x = F.relu(self.bn1(self.conv1(x)))
        x = F.relu(self.bn2(self.conv2(x)))
        ...
```
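For reference, here is a hedged sketch of a small network in the same shape as the `DQN` class above. The specific kernel sizes, strides, and channel counts are assumptions chosen to keep the example short, and the class name is hypothetical.

```py
import torch
import torch.nn as nn
import torch.nn.functional as F

class DQNSketch(nn.Module):
    """Illustrative only: layer sizes are assumptions, not the tutorial's exact values."""

    def __init__(self, h, w, outputs):
        super(DQNSketch, self).__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=5, stride=2)
        self.bn1 = nn.BatchNorm2d(16)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=5, stride=2)
        self.bn2 = nn.BatchNorm2d(32)

        # The flattened feature size feeding the final linear layer depends on
        # the input height/width after the strided convolutions.
        def conv2d_size_out(size, kernel_size=5, stride=2):
            return (size - (kernel_size - 1) - 1) // stride + 1
        convw = conv2d_size_out(conv2d_size_out(w))
        convh = conv2d_size_out(conv2d_size_out(h))
        # Two outputs: Q(s, left) and Q(s, right).
        self.head = nn.Linear(convw * convh * 32, outputs)

    def forward(self, x):
        x = F.relu(self.bn1(self.conv1(x)))
        x = F.relu(self.bn2(self.conv2(x)))
        return self.head(x.view(x.size(0), -1))
```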
### Input extraction
The code below consists of utilities for extracting and processing rendered images from the environment. It uses the `torchvision` package, which makes it easy to compose image transforms. Once you run the cell, it will display an example patch that it extracted.
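The extraction code itself is not reproduced here. As a minimal sketch of what such utilities might look like (omitting the cart-centred cropping the full tutorial performs; `env` is the gym environment created earlier):

```py
import numpy as np
import torch
import torchvision.transforms as T

# Compose the torchvision transforms: tensor -> PIL image -> resized -> tensor.
resize = T.Compose([T.ToPILImage(),
                    T.Resize(40),
                    T.ToTensor()])

def get_screen():
    # gym renders an HWC uint8 frame; convert it to a CHW float tensor in [0, 1].
    screen = env.render(mode='rgb_array').transpose((2, 0, 1))
    screen = np.ascontiguousarray(screen, dtype=np.float32) / 255
    screen = torch.from_numpy(screen)
    # Resize and add a batch dimension (BCHW) so it can be fed to the network.
    return resize(screen).unsqueeze(0)
```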
This cell instantiates our model and its optimizer, and defines some utilities:
* `select_action` - will select an action according to an epsilon greedy policy. Simply put, we’ll sometimes use our model for choosing the action, and sometimes we’ll just sample one uniformly. The probability of choosing a random action will start at `EPS_START` and will decay exponentially towards `EPS_END`. `EPS_DECAY` controls the rate of the decay. (A sketch of this function follows the list below.)
* `plot_durations` - a helper for plotting the durations of episodes, along with an average over the last 100 episodes (the measure used in the official evaluations). The plot will be underneath the cell containing the main training loop, and will update after every episode.
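As referenced above, here is a hedged sketch of an epsilon-greedy `select_action`; `EPS_START`, `EPS_END`, `EPS_DECAY`, `policy_net`, and `n_actions` are assumed to be defined as in the surrounding cell.

```py
import math
import random
import torch

steps_done = 0

def select_action(state):
    global steps_done
    # Exponentially decay the exploration rate from EPS_START towards EPS_END.
    eps_threshold = EPS_END + (EPS_START - EPS_END) * \
        math.exp(-1. * steps_done / EPS_DECAY)
    steps_done += 1
    if random.random() > eps_threshold:
        with torch.no_grad():
            # Exploit: pick the action with the largest predicted expected return.
            return policy_net(state).max(1)[1].view(1, 1)
    else:
        # Explore: sample an action uniformly at random.
        return torch.tensor([[random.randrange(n_actions)]], dtype=torch.long)
```

The fragment below, from the tail of `plot_durations`, refreshes the figure so the plot updates during training: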
```py
plt.pause(0.001)  # pause a bit so that plots are updated
if is_ipython:
    display.clear_output(wait=True)
    display.display(plt.gcf())
```
### Training loop
Finally, the code for training our model.
Here, you can find an `optimize_model` function that performs a single step of the optimization. It first samples a batch, concatenates all the tensors into a single one, computes `\(Q(s_t, a_t)\)` and `\(V(s_{t+1}) = \max_a Q(s_{t+1}, a)\)`, and combines them into our loss. By definition we set `\(V(s) = 0\)` if `\(s\)` is a terminal state. We also use a target network to compute `\(V(s_{t+1})\)` for added stability. The target network has its weights kept frozen most of the time, but is updated with the policy network’s weights every so often. This is usually a set number of steps but we shall use episodes for simplicity.
```py
loss = F.smooth_l1_loss(state_action_values, expected_state_action_values.unsqueeze(1))

# Optimize the model
optimizer.zero_grad()
loss.backward()
for param in policy_net.parameters():
    ...
```
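To connect that fragment with the description above, here is a hedged sketch of how the earlier part of such an `optimize_model` step could look; `BATCH_SIZE`, `GAMMA`, `memory`, `Transition`, `policy_net`, and `target_net` are the objects assumed to exist from previous cells, and details may differ from the full tutorial code.

```py
import torch
import torch.nn.functional as F

def optimize_model():
    if len(memory) < BATCH_SIZE:
        return
    transitions = memory.sample(BATCH_SIZE)
    # Transpose the batch: a list of Transitions -> one Transition of batched fields.
    batch = Transition(*zip(*transitions))

    # Mask out terminal transitions (their next_state is None, so V(s') = 0).
    non_final_mask = torch.tensor([s is not None for s in batch.next_state],
                                  dtype=torch.bool)
    non_final_next_states = torch.cat([s for s in batch.next_state if s is not None])
    state_batch = torch.cat(batch.state)
    action_batch = torch.cat(batch.action)
    reward_batch = torch.cat(batch.reward)

    # Q(s_t, a_t): value estimates for the actions that were actually taken.
    state_action_values = policy_net(state_batch).gather(1, action_batch)

    # V(s_{t+1}) = max_a Q(s_{t+1}, a), computed with the frozen target network.
    next_state_values = torch.zeros(BATCH_SIZE)
    next_state_values[non_final_mask] = target_net(non_final_next_states).max(1)[0].detach()

    # Expected Q values from the Bellman equation; then the Huber loss as shown above.
    expected_state_action_values = (next_state_values * GAMMA) + reward_batch
    loss = F.smooth_l1_loss(state_action_values,
                            expected_state_action_values.unsqueeze(1))
    ...
```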
Below, you can find the main training loop. At the beginning we reset the environment and initialize the `state` Tensor. Then, we sample an action, execute it, observe the next screen and the reward (always 1), and optimize our model once. When the episode ends (our model fails), we restart the loop.
Actions are chosen either randomly or according to the policy, and the next step’s sample is obtained from the gym environment. We record the results in the replay memory and also run an optimization step on every iteration. Optimization picks a random batch from the replay memory to train the new policy. The “older” target_net is also used in optimization to compute the expected Q values; it is updated occasionally to keep it current.
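A hedged sketch of what that loop might look like, assuming the helpers sketched above plus hyperparameters `num_episodes` and `TARGET_UPDATE`; the structure follows the description, but treat the details as illustrative.

```py
from itertools import count

import torch

episode_durations = []

for i_episode in range(num_episodes):
    # Initialize the environment and build the first difference-image state.
    env.reset()
    last_screen = get_screen()
    current_screen = get_screen()
    state = current_screen - last_screen

    for t in count():
        # Select and perform an action; the reward is +1 per surviving timestep.
        action = select_action(state)
        _, reward, done, _ = env.step(action.item())
        reward = torch.tensor([reward])

        # Observe the new screen and form the next state as a difference image.
        last_screen = current_screen
        current_screen = get_screen()
        next_state = current_screen - last_screen if not done else None

        # Store the transition in the replay memory and move to the next state.
        memory.push(state, action, next_state, reward)
        state = next_state

        # One optimization step on the policy network.
        optimize_model()
        if done:
            episode_durations.append(t + 1)
            plot_durations()
            break

    # Periodically copy the policy network's weights into the target network.
    if i_episode % TARGET_UPDATE == 0:
        target_net.load_state_dict(policy_net.state_dict())
```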