This is a [PyTorch](https://pytorch.org) implementation of the paper
Here is the [experiment](experiment.html) and [model](model.html) implementation.
\(
\def\green#1{{\color{yellowgreen}{#1}}}
\)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/rl/dqn/experiment.ipynb)
summary: Implementation of the DQN experiment with Atari Breakout
This experiment trains a Deep Q Network (DQN) to play the Atari Breakout game on OpenAI Gym.
It runs the [game environments on multiple processes](../game.html) to sample efficiently.
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/rl/dqn/experiment.ipynb)
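At each training step the experiment fits the network toward a bootstrapped Q-learning target. As a minimal sketch in plain Python (function name and arguments are illustrative, not from the annotated code), the target for a single transition looks like this:

```python
def q_learning_target(reward, done, next_q_values, gamma=0.99):
    """Bootstrapped Q-learning target for one transition.

    next_q_values: the target network's Q(s', a') for every action a'.
    If the episode ended, there is no future value to bootstrap from.
    """
    if done:
        return reward
    return reward + gamma * max(next_q_values)
```

For example, with a reward of `1.0`, `gamma = 0.5`, and next-state Q-values `[0.0, 2.0]`, the target is `1.0 + 0.5 * 2.0 = 2.0`.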
summary: Implementation of the neural network model for the Deep Q Network (DQN).
---
# Deep Q Network (DQN) Model
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/rl/dqn/experiment.ipynb)
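DQN models in this family often use a dueling head, which splits the network's output into a state value $V(s)$ and per-action advantages $A(s, a)$ and recombines them as $Q(s, a) = V(s) + A(s, a) - \frac{1}{|\mathcal{A}|}\sum_{a'} A(s, a')$. A plain-Python sketch of that combination step (assuming the dueling decomposition; the function name is hypothetical):

```python
def dueling_q(value, advantages):
    """Combine state value V(s) and advantages A(s, a) into Q(s, a):

        Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')

    Subtracting the mean advantage keeps the decomposition identifiable:
    adding a constant to V and subtracting it from all A would otherwise
    leave Q unchanged.
    """
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]
```

With `value = 1.0` and `advantages = [1.0, 3.0]`, the mean advantage is `2.0`, giving Q-values `[0.0, 2.0]`.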
summary: Annotated implementation of prioritized experience replay using a binary segment tree
This implements the paper [Prioritized experience replay](https://papers.labml.ai/paper/1511.05952),
using a binary segment tree.
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/rl/dqn/experiment.ipynb)
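The segment tree stores each transition's priority in a leaf and each internal node holds the sum of its children, so both updating a priority and sampling an index in proportion to priority take $O(\log n)$ time. A self-contained sketch of that sum-tree idea in plain Python (class and method names are illustrative; the annotated implementation also tracks minimum priorities for importance-sampling weights):

```python
class SumTree:
    """Binary segment tree over priorities.

    Leaves live at indices [capacity, 2 * capacity); internal node i
    stores tree[2 * i] + tree[2 * i + 1], so tree[1] is the total.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.tree = [0.0] * (2 * capacity)

    def update(self, index, priority):
        """Set the priority of leaf `index` and fix sums up to the root."""
        i = index + self.capacity
        self.tree[i] = priority
        i //= 2
        while i >= 1:
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]
            i //= 2

    def total(self):
        return self.tree[1]

    def find(self, prefix_sum):
        """Return the leaf index whose cumulative-priority range contains
        prefix_sum, for prefix_sum in [0, total()). Walking left when the
        left subtree's sum exceeds the remaining prefix makes sampling a
        uniform number in [0, total()) proportional to priority."""
        i = 1
        while i < self.capacity:
            if prefix_sum < self.tree[2 * i]:
                i = 2 * i
            else:
                prefix_sum -= self.tree[2 * i]
                i = 2 * i + 1
        return i - self.capacity
```

For example, with priorities `[1, 2, 3, 4]` the cumulative ranges are `[0,1), [1,3), [3,6), [6,10)`, so `find(1.5)` returns index `1` and `find(9.9)` returns index `3`.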