From 0cae3d0594bcbf1ed073a95841415ab0a09d3f68 Mon Sep 17 00:00:00 2001
From: zenghsh3
Date: Thu, 28 Jun 2018 10:45:05 +0800
Subject: [PATCH] Update README

---
 fluid/DeepQNetwork/README.md    | 11 ++++++-----
 fluid/DeepQNetwork/README_cn.md | 12 +++++++-----
 2 files changed, 13 insertions(+), 10 deletions(-)

diff --git a/fluid/DeepQNetwork/README.md b/fluid/DeepQNetwork/README.md
index 7f348abe..d578dc08 100644
--- a/fluid/DeepQNetwork/README.md
+++ b/fluid/DeepQNetwork/README.md
@@ -1,6 +1,6 @@
 [中文版](README_cn.md)
 
-# Reproduce DQN, DoubleDQN, DuelingDQN model with Fluid version of PaddlePaddle
+## Reproduce DQN, DoubleDQN, DuelingDQN model with Fluid version of PaddlePaddle
 Based on PaddlePaddle's next-generation API Fluid, the DQN model of deep reinforcement learning is reproduced, and the same level of indicators of the paper is reproduced in the classic Atari game. The model receives the image of the game as input, and uses the end-to-end model to directly predict the next step. The repository contains the following three types of models.
 + DQN in [Human-level Control Through Deep Reinforcement Learning](http://www.nature.com/nature/journal/v518/n7540/full/nature14236.html)
 
@@ -9,13 +9,14 @@ Based on PaddlePaddle's next-generation API Fluid, the DQN model of deep reinfor
 + DuelingDQN in: [Dueling Network Architectures for Deep Reinforcement Learning](http://proceedings.mlr.press/v48/wangf16.html)
 
 
-# Atari benchmark & performance
-## [Atari games introduction](https://gym.openai.com/envs/#atari)
+## Atari benchmark & performance
+### [Atari games introduction](https://gym.openai.com/envs/#atari)
 
-+ Pong game result
+### Pong game result
+The average game rewards that can be obtained for the three models as the number of training steps changes during the training are as follows:
 ![DQN result](assets/dqn.png)
 
-# How to use
+## How to use
 ### Dependencies:
 + python2.7
 + gym
diff --git a/fluid/DeepQNetwork/README_cn.md b/fluid/DeepQNetwork/README_cn.md
index fac98857..a6b6ffe4 100644
--- a/fluid/DeepQNetwork/README_cn.md
+++ b/fluid/DeepQNetwork/README_cn.md
@@ -1,4 +1,4 @@
-# 基于PaddlePaddle的Fluid版本复现DQN, DoubleDQN, DuelingDQN三个模型
+## 基于PaddlePaddle的Fluid版本复现DQN, DoubleDQN, DuelingDQN三个模型
 基于PaddlePaddle下一代API Fluid复现了深度强化学习领域的DQN模型,在经典的Atari 游戏上复现了论文同等水平的指标,模型接收游戏的图像作为输入,采用端到端的模型直接预测下一步要执行的控制信号,本仓库一共包含以下3类模型。
 + DQN模型: [Human-level Control Through Deep Reinforcement Learning](http://www.nature.com/nature/journal/v518/n7540/full/nature14236.html)
 
@@ -7,13 +7,14 @@
 + DuelingDQN模型: [Dueling Network Architectures for Deep Reinforcement Learning](http://proceedings.mlr.press/v48/wangf16.html)
 
 
-# 模型效果:Atari游戏表现
-## [Atari游戏介绍](https://gym.openai.com/envs/#atari)
+## 模型效果:Atari游戏表现
+### [Atari游戏介绍](https://gym.openai.com/envs/#atari)
 
-+ Pong游戏训练结果
+### Pong游戏训练结果
+三个模型在训练过程中随着训练步数的变化,能得到的平均游戏奖励如下图所示:
 ![DQN result](assets/dqn.png)
 
-# 使用教程
+## 使用教程
 ### 依赖:
 + python2.7
 + gym
@@ -55,3 +56,4 @@ python play.py --rom ./rom_files/pong.bin --use_cuda --model_path ./saved_model/
 # 以可视化的形式来玩游戏
 python play.py --rom ./rom_files/pong.bin --use_cuda --model_path ./saved_model/DQN-pong --viz 0.01
 ```
+
--
GitLab
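
The README touched by this patch names three model variants (DQN, DoubleDQN, DuelingDQN) without spelling out how they differ. The NumPy sketch below is purely illustrative and is not the repository's Fluid implementation; all variable names and numbers are invented for the example, and it only shows the standard target/Q-value formulas from the three linked papers.

```python
# Illustrative sketch of how the three variants differ (NOT the repository's Fluid code;
# names and numbers are made up for this example).
import numpy as np

gamma = 0.99                                  # discount factor
reward = 1.0                                  # reward observed for one transition
done = False                                  # whether the episode ended at s'
q_online_next = np.array([0.2, 0.5, 0.1])     # online network estimates Q(s', a)
q_target_next = np.array([0.3, 0.4, 0.6])     # target network estimates Q(s', a)

# DQN: the target network both selects and evaluates the next action.
dqn_target = reward + (0.0 if done else gamma * q_target_next.max())

# Double DQN: the online network selects the action, the target network evaluates it,
# which reduces plain DQN's tendency to over-estimate Q-values.
a_star = int(q_online_next.argmax())
double_dqn_target = reward + (0.0 if done else gamma * q_target_next[a_star])

# Dueling DQN changes the network head rather than the target: it predicts a state
# value V(s) and per-action advantages A(s, a), recombined as
#   Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
state_value = 0.7
advantages = np.array([0.1, -0.2, 0.4])
dueling_q = state_value + advantages - advantages.mean()

print(dqn_target, double_dqn_target, dueling_q)
```

The practical takeaway: Double DQN only changes how the bootstrap action is chosen, while Dueling DQN changes the network head; otherwise both are trained like plain DQN, which is why the three models share the same training and evaluation commands in this README.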