## Reproduce IMPALA with PARL
This example reproduces the IMPALA deep reinforcement learning algorithm with PARL, matching the performance reported in the paper on classic Atari games.

+ IMPALA in
[IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures](https://arxiv.org/abs/1802.01561)
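The core idea of IMPALA is the V-trace off-policy correction, which lets the learner train on trajectories generated by stale actor policies. Below is a minimal NumPy sketch of the V-trace target computation from the paper; the function name and argument shapes are illustrative and are not taken from this repo's implementation:

```python
import numpy as np

def vtrace_targets(behaviour_logp, target_logp, rewards, values,
                   bootstrap_value, gamma=0.99, rho_bar=1.0, c_bar=1.0):
    """Compute V-trace value targets from a trajectory of length T.

    behaviour_logp / target_logp: log-probs of the taken actions under the
    actor's (behaviour) policy and the learner's (target) policy.
    """
    ratios = np.exp(target_logp - behaviour_logp)
    rhos = np.minimum(rho_bar, ratios)   # truncated importance weights
    cs = np.minimum(c_bar, ratios)
    values_tp1 = np.append(values[1:], bootstrap_value)
    deltas = rhos * (rewards + gamma * values_tp1 - values)
    vs = np.zeros_like(values)
    acc = 0.0
    # v_s = V(x_s) + delta_s + gamma * c_s * (v_{s+1} - V(x_{s+1})), backwards in time
    for t in reversed(range(len(rewards))):
        acc = deltas[t] + gamma * cs[t] * acc
        vs[t] = values[t] + acc
    return vs
```

When the behaviour and target policies coincide (ratios of 1) and the truncation levels are 1, the targets reduce to the usual n-step return estimate.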

### Atari games introduction
Please see [here](https://gym.openai.com/envs/#atari) to learn more about Atari games.

### Benchmark result
Results with one learner (on a P40 GPU) and 32 actors (on 32 CPUs).
+ PongNoFrameskip-v4: mean_episode_rewards reaches a score of 18-19 in about 7-10 minutes.
<img src=".benchmark/IMPALA_Pong.jpg" width = "400" height ="300" alt="IMPALA_Pong" />

+ Results of other games in an hour.

<img src=".benchmark/IMPALA_Breakout.jpg" width = "400" height ="300" alt="IMPALA_Breakout" /> <img src=".benchmark/IMPALA_BeamRider.jpg" width = "400" height ="300" alt="IMPALA_BeamRider"/>
<br>
<img src=".benchmark/IMPALA_Qbert.jpg" width = "400" height ="300" alt="IMPALA_Qbert" /> <img src=".benchmark/IMPALA_SpaceInvaders.jpg" width = "400" height ="300" alt="IMPALA_SpaceInvaders"/>

## How to use
### Dependencies
+ [paddlepaddle>=1.5.1](https://github.com/PaddlePaddle/Paddle)
+ [parl](https://github.com/PaddlePaddle/PARL)
+ gym
+ atari-py


### Distributed Training

First, start a local cluster with 32 CPUs:

```bash
xparl start --port 8010 --cpu_num 32
```

Note that if you have already started a master, you don't need to run the above
command. For more information about the cluster, please refer to our
[documentation](https://parl.readthedocs.io/en/latest/parallel_training/setup.html).

Then start the distributed training by running `train.py`:

```bash
python train.py
```
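Under the hood, training follows IMPALA's actor-learner pattern: many actors roll out trajectories with a possibly stale copy of the policy and send them to a single learner, which consumes them off-policy (V-trace corrects for the staleness). Below is a toy, framework-free sketch of that data flow; all names are illustrative, and the actual example uses PARL's cluster API rather than this queue:

```python
import queue

def run_actor(policy_version, traj_queue, n_steps=5):
    """Roll out one trajectory with the actor's (possibly stale) policy copy."""
    traj = {"policy_version": policy_version[0], "steps": n_steps}
    traj_queue.put(traj)

def run_learner(traj_queue, policy_version, n_updates=4):
    """Consume trajectories and update the policy."""
    consumed = []
    for _ in range(n_updates):
        traj = traj_queue.get()   # may come from an older policy version
        consumed.append(traj)
        policy_version[0] += 1    # one gradient step -> new policy version
    return consumed

traj_queue = queue.Queue()
policy_version = [0]              # mutable stand-in for the shared parameters
for _ in range(4):                # four actors each push one trajectory
    run_actor(policy_version, traj_queue)
batches = run_learner(traj_queue, policy_version)
```

Decoupling acting from learning this way is what lets IMPALA scale to many CPU actors feeding one GPU learner, at the cost of the off-policy gap that V-trace repairs.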

### Reference
+ [Parl Cluster Setup](https://parl.readthedocs.io/en/latest/parallel_training/setup.html)
+ [deepmind/scalable_agent](https://github.com/deepmind/scalable_agent)
+ [Ray](https://github.com/ray-project/ray)