For more details on installation, see the [installation guide](https://opendilab.github.io/DI-engine/installation/index.html).
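For a quick start, a typical pip-based install looks like the following sketch (assuming the package is published on PyPI under the name `DI-engine`; the linked guide is authoritative):

```bash
# Install the latest DI-engine release from PyPI.
pip install DI-engine
```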
Our Docker images are hosted on [Docker Hub](https://hub.docker.com/repository/docker/opendilab/ding); we provide a `base image` and several `env image`s bundled with common RL environments (a pull-and-run sketch follows the list):
- base: `opendilab/ding:nightly`
- atari: `opendilab/ding:nightly-atari`
- mujoco: `opendilab/ding:nightly-mujoco`
- smac: `opendilab/ding:nightly-smac`
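As a minimal sketch of pulling and entering one of these images (the tag choice here is just an example):

```bash
# Pull the base image; swap the tag for an env image such as nightly-atari.
docker pull opendilab/ding:nightly

# Start an interactive shell in a throwaway container.
docker run -it --rm opendilab/ding:nightly /bin/bash
```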
## Documentation
The detailed documentation is hosted at [doc](https://opendilab.github.io/DI-engine/) ([Chinese documentation](https://di-engine-docs.readthedocs.io/en/main-zh/)).
...
...
P.S.: The `.py` file in `Runnable Demo` can be found in `dizoo`.
![discrete](https://img.shields.io/badge/-discrete-brightgreen) means discrete action space
...
...
![offline](https://img.shields.io/badge/-offlineRL-darkblue) means offline RL environment
![IL](https://img.shields.io/badge/-IL/SL-purple) means Imitation Learning or Supervised Learning Dataset
P.S.: Some environments in Atari, such as **MontezumaRevenge**, are also of the sparse reward type.
## Contribution
We appreciate all contributions to improve DI-engine, covering both algorithms and system design. Please refer to CONTRIBUTING.md for more guidance; our roadmap can be found at [this link](https://github.com/opendilab/DI-engine/projects).
Users can also join our [Slack channel](https://join.slack.com/t/opendilab/shared_invite/zt-v9tmv4fp-nUBAQEH1_Kuyu_q4plBssQ) or our [forum](https://github.com/opendilab/DI-engine/discussions) for more detailed discussion.