DI-engine


    Updated on 2021.09.30 DI-engine-v0.2.0 (beta)

    Introduction to DI-engine (beta)

    DI-engine is a generalized Decision Intelligence engine. It supports most basic deep reinforcement learning (DRL) algorithms, such as DQN, PPO, SAC, and domain-specific algorithms like QMIX in multi-agent RL, GAIL in inverse RL, and RND in exploration problems. Various training pipelines and customized decision AI applications are also supported. Have fun with exploration and exploitation.


    Installation

    You can simply install DI-engine from PyPI with the following command:

    pip install DI-engine

If you use Anaconda or Miniconda, you can install DI-engine from the opendilab conda channel with the following command:

    conda install -c opendilab di-engine

For more information, refer to the installation guide.
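After installing, you can check that the package resolves from Python (the library's import name is ding; the printed path depends on your environment):

python -c "import ding; print(ding.__file__)"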

Our Docker Hub repository can be found here; we provide a base image and environment images bundled with common RL environments:

    • base: opendilab/ding:nightly
    • atari: opendilab/ding:nightly-atari
    • mujoco: opendilab/ding:nightly-mujoco
    • smac: opendilab/ding:nightly-smac
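For example, to pull the base image and open a shell inside it (standard Docker usage, with the tags listed above):

docker pull opendilab/ding:nightly
docker run -it opendilab/ding:nightly /bin/bash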

    Documentation

The detailed documentation is hosted at doc (Chinese docs).

    Quick Start

• 3 Minutes Kickoff

• 3 Minutes Kickoff (colab)

• 3 Minutes Kickoff (Chinese version, kaggle)

Bonus: Train an RL agent with one line of code:

    ding -m serial -e cartpole -p dqn -s 0
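The same run can also be launched from Python. The sketch below assumes the serial_pipeline entry point in ding.entry and the cartpole DQN config module layout in dizoo from this release; both paths may differ in other versions.

```python
# A minimal Python equivalent of the one-line CLI above (a sketch; the
# entry point and config module paths are assumed from this release's
# code layout and may change between versions).
from ding.entry import serial_pipeline
from dizoo.classic_control.cartpole.config.cartpole_dqn_config import (
    cartpole_dqn_config,
    cartpole_dqn_create_config,
)

if __name__ == "__main__":
    # Equivalent to: ding -m serial -c cartpole_dqn_config.py -s 0
    serial_pipeline((cartpole_dqn_config, cartpole_dqn_create_config), seed=0)
```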

    Feature

    Algorithm Versatility

| No. | Algorithm | Label | Implementation | Runnable Demo |
| :-: | :-: | :-: | :-- | :-- |
| 1 | DQN | discrete | policy/dqn | python3 -u cartpole_dqn_main.py / ding -m serial -c cartpole_dqn_config.py -s 0 |
| 2 | C51 | discrete | policy/c51 | ding -m serial -c cartpole_c51_config.py -s 0 |
| 3 | QRDQN | discrete | policy/qrdqn | ding -m serial -c cartpole_qrdqn_config.py -s 0 |
| 4 | IQN | discrete | policy/iqn | ding -m serial -c cartpole_iqn_config.py -s 0 |
| 5 | Rainbow | discrete | policy/rainbow | ding -m serial -c cartpole_rainbow_config.py -s 0 |
| 6 | SQL | discrete, continuous | policy/sql | ding -m serial -c cartpole_sql_config.py -s 0 |
| 7 | R2D2 | dist, discrete | policy/r2d2 | ding -m serial -c cartpole_r2d2_config.py -s 0 |
| 8 | A2C | discrete | policy/a2c | ding -m serial -c cartpole_a2c_config.py -s 0 |
| 9 | PPO | discrete, continuous | policy/ppo | python3 -u cartpole_ppo_main.py / ding -m serial_onpolicy -c cartpole_ppo_config.py -s 0 |
| 10 | PPG | discrete | policy/ppg | python3 -u cartpole_ppg_main.py |
| 11 | ACER | discrete, continuous | policy/acer | ding -m serial -c cartpole_acer_config.py -s 0 |
| 12 | IMPALA | dist, discrete | policy/impala | ding -m serial -c cartpole_impala_config.py -s 0 |
| 13 | DDPG | continuous | policy/ddpg | ding -m serial -c pendulum_ddpg_config.py -s 0 |
| 14 | TD3 | continuous | policy/td3 | python3 -u pendulum_td3_main.py / ding -m serial -c pendulum_td3_config.py -s 0 |
| 15 | SAC | continuous | policy/sac | ding -m serial -c pendulum_sac_config.py -s 0 |
| 16 | QMIX | MARL | policy/qmix | ding -m serial -c smac_3s5z_qmix_config.py -s 0 |
| 17 | COMA | MARL | policy/coma | ding -m serial -c smac_3s5z_coma_config.py -s 0 |
| 18 | QTran | MARL | policy/qtran | ding -m serial -c smac_3s5z_qtran_config.py -s 0 |
| 19 | WQMIX | MARL | policy/wqmix | ding -m serial -c smac_3s5z_wqmix_config.py -s 0 |
| 20 | CollaQ | MARL | policy/collaq | ding -m serial -c smac_3s5z_collaq_config.py -s 0 |
| 21 | GAIL | IL | reward_model/gail | ding -m serial_reward_model -c cartpole_dqn_config.py -s 0 |
| 22 | SQIL | IL | entry/sqil | ding -m serial_sqil -c cartpole_sqil_config.py -s 0 |
| 23 | DQFD | discrete, IL | policy/dqfd | ding -m serial_dqfd -c cartpole_dqfd_config.py -s 0 |
| 24 | HER | exp | reward_model/her | python3 -u bitflip_her_dqn.py |
| 25 | RND | exp | reward_model/rnd | python3 -u cartpole_ppo_rnd_main.py |
| 26 | CQL | offline | policy/cql | python3 -u d4rl_cql_main.py |
| 27 | PER | other | worker/replay_buffer | rainbow demo |
| 28 | GAE | other | rl_utils/gae | ppo demo |
| 29 | D4PG | continuous | policy/d4pg | python3 -u pendulum_d4pg_config.py |

• discrete means discrete action space; it is the only label attached to the standard DRL algorithms (1-15)

• continuous means continuous action space; it is likewise only used as a label for the standard DRL algorithms (1-15)

• dist means distributed-training (collector-learner parallel) RL algorithm

• MARL means multi-agent RL algorithm

• exp means RL algorithm related to exploration and sparse reward

• IL means Imitation Learning, including Behaviour Cloning, Inverse RL, and Adversarial Structured IL

• offline means offline RL algorithm

• other means an algorithm from another sub-direction, usually used as a plug-in in the whole pipeline

P.S.: The .py files in the Runnable Demo column can be found in dizoo.
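Rows 27-28 above are plug-ins rather than standalone policies. To make the GAE entry concrete, here is a self-contained sketch of the quantity that rl_utils/gae computes; it illustrates the formula only and is not DI-engine's actual function signature.

```python
# Self-contained illustration of Generalized Advantage Estimation, the
# quantity computed by the rl_utils/gae plug-in listed above.
# NOTE: this is a from-scratch sketch, not DI-engine's API.
import torch


def gae(reward, value, next_value, done, gamma=0.99, lambda_=0.95):
    """A_t = sum_l (gamma * lambda)^l * delta_{t+l}, computed backwards in time."""
    T = reward.shape[0]
    adv = torch.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        nonterminal = 1.0 - done[t].float()
        # One-step TD residual: delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)
        delta = reward[t] + gamma * next_value[t] * nonterminal - value[t]
        running = delta + gamma * lambda_ * nonterminal * running
        adv[t] = running
    return adv


# Toy usage on a length-4 trajectory:
T = 4
advantages = gae(torch.rand(T), torch.rand(T), torch.rand(T), torch.zeros(T))
```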

    Environment Versatility

| No. | Environment | Label | Visualization | dizoo link |
| :-: | :-- | :-- | :-: | :-: |
| 1 | atari | discrete | original | dizoo link |
| 2 | box2d/bipedalwalker | continuous | original | dizoo link |
| 3 | box2d/lunarlander | discrete | original | dizoo link |
| 4 | classic_control/cartpole | discrete | original | dizoo link |
| 5 | classic_control/pendulum | continuous | original | dizoo link |
| 6 | competitive_rl | discrete, selfplay | original | dizoo link |
| 7 | gfootball | discrete, sparse, selfplay | original | dizoo link |
| 8 | minigrid | discrete, sparse | original | dizoo link |
| 9 | mujoco | continuous | original | dizoo link |
| 10 | multiagent_particle | discrete, marl | original | dizoo link |
| 11 | overcooked | discrete, marl | original | dizoo link |
| 12 | procgen | discrete | original | dizoo link |
| 13 | pybullet | continuous | original | dizoo link |
| 14 | smac | discrete, marl, selfplay, sparse | original | dizoo link |
| 15 | d4rl | offline | original | dizoo link |
| 16 | league_demo | discrete, selfplay | original | dizoo link |
| 17 | pomdp atari | discrete | | dizoo link |
| 18 | bsuite | discrete | original | dizoo link |
| 19 | ImageNet | IL | original | dizoo link |
| 20 | slime_volleyball | discrete, selfplay | original | dizoo link |
| 21 | gym_hybrid | hybrid | original | dizoo link |
| 22 | GoBigger | hybrid, marl, selfplay | original | dizoo link |

• discrete means discrete action space

• continuous means continuous action space

• hybrid means hybrid (discrete + continuous) action space

• marl means multi-agent RL environment

• sparse means environment related to exploration and sparse reward

• offline means offline RL environment

• IL means Imitation Learning or Supervised Learning dataset

• selfplay means environment that allows agent vs. agent battle

P.S. Some environments in Atari, such as MontezumaRevenge, are also of the sparse-reward type.
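To make the action-space labels concrete, the snippet below uses plain Gym (nothing DI-engine-specific; environment ids follow the 2021-era gym registry) to print the spaces behind two entries in the table:

```python
# Plain Gym check of the action-space labels above (not DI-engine-specific;
# env ids follow the 2021-era gym registry).
import gym

print(gym.make('CartPole-v0').action_space)  # Discrete(2)        -> "discrete" label
print(gym.make('Pendulum-v0').action_space)  # Box(1,) in [-2, 2] -> "continuous" label
```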

    Contribution

We appreciate all contributions to improve DI-engine, in both algorithms and system design. Please refer to CONTRIBUTING.md for more guidance. Our roadmap can be accessed via this link.

Users can also join our Slack channel or our forum for more detailed discussion.

    For future plans or milestones, please refer to our GitHub Projects.

    Citation

@misc{ding,
    title={{DI-engine: OpenDILab} Decision Intelligence Engine},
    author={DI-engine Contributors},
    publisher={GitHub},
    howpublished={\url{https://github.com/opendilab/DI-engine}},
    year={2021},
}

    License

DI-engine is released under the Apache 2.0 license.
