diff --git a/README.md b/README.md
index 0d1f689ee996aae6db3c6a6b0fdbe5c983c0b66e..b49e4573046e362fb5466a7d4bf5a4523ce89f57 100755
--- a/README.md
+++ b/README.md
@@ -114,8 +114,15 @@
 just-in-time (JIT) using [torch's JIT C++ extension loader that relies on
 ninja](https://pytorch.org/docs/stable/cpp_extension.html) to build and
 dynamically link them at runtime.
-**Note:** [PyTorch](https://pytorch.org/) must be installed _before_ installing
-DeepSpeed.
+## Requirements
+* [PyTorch](https://pytorch.org/) must be installed _before_ installing DeepSpeed.
+* For full feature support we recommend PyTorch >= 1.8, ideally the latest stable release.
+* The specific GPUs we develop and test against are listed below. This doesn't mean your GPU will not work if it isn't in this list; it's just that DeepSpeed is most well tested on the following:
+  * NVIDIA: Pascal, Volta, and Ampere architectures
+  * AMD: MI100 and MI200
+
+## PyPI
+We regularly push releases to [PyPI](https://pypi.org/project/deepspeed/) and encourage users to install from there in most cases.
 
 ```bash
 pip install deepspeed
@@ -132,7 +139,8 @@
 If you would like to pre-install any of the DeepSpeed extensions/ops (instead
 of JIT compiling) or install pre-compiled ops via PyPI please see our [advanced
 installation instructions](https://www.deepspeed.ai/tutorials/advanced-install/).
-On Windows you can build wheel with following steps, currently only inference mode is supported.
+## Windows
+DeepSpeed has partial support for Windows. You can build a wheel with the following steps; currently only inference mode is supported.
 1. Install pytorch, such as pytorch 1.8 + cuda 11.1
 2. Install visual cpp build tools, such as VS2019 C++ x64/x86 build tools
 3. Launch cmd console with Administrator privilege for creating required symlink folders
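
The Windows steps in the hunk above can be sketched as a short shell sequence. This is an illustrative assumption, not verbatim DeepSpeed documentation: the pre-flight check is generic, and the commented build commands (`setup.py bdist_wheel`, the wheel path under `dist\`) are a conventional Python wheel-build flow assumed to apply here.

```shell
# Hedged pre-flight check for the Windows wheel build described above.
# The build needs python and pip on PATH plus the VS2019 C++ build tools.
for tool in python pip; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING - install it before building"
  fi
done

# Assumed build sequence, run from the DeepSpeed source checkout in an
# Administrator console (inference mode only, per the README):
#   pip install torch==1.8.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html
#   python setup.py bdist_wheel        # wheel lands in dist\
#   pip install dist\deepspeed-<version>-<tags>.whl
```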