diff --git a/docs/_pages/features.md b/docs/_pages/features.md
index 344c72222fb1be7178be1c7c7453a2641f2657e6..36e3e70e493b91e90e33ed3fa28da1caac42d9a0 100755
--- a/docs/_pages/features.md
+++ b/docs/_pages/features.md
@@ -103,6 +103,12 @@ during the backward computation, the activation gradients are short lived while
 gradients are long lived. CMO transfers activation checkpoints and parameter
 gradients to contiguous buffers preventing memory fragmentation.
 
+## ZeRO-Offload
+
+ZeRO-Offload pushes the boundary of the maximum model size that can be trained efficiently using minimal GPU resources by exploiting the computation and memory resources of both GPUs and their host CPUs. It allows training up to 13-billion-parameter models on a single NVIDIA V100 GPU, 10x larger than the state-of-the-art, while retaining high training throughput of over 30 teraflops per GPU.
+
+For more details see the [ZeRO-Offload release blog](https://www.microsoft.com/en-us/research/?p=689370&secret=iSlooB) and the [tutorial](/tutorials/zero-offload/) on its integration with DeepSpeed.
+
 ## Additional Memory and Bandwidth Optimizations
 
 ### Smart Gradient Accumulation
diff --git a/docs/_posts/2020-09-09-ZeRO-Offload.md b/docs/_posts/2020-09-09-ZeRO-Offload.md
new file mode 100755
index 0000000000000000000000000000000000000000..f61884fb8db7ad602c83320952aef47494a0ffd3
--- /dev/null
+++ b/docs/_posts/2020-09-09-ZeRO-Offload.md
@@ -0,0 +1,14 @@
+---
+layout: single
+title: "10x bigger model training on a single GPU with ZeRO-Offload"
+excerpt: ""
+categories: news
+new_post: true
+date: 2020-09-09 00:00:00
+---
+
+We introduce a new technology called ZeRO-Offload to enable **10x bigger model training on a single GPU**. ZeRO-Offload extends ZeRO-2 to leverage both CPU and GPU memory for training large models. Using a machine with **a single GPU**, our users can now run **models of up to 13 billion parameters** without running out of memory, 10x bigger than existing approaches allow, while obtaining competitive throughput. This feature democratizes multi-billion-parameter model training and opens the door for many deep learning practitioners to explore bigger and better models.
+
+* For more information on ZeRO-Offload, see our [press release]({{ site.press_release_v3 }}).
+* For more information on how to use ZeRO-Offload, see our [ZeRO-Offload tutorial](https://www.deepspeed.ai/tutorials/zero-offload/).
+* The source code for ZeRO-Offload can be found in the [DeepSpeed repo](https://github.com/microsoft/deepspeed).
diff --git a/docs/index.md b/docs/index.md
index 4aa29673ecc6ae3b89b4d0a087cf1d382329c22d..fd267f273758b3218774413882c294de01456e05 100755
--- a/docs/index.md
+++ b/docs/index.md
@@ -10,7 +10,6 @@ efficient, and effective.
 10x Larger Models
 10x Faster Training
 Minimal Code Change
-
 DeepSpeed can train DL models with over a hundred billion parameters on current
 generation of GPU clusters, while achieving over 10x in system performance
 compared to the state-of-art. Early adopters of DeepSpeed have already produced
@@ -157,6 +156,9 @@ overview](/features/) for descriptions and usage.
   * Activation Partitioning
   * Constant Buffer Optimization
   * Contiguous Memory Optimization
+* [ZeRO-Offload](/features/#zero-offload)
+  * Leverage both CPU/GPU memory for model training
+  * Support 10B model training on a single GPU
 * [Additional Memory and Bandwidth Optimizations](/features/#additional-memory-and-bandwidth-optimizations)
   * Smart Gradient Accumulation
   * Communication/Computation Overlap
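For orientation on how the feature documented in this change is enabled in practice, below is a minimal sketch of a DeepSpeed training setup with ZeRO-Offload turned on. The configuration keys (`zero_optimization.stage` and `cpu_offload`) and the `deepspeed.initialize` call follow the ZeRO-Offload tutorial linked in the diff; the toy model, batch size, and learning rate are placeholder assumptions, not values taken from this change.

```python
# Minimal sketch of enabling ZeRO-Offload in a DeepSpeed training script.
# Assumptions (not part of the diff): the toy model, batch size, and learning
# rate are placeholders; the config keys follow the ZeRO-Offload tutorial.
import torch
import deepspeed

ds_config = {
    "train_batch_size": 8,
    "fp16": {"enabled": True},   # mixed precision to reduce GPU memory pressure
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "zero_optimization": {
        "stage": 2,              # ZeRO-2: partition optimizer states and gradients
        "cpu_offload": True,     # ZeRO-Offload: keep optimizer states and updates on the host CPU
    },
}

# Stand-in model; in practice this is the multi-billion-parameter model.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 1024),
)

# DeepSpeed wraps the model and builds the CPU-offloaded optimizer from the
# config; `config_params` is the keyword used in 2020-era releases
# (newer releases also accept `config`).
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config_params=ds_config,
)

# One training step: forward and backward run on the GPU, while the parameter
# update is performed against the optimizer states held in CPU memory.
x = torch.randn(8, 1024, device=model_engine.device, dtype=torch.half)
loss = model_engine(x).float().pow(2).mean()
model_engine.backward(loss)
model_engine.step()
```

A script like this would normally be started with DeepSpeed's launcher (for example `deepspeed train.py`), which sets up the distributed environment that `deepspeed.initialize` expects even when only a single GPU is used.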