Unverified commit c5c9aaaa authored by: J Jeff Rasley, committed by: GitHub

update bert images (#245)

Parent 0c77f878
@@ -373,9 +373,9 @@ for more details in
## DeepSpeed Single GPU Throughput Results
-![DeepSpeed Single GPU Bert Training Throughput 128](/assets/images/deepspeed-throughput-seq128.png){: .align-center}
+![DeepSpeed Single GPU Bert Training Throughput 128](/assets/images/transformer_kernel_perf_seq128.PNG){: .align-center}
-![DeepSpeed Single GPU Bert Training Throughput 512](/assets/images/deepspeed-throughput-seq512.png){: .align-center}
+![DeepSpeed Single GPU Bert Training Throughput 512](/assets/images/transformer_kernel_perf_seq128.PNG){: .align-center}
Compared to SOTA, DeepSpeed significantly improves single-GPU performance for transformer-based models like BERT. The figures above show the single-GPU throughput of training BERT-Large optimized through DeepSpeed, compared with two well-known PyTorch implementations, NVIDIA BERT and HuggingFace BERT. DeepSpeed reaches throughputs as high as 64 and 53 teraflops (corresponding to 272 and 52 samples/second) for sequence lengths of 128 and 512, respectively, exhibiting up to 28% throughput improvement over NVIDIA BERT and up to 62% over HuggingFace BERT. We also support up to 1.8x larger batch sizes without running out of memory.
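As a rough back-of-the-envelope check on how the teraflops and samples/second figures relate, the short Python sketch below derives the implied compute per training sample from the numbers quoted above. It uses only the figures from this paragraph and is purely illustrative; it is not part of DeepSpeed or its benchmarks.

```python
# Back-of-the-envelope check of the throughput numbers quoted above.
# Implied compute per sample = sustained TFLOPS / samples processed per second.
# All inputs are the figures from the paragraph; nothing here is measured.

reported = {
    # sequence length: (sustained teraflops, samples per second)
    128: (64, 272),
    512: (53, 52),
}

for seq_len, (tflops, samples_per_sec) in reported.items():
    tflops_per_sample = tflops / samples_per_sec
    print(f"seq {seq_len}: ~{tflops_per_sample:.2f} TFLOPs per sample "
          f"({tflops} TFLOPS at {samples_per_sec} samples/s)")
```

The quoted numbers imply roughly 0.24 TFLOPs per sample at sequence length 128 versus about 1.0 TFLOPs per sample at 512, i.e. the longer sequences cost roughly 4x the compute per sample, which is consistent with the per-sample work growing with sequence length.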