Unverified commit e082d475, authored by Minjia Zhang, committed by GitHub

updating pld docs (#517)

Parent 41fb24b3
@@ -9,6 +9,6 @@ date: 2020-10-29 00:00:00
We introduce a new technology called progressive layer dropping (PLD) to speed up the pre-training of Transformer-based networks through efficient and robust compressed training. The pre-training step of Transformer networks often suffers from unbearably high computational cost. We analyze the training dynamics and stability of Transformer networks and propose PLD, which sparsely updates Transformer blocks following a progressive dropping schedule that smoothly increases the layer dropping rate for each mini-batch as training evolves along both the temporal and the model depth dimensions. PLD allows pre-training to reach similar accuracy on downstream tasks **2.5X faster** and makes training **24% faster** for the same number of samples, without requiring excessive hardware resources.
-* For a detailed technical deep dive, see our [technical report]().
+* For a detailed technical deep dive, see our [technical report](https://arxiv.org/pdf/2010.13369.pdf).
* For more information on how to use PLD, see our [Progressive layer dropping tutorial](https://www.deepspeed.ai/tutorials/progressive_layer_dropping/).
* The source code for PLD is now available at the [DeepSpeed repo](https://github.com/microsoft/deepspeed).
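To make the progressive schedule concrete, below is a minimal Python sketch of one way such a schedule could be realized. The exponential decay of the keep ratio toward a floor `theta_bar` and the linear depth-wise scaling are illustrative assumptions inspired by the description above and by stochastic depth; the exact schedule used by DeepSpeed is given in the technical report and tutorial linked above.

```python
import math
import random


def keep_probability(step, layer_idx, num_layers, theta_bar=0.5, gamma=0.001):
    """Illustrative PLD-style keep probability (assumed form, not DeepSpeed's exact code).

    Temporal dimension: a global keep ratio that starts at 1.0 and decays
    exponentially toward the floor ``theta_bar`` as training progresses.
    Depth dimension: deeper layers are dropped more often, in the spirit of
    stochastic depth.
    """
    theta_t = (1.0 - theta_bar) * math.exp(-gamma * step) + theta_bar
    # Scale the drop rate linearly with depth: the first block is almost
    # always kept, the last block sees the full drop rate (1 - theta_t).
    depth_scale = (layer_idx + 1) / num_layers
    return 1.0 - depth_scale * (1.0 - theta_t)


def forward_with_pld(blocks, x, step, training=True):
    """Sparsely apply Transformer blocks for one mini-batch according to the schedule."""
    num_layers = len(blocks)
    for layer_idx, block in enumerate(blocks):
        if training and random.random() > keep_probability(step, layer_idx, num_layers):
            continue  # skip this block for the current mini-batch; x passes through unchanged
        x = block(x)
    return x
```

Early in training (small `step`) almost every block is updated; as training evolves, more blocks are skipped per mini-batch, which is where the wall-clock savings come from.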
@@ -3,7 +3,7 @@ title: "Accelerating Training of Transformer-Based Language Models with Progress
---
-In this tutorial, we introduce progressive layer dropping (PLD) in DeepSpeed and provide examples of how to use it. PLD allows Transformer networks such as BERT to be trained 24% faster for the same number of samples and 2.5 times faster to reach similar accuracy on downstream tasks. A detailed description of PLD and the experimental results are available in our [technical report](git push --set-upstream origin staging-pld-v1).
+In this tutorial, we introduce progressive layer dropping (PLD) in DeepSpeed and provide examples of how to use it. PLD allows Transformer networks such as BERT to be trained 24% faster for the same number of samples and 2.5 times faster to reach similar accuracy on downstream tasks. A detailed description of PLD and the experimental results are available in our [technical report](https://arxiv.org/pdf/2010.13369.pdf).
To illustrate how to use PLD in DeepSpeed, we show how to enable it when pre-training a BERT model and how to fine-tune the pre-trained model on the GLUE datasets.
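As a preview of what enabling PLD looks like, the sketch below writes out a DeepSpeed configuration containing a `progressive_layer_drop` section with its `theta` and `gamma` options, which is the mechanism covered in the tutorial; the file name and the remaining fields (such as the batch size) are placeholders for illustration.

```python
import json

# Sketch of a DeepSpeed JSON configuration that enables PLD.  The
# "progressive_layer_drop" section follows the options documented in the
# PLD tutorial; the other fields are illustrative placeholders.
ds_config = {
    "train_batch_size": 4096,
    "progressive_layer_drop": {
        "enabled": True,
        # theta: controls the extent of layer dropping; smaller values
        # drop layers more aggressively (faster, potentially less robust).
        "theta": 0.5,
        # gamma: controls how quickly the dropping schedule ramps up
        # over training steps.
        "gamma": 0.001,
    },
}

# The resulting file is passed to the training script via --deepspeed_config.
with open("ds_config_pld.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```

With the configuration in place, the remainder of the tutorial covers launching BERT pre-training with PLD enabled and fine-tuning the resulting checkpoint on GLUE.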
@@ -152,4 +152,4 @@ The fine-tuning results can be found under the "logs" directory, and below are e
| Bert_{base} (original) | 66.4 | 88.9/84.8 | 87.1/89.2 | 52.1 | 93.5 | 90.5 | 71.2/89.2 | 84.6/83.4 | 80.7 |
| Bert_{base} (Our impl) | 67.8 | 88.0/86.0 | 89.5/89.2 | 52.5 | 91.2 | 87.1 | 89.0/90.6 | 82.5/83.4 | 82.1 |
| PLD | 69.3 | 86.6/84.3 | 90.0/89.6 | 55.8 | 91.6 | 90.7 | 89.6/91.2 | 84.1/83.8 | 82.9 |
-| Lr | 7e-5 | 9e-5 | 7e-5 | 5e-5 | 7e-5 | 9e-5 | 2e-4 | 3e-5 | |
+| Lr | 7e-5 | 9e-5 | 7e-5 | 5e-5 | 7e-5 | 9e-5 | 2e-4 | 3e-5 | |