Commit 85a5762c authored by chenhaozhe, committed by Gitee

update docs/source_en/benchmark.md.

Parent 3795e3b9
@@ -22,8 +22,8 @@ For details about the MindSpore pre-trained model, see [Model Zoo](https://gitee.
| Network | Network Type | Dataset | MindSpore Version | Resource                 | Precision | Batch Size | Throughput | Speedup |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BERT-Large | Attention | zhwiki | 0.2.0-alpha | Ascend: 1 * Ascend 910 <br> CPU: 24 Cores | Mixed | 96 | 210 sentences/sec | - |
| | | | | Ascend: 8 * Ascend 910 <br> CPU: 192 Cores | Mixed | 96 | 1613 sentences/sec | 0.96 |
| BERT-Large | Attention | zhwiki | 0.5.0-beta | Ascend: 1 * Ascend 910 <br> CPU: 24 Cores | Mixed | 96 | 269 sentences/sec | - |
| | | | | Ascend: 8 * Ascend 910 <br> CPU: 192 Cores | Mixed | 96 | 2069 sentences/sec | 0.96 |
1. The preceding performance was measured on ModelArts, the HUAWEI CLOUD AI development platform. The network contains 24 hidden layers, the sequence length is 128 tokens, and the vocabulary contains 21,128 tokens.
2. For details about other open source frameworks, see [BERT For TensorFlow](https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow/LanguageModeling/BERT).
\ No newline at end of file
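
The Speedup column is not defined in this excerpt; a minimal worked check, assuming it denotes per-device scaling efficiency (multi-device throughput divided by the device count times single-device throughput), reproduces the 0.96 figure from the 0.5.0-beta rows. The symbols $T_1$ and $T_8$ are introduced here for illustration and do not appear in the original doc:

```latex
% Assumed definition: scaling efficiency of the 8-device run relative to
% the single-device run (T_1, T_8 are hypothetical symbols, not from the doc).
\[
\mathrm{Speedup}
  = \frac{T_8}{8 \cdot T_1}
  = \frac{2069\ \text{sentences/sec}}{8 \times 269\ \text{sentences/sec}}
  \approx 0.96
\]
```

The same arithmetic on the 0.2.0-alpha rows, $1613 / (8 \times 210) \approx 0.96$, agrees with the table, which supports this reading of the column.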