Commit adcb2e14 authored by mindspore-ci-bot, committed by Gitee

!317 [Lightweight PR]: update docs/source_en/benchmark.md.

Merge pull request !317 from chenhaozhe/N/A
@@ -22,8 +22,8 @@ For details about the MindSpore pre-trained model, see [Model Zoo](https://gitee
| Network | Network Type | Dataset | MindSpore Version | Resource                 | Precision | Batch Size | Throughput | Speedup |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BERT-Large | Attention | zhwiki | 0.2.0-alpha | Ascend: 1 * Ascend 910 <br/> CPU: 24 Cores | Mixed | 96 | 210 sentences/sec | - |
| | | | | Ascend: 8 * Ascend 910 <br/> CPU: 192 Cores | Mixed | 96 | 1613 sentences/sec | 0.96 |
| BERT-Large | Attention | zhwiki | 0.5.0-beta | Ascend: 1 * Ascend 910 <br/> CPU: 24 Cores | Mixed | 96 | 269 sentences/sec | - |
| | | | | Ascend: 8 * Ascend 910 <br/> CPU: 192 Cores | Mixed | 96 | 2069 sentences/sec | 0.96 |
1. The preceding performance is obtained on ModelArts, the HUAWEI CLOUD AI development platform. The network contains 24 hidden layers, the sequence length is 128 tokens, and the vocabulary contains 21128 tokens.
2. For details about other open source frameworks, see [BERT For TensorFlow](https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow/LanguageModeling/BERT).
\ No newline at end of file
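The Speedup column in the table above is consistent with a scaling-efficiency reading: the 8-device throughput divided by eight times the single-device throughput. Below is a minimal sketch of that arithmetic, assuming this interpretation of the column; the helper name `scaling_efficiency` is illustrative and not part of the benchmark doc.

```python
# Sketch: reproducing the Speedup column, assuming
# Speedup = multi-device throughput / (devices * single-device throughput).

def scaling_efficiency(single_dev_throughput: float,
                       multi_dev_throughput: float,
                       devices: int) -> float:
    """Return the scaling efficiency of a multi-device run."""
    return multi_dev_throughput / (devices * single_dev_throughput)

# 0.2.0-alpha rows: 210 sentences/sec on 1 x Ascend 910, 1613 sentences/sec on 8.
print(round(scaling_efficiency(210, 1613, 8), 2))  # 0.96
# 0.5.0-beta rows: 269 sentences/sec on 1 x Ascend 910, 2069 sentences/sec on 8.
print(round(scaling_efficiency(269, 2069, 8), 2))  # 0.96
```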