From 85a5762cb55c04860338a9b3bd962896f4576ca0 Mon Sep 17 00:00:00 2001
From: chenhaozhe
Date: Mon, 29 Jun 2020 20:22:20 +0800
Subject: [PATCH] update docs/source_en/benchmark.md.

---
 docs/source_en/benchmark.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/source_en/benchmark.md b/docs/source_en/benchmark.md
index b3b34a1f..c81515ff 100644
--- a/docs/source_en/benchmark.md
+++ b/docs/source_en/benchmark.md
@@ -22,8 +22,8 @@ For details about the MindSpore pre-trained model, see [Model Zoo](https://gitee
 
 | Network | Network Type | Dataset | MindSpore Version | Resource | Precision | Batch Size | Throughput | Speedup |
 | --- | --- | --- | --- | --- | --- | --- | --- | --- |
-| BERT-Large | Attention | zhwiki | 0.2.0-alpha | Ascend: 1 * Ascend 910 <br> CPU:24 Cores | Mixed | 96 | 210 sentences/sec | - |
-| | | | | Ascend: 8 * Ascend 910 <br> CPU:192 Cores | Mixed | 96 | 1613 sentences/sec | 0.96 |
+| BERT-Large | Attention | zhwiki | 0.5.0-beta | Ascend: 1 * Ascend 910 <br> CPU:24 Cores | Mixed | 96 | 269 sentences/sec | - |
+| | | | | Ascend: 8 * Ascend 910 <br> CPU:192 Cores | Mixed | 96 | 2069 sentences/sec | 0.96 |
 
 1. The preceding performance is obtained based on ModelArts, the HUAWEI CLOUD AI development platform. The network contains 24 hidden layers, the sequence length is 128 tokens, and the vocabulary contains 21128 tokens.
 2. For details about other open source frameworks, see [BERT For TensorFlow](https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow/LanguageModeling/BERT).
\ No newline at end of file
--
GitLab
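A note on the Speedup column in the patched table: the 0.96 figure is consistent with per-device scaling efficiency, i.e. multi-card throughput divided by (single-card throughput × number of cards). The patch itself does not state the formula, so the sketch below is a hedged sanity check under that assumption; the helper name scaling_efficiency is illustrative, and the numbers are copied from the table rows.

```python
# Sanity check for the Speedup column, assuming it denotes per-device
# scaling efficiency: multi-card throughput / (single-card throughput * cards).
# This definition is an assumption; the patch does not spell it out.

def scaling_efficiency(single_card_tps: float, multi_card_tps: float, cards: int) -> float:
    """Throughput achieved per card, relative to the single-card baseline."""
    return multi_card_tps / (single_card_tps * cards)

old = scaling_efficiency(210, 1613, 8)   # 0.2.0-alpha rows
new = scaling_efficiency(269, 2069, 8)   # 0.5.0-beta rows
print(f"old: {old:.2f}, new: {new:.2f}")  # both round to 0.96, matching the table
```

Under this reading, the patch raises absolute throughput (210 → 269 and 1613 → 2069 sentences/sec) while the 8-card scaling efficiency stays at 0.96, which is why the Speedup cell is unchanged.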