From ae194a5fbbd2893b36f1a86d73e1cd51dd7c21a4 Mon Sep 17 00:00:00 2001
From: Xiaoda
Date: Mon, 27 Apr 2020 20:49:49 +0800
Subject: [PATCH] update tutorials/source_en/advanced_use/mixed_precision.md.

---
 tutorials/source_en/advanced_use/mixed_precision.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tutorials/source_en/advanced_use/mixed_precision.md b/tutorials/source_en/advanced_use/mixed_precision.md
index b4f7531b..51963dbd 100644
--- a/tutorials/source_en/advanced_use/mixed_precision.md
+++ b/tutorials/source_en/advanced_use/mixed_precision.md
@@ -15,7 +15,7 @@
 The mixed precision training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
 
-For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching `reduce precision'.
+For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching 'reduce precision'.
 
 ## Computation Process
-- 
GitLab
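
For context on the paragraph being patched, below is a minimal sketch (not part of the patch) of how automatic mixed precision is typically enabled in MindSpore via `amp.build_train_network`. The `SimpleNet` class, its layer sizes, and the hyperparameters are placeholder assumptions for illustration only; exact option names and level behavior may differ between MindSpore versions.

```python
# Minimal sketch: enabling automatic mixed precision in MindSpore.
# SimpleNet, the shapes, and the hyperparameters are illustrative assumptions.
import mindspore.nn as nn
from mindspore import amp
from mindspore.common.initializer import Normal


class SimpleNet(nn.Cell):
    """A toy fully connected network used only to demonstrate the API."""
    def __init__(self):
        super(SimpleNet, self).__init__()
        self.dense = nn.Dense(32, 10, weight_init=Normal(0.02))

    def construct(self, x):
        return self.dense(x)


net = SimpleNet()
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
optimizer = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

# level="O2" casts most of the network to FP16 while keeping selected pieces
# (e.g. the loss computation) in FP32. FP16 operators that still receive FP32
# inputs are handled by the backend with reduced precision, which is what the
# INFO-level 'reduce precision' log entries mentioned in the patched text record.
train_network = amp.build_train_network(net, optimizer, loss, level="O2")
```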