From 7f7c2b5539174dd35ebc1b56b9dcea6470136ba9 Mon Sep 17 00:00:00 2001
From: Xiaoda
Date: Mon, 27 Apr 2020 20:25:37 +0800
Subject: [PATCH] update tutorials/source_en/advanced_use/mixed_precision.md.

---
 tutorials/source_en/advanced_use/mixed_precision.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/tutorials/source_en/advanced_use/mixed_precision.md b/tutorials/source_en/advanced_use/mixed_precision.md
index 10c87f9e..b4f7531b 100644
--- a/tutorials/source_en/advanced_use/mixed_precision.md
+++ b/tutorials/source_en/advanced_use/mixed_precision.md
@@ -15,6 +15,8 @@
 
 The mixed precision training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
 
+For FP16 operators, if the input data type is FP32, the MindSpore backend will automatically handle it with reduced precision. Users can check the reduced-precision operators by enabling the INFO log and then searching for `reduce precision`.
+
 ## Computation Process
 
 The following figure shows the typical computation process of mixed precision in MindSpore.
--
GitLab
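
The reduced-precision check described in the added paragraph can be reproduced roughly as follows. This is a minimal sketch, assuming that MindSpore's log level is controlled by the `GLOG_v` environment variable (1 = INFO) and that `GLOG_logtostderr`/`GLOG_log_dir` redirect the logs to files; the log directory name and the helper `find_reduce_precision` are illustrative and not part of the patch.

```python
# Minimal sketch: surface MindSpore's "reduce precision" messages.
# Assumptions (not taken from the patch): GLOG_v=1 selects INFO-level logs,
# GLOG_logtostderr=0 plus GLOG_log_dir write them to files, and the
# "./ms_logs" directory is an arbitrary illustrative choice.
import os

# These variables must be set before `import mindspore` to take effect.
os.environ["GLOG_v"] = "1"             # 0: DEBUG, 1: INFO, 2: WARNING, 3: ERROR
os.environ["GLOG_logtostderr"] = "0"   # 0: write logs to files, 1: print to screen
os.environ["GLOG_log_dir"] = "./ms_logs"

import mindspore  # noqa: F401  (the mixed precision training script would run here)


def find_reduce_precision(log_dir: str = "./ms_logs") -> list:
    """Collect log lines mentioning operators handled with reduced precision."""
    hits = []
    for root, _, files in os.walk(log_dir):
        for name in files:
            with open(os.path.join(root, name), errors="ignore") as f:
                hits.extend(line.strip() for line in f if "reduce precision" in line)
    return hits


if __name__ == "__main__":
    for line in find_reduce_precision():
        print(line)
```

Equivalently, once the INFO logs have been written to files, the same check can be done from a shell by searching the log directory for the string `reduce precision`.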