diff --git a/tutorials/source_en/advanced_use/mixed_precision.md b/tutorials/source_en/advanced_use/mixed_precision.md
index 10c87f9ee01e5159d8b65c54e7ec85fbd35e32d3..b4f7531bd1dc02275e5f256c15f4b6d58159a3e9 100644
--- a/tutorials/source_en/advanced_use/mixed_precision.md
+++ b/tutorials/source_en/advanced_use/mixed_precision.md
@@ -15,6 +15,8 @@
 
 The mixed precision training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
 
+For FP16 operators, if the input data type is FP32, the MindSpore backend will automatically handle it with reduced precision. Users can check which operators ran with reduced precision by enabling the INFO log and then searching for `reduce precision`.
+
 ## Computation Process
 
 The following figure shows the typical computation process of mixed precision in MindSpore.
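
For reference, a minimal sketch of the log check described in the added paragraph. It assumes the MindSpore log level is controlled by the `GLOG_v` environment variable (where `1` corresponds to INFO) and that the variable must be set before `mindspore` is imported; adjust to your own logging setup if it differs.

```python
import os

# Assumption: the MindSpore backend reads GLOG_v when the package is imported;
# setting it to "1" enables INFO-level logging.
os.environ["GLOG_v"] = "1"

import mindspore  # INFO messages, including any "reduce precision" notes, are emitted to stderr

# ... build and run the network as usual, then search the captured log output
# for the phrase "reduce precision", e.g.:
#   python train.py 2> train.log && grep "reduce precision" train.log
```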