Commit 7f7c2b55 authored by X Xiaoda, committed by Gitee

update tutorials/source_en/advanced_use/mixed_precision.md.

Parent 45755f3d
@@ -15,6 +15,8 @@
The mixed precision training method accelerates the deep learning neural network training process by using both single-precision and half-precision data formats, while maintaining the network accuracy achieved by single-precision training.
Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the MindSpore backend automatically handles it with reduced precision. Users can check the reduced-precision operators by enabling the INFO log level and then searching for `reduce precision`.

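As a minimal sketch, mixed precision can be turned on through the `amp_level` parameter of the high-level `Model` API; the network, loss function, and optimizer below are illustrative placeholders, and the set of supported `amp_level` values may vary across MindSpore versions.

```python
import mindspore.nn as nn
from mindspore import Model

# Placeholder network, loss, and optimizer for illustration only.
net = nn.Dense(16, 10)
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True)
opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

# amp_level="O2" casts most of the network to FP16 while keeping
# precision-sensitive parts (e.g., BatchNorm) in FP32.
model = Model(net, loss_fn=loss, optimizer=opt, amp_level="O2")
```
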
## Computation Process
The following figure shows the typical computation process of mixed precision in MindSpore.
......