This document describes the computation process by using examples of automatic and manual mixed precision.
## Automatic Mixed Precision
To use automatic mixed precision, invoke the corresponding API with the network to be trained and the optimizer as inputs. The API converts the operators of the entire network into FP16 operators (except the BatchNorm and Loss operators).
In addition, once mixed precision is employed, a loss scale must be used to avoid data overflow: FP16 covers a much narrower range of values than FP32, so gradients that fall outside that range are lost unless the loss is scaled up before backpropagation and the resulting gradients are scaled back down before the weight update.
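As a minimal sketch of this flow, assuming MindSpore's amp.build_train_network API and a small two-layer network (Net, predict, and label here are illustrative placeholders, and import paths can vary across MindSpore versions):

```python
import numpy as np
import mindspore.nn as nn
from mindspore import Tensor, amp
# Import path for the loss scale manager varies across MindSpore versions.
from mindspore.train.loss_scale_manager import FixedLossScaleManager

# A small illustrative network: one dense layer followed by ReLU.
class Net(nn.Cell):
    def __init__(self, input_dim, output_dim):
        super(Net, self).__init__()
        self.dense = nn.Dense(input_dim, output_dim)
        self.relu = nn.ReLU()

    def construct(self, x):
        return self.relu(self.dense(x))

net = Net(512, 128)
loss = nn.MSELoss()
optimizer = nn.Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)

# The API takes the network and optimizer (plus the loss function) and returns
# a training cell; at level "O2" it casts operators to FP16 while keeping
# BatchNorm (and the loss) in FP32, with loss scaling wired in.
train_network = amp.build_train_network(
    net, optimizer, loss, level="O2",
    loss_scale_manager=FixedLossScaleManager(drop_overflow_update=False))

# One training step on dummy data.
predict = Tensor(np.ones([64, 512], np.float32) * 0.01)
label = Tensor(np.zeros([64, 128], np.float32))
output = train_network(predict, label)
```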
## Manual Mixed Precision
MindSpore also supports manual mixed precision. It is assumed that only one dense layer of the network needs to be computed in FP32, while the remaining layers are computed in FP16.
The following is the procedure for implementing manual mixed precision:
1. Define the network. This step is similar to step 2 of the automatic mixed precision procedure.
2. Configure the mixed precision. Use net.to_float(mstype.float16) to set all operators of the cell and its sub-cells to FP16. Then, configure the dense layer to use FP32.
3. Use TrainOneStepCell to encapsulate the network model and the optimizer, as shown in the sketch below.
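A minimal sketch of these three steps, reusing the illustrative Net from above (the attribute name net.dense and the training hyperparameters are assumptions, not prescribed values):

```python
import numpy as np
import mindspore.nn as nn
from mindspore import Tensor
from mindspore import dtype as mstype

# Step 1: define the network; self.dense is the layer we will keep in FP32.
class Net(nn.Cell):
    def __init__(self, input_dim, output_dim):
        super(Net, self).__init__()
        self.dense = nn.Dense(input_dim, output_dim)
        self.relu = nn.ReLU()

    def construct(self, x):
        return self.relu(self.dense(x))

net = Net(512, 128)

# Step 2: cast the cell and all its sub-cells to FP16, then move the dense
# layer back to FP32.
net.to_float(mstype.float16)
net.dense.to_float(mstype.float32)

# Step 3: attach the loss and wrap network and optimizer in TrainOneStepCell.
loss = nn.MSELoss()
optimizer = nn.Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)
net_with_loss = nn.WithLossCell(net, loss)
train_network = nn.TrainOneStepCell(net_with_loss, optimizer)
train_network.set_train()

# One training step on dummy data.
predict = Tensor(np.ones([64, 512], np.float32) * 0.01)
label = Tensor(np.zeros([64, 128], np.float32))
output = train_network(predict, label)
```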