Commit 68eb1451 authored by Xiaoda, committed by Gitee

update tutorials/source_en/advanced_use/mixed_precision.md.

Fix English version text issues of mixed_precision.
Parent 13b40565
@@ -80,7 +80,6 @@ label = Tensor(np.zeros([1, 10]).astype(np.float32))
scaling_sens = Tensor(np.full((1), 1.0), dtype=mstype.float32)
# Define Loss and Optimizer
-net.set_train()
loss = MSELoss()
optimizer = Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)
net_with_loss = WithLossCell(net, loss)
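For context on the hunk above, `WithLossCell` simply attaches the loss function to the backbone network, so the wrapped cell's forward pass returns the scalar loss. A conceptual sketch, not MindSpore's actual implementation; it assumes the `net`, `loss`, `predict` and `label` objects defined around this hunk:

```python
import mindspore.nn as nn

# Conceptually, WithLossCell(net, loss) behaves like this wrapper cell:
class NetWithLoss(nn.Cell):
    def __init__(self, backbone, loss_fn):
        super(NetWithLoss, self).__init__()
        self.backbone = backbone
        self.loss_fn = loss_fn

    def construct(self, data, label):
        # Run the backbone, then score its output against the label.
        return self.loss_fn(self.backbone(data), label)

# So net_with_loss(predict, label) returns the scalar MSE loss.
```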
@@ -98,7 +97,7 @@ MindSpore also supports manual mixed precision. It is assumed that only one dense
The following is the procedure for implementing manual mixed precision:
1. Define the network. This step is similar to step 2 of the automatic mixed precision procedure. Note: the `fc3` operator in LeNet needs to be manually set to FP32.
-2. Configure the mixed precision. Use `net.add_flags_recursive(fp16=True)` to set all operators of the cell and its sub-cells to FP16.
+2. Configure the mixed precision. Use `net.to_float(mstype.float16)` to set all operators of the cell and its sub-cells to FP16.
3. Use `TrainOneStepWithLossScaleCell` to encapsulate the network model and optimizer (a minimal sketch follows below).
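A hedged sketch of step 3, assuming the MindSpore API contemporary with this tutorial and the `net_with_loss`, `optimizer`, `predict`, `label` and `scaling_sens` objects defined in the code excerpts of this diff:

```python
from mindspore.nn import TrainOneStepWithLossScaleCell

# Wrap the loss-attached network and the optimizer; the wrapper scales the
# loss before backpropagation and unscales the gradients before the update.
train_network = TrainOneStepWithLossScaleCell(net_with_loss, optimizer)
train_network.set_train()

# One training step; scaling_sens supplies the loss-scale value.
output = train_network(predict, label, scaling_sens)
```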
@@ -113,7 +112,7 @@ class LeNet5(nn.Cell):
        self.conv2 = nn.Conv2d(6, 16, 5, pad_mode='valid')
        self.fc1 = nn.Dense(16 * 5 * 5, 120)
        self.fc2 = nn.Dense(120, 84)
-       self.fc3 = nn.Dense(84, 10).add_flags_recursive(fp32=True)
+       self.fc3 = nn.Dense(84, 10).to_float(mstype.float32)
        self.relu = nn.ReLU()
        self.max_pool2d = nn.MaxPool2d(kernel_size=2)
        self.flatten = P.Flatten()
@@ -129,7 +128,7 @@ class LeNet5(nn.Cell):
# Initialize network and set mixed precision
net = LeNet5()
-net.add_flags_recursive(fp16=True)
+net.to_float(mstype.float16)
# Define training data, label and sens
predict = Tensor(np.ones([1, 1, 32, 32]).astype(np.float32) * 0.01)
......
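The two `to_float` calls in the hunks above compose: `net.to_float(mstype.float16)` marks every cell in the network FP16, while `nn.Dense(84, 10).to_float(mstype.float32)` pins `fc3` back to FP32, and the necessary casts are inserted at the precision boundary automatically. A minimal sketch under those assumptions (`LeNet5` as defined above; import paths follow the tutorial-era API):

```python
import numpy as np
from mindspore import Tensor
from mindspore.common import dtype as mstype

net = LeNet5()                # fc3 is pinned to FP32 in __init__ via to_float
net.to_float(mstype.float16)  # every other cell computes in FP16

x = Tensor(np.ones([1, 1, 32, 32]).astype(np.float32) * 0.01)
out = net(x)
# The FP32 input is cast to FP16 at the network boundary, cast back to FP32
# at fc3's input, so the final output comes out of fc3 in FP32.
```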