Commit 8088d191 authored by mindspore-ci-bot, committed by Gitee

!5507 vgg16 readme update

Merge pull request !5507 from caojian05/ms_vgg_readme_update
@@ -79,6 +79,7 @@ here basic modules mainly include basic operation like: **3×3 conv** and **2×
## Mixed Precision
The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates deep neural network training by using both single-precision and half-precision data formats, while maintaining the accuracy achieved by single-precision training. Mixed precision training speeds up computation, reduces memory usage, and enables larger models or batch sizes to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the MindSpore backend automatically handles it with reduced precision. Users can check which operators ran at reduced precision by enabling the INFO log and searching for 'reduce precision'.
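For reference, a minimal sketch of enabling mixed precision through `Model`'s `amp_level` argument. The `src.vgg.vgg16` and `src.dataset.create_dataset` helpers, their signatures, and the hyperparameters below are assumptions for illustration; the repository's actual train.py may differ:

```python
# Minimal sketch: O2 mixed-precision training with mindspore.Model.
# `vgg16` and `create_dataset` are assumed repo helpers (src/vgg.py,
# src/dataset.py); exact signatures may differ.
from mindspore import nn, Model
from mindspore.train.callback import LossMonitor

from src.vgg import vgg16                # assumed repo helper
from src.dataset import create_dataset   # assumed repo helper

net = vgg16(num_classes=10)
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

# amp_level="O2" casts the network to FP16 while keeping the loss in FP32;
# FP16 operators fed FP32 inputs fall back to reduced precision, which
# shows up in the INFO log as 'reduce precision'.
model = Model(net, loss_fn=loss, optimizer=opt, amp_level="O2")
model.train(1, create_dataset("/path/to/cifar-10"), callbacks=[LossMonitor()])
```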
@@ -370,4 +371,4 @@ after allreduce eval: top5_correct=45582, tot=50000, acc=91.16%
In dataset.py, we set the seed inside the `create_dataset` function. We also use a random seed in train.py.
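As an illustration, a hedged sketch of that seeding pattern; the seed value 1 is illustrative, not necessarily the one the scripts use:

```python
# Sketch of the reproducibility seeding described above; the value 1 is
# illustrative, not necessarily what train.py uses.
import mindspore.dataset as ds
from mindspore.common import set_seed

set_seed(1)            # global seed for weight init and random ops (train.py)
ds.config.set_seed(1)  # shuffle/sampling seed, as set inside create_dataset
```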
# [ModelZoo Homepage](#contents)
Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).