diff --git a/model_zoo/official/cv/alexnet/README.md b/model_zoo/official/cv/alexnet/README.md
index f2fefc12042e6d5127fa4be5f1ca8adb14ea6839..84ec3c1d26941cedb1d9dd56c474d09bf25f4d76 100644
--- a/model_zoo/official/cv/alexnet/README.md
+++ b/model_zoo/official/cv/alexnet/README.md
@@ -71,8 +71,7 @@ sh run_standalone_eval_ascend.sh [DATA_PATH] [CKPT_NAME]
 ## [Script and Sample Code](#contents)
 
 ```
-├── model_zoo
-    ├── README.md                    // descriptions about all the models
+├── cv
     ├── alexnet
         ├── README.md                // descriptions about alexnet
         ├── requirements.txt         // package needed
@@ -116,8 +115,8 @@ sh run_standalone_train_ascend.sh cifar-10-batches-bin ckpt
 
 After training, the loss value will be achieved as follows:
 
-# grep "loss is " train.log
 ```
+# grep "loss is " train.log
 epoch: 1 step: 1, loss is 2.2791853
 ...
 epoch: 1 step: 1536, loss is 1.9366643
@@ -171,7 +170,7 @@ You can view the results through the file "log.txt". The accuracy of the test da
 
 # [Description of Random Situation](#contents)
 
-In dataset.py, we set the seed inside “create_dataset" function.
+In dataset.py, we set the seed inside the `create_dataset` function.
 
 # [ModelZoo Homepage](#contents)
 Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).
diff --git a/model_zoo/official/cv/lenet/README.md b/model_zoo/official/cv/lenet/README.md
index 7cd4b63776952b26af68554ab1137463fc6a0720..2f0b00229ececd5efc42a135ae1058b3dd90d915 100644
--- a/model_zoo/official/cv/lenet/README.md
+++ b/model_zoo/official/cv/lenet/README.md
@@ -77,8 +77,7 @@ sh run_standalone_eval_ascend.sh [DATA_PATH] [CKPT_NAME]
 ## [Script and Sample Code](#contents)
 
 ```
-├── model_zoo
-    ├── README.md                    // descriptions about all the models
+├── cv
     ├── lenet
         ├── README.md                // descriptions about lenet
         ├── requirements.txt         // package needed
@@ -181,7 +180,7 @@ You can view the results through the file "log.txt". The accuracy of the test da
 
 # [Description of Random Situation](#contents)
 
-In dataset.py, we set the seed inside “create_dataset" function.
+In dataset.py, we set the seed inside the `create_dataset` function.
 
 # [ModelZoo Homepage](#contents)
 Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).
diff --git a/model_zoo/official/cv/mobilenetv2/Readme.md b/model_zoo/official/cv/mobilenetv2/Readme.md
index f7021170fdfe790a552b3e08668307d02daa6dd8..430d5b75865b3e1fadbe2b770841d36304bd59f6 100644
--- a/model_zoo/official/cv/mobilenetv2/Readme.md
+++ b/model_zoo/official/cv/mobilenetv2/Readme.md
@@ -175,7 +175,7 @@ result: {'acc': 0.71976314102564111} ckpt=/path/to/checkpoint/mobilenet-200_625.
 | Parameters                 |                               |                           |                      |
 | -------------------------- | ----------------------------- | ------------------------- | -------------------- |
 | Model Version              | V1                            |                           |                      |
-| Resource                   | Huawei 910                    | NV SMX2 V100-32G          | Huawei 310           |
+| Resource                   | Ascend 910                    | NV SMX2 V100-32G          | Ascend 310           |
 | uploaded Date              | 05/06/2020                    | 05/22/2020                |                      |
 | MindSpore Version          | 0.2.0                         | 0.2.0                     | 0.2.0                |
 | Dataset                    | ImageNet, 1.2W                | ImageNet, 1.2W            | ImageNet, 1.2W       |
diff --git a/model_zoo/official/cv/resnext50/README.md b/model_zoo/official/cv/resnext50/README.md
index 75631126aa7580d288bcc43734d61cc7778fb2e5..18dc435324ffc2458d22f82dcdcbf036d161fe52 100644
--- a/model_zoo/official/cv/resnext50/README.md
+++ b/model_zoo/official/cv/resnext50/README.md
@@ -47,7 +47,8 @@ Dataset used: [imagenet](http://www.image-net.org/)
 
 ## [Mixed Precision](#contents)
 
-The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+
 For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
 
 # [Environment Requirements](#contents)
@@ -228,7 +229,7 @@ acc=93.88%(TOP5)
 
 | Parameters                 |                               |                           |                      |
 | -------------------------- | ----------------------------- | ------------------------- | -------------------- |
-| Resource                   | Huawei 910                    | NV SMX2 V100-32G          | Huawei 310           |
+| Resource                   | Ascend 910                    | NV SMX2 V100-32G          | Ascend 310           |
 | uploaded Date              | 06/30/2020                    | 07/23/2020                | 07/23/2020           |
 | MindSpore Version          | 0.5.0                         | 0.6.0                     | 0.6.0                |
 | Dataset                    | ImageNet, 1.2W                | ImageNet, 1.2W            | ImageNet, 1.2W       |
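
Note on the "Description of Random Situation" lines patched above: both READMEs state that the seed is set inside `create_dataset`. A minimal sketch of what such a function can look like in MindSpore — the function signature, seed value, and pipeline steps here are illustrative placeholders, not the repo's actual dataset.py:

```python
import mindspore.dataset as ds

def create_dataset(data_path, batch_size=32, repeat_size=1):
    """Build a CIFAR-10 training pipeline with a pinned seed (illustrative sketch)."""
    ds.config.set_seed(58)  # fix the global dataset seed so shuffling is deterministic
    data_set = ds.Cifar10Dataset(data_path, shuffle=True)
    data_set = data_set.batch(batch_size, drop_remainder=True)
    return data_set.repeat(repeat_size)
```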
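Likewise, for the mixed-precision paragraph touched in the resnext50 hunk: a minimal sketch of enabling mixed precision through MindSpore's `Model` wrapper, assuming a MindSpore version whose `Model` accepts the `amp_level` argument; the tiny stand-in network and hyperparameters are placeholders, not the repo's training script:

```python
import mindspore as ms
from mindspore import nn

# Placeholder network; the repo would pass its ResNeXt50 here instead.
net = nn.Dense(16, 10)
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
opt = nn.Momentum(net.trainable_params(), learning_rate=0.05, momentum=0.9)

# amp_level="O2" runs most operators in float16 while keeping batch norm and
# the loss in float32; "O0" disables mixed precision entirely.
model = ms.Model(net, loss_fn=loss, optimizer=opt, metrics={"acc"}, amp_level="O2")
```

The 'reduce precision' messages mentioned in that hunk appear at INFO log level, which in MindSpore is controlled by the `GLOG_v` environment variable (e.g. `export GLOG_v=1`).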