Commit 2fb88e8c authored by W wukesong

modify readme

Parent 3a6749ab
# AlexNet Example
## Description
Training AlexNet with the CIFAR-10 dataset in MindSpore.
This is a simple tutorial for training AlexNet in MindSpore.
## Requirements
- Install [MindSpore](https://www.mindspore.cn/install/en).
- Download the dataset; the directory structure is as follows:
```
├─10-batches-bin
└─10-verify-bin
```
## Running the example
```python
# train AlexNet, hyperparameter setting in config.py
python train.py --data_path 10-batches-bin
```
You will get the loss value of each step as following:
```bash
epoch: 1 step: 1, loss is 2.2791853
...
epoch: 1 step: 1536, loss is 1.9366643
epoch: 1 step: 1537, loss is 1.6983616
epoch: 1 step: 1538, loss is 1.0221305
...
```
Then, evaluate AlexNet using the saved network model:
```python
# evaluate AlexNet
python eval.py --data_path 10-verify-bin --ckpt_path checkpoint_alexnet-1_1562.ckpt
```
## Note
Here are some optional parameters:
```bash
--device_target {Ascend,GPU}
device where the code will be implemented (default: Ascend)
--data_path DATA_PATH
path where the dataset is saved
--dataset_sink_mode DATASET_SINK_MODE
dataset_sink_mode is False or True
```
You can run ```python train.py -h``` or ```python eval.py -h``` to get more information.
# Contents
- [AlexNet Description](#alexnet-description)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Environment Requirements](#environment-requirements)
- [Quick Start](#quick-start)
- [Script Description](#script-description)
- [Script and Sample Code](#script-and-sample-code)
- [Script Parameters](#script-parameters)
- [Training Process](#training-process)
- [Training](#training)
- [Evaluation Process](#evaluation-process)
- [Evaluation](#evaluation)
- [Model Description](#model-description)
- [Performance](#performance)
- [Evaluation Performance](#evaluation-performance)
- [ModelZoo Homepage](#modelzoo-homepage)
# [AlexNet Description](#contents)
AlexNet was proposed in 2012 and is one of the most influential neural networks. It achieved far better recognition accuracy on the ImageNet dataset than earlier models.
[Paper](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf): Krizhevsky A, Sutskever I, Hinton G E. ImageNet Classification with Deep Convolutional Neural Networks. *Advances In Neural Information Processing Systems*. 2012.
# [Model Architecture](#contents)
AlexNet consists of 5 convolutional layers and 3 fully connected layers. Multiple convolutional kernels extract features of different scales from the images, enabling more accurate classification.
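The exact network definition used by this example lives in `src/alexnet.py`. As a rough illustration only, the classic 5-conv/3-FC layout expressed as a MindSpore `nn.Cell` might look like the sketch below, assuming a 227*227 input resolution (an assumption here, controlled in practice by image_height/image_width in config.py); the layer widths follow the original paper and are not copied from the repository.
```python
# Illustrative sketch of the classic AlexNet layout; the repository's real
# definition is in src/alexnet.py and may differ in details.
import mindspore.nn as nn

class AlexNetSketch(nn.Cell):
    def __init__(self, num_classes=10):
        super(AlexNetSketch, self).__init__()
        # 5 convolutional layers extract features at increasing depth
        self.conv1 = nn.Conv2d(3, 96, 11, stride=4, pad_mode='valid')
        self.conv2 = nn.Conv2d(96, 256, 5, pad_mode='same')
        self.conv3 = nn.Conv2d(256, 384, 3, pad_mode='same')
        self.conv4 = nn.Conv2d(384, 384, 3, pad_mode='same')
        self.conv5 = nn.Conv2d(384, 256, 3, pad_mode='same')
        self.relu = nn.ReLU()
        self.pool = nn.MaxPool2d(kernel_size=3, stride=2)
        self.flatten = nn.Flatten()
        # 3 fully connected layers perform the classification
        self.fc1 = nn.Dense(6 * 6 * 256, 4096)   # 6x6 feature map for a 227x227 input
        self.fc2 = nn.Dense(4096, 4096)
        self.fc3 = nn.Dense(4096, num_classes)

    def construct(self, x):
        x = self.pool(self.relu(self.conv1(x)))
        x = self.pool(self.relu(self.conv2(x)))
        x = self.relu(self.conv3(x))
        x = self.relu(self.conv4(x))
        x = self.pool(self.relu(self.conv5(x)))
        x = self.flatten(x)
        x = self.relu(self.fc1(x))
        x = self.relu(self.fc2(x))
        return self.fc3(x)
```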
# [Dataset](#contents)
Dataset used: [CIFAR-10](<http://www.cs.toronto.edu/~kriz/cifar.html>)
- Dataset size: 175M, 60,000 32*32 color images in 10 classes
  - Train: 146M, 50,000 images
  - Test: 29.3M, 10,000 images
- Data format: binary files
  - Note: Data will be processed in dataset.py (a hedged sketch of that pipeline follows the directory listing below)
- Download the dataset; the directory structure is as follows:
```
├─cifar-10-batches-bin
└─cifar-10-verify-bin
```
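As noted above, the raw binary files are processed in `dataset.py`. The sketch below shows what a `create_dataset_cifar10` helper typically looks like; the module paths assume the MindSpore 1.x `dataset`/`vision` API and the 227*227 resize is an assumption, so treat it as an illustration rather than the repository's exact code.
```python
# Hedged sketch of a CIFAR-10 input pipeline (the real code is src/dataset.py).
import mindspore.dataset as ds
import mindspore.dataset.vision.c_transforms as CV
import mindspore.dataset.transforms.c_transforms as C
import mindspore.common.dtype as mstype


def create_dataset_cifar10_sketch(data_path, batch_size=32, repeat_size=1):
    cifar_ds = ds.Cifar10Dataset(data_path)
    # Resize to the network input size, rescale to [0, 1], convert HWC -> CHW
    image_ops = [CV.Resize((227, 227)),
                 CV.Rescale(1.0 / 255.0, 0.0),
                 CV.HWC2CHW()]
    cifar_ds = cifar_ds.map(operations=image_ops, input_columns="image")
    cifar_ds = cifar_ds.map(operations=C.TypeCast(mstype.int32), input_columns="label")
    cifar_ds = cifar_ds.shuffle(buffer_size=1000)
    cifar_ds = cifar_ds.batch(batch_size, drop_remainder=True)
    cifar_ds = cifar_ds.repeat(repeat_size)
    return cifar_ds
```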
# [Environment Requirements](#contents)
- Hardware (Ascend/GPU)
- Prepare hardware environment with Ascend or GPU processor.
- Framework
  - [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
- [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
# [Quick Start](#contents)
After installing MindSpore via the official website, you can start training and evaluation as follows:
```python
# enter script dir, train AlexNet
sh run_standalone_train_ascend.sh [DATA_PATH] [CKPT_SAVE_PATH]
# enter script dir, evaluate AlexNet
sh run_standalone_eval_ascend.sh [DATA_PATH] [CKPT_NAME]
```
# [Script Description](#contents)
## [Script and Sample Code](#contents)
```
├── cv
├── alexnet
├── README.md // descriptions about alexnet
├── requirements.txt // package needed
├── scripts
│ ├──run_standalone_train_gpu.sh // train in gpu
│ ├──run_standalone_train_ascend.sh // train in ascend
│ ├──run_standalone_eval_gpu.sh // evaluate in gpu
│ ├──run_standalone_eval_ascend.sh // evaluate in ascend
├── src
│ ├──dataset.py // creating dataset
│ ├──alexnet.py // alexnet architecture
│ ├──config.py // parameter configuration
├── train.py // training script
├── eval.py // evaluation script
```
## [Script Parameters](#contents)
```python
Major parameters in train.py and config.py are as follows:
--data_path: The absolute full path to the train and evaluation datasets.
--epoch_size: Total training epochs.
--batch_size: Training batch size.
--image_height: Image height used as input to the model.
--image_width: Image width used as input to the model.
--device_target: Device where the code will be implemented. Optional values are "Ascend", "GPU".
--checkpoint_path: The absolute full path to the checkpoint file saved after training.
```
## [Training Process](#contents)
### Training
```
python train.py --data_path cifar-10-batches-bin --ckpt_path ckpt > log.txt 2>&1 &
# or enter script dir, and run the script
sh run_standalone_train_ascend.sh cifar-10-batches-bin ckpt
```
After training, loss values similar to the following will be printed:
```
# grep "loss is " train.log
epoch: 1 step: 1, loss is 2.2791853
...
epoch: 1 step: 1536, loss is 1.9366643
epoch: 1 step: 1537, loss is 1.6983616
epoch: 1 step: 1538, loss is 1.0221305
...
```
The model checkpoint will be saved in the current directory.
## [Evaluation Process](#contents)
### Evaluation
Before running the command below, please check the checkpoint path used for evaluation.
```
python eval.py --data_path cifar-10-verify-bin --ckpt_path ckpt/checkpoint_alexnet-1_1562.ckpt > log.txt 2>&1 &
# or enter script dir, and run the script
sh run_standalone_eval_ascend.sh cifar-10-verify-bin ckpt/checkpoint_alexnet-1_1562.ckpt
```
You can view the results through the file "log.txt". The accuracy of the test dataset will be as follows:
```
# grep "Accuracy: " log.txt
'Accuracy': 0.8832
```
# [Model Description](#contents)
## [Performance](#contents)
### Evaluation Performance
| Parameters | AlexNet |
| -------------------------- | ----------------------------------------------------------- |
| Resource                   | Ascend 910; CPU 2.60GHz, 56 cores; Memory 314G              |
| uploaded Date | 06/09/2020 (month/day/year) |
| MindSpore Version | 0.5.0-beta |
| Dataset | CIFAR-10 |
| Training Parameters | epoch=30, steps=1562, batch_size = 32, lr=0.002 |
| Optimizer | Momentum |
| Loss Function | Softmax Cross Entropy |
| outputs | probability |
| Loss | 0.0016 |
| Speed | 21 ms/step |
| Total time | 17 mins |
| Checkpoint for Fine tuning | 445M (.ckpt file) |
| Scripts | https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/alexnet |
# [Description of Random Situation](#contents)
In dataset.py, we set the seed inside the `create_dataset` function.
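A minimal illustration of such seeding, assuming the standard `mindspore.dataset` config API (the actual call and value are whatever `src/dataset.py` uses):
```python
import mindspore.dataset as ds

ds.config.set_seed(1)  # illustrative value; the real seed lives in src/dataset.py
```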
# [ModelZoo Homepage](#contents)
Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).
@@ -18,6 +18,7 @@ eval alexnet according to model file:
python eval.py --data_path /YourDataPath --ckpt_path Your.ckpt
"""
import ast
import argparse
from src.config import alexnet_cfg as cfg
from src.dataset import create_dataset_cifar10
@@ -36,7 +37,8 @@ if __name__ == "__main__":
parser.add_argument('--data_path', type=str, default="./", help='path where the dataset is saved')
parser.add_argument('--ckpt_path', type=str, default="./ckpt", help='if is test, must provide\
path where the trained ckpt file')
parser.add_argument('--dataset_sink_mode', type=bool, default=True, help='dataset_sink_mode is False or True')
parser.add_argument('--dataset_sink_mode', type=ast.literal_eval, default=True,
help='dataset_sink_mode is False or True')
args = parser.parse_args()
context.set_context(mode=context.GRAPH_MODE, device_target=args.device_target)
......
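The substantive change in this hunk is switching `--dataset_sink_mode` from `type=bool` to `type=ast.literal_eval`. argparse passes the raw command-line string to the `type` callable, and `bool("False")` is `True` because any non-empty string is truthy, so the old parser silently ignored `--dataset_sink_mode False`. The standalone snippet below (not part of the diff) demonstrates the difference.
```python
import argparse
import ast

parser = argparse.ArgumentParser()
parser.add_argument('--old', type=bool, default=True)              # buggy: bool("False") == True
parser.add_argument('--new', type=ast.literal_eval, default=True)  # parses the Python literal

args = parser.parse_args(['--old', 'False', '--new', 'False'])
print(args.old)  # True  -- the flag is effectively ignored
print(args.new)  # False -- the intended behaviour
```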
@@ -18,6 +18,7 @@ train alexnet and get network model files(.ckpt) :
python train.py --data_path /YourDataPath
"""
import ast
import argparse
from src.config import alexnet_cfg as cfg
from src.dataset import create_dataset_cifar10
@@ -38,7 +39,8 @@ if __name__ == "__main__":
parser.add_argument('--data_path', type=str, default="./", help='path where the dataset is saved')
parser.add_argument('--ckpt_path', type=str, default="./ckpt", help='if is test, must provide\
path where the trained ckpt file')
parser.add_argument('--dataset_sink_mode', type=bool, default=True, help='dataset_sink_mode is False or True')
parser.add_argument('--dataset_sink_mode', type=ast.literal_eval, default=True,
help='dataset_sink_mode is False or True')
args = parser.parse_args()
context.set_context(mode=context.GRAPH_MODE, device_target=args.device_target)
......
# LeNet Example
## Description
Training LeNet with the MNIST dataset in MindSpore.
This is a simple and basic tutorial for constructing a network in MindSpore.
## Requirements
- Install [MindSpore](https://www.mindspore.cn/install/en).
- Download the dataset; the directory structure is as follows:
```
└─Data
├─test
│ t10k-images.idx3-ubyte
│ t10k-labels.idx1-ubyte
└─train
train-images.idx3-ubyte
train-labels.idx1-ubyte
```
## Running the example
```python
# train LeNet, hyperparameter setting in config.py
python train.py --data_path Data
```
You will get the loss value of each step as following:
```bash
epoch: 1 step: 1, loss is 2.3040335
...
epoch: 1 step: 1739, loss is 0.06952668
epoch: 1 step: 1740, loss is 0.05038793
epoch: 1 step: 1741, loss is 0.05018193
...
```
Then, evaluate LeNet using the saved network model:
```python
# evaluate LeNet
python eval.py --data_path Data --ckpt_path checkpoint_lenet-1_1875.ckpt
```
## Note
Here are some optional parameters:
```bash
--device_target {Ascend,GPU,CPU}
device where the code will be implemented (default: Ascend)
--data_path DATA_PATH
path where the dataset is saved
--dataset_sink_mode DATASET_SINK_MODE
dataset_sink_mode is False or True
```
You can run ```python train.py -h``` or ```python eval.py -h``` to get more information.
# Contents
- [LeNet Description](#lenet-description)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Environment Requirements](#environment-requirements)
- [Quick Start](#quick-start)
- [Script Description](#script-description)
- [Script and Sample Code](#script-and-sample-code)
- [Script Parameters](#script-parameters)
- [Training Process](#training-process)
- [Training](#training)
- [Evaluation Process](#evaluation-process)
- [Evaluation](#evaluation)
- [Model Description](#model-description)
- [Performance](#performance)
- [Evaluation Performance](#evaluation-performance)
- [ModelZoo Homepage](#modelzoo-homepage)
# [LeNet Description](#contents)
LeNet, proposed in 1998, is a typical convolutional neural network. It was applied to handwritten digit recognition with great success.
[Paper](https://ieeexplore.ieee.org/document/726791): Y. Lecun, L. Bottou, Y. Bengio, P. Haffner. Gradient-Based Learning Applied to Document Recognition. *Proceedings of the IEEE*. 1998.
# [Model Architecture](#contents)
LeNet is very simple: it contains 5 layers, consisting of 2 convolutional layers and 3 fully connected layers.
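The real definition lives in `src/lenet.py`; the following is only a rough sketch of the classic LeNet-5 layout as a MindSpore `nn.Cell`, assuming a 32*32 input size (an assumption, set in practice via image_height/image_width in config.py).
```python
# Illustrative sketch of LeNet-5; the repository's real definition is in src/lenet.py.
import mindspore.nn as nn

class LeNet5Sketch(nn.Cell):
    def __init__(self, num_classes=10):
        super(LeNet5Sketch, self).__init__()
        # 2 convolutional layers
        self.conv1 = nn.Conv2d(1, 6, 5, pad_mode='valid')
        self.conv2 = nn.Conv2d(6, 16, 5, pad_mode='valid')
        self.relu = nn.ReLU()
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.flatten = nn.Flatten()
        # 3 fully connected layers
        self.fc1 = nn.Dense(16 * 5 * 5, 120)
        self.fc2 = nn.Dense(120, 84)
        self.fc3 = nn.Dense(84, num_classes)

    def construct(self, x):
        x = self.pool(self.relu(self.conv1(x)))   # 32x32 -> 28x28 -> 14x14
        x = self.pool(self.relu(self.conv2(x)))   # 14x14 -> 10x10 -> 5x5
        x = self.flatten(x)
        x = self.relu(self.fc1(x))
        x = self.relu(self.fc2(x))
        return self.fc3(x)
```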
# [Dataset](#contents)
Dataset used: [MNIST](<http://yann.lecun.com/exdb/mnist/>)
- Dataset size: 52.4M, 70,000 28*28 grayscale images in 10 classes
  - Train: 60,000 images
  - Test: 10,000 images
- Data format: binary files
  - Note: Data will be processed in dataset.py (a hedged sketch of that pipeline follows the directory listing below)
- The directory structure is as follows:
```
└─Data
├─test
│ t10k-images.idx3-ubyte
│ t10k-labels.idx1-ubyte
└─train
train-images.idx3-ubyte
train-labels.idx1-ubyte
```
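As with the AlexNet example above, the raw MNIST files are processed in `dataset.py`. A hedged sketch of such a pipeline is shown below; module paths assume the MindSpore 1.x API and the resize to 32*32 is an assumption, so this is an illustration rather than the repository's exact `create_dataset`.
```python
# Hedged sketch of an MNIST input pipeline (the real code is src/dataset.py).
import mindspore.dataset as ds
import mindspore.dataset.vision.c_transforms as CV
import mindspore.dataset.transforms.c_transforms as C
import mindspore.common.dtype as mstype


def create_dataset_sketch(data_path, batch_size=32, repeat_size=1):
    mnist_ds = ds.MnistDataset(data_path)
    # Resize the 28x28 images to the 32x32 network input, rescale, convert HWC -> CHW
    image_ops = [CV.Resize((32, 32)),
                 CV.Rescale(1.0 / 255.0, 0.0),
                 CV.HWC2CHW()]
    mnist_ds = mnist_ds.map(operations=image_ops, input_columns="image")
    mnist_ds = mnist_ds.map(operations=C.TypeCast(mstype.int32), input_columns="label")
    mnist_ds = mnist_ds.shuffle(buffer_size=10000)
    mnist_ds = mnist_ds.batch(batch_size, drop_remainder=True)
    mnist_ds = mnist_ds.repeat(repeat_size)
    return mnist_ds
```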
# [Environment Requirements](#contents)
- Hardware (Ascend/GPU/CPU)
- Prepare hardware environment with Ascend, GPU, or CPU processor.
- Framework
  - [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
- [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
# [Quick Start](#contents)
After installing MindSpore via the official website, you can start training and evaluation as follows:
```python
# enter script dir, train LeNet
sh run_standalone_train_ascend.sh [DATA_PATH] [CKPT_SAVE_PATH]
# enter script dir, evaluate LeNet
sh run_standalone_eval_ascend.sh [DATA_PATH] [CKPT_NAME]
```
# [Script Description](#contents)
## [Script and Sample Code](#contents)
```
├── cv
├── lenet
├── README.md // descriptions about lenet
├── requirements.txt // package needed
├── scripts
│ ├──run_standalone_train_cpu.sh // train in cpu
│ ├──run_standalone_train_gpu.sh // train in gpu
│ ├──run_standalone_train_ascend.sh // train in ascend
│ ├──run_standalone_eval_cpu.sh // evaluate in cpu
│ ├──run_standalone_eval_gpu.sh // evaluate in gpu
│ ├──run_standalone_eval_ascend.sh // evaluate in ascend
├── src
│ ├──dataset.py // creating dataset
│ ├──lenet.py // lenet architecture
│ ├──config.py // parameter configuration
├── train.py // training script
├── eval.py // evaluation script
```
## [Script Parameters](#contents)
```python
Major parameters in train.py and config.py are as follows:
--data_path: The absolute full path to the train and evaluation datasets.
--epoch_size: Total training epochs.
--batch_size: Training batch size.
--image_height: Image height used as input to the model.
--image_width: Image width used as input to the model.
--device_target: Device where the code will be implemented. Optional values
                 are "Ascend", "GPU", "CPU".
--checkpoint_path: The absolute full path to the checkpoint file saved
                   after training.
```
## [Training Process](#contents)
### Training
```
python train.py --data_path Data --ckpt_path ckpt > log.txt 2>&1 &
# or enter script dir, and run the script
sh run_standalone_train_ascend.sh Data ckpt
```
After training, loss values similar to the following will be printed:
```
# grep "loss is " log.txt
epoch: 1 step: 1, loss is 2.3040335
...
epoch: 1 step: 1739, loss is 0.06952668
epoch: 1 step: 1740, loss is 0.05038793
epoch: 1 step: 1741, loss is 0.05018193
...
```
The model checkpoint will be saved in the current directory.
## [Evaluation Process](#contents)
### Evaluation
Before running the command below, please check the checkpoint path used for evaluation.
```
python eval.py --data_path Data --ckpt_path ckpt/checkpoint_lenet-1_1875.ckpt > log.txt 2>&1 &
# or enter script dir, and run the script
sh run_standalone_eval_ascend.sh Data ckpt/checkpoint_lenet-1_1875.ckpt
```
You can view the results through the file "log.txt". The accuracy of the test dataset will be as follows:
```
# grep "Accuracy: " log.txt
'Accuracy': 0.9842
```
# [Model Description](#contents)
## [Performance](#contents)
### Evaluation Performance
| Parameters | LeNet |
| -------------------------- | ----------------------------------------------------------- |
| Resource                   | Ascend 910; CPU 2.60GHz, 56 cores; Memory 314G              |
| uploaded Date | 06/09/2020 (month/day/year) |
| MindSpore Version | 0.5.0-beta |
| Dataset | MNIST |
| Training Parameters | epoch=10, steps=1875, batch_size = 32, lr=0.01 |
| Optimizer | Momentum |
| Loss Function | Softmax Cross Entropy |
| outputs | probability |
| Loss | 0.002 |
| Speed | 1.70 ms/step |
| Total time                 | 43.1s                                                       |
| Checkpoint for Fine tuning | 482k (.ckpt file) |
| Scripts | https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/lenet |
# [Description of Random Situation](#contents)
In dataset.py, we set the seed inside the `create_dataset` function.
# [ModelZoo Homepage](#contents)
Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).
@@ -19,6 +19,7 @@ python eval.py --data_path /YourDataPath --ckpt_path Your.ckpt
"""
import os
import ast
import argparse
import mindspore.nn as nn
from mindspore import context
@@ -37,7 +38,8 @@ if __name__ == "__main__":
help='path where the dataset is saved')
parser.add_argument('--ckpt_path', type=str, default="", help='if mode is test, must provide\
path where the trained ckpt file')
parser.add_argument('--dataset_sink_mode', type=bool, default=False, help='dataset_sink_mode is False or True')
parser.add_argument('--dataset_sink_mode', type=ast.literal_eval, default=False,
help='dataset_sink_mode is False or True')
args = parser.parse_args()
......
@@ -19,6 +19,7 @@ python train.py --data_path /YourDataPath
"""
import os
import ast
import argparse
from src.config import mnist_cfg as cfg
from src.dataset import create_dataset
@@ -38,7 +39,8 @@ if __name__ == "__main__":
help='path where the dataset is saved')
parser.add_argument('--ckpt_path', type=str, default="./ckpt", help='if is test, must provide\
path where the trained ckpt file')
parser.add_argument('--dataset_sink_mode', type=bool, default=True, help='dataset_sink_mode is False or True')
parser.add_argument('--dataset_sink_mode', type=ast.literal_eval, default=True,
help='dataset_sink_mode is False or True')
args = parser.parse_args()
......
@@ -175,7 +175,7 @@ result: {'acc': 0.71976314102564111} ckpt=/path/to/checkpoint/mobilenet-200_625.
| Parameters | | | |
| -------------------------- | ----------------------------- | ------------------------- | -------------------- |
| Model Version | V1 | | |
| Resource | Huawei 910 | NV SMX2 V100-32G | Huawei 310 |
| Resource | Ascend 910 | NV SMX2 V100-32G | Ascend 310 |
| uploaded Date | 05/06/2020 | 05/22/2020 | |
| MindSpore Version | 0.2.0 | 0.2.0 | 0.2.0 |
| Dataset | ImageNet, 1.2W | ImageNet, 1.2W | ImageNet, 1.2W |
......
@@ -48,6 +48,7 @@ Dataset used: [imagenet](http://www.image-net.org/)
## [Mixed Precision](#contents)
The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
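In MindSpore, the mixed-precision behaviour described above is usually switched on through the `amp_level` argument of the `Model` wrapper. The fragment below is a hedged illustration of that usage; the network, loss function and optimizer are stand-ins, not code from this model.
```python
# Hedged illustration of enabling mixed precision via the Model wrapper;
# `net`, `loss` and `opt` are placeholder objects, not repository code.
import mindspore.nn as nn
from mindspore import Model

net = nn.Dense(16, 10)                                   # stand-in network
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True)
opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

# amp_level="O2" runs most of the network in FP16 while keeping
# precision-sensitive parts (e.g. BatchNorm, the loss) in FP32.
model = Model(net, loss_fn=loss, optimizer=opt, amp_level="O2")
```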
# [Environment Requirements](#contents)
@@ -228,7 +229,7 @@ acc=93.88%(TOP5)
| Parameters | | | |
| -------------------------- | ----------------------------- | ------------------------- | -------------------- |
| Resource | Huawei 910 | NV SMX2 V100-32G | Huawei 310 |
| Resource | Ascend 910 | NV SMX2 V100-32G | Ascend 310 |
| uploaded Date | 06/30/2020 | 07/23/2020 | 07/23/2020 |
| MindSpore Version | 0.5.0 | 0.6.0 | 0.6.0 |
| Dataset | ImageNet, 1.2W | ImageNet, 1.2W | ImageNet, 1.2W |
......