MindSpore compiles user source code written in Python syntax into computational graphs, and can convert common functions or instances of classes inherited from nn.Cell into computational graphs. Currently, MindSpore cannot convert arbitrary Python source code into computational graphs, so source code compilation is subject to constraints, including syntax constraints and network definition constraints. These constraints may change as MindSpore evolves.
...
...
## Network Definition Constraints
### Instance Types on the Entire Network
* Common Python function with the [@ms_function](https://www.mindspore.cn/api/en/r0.6/api/python/mindspore/mindspore.html#mindspore.ms_function) decorator.
* Cell subclass inherited from [nn.Cell](https://www.mindspore.cn/api/en/r0.6/api/python/mindspore/mindspore.nn.html#mindspore.nn.Cell).
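Both entry points can be exercised in a few lines. The following is a minimal sketch; the operator choice and tensor values are illustrative:

```python
import numpy as np
import mindspore.nn as nn
import mindspore.ops.operations as P
from mindspore import Tensor, ms_function

@ms_function
def tensor_add(x, y):
    # A common Python function compiled into a graph by the decorator.
    return x + y

class Net(nn.Cell):
    def __init__(self):
        super(Net, self).__init__()
        self.mul = P.Mul()

    def construct(self, x, y):
        # construct() is the part compiled into a computational graph.
        return self.mul(x, y)

x = Tensor(np.ones([2, 2], np.float32))
y = Tensor(np.ones([2, 2], np.float32))
print(tensor_add(x, y))
print(Net()(x, y))
```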
### Network Input Type
* The training data input parameters of the entire network must be of the Tensor type.
...
...
| Category | Content
| :----------- |:--------
| `Cell` instance |[mindspore/nn/*](https://www.mindspore.cn/api/en/r0.6/api/python/mindspore/mindspore.nn.html), and custom [Cell](https://www.mindspore.cn/api/en/r0.6/api/python/mindspore/mindspore.nn.html#mindspore.nn.Cell).
| Member function of a `Cell` instance | Member functions of other classes can be called in the `construct` function of a `Cell`.
| Function | Custom Python functions and system functions listed in the preceding content.
| Dataclass instance | Class decorated with @dataclass.
| Operator generated by constexpr |Uses the value generated by [@constexpr](https://www.mindspore.cn/api/en/r0.6/api/python/mindspore/mindspore.ops.html#mindspore.ops.constexpr) to calculate operators.
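For example, a function decorated with `@constexpr` is evaluated during graph compilation and its result is folded into the graph as a constant. A minimal sketch; the names and shapes are illustrative:

```python
import numpy as np
import mindspore.nn as nn
from mindspore import Tensor
from mindspore.ops import constexpr

@constexpr
def generate_ones(shape):
    # Runs at graph compilation time; the returned Tensor becomes a graph constant.
    return Tensor(np.ones(shape, np.float32))

class AddConst(nn.Cell):
    def construct(self, x):
        return x + generate_ones((2, 2))
```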
*It refers to the released MindSpore version. The hardware platforms that support model training are CPU, GPU and Ascend. As shown in the table below, ✓ indicates that the pre-trained model runs on the selected platform.
Domain | Sub Domain | Network | CPU | GPU | Ascend | 0.5.0-beta*
:----- | :--------- | :------ | :-: | :-: | :----: | :----------
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
| MindSpore 0.6.0-beta | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r0.6/requirements.txt). | **Compilation dependencies:**<br> - [Python](https://www.python.org/downloads/) 3.7.5 <br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 <br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 <br> - [CMake](https://cmake.org/download/) >= 3.14.1 <br> - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5 <br> same as the executable file installation dependencies. |
- GCC 7.3.0 can be installed by using the apt command.
- When the network is connected, dependency items in the `requirements.txt` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
...
...
1. Download the source code from the code repository.
2. Run the following command in the root directory of the source code to compile MindSpore:
...
...
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
| MindArmour 0.6.0-beta | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - MindSpore 0.6.0-beta <br> - For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/r0.6/setup.py). | Same as the executable file installation dependencies. |
- When the network is connected, dependency items in the `setup.py` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
...
...
1. Download the source code from the code repository.
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
| MindSpore 0.6.0-beta | Windows 10 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r0.6/requirements.txt). | **Compilation dependencies:**<br> - [Python](https://www.python.org/downloads/) 3.7.5 <br> - [MinGW-W64 GCC-7.3.0](https://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Personal%20Builds/mingw-builds/7.3.0/threads-posix/seh/x86_64-7.3.0-release-posix-seh-rt_v5-rev0.7z) x86_64-posix-seh <br> - [ActivePerl](http://downloads.activestate.com/ActivePerl/releases/5.24.3.2404/ActivePerl-5.24.3.2404-MSWin32-x64-404865.exe) 5.24.3.2404 <br> - [CMake](https://cmake.org/download/) 3.14.1 <br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 <br>**Installation dependencies:**<br> same as the executable file installation dependencies. |
- When the network is connected, dependency items in the `requirements.txt` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
...
...
1. Download the source code from the code repository.
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
| MindSpore 0.6.0-beta | - Ubuntu 18.04 aarch64 <br> - Ubuntu 18.04 x86_64 <br> - EulerOS 2.8 aarch64 <br> - EulerOS 2.5 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - Ascend 910 AI processor software package(Version:[Atlas Data Center Solution V100R020C00RC1](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251638361)) <br> - [gmp](https://gmplib.org/download/gmp/) 6.1.2 <br> - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r0.6/requirements.txt). | **Compilation dependencies:**<br> - [Python](https://www.python.org/downloads/) 3.7.5 <br> - Ascend 910 AI processor software package(Version:[Atlas Data Center Solution V100R020C00RC1](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251638361)) <br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 <br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 <br> - [CMake](https://cmake.org/download/) >= 3.14.1 <br> - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5 <br> - [gmp](https://gmplib.org/download/gmp/) 6.1.2 <br>**Installation dependencies:**<br> same as the executable file installation dependencies. |
- Confirm that the current user has the right to access the installation path `/usr/local/Ascend` of the Ascend 910 AI processor software package (version: [Atlas Data Center Solution V100R020C00RC1](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251638361)). If not, the root user needs to add the current user to the user group where `/usr/local/Ascend` is located. For the specific configuration, see the instruction document of the software package.
- GCC 7.3.0 can be installed by using the apt command.
...
...
1. Download the source code from the code repository.
- When the network is connected, dependency items in the `requirements.txt` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
...
...
1. Download the source code from the code repository.
> Do **not** obtain the source code from the zip package downloaded from the repository homepage.
...
...
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
| MindArmour 0.6.0-beta | - Ubuntu 18.04 aarch64 <br> - Ubuntu 18.04 x86_64 <br> - EulerOS 2.8 aarch64 <br> - EulerOS 2.5 x86_64 <br> | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - MindSpore 0.6.0-beta <br> - For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/r0.6/setup.py). | Same as the executable file installation dependencies. |
- When the network is connected, dependency items in the `setup.py` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
...
...
1. Download the source code from the code repository.
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
| MindSpore 0.6.0-beta | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base)<br> - [CuDNN](https://developer.nvidia.com/rdp/cudnn-archive) >= 7.6 <br> - [OpenMPI](https://www.open-mpi.org/faq/?category=building#easy-build) 3.1.5 (optional, required for single-node/multi-GPU and multi-node/multi-GPU training) <br> - [NCCL](https://docs.nvidia.com/deeplearning/sdk/nccl-install-guide/index.html#debian) 2.4.8-1 (optional, required for single-node/multi-GPU and multi-node/multi-GPU training) <br> - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r0.6/requirements.txt). | **Compilation dependencies:**<br> - [Python](https://www.python.org/downloads/) 3.7.5 <br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 <br> - [CMake](https://cmake.org/download/) >= 3.14.1 <br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 <br> - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5 <br> - [Autoconf](https://www.gnu.org/software/autoconf) >= 2.69 <br> - [Libtool](https://www.gnu.org/software/libtool) >= 2.4.6-29.fc30 <br> - [Automake](https://www.gnu.org/software/automake) >= 1.15.1 <br> - [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base)<br> - [CuDNN](https://developer.nvidia.com/rdp/cudnn-archive) >= 7.6 <br>**Installation dependencies:**<br> same as the executable file installation dependencies. |
- When the network is connected, dependency items in the `requirements.txt` file are automatically downloaded during `.whl` package installation. In other cases, you need to manually install dependency items.
- To make things easier for users, MindSpore has reduced its dependency on specific Autoconf, Libtool, and Automake versions; the default versions of these tools built into the system are now supported.
...
...
1. Download the source code from the code repository.
2. Run the following command in the root directory of the source code to compile MindSpore:
...
...
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
| MindInsight 0.6.0-beta | - Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - MindSpore 0.6.0-beta <br> - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindinsight/blob/r0.6/requirements.txt). | **Compilation dependencies:**<br> - [Python](https://www.python.org/downloads/) 3.7.5 <br> - [CMake](https://cmake.org/download/) >= 3.14.1 <br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 <br> - [node.js](https://nodejs.org/en/download/) >= 10.19.0 <br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 <br> - [pybind11](https://pypi.org/project/pybind11/) >= 2.4.3 <br>**Installation dependencies:**<br> same as the executable file installation dependencies. |
- When the network is connected, dependency items in the `requirements.txt` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
...
...
1. Download the source code from the code repository.
> Do **not** obtain the source code from the zip package downloaded from the repository homepage.
...
...
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
| MindArmour 0.6.0-beta | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - MindSpore 0.6.0-beta <br> - For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/r0.6/setup.py). | Same as the executable file installation dependencies. |
- When the network is connected, dependency items in the `setup.py` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
...
...
1. Download the source code from the code repository.
3. Execute stage 2 training: there are two devices in the stage 2 training environment. The weight shape of the MatMul operator on each device is \[4, 8]. Load the initialized model parameter data from the integrated checkpoint file and then perform training; a hedged loading sketch follows the notes below.
> For details about the distributed environment configuration and training code, see [Distributed Training](https://www.mindspore.cn/tutorial/en/r0.6/advanced_use/distributed_training.html).
>
> This document provides the example code for integrating checkpoint files and loading checkpoint files before distributed training. The code is for reference only.
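A minimal loading sketch, assuming a network object `net` and an integrated checkpoint file named `integrated.ckpt` (both names are placeholders):

```python
from mindspore.train.serialization import load_checkpoint, load_param_into_net

# Load the integrated checkpoint and copy its parameters into the stage 2 network.
param_dict = load_checkpoint("integrated.ckpt")
load_param_into_net(net, param_dict)
```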
The key point is to select a proper model. The model generally refers to a deep convolutional neural network (CNN), such as AlexNet, VGG, GoogLeNet, or ResNet.
MindSpore provides preset classic CNN models; developers can visit [model_zoo](https://gitee.com/mindspore/mindspore/tree/r0.6/model_zoo/official) for more details.
MindSpore supports the following image classification networks: LeNet, AlexNet, and ResNet.
...
...
5. Call the high-level `Model` API to train and save the model file.
6. Load the saved model for inference.
> This example is for the hardware platform of the Ascend 910 AI processor. You can find the complete executable sample code at: <https://gitee.com/mindspore/docs/blob/r0.6/tutorials/tutorial_code/resnet>.
The key parts of the task process code are explained below.
...
...
ResNet is recommended. First, it is deep enough, with 34, 50, or 101 layers. The deeper the hierarchy, the stronger the representation capability and the higher the classification accuracy. Second, it is learnable: the residual structure is used, and the lower layer is directly connected to the upper layer through a shortcut connection, which solves the gradient vanishing problem caused by network depth during backpropagation. In addition, the ResNet network has good performance, including recognition accuracy, model size, and parameter quantity.
MindSpore Model Zoo has a ResNet [model](https://gitee.com/mindspore/mindspore/blob/r0.6/model_zoo/official/cv/resnet/src/resnet.py). The calling method is as follows:
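A minimal calling sketch, assuming the `resnet.py` model definition from model_zoo has been copied next to the training script:

```python
from resnet import resnet50  # local copy of model_zoo/official/cv/resnet/src/resnet.py

# Build ResNet-50 for a 10-class dataset such as CIFAR-10.
net = resnet50(class_num=10)
```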
When the training result deviates from the expectation on Ascend, the input and output of the operator can be dumped for debugging through Asynchronous Data Dump.
> `comm_ops` operators are not supported by Asynchronous Data Dump. `comm_ops` can be found in [Operator List](https://www.mindspore.cn/docs/en/r0.6/operator_list.html).
1. Turn on the switch to save graph IR: `context.set_context(save_graphs=True)`.
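   A minimal context setup sketch; the device target is an assumption, so adjust it to your environment:

   ```python
   from mindspore import context

   # Save intermediate graph IR files to the execution directory for debugging.
   context.set_context(mode=context.GRAPH_MODE, device_target="Ascend", save_graphs=True)
   ```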
The LeNet model and MNIST dataset are used as an example to describe how to use the differential privacy optimizer to train a neural network model on MindSpore.
> This example is for the Ascend 910 AI processor. You can download the complete sample code from <https://gitee.com/mindspore/mindarmour/blob/r0.6/example/mnist_demo/lenet5_dp.py>.
This tutorial describes how to train the ResNet-50 network in data parallel and automatic parallel modes on MindSpore based on the Ascend 910 AI processor.
> Download address of the complete sample code: <https://gitee.com/mindspore/docs/blob/r0.6/tutorials/tutorial_code/distributed_training/resnet50_distributed_training.py>
## Preparations
...
...
## Defining the Network
In data parallel and automatic parallel modes, the network definition method is the same as that in a single-node system. The reference code is as follows: <https://gitee.com/mindspore/docs/blob/r0.6/tutorials/tutorial_code/resnet/resnet.py>
Take the training model of the `BERT-large` network as an example. For details about the dataset and training script, see <https://gitee.com/mindspore/mindspore/tree/r0.6/model_zoo/official/nlp/bert>. You only need to modify the `context` parameter.
In deep learning, one usually has to deal with the huge model problem, in which the total size of the parameters in the model is beyond the device memory capacity. To efficiently train a huge model, one solution is to employ homogeneous accelerators (*e.g.*, the Ascend 910 AI Accelerator and GPU) for [distributed training](https://www.mindspore.cn/tutorial/en/r0.6/advanced_use/distributed_training.html). When the size of a model is hundreds of GBs or several TBs, the number of required accelerators is too large for most people to access, making this solution inapplicable. One alternative is Host+Device hybrid training. This solution, which simultaneously leverages the large memory on hosts and the fast computation on accelerators, is a promising and efficient method for addressing the huge model problem.
In MindSpore, users can easily implement hybrid training by configuring trainable parameters and necessary operators to run on hosts, and other operators to run on accelerators.
This tutorial introduces how to train [Wide&Deep](https://gitee.com/mindspore/mindspore/tree/r0.6/model_zoo/official/recommend/wide_and_deep) in the Host+Ascend 910 AI Accelerator mode.
## Preliminaries
1. Prepare the model. The Wide&Deep code can be found at: <https://gitee.com/mindspore/mindspore/tree/r0.6/model_zoo/official/recommend/wide_and_deep>, in which `train_and_eval_auto_parallel.py` is the main function for training,
the `src/` directory contains the model definition, data processing, and configuration files, and the `script/` directory contains the launch scripts for different modes.
2. Prepare the dataset. The dataset can be found at: <https://s3-eu-west-1.amazonaws.com/kaggle-display-advertising-challenge-dataset/dac.tar.gz>. Use the script `src/preprocess_data.py` to transform dataset into MindRecord format.
This section describes how to use MindArmour in adversarial attack and defense by taking the Fast Gradient Sign Method (FGSM) attack algorithm and Natural Adversarial Defense (NAD) algorithm as examples.
> The current sample is for CPU, GPU and Ascend 910 AI processor. You can find the complete executable sample code at: <https://gitee.com/mindspore/docs/tree/r0.6/tutorials/tutorial_code/model_safety>
### Operator Assessment
Analyze the operators contained in the network to be migrated and figure out how MindSpore supports these operators based on the [Operator List](https://www.mindspore.cn/docs/en/r0.6/operator_list.html).
Take ResNet-50 as an example. The two major operators [Conv](https://www.mindspore.cn/api/en/r0.6/api/python/mindspore/mindspore.nn.html#mindspore.nn.Conv2d) and [BatchNorm](https://www.mindspore.cn/api/en/r0.6/api/python/mindspore/mindspore.nn.html#mindspore.nn.BatchNorm2d) exist in the MindSpore Operator List.
If any operator does not exist, you are advised to perform the following operations:
...
...
MindSpore differs from TensorFlow and PyTorch in the network structure. Before migration, you need to clearly understand the original script and information of each layer, such as shape.
> You can also use [MindConverter Tool](https://gitee.com/mindspore/mindinsight/tree/r0.6/mindinsight/mindconverter) to automatically convert the PyTorch network definition script to MindSpore network definition script.
The ResNet-50 network migration and training on the Ascend 910 is used as an example.
1. Import MindSpore modules.
Import the corresponding MindSpore modules based on the required APIs. For details about the module list, see <https://www.mindspore.cn/api/en/r0.6/index.html>.
2. Load and preprocess a dataset.
Use MindSpore to build the required dataset. Currently, MindSpore supports common datasets. You can call APIs in the original format, `MindRecord`, and `TFRecord`. In addition, MindSpore supports data processing and data augmentation. For details, see the [Data Preparation](https://www.mindspore.cn/tutorial/en/r0.6/use/data_preparation/data_preparation.html).
In this example, the CIFAR-10 dataset is loaded, which supports both single-GPU and multi-GPU scenarios.
...
...
num_shards=device_num, shard_id=rank_id)
```
Then, perform data augmentation, data cleaning, and batch processing. For details about the code, see <https://gitee.com/mindspore/mindspore/blob/r0.6/model_zoo/official/cv/resnet/src/dataset.py>.
3. Build a network.
...
...
6. Build the entire network.
The [ResNet-50](https://gitee.com/mindspore/mindspore/blob/r0.6/model_zoo/official/cv/resnet/src/resnet.py) network structure is formed by connecting multiple defined subnets. Follow the rule of defining a subnet before using it: define all the subnets used in `__init__` and connect them in `construct`.
7. Define a loss function and an optimizer.
...
...
You can use a built-in assessment method of `Model` by setting the [metrics](https://www.mindspore.cn/tutorial/en/r0.6/advanced_use/customized_debugging_information.html#mindspore-metrics) attribute.
#### On-Cloud Integration
Run your scripts on ModelArts. For details, see [Using MindSpore on Cloud](https://www.mindspore.cn/tutorial/zh-CN/r0.6/advanced_use/use_on_the_cloud.html).
### Inference Phase
Models trained on the Ascend 910 AI processor can be used for inference on different hardware platforms. Refer to the [Multi-platform Inference Tutorial](https://www.mindspore.cn/tutorial/en/r0.6/use/multi_platform_inference.html) for detailed steps.
Long short-term memory (LSTM) is an artificial recurrent neural network (RNN) architecture used for processing and predicting an important event with a long interval and delay in a time sequence. For details, refer to online documentation.
3. After the model is obtained, use the validation dataset to check the accuracy of the model.
> The current sample is for the Ascend 910 AI processor. You can find the complete executable sample code at: <https://gitee.com/mindspore/mindspore/blob/r0.6/model_zoo/official/nlp/lstm>
> - `src/config.py`: network configuration, including the batch size and number of training epochs.
> - `src/dataset.py`: dataset-related definitions, including MindRecord file conversion and data preprocessing.
> - `src/imdb.py`: the utility class for parsing the IMDB dataset.
...
...
```
> After the conversion succeeds, you can find `mindrecord` files under the `preprocess_path` directory. Usually, this operation does not need to be performed again as long as the dataset is unchanged.
> You can find the complete definition of `convert_to_mindrecord` at: <https://gitee.com/mindspore/mindspore/blob/r0.6/model_zoo/official/nlp/lstm/src/dataset.py>
> It consists of two steps:
> 1. Process the text dataset, including encoding, word segmentation, alignment, and processing the original GloVe data to adapt to the network structure.
print("============== Training Success ==============")
```
> You can find the complete definition of `lstm_create_dataset` at: <https://gitee.com/mindspore/mindspore/blob/r0.6/model_zoo/official/nlp/lstm/src/dataset.py>
Performance data, such as operator execution time, is recorded in files and can be viewed on the web page, which helps users optimize the performance of neural networks. Currently, MindInsight Profiler supports only the Ascend chip.
...
...
## Launch MindInsight
The MindInsight launch command can refer to [MindInsight Commands](https://www.mindspore.cn/tutorial/en/r0.6/advanced_use/mindinsight_commands.html).
Next, the LeNet network is used as an example to describe steps 3 and 6.
> You can obtain the complete executable sample code at <https://gitee.com/mindspore/mindspore/tree/r0.6/model_zoo/lenet_quant>.
### Defining a Fusion Network
...
...
2. Define a network.
3. Define a fusion network.
4. Define an optimizer and loss function.
5. Load a model file and retrain the model. Load an existing model file and retrain the model based on the fusion network to generate a fusion model. For details, see <https://www.mindspore.cn/tutorial/en/r0.6/use/saving_and_loading_model_parameters.html#id6>.
6. Generate a quantization network.
7. Perform quantization training.
...
...
The inference using a quantization model is the same as common model inference. The inference can be performed by directly using the checkpoint file or converting the checkpoint file into a common model format (such as ONNX or GEIR).
For details, see <https://www.mindspore.cn/tutorial/en/r0.6/use/multi_platform_inference.html>.
- To use a checkpoint file obtained after quantization aware training for inference, perform the following steps:
Users can view system metrics such as Ascend AI processor, CPU, and memory usage, so as to allocate appropriate resources for training.
Just [Start MindInsight](https://www.mindspore.cn/tutorial/en/r0.6/advanced_use/mindinsight_commands.html#start-the-service), and click "System Metrics" in the navigation bar to view it.
5. Load the saved model for inference.
6. Validate the model, load the test dataset and trained model, and validate the result accuracy.
> You can find the complete executable sample code at <https://gitee.com/mindspore/docs/blob/r0.6/tutorials/tutorial_code/lenet.py>.
This is a simple and basic application process. For other advanced and complex applications, extend this basic process as needed.
...
...
import os
```
For details about MindSpore modules, search on the [MindSpore API Page](https://www.mindspore.cn/api/en/r0.6/index.html).
### Configuring the Running Information
...
...
Perform the shuffle and batch operations, and then perform the repeat operation to ensure that the data within one epoch is unique (see the sketch after the note below).
> MindSpore supports multiple data processing and augmentation operations, which are usually combined. For details, see section "Data Processing and Augmentation" in the MindSpore Tutorials (https://www.mindspore.cn/tutorial/en/r0.6/use/data_preparation/data_processing_and_augmentation.html).
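A minimal sketch of this ordering, assuming an MNIST dataset directory `./MNIST_Data/train`; the buffer and batch sizes are illustrative:

```python
import mindspore.dataset as ds

mnist_ds = ds.MnistDataset("./MNIST_Data/train")
mnist_ds = mnist_ds.shuffle(buffer_size=10000)      # shuffle first
mnist_ds = mnist_ds.batch(32, drop_remainder=True)  # then batch
mnist_ds = mnist_ds.repeat(1)                       # repeat last, so each epoch's data stays unique
```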
- Operator implementation: describes the implementation of the internal computation logic for an operator through the DSL API provided by the Tensor Boost Engine (TBE). The TBE supports the development of custom operators based on the Ascend AI chip. You can apply for Open Beta Tests (OBTs) by visiting <https://www.huaweicloud.com/ascend/tbe>.
- Operator information: describes basic information about a TBE operator, such as the operator name and supported input and output types. It is the basis for the backend to select and map operators.
This section takes a Square operator as an example to describe how to customize an operator. For details, see cases in [tests/st/ops/custom_ops_tbe](https://gitee.com/mindspore/mindspore/tree/r0.6/tests/st/ops/custom_ops_tbe) in the MindSpore source code.
## Registering the Operator Primitive
The primitive of an operator is a subclass inherited from `PrimitiveWithInfer`. The type name of the subclass is the operator name.
The definition of the custom operator primitive is the same as that of the built-in operator primitive.
- The attribute is defined by the input parameter of the constructor function `__init__`. The operator in this test case has no attribute. Therefore, `__init__` has only one input parameter. For details about test cases in which operators have attributes, see [custom add3](https://gitee.com/mindspore/mindspore/tree/r0.6/tests/st/ops/custom_ops_tbe/cus_add3.py) in the MindSpore source code.
- The input and output names are defined by the `init_prim_io_names` function.
- The shape inference method of the output tensor is defined in the `infer_shape` function, and the dtype inference method of the output tensor is defined in the `infer_dtype` function.
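Putting these rules together, a minimal sketch of the primitive registration for the Square example above (the `Cus` class-name prefix follows the tutorial's convention):

```python
from mindspore.ops import prim_attr_register, PrimitiveWithInfer

class CusSquare(PrimitiveWithInfer):
    """Custom Square operator primitive."""

    @prim_attr_register
    def __init__(self):
        # The operator has no attributes, so __init__ only names the inputs and outputs.
        self.init_prim_io_names(inputs=['x'], outputs=['y'])

    def infer_shape(self, x_shape):
        # The output shape equals the input shape.
        return x_shape

    def infer_dtype(self, x_dtype):
        # The output dtype equals the input dtype.
        return x_dtype
```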
## Loading a Custom Dataset
In real scenarios, there are various datasets. For a custom dataset or a dataset that can't be loaded by APIs directly, there are two ways. One is converting the dataset to the MindSpore data format (for details, see [Converting Datasets to the MindSpore Data Format](https://www.mindspore.cn/tutorial/en/r0.6/use/data_preparation/converting_datasets.html)). The other one is using the `GeneratorDataset` object.
The following shows how to use `GeneratorDataset`.
1. Define an iterable object to generate a dataset. Two examples follow: one is a customized function that contains `yield`; the other is a customized class that contains `__getitem__`.
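   For the first case, a minimal sketch; the column name and values are illustrative:

   ```python
   import numpy as np
   import mindspore.dataset as ds

   def generator_func():
       # A customized function containing yield; each yield is one row (a tuple of columns).
       for i in range(5):
           yield (np.array([i, i + 1], dtype=np.int32),)

   data_set = ds.GeneratorDataset(generator_func, ["data"])
   for item in data_set.create_dict_iterator():
       print(item["data"])
   ```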
res = model.eval(dataset)
```
In the preceding information:
`model.eval` is an API for model validation. For details about the API, see <https://www.mindspore.cn/api/en/r0.6/api/python/mindspore/mindspore.html#mindspore.Model.eval>.
2. Use the `model.predict` API to perform inference.
```python
model.predict(input_data)
```
In the preceding information:
`model.predict` is an API for inference. For details about the API, see <https://www.mindspore.cn/api/en/r0.6/api/python/mindspore/mindspore.html#mindspore.Model.predict>.
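A minimal sketch combining both APIs; `net`, `loss`, `opt`, `dataset`, and `input_data` are assumed to be defined elsewhere:

```python
from mindspore import Model
from mindspore.nn.metrics import Accuracy

model = Model(net, loss_fn=loss, optimizer=opt, metrics={"Accuracy": Accuracy()})
res = model.eval(dataset)        # validation: returns a dictionary of metric values
print(res)
out = model.predict(input_data)  # inference: returns the network output Tensor
```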
## Inference on the Ascend 310 AI processor
...
...
The Ascend 310 AI processor is equipped with the ACL framework and supports the OM format, which is converted from a model in ONNX or GEIR format. For inference on the Ascend 310 AI processor, perform the following steps:
1. Generate a model in ONNX or GEIR format on the training platform. For details, see [Export GEIR Model and ONNX Model](https://www.mindspore.cn/tutorial/en/r0.6/use/saving_and_loading_model_parameters.html#geironnx); a hedged export sketch follows this list.
2. Convert the ONNX or GEIR model file into an OM model file and perform inference.
   - For performing inference in the cloud environment (ModelArts), see the [Ascend 910 training and Ascend 310 inference samples](https://support.huaweicloud.com/bestpractice-modelarts/modelarts_10_0026.html).
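A minimal export sketch for step 1, assuming a trained network `net` with a 1x1x32x32 input; the shape and file name are illustrative, and `file_format="GEIR"` can be used for GEIR:

```python
import numpy as np
from mindspore import Tensor
from mindspore.train.serialization import export

# Export the trained network with a representative input Tensor.
input_data = Tensor(np.ones([1, 1, 32, 32], np.float32))
export(net, input_data, file_name="lenet.onnx", file_format="ONNX")
```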
...
...
### Inference Using an ONNX File
1. Generate a model in ONNX format on the training platform. For details, see [Export GEIR Model and ONNX Model](https://www.mindspore.cn/tutorial/en/r0.6/use/saving_and_loading_model_parameters.html#geironnx).
2. Perform inference on a GPU by referring to the runtime or SDK document. For example, use TensorRT to perform inference on the NVIDIA GPU. For details, see [TensorRT backend for ONNX](https://github.com/onnx/onnx-tensorrt).
...
...
### Inference Using an ONNX File
Similar to the inference on a GPU, the following steps are required:
1. Generate a model in ONNX format on the training platform. For details, see [Export GEIR Model and ONNX Model](https://www.mindspore.cn/tutorial/en/r0.6/use/saving_and_loading_model_parameters.html#geironnx).
2. Perform inference on a CPU by referring to the runtime or SDK document. For details about how to use the ONNX Runtime, see the [ONNX Runtime document](https://github.com/microsoft/onnxruntime).
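   For step 2, a minimal ONNX Runtime sketch; the model file name and input shape are assumptions carried over from the export step:

   ```python
   import numpy as np
   import onnxruntime as ort

   session = ort.InferenceSession("lenet.onnx")
   x = np.ones([1, 1, 32, 32], dtype=np.float32)
   # Feed the model's first input by its recorded name and fetch all outputs.
   outputs = session.run(None, {session.get_inputs()[0].name: x})
   print(outputs[0])
   ```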
## On-Device Inference
MindSpore Predict is an inference engine for on-device inference. For details, see [On-Device Inference](https://www.mindspore.cn/tutorial/en/r0.6/advanced_use/on_device_inference.html).
The ResNet model has been implemented in MindSpore Model Zoo; [ResNet-50](https://gitee.com/mindspore/mindspore/blob/r0.6/model_zoo/official/cv/resnet/src/resnet.py) can be used. The calling method is as follows:
export MS_SCHED_HOST=XXX.XXX.XXX.XXX # Scheduler IP address
export MS_SCHED_PORT=XXXX # Scheduler port
export MS_ROLE=MS_SCHED # The role of this process: MS_SCHED represents the scheduler, MS_WORKER represents the worker, MS_PSERVER represents the Server
```
...
...
export MS_SERVER_NUM=1
export MS_WORKER_NUM=1
export MS_SCHED_HOST=XXX.XXX.XXX.XXX
export MS_SCHED_PORT=XXXX
export MS_ROLE=MS_SCHED
python train.py
```
...
...
export MS_SERVER_NUM=1
export MS_WORKER_NUM=1
export MS_SCHED_HOST=XXX.XXX.XXX.XXX
export MS_SCHED_PORT=XXXX
export MS_ROLE=MS_PSERVER
python train.py
```
...
...