## Network Definition Constraints
### Instance Types on the Entire Network
* Common Python function with the [@ms_function](https://www.mindspore.cn/api/en/r0.5/api/python/mindspore/mindspore.html#mindspore.ms_function) decorator.
* Cell subclass inherited from [nn.Cell](https://www.mindspore.cn/api/en/r0.5/api/python/mindspore/mindspore.nn.html#mindspore.nn.Cell).
### Network Input Type
* The training data input parameters of the entire network must be of the Tensor type.
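As a minimal sketch of both supported instance types (the network, input names, and shapes below are illustrative, not from the original):

```python
import numpy as np
import mindspore.nn as nn
from mindspore import Tensor, ms_function
from mindspore.ops import operations as P

@ms_function
def add(x, y):
    # a common Python function compiled by the @ms_function decorator
    return x + y

class Net(nn.Cell):
    # a Cell subclass; the computation goes into construct
    def __init__(self):
        super(Net, self).__init__()
        self.mul = P.Mul()

    def construct(self, x, y):
        return self.mul(x, y)

x = Tensor(np.ones([2, 2]).astype(np.float32))
y = Tensor(np.ones([2, 2]).astype(np.float32))
print(add(x, y))     # network inputs must be Tensors
print(Net()(x, y))
```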
...
...
| Category | Content
| :----------- |:--------
| `Cell` instance |[mindspore/nn/*](https://www.mindspore.cn/api/en/r0.5/api/python/mindspore/mindspore.nn.html), and custom [Cell](https://www.mindspore.cn/api/en/r0.5/api/python/mindspore/mindspore.nn.html#mindspore.nn.Cell).
| Member function of a `Cell` instance | Member functions of other classes in the construct function of Cell can be called.
| Function | Custom Python functions and system functions listed in the preceding content.
| Dataclass instance | Class decorated with @dataclass.
| Operator generated by constexpr |Uses the value generated by [@constexpr](https://www.mindspore.cn/api/en/r0.5/api/python/mindspore/mindspore.ops.html#mindspore.ops.constexpr) to calculate operators.
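For illustration, a minimal sketch of the `@constexpr` pattern (the helper name and shapes are ours, not from the original): the decorated function runs at graph compile time, so ordinary Python and NumPy are allowed inside it, and its result is folded into the graph as a constant.

```python
import numpy as np
import mindspore.nn as nn
from mindspore import Tensor
from mindspore.ops import constexpr

@constexpr
def make_ones(shape):
    # evaluated once at compile time; the Tensor becomes a graph constant
    return Tensor(np.ones(shape, np.float32))

class AddOnes(nn.Cell):
    def construct(self, x):
        return x + make_ones((2, 2))

net = AddOnes()
print(net(Tensor(np.zeros([2, 2], np.float32))))
```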
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
| MindSpore 0.5.0-beta | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r0.5/requirements.txt). | **Compilation dependencies:**<br> - [Python](https://www.python.org/downloads/) 3.7.5 <br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 <br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 <br> - [CMake](https://cmake.org/download/) >= 3.14.1 <br> - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5 <br> same as the executable file installation dependencies. |
- GCC 7.3.0 can be installed by using the apt command.
- When the network is connected, dependency items in the `requirements.txt` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
...
...
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
| MindArmour 0.5.0-beta | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - MindSpore 0.5.0-beta<br> - For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/r0.5/setup.py). | Same as the executable file installation dependencies. |
- When the network is connected, dependency items in the `setup.py` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
| MindSpore 0.5.0-beta | Windows 10 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r0.5/requirements.txt). | **Compilation dependencies:**<br> - [Python](https://www.python.org/downloads/) 3.7.5 <br> - [MinGW-W64 GCC-7.3.0](https://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Personal%20Builds/mingw-builds/7.3.0/threads-posix/seh/x86_64-7.3.0-release-posix-seh-rt_v5-rev0.7z) x86_64-posix-seh <br> - [ActivePerl](http://downloads.activestate.com/ActivePerl/releases/5.24.3.2404/ActivePerl-5.24.3.2404-MSWin32-x64-404865.exe) 5.24.3.2404 <br> - [CMake](https://cmake.org/download/) 3.14.1 <br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 <br>**Installation dependencies:**<br> same as the executable file installation dependencies. |
- When the network is connected, dependency items in the `requirements.txt` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
| MindSpore 0.5.0-beta | - Ubuntu 18.04 aarch64 <br> - Ubuntu 18.04 x86_64 <br> - EulerOS 2.8 aarch64 <br> - EulerOS 2.5 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - Ascend 910 AI processor software package(Version:Atlas Data Center Solution V100R020C00T100) <br> - [gmp](https://gmplib.org/download/gmp/) 6.1.2 <br> - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r0.5/requirements.txt). | **Compilation dependencies:**<br> - [Python](https://www.python.org/downloads/) 3.7.5 <br> - Ascend 910 AI processor software package(Version:Atlas Data Center Solution V100R020C00T100) <br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 <br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 <br> - [CMake](https://cmake.org/download/) >= 3.14.1 <br> - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5 <br> - [gmp](https://gmplib.org/download/gmp/) 6.1.2 <br>**Installation dependencies:**<br> same as the executable file installation dependencies. |
- Confirm that the current user has the right to access the installation path `/usr/local/Ascend` of the Ascend 910 AI processor software package (Version: Atlas Data Center Solution V100R020C00T100). If not, the root user needs to add the current user to the user group where `/usr/local/Ascend` is located. For the specific configuration, please refer to the software package instruction document.
- GCC 7.3.0 can be installed by using the apt command.
...
...
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
- When the network is connected, dependency items in the `requirements.txt` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
...
...
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
| MindArmour 0.5.0-beta | - Ubuntu 18.04 aarch64 <br> - Ubuntu 18.04 x86_64 <br> - EulerOS 2.8 aarch64 <br> - EulerOS 2.5 x86_64 <br> | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - MindSpore 0.5.0-beta<br> - For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/r0.5/setup.py). | Same as the executable file installation dependencies. |
- When the network is connected, dependency items in the `setup.py` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
- When the network is connected, dependency items in the `requirements.txt` file are automatically downloaded during `.whl` package installation. In other cases, you need to manually install dependency items.
- For the convenience of users, MindSpore has reduced its dependency on specific Autoconf, Libtool, and Automake versions; the default versions of these tools built into the system are now supported.
...
...
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
| MindInsight 0.5.0-beta | - Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - MindSpore 0.5.0-beta<br> - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindinsight/blob/r0.5/requirements.txt). | **Compilation dependencies:**<br> - [Python](https://www.python.org/downloads/) 3.7.5 <br> - [CMake](https://cmake.org/download/) >= 3.14.1 <br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 <br> - [node.js](https://nodejs.org/en/download/) >= 10.19.0 <br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 <br> - [pybind11](https://pypi.org/project/pybind11/) >= 2.4.3 <br>**Installation dependencies:**<br> same as the executable file installation dependencies. |
- When the network is connected, dependency items in the `requirements.txt` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
...
...
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
| MindArmour 0.5.0-beta | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - MindSpore 0.5.0-beta<br> - For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/r0.5/setup.py). | Same as the executable file installation dependencies. |
- When the network is connected, dependency items in the `setup.py` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
Q: What are the available recommendation or text generation networks or models provided by MindSpore?
A: Currently, recommendation models such as Wide & Deep, DeepFM, and NCF are under development. In the natural language processing (NLP) field, Bert\_NEZHA is available and models such as MASS are under development. You can rebuild the network into a text generation network based on the scenario requirements. Please stay tuned for updates on the [MindSpore Model Zoo](https://gitee.com/mindspore/mindspore/tree/r0.5/model_zoo).
### Backend Support
...
...
Q: What hardware does MindSpore require?
A: Currently, you can try out MindSpore through Docker images on laptops or in environments with GPUs. Some models in MindSpore Model Zoo support GPU-based training and inference, and other models are being improved. For distributed parallel training, MindSpore supports multi-GPU training. You can obtain the latest information from [RoadMap](https://www.mindspore.cn/docs/en/r0.5/roadmap.html) and project [Release Notes](https://gitee.com/mindspore/mindspore/blob/r0.5/RELEASE.md).
### System Support
Q: Does MindSpore support Windows 10?
A: The MindSpore CPU version can be installed on Windows 10. For details about the installation procedure, see tutorials on the [MindSpore official website](https://www.mindspore.cn/tutorial/en/r0.5/advanced_use/mindspore_cpu_win_install.html).
### Programming Language
...
...
Q: Where can I view the sample code or tutorial of MindSpore training and inference?
A: Please visit the [MindSpore official website](https://www.mindspore.cn/tutorial/en/r0.5/index.html).
## Features
...
...
Q: What is the relationship between MindSpore and ModelArts? Can MindSpore be used on ModelArts?
A: ModelArts is an online training and inference platform on HUAWEI CLOUD. MindSpore is a Huawei deep learning framework. You can view the tutorials on the [MindSpore official website](https://www.mindspore.cn/tutorial/zh-CN/r0.5/advanced_use/use_on_the_cloud.html) to learn how to train MindSpore models on ModelArts.
## Capabilities
...
...
Q: How do I migrate scripts or models of other frameworks to MindSpore?
A: For details about script or model migration, please visit the [MindSpore official website](https://www.mindspore.cn/tutorial/en/r0.5/advanced_use/network_migration.html).
3. Execute stage 2 training: There are two devices in the stage 2 training environment. The weight shape of the MatMul operator on each device is \[4, 8]. Load the initialized model parameter data from the integrated checkpoint file and then perform training.
> For details about the distributed environment configuration and training code, see [Distributed Training](https://www.mindspore.cn/tutorial/en/r0.5/advanced_use/distributed_training.html).
>
> This document provides the example code for integrating checkpoint files and loading checkpoint files before distributed training. The code is for reference only.
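As a minimal sketch of the loading step (the checkpoint file name and the `net` argument are placeholders, not from the original tutorial):

```python
from mindspore.train.serialization import load_checkpoint, load_param_into_net

def load_integrated_checkpoint(net, ckpt_path="integrated.ckpt"):
    # ckpt_path is a hypothetical name for the checkpoint file produced
    # by the integration step described above
    param_dict = load_checkpoint(ckpt_path)
    load_param_into_net(net, param_dict)  # net: the stage 2 network instance
    return net
```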
The LeNet model and MNIST dataset are used as an example to describe how to use the differential privacy optimizer to train a neural network model on MindSpore.
> This example is for the Ascend 910 AI processor and supports PYNATIVE_MODE. You can download the complete sample code from <https://gitee.com/mindspore/mindarmour/blob/r0.5/example/mnist_demo/lenet5_dp_model_train.py>.
This section describes how to use MindArmour in adversarial attack and defense by taking the Fast Gradient Sign Method (FGSM) attack algorithm and Natural Adversarial Defense (NAD) algorithm as examples.
> The current sample is for CPU, GPU and Ascend 910 AI processor. You can find the complete executable sample code at: <https://gitee.com/mindspore/docs/tree/r0.5/tutorials/tutorial_code/model_safety>
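As a hedged sketch of the attack half only (assuming the MindArmour 0.5 import path; `net`, `inputs`, and `labels` are supplied by the caller and are not from the original sample):

```python
from mindarmour.attacks import FastGradientSignMethod

def generate_adversarial(net, inputs, labels, eps=0.3):
    # net: a trained classification network (nn.Cell);
    # inputs/labels: NumPy batches from the test set (an assumption)
    attack = FastGradientSignMethod(net, eps=eps)
    return attack.generate(inputs, labels)
```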
### Operator Assessment
Analyze the operators contained in the network to be migrated and determine how MindSpore supports these operators based on the [Operator List](https://www.mindspore.cn/docs/en/r0.5/operator_list.html).
Take ResNet-50 as an example. The two major operators [Conv](https://www.mindspore.cn/api/en/r0.5/api/python/mindspore/mindspore.nn.html#mindspore.nn.Conv2d) and [BatchNorm](https://www.mindspore.cn/api/en/r0.5/api/python/mindspore/mindspore.nn.html#mindspore.nn.BatchNorm2d) exist in the MindSpore Operator List.
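For instance, a quick sanity check that these two operators are available (the parameter values below are illustrative, matching the first convolution stage of ResNet-50):

```python
import mindspore.nn as nn

# construct the two key ResNet-50 layers to confirm they are supported
conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=7, stride=2)
bn = nn.BatchNorm2d(num_features=64)
print(conv)
print(bn)
```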
If any operator does not exist, you are advised to perform the following operations:
...
...
MindSpore differs from TensorFlow and PyTorch in the network structure. Before migration, you need to clearly understand the original script and information of each layer, such as shape.
> You can also use the [MindConverter Tool](https://gitee.com/mindspore/mindinsight/tree/r0.5/mindinsight/mindconverter) to automatically convert a PyTorch network definition script to a MindSpore network definition script.
The ResNet-50 network migration and training on the Ascend 910 is used as an example.
1. Import MindSpore modules.
Import the corresponding MindSpore modules based on the required APIs. For details about the module list, see <https://www.mindspore.cn/api/en/r0.5/index.html>.
2. Load and preprocess a dataset.
Use MindSpore to build the required dataset. Currently, MindSpore supports common datasets. You can call APIs in the original format, `MindRecord`, and `TFRecord`. In addition, MindSpore supports data processing and data augmentation. For details, see the [Data Preparation](https://www.mindspore.cn/tutorial/en/r0.5/use/data_preparation/data_preparation.html).
In this example, the CIFAR-10 dataset is loaded, which supports both single-GPU and multi-GPU scenarios.
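A minimal sketch of this step (the directory path, image size, and batch size are placeholders, and the transform module path assumes the 0.5-era API):

```python
import mindspore.dataset as ds
import mindspore.dataset.transforms.vision.c_transforms as C

def create_dataset(data_dir="./cifar-10-batches-bin", batch_size=32):
    # load the CIFAR-10 binaries and apply basic preprocessing
    data = ds.Cifar10Dataset(data_dir, shuffle=True)
    data = data.map(input_columns="image",
                    operations=[C.Resize((224, 224)), C.HWC2CHW()])
    data = data.batch(batch_size, drop_remainder=True)
    return data
```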
...
...
You can use a built-in assessment method of `Model` by setting the [metrics](https://www.mindspore.cn/tutorial/en/r0.5/advanced_use/customized_debugging_information.html#mindspore-metrics) attribute.
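For example (a sketch; `net`, `loss`, `opt`, and the evaluation dataset come from the surrounding script and are assumptions here):

```python
from mindspore.nn.metrics import Accuracy
from mindspore.train import Model

def evaluate(net, loss, opt, eval_dataset):
    # attach a built-in metric through the metrics attribute
    model = Model(net, loss_fn=loss, optimizer=opt,
                  metrics={"Accuracy": Accuracy()})
    return model.eval(eval_dataset)  # returns e.g. {"Accuracy": 0.93}
```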
#### On-Cloud Integration
Run your scripts on ModelArts. For details, see [Using MindSpore on Cloud](https://www.mindspore.cn/tutorial/zh-CN/r0.5/advanced_use/use_on_the_cloud.html).
### Inference Phase
Models trained on the Ascend 910 AI processor can be used for inference on different hardware platforms. Refer to the [Multi-platform Inference Tutorial](https://www.mindspore.cn/tutorial/en/r0.5/use/multi_platform_inference.html) for detailed steps.
Long short-term memory (LSTM) is an artificial recurrent neural network (RNN) architecture used for processing and predicting an important event with a long interval and delay in a time sequence. For details, refer to online documentation.
3. After the model is obtained, use the validation dataset to check the accuracy of the model.
> The current sample is for the Ascend 910 AI processor. You can find the complete executable sample code at: <https://gitee.com/mindspore/docs/tree/r0.5/tutorials/tutorial_code/lstm>
> - `main.py`: code file, including code for data preprocessing, network definition, and model training.
> - `config.py`: some configurations on the network, including the `batch size` and number of training epochs.
The MindInsight launch command can refer to [MindInsight Commands](https://www.mindspore.cn/tutorial/en/r0.5/advanced_use/mindinsight_commands.html).
```python
import os
```
For details about MindSpore modules, search on the [MindSpore API Page](https://www.mindspore.cn/api/en/r0.5/index.html).
### Configuring the Running Information
...
...
Perform the shuffle and batch operations, and then perform the repeat operation to ensure that data during an epoch is unique.
> MindSpore supports multiple data processing and augmentation operations, which are usually combined. For details, see section "Data Processing and Augmentation" in the MindSpore Tutorials (https://www.mindspore.cn/tutorial/en/r0.5/use/data_preparation/data_processing_and_augmentation.html).
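A minimal sketch of the recommended order (the dataset path, buffer size, and batch size are placeholders):

```python
import mindspore.dataset as ds

data = ds.MnistDataset("./MNIST/train")          # path is an assumption
data = data.shuffle(buffer_size=10000)           # shuffle first
data = data.batch(batch_size=32, drop_remainder=True)
data = data.repeat(count=1)                      # repeat last, so each epoch
                                                 # contains every sample once
```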
- Operator implementation: describes the implementation of the internal computation logic for an operator through the DSL API provided by the Tensor Boost Engine (TBE). The TBE supports the development of custom operators based on the Ascend AI chip. You can apply for Open Beta Tests (OBTs) by visiting <https://www.huaweicloud.com/ascend/tbe>.
- Operator information: describes basic information about a TBE operator, such as the operator name and supported input and output types. It is the basis for the backend to select and map operators.
This section takes a Square operator as an example to describe how to customize an operator. For details, see cases in [tests/st/ops/custom_ops_tbe](https://gitee.com/mindspore/mindspore/tree/r0.5/tests/st/ops/custom_ops_tbe) in the MindSpore source code.
## Registering the Operator Primitive
The primitive of an operator is a subclass inherited from `PrimitiveWithInfer`. The type name of the subclass is the operator name.
The definition of the custom operator primitive is the same as that of the built-in operator primitive.
- The attribute is defined by the input parameter of the constructor function `__init__`. The operator in this test case has no attribute. Therefore, `__init__` has only one input parameter. For details about test cases in which operators have attributes, see [custom add3](https://gitee.com/mindspore/mindspore/tree/r0.5/tests/st/ops/custom_ops_tbe/cus_add3.py) in the MindSpore source code.
- The input and output names are defined by the `init_prim_io_names` function.
- The shape inference method of the output tensor is defined in the `infer_shape` function, and the dtype inference method of the output tensor is defined in the `infer_dtype` function.
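Putting the three points together, a sketch of the Square primitive registration (following the pattern described above; Square takes one input and returns an output of the same shape and dtype):

```python
from mindspore.ops import prim_attr_register, PrimitiveWithInfer

class CusSquare(PrimitiveWithInfer):
    """Custom Square operator primitive: y = x * x."""

    @prim_attr_register
    def __init__(self):
        # no attributes, so __init__ takes only self
        self.init_prim_io_names(inputs=['x'], outputs=['y'])

    def infer_shape(self, x_shape):
        # the output shape equals the input shape
        return x_shape

    def infer_dtype(self, x_dtype):
        # the output dtype equals the input dtype
        return x_dtype
```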
## Loading a Custom Dataset
In real scenarios, there are various datasets. For a custom dataset, or a dataset that cannot be loaded by APIs directly, there are two ways.
One is converting the dataset to MindSpore data format (for details, see [Converting Datasets to the Mindspore Data Format](https://www.mindspore.cn/tutorial/en/r0.5/use/data_preparation/converting_datasets.html)). The other one is using the `GeneratorDataset` object.
The following shows how to use `GeneratorDataset`.
1. Define an iterable object to generate a dataset. There are two examples following. One is a customized function which contains `yield`. The other one is a customized class which contains `__getitem__`.
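   A minimal sketch of the two styles (the column name and the generated data are illustrative):

   ```python
   import numpy as np
   import mindspore.dataset as ds

   def generator_func():
       # style 1: a customized function containing yield
       for i in range(5):
           yield (np.array([i], dtype=np.int32),)

   class DatasetGenerator:
       # style 2: a customized class containing __getitem__ (and __len__)
       def __len__(self):
           return 5
       def __getitem__(self, index):
           return (np.array([index], dtype=np.int32),)

   data1 = ds.GeneratorDataset(generator_func, column_names=["data"])
   data2 = ds.GeneratorDataset(DatasetGenerator(), column_names=["data"])
   ```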
2. Inference on the Ascend 310 AI processor
    1. Export the ONNX or GEIR model by referring to the [Export GEIR Model and ONNX Model](https://www.mindspore.cn/tutorial/en/r0.5/use/saving_and_loading_model_parameters.html#geironnx).
2. For performing inference in the cloud environment, see the [Ascend 910 training and Ascend 310 inference samples](https://support.huaweicloud.com/bestpractice-modelarts/modelarts_10_0026.html). For details about the bare-metal environment (compared with the cloud environment where the Ascend 310 AI processor is deployed locally), see the description document of the Ascend 310 AI processor software package.
3. Inference on a GPU
    1. Export the ONNX model by referring to the [Export GEIR Model and ONNX Model](https://www.mindspore.cn/tutorial/en/r0.5/use/saving_and_loading_model_parameters.html#geironnx) (a minimal export sketch follows this list).
2. Perform inference on the NVIDIA GPU by referring to [TensorRT backend for ONNX](https://github.com/onnx/onnx-tensorrt).
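A minimal sketch of the export step referenced above (the input shape and file name are placeholders; `net` is a trained `nn.Cell` supplied by the caller):

```python
import numpy as np
from mindspore import Tensor
from mindspore.train.serialization import export

def export_onnx(net, file_name="net.onnx"):
    # the dummy input fixes the input shape of the exported graph
    dummy_input = Tensor(np.ones([1, 3, 224, 224]).astype(np.float32))
    export(net, dummy_input, file_name=file_name, file_format='ONNX')
```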
## On-Device Inference
On-device inference is based on MindSpore Predict. Please refer to the [On-Device Inference Tutorial](https://www.mindspore.cn/tutorial/en/r0.5/advanced_use/on_device_inference.html) for details.