Commit 3e931e1f authored by mindspore-ci-bot, committed by Gitee

!165 Update use tutorials

Merge pull request !165 from TingWang/update-use-tutorials
......@@ -266,27 +266,7 @@ Run your scripts on ModelArts. For details, see [Using MindSpore on Cloud](https
### Inference Phase
Models trained on the Ascend 910 AI processor can be used for inference on different hardware platforms.
1. Inference on the Ascend 910 AI processor
Similar to the `estimator.evaluate()` API of TensorFlow, MindSpore provides the `model.eval()` API for model validation. You only need to pass in the validation dataset; it is processed in the same way as the training dataset. For the complete code, see <https://gitee.com/mindspore/mindspore/blob/master/example/resnet50_cifar10/eval.py>.
```python
res = model.eval(dataset)
```
2. Inference on the Ascend 310 AI processor
1. Export the ONNX or GEIR model by referring to the [Export GEIR Model and ONNX Model](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#geironnx).
2. To perform inference in the cloud environment, see the [Ascend 910 training and Ascend 310 inference samples](https://support.huaweicloud.com/bestpractice-modelarts/modelarts_10_0026.html). For the bare-metal environment (in contrast to the cloud environment, that is, an environment with an Ascend 310 AI processor deployed locally), see the documentation of the Ascend 310 AI processor software package.
3. Inference on a GPU
1. Export the ONNX model by referring to the [Export GEIR Model and ONNX Model](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#geironnx).
2. Perform inference on the NVIDIA GPU by referring to [TensorRT backend for ONNX](https://github.com/onnx/onnx-tensorrt).
Models trained on the Ascend 910 AI processor can be used for inference on different hardware platforms. Refer to the [Multi-platform Inference Tutorial](https://www.mindspore.cn/tutorial/en/master/use/multi_platform_inference.html) for detailed steps.
## Examples
......
......@@ -19,7 +19,9 @@ MindSpore Tutorials
:caption: Use
use/data_preparation/data_preparation
use/defining_the_network
use/saving_and_loading_model_parameters
use/multi_platform_inference
.. toctree::
:glob:
......
Defining the Network
====================
.. toctree::
:maxdepth: 1
Network List <https://www.mindspore.cn/docs/en/master/network_list.html>
\ No newline at end of file
# Multi-platform Inference
<!-- TOC -->
- [Multi-platform Inference](#multi-platform-inference)
- [Overview](#overview)
- [On-Device Inference](#on-device-inference)
<!-- /TOC -->
<a href="https://gitee.com/mindspore/docs/blob/master/tutorials/source_en/advanced_use/multi_platform_inference.md" target="_blank"><img src="../_static/logo_source.png"></a>
## Overview
Models trained with MindSpore can be used for inference on different hardware platforms. This document describes the inference process on each platform.
1. Inference on the Ascend 910 AI processor
MindSpore provides the `model.eval()` API for model validation. You only need to pass in the validation dataset; it is processed in the same way as the training dataset. For the complete code, see <https://gitee.com/mindspore/mindspore/blob/master/example/resnet50_cifar10/eval.py>.
```python
res = model.eval(dataset)
```
In addition, the `model.predict()` interface can be used for inference. For detailed usage, see the API documentation.
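A minimal sketch of calling `model.predict()`, assuming `model` is the same `mindspore.train.Model` instance used for training and evaluation, and that the network takes a single image batch as input (the input shape below is an assumption):
```python
import numpy as np
from mindspore import Tensor

# Hypothetical input batch; adjust the shape to match your network.
input_data = Tensor(np.random.rand(1, 3, 224, 224).astype(np.float32))

# `model` is the `mindspore.train.Model` instance built for training/evaluation.
output = model.predict(input_data)
```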
2. Inference on the Ascend 310 AI processor
1. Export the ONNX or GEIR model by referring to the [Export GEIR Model and ONNX Model](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#geironnx).
2. To perform inference in the cloud environment, see the [Ascend 910 training and Ascend 310 inference samples](https://support.huaweicloud.com/bestpractice-modelarts/modelarts_10_0026.html). For the bare-metal environment (in contrast to the cloud environment, that is, an environment with an Ascend 310 AI processor deployed locally), see the documentation of the Ascend 310 AI processor software package.
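As a rough illustration of the export step in item 1 above, the following sketch assumes a trained network `net` with its checkpoint parameters already loaded and a 1x3x224x224 input shape; see the linked tutorial for the authoritative usage:
```python
import numpy as np
from mindspore import Tensor
from mindspore.train.serialization import export

# The dummy input only defines the graph's input shape for export.
input_data = Tensor(np.ones([1, 3, 224, 224]).astype(np.float32))
export(net, input_data, file_name='resnet50.onnx', file_format='ONNX')
# For the Ascend 310, export a GEIR model instead: file_format='GEIR'
```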
3. Inference on a GPU
1. Export the ONNX model by referring to the [Export GEIR Model and ONNX Model](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#geironnx).
2. Perform inference on the NVIDIA GPU by referring to [TensorRT backend for ONNX](https://github.com/onnx/onnx-tensorrt).
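The onnx-tensorrt project documents a Python backend API for running an exported model; a minimal sketch (the model file name and input shape are assumptions):
```python
import numpy as np
import onnx
import onnx_tensorrt.backend as backend

# Load the ONNX model exported from MindSpore and build a TensorRT engine.
model = onnx.load('resnet50.onnx')
engine = backend.prepare(model, device='CUDA:0')

# Run inference on a dummy input; replace with real preprocessed data.
input_data = np.random.rand(1, 3, 224, 224).astype(np.float32)
output = engine.run(input_data)[0]
print(output.shape)
```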
## On-Device Inference
On-device inference is based on MindSpore Predict. For details, see the [On-Device Inference Tutorial](https://www.mindspore.cn/tutorial/en/master/advanced_use/on_device_inference.html).
......@@ -261,27 +261,7 @@ MindSpore differs to some extent from TensorFlow and PyTorch in how the network structure is organized
### Inference Phase
Models trained on the Ascend 910 AI processor can be used for inference on different hardware platforms.
1. Inference on the Ascend 910 AI processor
Similar to TensorFlow's `estimator.evaluate()` API, MindSpore provides the `model.eval()` API for model validation. You only need to pass in the validation dataset; it is processed in the same way as the training dataset. For the complete code, see <https://gitee.com/mindspore/mindspore/blob/master/example/resnet50_cifar10/eval.py>.
```python
res = model.eval(dataset)
```
2. Inference on the Ascend 310 AI processor
1. Export the ONNX or GEIR model by referring to [Model Export](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#geironnx).
2. To perform inference in the cloud environment, see the [Ascend 910 training and Ascend 310 inference samples](https://support.huaweicloud.com/bestpractice-modelarts/modelarts_10_0026.html). For the bare-metal environment (in contrast to the cloud environment, that is, an environment with an Ascend 310 AI processor deployed locally), see the documentation of the Ascend 310 AI processor software package.
3. Inference on a GPU
1. Export the ONNX model by referring to [Model Export](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#geironnx).
2. Perform inference on the NVIDIA GPU by referring to [TensorRT backend for ONNX](https://github.com/onnx/onnx-tensorrt).
Models trained on the Ascend 910 AI processor can be used for inference on different hardware platforms. For detailed steps, see the [Multi-platform Inference Tutorial](https://www.mindspore.cn/tutorial/zh-CN/master/use/multi_platform_inference.html).
## Examples
......@@ -289,4 +269,4 @@ MindSpore differs to some extent from TensorFlow and PyTorch in how the network structure is organized
2. [Examples of loading common datasets](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/loading_the_datasets.html)
3. [Pre-trained models in the Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/mindspore/model_zoo)
\ No newline at end of file
3. [Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/mindspore/model_zoo)
\ No newline at end of file
......@@ -22,6 +22,7 @@ MindSpore Tutorials
use/data_preparation/data_preparation
use/defining_the_network
use/saving_and_loading_model_parameters
use/multi_platform_inference
.. toctree::
:glob:
......
......@@ -4,4 +4,5 @@
.. toctree::
:maxdepth: 1
Network List <https://www.mindspore.cn/docs/zh-CN/master/network_list.html>
custom_operator
\ No newline at end of file
# Multi-platform Inference
<!-- TOC -->
- [Multi-platform Inference](#multi-platform-inference)
- [Overview](#overview)
- [Inference on the Ascend 910 AI Processor](#inference-on-the-ascend-910-ai-processor)
- [Inference on the Ascend 310 AI Processor](#inference-on-the-ascend-310-ai-processor)
- [Inference on a GPU](#inference-on-a-gpu)
- [On-Device Inference](#on-device-inference)
<!-- /TOC -->
<a href="https://gitee.com/mindspore/docs/blob/master/tutorials/source_zh_cn/advanced_use/multi_platform_inference.md" target="_blank"><img src="../_static/logo_source.png"></a>
## Overview
Models trained with MindSpore can be used for inference on different hardware platforms. This document describes the inference process on each platform.
## Inference on the Ascend 910 AI Processor
MindSpore provides the `model.eval()` API for model validation. You only need to pass in the validation dataset; it is processed in the same way as the training dataset. For the complete code, see <https://gitee.com/mindspore/mindspore/blob/master/example/resnet50_cifar10/eval.py>.
```python
res = model.eval(dataset)
```
In addition, the `model.predict()` interface can be used for inference. For detailed usage, see the API documentation.
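A minimal `model.predict()` sketch, mirroring the English page above; `model` is assumed to be the `mindspore.train.Model` instance used for training, and the input shape is an assumption:
```python
import numpy as np
from mindspore import Tensor

# Hypothetical input batch; adjust the shape to match your network.
input_data = Tensor(np.random.rand(1, 3, 224, 224).astype(np.float32))
output = model.predict(input_data)
```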
## Inference on the Ascend 310 AI Processor
1. Export the ONNX or GEIR model by referring to [Model Export](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#geironnx).
2. To perform inference in the cloud environment, see the [Ascend 910 training and Ascend 310 inference samples](https://support.huaweicloud.com/bestpractice-modelarts/modelarts_10_0026.html). For the bare-metal environment (in contrast to the cloud environment, that is, an environment with an Ascend 310 AI processor deployed locally), see the documentation of the Ascend 310 AI processor software package.
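As in the English page above, a rough sketch of the export step in item 1, assuming a trained network `net` with its checkpoint parameters already loaded:
```python
import numpy as np
from mindspore import Tensor
from mindspore.train.serialization import export

# Dummy input that defines the graph's input shape for export (assumed 1x3x224x224).
input_data = Tensor(np.ones([1, 3, 224, 224]).astype(np.float32))
export(net, input_data, file_name='resnet50.onnx', file_format='ONNX')  # or file_format='GEIR'
```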
## Inference on a GPU
1. Export the ONNX model by referring to [Model Export](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#geironnx).
2. Perform inference on the NVIDIA GPU by referring to [TensorRT backend for ONNX](https://github.com/onnx/onnx-tensorrt).
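As in the English page above, a minimal sketch of the documented onnx-tensorrt backend API (the file name and input shape are assumptions):
```python
import numpy as np
import onnx
import onnx_tensorrt.backend as backend

model = onnx.load('resnet50.onnx')                # ONNX model exported from MindSpore
engine = backend.prepare(model, device='CUDA:0')  # build a TensorRT engine
input_data = np.random.rand(1, 3, 224, 224).astype(np.float32)
output = engine.run(input_data)[0]
```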
## On-Device Inference
On-device inference uses the MindSpore Predict inference engine. For details, see the [On-Device Inference Tutorial](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/on_device_inference.html).
\ No newline at end of file