From 7fa5a6d8c8c4603aafcdb38c89ffb0207c66df36 Mon Sep 17 00:00:00 2001
From: Ting Wang
Date: Tue, 26 May 2020 15:16:05 +0800
Subject: [PATCH] update use tutorials

Signed-off-by: Ting Wang
---
 .../advanced_use/network_migration.md         | 22 +---------
 tutorials/source_en/index.rst                 |  2 +
 .../source_en/use/defining_the_network.rst    |  7 +++
 .../source_en/use/multi_platform_inference.md | 41 +++++++++++++++++
 .../advanced_use/network_migration.md         | 24 +---------
 tutorials/source_zh_cn/index.rst              |  1 +
 .../source_zh_cn/use/defining_the_network.rst |  1 +
 .../use/multi_platform_inference.md           | 44 +++++++++++++++++++
 8 files changed, 99 insertions(+), 43 deletions(-)
 create mode 100644 tutorials/source_en/use/defining_the_network.rst
 create mode 100644 tutorials/source_en/use/multi_platform_inference.md
 create mode 100644 tutorials/source_zh_cn/use/multi_platform_inference.md

diff --git a/tutorials/source_en/advanced_use/network_migration.md b/tutorials/source_en/advanced_use/network_migration.md
index 78a6aa74..131e852c 100644
--- a/tutorials/source_en/advanced_use/network_migration.md
+++ b/tutorials/source_en/advanced_use/network_migration.md
@@ -266,27 +266,7 @@ Run your scripts on ModelArts. For details, see [Using MindSpore on Cloud](https
 
 ### Inference Phase
 
-Models trained on the Ascend 910 AI processor can be used for inference on different hardware platforms.
-
-1. Inference on the Ascend 910 AI processor
-
-   Similar to the `estimator.evaluate()` API of TensorFlow, MindSpore provides the `model.eval()` API for model validation. You only need to import the validation dataset. The processing method of the validation dataset is the same as that of the training dataset. For details about the complete code, see .
-
-   ```python
-   res = model.eval(dataset)
-   ```
-
-2. Inference on the Ascend 310 AI processor
-
-   1. Export the ONNX or GEIR model by referring to the [Export GEIR Model and ONNX Model](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#geironnx).
-
-   2. For performing inference in the cloud environment, see the [Ascend 910 training and Ascend 310 inference samples](https://support.huaweicloud.com/bestpractice-modelarts/modelarts_10_0026.html). For details about the bare-metal environment (compared with the cloud environment where the Ascend 310 AI processor is deployed locally), see the description document of the Ascend 310 AI processor software package.
-
-3. Inference on a GPU
-
-   1. Export the ONNX model by referring to the [Export GEIR Model and ONNX Model](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#geironnx).
-
-   2. Perform inference on the NVIDIA GPU by referring to [TensorRT backend for ONNX](https://github.com/onnx/onnx-tensorrt).
+Models trained on the Ascend 910 AI processor can be used for inference on different hardware platforms. For detailed steps, see the [Multi-platform Inference Tutorial](https://www.mindspore.cn/tutorial/en/master/use/multi_platform_inference.html).
 
 ## Examples
 
diff --git a/tutorials/source_en/index.rst b/tutorials/source_en/index.rst
index ee273800..8328339b 100644
--- a/tutorials/source_en/index.rst
+++ b/tutorials/source_en/index.rst
@@ -19,7 +19,9 @@ MindSpore Tutorials
    :caption: Use
 
    use/data_preparation/data_preparation
+   use/defining_the_network
    use/saving_and_loading_model_parameters
+   use/multi_platform_inference
 
 .. toctree::
    :glob:
diff --git a/tutorials/source_en/use/defining_the_network.rst b/tutorials/source_en/use/defining_the_network.rst
new file mode 100644
index 00000000..dacc6c83
--- /dev/null
+++ b/tutorials/source_en/use/defining_the_network.rst
@@ -0,0 +1,7 @@
+Defining the Network
+====================
+
+.. toctree::
+   :maxdepth: 1
+
+   Network List
\ No newline at end of file
diff --git a/tutorials/source_en/use/multi_platform_inference.md b/tutorials/source_en/use/multi_platform_inference.md
new file mode 100644
index 00000000..eb311cca
--- /dev/null
+++ b/tutorials/source_en/use/multi_platform_inference.md
@@ -0,0 +1,41 @@
+# Multi-platform Inference
+
+- [Multi-platform Inference](#multi-platform-inference)
+  - [Overview](#overview)
+  - [On-Device Inference](#on-device-inference)
+
+## Overview
+
+Models trained with MindSpore can be used for inference on different hardware platforms. This document introduces the inference process on each platform.
+
+1. Inference on the Ascend 910 AI processor
+
+   MindSpore provides the `model.eval()` API for model validation. You only need to pass in the validation dataset, which is processed in the same way as the training dataset. For details about the complete code, see .
+
+   ```python
+   res = model.eval(dataset)
+   ```
+
+   In addition, the `model.predict()` API can be used for inference, as shown in the sketch after this list. For detailed usage, see the API description.
+
+2. Inference on the Ascend 310 AI processor
+
+   1. Export the ONNX or GEIR model by referring to [Export GEIR Model and ONNX Model](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#geironnx).
+
+   2. To perform inference in the cloud environment, see the [Ascend 910 training and Ascend 310 inference samples](https://support.huaweicloud.com/bestpractice-modelarts/modelarts_10_0026.html). For the bare-metal environment (that is, with the Ascend 310 AI processor deployed locally rather than in the cloud), see the documentation of the Ascend 310 AI processor software package.
+
+3. Inference on a GPU
+
+   1. Export the ONNX model by referring to [Export GEIR Model and ONNX Model](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#geironnx).
+
+   2. Perform inference on an NVIDIA GPU by referring to [TensorRT backend for ONNX](https://github.com/onnx/onnx-tensorrt).
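+
+Below is a minimal, self-contained sketch of the `model.eval()`/`model.predict()` flow described above. It is illustrative only: `SimpleNet`, the random data, the checkpoint file name, and the exact import paths are assumptions, and module locations can differ between MindSpore versions.
+
+```python
+import numpy as np
+import mindspore.dataset as ds
+import mindspore.nn as nn
+from mindspore import Tensor
+from mindspore.train import Model
+from mindspore.train.serialization import load_checkpoint, load_param_into_net
+
+
+class SimpleNet(nn.Cell):
+    """Stand-in network; replace with your trained nn.Cell."""
+    def __init__(self):
+        super().__init__()
+        self.fc = nn.Dense(32, 10)
+
+    def construct(self, x):
+        return self.fc(x)
+
+
+net = SimpleNet()
+# For a real model, load the trained weights first (placeholder file name):
+# load_param_into_net(net, load_checkpoint("simple_net.ckpt"))
+
+loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
+model = Model(net, loss_fn=loss, metrics={"accuracy"})
+
+# The validation dataset is built the same way as the training dataset;
+# random NumPy data keeps this sketch runnable on its own.
+data = np.random.rand(64, 32).astype(np.float32)
+label = np.random.randint(0, 10, size=(64,)).astype(np.int32)
+dataset = ds.NumpySlicesDataset({"data": data, "label": label}).batch(32)
+
+res = model.eval(dataset, dataset_sink_mode=False)  # a metrics dict, e.g. {'accuracy': ...}
+
+# model.predict is a plain forward pass on Tensors and returns the logits.
+logits = model.predict(Tensor(data[:1]))
+```
+
+The same `Model` object serves both paths: `eval` feeds a dataset through the attached metrics, while `predict` simply runs the network on raw tensors.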
+
+## On-Device Inference
+
+On-device inference is based on MindSpore Predict. For details, see the [On-Device Inference Tutorial](https://www.mindspore.cn/tutorial/en/master/advanced_use/on_device_inference.html).
diff --git a/tutorials/source_zh_cn/advanced_use/network_migration.md b/tutorials/source_zh_cn/advanced_use/network_migration.md
index 742f3888..f1d8aa4f 100644
--- a/tutorials/source_zh_cn/advanced_use/network_migration.md
+++ b/tutorials/source_zh_cn/advanced_use/network_migration.md
@@ -261,27 +261,7 @@ MindSpore differs to some extent from TensorFlow and PyTorch in how the network structure is organized
 
 ### Inference Phase
 
-Models trained on the Ascend 910 AI processor can be used for inference on different hardware platforms.
-
-1. Inference on the Ascend 910 AI processor
-
-   Similar to TensorFlow's `estimator.evaluate()` API, MindSpore provides the `model.eval()` API for model validation: you only need to pass in the validation dataset, which is processed in the same way as the training dataset. For the complete code, see .
-
-   ```python
-   res = model.eval(dataset)
-   ```
-
-2. Inference on the Ascend 310 AI processor
-
-   1. Generate the ONNX or GEIR model by referring to [Model Export](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#geironnx).
-
-   2. In the cloud environment, perform inference by referring to the [Ascend 910 training and Ascend 310 inference sample](https://support.huaweicloud.com/bestpractice-modelarts/modelarts_10_0026.html). In the bare-metal environment (that is, with a local Ascend 310 AI processor, as opposed to the cloud environment), see the documentation of the Ascend 310 AI processor software package.
-
-3. Inference on a GPU
-
-   1. Generate the ONNX model by referring to [Model Export](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#geironnx).
-
-   2. Perform inference on an NVIDIA GPU by referring to [TensorRT backend for ONNX](https://github.com/onnx/onnx-tensorrt).
+Models trained on the Ascend 910 AI processor can be used for inference on different hardware platforms. For detailed steps, see the [Multi-platform Inference Tutorial](https://www.mindspore.cn/tutorial/zh-CN/master/use/multi_platform_inference.html).
 
 ## Examples
 
@@ -289,4 +269,4 @@
 
 2. [Common Dataset Loading Examples](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/loading_the_datasets.html)
 
-3. [Pre-trained Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/mindspore/model_zoo)
\ No newline at end of file
+3. [Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/mindspore/model_zoo)
\ No newline at end of file
diff --git a/tutorials/source_zh_cn/index.rst b/tutorials/source_zh_cn/index.rst
index 69d0866d..7c8769ab 100644
--- a/tutorials/source_zh_cn/index.rst
+++ b/tutorials/source_zh_cn/index.rst
@@ -22,6 +22,7 @@ MindSpore Tutorials
    use/data_preparation/data_preparation
    use/defining_the_network
    use/saving_and_loading_model_parameters
+   use/multi_platform_inference
 
 .. toctree::
    :glob:
diff --git a/tutorials/source_zh_cn/use/defining_the_network.rst b/tutorials/source_zh_cn/use/defining_the_network.rst
index 4f875363..d6d2bfba 100644
--- a/tutorials/source_zh_cn/use/defining_the_network.rst
+++ b/tutorials/source_zh_cn/use/defining_the_network.rst
@@ -4,4 +4,5 @@
 .. toctree::
    :maxdepth: 1
 
+   Network Support
    custom_operator
\ No newline at end of file
diff --git a/tutorials/source_zh_cn/use/multi_platform_inference.md b/tutorials/source_zh_cn/use/multi_platform_inference.md
new file mode 100644
index 00000000..f1b8f5a2
--- /dev/null
+++ b/tutorials/source_zh_cn/use/multi_platform_inference.md
@@ -0,0 +1,44 @@
+# Multi-platform Inference
+
+- [Multi-platform Inference](#multi-platform-inference)
+  - [Overview](#overview)
+  - [Inference on the Ascend 910 AI Processor](#inference-on-the-ascend-910-ai-processor)
+  - [Inference on the Ascend 310 AI Processor](#inference-on-the-ascend-310-ai-processor)
+  - [Inference on a GPU](#inference-on-a-gpu)
+  - [On-Device Inference](#on-device-inference)
+
+## Overview
+
+Models trained with MindSpore can be used for inference on different hardware platforms. This document introduces the inference process on each platform.
+
+## Inference on the Ascend 910 AI Processor
+
+MindSpore provides the `model.eval()` API for model validation: you only need to pass in the validation dataset, which is processed in the same way as the training dataset. For the complete code, see .
+
+```python
+res = model.eval(dataset)
+```
+
+In addition, the `model.predict()` API can be used for inference. For detailed usage, see the API description.
+
+## Inference on the Ascend 310 AI Processor
+
+1. Generate the ONNX or GEIR model by referring to [Model Export](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#geironnx).
+
+2. In the cloud environment, perform inference by referring to the [Ascend 910 training and Ascend 310 inference sample](https://support.huaweicloud.com/bestpractice-modelarts/modelarts_10_0026.html). In the bare-metal environment (that is, with a local Ascend 310 AI processor, as opposed to the cloud environment), see the documentation of the Ascend 310 AI processor software package.
+
+## Inference on a GPU
+
+1. Generate the ONNX model by referring to [Model Export](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#geironnx). A sketch of the export step follows this section.
+
+2. Perform inference on an NVIDIA GPU by referring to [TensorRT backend for ONNX](https://github.com/onnx/onnx-tensorrt).
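+
+Both the Ascend 310 and GPU paths start from an exported model, so the export step can be sketched as follows. It is illustrative only: `SimpleNet`, the input shape, and the file names are placeholders, and the accepted `file_format` values depend on the MindSpore version (for example, `"GEIR"` was renamed `"AIR"` in later releases).
+
+```python
+import numpy as np
+import mindspore.nn as nn
+from mindspore import Tensor
+from mindspore.train.serialization import export
+
+
+class SimpleNet(nn.Cell):
+    """Stand-in network; replace with your trained nn.Cell."""
+    def __init__(self):
+        super().__init__()
+        self.fc = nn.Dense(32, 10)
+
+    def construct(self, x):
+        return self.fc(x)
+
+
+net = SimpleNet()
+
+# `export` traces the network with a representative input, so the Tensor
+# only needs the right shape and dtype, not real data.
+dummy_input = Tensor(np.random.rand(1, 32).astype(np.float32))
+
+export(net, dummy_input, file_name="simple_net.onnx", file_format="ONNX")
+export(net, dummy_input, file_name="simple_net.geir", file_format="GEIR")
+```
+
+The exported ONNX file is what the TensorRT backend consumes on a GPU, while the GEIR file targets the Ascend 310 toolchain.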
+
+## On-Device Inference
+
+On-device inference requires the MindSpore Predict inference engine. For details, see the [On-Device Inference Tutorial](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/on_device_inference.html).
\ No newline at end of file
--
GitLab