diff --git a/tutorials/source_en/use/multi_platform_inference.md b/tutorials/source_en/use/multi_platform_inference.md
index 16b0e99c4fcf9d7ff8c5d7671b16a84761e2acf8..9ccd33368e54eb022a014172d1bb81fbc3ba101c 100644
--- a/tutorials/source_en/use/multi_platform_inference.md
+++ b/tutorials/source_en/use/multi_platform_inference.md
@@ -14,7 +14,6 @@
     - [Inference on a CPU](#inference-on-a-cpu)
         - [Inference Using a Checkpoint File](#inference-using-a-checkpoint-file-2)
         - [Inference Using an ONNX File](#inference-using-an-onnx-file-1)
-    - [On-Device Inference](#on-device-inference)
 
 <!-- /TOC -->
 
@@ -150,7 +149,3 @@ Similar to the inference on a GPU, the following steps are required:
 
 2. Perform inference on a CPU by referring to the runtime or SDK document. For details about how to use the ONNX Runtime, see the [ONNX Runtime document](https://github.com/microsoft/onnxruntime).
 
-## On-Device Inference
-
-MindSpore Predict is an inference engine for on-device inference. For details, see [On-Device Inference](https://www.mindspore.cn/tutorial/en/r0.6/advanced_use/on_device_inference.html).
-
diff --git a/tutorials/source_zh_cn/use/multi_platform_inference.md b/tutorials/source_zh_cn/use/multi_platform_inference.md
index 6c54bcb0c9450a194b0365424fe50a7ab29ef383..0145bea6bba49a20fb6526dcd05fd9e5bc9a6b32 100644
--- a/tutorials/source_zh_cn/use/multi_platform_inference.md
+++ b/tutorials/source_zh_cn/use/multi_platform_inference.md
@@ -14,7 +14,6 @@
     - [CPU上推理](#cpu上推理)
         - [使用checkpoint格式文件推理](#使用checkpoint格式文件推理-2)
         - [使用ONNX格式文件推理](#使用onnx格式文件推理-1)
-    - [端侧推理](#端侧推理)
 
 <!-- /TOC -->
 
@@ -145,7 +144,3 @@ Ascend 310 AI处理器上搭载了ACL框架，他支持OM格式，而OM格式需
 1. 在训练平台上生成ONNX格式模型，具体步骤请参考[模型导出-导出GEIR模型和ONNX模型](https://www.mindspore.cn/tutorial/zh-CN/r0.6/use/saving_and_loading_model_parameters.html#geironnx)。
 
 2. 在CPU上进行推理，具体可以参考推理使用runtime/SDK的文档。如使用ONNX Runtime，可以参考[ONNX Runtime说明文档](https://github.com/microsoft/onnxruntime)。
-
-## 端侧推理
-
-端侧推理需使用MindSpore Predict推理引擎，详细操作请参考[端侧推理教程](https://www.mindspore.cn/tutorial/zh-CN/r0.6/advanced_use/on_device_inference.html)。