- [Inference on the Ascend 910 AI processor](#inference-on-the-ascend-910-ai-processor)
    - [Inference Using a Checkpoint File](#inference-using-a-checkpoint-file)
- [Inference on the Ascend 310 AI processor](#inference-on-the-ascend-310-ai-processor)
    - [Inference Using a Checkpoint File](#inference-using-a-checkpoint-file-1)
    - [Inference Using an ONNX or GEIR File](#inference-using-an-onnx-or-geir-file)
- [Inference on a GPU](#inference-on-a-gpu)
    - [Inference Using a Checkpoint File](#inference-using-a-checkpoint-file-2)
    - [Inference Using an ONNX File](#inference-using-an-onnx-file)
- [Inference on a CPU](#inference-on-a-cpu)
    - [Inference Using a Checkpoint File](#inference-using-a-checkpoint-file-3)
    - [Inference Using an ONNX File](#inference-using-an-onnx-file-1)
- [On-Device Inference](#on-device-inference)
...
...
MindSpore supports the following inference scenarios based on the hardware platform.
## Inference on the Ascend 310 AI processor
### Inference Using a Checkpoint File
The inference procedure is the same as that on the Ascend 910 AI processor.
### Inference Using an ONNX or GEIR File
The Ascend 310 AI processor is equipped with the ACL framework and supports the OM format, so a model in ONNX or GEIR format must first be converted to OM format. For inference on the Ascend 310 AI processor, perform the following steps: