MindSpore Lite is a lightweight deep neural network inference engine that provides inference for models on the device side. Models can be trained by MindSpore or imported from third-party frameworks such as TensorFlow Lite, ONNX, and Caffe. This tutorial describes how to compile MindSpore Lite and use it to run inference on your own model.
![](./images/on_device_inference_frame.jpg)
Figure 1 On-device inference framework diagram
The MindSpore Lite framework consists of the frontend, IR, backend, Lite RT, and Micro components.
- Frontend: It is used for model generation. Users can use the model building interface to build models, or convert third-party models into MindSpore models.
- IR: It includes the tensor definition, operator prototype definition, and graph definition of MindSpore. The backend optimization is based on the IR.
- Backend: Graph optimization and high-level optimization are independent of hardware (for example, operator fusion and constant folding), while low-level optimization is hardware-dependent. Quantization includes weight quantization, activation quantization, and other post-training quantization methods.
- Lite RT: The inference runtime, in which session provides the external interface, kernel registry is the operator registry, scheduler is the operator heterogeneous scheduler, and executor is the operator executor. Lite RT shares with Micro the underlying infrastructure layers such as the operator library, memory allocation, runtime thread pool, and parallel primitives.
- Micro: Code Gen generates .c files according to the model, and infrastructure such as the underlying operator library is shared with Lite RT.
## Compilation Method
You need to compile MindSpore Lite by yourself. This section describes how to perform cross compilation in the Ubuntu environment.
The environment requirements are as follows (a quick way to check them is sketched after the list):
- Hardware requirements
- Memory: 1 GB or above
- Hard disk space: 10 GB or above
- System requirements
    - Only Linux is supported: Ubuntu 18.04.02 LTS
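A minimal sketch of how these requirements can be checked on the Ubuntu host before starting the build, using standard system utilities:

```bash
# Check the Ubuntu release, available memory, and free disk space.
lsb_release -d   # expect Ubuntu 18.04.x LTS
free -h          # memory should be 1 GB or above
df -h .          # free disk space should be 10 GB or above
```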
1. Download the MindSpore source code to the local device.
2. Run the following command in the root directory of the source code to compile MindSpore Lite.
- Compile converter tool:
```bash
bash build.sh -I x86_64
```
- Compile inference platform:
Set path of ANDROID_NDK:
```bash
export ANDROID_NDK=${NDK_PATH}/android-ndk-r20b
```
According to the target phone, you can choose `arm64`:
```bash
bash build.sh -I arm64
```
or `arm32`:
```bash
bash build.sh -I arm32
```
3. Go to the `mindspore/output` directory of the source code to obtain the compilation result. Unzip `mindspore-lite-0.7.0-converter-ubuntu.tar.gz` to get the `mindspore-lite-0.7.0` output directory, for example with the command shown below.
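The package is a standard gzip-compressed tarball, so a plain `tar` invocation is enough to unpack it (the archive name below is the one produced by the build above; adjust it if your build outputs a different version string):

```bash
# Unpack the converter package produced by the build.
# The extracted directory is mindspore-lite-0.7.0.
tar -xzf mindspore-lite-0.7.0-converter-ubuntu.tar.gz
```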
When MindSpore is used to perform model inference in the APK project of an app, preprocessing of the input is required before model inference. For example, before an image is converted into the tensor format required by MindSpore inference, the image needs to be resized. After MindSpore completes model inference, postprocess the model inference result and send the processed output to the app.
This section describes how to use MindSpore to perform model inference. The setup of an APK project and pre- and post-processing of model inference are not described here.
To perform on-device model inference using MindSpore, perform the following steps.
### Generating an On-Device Model File
1. After training is complete, load the generated checkpoint file to the defined network.
2. Call the `export` API to export the trained network to a model file in MindIR format (`.mindir`).
3. In the `mindspore/output/mindspore-lite-0.7.0/converter` directory, call the MindSpore conversion tool `converter_lite` to convert the model file (`.mindir`) to an on-device inference model file (`.ms`), as shown in the example below.
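A minimal sketch of the conversion command, assuming the exported file is named `lenet.mindir` and that this build of `converter_lite` accepts the `--fmk`, `--modelFile`, and `--outputFile` options (run `./converter_lite --help` to confirm the options available in your version):

```bash
# Convert the exported MindIR model into an on-device .ms model.
# The converted model is written to lenet.ms.
./converter_lite --fmk=MINDIR --modelFile=lenet.mindir --outputFile=lenet
```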
If the conversion succeeds, output similar to the following is displayed:
```
INFO [converter/converter.cc:146] Runconverter] CONVERTER RESULT: SUCCESS!
```
This means that the model has been successfully converted to the MindSpore on-device inference model.
### Implementing On-Device Inference
Use the `.ms` model file and image data as input to create a session and implement inference on the device.
![](./images/side_infer_process.jpg)
Take the `lenet.ms` model as an example to implement on-device inference. The specific steps are as follows:
1. Load the `.ms` model file into a memory buffer. The `ReadFile` function needs to be implemented by users; refer to the [C++ tutorial](http://www.cplusplus.com/doc/tutorial/files/).
2. Call the `CreateSession` API to get a session.
3. Call the `CompileGraph` API of the previously created `Session` and pass in the model.
4. Call the `GetInputs` API to get the input `tensor`, then set the graph input `data`; the `data` will be used to perform model inference.
5. Call the `RunGraph` API in the `Session` to perform inference.
6. Call the `GetOutputs` API to obtain the output.
The sample code is as follows:
```CPP
#include <iostream>
#include <fstream>
#include "schema/model_generated.h"
#include "include/model.h"
#include "include/lite_session.h"
#include "include/errorcode.h"
#include "ir/dtype/type_id.h"
char *ReadFile(const char *file, size_t *size) {
  if (file == nullptr) {
    std::cerr << "file is nullptr" << std::endl;
    return nullptr;
  }
  if (size == nullptr) {
    std::cerr << "size is nullptr" << std::endl;
    return nullptr;
  }
  // Open the model file in binary mode.
  std::ifstream ifs(file, std::ifstream::binary);
  if (!ifs.good()) {
    std::cerr << "file: " << file << " does not exist" << std::endl;
    return nullptr;
  }
  // Determine the file size and read the whole file into a buffer
  // owned by the caller.
  ifs.seekg(0, std::ios::end);
  *size = ifs.tellg();
  ifs.seekg(0, std::ios::beg);
  auto *buf = new char[*size];
  ifs.read(buf, *size);
  ifs.close();
  return buf;
}
```
## On-Device Inference
MindSpore Lite is an inference engine for on-device inference. For details, see [Export MINDIR Model](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#export-mindir-model) and [On-Device Inference](https://www.mindspore.cn/lite/en).