MindSpore Lite is a lightweight deep neural network inference engine that provides inference for models on the device side. Models can be trained by MindSpore or imported from third-party frameworks such as TensorFlow Lite, ONNX, and Caffe. This tutorial describes how to compile MindSpore Lite and use it with MindSpore's own models.
![](./images/on_device_inference_frame.jpg)
...
...
The environment requirements are as follows:
- Hard disk space: 10 GB or above
- System requirements
    - The system is limited to Linux: Ubuntu 18.04.02 LTS
2. Run the following command in the root directory of the source code to compile MindSpore Lite.
```bash
bash build.sh -I x86_64
```
3. Go to the `mindspore/output` directory of the source code to obtain the compilation result, and unzip `MSLite-0.6.0-linux_x86_64.tar.gz` to get the build output.
```bash
tar xvf MSLite-0.6.0-linux_x86_64.tar.gz
```
## Use of On-Device Inference
...
...
else:
print("checkpoint file does not exist.")
```
3. In the `mindspore/output/MSLite-0.6.0-linux_x86_64/converter` directory, call the MindSpore conversion tool `converter_lite` to convert the model file (`.pb`) into an on-device inference model file (`.ms`).
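   For example, a sketch of the conversion command, assuming the `--fmk`, `--modelFile`, and `--outputFile` options and a `lenet.pb` file exported in the previous step; check `./converter_lite --help` for the exact options of your build:
   ```bash
   # Hypothetical invocation: convert the exported lenet.pb into lenet.ms
   # (option names are assumptions; verify them with ./converter_lite --help).
   ./converter_lite --fmk=MS --modelFile=lenet.pb --outputFile=lenet
   ```
   On success, the resulting `.ms` file is the model used for on-device inference in the next section.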
Use the `.ms` model file and image data as input to create a session and implement on-device inference. The detailed process is shown below.
![](./images/side_infer_process.jpg)
Take the `lenet.ms` model as an example. The specific steps to implement on-device inference are as follows:
1. Load the `.ms` model file into a memory buffer. The `ReadFile` function needs to be implemented by users; see the [C++ tutorial](http://www.cplusplus.com/doc/tutorial/files/) on file I/O.
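   A minimal sketch of such a helper, using only standard C++ file I/O; the `ReadFile` name and signature here are illustrative, not part of the MindSpore Lite API:
   ```cpp
   #include <fstream>
   #include <iostream>
   #include <memory>
   #include <string>

   // Illustrative helper: read the whole model file at `path` into a heap
   // buffer and report its length through `size`. Returns nullptr on failure.
   std::unique_ptr<char[]> ReadFile(const std::string &path, size_t *size) {
     std::ifstream ifs(path, std::ifstream::binary);
     if (!ifs.good()) {
       std::cerr << "File " << path << " does not exist." << std::endl;
       return nullptr;
     }
     ifs.seekg(0, std::ios::end);
     *size = static_cast<size_t>(ifs.tellg());
     ifs.seekg(0, std::ios::beg);
     auto buffer = std::make_unique<char[]>(*size);
     if (!ifs.read(buffer.get(), static_cast<std::streamsize>(*size))) {
       std::cerr << "Failed to read " << path << std::endl;
       return nullptr;
     }
     return buffer;
   }
   ```
   The returned buffer and its size can then be passed to the model-loading call when creating the inference session.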