> - `Android_NDK` needs to be installed only when the Arm version is compiled. Skip this dependency when the x86_64 version is compiled.
> - To install and use `Android_NDK`, you need to configure the `ANDROID_NDK` environment variable, for example: `export ANDROID_NDK=${NDK_PATH}/android-ndk-r20b`.
- Compilation dependencies (additional dependencies required by the MindSpore Lite model conversion tool, which are needed only when the x86_64 version is compiled)
MindSpore Lite provides multiple compilation options. You can select different compilation options as required.
| Parameter | Parameter Description | Value Range | Mandatory or Not |
| -------- | ----- | ---- | ---- |
| -d | If this parameter is set, the debug version is compiled. Otherwise, the release version is compiled. | - | No |
| -i | If this parameter is set, incremental compilation is performed. Otherwise, full compilation is performed. | - | No |
| -j[n] | Sets the number of threads used during compilation. If this parameter is not set, 8 threads are used by default. | - | No |
| -I | Selects an applicable architecture. | arm64, arm32, or x86_64 | Yes |
| -e | Selects the backend operators for the Arm architecture. If `gpu` is set, the built-in GPU operators of the framework are compiled as well. | gpu | No |
| -h | Displays the compilation help information. | - | No |
> When the `-I` parameter changes, that is, the applicable architecture is changed, the `-i` parameter cannot be used for incremental compilation.
## Output Description
After the compilation is complete, go to the `mindspore/output` directory of the source code to view the file generated after compilation. The file is named `mindspore-lite-{version}-{function}-{OS}.tar.gz`. After decompression, the tool package named `mindspore-lite-{version}-{function}-{OS}` can be obtained.
> version: version of the output, consistent with the MindSpore version.
>
> function: function of the output. `convert` indicates the output of the conversion tool and `runtime` indicates the output of the inference framework.
Generally, the compiled output includes the following files. The selected architecture determines which of them are generated.
> For the x86 architecture, you can obtain the output of the conversion tool; for the Arm 64-bit architecture, you can obtain the output of the `arm64-cpu` inference framework. If `-e gpu` is added, you can obtain the output of the `arm64-gpu` inference framework. The output for the Arm 32-bit architecture is similar.
| Directory | Description | x86_64 | Arm 64-bit | Arm 32-bit |
| --- | --- | --- | --- | --- |
| include | Inference framework header file | No | Yes | Yes |
| time_profiler | Time consumption analysis tool at the model network layer | Yes | Yes | Yes |
| converter | Model conversion tool | Yes | No | No |
| third_party | Header file and library of the third-party library | Yes | Yes | Yes |
The contents of `third_party` vary depending on the architecture as follows:
- x86_64: `protobuf` (Protobuf dynamic library).
- arm: `flatbuffers` (FlatBuffers header file).
> Before running the tools in the `converter`, `benchmark`, or `time_profiler` directory, you need to configure environment variables by adding the paths of the MindSpore Lite and Protobuf dynamic libraries to the system dynamic library search path. The following uses the 0.7.0-beta version as an example: `export LD_LIBRARY_PATH=./mindspore-lite-0.7.0/lib:./mindspore-lite-0.7.0/third_party/protobuf/lib:${LD_LIBRARY_PATH}`.
## Compilation Example
First, download source code from the MindSpore code repository.
Then, run the following commands in the root directory of the source code to compile MindSpore Lite of different versions:
- Debug version of the x86_64 architecture:
```bash
bash build.sh -I x86_64 -d
```
- Release version of the x86_64 architecture, with the number of threads set:
```bash
bash build.sh -I x86_64 -j32
```
- Release version of the Arm 64-bit architecture in incremental compilation mode, with the number of threads set:
```bash
bash build.sh -I arm64 -i -j32
```
- Release version of the Arm 64-bit architecture, with the built-in GPU operator compiled:
```bash
bash build.sh -I arm64 -e gpu
```
> - The `build.sh` script runs the `git clone` command to obtain the code of the third-party dependency libraries. Ensure that the network settings of Git are correct.
Take the 0.7.0-beta version as an example. After the release version of the x86_64 architecture is compiled, go to the `mindspore/output` directory and run the following decompression command to obtain the output files `include`, `lib`, `benchmark`, `time_profiler`, `converter`, and `third_party`:
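For example, assuming the x86_64 package is named `mindspore-lite-0.7.0-converter-ubuntu.tar.gz` (the exact name depends on your build), the decompression command might look like this:

```bash
tar -xvf mindspore-lite-0.7.0-converter-ubuntu.tar.gz
```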
MindSpore Lite provides a tool for offline model conversion. It supports conversion of multiple types of models and visualization of converted models. The converted models can be used for inference. The command line parameters contain multiple personalized options, providing a convenient conversion method for users.
Currently, the following input formats are supported: MindSpore, TensorFlow Lite, Caffe, and ONNX.
## Environment Preparation
To use the MindSpore Lite model conversion tool, you need to prepare the environment as follows:
- Compilation: Install basic and additional compilation dependencies and perform compilation. The compilation version is x86_64. The code of the model conversion tool is stored in the `mindspore/lite/tools/converter` directory of the MindSpore source code. For details about the compilation operations, see the [Environment Requirements](https://www.mindspore.cn/lite/docs/zh-CN/master/deploy.html#id2) and [Compilation Example](https://www.mindspore.cn/lite/docs/zh-CN/master/deploy.html#id5) in the deployment document.
- Run: Obtain the `converter` tool and configure environment variables by referring to [Output Description](https://www.mindspore.cn/lite/docs/zh-CN/master/deploy.html#id4) in the deployment document.
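A minimal sketch of this step, assuming the decompressed 0.7.0-beta x86_64 package described above sits in the current directory (paths are illustrative):

```bash
# Make the MindSpore Lite and Protobuf dynamic libraries visible to the loader
export LD_LIBRARY_PATH=./mindspore-lite-0.7.0/lib:./mindspore-lite-0.7.0/third_party/protobuf/lib:${LD_LIBRARY_PATH}
# Check that the conversion tool can be invoked
./mindspore-lite-0.7.0/converter/converter_lite --help
```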
## Parameter Description
You can use `./converter_lite` to complete the conversion. In addition, you can set multiple parameters as required.
You can enter `./converter_lite --help` to obtain help information in real time.
The following describes the parameters in detail.
| Parameter | Mandatory or Not | Parameter Description | Value Range | Default Value |
| -------- | ------- | ----- | --- | ---- |
| `--help` | No | Prints all help information. | - | - |
| `--fmk=<FMK>` | Yes | Original format of the input model. | MS, CAFFE, TFLITE, or ONNX | - |
| `--modelFile=<MODELFILE>` | Yes | Path of the input model. | - | - |
| `--outputFile=<OUTPUTFILE>` | Yes | Path of the output model. (If the path does not exist, a directory will be automatically created.) The suffix `.ms` can be automatically generated. | - | - |
| `--weightFile=<WEIGHTFILE>` | Yes (for Caffe models only) | Path of the weight file of the input model. | - | - |
| `--quantType=<QUANTTYPE>` | No | Sets the training type of the model. | PostTraining: quantization after training <br>AwareTraining: perceptual quantization | - |
> - The parameter name and parameter value are separated by an equal sign (=) and no space is allowed between them.
> - A Caffe model consists of two files: the model structure `*.prototxt`, which corresponds to the `--modelFile` parameter, and the model weights `*.caffemodel`, which correspond to the `--weightFile` parameter.
## Model Visualization
The model visualization tool provides a method for checking the model conversion result. You can generate a `*.json` file from the converted model and compare it with the original model to check the conversion result.
TODO: This function is under development.
## Example
First, in the root directory of the source code, run the following command to perform compilation. For details, see `deploy.md`.
```bash
bash build.sh -I x86_64
```
> Currently, the model conversion tool supports only the x86_64 architecture.
The following describes how to use the conversion command by using several common examples.
- Take the Caffe model LeNet as an example. Run the following conversion command:
In this example, a Caffe model is used, so both the model structure file and the model weight file are required, together with the `fmk` and `outputFile` parameters.
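A conversion command along these lines could be used (the file names `lenet.prototxt` and `lenet.caffemodel` are assumed here for illustration):

```bash
./converter_lite --fmk=CAFFE --modelFile=lenet.prototxt --weightFile=lenet.caffemodel --outputFile=lenet
```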
The output is as follows:
```
INFO [converter/converter.cc:190] Runconverter] CONVERTER RESULT: SUCCESS!
```
This indicates that the Caffe model is successfully converted into the MindSpore Lite model and the new file `lenet.ms` is generated.
- The following uses the MindSpore, TensorFlow Lite, ONNX, and perceptual quantization models as examples to describe how to run the conversion command.
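For example, the commands might look as follows (the input file names `model.mindir`, `model.tflite`, `model_quant.tflite`, and `model.onnx` are illustrative):

```bash
# MindSpore model
./converter_lite --fmk=MS --modelFile=model.mindir --outputFile=model

# TensorFlow Lite model
./converter_lite --fmk=TFLITE --modelFile=model.tflite --outputFile=model

# ONNX model
./converter_lite --fmk=ONNX --modelFile=model.onnx --outputFile=model

# TensorFlow Lite perceptual (aware-training) quantization model
./converter_lite --fmk=TFLITE --modelFile=model_quant.tflite --outputFile=model --quantType=AwareTraining
```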
In the preceding scenarios, the following information is displayed, indicating that the conversion is successful. In addition, the target file `model.ms` is obtained.
```
INFO [converter/converter.cc:190] Runconverter] CONVERTER RESULT: SUCCESS!
```
You can use the model visualization tool to visually check the converted MindSpore Lite model. This function is under development.
The compilation procedure is as follows:

```bash
bash build.sh -I arm32
```

3. Go to the `mindspore/output` directory of the source code to obtain the compilation result. Unzip `mindspore-lite-0.7.0-converter-ubuntu.tar.gz` to get the result `mindspore-lite-0.7.0` after building.
To perform on-device model inference using MindSpore, perform the following steps:

3. In the `mindspore/output/mindspore-lite-0.7.0/converter` directory, call the MindSpore conversion tool `converter_lite` to convert the model file (`.mindir`) into an on-device inference model file (`.ms`).