Commit 89a1fcf0 authored by liuxiao78

add english version

Parent b1fbf0ee
# Deployment
<a href="https://gitee.com/mindspore/docs/blob/master/lite/tutorials/source_en/deploy.md" target="_blank"><img src="../_static/logo_source.png"></a>
<!-- TOC -->

- [Deployment](#deployment)
    - [Environment Requirements](#environment-requirements)
    - [Compilation Options](#compilation-options)
    - [Output Description](#output-description)
    - [Compilation Example](#compilation-example)

<!-- /TOC -->
This document describes how to quickly install MindSpore Lite on the Ubuntu system.
## Environment Requirements
- The compilation environment supports Linux x86_64 only. Ubuntu 18.04.02 LTS is recommended.
- Compilation dependencies (basics):
- [CMake](https://cmake.org/download/) >= 3.14.1
- [GCC](https://gcc.gnu.org/releases.html) >= 7.3.0
- [Python](https://www.python.org/) >= 3.7
- [Android_NDK r20b](https://dl.google.com/android/repository/android-ndk-r20b-linux-x86_64.zip)
> - `Android_NDK` needs to be installed only when the Arm version is compiled. Skip this dependency when the x86_64 version is compiled.
> - To install and use `Android_NDK`, you need to configure environment variables. The command example is `export ANDROID_NDK={$NDK_PATH}/android-ndk-r20b`.
- Compilation dependencies (additional dependencies required by the MindSpore Lite model conversion tool, needed only when compiling the x86_64 version):
- [Autoconf](http://ftp.gnu.org/gnu/autoconf/) >= 2.69
- [Libtool](https://www.gnu.org/software/libtool/) >= 2.4.6
- [LibreSSL](http://www.libressl.org/) >= 3.1.3
- [Automake](https://www.gnu.org/software/automake/) >= 1.11.6
- [Libevent](https://libevent.org) >= 2.0
- [M4](https://www.gnu.org/software/m4/m4.html) >= 1.4.18
- [OpenSSL](https://www.openssl.org/) >= 1.1.1
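How you install these depends on your distribution. The following is a minimal sketch for Ubuntu, assuming the apt package names below map onto the dependencies listed above (they are not taken from the official instructions, and the installed versions still need checking against the minimums):

```bash
# Basic build dependencies; verify versions against the minimums above
# (for example, Ubuntu 18.04's default cmake may be older than 3.14.1).
sudo apt-get update
sudo apt-get install -y cmake gcc g++ python3 autoconf libtool automake m4 libssl-dev libevent-dev

# Android NDK r20b is only needed for Arm builds. $HOME/tools is an
# arbitrary example location, as in the note above.
mkdir -p $HOME/tools && cd $HOME/tools
wget https://dl.google.com/android/repository/android-ndk-r20b-linux-x86_64.zip
unzip android-ndk-r20b-linux-x86_64.zip
export ANDROID_NDK=$HOME/tools/android-ndk-r20b
```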
## Compilation Options
MindSpore Lite provides multiple compilation options. You can select different compilation options as required.
| Parameter | Parameter Description | Value Range | Mandatory or Not |
| -------- | ----- | ---- | ---- |
| -d | If this parameter is set, the debug version is compiled. Otherwise, the release version is compiled. | - | No |
| -i | If this parameter is set, incremental compilation is performed. Otherwise, full compilation is performed. | - | No |
| -j[n] | Sets the number of threads used during compilation. The default value is 8. | - | No |
| -I | Selects an applicable architecture. | arm64, arm32, or x86_64 | Yes |
| -e | Sets the backend operator for the Arm architecture. If set to `gpu`, the framework's built-in GPU operators are compiled as well. | gpu | No |
| -h | Displays the compilation help information. | - | No |
> When the `-I` parameter changes, that is, the applicable architecture is changed, the `-i` parameter cannot be used for incremental compilation.
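For instance, the note above translates into command sequences like the following sketch (thread counts are arbitrary):

```bash
# Full build for x86_64, then an incremental rebuild of the same architecture.
bash build.sh -I x86_64 -j8
bash build.sh -I x86_64 -i

# After switching architectures, omit -i and let the build run in full.
bash build.sh -I arm64 -j8
```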
## Output Description
After the compilation is complete, go to the `mindspore/output` directory of the source code to view the file generated after compilation. The file is named `mindspore-lite-{version}-{function}-{OS}.tar.gz`. After decompression, the tool package named `mindspore-lite-{version}-{function}-{OS}` can be obtained.
> version: version of the output, consistent with that of MindSpore.
>
> function: function of the output. `convert` indicates the output of the conversion tool and `runtime` indicates the output of the inference framework.
>
> OS: OS on which the output will be deployed.

Run the following command to decompress the package:
```bash
tar -xvf mindspore-lite-{version}-{function}-{OS}.tar.gz
```
Generally, the compiled output files include the following types. The architecture selection affects the types of output files.
> For the x86 architecture, you can obtain the output of the conversion tool; for the Arm 64-bit architecture, you can obtain the output of the `arm64-cpu` inference framework, or of `arm64-gpu` if `-e gpu` is added. The output for Arm 32-bit is organized in the same way as for Arm 64-bit.
| Directory | Description | x86_64 | Arm 64-bit | Arm 32-bit |
| --- | --- | --- | --- | --- |
| include | Inference framework header file | No | Yes | Yes |
| lib | Inference framework dynamic library | Yes | Yes | Yes |
| benchmark | Benchmark test tool | Yes | Yes | Yes |
| time_profiler | Time consumption analysis tool at the model network layer | Yes | Yes | Yes |
| converter | Model conversion tool | Yes | No | No |
| third_party | Header file and library of the third-party library | Yes | Yes | Yes |
The contents of `third_party` vary depending on the architecture as follows:
- x86_64: `protobuf` (Protobuf dynamic library).
- arm: `flatbuffers` (FlatBuffers header file).
> Before running the tools in the `converter`, `benchmark`, or `time_profiler` directory, you need to configure environment variables and set the paths of the dynamic libraries of MindSpore Lite and Protobuf to the paths of the system dynamic libraries. The following uses the 0.7.0-beta version as an example: `export LD_LIBRARY_PATH=./mindspore-lite-0.7.0/lib:./mindspore-lite-0.7.0/third_party/protobuf/lib:${LD_LIBRARY_PATH}`.
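A quick way to confirm that the environment variable is effective is to check that the tools' dynamic dependencies resolve. The sketch below assumes the benchmark binary is named `benchmark` inside the `benchmark` directory of the unpacked 0.7.0 package; the exact paths may differ in your build:

```bash
export LD_LIBRARY_PATH=./mindspore-lite-0.7.0/lib:./mindspore-lite-0.7.0/third_party/protobuf/lib:${LD_LIBRARY_PATH}

# Every library line should show a resolved path; "not found" means the
# search path above does not match your unpacked directory layout.
ldd ./mindspore-lite-0.7.0/benchmark/benchmark
```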
## Compilation Example
First, download source code from the MindSpore code repository.
```bash
git clone https://gitee.com/mindspore/mindspore.git
```
Then, run the following commands in the root directory of the source code to compile MindSpore Lite of different versions:
- Debug version of the x86_64 architecture:
```bash
bash build.sh -I x86_64 -d
```
- Release version of the x86_64 architecture, with the number of threads set:
```bash
bash build.sh -I x86_64 -j32
```
- Release version of the Arm 64-bit architecture in incremental compilation mode, with the number of threads set:
```bash
bash build.sh -I arm64 -i -j32
```
- Release version of the Arm 64-bit architecture in incremental compilation mode, with the built-in GPU operator compiled:
```bash
bash build.sh -I arm64 -e gpu
```
> The `build.sh` script runs `git clone` to obtain the code of the third-party dependency libraries. Ensure in advance that the network settings of Git are correct and available.
Take the 0.7.0-beta version as an example. After the release version of the x86_64 architecture is compiled, go to the `mindspore/output` directory and run the following decompression command to obtain the output files `include`, `lib`, `benchmark`, `time_profiler`, `converter`, and `third_party`:
```bash
tar -xvf mindspore-lite-0.7.0-converter-ubuntu.tar.gz
tar -xvf mindspore-lite-0.7.0-runtime-x86-cpu.tar.gz
```
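You can then sanity-check the unpacked layout against the table in the Output Description section. A sketch follows; the unpacked directory names are assumed to match the package names, so adjust them to whatever `tar` actually produced:

```bash
# Expect subdirectories such as lib, benchmark, time_profiler, converter
# and third_party, as described in the output table above.
ls mindspore-lite-0.7.0-converter-ubuntu
ls mindspore-lite-0.7.0-runtime-x86-cpu
```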
# Model Conversion Tool
<!-- TOC -->

- [Model Conversion Tool](#model-conversion-tool)
    - [Overview](#overview)
    - [Environment Preparation](#environment-preparation)
    - [Parameter Description](#parameter-description)
    - [Model Visualization](#model-visualization)
    - [Example](#example)

<!-- /TOC -->
<a href="https://gitee.com/mindspore/docs/blob/master/lite/tutorials/source_en/use/converter_tool.md" target="_blank"><img src="../_static/logo_source.png"></a>
## Overview
MindSpore Lite provides a tool for offline model conversion. It supports conversion of multiple types of models as well as visualization of the converted models, which can then be used for inference. The command-line tool offers a variety of options, giving users a convenient way to perform conversion.
Currently, the following input formats are supported: MindSpore, TensorFlow Lite, Caffe, and ONNX.
## Environment Preparation
To use the MindSpore Lite model conversion tool, you need to prepare the environment as follows:
- Compilation: Install the basic and additional compilation dependencies and perform the compilation. The compilation version is x86_64. The code of the model conversion tool is stored in the `mindspore/lite/tools/converter` directory of the MindSpore source code. For details about the compilation operations, see the [Environment Requirements](https://www.mindspore.cn/lite/docs/zh-CN/master/deploy.html#id2) and [Compilation Example](https://www.mindspore.cn/lite/docs/zh-CN/master/deploy.html#id5) in the deployment document.
- Run: Obtain the `converter` tool and configure environment variables by referring to [Output Description](https://www.mindspore.cn/lite/docs/zh-CN/master/deploy.html#id4) in the deployment document.
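Putting the two steps together, an end-to-end preparation might look like the following sketch, assuming the 0.7.0-beta x86_64 output; the package and directory names follow the deployment document and may differ in your build:

```bash
# Build the x86_64 version, which produces the converter package.
bash build.sh -I x86_64
cd output

# Unpack the converter package and expose the required dynamic libraries.
tar -xvf mindspore-lite-0.7.0-converter-ubuntu.tar.gz
export LD_LIBRARY_PATH=./mindspore-lite-0.7.0/lib:./mindspore-lite-0.7.0/third_party/protobuf/lib:${LD_LIBRARY_PATH}
```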
## Parameter Description
You can use `./converter_lite` to complete the conversion, setting multiple parameters as required.
You can enter `./converter_lite --help` to obtain help information at any time.
The following describes the parameters in detail.
| Parameter | Mandatory or Not | Parameter Description | Value Range | Default Value |
| -------- | ------- | ----- | --- | ---- |
| `--help` | No | Prints all help information. | - | - |
| `--fmk=<FMK>` | Yes | Original format of the input model. | MS, CAFFE, TFLITE, or ONNX | - |
| `--modelFile=<MODELFILE>` | Yes | Path of the input model. | - | - |
| `--outputFile=<OUTPUTFILE>` | Yes | Path of the output model. (If the path does not exist, a directory is automatically created.) The suffix `.ms` is appended automatically. | - | - |
| `--weightFile=<WEIGHTFILE>` | Yes (for Caffe models only) | Path of the weight file of the input model. | - | - |
| `--quantType=<QUANTTYPE>` | No | Sets the quantization type of the model. | PostTraining: quantization after training <br>AwareTraining: perceptual quantization | - |
> - The parameter name and parameter value are separated by an equal sign (=) and no space is allowed between them.
> - The Caffe model is divided into two files: the model structure `*.prototxt`, corresponding to the `--modelFile` parameter, and the model weights `*.caffemodel`, corresponding to the `--weightFile` parameter.
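To make the first note concrete, here is a well-formed invocation next to a malformed one (file names are placeholders):

```bash
# Correct: each name=value pair is joined by '=' with no surrounding spaces.
./converter_lite --fmk=TFLITE --modelFile=model.tflite --outputFile=model

# Incorrect: spaces around '=' split one option into several arguments.
# ./converter_lite --fmk = TFLITE --modelFile = model.tflite --outputFile = model
```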
## Model Visualization
The model visualization tool provides a method for checking the model conversion result: you can generate a `*.json` file from the converted model and compare it with the original model to evaluate the conversion effect.
TODO: This function is under development now.
## Example
First, in the root directory of the source code, run the following command to perform compilation. For details, see `deploy.md`.
```bash
bash build.sh -I x86_64
```
> Currently, the model conversion tool supports only the x86_64 architecture.
The following describes how to use the conversion command by using several common examples.
- Take the Caffe model LeNet as an example. Run the following conversion command:
```bash
./converter_lite --fmk=CAFFE --modelFile=lenet.prototxt --weightFile=lenet.caffemodel --outputFile=lenet
```
In this example, the Caffe model is used, so both the model structure file and the model weight file are required. The `fmk` type and the output path `outputFile` must also be specified.
The output is as follows:
```
INFO [converter/converter.cc:190] Runconverter] CONVERTER RESULT: SUCCESS!
```
This indicates that the Caffe model is successfully converted into the MindSpore Lite model and the new file `lenet.ms` is generated.
- The following uses the MindSpore, TensorFlow Lite, ONNX, and perceptual quantization models as examples to describe how to run the conversion command.
- MindSpore model `model.mindir`
```bash
./converter_lite --fmk=MS --modelFile=model.mindir --outputFile=model
```
- TensorFlow Lite model `model.tflite`
```bash
./converter_lite --fmk=TFLITE --modelFile=model.tflite --outputFile=model
```
- ONNX model `model.onnx`
```bash
./converter_lite --fmk=ONNX --modelFile=model.onnx --outputFile=model
```
- TensorFlow Lite perceptual quantization model `model_quant.tflite`
```bash
./converter_lite --fmk=TFLITE --modelFile=model_quant.tflite --outputFile=model --quantType=AwareTraining
```
In the preceding scenarios, the following information is displayed, indicating that the conversion is successful. In addition, the target file `model.ms` is obtained.
```
INFO [converter/converter.cc:190] Runconverter] CONVERTER RESULT: SUCCESS!
```
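Because the flags are uniform across inputs of the same format, batch conversion composes naturally in a shell loop; a sketch with hypothetical file names:

```bash
# Convert every TensorFlow Lite model in the current directory; each
# foo.tflite produces a corresponding foo.ms.
for m in *.tflite; do
  ./converter_lite --fmk=TFLITE --modelFile=${m} --outputFile=${m%.tflite}
done
```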
You can use the model visualization tool to visually check the converted MindSpore Lite model. This function is under development.
# Deployment
<!-- TOC -->
- [Deployment](#deployment)
@@ -12,7 +10,9 @@
<!-- /TOC -->
<a href="https://gitee.com/mindspore/docs/blob/master/lite/tutorials/source_zh_cn/deploy.md" target="_blank"><img src="./_static/logo_source.png"></a>
This document describes how to quickly install MindSpore Lite on the Ubuntu system.
## Environment Requirements
@@ -54,10 +54,12 @@ MindSpore Lite provides multiple compilation options. You can select different compilation options as required.
## Output Description
After the compilation is complete, go to the `mindspore/output` directory of the source code to view the file generated after compilation, named `mindspore-lite-{version}-{function}-{OS}.tar.gz`. After decompression, you can obtain the tool package named `mindspore-lite-{version}-{function}-{OS}`.
> version: version of the output, consistent with that of MindSpore.
>
> function: function of the output. `convert` indicates the output of the conversion tool and `runtime` indicates the output of the inference framework.
>
> OS: OS on which the output will be deployed.
```bash
tar -xvf mindspore-lite-{version}-{function}-{OS}.tar.gz
```

@@ -83,7 +85,7 @@
- x86_64: `protobuf` (Protobuf dynamic library).
- ARM: `flatbuffers` (FlatBuffers header file).
> Before running the tools in the `converter`, `benchmark`, or `time_profiler` directory, you need to configure environment variables and add the paths of the MindSpore Lite and Protobuf dynamic libraries to the system dynamic library search path. Take the 0.7.0-beta version as an example: `export LD_LIBRARY_PATH=./mindspore-lite-0.7.0/lib:./mindspore-lite-0.7.0/third_party/protobuf/lib:${LD_LIBRARY_PATH}`.
## Compilation Example
@@ -117,8 +119,9 @@ git clone https://gitee.com/mindspore/mindspore.git
> The `build.sh` script runs `git clone` to obtain the code of the third-party dependency libraries. Ensure in advance that the network settings of Git are correct and available.
Take the 0.7.0-beta version as an example. After the release version of the x86_64 architecture is compiled, go to the `mindspore/output` directory and run the following decompression commands to obtain the output files `include`, `lib`, `benchmark`, `time_profiler`, `converter`, and `third_party`:
```bash
tar -xvf mindspore-lite-0.7.0-converter-ubuntu.tar.gz
tar -xvf mindspore-lite-0.7.0-runtime-x86-cpu.tar.gz
```
@@ -90,10 +90,10 @@ The compilation procedure is as follows:

```bash
bash build.sh -I arm32
```
3. Go to the `mindspore/output` directory of the source code to obtain the compilation result. Unzip `mindspore-lite-0.7.0-converter-ubuntu.tar.gz` to obtain the build result `mindspore-lite-0.7.0`.
```bash
tar -xvf mindspore-lite-0.7.0-converter-ubuntu.tar.gz
```
## Use of On-Device Inference
@@ -171,7 +171,7 @@ To perform on-device model inference using MindSpore, perform the following steps.

```python
else:
print("checkpoint file does not exist.")
```
3. In the `mindspore/output/mindspore-lite-0.7.0/converter` directory, call the MindSpore conversion tool `converter_lite` to convert the model file (`.mindir`) to an on-device inference model file (`.ms`).
```
./converter_lite --fmk=MS --modelFile=./lenet.mindir --outputFile=lenet
```