# Build

<!-- TOC -->

- [Build](#build)
    - [Linux Environment Compilation](#linux-environment-compilation)
        - [Environment Requirements](#environment-requirements)
        - [Compilation Options](#compilation-options)
        - [Compilation Example](#compilation-example)        
        - [Output Description](#output-description)
            - [Description of Converter's Directory Structure](#description-of-converter-directory-structure)
            - [Description of Runtime and Other tools' Directory Structure](#description-of-runtime-and-other-tools-directory-structure)         
        
<!-- /TOC -->

<a href="https://gitee.com/mindspore/docs/blob/r0.7/lite/tutorials/source_en/build.md" target="_blank"><img src="./_static/logo_source.png"></a>

This chapter introduces how to quickly compile MindSpore Lite, which includes the following modules:

| Module | Support Platform | Description |
| --- | ---- | ---- |
| converter | Linux | Model Conversion Tool |
| runtime | Linux, Android | Model Inference Framework |
| benchmark | Linux, Android | Benchmarking Tool |
| time_profiler | Linux, Android | Performance Analysis Tool |

## Linux Environment Compilation

### Environment Requirements

- The compilation environment supports Linux x86_64 only. Ubuntu 18.04.02 LTS is recommended.

- Compilation dependencies of runtime, benchmark and time_profiler:
  - [CMake](https://cmake.org/download/) >= 3.14.1
  - [GCC](https://gcc.gnu.org/releases.html) >= 7.3.0
  - [Android_NDK r20b](https://dl.google.com/android/repository/android-ndk-r20b-linux-x86_64.zip)
  - [Git](https://git-scm.com/downloads) >= 2.28.0

- Compilation dependencies of converter:
  - [CMake](https://cmake.org/download/) >= 3.14.1
  - [GCC](https://gcc.gnu.org/releases.html) >= 7.3.0
  - [Android_NDK r20b](https://dl.google.com/android/repository/android-ndk-r20b-linux-x86_64.zip)
  - [Git](https://git-scm.com/downloads) >= 2.28.0
  - [Autoconf](http://ftp.gnu.org/gnu/autoconf/) >= 2.69
  - [Libtool](https://www.gnu.org/software/libtool/) >= 2.4.6
  - [LibreSSL](http://www.libressl.org/) >= 3.1.3
  - [Automake](https://www.gnu.org/software/automake/) >= 1.11.6
  - [Libevent](https://libevent.org) >= 2.0
  - [M4](https://www.gnu.org/software/m4/m4.html) >= 1.4.18
  - [OpenSSL](https://www.openssl.org/) >= 1.1.1
  
> - To install and use `Android_NDK`, you need to configure the `ANDROID_NDK` environment variable, for example: `export ANDROID_NDK=${NDK_PATH}/android-ndk-r20b`.
> - The `build.sh` script runs `git clone` to obtain the third-party dependency libraries. Ensure that your Git network settings are correct before compiling.
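Before starting a build, it can be useful to verify that the installed tools satisfy the minimum versions listed above. The following is a minimal sketch; the `version_ge` helper is our own illustration, not part of MindSpore, and it relies on GNU `sort -V` for version comparison:

```shell
#!/bin/bash
# version_ge A B: succeeds when version string A >= version string B.
# Uses GNU sort -V (version sort) to pick the larger of the two.
version_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | tail -n1)" = "$1" ]
}

# Check CMake against the minimum version listed above (3.14.1).
cmake_ver=$(cmake --version 2>/dev/null | head -n1 | awk '{print $3}')
if version_ge "${cmake_ver:-0}" "3.14.1"; then
  echo "CMake ${cmake_ver} OK"
else
  echo "CMake >= 3.14.1 required" >&2
fi

# Check GCC against the minimum version listed above (7.3.0).
gcc_ver=$({ gcc -dumpfullversion || gcc -dumpversion; } 2>/dev/null || true)
if version_ge "${gcc_ver:-0}" "7.3.0"; then
  echo "GCC ${gcc_ver} OK"
else
  echo "GCC >= 7.3.0 required" >&2
fi
```

The same pattern extends to the other dependencies (Git, Autoconf, and so on) by substituting the corresponding `--version` command and minimum version.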

### Compilation Options

MindSpore Lite provides a one-click compilation script, `build.sh`, located in the root directory of MindSpore. The script can compile both training and inference code. The following describes the compilation options of MindSpore Lite.

| Parameter  |  Parameter Description  | Value Range | Mandatory or Not |
| -------- | ----- | ---- | ---- |
| **-I** | **Selects an applicable architecture. This option is required when compiling MindSpore Lite.** | **arm64, arm32, or x86_64** | **Yes** |
| -d | If this parameter is set, the debug version is compiled. Otherwise, the release version is compiled. | None | No |
| -i | If this parameter is set, incremental compilation is performed. Otherwise, full compilation is performed. | None | No |
| -j[n] | Sets the number of threads used during compilation. The default is 8. | Integer | No |
| -e | On Arm architectures, selects a backend; setting it to `gpu` additionally compiles the framework's built-in GPU operators. | gpu | No |
| -h | Displays the compilation help information. | None | No |

> When the `-I` parameter changes (for example, from `-I x86_64` to `-I arm64`), adding `-i` does not take effect; a full compilation is performed instead.

### Compilation Example

First, download source code from the MindSpore code repository.

```bash
git clone https://gitee.com/mindspore/mindspore.git
```

Then, run the following commands in the root directory of the source code to compile MindSpore Lite of different versions:

- Debug version of the x86_64 architecture:
    ```bash
    bash build.sh -I x86_64 -d
    ```

- Release version of the x86_64 architecture, with the number of threads set:
    ```bash
    bash build.sh -I x86_64 -j32
    ```

- Release version of the Arm 64-bit architecture in incremental compilation mode, with the number of threads set:
    ```bash
    bash build.sh -I arm64 -i -j32
    ```

- Release version of the Arm 64-bit architecture in incremental compilation mode, with the built-in GPU operator compiled:
    ```bash
    bash build.sh -I arm64 -e gpu
    ```

### Output Description

After the compilation is complete, go to the `mindspore/output` directory of the source code to view the files generated by the compilation. They are divided into two parts:

- `mindspore-lite-{version}-converter-{os}.tar.gz`: contains the model conversion tool.
- `mindspore-lite-{version}-runtime-{os}-{device}.tar.gz`: contains the model inference framework, benchmarking tool, and performance analysis tool.

> version: version of the output, consistent with that of MindSpore.
>
> device: currently either cpu (built-in CPU operators) or gpu (built-in CPU and GPU operators).
>
> os: operating system on which the output will be deployed.

Execute the decompression command to obtain the compiled output:

```bash
tar -xvf mindspore-lite-{version}-converter-{os}.tar.gz
tar -xvf mindspore-lite-{version}-runtime-{os}-{device}.tar.gz
```
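When both packages (or several builds) are present in `mindspore/output`, they can all be extracted with one loop. This is a sketch, assuming you have already changed into the output directory:

```shell
#!/bin/bash
# Extract every MindSpore Lite package in the current directory.
# Assumes the current directory is mindspore/output after a successful build.
for pkg in mindspore-lite-*.tar.gz; do
  [ -e "$pkg" ] || continue   # no matching package: skip the literal glob
  tar -xvf "$pkg"
done
```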
#### Description of Converter's Directory Structure

The conversion tool is available only when the `-I x86_64` compilation option is used. Its package includes the following parts:

```
|
├── mindspore-lite-{version}-converter-{os} 
│   └── converter # Model conversion tool
│   └── third_party # Header files and libraries of third party libraries
│       ├── protobuf # Dynamic library of Protobuf

```

#### Description of Runtime and Other tools' Directory Structure

The inference framework can be obtained under `-I x86_64`, `-I arm64` and `-I arm32` compilation options, and the content includes the following parts:

- When the compilation option is `-I x86_64`:
    ```
    |
    ├── mindspore-lite-{version}-runtime-x86-cpu 
    │   └── benchmark # Benchmarking Tool
    │   └── lib # Inference framework dynamic library
│       ├── libmindspore-lite.so # Dynamic library of inference framework in MindSpore Lite
    │   └── third_party # Header files and libraries of third party libraries
    │       ├── flatbuffers # Header files of FlatBuffers
    │   └── include # Header files of inference framework
    │   └── time_profiler # Model network layer time-consuming analysis tool
    
    ```
  
- When the compilation option is `-I arm64`:  
    ```
    |
    ├── mindspore-lite-{version}-runtime-arm64-cpu
    │   └── benchmark # Benchmarking Tool
    │   └── lib # Inference framework dynamic library
│       ├── libmindspore-lite.so # Dynamic library of inference framework in MindSpore Lite
    │       ├── liboptimize.so # Operator performance optimization library in MindSpore Lite  
    │   └── third_party # Header files and libraries of third party libraries
    │       ├── flatbuffers # Header files of FlatBuffers
    │   └── include # Header files of inference framework
    │   └── time_profiler # Model network layer time-consuming analysis tool
      
    ```

- When the compilation option is `-I arm32`:  
    ```
    |
├── mindspore-lite-{version}-runtime-arm32-cpu
    │   └── benchmark # Benchmarking Tool
    │   └── lib # Inference framework dynamic library
│       ├── libmindspore-lite.so # Dynamic library of inference framework in MindSpore Lite
    │   └── third_party # Header files and libraries of third party libraries
    │       ├── flatbuffers # Header files of FlatBuffers
    │   └── include # Header files of inference framework
    │   └── time_profiler # Model network layer time-consuming analysis tool
      
    ```

> 1. `liboptimize.so` exists only in the runtime-arm64 output package and is used only on ARMv8.2 CPUs that support fp16.
> 2. Compiling for ARM64 produces the arm64-cpu inference framework output by default. Adding `-e gpu` produces the arm64-gpu output instead, whose package is named `mindspore-lite-{version}-runtime-arm64-gpu.tar.gz`. The same applies to ARM32.
> 3. Before running the tools in the converter, benchmark, or time_profiler directory, you need to configure environment variables so that the system can locate the dynamic libraries of MindSpore Lite and Protobuf. Taking the 0.7.0-beta output as an example: configure converter with `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/protobuf/lib:${LD_LIBRARY_PATH}`, and configure benchmark and time_profiler with `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-runtime-x86-cpu/lib:${LD_LIBRARY_PATH}`.
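The environment settings from note 3 can be collected in a small setup script to source before using the tools. This is a sketch based on the 0.7.0-beta example paths above; adjust the version and os parts to match your own build output:

```shell
#!/bin/bash
# Prepend the MindSpore Lite output libraries to LD_LIBRARY_PATH so that
# converter, benchmark and time_profiler can find their dynamic libraries.
# The 0.7.0 / ubuntu / x86-cpu path components follow the example in the
# note above and will differ for other versions and targets.
LITE_OUTPUT=./output

# For converter: the Protobuf dynamic library.
export LD_LIBRARY_PATH=${LITE_OUTPUT}/mindspore-lite-0.7.0-converter-ubuntu/third_party/protobuf/lib:${LD_LIBRARY_PATH}

# For benchmark and time_profiler: the MindSpore Lite dynamic library.
export LD_LIBRARY_PATH=${LITE_OUTPUT}/mindspore-lite-0.7.0-runtime-x86-cpu/lib:${LD_LIBRARY_PATH}

echo "LD_LIBRARY_PATH=${LD_LIBRARY_PATH}"
```

Source it with `. ./setup_env.sh` (rather than executing it) so the exports take effect in your current shell.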