# Quick Start (Lite)
# Implementing an Image Classification Application
<a href="https://gitee.com/mindspore/docs/blob/r0.7/lite/tutorials/source_en/quick_start/quick_start_lite.md" target="_blank"><img src="../_static/logo_source.png"></a>
<!-- TOC -->
- [Implementing an Image Classification Application](#implementing-an-image-classification-application)
    - [Overview](#overview)
    - [Selecting a Model](#selecting-a-model)
    - [Converting a Model](#converting-a-model)
    - [Deploying an Application](#deploying-an-application)
        - [Running Dependencies](#running-dependencies)
        - [Building and Running](#building-and-running)
    - [Detailed Description of the Sample Program](#detailed-description-of-the-sample-program)
        - [Sample Program Structure](#sample-program-structure)
        - [Configuring MindSpore Lite Dependencies](#configuring-mindspore-lite-dependencies)
        - [Downloading and Deploying a Model File](#downloading-and-deploying-a-model-file)
        - [Compiling On-Device Inference Code](#compiling-on-device-inference-code)
<!-- /TOC -->
<a href="https://gitee.com/mindspore/docs/blob/r0.7/lite/tutorials/source_en/quick_start/quick_start.md" target="_blank"><img src="../_static/logo_source.png"></a>
## Overview
It is recommended that you start with the image classification demo on an Android device to understand how to build a MindSpore Lite application project, configure dependencies, and use the related APIs.
This tutorial demonstrates the on-device deployment process based on the image classification sample program for Android provided by the MindSpore team.
1. Select an image classification model.
2. Convert the model into a MindSpore Lite model.
3. Use the MindSpore Lite inference model on the device.

The following describes how to use the MindSpore Lite C++ APIs (Android JNI) and MindSpore Lite image classification models to perform on-device inference, classify the content captured by a device camera, and display the most probable classification result on the application's image preview screen.
> Click to find [Android image classification models](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite) and [sample code](https://gitee.com/mindspore/mindspore/tree/r0.7/model_zoo/official/lite/image_classification).
## Selecting a Model
The MindSpore team provides a series of preset device models that you can use in your application.
Click [here](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite/mobilenetv2.ms) to download image classification models in MindSpore ModelZoo.
In addition, you can use the preset model to perform transfer learning to implement your own image classification tasks.
## Converting a Model
After you retrain a model provided by MindSpore, export the model in the [.mindir format](https://www.mindspore.cn/tutorial/en/r0.7/use/saving_and_loading_model_parameters.html#mindir). Use the MindSpore Lite [model conversion tool](https://www.mindspore.cn/lite/tutorial/en/r0.7/use/converter_tool.html) to convert the .mindir model to a .ms model.
Take the mobilenetv2 model as an example. Execute the following script to convert it into a MindSpore Lite model for on-device inference.
```bash
./converter_lite --fmk=MS --modelFile=mobilenetv2.mindir --outputFile=mobilenetv2.ms
```
## Deploying an Application
The following section describes how to build and execute an on-device image classification task on MindSpore Lite.
### Running Dependencies
- Android Studio 3.2 or later (Android Studio 4.0 or later is recommended)
- Native development kit (NDK) 21.3
- CMake 3.10.2
- Android software development kit (SDK) 26 or later
- OpenCV 4.0.0 or later (included in the sample code)
### Building and Running
1. Load the sample source code to Android Studio and install the corresponding SDK. (After the SDK version is specified, Android Studio automatically installs the SDK.)
![start_home](../images/lite_quick_start_home.png)
Start Android Studio, click `File > Settings > System Settings > Android SDK`, and select the corresponding SDK. As shown in the following figure, select an SDK and click `OK`. Android Studio automatically installs the SDK.
![start_sdk](../images/lite_quick_start_sdk.png)
(Optional) If an NDK version issue occurs during the installation, manually download the corresponding [NDK version](https://developer.android.com/ndk/downloads) (the version used in the sample code is 21.3). Specify the NDK location in `Android NDK location` of `Project Structure`.
![project_structure](../images/lite_quick_start_project_structure.png)
2. Connect to an Android device and run the image classification application.
Connect to the Android device through a USB cable for debugging. Click `Run 'app'` to run the sample project on your device.
![run_app](../images/lite_quick_start_run_app.PNG)
For details about how to connect Android Studio to a device for debugging, see <https://developer.android.com/studio/run/device>.
3. Continue the installation on the Android device. After the installation is complete, you can view the content captured by a camera and the inference result.
![result](../images/lite_quick_start_app_result.jpg)
## Detailed Description of the Sample Program
This image classification sample program on the Android device includes a Java layer and a JNI layer. At the Java layer, the Android Camera 2 API is used to enable a camera to obtain image frames and process images. At the JNI layer, the model inference process is completed in [Runtime](https://www.mindspore.cn/lite/tutorial/en/r0.7/use/runtime.html).
> The following describes the JNI layer implementation of the sample program. At the Java layer, the Android Camera 2 API is used to enable a device camera and process image frames. Readers are expected to have basic Android development knowledge.
### Sample Program Structure
```
app
|
├── libs # library files that store MindSpore Lite dependencies
│ └── arm64-v8a
│ ├── libopencv_java4.so
│ └── libmindspore-lite.so
├── opencv # dependency files related to OpenCV
│ └── ...
|
├── src/main
│ ├── assets # resource files
| | └── model.ms # model file
│ |
│ ├── cpp # main logic encapsulation classes for model loading and prediction
| | ├── include # header files related to MindSpore calling
| | | └── ...
│ | |
| | ├── MindSporeNetnative.cpp # JNI methods related to MindSpore calling
│ | └── MindSporeNetnative.h # header file
│ |
│ ├── java # application code at the Java layer
│ │ └── com.huawei.himindsporedemo
│ │ ├── gallery.classify # implementation related to image processing and MindSpore JNI calling
│ │ │ └── ...
│ │ └── obejctdetect # implementation related to camera enabling and drawing
│ │ └── ...
│ │
│ ├── res # resource files related to Android
│ └── AndroidManifest.xml # Android configuration file
├── CMakeLists.txt # CMake compilation entry file
├── build.gradle # Other Android configuration file
└── ...
```
### Configuring MindSpore Lite Dependencies
When MindSpore C++ APIs are called at the Android JNI layer, related library files are required. You can use MindSpore Lite [source code compilation](https://www.mindspore.cn/lite/tutorial/en/r0.7/build.html) to generate the `libmindspore-lite.so` library file.
In Android Studio, place the compiled `libmindspore-lite.so` library file (which can contain multiple compatible architectures) in the `app/libs/arm64-v8a` (Arm64) or `app/libs/armeabi-v7a` (Arm32) directory of the application project. In the `build.gradle` file of the application, configure CMake compilation support for the `arm64-v8a` and `armeabi-v7a` ABIs.
```
android {
    defaultConfig {
        externalNativeBuild {
            cmake {
                arguments "-DANDROID_STL=c++_shared"
            }
        }

        ndk {
            abiFilters 'armeabi-v7a', 'arm64-v8a'
        }
    }
}
```
Create a link to the `.so` library file in the `app/CMakeLists.txt` file:
```
# Set MindSpore Lite Dependencies.
include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/include/MindSpore)
add_library(mindspore-lite SHARED IMPORTED)
set_target_properties(mindspore-lite PROPERTIES
    IMPORTED_LOCATION "${CMAKE_SOURCE_DIR}/libs/libmindspore-lite.so")

# Set OpenCV Dependencies.
include_directories(${CMAKE_SOURCE_DIR}/opencv/sdk/native/jni/include)
add_library(lib-opencv SHARED IMPORTED)
set_target_properties(lib-opencv PROPERTIES
    IMPORTED_LOCATION "${CMAKE_SOURCE_DIR}/libs/libopencv_java4.so")

# Link target library.
target_link_libraries(
    ...
    mindspore-lite
    lib-opencv
    ...
)
```
In this example, the build process uses the `download.gradle` script to automatically download the `libmindspore-lite.so` and `libopencv_java4.so` library files and place them in the `app/libs/arm64-v8a` directory.

Note: If the automatic download fails, manually download the relevant library files and put them in the corresponding locations:

- [libmindspore-lite.so](https://download.mindspore.cn/model_zoo/official/lite/lib/mindspore%20version%200.7/libmindspore-lite.so)
- [libmindspore-lite include files](https://download.mindspore.cn/model_zoo/official/lite/lib/mindspore%20version%200.7/include.zip)
- [libopencv_java4.so](https://download.mindspore.cn/model_zoo/official/lite/lib/opencv%204.4.0/libopencv_java4.so)
- [libopencv include files](https://download.mindspore.cn/model_zoo/official/lite/lib/opencv%204.4.0/include.zip)
### Downloading and Deploying a Model File
In this example, the `download.gradle` script automatically downloads `mobilenetv2.ms` during the build and places it in the `app/src/main/assets` directory.

Note: If the automatic download fails, manually download the model file and put it in the corresponding location: [mobilenetv2.ms](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite/mobilenetv2.ms)
### Compiling On-Device Inference Code
Call MindSpore Lite C++ APIs at the JNI layer to implement on-device inference.
The inference code process is as follows. For details about the complete code, see `src/cpp/MindSporeNetnative.cpp`.
1. Load the MindSpore Lite model file and build the context, session, and computational graph for inference.
- Load a model file. Create and configure the context for model inference.
```cpp
// Buffer is the model data passed in by the Java layer
jlong bufferLen = env->GetDirectBufferCapacity(buffer);
char *modelBuffer = CreateLocalModelBuffer(env, buffer);
```
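Here, `CreateLocalModelBuffer` copies the model bytes out of the direct `ByteBuffer` held by the Java layer so that the native side owns its own copy. A minimal sketch of such a helper, assuming a direct buffer (the demo's actual implementation in `MindSporeNetnative.cpp` may differ in details):

```cpp
// Sketch: copy model bytes out of a direct ByteBuffer passed from Java.
// Requires <jni.h> and <cstring>; error handling is omitted for brevity.
char *CreateLocalModelBuffer(JNIEnv *env, jobject modelBuffer) {
    jbyte *modelAddr = static_cast<jbyte *>(env->GetDirectBufferAddress(modelBuffer));
    jlong modelLen = env->GetDirectBufferCapacity(modelBuffer);
    char *buffer = new char[modelLen];
    memcpy(buffer, modelAddr, modelLen);
    return buffer;
}
```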
- Create a session.
```cpp
void **labelEnv = new void *;
MSNetWork *labelNet = new MSNetWork;
*labelEnv = labelNet;

// Create context.
lite::Context *context = new lite::Context;
context->device_ctx_.type = lite::DT_CPU;
context->thread_num_ = numThread;  // Specify the number of threads to run inference

// Create the mindspore session.
labelNet->CreateSessionMS(modelBuffer, bufferLen, "device label", context);
delete(context);
```
- Load the model file and build a computational graph for inference.
```cpp
void MSNetWork::CreateSessionMS(char *modelBuffer, size_t bufferLen,
                                std::string name, mindspore::lite::Context *ctx) {
    CreateSession(modelBuffer, bufferLen, ctx);
    session = mindspore::session::LiteSession::CreateSession(ctx);
    auto model = mindspore::lite::Model::Import(modelBuffer, bufferLen);
    int ret = session->CompileGraph(model);
}
```
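Each of these calls can fail: `CreateSession` and `Import` return null pointers on failure, and `CompileGraph` returns a status code. A hedged sketch of the same routine with basic error checks added (the sample omits them for brevity):

```cpp
// Sketch: the demo's logic with defensive checks added; not the shipped code.
void MSNetWork::CreateSessionMS(char *modelBuffer, size_t bufferLen,
                                std::string name, mindspore::lite::Context *ctx) {
    session = mindspore::session::LiteSession::CreateSession(ctx);
    if (session == nullptr) {
        MS_PRINT("CreateSession failed.");
        return;
    }
    auto model = mindspore::lite::Model::Import(modelBuffer, bufferLen);
    if (model == nullptr) {
        MS_PRINT("Import model failed.");
        return;
    }
    if (session->CompileGraph(model) != mindspore::lite::RET_OK) {
        MS_PRINT("CompileGraph failed.");
    }
}
```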
2. Convert the input image into the Tensor format of the MindSpore model.
Convert the image data to be detected into the Tensor format of the MindSpore model.
```cpp
// Convert the Bitmap image passed in from the JAVA layer to Mat for OpenCV processing
BitmapToMat(env, srcBitmap, matImageSrc);
// Processing such as zooming the picture size.
matImgPreprocessed = PreProcessImageData(matImageSrc);

ImgDims inputDims;
inputDims.channel = matImgPreprocessed.channels();
inputDims.width = matImgPreprocessed.cols;
inputDims.height = matImgPreprocessed.rows;
float *dataHWC = new float[inputDims.channel * inputDims.width * inputDims.height];

// Copy the image data to be detected into the dataHWC array.
// The dataHWC[image_size] array here is the intermediate variable for the input MindSpore model tensor.
float *ptrTmp = reinterpret_cast<float *>(matImgPreprocessed.data);
for (int i = 0; i < inputDims.channel * inputDims.width * inputDims.height; i++) {
    dataHWC[i] = ptrTmp[i];
}

// Assign dataHWC[image_size] to the input tensor variable.
auto msInputs = mSession->GetInputs();
auto inTensor = msInputs.front();
memcpy(inTensor->MutableData(), dataHWC,
       inputDims.channel * inputDims.width * inputDims.height * sizeof(float));
delete[] (dataHWC);
```
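The `PreProcessImageData` call above is where the camera frame is resized to the model's input resolution and converted to float32. A minimal sketch of such preprocessing, assuming a 224x224 RGB input for mobilenetv2 and simple scaling to [0, 1] (the demo's actual cropping and normalization steps may differ):

```cpp
// Sketch: resize to an assumed 224x224 mobilenetv2 input and scale pixels to [0, 1].
// Requires <opencv2/opencv.hpp>; the demo's actual preprocessing may differ.
cv::Mat PreProcessImageData(const cv::Mat &input) {
    cv::Mat imgResized;
    cv::resize(input, imgResized, cv::Size(224, 224));
    cv::Mat imgFloat;
    imgResized.convertTo(imgFloat, CV_32FC3, 1.0 / 255.0);
    return imgFloat;
}
```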
3. Perform inference on the input tensor based on the model, obtain the output tensor, and perform post-processing.
- Perform graph execution and on-device inference.
```cpp
// After the model and image tensor data is loaded, run inference.
auto status = mSession->RunGraph();
```
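`RunGraph` returns a status code, so it is worth checking before reading the outputs. For example (a sketch; `nullptr` here assumes the enclosing JNI method returns a `jstring`):

```cpp
// Sketch: abort post-processing if graph execution failed.
if (status != mindspore::lite::RET_OK) {
    MS_PRINT("MindSpore run net error.");
    return nullptr;  // assumption: the enclosing JNI method returns a jstring
}
```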
- Obtain the output data.
```cpp
auto msOutputs = mSession->GetOutputMapByNode();
std::string retStr = ProcessRunnetResult(msOutputs, ret);
```
- Perform post-processing of the output data.
```cpp
std::string ProcessRunnetResult(std::unordered_map<std::string,
                                std::vector<mindspore::tensor::MSTensor *>> msOutputs,
                                int runnetRet) {
    // Get model output results.
    std::unordered_map<std::string, std::vector<mindspore::tensor::MSTensor *>>::iterator iter;
    iter = msOutputs.begin();
    auto branch1_string = iter->first;
    auto branch1_tensor = iter->second;

    int OUTPUTS_LEN = branch1_tensor[0]->ElementsNum();

    float *temp_scores = static_cast<float *>(branch1_tensor[0]->MutableData());
    float scores[RET_CATEGORY_SUM];
    for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
        scores[i] = temp_scores[i];
    }

    // Convert to the text information that needs to be displayed in the APP.
    std::string retStr = "";
    if (runnetRet == 0) {
        for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
            if (scores[i] > 0.3) {
                retStr += g_labels_name_map[i];
                retStr += ":";
                std::string score_str = std::to_string(scores[i]);
                retStr += score_str;
                retStr += ";";
            }
        }
    } else {
        MS_PRINT("MindSpore run net failed!");
        for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
            retStr += " :0.0;";
        }
    }

    return retStr;
}
```
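In this function, `RET_CATEGORY_SUM` is the number of classes the model predicts and `g_labels_name_map` maps class indices to human-readable label names; both ship with the demo. The sketch below only illustrates their expected shape, with placeholder values:

```cpp
// Shape illustration only: the category count and label strings are placeholders;
// the demo defines its own table for the model's classes.
constexpr int RET_CATEGORY_SUM = 601;  // placeholder; use the model's actual class count
static std::unordered_map<int, std::string> g_labels_name_map = {
    {0, "label_0"},
    {1, "label_1"},
    // ... one entry per class index, RET_CATEGORY_SUM entries in total ...
};
```

The 0.3 score threshold is a display-filtering choice: only classes whose score exceeds it appear in the returned string, and it can be tuned to show more or fewer candidates.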