When the above command is completed, there will be a `MobileNetV3_large_x1_0.nb` file in the current directory, which is the converted model file.
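For reference, a hedged sketch of a `paddle_lite_opt` invocation that produces such a file; the input model paths are placeholders for your own exported inference model:

```shell
# convert an inference model to the Paddle-Lite naive-buffer (.nb) format;
# the --model_file/--param_file paths below are placeholders
paddle_lite_opt \
    --model_file=./MobileNetV3_large_x1_0_infer/inference.pdmodel \
    --param_file=./MobileNetV3_large_x1_0_infer/inference.pdiparams \
    --optimize_out=./MobileNetV3_large_x1_0 \
    --valid_targets=arm
```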
<a name="2.1.4"></a>
#### 2.1.4 Compile to get the executable file clas_system
```shell
# Clone the AutoLog repository to get automation logs
cd PaddleClas_root_path
cd deploy/lite/
git clone https://github.com/LDOUBLEV/AutoLog.git
```
```shell
# Compile
make -j
```
After executing the `make` command, the `clas_system` executable file is generated in the current directory, which is used for Lite prediction.
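A quick way to confirm the build produced a phone-side binary (the exact `file` output depends on your toolchain):

```shell
# the executable is cross-compiled for the phone, so check its architecture
ls -l clas_system
file clas_system   # expect something like: ELF ... ARM aarch64
```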
<a name="2.2"></a>
## 2.2 Run optimized model on Phone
* Install ADB for Windows

  To install ADB on Windows, you need to download it from Google's Android platform: [Download Link](https://developer.android.com/studio).
3. First, make sure the phone is connected to the computer, turn on the `USB debugging` option of the phone, and select the `file transfer` mode. Verify whether ADB is installed successfully as follows:
```shell
$ adb devices
List of devices attached
744be294    device
```
If there is `device` output like the above, it means the installation was successful.
4. Push the optimized model, prediction library file, test image and category mapping file to the phone.
```shell
cd PaddleClas_root_path
cd deploy/lite/
# create a working directory on the phone and push the executable to it
adb shell mkdir -p /data/local/tmp/arm_cpu/
adb push clas_system /data/local/tmp/arm_cpu/
```
Take `PaddleClas/deploy/lite/imgs/tabby_cat.jpg` as the test image, and use the `MobileNetV3_large_x1_0.nb` model file optimized by `paddle_lite_opt` in [2.1.3](#2.1.3) as the model; both need to be pushed to the same phone directory, as shown in the sketch after the listing below.
The contents of `/data/local/tmp/arm_cpu/` on the phone are as follows after all files have been pushed:

```
/data/local/tmp/arm_cpu/
|-- clas_system                      executable file for prediction
|-- MobileNetV3_large_x1_0.nb        class model
|-- tabby_cat.jpg                    test image
|-- imagenet1k_label_list.txt        dictionary file
|-- libpaddle_light_api_shared.so    C++ .so file
|-- config.txt                       config file
```
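A hedged sketch of the remaining pushes, assuming the model, config file, category mapping file, test image and `.so` file all sit in `deploy/lite/` (adjust the source paths to your checkout):

```shell
# push the remaining files next to the executable; the source paths are
# assumptions based on the listing above
adb push MobileNetV3_large_x1_0.nb /data/local/tmp/arm_cpu/
adb push config.txt /data/local/tmp/arm_cpu/
adb push imagenet1k_label_list.txt /data/local/tmp/arm_cpu/
adb push imgs/tabby_cat.jpg /data/local/tmp/arm_cpu/
adb push libpaddle_light_api_shared.so /data/local/tmp/arm_cpu/
# make sure the executable is runnable on the phone
adb shell chmod +x /data/local/tmp/arm_cpu/clas_system
```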
**NOTE**:
* `imagenet1k_label_list.txt` is the category mapping file of the `ImageNet1k` dataset. If you use custom categories, you need to replace it with your own category mapping file.
The `config.txt` file contains the model path and the other prediction parameters, as follows:

```shell
clas_model_file ./MobileNetV3_large_x1_0.nb # path of model file
label_path ./imagenet1k_label_list.txt # path of category mapping file
resize_short_size 256 # the short side length after resize
crop_size 224 # side length used for inference after cropping
visualize 0 # whether to visualize. If you set it to 1, an image file named 'clas_result.png' will be generated in the current directory.
num_threads 1 # The number of threads, the default is 1
precision FP32 # Precision type, you can choose FP32 or INT8, the default is FP32
runtime_device arm_cpu # Device type, the default is arm_cpu
enable_benchmark 0 # Whether to enable benchmark, the default is 0
tipc_benchmark 0 # Whether to enable tipc_benchmark, the default is 0
```
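If you need to change a parameter before pushing `config.txt` to the phone, a simple in-place edit works; the values below are illustrative:

```shell
# example: run with 4 threads and turn on the benchmark switch
sed -i 's/^num_threads 1/num_threads 4/' config.txt
sed -i 's/^enable_benchmark 0/enable_benchmark 1/' config.txt
```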
5. Run Model on Phone
For the runtime environment, refer to the [documentation](https://github.com/PaddlePaddle/models/blob/release/2.2/tutorials/mobilenetv3_prod/Step6/deploy/lite_infer_cpp_arm_cpu/README.md) to configure the TIPC Lite running environment.

Execute the following command to complete the prediction on the mobile phone.
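The exact invocation can differ between releases; a minimal sketch, assuming every file was pushed to `/data/local/tmp/arm_cpu/` as above:

```shell
# LD_LIBRARY_PATH must point at the directory that holds libpaddle_light_api_shared.so
adb shell 'export LD_LIBRARY_PATH=/data/local/tmp/arm_cpu/; \
/data/local/tmp/arm_cpu/clas_system /data/local/tmp/arm_cpu/config.txt /data/local/tmp/arm_cpu/tabby_cat.jpg'
```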