### 1.2 Prepare Paddle-Lite library
There are two ways to obtain the Paddle-Lite library:
- 1. [Recommended] Download directly. The download links for the Paddle-Lite library are as follows:
| Platform | Paddle-Lite library download link |
|---|---|
...
Note: The above Paddle-Lite library is compiled from the Paddle-Lite 2.10 branch. For more information about Paddle-Lite 2.10, please refer to this [link](https://github.com/PaddlePaddle/Paddle-Lite/releases/tag/v2.10).
**Note: It is recommended to use a paddlelite >= 2.10 version of the prediction library; other prediction library versions are available at the [download link](https://github.com/PaddlePaddle/Paddle-Lite/tags).**
- 2. Compile Paddle-Lite to get the prediction library. The compilation method of Paddle-Lite is as follows:
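The full compilation commands are elided in this excerpt; the following is a minimal sketch, assuming the release/v2.10 branch and Paddle-Lite's standard `build_android.sh` script (the `--arch` value and feature flags may need adjusting for your target device):
```
# Clone Paddle-Lite and check out the 2.10 release branch
git clone https://github.com/PaddlePaddle/Paddle-Lite.git
cd Paddle-Lite
git checkout release/v2.10
# Build the Android prediction library for ARMv8, with OpenCV support and extra operators
./lite/tools/build_android.sh --arch=armv8 --with_cv=ON --with_extra=ON
```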
...
If the model to be deployed is not in the above table, you need to follow the steps below to obtain the optimized model.
- Step 1: Refer to the [document](https://www.paddlepaddle.org.cn/lite/v2.10/user_guides/opt/opt_python.html) to install paddlelite, which is used to convert the Paddle inference model into the nb model format required by Paddle-Lite at runtime:
```
pip install paddlelite==2.10 # The paddlelite version should be the same as the prediction library version
```
After installation, you can view the help information with the following command:
```
paddle_lite_opt
```
Introduction to paddle_lite_opt parameters:
|Options|Description|
|---|---|
|--model_dir|The path of the PaddlePaddle model to be optimized (non-combined form)|
...
`--model_dir` applies when the model to be optimized is in non-combined form; the PaddleOCR inference model is in combined form, that is, the model structure and the model parameters are each stored in a single file.
- Step 2: Use paddle_lite_opt to convert the inference model to the mobile model format.
The following takes the ultra-lightweight Chinese model of PaddleOCR as an example to show how to use `paddle_lite_opt` to convert the inference model into a Paddle-Lite optimized model.
```
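# A hedged example, not from the original excerpt: the model paths and output
# name below are illustrative; substitute the inference model you actually use.
# --model_file/--param_file are passed because the PaddleOCR inference model is
# in combined form; --valid_targets=arm and naive_buffer target mobile deployment.
paddle_lite_opt \
    --model_file=./ch_PP-OCRv3_det_slim_infer/inference.pdmodel \
    --param_file=./ch_PP-OCRv3_det_slim_infer/inference.pdiparams \
    --optimize_out=./ch_PP-OCRv3_det_slim_opt \
    --valid_targets=arm \
    --optimize_out_type=naive_buffer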
```
...
```
det_db_thresh 0.3 # Used to filter the binarized image of DB prediction
det_db_box_thresh 0.5 # DB post-processing box filtering threshold; if boxes are missed in detection, it can be reduced as appropriate
det_db_unclip_ratio 1.6 # Indicates the compactness of the text box; the smaller the value, the closer the text box is to the text
use_direction_classify 0 # Whether to use the direction classifier: 0 means not to use, 1 means to use
rec_image_height 32 # The height of the input image of the recognition model, the PP-OCRv3 model needs to be set to 48, and the PP-OCRv2 model needs to be set to 32
```
5. Run Model on phone
...
After the above steps are completed, you can use adb to push the files to the phone:
```
cd /data/local/tmp/debug
export LD_LIBRARY_PATH=${PWD}:$LD_LIBRARY_PATH
# The usage of ocr_db_crnn is:
# ./ocr_db_crnn Mode Detection model file Orientation classifier model file Recognition model file Hardware Precision Threads Batchsize Test image path Dictionary file path
```
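For reference, a hypothetical invocation matching the usage line above (the `.nb` file names and the `arm8`/`INT8` values are illustrative assumptions, not taken from this excerpt):
```
./ocr_db_crnn system ch_PP-OCRv3_det_slim_opt.nb ch_ppocr_mobile_v2.0_cls_slim_opt.nb ch_PP-OCRv3_rec_slim_opt.nb arm8 INT8 10 1 ./11.jpg ppocr_keys_v1.txt
```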
If you modify the code, you need to recompile and push it to the phone.
...
Q3: How to package it into the mobile APP?
A3: This demo aims to provide the core algorithm part that runs OCR on mobile phones. Going further, PaddleOCR/deploy/android_demo is an example of encapsulating this demo into a mobile app, for reference.
Q4: When running the demo, an error is reported `Error: This model is not supported, because kernel for 'io_copy' is not supported by Paddle-Lite.`
A4: This error means that the installed paddlelite version does not match the downloaded prediction library version. Make sure the paddle_lite_opt tool version matches your prediction library version, re-convert the nb model, and try again.
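One hedged way to diagnose the mismatch is to print the installed paddlelite version (which provides `paddle_lite_opt`) and compare it against the version of the prediction library you downloaded:
```
# Show the installed paddlelite package version
pip show paddlelite
```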