diff --git a/deploy/android_demo/README.md b/deploy/android_demo/README.md
index e35e757914aa355c97293662652b1e02676e32eb..285f7a84387e5f50ac3286cee10b0c4bea0deb31 100644
--- a/deploy/android_demo/README.md
+++ b/deploy/android_demo/README.md
@@ -2,7 +2,7 @@
 ### 1. Install the latest version of Android Studio
 It can be downloaded from https://developer.android.com/studio. This demo was written with Android Studio 4.0.
-### 2. Install NDK version 20 or above
+### 2. Install NDK version 20 or above
 The demo was tested with NDK 20b; any NDK version 20 or above compiles it successfully.
 If you are a beginner, you can install and test the NDK build environment as follows.
@@ -17,3 +17,10 @@ The demo was tested with NDK 20b; any NDK version 20 or above compiles it succe
 - Demo APP: can be installed by scanning a QR code with a phone, making it easy to try text recognition on a mobile device
 - SDK: the model is packaged as an SDK adapted to different chips and operating systems, with complete interfaces for convenient secondary development
+
+
+# FAQ:
+Q1: Why does the demo report an error after updating to the version 1.1 models?
+
+
+A1: When switching to the V1.1 models, update the inference library files together with the models. The [PaddleLite 2.6.3](https://github.com/PaddlePaddle/Paddle-Lite/releases/tag/v2.6.3) inference library is recommended; for OCR mobile deployment, refer to the [tutorial](../lite/readme.md).
diff --git a/deploy/lite/readme.md b/deploy/lite/readme.md
index 0695ef0360181c37d6b849d3d2c2e42d3ff66402..589617438b96e7813add0b9142b9c3afa6c1730a 100644
--- a/deploy/lite/readme.md
+++ b/deploy/lite/readme.md
@@ -22,21 +22,20 @@ Paddle Lite is PaddlePaddle's lightweight inference engine, providing efficient inference for mobile and IoT devices
 ### 1.2 Prepare the inference library

 The inference library can be obtained in two ways:
-- 1. Download it directly; the download links are as follows:
+- 1. [Recommended] Download it directly; the download links are as follows:
 |Platform|Inference library download link|
 |-|-|
- |Android|[arm7](https://paddlelite-data.bj.bcebos.com/Release/2.6.1/Android/inference_lite_lib.android.armv7.gcc.c++_static.with_extra.CV_ON.tar.gz) / [arm8](https://paddlelite-data.bj.bcebos.com/Release/2.6.1/Android/inference_lite_lib.android.armv8.gcc.c++_static.with_extra.CV_ON.tar.gz)|
- |IOS|[arm7](https://paddlelite-data.bj.bcebos.com/Release/2.6.1/iOS/inference_lite_lib.ios.armv7.with_extra.CV_ON.tar.gz) / [arm8](https://paddlelite-data.bj.bcebos.com/Release/2.6.1/iOS/inference_lite_lib.ios64.armv8.with_extra.CV_ON.tar.gz)|
+ |Android|[arm7](https://github.com/PaddlePaddle/Paddle-Lite/releases/download/v2.6.3/inference_lite_lib.android.armv7.gcc.c++_shared.with_extra.with_cv.tar.gz) / [arm8](https://github.com/PaddlePaddle/Paddle-Lite/releases/download/v2.6.3/inference_lite_lib.android.armv8.gcc.c++_shared.with_extra.with_cv.tar.gz)|
+ |IOS|[arm7](https://github.com/PaddlePaddle/Paddle-Lite/releases/download/v2.6.3/inference_lite_lib.ios.armv7.with_cv.with_extra.with_log.tiny_publish.tar.gz) / [arm8](https://github.com/PaddlePaddle/Paddle-Lite/releases/download/v2.6.3/inference_lite_lib.ios.armv8.with_cv.with_extra.with_log.tiny_publish.tar.gz)|

- Note: 1. If the inference library is downloaded from the Paddle-Lite [official documentation](https://paddle-lite.readthedocs.io/zh/latest/user_guides/release_lib.html#android-toolchain-gcc),
- be sure to choose the download link built with `with_extra=ON,with_cv=ON`. 2. If a quantized model is deployed on the device side, it is recommended to compile the inference library from the Paddle-Lite develop branch.
+ Note: the above prebuilt libraries are compiled from the PaddleLite 2.6.3 branch; for details about PaddleLite 2.6.3, see this [link](https://github.com/PaddlePaddle/Paddle-Lite/releases/tag/v2.6.3).

-- 2. [Recommended] Compile Paddle-Lite to get the inference library. Paddle-Lite is compiled as follows:
+- 2. Compile Paddle-Lite to get the inference library. Paddle-Lite is compiled as follows:
 ```
 git clone https://github.com/PaddlePaddle/Paddle-Lite.git
 cd Paddle-Lite
-# be sure to compile the inference library from the develop branch
-git checkout develop
+# switch to the stable Paddle-Lite 2.6.3 release branch
+git checkout release/v2.6
 ./lite/tools/build_android.sh --arch=armv8 --with_cv=ON --with_extra=ON
 ```
@@ -235,7 +234,7 @@ max_side_len 960 # When the input image's height or width exceeds 960, it is scaled proportionally
 det_db_thresh 0.3 # Used to filter the binarized map predicted by DB; setting it between 0 and 0.3 has no obvious effect on the result
 det_db_box_thresh 0.5 # Threshold for filtering boxes in DB post-processing; if boxes are missed during detection, it can be reduced as appropriate
 det_db_unclip_ratio 1.6 # Controls how tight the text box is; the smaller the value, the closer the box fits the text
-use_direction_classify 1 # Whether to use the direction classifier: 0 means no, 1 means yes
+use_direction_classify 0 # Whether to use the direction classifier: 0 means no, 1 means yes
 ```
 5. Start debugging
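For reference, a minimal sketch of the [Recommended] direct-download route for the Android arm8 library, assuming a Linux shell with `wget` and `tar` available; the URL is copied from the updated table, and the extracted directory name is an assumption based on the archive name:

```bash
# Fetch the prebuilt Paddle-Lite 2.6.3 library for Android armv8
# (URL taken from the updated download table above).
wget https://github.com/PaddlePaddle/Paddle-Lite/releases/download/v2.6.3/inference_lite_lib.android.armv8.gcc.c++_shared.with_extra.with_cv.tar.gz

# Unpack it and list the contents; the output directory name is assumed
# to match the archive name.
tar -xzf inference_lite_lib.android.armv8.gcc.c++_shared.with_extra.with_cv.tar.gz
ls inference_lite_lib.android.armv8.gcc.c++_shared.with_extra.with_cv
```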
diff --git a/deploy/lite/readme_en.md b/deploy/lite/readme_en.md
index 02491d31b212b584eedeca0e2e060221c081ebbb..7abe0e5f045e72a9ce6ba3baf24e23020c7196cb 100644
--- a/deploy/lite/readme_en.md
+++ b/deploy/lite/readme_en.md
@@ -18,14 +18,14 @@ deployment solutions for end-side deployment issues.
 2. [Linux](https://paddle-lite.readthedocs.io/zh/latest/source_compile/compile_env.html#linux)
 3. [MAC OS](https://paddle-lite.readthedocs.io/zh/latest/source_compile/compile_env.html#mac-os)

-## 3. Download prebuild library for android and ios
+## 3. [Recommended] Download prebuilt library for Android and iOS

 |Platform|Prebuilt library download link|
 |-|-|
-|Android|[arm7](https://paddlelite-data.bj.bcebos.com/Release/2.6.1/Android/inference_lite_lib.android.armv7.gcc.c++_static.with_extra.CV_ON.tar.gz) / [arm8](https://paddlelite-data.bj.bcebos.com/Release/2.6.1/Android/inference_lite_lib.android.armv8.gcc.c++_static.with_extra.CV_ON.tar.gz)|
-|IOS|[arm7](https://paddlelite-data.bj.bcebos.com/Release/2.6.1/iOS/inference_lite_lib.ios.armv7.with_extra.CV_ON.tar.gz) / [arm8](https://paddlelite-data.bj.bcebos.com/Release/2.6.1/iOS/inference_lite_lib.ios64.armv8.with_extra.CV_ON.tar.gz)|
+|Android|[arm7](https://github.com/PaddlePaddle/Paddle-Lite/releases/download/v2.6.3/inference_lite_lib.android.armv7.gcc.c++_shared.with_extra.with_cv.tar.gz) / [arm8](https://github.com/PaddlePaddle/Paddle-Lite/releases/download/v2.6.3/inference_lite_lib.android.armv8.gcc.c++_shared.with_extra.with_cv.tar.gz)|
+|IOS|[arm7](https://github.com/PaddlePaddle/Paddle-Lite/releases/download/v2.6.3/inference_lite_lib.ios.armv7.with_cv.with_extra.with_log.tiny_publish.tar.gz) / [arm8](https://github.com/PaddlePaddle/Paddle-Lite/releases/download/v2.6.3/inference_lite_lib.ios.armv8.with_cv.with_extra.with_log.tiny_publish.tar.gz)|

-note: It is recommended to build prebuild library using [Paddle-Lite](https://github.com/PaddlePaddle/Paddle-Lite) develop branch if developer wants to deploy the [quantitative](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/deploy/slim/quantization/README_en.md) model to mobile phone.
+note: The above prebuilt inference libraries are compiled from the PaddleLite `release/2.6.3` branch. For more information about PaddleLite 2.6.3, please refer to this [link](https://github.com/PaddlePaddle/Paddle-Lite/releases/tag/v2.6.3).

 The structure of the prediction library is as follows:
@@ -199,7 +199,7 @@ max_side_len 960 # Limit the maximum image height and width to 960
 det_db_thresh 0.3 # Used to filter the binarized image of DB prediction; setting it between 0 and 0.3 has no obvious effect on the result
 det_db_box_thresh 0.5 # DB post-processing filter box threshold; if a box is missed in detection, it can be reduced as appropriate
 det_db_unclip_ratio 1.6 # Indicates the compactness of the text box; the smaller the value, the closer the text box fits the text
-use_direction_classify 1 # Whether to use the direction classifier, 0 means not to use, 1 means to use
+use_direction_classify 0 # Whether to use the direction classifier, 0 means not to use, 1 means to use
 ```
 5. Run Model on phone
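The final hunk of each readme sets `use_direction_classify` to 0 in the sample `config.txt`. For an already-deployed config, a small sketch of flipping the flag to the same value, assuming GNU `sed` and that `config.txt` sits in the current directory (the path is an assumption; point it at wherever the demo keeps its config):

```bash
# Set use_direction_classify to 0 (do not run the direction classifier);
# the ./config.txt path is an assumption -- adjust it to your deployed copy.
sed -i 's/^use_direction_classify[[:space:]].*/use_direction_classify  0  # whether to use the direction classifier: 0 = no, 1 = yes/' ./config.txt

# Confirm the new value.
grep '^use_direction_classify' ./config.txt
```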