diff --git a/deploy/lite/readme.md b/deploy/lite/readme.md
index e2d2482f34fbd55fc1a6afd535eac2a1eaae54b0..751e52e607d049f89d6e8a6f2ba991f65534d19a 100644
--- a/deploy/lite/readme.md
+++ b/deploy/lite/readme.md
@@ -86,7 +86,7 @@ Paddle-Lite 提供了多种策略来自动优化原始的模型,其中包括
 |模型版本|模型简介|模型大小|检测模型|文本方向分类模型|识别模型|Paddle-Lite版本|
 |---|---|---|---|---|---|---|
 |V2.0|超轻量中文OCR 移动端模型|7.8M|[下载地址](https://paddleocr.bj.bcebos.com/dygraph_v2.0/lite/ch_ppocr_mobile_v2.0_det_opt.nb)|[下载地址](https://paddleocr.bj.bcebos.com/dygraph_v2.0/lite/ch_ppocr_mobile_v2.0_cls_opt.nb)|[下载地址](https://paddleocr.bj.bcebos.com/dygraph_v2.0/lite/ch_ppocr_mobile_v2.0_rec_opt.nb)|v2.9|
-|V2.0(slim)|超轻量中文OCR 移动端模型|7.8M|[下载地址](https://paddleocr.bj.bcebos.com/dygraph_v2.0/lite/ch_ppocr_mobile_v2.0_det_slim_opt.nb)|[下载地址](https://paddleocr.bj.bcebos.com/dygraph_v2.0/lite/ch_ppocr_mobile_v2.0_cls_slim_opt.nb)|[下载地址](https://paddleocr.bj.bcebos.com/dygraph_v2.0/lite/ch_ppocr_mobile_v2.0_rec_slim_opt.nb)|v2.9|
+|V2.0(slim)|超轻量中文OCR 移动端模型|3.3M|[下载地址](https://paddleocr.bj.bcebos.com/dygraph_v2.0/lite/ch_ppocr_mobile_v2.0_det_slim_opt.nb)|[下载地址](https://paddleocr.bj.bcebos.com/dygraph_v2.0/lite/ch_ppocr_mobile_v2.0_cls_slim_opt.nb)|[下载地址](https://paddleocr.bj.bcebos.com/dygraph_v2.0/lite/ch_ppocr_mobile_v2.0_rec_slim_opt.nb)|v2.9|
 
 如果直接使用上述表格中的模型进行部署,可略过下述步骤,直接阅读 [2.2节](#2.2与手机联调)。
 
@@ -124,9 +124,9 @@ cd build.opt/lite/api/
 
 ```
 # 【推荐】 下载PaddleOCR V2.0版本的中英文 inference模型
-wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_slim_infer.tar && tar xf ch_ppocr_mobile_v2.0_det_slim_infer.tar
-wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_slim_nfer.tar && tar xf ch_ppocr_mobile_v2.0_rec_slim_infer.tar
-wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_slim_infer.tar && tar xf ch_ppocr_mobile_v2.0_cls_slim_infer.tar
+wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/slim/ch_ppocr_mobile_v2.0_det_slim_infer.tar && tar xf ch_ppocr_mobile_v2.0_det_slim_infer.tar
+wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/slim/ch_ppocr_mobile_v2.0_rec_slim_infer.tar && tar xf ch_ppocr_mobile_v2.0_rec_slim_infer.tar
+wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/slim/ch_ppocr_mobile_v2.0_cls_slim_infer.tar && tar xf ch_ppocr_mobile_v2.0_cls_slim_infer.tar
 # 转换V2.0检测模型
 ./opt --model_file=./ch_ppocr_mobile_v2.0_det_slim_infer/inference.pdmodel --param_file=./ch_ppocr_mobile_v2.0_det_slim_infer/inference.pdiparams --optimize_out=./ch_ppocr_mobile_v2.0_det_slim_opt --valid_targets=arm --optimize_out_type=naive_buffer
 # 转换V2.0识别模型
@@ -239,9 +239,8 @@ use_direction_classify 0 # 是否使用方向分类器,0表示不使用,1
 
 ```
 # 执行编译,得到可执行文件ocr_db_crnn
- # ocr_db_crnn可执行文件的使用方式为:
- # ./ocr_db_crnn 检测模型文件 方向分类器模型文件 识别模型文件 测试图像路径 字典文件路径
 make -j
+ # 将编译的可执行文件移动到debug文件夹中
 mv ocr_db_crnn ./debug/
 # 将debug文件夹push到手机上
 
@@ -249,6 +248,8 @@ use_direction_classify 0 # 是否使用方向分类器,0表示不使用,1
 adb shell
 cd /data/local/tmp/debug
 export LD_LIBRARY_PATH=${PWD}:$LD_LIBRARY_PATH
+ # 开始使用,ocr_db_crnn可执行文件的使用方式为:
+ # ./ocr_db_crnn 检测模型文件 识别模型文件 方向分类器模型文件 测试图像路径 字典文件路径
 ./ocr_db_crnn ch_ppocr_mobile_v2.0_det_slim_opt.nb ch_ppocr_mobile_v2.0_rec_slim_opt.nb ch_ppocr_mobile_v2.0_cls_slim_opt.nb ./11.jpg ppocr_keys_v1.txt
 ```
@@ -264,7 +265,7 @@ use_direction_classify 0 # 是否使用方向分类器,0表示不使用,1
 ## FAQ
 
 Q1:如果想更换模型怎么办,需要重新按照流程走一遍吗?
-A1:如果已经走通了上述步骤,更换模型只需要替换 .nb 模型文件即可,同时要注意字典更新
+A1:如果已经走通了上述步骤,更换模型只需要替换 .nb 模型文件即可,同时要注意更新字典
 
 Q2:换一个图测试怎么做?
diff --git a/deploy/lite/readme_en.md b/deploy/lite/readme_en.md
index e667bc8654130f711f42056257936853186add5b..f9373cd4bcdef418fef03a642fc269fcc7b5f6ba 100644
--- a/deploy/lite/readme_en.md
+++ b/deploy/lite/readme_en.md
@@ -28,10 +28,10 @@ There are two ways to obtain the Paddle-Lite library:
 
  | Platform | Paddle-Lite library download link |
  |---|---|
- |Android|[arm7](https://github.com/PaddlePaddle/Paddle-Lite/releases/download/v2.8/inference_lite_lib.android.armv7.gcc.c++_shared.with_extra.with_cv.tar.gz) / [arm8](https://github.com/PaddlePaddle/Paddle-Lite/releases/download/v2.8/inference_lite_lib.android.armv8.gcc.c++_shared.with_extra.with_cv.tar.gz)|
- |IOS|[arm7](https://github.com/PaddlePaddle/Paddle-Lite/releases/download/v2.8/inference_lite_lib.ios.armv7.with_cv.with_extra.with_log.tiny_publish.tar.gz) / [arm8](https://github.com/PaddlePaddle/Paddle-Lite/releases/download/v2.8/inference_lite_lib.ios.armv8.with_cv.with_extra.with_log.tiny_publish.tar.gz)|
+ |Android|[arm7](https://github.com/PaddlePaddle/Paddle-Lite/releases/download/v2.9/inference_lite_lib.android.armv7.gcc.c++_shared.with_extra.with_cv.tar.gz) / [arm8](https://github.com/PaddlePaddle/Paddle-Lite/releases/download/v2.9/inference_lite_lib.android.armv8.gcc.c++_shared.with_extra.with_cv.tar.gz)|
+ |IOS|[arm7](https://github.com/PaddlePaddle/Paddle-Lite/releases/download/v2.9/inference_lite_lib.ios.armv7.with_cv.with_extra.with_log.tiny_publish.tar.gz) / [arm8](https://github.com/PaddlePaddle/Paddle-Lite/releases/download/v2.9/inference_lite_lib.ios.armv8.with_cv.with_extra.with_log.tiny_publish.tar.gz)|
 
- Note: 1. The above Paddle-Lite library is compiled from the Paddle-Lite 2.8 branch. For more information about Paddle-Lite 2.8, please refer to [link](https://github.com/PaddlePaddle/Paddle-Lite/releases/tag/v2.8).
+ Note: 1. The above Paddle-Lite library is compiled from the Paddle-Lite 2.9 branch. For more information about Paddle-Lite 2.9, please refer to [link](https://github.com/PaddlePaddle/Paddle-Lite/releases/tag/v2.9).
 
 - 2. [Recommended] Compile Paddle-Lite to get the prediction library. The compilation method of Paddle-Lite is as follows:
 ```
@@ -87,7 +87,8 @@ The following table also provides a series of models that can be deployed on mob
 
 |Version|Introduction|Model size|Detection model|Text Direction model|Recognition model|Paddle-Lite branch|
 |---|---|---|---|---|---|---|
-|V2.0|extra-lightweight chinese OCR optimized model|7.8M|[download link](https://paddleocr.bj.bcebos.com/dygraph_v2.0/lite/ch_ppocr_mobile_v2.0_det_opt.nb)|[download lin](https://paddleocr.bj.bcebos.com/dygraph_v2.0/lite/ch_ppocr_mobile_v2.0_cls_opt.nb)|[download lin](https://paddleocr.bj.bcebos.com/dygraph_v2.0/lite/ch_ppocr_mobile_v2.0_rec_opt.nb)|v2.8|
+|V2.0|extra-lightweight Chinese OCR optimized model|7.8M|[download link](https://paddleocr.bj.bcebos.com/dygraph_v2.0/lite/ch_ppocr_mobile_v2.0_det_opt.nb)|[download link](https://paddleocr.bj.bcebos.com/dygraph_v2.0/lite/ch_ppocr_mobile_v2.0_cls_opt.nb)|[download link](https://paddleocr.bj.bcebos.com/dygraph_v2.0/lite/ch_ppocr_mobile_v2.0_rec_opt.nb)|v2.9|
+|V2.0(slim)|extra-lightweight Chinese OCR optimized model|3.3M|[download link](https://paddleocr.bj.bcebos.com/dygraph_v2.0/lite/ch_ppocr_mobile_v2.0_det_slim_opt.nb)|[download link](https://paddleocr.bj.bcebos.com/dygraph_v2.0/lite/ch_ppocr_mobile_v2.0_cls_slim_opt.nb)|[download link](https://paddleocr.bj.bcebos.com/dygraph_v2.0/lite/ch_ppocr_mobile_v2.0_rec_slim_opt.nb)|v2.9|
 
 If you directly use the model in the above table for deployment, you can skip the following steps and directly read [Section 2.2](#2.2 Run optimized model on Phone).
 
@@ -97,7 +98,7 @@ The `opt` tool can be obtained by compiling Paddle Lite.
 
 ```
 git clone https://github.com/PaddlePaddle/Paddle-Lite.git
 cd Paddle-Lite
-git checkout release/v2.8
+git checkout release/v2.9
 ./lite/tools/build.sh build_optimize_tool
 ```
@@ -124,20 +125,19 @@ The following takes the ultra-lightweight Chinese model of PaddleOCR as an examp
 
 ```
 # [Recommendation] Download the Chinese and English inference model of PaddleOCR V2.0
-wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_infer.tar && tar xf ch_ppocr_mobile_v2.0_det_infer.tar
-wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_infer.tar && tar xf ch_ppocr_mobile_v2.0_rec_infer.tar
-wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar && tar xf ch_ppocr_mobile_v2.0_cls_infer.tar
+wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/slim/ch_ppocr_mobile_v2.0_det_slim_infer.tar && tar xf ch_ppocr_mobile_v2.0_det_slim_infer.tar
+wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/slim/ch_ppocr_mobile_v2.0_rec_slim_infer.tar && tar xf ch_ppocr_mobile_v2.0_rec_slim_infer.tar
+wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/slim/ch_ppocr_mobile_v2.0_cls_slim_infer.tar && tar xf ch_ppocr_mobile_v2.0_cls_slim_infer.tar
 # Convert V2.0 detection model
-./opt --model_file=./ch_ppocr_mobile_v2.0_det_infer/inference.pdmodel --param_file=./ch_ppocr_mobile_v2.0_det_infer/inference.pdiparams --optimize_out=./ch_ppocr_mobile_v2.0_det_opt --valid_targets=arm --optimize_out_type=naive_buffer
-# 转换V2.0识别模型
+./opt --model_file=./ch_ppocr_mobile_v2.0_det_slim_infer/inference.pdmodel --param_file=./ch_ppocr_mobile_v2.0_det_slim_infer/inference.pdiparams --optimize_out=./ch_ppocr_mobile_v2.0_det_slim_opt --valid_targets=arm --optimize_out_type=naive_buffer
 # Convert V2.0 recognition model
-./opt --model_file=./ch_ppocr_mobile_v2.0_rec_infer/inference.pdmodel --param_file=./ch_ppocr_mobile_v2.0_rec_infer/inference.pdiparams --optimize_out=./ch_ppocr_mobile_v2.0_rec_opt --valid_targets=arm --optimize_out_type=naive_buffer
+./opt --model_file=./ch_ppocr_mobile_v2.0_rec_slim_infer/inference.pdmodel --param_file=./ch_ppocr_mobile_v2.0_rec_slim_infer/inference.pdiparams --optimize_out=./ch_ppocr_mobile_v2.0_rec_slim_opt --valid_targets=arm --optimize_out_type=naive_buffer
 # Convert V2.0 angle classifier model
-./opt --model_file=./ch_ppocr_mobile_v2.0_cls_infer/inference.pdmodel --param_file=./ch_ppocr_mobile_v2.0_cls_infer/inference.pdiparams --optimize_out=./ch_ppocr_mobile_v2.0_cls_opt --valid_targets=arm --optimize_out_type=naive_buffer
+./opt --model_file=./ch_ppocr_mobile_v2.0_cls_slim_infer/inference.pdmodel --param_file=./ch_ppocr_mobile_v2.0_cls_slim_infer/inference.pdiparams --optimize_out=./ch_ppocr_mobile_v2.0_cls_slim_opt --valid_targets=arm --optimize_out_type=naive_buffer
 ```
 
-After the conversion is successful, there will be more files ending with `.nb` in the current directory, which is the successfully converted model file.
+After the conversion is successful, there will be additional files ending with `.nb` in the inference model directory, which are the successfully converted model files.
 
 ### 2.2 Run optimized model on Phone
 
@@ -194,9 +194,9 @@ The structure of the OCR demo is as follows after the above command is executed:
 ```
 demo/cxx/ocr/
 |-- debug/
-| |--ch_ppocr_mobile_v2.0_det_opt.nb Detection model
-| |--ch_ppocr_mobile_v2.0_rec_opt.nb Recognition model
-| |--ch_ppocr_mobile_v2.0_cls_opt.nb Text direction classification model
+| |--ch_ppocr_mobile_v2.0_det_slim_opt.nb Detection model
+| |--ch_ppocr_mobile_v2.0_rec_slim_opt.nb Recognition model
+| |--ch_ppocr_mobile_v2.0_cls_slim_opt.nb Text direction classification model
 | |--11.jpg Image for OCR
 | |--ppocr_keys_v1.txt Dictionary file
 | |--libpaddle_light_api_shared.so C++ .so file
@@ -238,8 +238,6 @@ After the above steps are completed, you can use adb to push the file to the pho
 
 ```
 # Execute the compilation and get the executable file ocr_db_crnn
- # The use of ocr_db_crnn is:
- # ./ocr_db_crnn Detection model file Orientation classifier model file Recognition model file Test image path Dictionary file path
 make -j
 # Move the compiled executable file to the debug folder
 mv ocr_db_crnn ./debug/
@@ -248,6 +246,8 @@ After the above steps are completed, you can use adb to push the file to the pho
 adb shell
 cd /data/local/tmp/debug
 export LD_LIBRARY_PATH=${PWD}:$LD_LIBRARY_PATH
+ # The use of ocr_db_crnn is:
+ # ./ocr_db_crnn Detection model file Recognition model file Orientation classifier model file Test image path Dictionary file path
 ./ocr_db_crnn ch_ppocr_mobile_v2.0_det_opt.nb ch_ppocr_mobile_v2.0_rec_opt.nb ch_ppocr_mobile_v2.0_cls_opt.nb ./11.jpg ppocr_keys_v1.txt
 ```
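To sanity-check the slim workflow this patch documents, the snippet below strings together the commands it adds to the English readme. It is only a sketch: it assumes the `opt` tool has already been built from the Paddle-Lite release/v2.9 branch and sits in the current directory, and that the demo's `debug` folder (test image, dictionary, and shared libraries) has already been pushed to `/data/local/tmp/debug` on a connected ARM Android device, as described earlier in the readme.

```bash
# Fetch and unpack the slim inference models referenced by this patch
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/slim/ch_ppocr_mobile_v2.0_det_slim_infer.tar && tar xf ch_ppocr_mobile_v2.0_det_slim_infer.tar
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/slim/ch_ppocr_mobile_v2.0_rec_slim_infer.tar && tar xf ch_ppocr_mobile_v2.0_rec_slim_infer.tar
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/slim/ch_ppocr_mobile_v2.0_cls_slim_infer.tar && tar xf ch_ppocr_mobile_v2.0_cls_slim_infer.tar

# Convert each inference model to a Paddle-Lite naive_buffer (.nb) file for ARM targets
for m in det rec cls; do
  ./opt --model_file=./ch_ppocr_mobile_v2.0_${m}_slim_infer/inference.pdmodel \
        --param_file=./ch_ppocr_mobile_v2.0_${m}_slim_infer/inference.pdiparams \
        --optimize_out=./ch_ppocr_mobile_v2.0_${m}_slim_opt \
        --valid_targets=arm --optimize_out_type=naive_buffer
done

# Run the demo on the device (debug folder assumed to be pushed already);
# the argument order follows the invocation shown in the readme: det, rec, cls, image, dictionary
adb shell "cd /data/local/tmp/debug && export LD_LIBRARY_PATH=\$PWD:\$LD_LIBRARY_PATH && \
  ./ocr_db_crnn ch_ppocr_mobile_v2.0_det_slim_opt.nb ch_ppocr_mobile_v2.0_rec_slim_opt.nb ch_ppocr_mobile_v2.0_cls_slim_opt.nb ./11.jpg ppocr_keys_v1.txt"
```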