From bee2b15ea8de1af90c5d2280fda54b016edd67f0 Mon Sep 17 00:00:00 2001
From: LDOUBLEV
Date: Fri, 17 Jul 2020 16:09:26 +0800
Subject: [PATCH] update lite-demo readme, add prepare.sh

---
 deploy/lite/prepare.sh   |  9 +++++++++
 deploy/lite/readme.md    | 16 ++++++++++------
 deploy/lite/readme_en.md | 40 +++++++++++++++++++++++++++++++++++-----
 3 files changed, 54 insertions(+), 11 deletions(-)
 create mode 100644 deploy/lite/prepare.sh

diff --git a/deploy/lite/prepare.sh b/deploy/lite/prepare.sh
new file mode 100644
index 00000000..daaa30c4
--- /dev/null
+++ b/deploy/lite/prepare.sh
@@ -0,0 +1,9 @@
+#!/bin/bash
+
+mkdir -p $1/demo/cxx/ocr/debug/
+cp ../../ppocr/utils/ppocr_keys_v1.txt $1/demo/cxx/ocr/debug/
+cp -r ./* $1/demo/cxx/ocr/
+cp ./config.txt $1/demo/cxx/ocr/debug/
+cp ../../doc/imgs/11.jpg $1/demo/cxx/ocr/debug/
+
+echo "Prepare Done"
diff --git a/deploy/lite/readme.md b/deploy/lite/readme.md
index bfd15e4b..378a3eec 100644
--- a/deploy/lite/readme.md
+++ b/deploy/lite/readme.md
@@ -168,17 +168,21 @@ wget https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn_infer.tar && tar
 ```

 4. Prepare the optimized models, prediction library files, test image, and dictionary file.

-    Create a new `ocr/` folder under `demo/cxx/` of the prediction library `inference_lite_lib.android.armv8`,
-    place all files under `PaddleOCR/deploy/lite/` in the PaddleOCR repo, except `readme.md`, into the new ocr folder, then create a `debug` folder under `ocr`
-    and copy the C++ prediction library .so file into the debug folder.
-    ```
+    ```
+    git clone https://github.com/PaddlePaddle/PaddleOCR.git
+    cd PaddleOCR/deploy/lite/
+    # Run prepare.sh to prepare the prediction library files, test image, and dictionary file, and place them under demo/cxx/ocr of the prediction library
+    sh prepare.sh /{lite prediction library path}/inference_lite_lib.android.armv8
+    # Enter the working directory of the OCR demo
+    cd /{lite prediction library path}/inference_lite_lib.android.armv8/
     cd demo/cxx/ocr/
     # Copy the C++ dynamic prediction library (.so) into the debug folder
     cp ../../../cxx/lib/libpaddle_light_api_shared.so ./debug/
-    ```
+    ```

     Prepare the test image: taking `PaddleOCR/doc/imgs/11.jpg` as an example, copy it to the `demo/cxx/ocr/debug/` folder.
-    Prepare the dictionary file: the dictionary for the Chinese ultra-lightweight model is `PaddleOCR/ppocr/utils/ppocr_keys_v1.txt`; copy it to the `demo/cxx/ocr/debug/` folder.
+    Prepare the model files optimized by the lite opt tool, `ch_det_mv3_db_opt.nb` and `ch_rec_mv3_crnn_opt.nb`, and place them under the `demo/cxx/ocr/debug/` folder.

 After the above steps are completed, the ocr folder will contain the following files:

diff --git a/deploy/lite/readme_en.md b/deploy/lite/readme_en.md
index b80c95e8..a8c9f604 100644
--- a/deploy/lite/readme_en.md
+++ b/deploy/lite/readme_en.md
@@ -125,19 +125,49 @@ When the above code command is completed, there will be two more files `ch_det_m
 If there is `device` output, it means the installation was successful.

-4. Prepare optimized models, prediction library files, test images and dictionary files used. Create a new `ocr/` folder under the prediction library `inference_lite_lib.android.armv8/demo/cxx/`, and place all the files under `PaddleOCR/deploy/lite/` in the PaddleOCR repo except `readme.md` under the newly created ocr folder. Create a new debug folder under the ocr folder, and copy the C++ prediction library so file to the debug folder
+4. Prepare the optimized models, prediction library files, test image, and dictionary file.
 ```
+ git clone https://github.com/PaddlePaddle/PaddleOCR.git
+ cd PaddleOCR/deploy/lite/
+ # run prepare.sh to copy the prediction library files, test image, and dictionary file into demo/cxx/ocr of the prediction library
+ sh prepare.sh /{lite prediction library path}/inference_lite_lib.android.armv8
-cd inference_lite_lib.android.armv8/demo/cxx/ocr/
-cp ../../../cxx/lib/libpaddle_light_api_shared.so ./debug/
+
+ # enter the working directory of the OCR demo
+ cd /{lite prediction library path}/inference_lite_lib.android.armv8/
+ cd demo/cxx/ocr/
+ # copy the paddle-lite C++ .so file to the debug/ directory
+ cp ../../../cxx/lib/libpaddle_light_api_shared.so ./debug/
 ```

-Prepare the test image, taking `PaddleOCR/doc/imgs/11.jpg` as an example, copy the image file to the `demo/cxx/ocr/debug/` folder. The dictionary file for the Chinese super lightweight model is `PaddleOCR/ppocr/utils/ppocr_keys_v1.txt`, and copy it to the `demo/cxx/ocr/debug/` folder.
+Prepare the test image: taking `PaddleOCR/doc/imgs/11.jpg` as an example, copy it to the `demo/cxx/ocr/debug/` folder.
+Prepare the model files optimized by the lite opt tool, `ch_det_mv3_db_opt.nb` and `ch_rec_mv3_crnn_opt.nb`,
+and place them under the `demo/cxx/ocr/debug/` folder.

-After the execution is completed, the following file formats will be in the ocr folder:
+The structure of the OCR demo is as follows after the above commands are executed:
+```
+demo/cxx/ocr/
+|-- debug/
+|   |--ch_det_mv3_db_opt.nb           Detection model
+|   |--ch_rec_mv3_crnn_opt.nb         Recognition model
+|   |--11.jpg                         Image for OCR
+|   |--ppocr_keys_v1.txt              Dictionary file
+|   |--libpaddle_light_api_shared.so  C++ .so file
+|   |--config.txt                     Config file
+|-- config.txt
+|-- crnn_process.cc
+|-- crnn_process.h
+|-- db_post_process.cc
+|-- db_post_process.h
+|-- Makefile
+|-- ocr_db_crnn.cc
+
+```

 5. Run Model on phone
--
GitLab
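For reviewers, this is the end-to-end sequence the updated readmes describe, shown as one hedged walk-through. The unpack location `~/inference_lite_lib.android.armv8` stands in for `/{lite prediction library path}` and is illustrative only; it is not fixed by this patch.

```bash
# Illustrative walk-through of the readme steps; the unpack location of the
# Paddle-Lite prediction library (~/inference_lite_lib.android.armv8) is an
# example path, not something mandated by the patch.
git clone https://github.com/PaddlePaddle/PaddleOCR.git
cd PaddleOCR/deploy/lite/

# prepare.sh copies the demo sources, config.txt, the dictionary file, and the
# test image into demo/cxx/ocr/ (and its debug/ subfolder) of the library.
sh prepare.sh ~/inference_lite_lib.android.armv8

# Enter the OCR demo and place the shared prediction library next to the assets.
cd ~/inference_lite_lib.android.armv8/demo/cxx/ocr/
cp ../../../cxx/lib/libpaddle_light_api_shared.so ./debug/

# The optimized models ch_det_mv3_db_opt.nb and ch_rec_mv3_crnn_opt.nb still
# have to be copied into debug/ by hand; list the folder to check the layout.
ls debug/
```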
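The committed `prepare.sh` assumes it is run from `PaddleOCR/deploy/lite/` with exactly one argument. A minimal defensive variant, offered only as a sketch and not part of this patch, would perform the same copy steps but fail loudly on a missing or space-containing library path:

```bash
#!/bin/bash
# Hypothetical hardened variant of prepare.sh (not the committed script): the
# same copy steps, plus an argument check and quoted paths.
set -e

if [ $# -ne 1 ]; then
    echo "Usage: $0 /path/to/inference_lite_lib.android.armv8"
    exit 1
fi

LITE_DIR="$1"

# Stage the OCR demo sources and the runtime assets expected by ocr_db_crnn.
mkdir -p "${LITE_DIR}/demo/cxx/ocr/debug/"
cp ../../ppocr/utils/ppocr_keys_v1.txt "${LITE_DIR}/demo/cxx/ocr/debug/"
cp -r ./* "${LITE_DIR}/demo/cxx/ocr/"
cp ./config.txt "${LITE_DIR}/demo/cxx/ocr/debug/"
cp ../../doc/imgs/11.jpg "${LITE_DIR}/demo/cxx/ocr/debug/"

echo "Prepare Done"
```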