Commit bee2b15e authored by LDOUBLEV

update lite-demo readme, add prepare.sh

Parent: beec2c1e
#!/bin/bash
# prepare.sh: copy the OCR demo resources into a Paddle-Lite prediction library.
# $1 is the path to the inference_lite_lib.android.armv8 directory.
mkdir -p $1/demo/cxx/ocr/debug/
# dictionary file used by the Chinese recognition model
cp ../../ppocr/utils/ppocr_keys_v1.txt $1/demo/cxx/ocr/debug/
# demo sources (Makefile, ocr_db_crnn.cc, pre/post-processing code and config)
cp -r ./* $1/demo/cxx/ocr/
# runtime config and a test image for the debug folder
cp ./config.txt $1/demo/cxx/ocr/debug/
cp ../../doc/imgs/11.jpg $1/demo/cxx/ocr/debug/
echo "Prepare Done"
@@ -168,17 +168,21 @@ wget https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn_infer.tar && tar
 ```
 4. Prepare the optimized models, the prediction library files, the test image and the dictionary file.
-Create a new `ocr/` folder under `inference_lite_lib.android.armv8/demo/cxx/` in the prediction library, and place all files under `PaddleOCR/deploy/lite/` in the PaddleOCR repo except `readme.md` into the new ocr folder. Create a `debug` folder under `ocr`,
-and copy the C++ prediction library .so file into the debug folder.
 ```
+git clone https://github.com/PaddlePaddle/PaddleOCR.git
+cd PaddleOCR/deploy/lite/
+# Run prepare.sh to prepare the prediction library files, the test image and the dictionary file, and place them in demo/cxx/ocr inside the prediction library
+sh prepare.sh /{lite prediction library path}/inference_lite_lib.android.armv8
 # Enter the working directory of the OCR demo
+cd /{lite prediction library path}/inference_lite_lib.android.armv8/
 cd demo/cxx/ocr/
 # Copy the C++ prediction dynamic library .so file to the debug folder
 cp ../../../cxx/lib/libpaddle_light_api_shared.so ./debug/
 ```
 Prepare the test image: taking `PaddleOCR/doc/imgs/11.jpg` as an example, copy it to the `demo/cxx/ocr/debug/` folder.
-Prepare the dictionary file: the dictionary for the Chinese ultra-lightweight model is `PaddleOCR/ppocr/utils/ppocr_keys_v1.txt`; copy it to the `demo/cxx/ocr/debug/` folder.
+Prepare the model files optimized by the lite opt tool, `ch_det_mv3_db_opt.nb, ch_rec_mv3_crnn_opt.nb`, and place them in the `demo/cxx/ocr/debug/` folder.
 After the execution is completed, the ocr folder will contain the following files:
......
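After running prepare.sh and copying the .so file as described above, it can help to verify the `debug/` folder before building. This is a hedged sketch; the expected file names are taken from prepare.sh and the directory listing later in this commit:

```
cd /path/to/inference_lite_lib.android.armv8/demo/cxx/ocr
for f in ch_det_mv3_db_opt.nb ch_rec_mv3_crnn_opt.nb 11.jpg \
         ppocr_keys_v1.txt libpaddle_light_api_shared.so config.txt; do
    [ -f "debug/$f" ] && echo "ok: $f" || echo "missing: $f"
done
```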
@@ -125,19 +125,49 @@ When the above code command is completed, there will be two more files `ch_det_m
 If there is `device` output, it means the installation was successful.
-4. Prepare optimized models, prediction library files, test images and dictionary files used. Create a new `ocr/` folder under the prediction library `inference_lite_lib.android.armv8/demo/cxx/`, place all the files under `PaddleOCR/deploy/lite/` in the PaddleOCR repo except `readme.md` into the newly created ocr folder, create a new debug folder under the ocr folder, and copy the C++ prediction library .so file to the debug folder.
+4. Prepare optimized models, prediction library files, test images and dictionary files used.
 ```
-cd inference_lite_lib.android.armv8/demo/cxx/ocr/
-cp ../../../cxx/lib/libpaddle_light_api_shared.so ./debug/
+git clone https://github.com/PaddlePaddle/PaddleOCR.git
+cd PaddleOCR/deploy/lite/
+# run prepare.sh
+sh prepare.sh /{lite prediction library path}/inference_lite_lib.android.armv8
+# enter the working directory of the OCR demo
+cd /{lite prediction library path}/inference_lite_lib.android.armv8/
+cd demo/cxx/ocr/
+# copy the paddle-lite C++ .so file to the debug/ directory
+cp ../../../cxx/lib/libpaddle_light_api_shared.so ./debug/
 ```
-Prepare the test image, taking `PaddleOCR/doc/imgs/11.jpg` as an example, copy the image file to the `demo/cxx/ocr/debug/` folder. The dictionary file for the Chinese super lightweight model is `PaddleOCR/ppocr/utils/ppocr_keys_v1.txt`; copy it to the `demo/cxx/ocr/debug/` folder.
+Prepare the test image, taking `PaddleOCR/doc/imgs/11.jpg` as an example, copy the image file to the `demo/cxx/ocr/debug/` folder.
+Prepare the model files optimized by the lite opt tool, `ch_det_mv3_db_opt.nb, ch_rec_mv3_crnn_opt.nb`, and place them under the `demo/cxx/ocr/debug/` folder.
-After the execution is completed, the following file formats will be in the ocr folder:
+The structure of the OCR demo is as follows after the above command is executed:
```
demo/cxx/ocr/
|-- debug/
| |--ch_det_mv3_db_opt.nb Detection model
| |--ch_rec_mv3_crnn_opt.nb Recognition model
| |--11.jpg image for OCR
| |--ppocr_keys_v1.txt Dictionary file
| |--libpaddle_light_api_shared.so C++ .so file
| |--config.txt Config file
|-- config.txt
|-- crnn_process.cc
|-- crnn_process.h
|-- db_post_process.cc
|-- db_post_process.h
|-- Makefile
|-- ocr_db_crnn.cc
```
5. Run Model on phone
......
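Step 5 is collapsed in this view. For orientation, here is a hedged sketch of how a demo laid out like the listing above is typically built and run on a connected Android device; the binary name is inferred from `ocr_db_crnn.cc` in the listing, while the NDK setup, device path and argument order are assumptions, not this readme's exact commands:

```
# build the demo; the Makefile in demo/cxx/ocr/ expects an Android NDK toolchain to be configured
cd /path/to/inference_lite_lib.android.armv8/demo/cxx/ocr
make -j
# stage the binary next to the models, dictionary, image and config, then run it on the device
mv ocr_db_crnn ./debug/
adb push debug /data/local/tmp/
adb shell 'cd /data/local/tmp/debug && \
  export LD_LIBRARY_PATH=/data/local/tmp/debug:$LD_LIBRARY_PATH && \
  ./ocr_db_crnn ch_det_mv3_db_opt.nb ch_rec_mv3_crnn_opt.nb ./11.jpg ppocr_keys_v1.txt'
```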