diff --git a/README.md b/README.md
index 867e3cbcbffe1a7981409d26ef58793acea28522..b7dec5f677e4d7bdd7ce5b2545b1ab048b41b9e8 100644
--- a/README.md
+++ b/README.md
@@ -53,7 +53,7 @@ Mobile DEMO experience (based on EasyEdge and Paddle-Lite, supports iOS and Andr
 | ------------------------------------------------------------ | ---------------------------- | ----------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
 | Chinese and English ultra-lightweight OCR model (8.1M) | ch_ppocr_mobile_v1.1_xx | Mobile & server | [inference model](https://paddleocr.bj.bcebos.com/20-09-22/mobile/det/ch_ppocr_mobile_v1.1_det_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/20-09-22/mobile/det/ch_ppocr_mobile_v1.1_det_train.tar) | [inference model](https://paddleocr.bj.bcebos.com/20-09-22/cls/ch_ppocr_mobile_v1.1_cls_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/20-09-22/cls/ch_ppocr_mobile_v1.1_cls_train.tar) | [inference model](https://paddleocr.bj.bcebos.com/20-09-22/mobile/rec/ch_ppocr_mobile_v1.1_rec_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/20-09-22/mobile/rec/ch_ppocr_mobile_v1.1_rec_pre.tar) |
 | Chinese and English general OCR model (155.1M) | ch_ppocr_server_v1.1_xx | Server | [inference model](https://paddleocr.bj.bcebos.com/20-09-22/server/det/ch_ppocr_server_v1.1_det_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/20-09-22/server/det/ch_ppocr_server_v1.1_det_train.tar) | [inference model](https://paddleocr.bj.bcebos.com/20-09-22/cls/ch_ppocr_mobile_v1.1_cls_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/20-09-22/cls/ch_ppocr_mobile_v1.1_cls_train.tar) | [inference model](https://paddleocr.bj.bcebos.com/20-09-22/server/rec/ch_ppocr_server_v1.1_rec_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/20-09-22/server/rec/ch_ppocr_server_v1.1_rec_pre.tar) |
-| Chinese and English ultra-lightweight compressed OCR model (3.5M) | ch_ppocr_mobile_slim_v1.1_xx | Mobile | [inference model](https://paddleocr.bj.bcebos.com/20-09-22/mobile-slim/det/ch_ppocr_mobile_v1.1_det_prune_infer.tar) / [slim model](https://paddleocr.bj.bcebos.com/20-09-22/mobile-slim/det/ch_ppocr_mobile_v1.1_det_prune_opt.nb) | [inference model](https://paddleocr.bj.bcebos.com/20-09-22/cls/ch_ppocr_mobile_v1.1_cls_quant_infer.tar) / [slim model](https://paddleocr.bj.bcebos.com/20-09-22/cls/ch_ppocr_mobile_cls_quant_opt.nb) | [inference model](https://paddleocr.bj.bcebos.com/20-09-22/mobile-slim/rec/ch_ppocr_mobile_v1.1_rec_quant_infer.tar) / [slim model](https://paddleocr.bj.bcebos.com/20-09-22/mobile-slim/rec/ch_ppocr_mobile_v1.1_rec_quant_opt.nb) |
+| Chinese and English ultra-lightweight compressed OCR model (3.5M) | ch_ppocr_mobile_slim_v1.1_xx | Mobile | [inference model](https://paddleocr.bj.bcebos.com/20-09-22/mobile-slim/det/ch_ppocr_mobile_v1.1_det_prune_infer.tar) / [slim model](https://paddleocr.bj.bcebos.com/20-09-22/mobile-slim/det/ch_ppocr_mobile_v1.1_det_prune_opt.nb) | [inference model](https://paddleocr.bj.bcebos.com/20-09-22/cls/ch_ppocr_mobile_v1.1_cls_quant_infer.tar) / [slim model](https://paddleocr.bj.bcebos.com/20-09-22/cls/ch_ppocr_mobile_v1.1_cls_quant_opt.nb) | [inference model](https://paddleocr.bj.bcebos.com/20-09-22/mobile-slim/rec/ch_ppocr_mobile_v1.1_rec_quant_infer.tar) / [slim model](https://paddleocr.bj.bcebos.com/20-09-22/mobile-slim/rec/ch_ppocr_mobile_v1.1_rec_quant_opt.nb) |
 
 For more model downloads (including multiple languages), please refer to [PP-OCR v1.1 series model downloads](./doc/doc_en/models_list_en.md)
diff --git a/README_ch.md b/README_ch.md
index 58b1acb982d2b880a949a4592026f6923a6bd343..b04caaa623fec909c0ad796cfe793e66d6c00ecd 100644
--- a/README_ch.md
+++ b/README_ch.md
@@ -53,7 +53,7 @@ PaddleOCR aims to create a rich, leading, and practical OCR toolkit that helps
 | ------------ | --------------- | ----------------|---- | ---------- | -------- |
 | Chinese and English ultra-lightweight OCR model (8.1M) | ch_ppocr_mobile_v1.1_xx | Mobile & server | [inference model](https://paddleocr.bj.bcebos.com/20-09-22/mobile/det/ch_ppocr_mobile_v1.1_det_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/20-09-22/mobile/det/ch_ppocr_mobile_v1.1_det_train.tar) | [inference model](https://paddleocr.bj.bcebos.com/20-09-22/cls/ch_ppocr_mobile_v1.1_cls_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/20-09-22/cls/ch_ppocr_mobile_v1.1_cls_train.tar) | [inference model](https://paddleocr.bj.bcebos.com/20-09-22/mobile/rec/ch_ppocr_mobile_v1.1_rec_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/20-09-22/mobile/rec/ch_ppocr_mobile_v1.1_rec_pre.tar) |
 | Chinese and English general OCR model (155.1M) | ch_ppocr_server_v1.1_xx | Server | [inference model](https://paddleocr.bj.bcebos.com/20-09-22/server/det/ch_ppocr_server_v1.1_det_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/20-09-22/server/det/ch_ppocr_server_v1.1_det_train.tar) | [inference model](https://paddleocr.bj.bcebos.com/20-09-22/cls/ch_ppocr_mobile_v1.1_cls_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/20-09-22/cls/ch_ppocr_mobile_v1.1_cls_train.tar) | [inference model](https://paddleocr.bj.bcebos.com/20-09-22/server/rec/ch_ppocr_server_v1.1_rec_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/20-09-22/server/rec/ch_ppocr_server_v1.1_rec_pre.tar) |
-| Chinese and English ultra-lightweight compressed OCR model (3.5M) | ch_ppocr_mobile_slim_v1.1_xx | Mobile | [inference model](https://paddleocr.bj.bcebos.com/20-09-22/mobile-slim/det/ch_ppocr_mobile_v1.1_det_prune_infer.tar) / [slim model](https://paddleocr.bj.bcebos.com/20-09-22/mobile-slim/det/ch_ppocr_mobile_v1.1_det_prune_opt.nb) | [inference model](https://paddleocr.bj.bcebos.com/20-09-22/cls/ch_ppocr_mobile_v1.1_cls_quant_infer.tar) / [slim model](https://paddleocr.bj.bcebos.com/20-09-22/cls/ch_ppocr_mobile_cls_quant_opt.nb) | [inference model](https://paddleocr.bj.bcebos.com/20-09-22/mobile-slim/rec/ch_ppocr_mobile_v1.1_rec_quant_infer.tar) / [slim model](https://paddleocr.bj.bcebos.com/20-09-22/mobile-slim/rec/ch_ppocr_mobile_v1.1_rec_quant_opt.nb) |
+| Chinese and English ultra-lightweight compressed OCR model (3.5M) | ch_ppocr_mobile_slim_v1.1_xx | Mobile | [inference model](https://paddleocr.bj.bcebos.com/20-09-22/mobile-slim/det/ch_ppocr_mobile_v1.1_det_prune_infer.tar) / [slim model](https://paddleocr.bj.bcebos.com/20-09-22/mobile-slim/det/ch_ppocr_mobile_v1.1_det_prune_opt.nb) | [inference model](https://paddleocr.bj.bcebos.com/20-09-22/cls/ch_ppocr_mobile_v1.1_cls_quant_infer.tar) / [slim model](https://paddleocr.bj.bcebos.com/20-09-22/cls/ch_ppocr_mobile_v1.1_cls_quant_opt.nb) | [inference model](https://paddleocr.bj.bcebos.com/20-09-22/mobile-slim/rec/ch_ppocr_mobile_v1.1_rec_quant_infer.tar) / [slim model](https://paddleocr.bj.bcebos.com/20-09-22/mobile-slim/rec/ch_ppocr_mobile_v1.1_rec_quant_opt.nb) |
 
 For more model downloads (including multiple languages), please refer to [PP-OCR v1.1 series model downloads](./doc/doc_ch/models_list.md)
diff --git a/deploy/lite/config.txt b/deploy/lite/config.txt
index f08f8e49dfe45ab0f4e47c994d471e0515771df6..670b2ff00efd7bd289c34c460df4582dd72a2df6 100644
--- a/deploy/lite/config.txt
+++ b/deploy/lite/config.txt
@@ -1,4 +1,5 @@
 max_side_len 960
 det_db_thresh 0.3
 det_db_box_thresh 0.5
-det_db_unclip_ratio 1.6
\ No newline at end of file
+det_db_unclip_ratio 1.6
+use_direction_classify 0
\ No newline at end of file
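The `config.txt` change above adds a `use_direction_classify` switch that the demo later reads via `LoadConfigTxt` (see the `ocr_db_crnn.cc` hunks below, where `Config["use_direction_classify"]` is looked up). Purely as a rough illustration, not the repository's `LoadConfigTxt` implementation, the sketch below shows how a whitespace-separated key/value file like this one can be parsed into a map and the new flag read back; the helper name `ReadKeyValueConfig` is invented for this example.

```cpp
// Minimal sketch of reading a space-separated key/value config such as
// deploy/lite/config.txt. This is NOT the demo's LoadConfigTxt; it only
// illustrates how a line like "use_direction_classify 0" can be consumed.
#include <fstream>
#include <iostream>
#include <map>
#include <sstream>
#include <string>

std::map<std::string, double> ReadKeyValueConfig(const std::string &path) {
  std::map<std::string, double> config;
  std::ifstream fin(path);
  std::string line;
  while (std::getline(fin, line)) {
    std::istringstream iss(line);
    std::string key;
    double value = 0.0;
    if (iss >> key >> value) {  // skip blank or malformed lines
      config[key] = value;
    }
  }
  return config;
}

int main() {
  auto config = ReadKeyValueConfig("./config.txt");
  // A missing key defaults to 0 here, i.e. the classifier stays disabled.
  int use_direction_classify = static_cast<int>(config["use_direction_classify"]);
  std::cout << "use_direction_classify = " << use_direction_classify << std::endl;
  return 0;
}
```

With the shipped default of `use_direction_classify 0`, a lookup like this yields 0, so the direction classifier stays off unless the user switches it to 1 as the readme examples do.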
diff --git a/deploy/lite/ocr_db_crnn.cc b/deploy/lite/ocr_db_crnn.cc
index 2786d15ad0581bf10f9e8de4b06f94cc5d4027aa..f3c43ebde93678e14609f0ddc90207b0a5ab965c 100644
--- a/deploy/lite/ocr_db_crnn.cc
+++ b/deploy/lite/ocr_db_crnn.cc
@@ -114,6 +114,7 @@ cv::Mat RunClsModel(cv::Mat img, std::shared_ptr<PaddlePredictor> predictor_cls,
   cv::Mat srcimg;
   img.copyTo(srcimg);
   cv::Mat crop_img;
+  img.copyTo(crop_img);
   cv::Mat resize_img;
 
   int index = 0;
@@ -154,7 +155,8 @@ void RunRecModel(std::vector<std::vector<std::vector<int>>> boxes, cv::Mat img,
                  std::vector<std::string> &rec_text,
                  std::vector<float> &rec_text_score,
                  std::vector<std::string> charactor_dict,
-                 std::shared_ptr<PaddlePredictor> predictor_cls) {
+                 std::shared_ptr<PaddlePredictor> predictor_cls,
+                 int use_direction_classify) {
   std::vector<float> mean = {0.5f, 0.5f, 0.5f};
   std::vector<float> scale = {1 / 0.5f, 1 / 0.5f, 1 / 0.5f};
 
@@ -166,7 +168,9 @@ void RunRecModel(std::vector<std::vector<std::vector<int>>> boxes, cv::Mat img,
   int index = 0;
   for (int i = boxes.size() - 1; i >= 0; i--) {
     crop_img = GetRotateCropImage(srcimg, boxes[i]);
-    crop_img = RunClsModel(crop_img, predictor_cls);
+    if (use_direction_classify >= 1) {
+      crop_img = RunClsModel(crop_img, predictor_cls);
+    }
     float wh_ratio =
         static_cast<float>(crop_img.cols) / static_cast<float>(crop_img.rows);
 
@@ -378,6 +382,7 @@ int main(int argc, char **argv) {
   //// load config from txt file
   auto Config = LoadConfigTxt("./config.txt");
+  int use_direction_classify = int(Config["use_direction_classify"]);
 
   auto start = std::chrono::system_clock::now();
 
@@ -393,8 +398,9 @@ int main(int argc, char **argv) {
   std::vector<std::string> rec_text;
   std::vector<float> rec_text_score;
+
   RunRecModel(boxes, srcimg, rec_predictor, rec_text, rec_text_score,
-              charactor_dict, cls_predictor);
+              charactor_dict, cls_predictor, use_direction_classify);
 
   auto end = std::chrono::system_clock::now();
   auto duration =
diff --git a/deploy/lite/readme.md b/deploy/lite/readme.md
index 93f38e057258edad0ac5dd3bbbb7b3789011cefb..ab4a0024b96db5031d5193149cd3642512af0766 100644
--- a/deploy/lite/readme.md
+++ b/deploy/lite/readme.md
@@ -79,7 +79,7 @@ inference_lite_lib.android.armv8/
 Paddle-Lite provides a variety of strategies to automatically optimize the original model, including quantization, subgraph fusion, hybrid scheduling, and kernel selection. With Paddle-Lite's opt tool the inference model can be optimized automatically;
 the optimized model is more lightweight and runs faster.
 
-The following table provides the optimized ultra-lightweight Chinese models:
+The following table provides a series of mobile models:
 
 |Model version|Description|Model size|Detection model|Text direction classifier|Recognition model|Paddle-Lite version|
 |-|-|-|-|-|-|-|
@@ -141,11 +141,9 @@ wget https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn_infer.tar && tar
 ./opt --model_file=./ch_rec_mv3_crnn/model --param_file=./ch_rec_mv3_crnn/params --optimize_out_type=naive_buffer --optimize_out=./ch_rec_mv3_crnn_opt --valid_targets=arm
 ```
-# Convert the V1.1 detection model
-
 After a successful conversion, files ending in `.nb` appear in the current directory; these are the converted model files.
 
-Note: when deploying with Paddle-Lite, the model optimized by the opt tool must be used. The input model for opt conversion is the inference model saved by Paddle.
+Note: when deploying with Paddle-Lite, the model optimized by the opt tool must be used. The input model of the opt tool is the inference model saved by Paddle.
 
 ### 2.2 Run and debug on the phone
@@ -204,7 +202,7 @@ demo/cxx/ocr/
 |  |--ch_ppocr_mobile_v1.1_rec_quant_opt.nb  optimized recognition model file
 |  |--ch_ppocr_mobile_cls_quant_opt.nb  optimized text direction classifier model file
 |  |--11.jpg  test image
-|  |--ppocr_keys_v1.txt  dictionary file
+|  |--ppocr_keys_v1.txt  Chinese dictionary file
 |  |--libpaddle_light_api_shared.so  C++ prediction library file
 |  |--config.txt  DB-CRNN hyperparameter configuration
 |-- config.txt  DB-CRNN hyperparameter configuration
@@ -214,7 +212,27 @@ demo/cxx/ocr/
 |-- db_post_process.h
 |-- Makefile  build file
 |-- ocr_db_crnn.cc  C++ prediction source file
+```
+#### Note:
+1. ppocr_keys_v1.txt is the Chinese dictionary file. If the nb model you use is an English/numeric model or a model for another language, replace it with the dictionary of the corresponding language.
+PaddleOCR provides a variety of dictionaries under ppocr/utils/, including:
+```
+french_dict.txt   # French dictionary
+german_dict.txt   # German dictionary
+ic15_dict.txt     # English dictionary
+japan_dict.txt    # Japanese dictionary
+korean_dict.txt   # Korean dictionary
+ppocr_keys_v1.txt # Chinese dictionary
+```
+
+2. `config.txt` contains the hyperparameters of the detector and classifier, as shown below:
+```
+max_side_len 960         # if the width or height of the input image exceeds 960, it is scaled proportionally so that its longest side is 960
+det_db_thresh 0.3        # used to filter the binarized map predicted by DB; values in the 0-0.3 range have little visible effect on the result
+det_db_box_thresh 0.5    # threshold for filtering boxes in DB post-processing; reduce it as appropriate if boxes are missed
+det_db_unclip_ratio 1.6  # controls how tightly the text box wraps the text; the smaller the value, the closer the box is to the text
+use_direction_classify 1 # whether to use the direction classifier: 0 means no, 1 means yes
 ```
 
 5. Start debugging
 
@@ -224,7 +242,7 @@ demo/cxx/ocr/
 ```
 # Run the build to get the executable ocr_db_crnn
 # The ocr_db_crnn executable is used as follows:
-# ./ocr_db_crnn detection_model_file recognition_model_file test_image_path
+# ./ocr_db_crnn detection_model_file direction_classifier_model_file recognition_model_file test_image_path dictionary_file_path
 make -j
 # Move the built executable into the debug folder
 mv ocr_db_crnn ./debug/
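The note above says the dictionary shipped with the demo must match the language of the nb recognition model. As an illustration only of what such a dictionary is, a plain text file whose lines map recognition output indices to labels, the sketch below loads one into the kind of `std::vector<std::string>` that `RunRecModel` receives as `charactor_dict`. The helper name `ReadDictionary` is invented here and is not the demo's own loader, and the one-label-per-line layout is an assumption about the dictionary files rather than something stated in this diff.

```cpp
// Hypothetical helper (not the demo's own code): loads a PaddleOCR-style
// dictionary such as ppocr_keys_v1.txt, assuming one label per line.
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

std::vector<std::string> ReadDictionary(const std::string &path) {
  std::vector<std::string> labels;
  std::ifstream fin(path);
  std::string line;
  while (std::getline(fin, line)) {
    labels.push_back(line);  // each line holds one character / label
  }
  return labels;
}

int main() {
  auto charactor_dict = ReadDictionary("./ppocr_keys_v1.txt");
  std::cout << "loaded " << charactor_dict.size() << " labels" << std::endl;
  return 0;
}
```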
diff --git a/deploy/lite/readme_en.md b/deploy/lite/readme_en.md
index a62f0478843fb28b21404f282d981daf7a0c294f..7db0f1f9783c6c7bb1e1ff909992d73728afc3fe 100644
--- a/deploy/lite/readme_en.md
+++ b/deploy/lite/readme_en.md
@@ -178,6 +178,28 @@ demo/cxx/ocr/
 ```
 
+#### Note:
+1. ppocr_keys_v1.txt is a Chinese dictionary file.
+If the nb model is used for English recognition or recognition of another language, the dictionary file should be replaced with the dictionary of the corresponding language.
+PaddleOCR provides a variety of dictionaries under ppocr/utils/, including:
+```
+french_dict.txt   # French
+german_dict.txt   # German
+ic15_dict.txt     # English
+japan_dict.txt    # Japanese
+korean_dict.txt   # Korean
+ppocr_keys_v1.txt # Chinese
+```
+
+2. `config.txt` contains the hyperparameters of the detector and classifier, as shown below:
+```
+max_side_len 960         # if the width or height of the input image exceeds 960, it is scaled proportionally so that its longest side is 960
+det_db_thresh 0.3        # used to filter the binarized map predicted by DB; values in the 0-0.3 range have no obvious effect on the result
+det_db_box_thresh 0.5    # DB post-processing threshold for filtering boxes; reduce it as appropriate if boxes are missed
+det_db_unclip_ratio 1.6  # indicates the compactness of the text box; the smaller the value, the closer the box is to the text
+use_direction_classify 1 # whether to use the direction classifier: 0 means not to use, 1 means to use
+```
+
 5. Run Model on phone
 
 ```
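For context on the `max_side_len` entry documented above: the comment describes a proportional resize that caps the longest image side. The snippet below is only an illustrative sketch of that rule, not the demo's own resize routine (which may also adjust dimensions to suit the detector); the function name `ResizeByMaxSideLen` is invented for this example, and OpenCV is assumed to be available.

```cpp
#include <algorithm>
#include <opencv2/opencv.hpp>

// Illustrative sketch of the max_side_len rule from config.txt: when the
// longer side of the input exceeds max_side_len, scale the image
// proportionally so that its longest side equals max_side_len.
cv::Mat ResizeByMaxSideLen(const cv::Mat &img, int max_side_len = 960) {
  int longest = std::max(img.cols, img.rows);
  if (longest <= max_side_len) {
    return img.clone();  // already small enough, keep the original resolution
  }
  float ratio = static_cast<float>(max_side_len) / static_cast<float>(longest);
  cv::Mat resized;
  cv::resize(img, resized, cv::Size(), ratio, ratio);
  return resized;
}
```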