diff --git a/MANIFEST.in b/MANIFEST.in index cd1c9636d4d23cc4d0f745403ec8ca407d1cc1a8..1ca129b15787978e5e890498eb4d6609d8a4cba0 100644 --- a/MANIFEST.in +++ b/MANIFEST.in @@ -2,7 +2,7 @@ include LICENSE include README.md recursive-include ppocr/utils *.txt utility.py logging.py network.py -recursive-include ppocr/data/ *.py +recursive-include ppocr/data *.py recursive-include ppocr/postprocess *.py recursive-include tools/infer *.py -recursive-include ppocr/utils/e2e_utils/ *.py \ No newline at end of file +recursive-include ppocr/utils/e2e_utils *.py \ No newline at end of file diff --git a/deploy/cpp_infer/docs/windows_vs2019_build.md b/deploy/cpp_infer/docs/windows_vs2019_build.md index 0f243bf8f54b5cd50e9fa2faab29b064e694e45c..be3dd3833f57d5592f7074e8149cbe9fdbe7ade1 100644 --- a/deploy/cpp_infer/docs/windows_vs2019_build.md +++ b/deploy/cpp_infer/docs/windows_vs2019_build.md @@ -93,3 +93,5 @@ cd D:\projects\PaddleOCR\deploy\cpp_infer\out\build\x64-Release ### 注意 * 在Windows下的终端中执行文件exe时,可能会发生乱码的现象,此时需要在终端中输入`CHCP 65001`,将终端的编码方式由GBK编码(默认)改为UTF-8编码,更加具体的解释可以参考这篇博客:[https://blog.csdn.net/qq_35038153/article/details/78430359](https://blog.csdn.net/qq_35038153/article/details/78430359)。 + +* 编译时,如果报错`错误:C1083 无法打开包括文件:"dirent.h":No such file or directory`,可以参考该[文档](https://blog.csdn.net/Dora_blank/article/details/117740837#41_C1083_direnthNo_such_file_or_directory_54),新建`dirent.h`文件,并添加到`VC++`的包含目录中。 diff --git a/deploy/cpp_infer/readme.md b/deploy/cpp_infer/readme.md index 66be846516f1926f45bf6978fbb5b9b17a035cbd..30b8628517d605c74008378078aef3f03528e7cf 100644 --- a/deploy/cpp_infer/readme.md +++ b/deploy/cpp_infer/readme.md @@ -18,6 +18,7 @@ PaddleOCR模型部署。 * 首先需要从opencv官网上下载在Linux环境下源码编译的包,以opencv3.4.7为例,下载命令如下。 ``` +cd deploy/cpp_infer wget https://github.com/opencv/opencv/archive/3.4.7.tar.gz tar -xf 3.4.7.tar.gz ``` diff --git a/deploy/cpp_infer/readme_en.md b/deploy/cpp_infer/readme_en.md index 6c0a18db4f76d4e2971cea16130216434ff01d7b..b03187a7659a5f3bb7ca67970febe853dd201fa1 100644 --- a/deploy/cpp_infer/readme_en.md +++ b/deploy/cpp_infer/readme_en.md @@ -18,6 +18,7 @@ PaddleOCR model deployment. * First of all, you need to download the source code compiled package in the Linux environment from the opencv official website. Taking opencv3.4.7 as an example, the download command is as follows. ``` +cd deploy/cpp_infer wget https://github.com/opencv/opencv/archive/3.4.7.tar.gz tar -xf 3.4.7.tar.gz ``` diff --git a/deploy/hubserving/readme.md b/deploy/hubserving/readme.md index 9351fa8d4fb8ee507d8e4f838397ecb615c20612..11b843fec1052c3ad401ca0b7d1cb602401af8f8 100755 --- a/deploy/hubserving/readme.md +++ b/deploy/hubserving/readme.md @@ -29,6 +29,7 @@ deploy/hubserving/ocr_system/ ### 1. 准备环境 ```shell # 安装paddlehub +# paddlehub 需要 python>3.6.2 pip3 install paddlehub==2.1.0 --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple ``` diff --git a/deploy/hubserving/readme_en.md b/deploy/hubserving/readme_en.md index 98ffcad63c822b4b03e58ae088cafd584aa824ab..539ad722cae78b8315b87d35f9af6ab81140c5b3 100755 --- a/deploy/hubserving/readme_en.md +++ b/deploy/hubserving/readme_en.md @@ -30,6 +30,7 @@ The following steps take the 2-stage series service as an example. If only the d ### 1. 
Prepare the environment
 ```shell
 # Install paddlehub
+# python>3.6.2 is required by paddlehub
 pip3 install paddlehub==2.1.0 --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple
 ```
diff --git a/doc/doc_ch/detection.md b/doc/doc_ch/detection.md
index 08b309cbc0def059df1f12a180be39d94511a78c..6fc85992c04123a10ad937f2694b513b50a37876 100644
--- a/doc/doc_ch/detection.md
+++ b/doc/doc_ch/detection.md
@@ -18,9 +18,9 @@ PaddleOCR 也提供了数据格式转换脚本,可以将官网 label 转换支
 ```
 # 将官网下载的标签文件转换为 train_icdar2015_label.txt
-python gen_label.py --mode="det" --root_path="icdar_c4_train_imgs/" \
-    --input_path="ch4_training_localization_transcription_gt" \
-    --output_label="train_icdar2015_label.txt"
+python gen_label.py --mode="det" --root_path="/path/to/icdar_c4_train_imgs/" \
+    --input_path="/path/to/ch4_training_localization_transcription_gt" \
+    --output_label="/path/to/train_icdar2015_label.txt"
 ```
 解压数据集和下载标注文件后,PaddleOCR/train_data/ 有两个文件夹和两个文件,分别是:
diff --git a/doc/doc_ch/inference.md b/doc/doc_ch/inference.md
index d4c566b134ce4461cecdebbfccacebf3a6feb534..b9be1e4cb2d1b256a05b82ef5d6db49dfcb2f31f 100755
--- a/doc/doc_ch/inference.md
+++ b/doc/doc_ch/inference.md
@@ -221,7 +221,7 @@ python3 tools/export_model.py -c configs/det/det_r50_vd_sast_totaltext.yml -o Gl
 ```
-**SAST文本检测模型推理,需要设置参数`--det_algorithm="SAST"`,同时,还需要增加参数`--det_sast_polygon=True`,**可以执行如下命令:
+SAST文本检测模型推理,需要设置参数`--det_algorithm="SAST"`,同时,还需要增加参数`--det_sast_polygon=True`,可以执行如下命令:
 ```
 python3 tools/infer/predict_det.py --det_algorithm="SAST" --image_dir="./doc/imgs_en/img623.jpg" --det_model_dir="./inference/det_sast_tt/" --det_sast_polygon=True
 ```
diff --git a/doc/doc_ch/recognition.md b/doc/doc_ch/recognition.md
index 2efd80e6e15dbcfbb3c342633d795eddbfd7558a..0ff0513a2b9a3e5e732e78bd8b4f42ab9f79094f 100644
--- a/doc/doc_ch/recognition.md
+++ b/doc/doc_ch/recognition.md
@@ -330,6 +330,8 @@ PaddleOCR目前已支持80种(除中文外)语种识别,`configs/rec/multi
 ```
+意大利文由拉丁字母组成,因此执行完命令后会得到名为 rec_latin_lite_train.yml 的配置文件。
+
 2. 手动修改配置文件
 您也可以手动修改模版中的以下几个字段:
diff --git a/doc/doc_en/inference_en.md b/doc/doc_en/inference_en.md
index 16285c087c5463a9e4c209ab91a21adc5150cc9c..e30355fb8e29031bd4ce040a86ad0f57d18ce398 100755
--- a/doc/doc_en/inference_en.md
+++ b/doc/doc_en/inference_en.md
@@ -230,7 +230,7 @@ First, convert the model saved in the SAST text detection training process into
 python3 tools/export_model.py -c configs/det/det_r50_vd_sast_totaltext.yml -o Global.pretrained_model=./det_r50_vd_sast_totaltext_v2.0_train/best_accuracy Global.save_inference_dir=./inference/det_sast_tt
 ```
-**For SAST curved text detection model inference, you need to set the parameter `--det_algorithm="SAST"` and `--det_sast_polygon=True`**, run the following command:
+For SAST curved text detection model inference, you need to set the parameter `--det_algorithm="SAST"` and `--det_sast_polygon=True`, run the following command:
 ```
 python3 tools/infer/predict_det.py --det_algorithm="SAST" --image_dir="./doc/imgs_en/img623.jpg" --det_model_dir="./inference/det_sast_tt/" --det_sast_polygon=True
diff --git a/doc/doc_en/recognition_en.md b/doc/doc_en/recognition_en.md
index 556b75a515ed676557142157aa412f2783005eec..634ec783aa5e1dd6c9202385cf2978d140ca44a1 100644
--- a/doc/doc_en/recognition_en.md
+++ b/doc/doc_en/recognition_en.md
@@ -329,6 +329,7 @@ There are two ways to create the required configuration file::
 ...
 ```
+Italian is made up of Latin letters, so after executing the command you will get a configuration file named rec_latin_lite_train.yml.
 2.
Manually modify the configuration file diff --git a/doc/joinus.PNG b/doc/joinus.PNG index 94f4e95df8a1be2563e3a58e0a7daa3878da4157..b45f006c1850f39af4d4eb85279df3953331f7f7 100644 Binary files a/doc/joinus.PNG and b/doc/joinus.PNG differ diff --git a/doc/table/pipeline.jpg b/doc/table/pipeline.jpg new file mode 100644 index 0000000000000000000000000000000000000000..8cea262149199e010450b69d3323b9b06e40c773 Binary files /dev/null and b/doc/table/pipeline.jpg differ diff --git a/doc/table/pipeline.png b/doc/table/pipeline.png deleted file mode 100644 index 4acfb3e2ef423402d9fd1fc1b8ad02f0a072049b..0000000000000000000000000000000000000000 Binary files a/doc/table/pipeline.png and /dev/null differ diff --git a/doc/table/pipeline_en.jpg b/doc/table/pipeline_en.jpg new file mode 100644 index 0000000000000000000000000000000000000000..2e4d1a03546308ff79f4dfb6b67e8e83420951c5 Binary files /dev/null and b/doc/table/pipeline_en.jpg differ diff --git a/doc/table/tableocr_pipeline.jpg b/doc/table/tableocr_pipeline.jpg new file mode 100644 index 0000000000000000000000000000000000000000..bd467b1bd38f89f887ccbf0aa0f060d738e6047b Binary files /dev/null and b/doc/table/tableocr_pipeline.jpg differ diff --git a/doc/table/tableocr_pipeline.png b/doc/table/tableocr_pipeline.png deleted file mode 100644 index 731b84da9b832b67db42225379fbe09120cbee6b..0000000000000000000000000000000000000000 Binary files a/doc/table/tableocr_pipeline.png and /dev/null differ diff --git a/doc/table/tableocr_pipeline_en.jpg b/doc/table/tableocr_pipeline_en.jpg new file mode 100644 index 0000000000000000000000000000000000000000..654366878e8262eede2d4330f311ea0819ff6533 Binary files /dev/null and b/doc/table/tableocr_pipeline_en.jpg differ diff --git a/ppocr/data/simple_dataset.py b/ppocr/data/simple_dataset.py index c8708a4ab94f1761551dc9ecbe17316ac0ab67f7..e9c3394cbe930d5169ae005e7582a2902e697b7e 100644 --- a/ppocr/data/simple_dataset.py +++ b/ppocr/data/simple_dataset.py @@ -14,7 +14,6 @@ import numpy as np import os import random -import traceback from paddle.io import Dataset from .imaug import transform, create_operators @@ -46,7 +45,6 @@ class SimpleDataSet(Dataset): self.seed = seed logger.info("Initialize indexs of datasets:%s" % label_file_list) self.data_lines = self.get_image_info_list(label_file_list, ratio_list) - self.check_data() self.data_idx_order_list = list(range(len(self.data_lines))) if self.mode == "train" and self.do_shuffle: self.shuffle_data_random() @@ -103,18 +101,25 @@ class SimpleDataSet(Dataset): def __getitem__(self, idx): file_idx = self.data_idx_order_list[idx] - data = self.data_lines[file_idx] + data_line = self.data_lines[file_idx] try: + data_line = data_line.decode('utf-8') + substr = data_line.strip("\n").split(self.delimiter) + file_name = substr[0] + label = substr[1] + img_path = os.path.join(self.data_dir, file_name) + data = {'img_path': img_path, 'label': label} + if not os.path.exists(img_path): + raise Exception("{} does not exist!".format(img_path)) with open(data['img_path'], 'rb') as f: img = f.read() data['image'] = img data['ext_data'] = self.get_ext_data() outs = transform(data, self.ops) - except: - error_meg = traceback.format_exc() + except Exception as e: self.logger.error( - "When parsing file {} and label {}, error happened with msg: {}".format( - data['img_path'],data['label'], error_meg)) + "When parsing line {}, error happened with msg: {}".format( + data_line, e)) outs = None if outs is None: # during evaluation, we should fix the idx to get same results for many times 
of evaluation. @@ -125,17 +130,3 @@ class SimpleDataSet(Dataset): def __len__(self): return len(self.data_idx_order_list) - - def check_data(self): - new_data_lines = [] - for data_line in self.data_lines: - data_line = data_line.decode('utf-8') - substr = data_line.strip("\n").strip("\r").split(self.delimiter) - file_name = substr[0] - label = substr[1] - img_path = os.path.join(self.data_dir, file_name) - if os.path.exists(img_path): - new_data_lines.append({'img_path': img_path, 'label': label}) - else: - self.logger.info("{} does not exist!".format(img_path)) - self.data_lines = new_data_lines \ No newline at end of file diff --git a/ppocr/modeling/architectures/distillation_model.py b/ppocr/modeling/architectures/distillation_model.py index 2b1d3aae3b7303a61b20db15df5ce4bd9bb7b235..1e95fe574433eaca6f322ff47c8547cc1a29a248 100644 --- a/ppocr/modeling/architectures/distillation_model.py +++ b/ppocr/modeling/architectures/distillation_model.py @@ -46,7 +46,7 @@ class DistillationModel(nn.Layer): pretrained = model_config.pop("pretrained") model = BaseModel(model_config) if pretrained is not None: - model = load_pretrained_params(model, pretrained) + load_pretrained_params(model, pretrained) if freeze_params: for param in model.parameters(): param.trainable = False diff --git a/ppocr/utils/gen_label.py b/ppocr/utils/gen_label.py index 43afe9ddf182ad0da8df023ff29cd3759011d890..fb78bd38bcfc1a59cac48a28bbb655ecb83bcb3f 100644 --- a/ppocr/utils/gen_label.py +++ b/ppocr/utils/gen_label.py @@ -1,16 +1,16 @@ -#copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve. +# copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve. # -#Licensed under the Apache License, Version 2.0 (the "License"); -#you may not use this file except in compliance with the License. -#You may obtain a copy of the License at +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # -#Unless required by applicable law or agreed to in writing, software -#distributed under the License is distributed on an "AS IS" BASIS, -#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -#See the License for the specific language governing permissions and -#limitations under the License. +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
import os import argparse import json @@ -31,7 +31,9 @@ def gen_det_label(root_path, input_dir, out_label): for label_file in os.listdir(input_dir): img_path = root_path + label_file[3:-4] + ".jpg" label = [] - with open(os.path.join(input_dir, label_file), 'r') as f: + with open( + os.path.join(input_dir, label_file), 'r', + encoding='utf-8-sig') as f: for line in f.readlines(): tmp = line.strip("\n\r").replace("\xef\xbb\xbf", "").split(',') diff --git a/test1/MANIFEST.in b/ppstructure/MANIFEST.in similarity index 72% rename from test1/MANIFEST.in rename to ppstructure/MANIFEST.in index 203e97d3104e5be44a4e58f87fbae08d59d3f537..713e4b06f3ac924070afe53de9c2ec48726185e6 100644 --- a/test1/MANIFEST.in +++ b/ppstructure/MANIFEST.in @@ -2,8 +2,8 @@ include LICENSE include README.md recursive-include ppocr/utils *.txt utility.py logging.py network.py -recursive-include ppocr/data/ *.py +recursive-include ppocr/data *.py recursive-include ppocr/postprocess *.py recursive-include tools/infer *.py -recursive-include test1 *.py +recursive-include ppstructure *.py diff --git a/ppstructure/README.md b/ppstructure/README.md new file mode 100644 index 0000000000000000000000000000000000000000..edd106a27149c8e10ee898f561132e8477af39ae --- /dev/null +++ b/ppstructure/README.md @@ -0,0 +1,105 @@ +# PaddleStructure + +PaddleStructure is an OCR toolkit for complex layout analysis. It can divide document data in the form of pictures into **text, table, title, picture and list** 5 types of areas, and extract the table area as excel +## 1. Quick start + +### install + +**install layoutparser** +```sh +pip3 install https://paddleocr.bj.bcebos.com/whl/layoutparser-0.0.0-py3-none-any.whl +``` +**install paddlestructure** + +install by pypi + +```bash +pip install paddlestructure +``` + +build own whl package and install +```bash +python3 setup.py bdist_wheel +pip3 install dist/paddlestructure-x.x.x-py3-none-any.whl # x.x.x is the version of paddlestructure +``` + +### 1.2 Use + +#### 1.2.1 Use by command line + +```bash +paddlestructure --image_dir=../doc/table/1.png +``` + +#### 1.2.2 Use by code + +```python +import os +import cv2 +from paddlestructure import PaddleStructure,draw_result,save_res + +table_engine = PaddleStructure(show_log=True) + +save_folder = './output/table' +img_path = '../doc/table/1.png' +img = cv2.imread(img_path) +result = table_engine(img) +save_res(result, save_folder,os.path.basename(img_path).split('.')[0]) + +for line in result: + print(line) + +from PIL import Image + +font_path = '../doc/fonts/simfang.ttf' # PaddleOCR下提供字体包 +image = Image.open(img_path).convert('RGB') +im_show = draw_result(image, result,font_path=font_path) +im_show = Image.fromarray(im_show) +im_show.save('result.jpg') +``` + +#### 1.2.3 Parameter Description: + +| Parameter | Description | Default value | +| --------------- | ---------------------------------------- | ------------------------------------------- | +| output | The path where excel and recognition results are saved | ./output/table | +| table_max_len | The long side of the image is resized in table structure model | 488 | +| table_model_dir | inference model path of table structure model | None | +| table_char_type | dict path of table structure model | ../ppocr/utils/dict/table_structure_dict.tx | + +Most of the parameters are consistent with the paddleocr whl package, see [doc of whl](../doc/doc_en/whl_en.md) + +After running, each image will have a directory with the same name under the directory specified in the output field. 
Each table in the picture will be stored as an excel, and the excel file name will be the coordinates of the table in the image. + +## 2. PaddleStructure Pipeline + +the process is as follows +![pipeline](../doc/table/pipeline_en.jpg) + +In PaddleStructure, the image will be analyzed by layoutparser first. In the layout analysis, the area in the image will be classified, including **text, title, image, list and table** 5 categories. For the first 4 types of areas, directly use the PP-OCR to complete the text detection and recognition. The table area will be converted to an excel file of the same table style via Table OCR. + +### 2.1 LayoutParser + +Layout analysis divides the document data into regions, including the use of Python scripts for layout analysis tools, extraction of special category detection boxes, performance indicators, and custom training layout analysis models. For details, please refer to [document](layout/README.md). + +### 2.2 Table OCR + +Table OCR converts table image into excel documents, which include the detection and recognition of table text and the prediction of table structure and cell coordinates. For detailed, please refer to [document](table/README.md) + +### 3. Predictive by inference engine + +Use the following commands to complete the inference. + +```python +python3 table/predict_system.py --det_model_dir=path/to/det_model_dir --rec_model_dir=path/to/rec_model_dir --table_model_dir=path/to/table_model_dir --image_dir=../doc/table/1.png --rec_char_dict_path=../ppocr/utils/dict/table_dict.txt --table_char_dict_path=../ppocr/utils/dict/table_structure_dict.txt --rec_char_type=EN --det_limit_side_len=736 --det_limit_type=min --output ../output/table +``` +After running, each image will have a directory with the same name under the directory specified in the output field. Each table in the picture will be stored as an excel, and the excel file name will be the coordinates of the table in the image. + +# 3. Model List + + +|model name|description|config|model size|download| +| --- | --- | --- | --- | --- | +|en_ppocr_mobile_v2.0_table_det|Text detection in English table scene|[ch_det_mv3_db_v2.0.yml](../configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml)| 4.7M |[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_det_infer.tar) | +|en_ppocr_mobile_v2.0_table_rec|Text recognition in English table scene|[rec_chinese_lite_train_v2.0.yml](..//configs/rec/rec_mv3_none_bilstm_ctc.yml)|6.9M|[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_rec_infer.tar) | +|en_ppocr_mobile_v2.0_table_structure|Table structure prediction for English table scenarios|[table_mv3.yml](../configs/table/table_mv3.yml)|18.6M|[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_structure_infer.tar) | \ No newline at end of file diff --git a/ppstructure/README_ch.md b/ppstructure/README_ch.md new file mode 100644 index 0000000000000000000000000000000000000000..f9dc56ab264c377c81ba8328d5103cee801a000c --- /dev/null +++ b/ppstructure/README_ch.md @@ -0,0 +1,107 @@ +# PaddleStructure + +PaddleStructure是一个用于复杂版面分析的OCR工具包,其能够对图片形式的文档数据划分**文字、表格、标题、图片以及列表**5类区域,并将表格区域提取为excel + +## 1. 
快速开始 + +### 1.1 安装 + +**安装 layoutparser** +```sh +pip3 install https://paddleocr.bj.bcebos.com/whl/layoutparser-0.0.0-py3-none-any.whl +``` +**安装 paddlestructure** + +pip安装 +```bash +pip install paddlestructure +``` + +本地构建并安装 +```bash +python3 setup.py bdist_wheel +pip3 install dist/paddlestructure-x.x.x-py3-none-any.whl # x.x.x是 paddlestructure 的版本号 +``` + +### 1.2 PaddleStructure whl包使用 + +#### 1.2.1 命令行使用 + +```bash +paddlestructure --image_dir=../doc/table/1.png +``` + +#### 1.2.2 Python脚本使用 + +```python +import os +import cv2 +from paddlestructure import PaddleStructure,draw_result,save_res + +table_engine = PaddleStructure(show_log=True) + +save_folder = './output/table' +img_path = '../doc/table/1.png' +img = cv2.imread(img_path) +result = table_engine(img) +save_res(result, save_folder,os.path.basename(img_path).split('.')[0]) + +for line in result: + print(line) + +from PIL import Image + +font_path = '../doc/fonts/simfang.ttf' # PaddleOCR下提供字体包 +image = Image.open(img_path).convert('RGB') +im_show = draw_result(image, result,font_path=font_path) +im_show = Image.fromarray(im_show) +im_show.save('result.jpg') +``` + + +#### 1.2.3 参数说明 + +| 字段 | 说明 | 默认值 | +| --------------- | ---------------------------------------- | ------------------------------------------- | +| output | excel和识别结果保存的地址 | ./output/table | +| table_max_len | 表格结构模型预测时,图像的长边resize尺度 | 488 | +| table_model_dir | 表格结构模型 inference 模型地址 | None | +| table_char_type | 表格结构模型所用字典地址 | ../ppocr/utils/dict/table_structure_dict.tx | + +大部分参数和paddleocr whl包保持一致,见 [whl包文档](../doc/doc_ch/whl.md) + +运行完成后,每张图片会在`output`字段指定的目录下有一个同名目录,图片里的每个表格会存储为一个excel,excel文件名为表格在图片里的坐标。 + + +## 2. PaddleStructure Pipeline + +流程如下 +![pipeline](../doc/table/pipeline.jpg) + +在PaddleStructure中,图片会先经由layoutparser进行版面分析,在版面分析中,会对图片里的区域进行分类,包括**文字、标题、图片、列表和表格**5类。对于前4类区域,直接使用PP-OCR完成对应区域文字检测与识别。对于表格类区域,经过Table OCR处理后,表格图片转换为相同表格样式的Excel文件。 + +### 2.1 LayoutParser + +版面分析对文档数据进行区域分类,其中包括版面分析工具的Python脚本使用、提取指定类别检测框、性能指标以及自定义训练版面分析模型,详细内容可以参考[文档](layout/README.md)。 + +### 2.2 Table OCR + +Table OCR将表格图片转换为excel文档,其中包含对于表格文本的检测和识别以及对于表格结构和单元格坐标的预测,详细说明参考[文档](table/README_ch.md) + +### 3. 预测引擎推理 + +使用如下命令即可完成预测引擎的推理 + +```python +python3 table/predict_system.py --det_model_dir=path/to/det_model_dir --rec_model_dir=path/to/rec_model_dir --table_model_dir=path/to/table_model_dir --image_dir=../doc/table/1.png --rec_char_dict_path=../ppocr/utils/dict/table_dict.txt --table_char_dict_path=../ppocr/utils/dict/table_structure_dict.txt --rec_char_type=EN --det_limit_side_len=736 --det_limit_type=min --output ../output/table +``` +运行完成后,每张图片会output字段指定的目录下有一个同名目录,图片里的每个表格会存储为一个excel,excel文件名为表格在图片里的坐标。 + +# 3. 
Model List + + +|模型名称|模型简介|配置文件|推理模型大小|下载地址| +| --- | --- | --- | --- | --- | +|en_ppocr_mobile_v2.0_table_det|英文表格场景的文字检测|[ch_det_mv3_db_v2.0.yml](../configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml)| 4.7M |[推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_det_infer.tar) | +|en_ppocr_mobile_v2.0_table_rec|英文表格场景的文字识别|[rec_chinese_lite_train_v2.0.yml](..//configs/rec/rec_mv3_none_bilstm_ctc.yml)|6.9M|[推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_rec_infer.tar) | +|en_ppocr_mobile_v2.0_table_structure|英文表格场景的表格结构预测|[table_mv3.yml](../configs/table/table_mv3.yml)|18.6M|[推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_structure_infer.tar) | \ No newline at end of file diff --git a/test1/__init__.py b/ppstructure/__init__.py similarity index 82% rename from test1/__init__.py rename to ppstructure/__init__.py index 7055bee443fb86648b80bcb892778a114bc47d71..3952b5ffb9f443e9aba9ba0a4a041b73d2caa9bc 100644 --- a/test1/__init__.py +++ b/ppstructure/__init__.py @@ -12,6 +12,6 @@ # See the License for the specific language governing permissions and # limitations under the License. -from .paddlestructure import PaddleStructure, draw_result, to_excel +from .paddlestructure import PaddleStructure, draw_result, save_res -__all__ = ['PaddleStructure', 'draw_result', 'to_excel'] +__all__ = ['PaddleStructure', 'draw_result', 'save_res'] diff --git a/test1/layout/README.md b/ppstructure/layout/README.md similarity index 100% rename from test1/layout/README.md rename to ppstructure/layout/README.md diff --git a/test1/layout/train_layoutparser_model.md b/ppstructure/layout/train_layoutparser_model.md similarity index 100% rename from test1/layout/train_layoutparser_model.md rename to ppstructure/layout/train_layoutparser_model.md diff --git a/test1/paddlestructure.py b/ppstructure/paddlestructure.py similarity index 97% rename from test1/paddlestructure.py rename to ppstructure/paddlestructure.py index bbe69d40460b72def4e4098001638414f80e4f19..d0009ae8a9be0a133e0d56734b739265034dc314 100644 --- a/test1/paddlestructure.py +++ b/ppstructure/paddlestructure.py @@ -24,9 +24,8 @@ import numpy as np from pathlib import Path from ppocr.utils.logging import get_logger -from test1.predict_system import OCRSystem, save_res -from test1.table.predict_table import to_excel -from test1.utility import init_args, draw_result +from ppstructure.predict_system import OCRSystem, save_res +from ppstructure.utility import init_args, draw_result logger = get_logger() from ppocr.utils.utility import check_and_read_gif, get_image_file_list @@ -145,4 +144,4 @@ def main(): for item in result: logger.info(item['res']) save_res(result, save_folder, img_name) - logger.info('result save to {}'.format(os.path.join(save_folder, img_name))) + logger.info('result save to {}'.format(os.path.join(save_folder, img_name))) \ No newline at end of file diff --git a/test1/predict_system.py b/ppstructure/predict_system.py similarity index 97% rename from test1/predict_system.py rename to ppstructure/predict_system.py index 9e99a48cdf033f1cdb2263fc7a655a26a53ded92..009c20521fc833d1da39699d6b39ba290cda81d0 100644 --- a/test1/predict_system.py +++ b/ppstructure/predict_system.py @@ -31,8 +31,8 @@ import layoutparser as lp from ppocr.utils.utility import get_image_file_list, check_and_read_gif from ppocr.utils.logging import get_logger from tools.infer.predict_system import TextSystem -from test1.table.predict_table import TableSystem, to_excel 
-from test1.utility import parse_args, draw_result +from ppstructure.table.predict_table import TableSystem, to_excel +from ppstructure.utility import parse_args, draw_result logger = get_logger() diff --git a/test1/setup.py b/ppstructure/setup.py similarity index 90% rename from test1/setup.py rename to ppstructure/setup.py index 1d07517790716717d65088ccd854a2557adc8888..c99d71fe37419b50badd3cf910fec6bb5d2cc67f 100644 --- a/test1/setup.py +++ b/ppstructure/setup.py @@ -23,14 +23,14 @@ with open('../requirements.txt', encoding="utf-8-sig") as f: def readme(): - with open('api_ch.md', encoding="utf-8-sig") as f: + with open('README_ch.md', encoding="utf-8-sig") as f: README = f.read() return README -shutil.copytree('./table', './test1/table') -shutil.copyfile('./predict_system.py', './test1/predict_system.py') -shutil.copyfile('./utility.py', './test1/utility.py') +shutil.copytree('./table', './ppstructure/table') +shutil.copyfile('./predict_system.py', './ppstructure/predict_system.py') +shutil.copyfile('./utility.py', './ppstructure/utility.py') shutil.copytree('../ppocr', './ppocr') shutil.copytree('../tools', './tools') shutil.copyfile('../LICENSE', './LICENSE') @@ -66,5 +66,5 @@ setup( shutil.rmtree('ppocr') shutil.rmtree('tools') -shutil.rmtree('test1') +shutil.rmtree('ppstructure') os.remove('LICENSE') diff --git a/test1/table/README.md b/ppstructure/table/README.md similarity index 62% rename from test1/table/README.md rename to ppstructure/table/README.md index 1fb00f011137df1e31ccaea44e8a2a98a98bb252..afcbe1696bb52154129b89f9a0c18d93ac11fbbe 100644 --- a/test1/table/README.md +++ b/ppstructure/table/README.md @@ -8,7 +8,7 @@ The ocr of the table mainly contains three models The table ocr flow chart is as follows -![tableocr_pipeline](../../doc/table/tableocr_pipeline.png) +![tableocr_pipeline](../../doc/table/tableocr_pipeline_en.jpg) 1. The coordinates of single-line text is detected by DB model, and then sends it to the recognition model to get the recognition result. 2. The table structure and cell coordinates is predicted by RARE model. @@ -19,7 +19,34 @@ The table ocr flow chart is as follows ### 2.1 Train -TBD + +In this chapter, we only introduce the training of the table structure model, For model training of [text detection](../../doc/doc_en/detection_en.md) and [text recognition](../../doc/doc_en/recognition_en.md), please refer to the corresponding documents + +#### data preparation +The training data uses public data set [PubTabNet](https://arxiv.org/abs/1911.10683 ), Can be downloaded from the official [website](https://github.com/ibm-aur-nlp/PubTabNet) 。The PubTabNet data set contains about 500,000 images, as well as annotations in html format。 + +#### Start training +*If you are installing the cpu version of paddle, please modify the `use_gpu` field in the configuration file to false* +```shell +# single GPU training +python3 tools/train.py -c configs/table/table_mv3.yml +# multi-GPU training +# Set the GPU ID used by the '--gpus' parameter. +python3 -m paddle.distributed.launch --gpus '0,1,2,3' tools/train.py -c configs/table/table_mv3.yml +``` + +In the above instruction, use `-c` to select the training to use the `configs/table/table_mv3.yml` configuration file. +For a detailed explanation of the configuration file, please refer to [config](../../doc/doc_en/config_en.md). 
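As a quick sanity check on the data preparation step above, it helps to peek at a few PubTabNet annotation lines before launching training. The field names used below (`filename`, `split`, `html.structure.tokens`, `html.cells`) follow the layout described in the PubTabNet repository; verify them against the downloaded `PubTabNet_2.0.0.jsonl` before relying on this sketch.

```python
import itertools
import json

# Inspect the first few PubTabNet annotation lines (the path is a placeholder).
with open("pubtabnet/PubTabNet_2.0.0.jsonl", "r", encoding="utf-8") as f:
    for line in itertools.islice(f, 3):
        ann = json.loads(line)
        html = ann["html"]
        print(ann["filename"], ann["split"],
              "structure tokens:", len(html["structure"]["tokens"]),
              "cells:", len(html["cells"]))
```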
+ +#### load trained model and continue training + +If you expect to load trained model and continue the training again, you can specify the parameter `Global.checkpoints` as the model path to be loaded. + +```shell +python3 tools/train.py -c configs/table/table_mv3.yml -o Global.checkpoints=./your/trained/model +``` + +**Note**: The priority of `Global.checkpoints` is higher than that of `Global.pretrain_weights`, that is, when two parameters are specified at the same time, the model specified by `Global.checkpoints` will be loaded first. If the model path specified by `Global.checkpoints` is wrong, the one specified by `Global.pretrain_weights` will be loaded. ### 2.2 Eval First cd to the PaddleOCR/ppstructure directory diff --git a/test1/table/README_ch.md b/ppstructure/table/README_ch.md similarity index 93% rename from test1/table/README_ch.md rename to ppstructure/table/README_ch.md index 5c3c9a285f6452e763b499695f5d8d875f21cd44..4b912f3eb8d2b65898ab4eabe008bf50d9c07f50 100644 --- a/test1/table/README_ch.md +++ b/ppstructure/table/README_ch.md @@ -8,7 +8,7 @@ 具体流程图如下 -![tableocr_pipeline](../../doc/table/tableocr_pipeline.png) +![tableocr_pipeline](../../doc/table/tableocr_pipeline.jpg) 1. 图片由单行文字检测检测模型到单行文字的坐标,然后送入识别模型拿到识别结果。 2. 图片由表格结构和cell坐标预测模型拿到表格的结构信息和单元格的坐标信息。 @@ -17,8 +17,9 @@ ## 2. 使用 - ### 2.1 训练 +在这一章节中,我们仅介绍表格结构模型的训练,[文字检测](../../doc/doc_ch/detection.md)和[文字识别](../../doc/doc_ch/recognition.md)的模型训练请参考对应的文档。 + #### 数据准备 训练数据使用公开数据集[PubTabNet](https://arxiv.org/abs/1911.10683),可以从[官网](https://github.com/ibm-aur-nlp/PubTabNet)下载。PubTabNet数据集包含约50万张表格数据的图像,以及图像对应的html格式的注释。 @@ -31,7 +32,7 @@ python3 tools/train.py -c configs/table/table_mv3.yml python3 -m paddle.distributed.launch --gpus '0,1,2,3' tools/train.py -c configs/table/table_mv3.yml ``` -上述指令中,通过-c 选择训练使用configs/table/table_mv3.yml配置文件。有关配置文件的详细解释,请参考[链接](./config.md)。 +上述指令中,通过-c 选择训练使用configs/table/table_mv3.yml配置文件。有关配置文件的详细解释,请参考[链接](../../doc/doc_ch/config.md)。 #### 断点训练 diff --git a/test1/table/__init__.py b/ppstructure/table/__init__.py similarity index 100% rename from test1/table/__init__.py rename to ppstructure/table/__init__.py diff --git a/test1/table/eval_table.py b/ppstructure/table/eval_table.py similarity index 93% rename from test1/table/eval_table.py rename to ppstructure/table/eval_table.py index dc63e34e2a85657a6487e7abb081854e937cf669..15f549376566811813aac40bd88ffbcbdbddbf5b 100755 --- a/test1/table/eval_table.py +++ b/ppstructure/table/eval_table.py @@ -20,9 +20,9 @@ sys.path.append(os.path.abspath(os.path.join(__dir__, '../..'))) import cv2 import json from tqdm import tqdm -from test1.table.table_metric import TEDS -from test1.table.predict_table import TableSystem -from test1.utility import init_args +from ppstructure.table.table_metric import TEDS +from ppstructure.table.predict_table import TableSystem +from ppstructure.utility import init_args from ppocr.utils.logging import get_logger logger = get_logger() diff --git a/test1/table/matcher.py b/ppstructure/table/matcher.py similarity index 100% rename from test1/table/matcher.py rename to ppstructure/table/matcher.py diff --git a/test1/table/predict_structure.py b/ppstructure/table/predict_structure.py similarity index 98% rename from test1/table/predict_structure.py rename to ppstructure/table/predict_structure.py index 1070c93ea61ac0efea7e700d00c8144f4139fbd8..fc85327b3a446573259546d84c439f5f8e5b3ac7 100755 --- a/test1/table/predict_structure.py +++ b/ppstructure/table/predict_structure.py @@ -22,17 +22,14 @@ 
os.environ["FLAGS_allocator_strategy"] = 'auto_growth' import cv2 import numpy as np -import math import time -import traceback -import paddle import tools.infer.utility as utility from ppocr.data import create_operators, transform from ppocr.postprocess import build_post_process from ppocr.utils.logging import get_logger from ppocr.utils.utility import get_image_file_list, check_and_read_gif -from test1.utility import parse_args +from ppstructure.utility import parse_args logger = get_logger() diff --git a/test1/table/predict_table.py b/ppstructure/table/predict_table.py similarity index 98% rename from test1/table/predict_table.py rename to ppstructure/table/predict_table.py index b06a4f4d53402ca809f0ab846f83176795ca7217..352ae84de1f435f91258cf0ced4dce9345de1220 100644 --- a/test1/table/predict_table.py +++ b/ppstructure/table/predict_table.py @@ -30,9 +30,9 @@ import tools.infer.predict_rec as predict_rec import tools.infer.predict_det as predict_det from ppocr.utils.utility import get_image_file_list, check_and_read_gif from ppocr.utils.logging import get_logger -from test1.table.matcher import distance, compute_iou -from test1.utility import parse_args -import test1.table.predict_structure as predict_strture +from ppstructure.table.matcher import distance, compute_iou +from ppstructure.utility import parse_args +import ppstructure.table.predict_structure as predict_strture logger = get_logger() diff --git a/test1/table/table_metric/__init__.py b/ppstructure/table/table_metric/__init__.py similarity index 100% rename from test1/table/table_metric/__init__.py rename to ppstructure/table/table_metric/__init__.py diff --git a/test1/table/table_metric/parallel.py b/ppstructure/table/table_metric/parallel.py similarity index 100% rename from test1/table/table_metric/parallel.py rename to ppstructure/table/table_metric/parallel.py diff --git a/test1/table/table_metric/table_metric.py b/ppstructure/table/table_metric/table_metric.py similarity index 100% rename from test1/table/table_metric/table_metric.py rename to ppstructure/table/table_metric/table_metric.py diff --git a/test1/table/tablepyxl/__init__.py b/ppstructure/table/tablepyxl/__init__.py similarity index 100% rename from test1/table/tablepyxl/__init__.py rename to ppstructure/table/tablepyxl/__init__.py diff --git a/test1/table/tablepyxl/style.py b/ppstructure/table/tablepyxl/style.py similarity index 100% rename from test1/table/tablepyxl/style.py rename to ppstructure/table/tablepyxl/style.py diff --git a/test1/table/tablepyxl/tablepyxl.py b/ppstructure/table/tablepyxl/tablepyxl.py similarity index 100% rename from test1/table/tablepyxl/tablepyxl.py rename to ppstructure/table/tablepyxl/tablepyxl.py diff --git a/test1/utility.py b/ppstructure/utility.py similarity index 100% rename from test1/utility.py rename to ppstructure/utility.py diff --git a/test1/api.md b/test1/api.md deleted file mode 100644 index 6ce2e5904188643839e2a21c137eaa6cf78619c9..0000000000000000000000000000000000000000 --- a/test1/api.md +++ /dev/null @@ -1,86 +0,0 @@ -# PaddleStructure - -install layoutparser -```sh -wget https://paddleocr.bj.bcebos.com/whl/layoutparser-0.0.0-py3-none-any.whl -pip3 install layoutparser-0.0.0-py3-none-any.whl -``` - -## 1. Introduction to pipeline - -PaddleStructure is a toolkit for complex layout text OCR, the process is as follows - -![pipeline](../doc/table/pipeline.png) - -In PaddleStructure, the image will be analyzed by layoutparser first. 
In the layout analysis, the area in the image will be classified, and the OCR process will be carried out according to the category. - -Currently layoutparser will output five categories: -1. Text -2. Title -3. Figure -4. List -5. Table - -Types 1-4 follow the traditional OCR process, and 5 follow the Table OCR process. - -## 2. LayoutParser - - -## 3. Table OCR - -[doc](table/README.md) - -## 4. Predictive by inference engine - -Use the following commands to complete the inference -```python -python3 table/predict_system.py --det_model_dir=path/to/det_model_dir --rec_model_dir=path/to/rec_model_dir --table_model_dir=path/to/table_model_dir --image_dir=../doc/table/1.png --rec_char_dict_path=../ppocr/utils/dict/table_dict.txt --table_char_dict_path=../ppocr/utils/dict/table_structure_dict.txt --rec_char_type=EN --det_limit_side_len=736 --det_limit_type=min --output ../output/table -``` -After running, each image will have a directory with the same name under the directory specified in the output field. Each table in the picture will be stored as an excel, and the excel file name will be the coordinates of the table in the image. - -## 5. PaddleStructure whl package introduction - -### 5.1 Use - -5.1.1 Use by code -```python -import os -import cv2 -from paddlestructure import PaddleStructure,draw_result,save_res - -table_engine = PaddleStructure(show_log=True) - -save_folder = './output/table' -img_path = '../doc/table/1.png' -img = cv2.imread(img_path) -result = table_engine(img) -save_res(result, save_folder,os.path.basename(img_path).split('.')[0]) - -for line in result: - print(line) - -from PIL import Image - -font_path = 'path/tp/PaddleOCR/doc/fonts/simfang.ttf' -image = Image.open(img_path).convert('RGB') -im_show = draw_result(image, result,font_path=font_path) -im_show = Image.fromarray(im_show) -im_show.save('result.jpg') -``` - -5.1.2 Use by command line -```bash -paddlestructure --image_dir=../doc/table/1.png -``` - -### Parameter Description -Most of the parameters are consistent with the paddleocr whl package, see [whl package documentation](../doc/doc_ch/whl.md) - -| Parameter | Description | Default | -|------------------------|------------------------------------------------------|------------------| -| output | The path where excel and recognition results are saved | ./output/table | -| structure_max_len | When the table structure model predicts, the long side of the image is resized | 488 | -| structure_model_dir | Table structure inference model path | None | -| structure_char_type | Dictionary path used by table structure model | ../ppocr/utils/dict/table_structure_dict.tx | - - diff --git a/test1/api_ch.md b/test1/api_ch.md deleted file mode 100644 index 585379e8c6f717733ab436749441be0668b4c6d8..0000000000000000000000000000000000000000 --- a/test1/api_ch.md +++ /dev/null @@ -1,86 +0,0 @@ -# PaddleStructure - -安装layoutparser -```sh -wget https://paddleocr.bj.bcebos.com/whl/layoutparser-0.0.0-py3-none-any.whl -pip3 install layoutparser-0.0.0-py3-none-any.whl -``` - -## 1. pipeline介绍 - -PaddleStructure 是一个用于复杂板式文字OCR的工具包,流程如下 -![pipeline](../doc/table/pipeline.png) - -在PaddleStructure中,图片会先经由layoutparser进行版面分析,在版面分析中,会对图片里的区域进行分类,根据根据类别进行对于的ocr流程。 - -目前layoutparser会输出五个类别: -1. Text -2. Title -3. Figure -4. List -5. Table - -1-4类走传统的OCR流程,5走表格的OCR流程。 - -## 2. LayoutParser - -[文档](layout/README.md) - -## 3. Table OCR - -[文档](table/README_ch.md) - -## 4. 
预测引擎推理 - -使用如下命令即可完成预测引擎的推理 -```python -python3 table/predict_system.py --det_model_dir=path/to/det_model_dir --rec_model_dir=path/to/rec_model_dir --table_model_dir=path/to/table_model_dir --image_dir=../doc/table/1.png --rec_char_dict_path=../ppocr/utils/dict/table_dict.txt --table_char_dict_path=../ppocr/utils/dict/table_structure_dict.txt --rec_char_type=EN --det_limit_side_len=736 --det_limit_type=min --output ../output/table -``` -运行完成后,每张图片会output字段指定的目录下有一个同名目录,图片里的每个表格会存储为一个excel,excel文件名为表格在图片里的坐标。 - -## 5. PaddleStructure whl包介绍 - -### 5.1 使用 - -5.1.1 代码使用 -```python -import os -import cv2 -from paddlestructure import PaddleStructure,draw_result,save_res - -table_engine = PaddleStructure(show_log=True) - -save_folder = './output/table' -img_path = '../doc/table/1.png' -img = cv2.imread(img_path) -result = table_engine(img) -save_res(result, save_folder,os.path.basename(img_path).split('.')[0]) - -for line in result: - print(line) - -from PIL import Image - -font_path = 'path/tp/PaddleOCR/doc/fonts/simfang.ttf' -image = Image.open(img_path).convert('RGB') -im_show = draw_result(image, result,font_path=font_path) -im_show = Image.fromarray(im_show) -im_show.save('result.jpg') -``` - -5.1.2 命令行使用 -```bash -paddlestructure --image_dir=../doc/table/1.png -``` - -### 参数说明 -大部分参数和paddleocr whl包保持一致,见 [whl包文档](../doc/doc_ch/whl.md) - -| 字段 | 说明 | 默认值 | -|------------------------|------------------------------------------------------|------------------| -| output | excel和识别结果保存的地址 | ./output/table | -| table_max_len | 表格结构模型预测时,图像的长边resize尺度 | 488 | -| table_model_dir | 表格结构模型 inference 模型地址 | None | -| table_char_type | 表格结构模型所用字典地址 | ../ppocr/utils/dict/table_structure_dict.tx | - -
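The `simple_dataset.py` change above drops the eager `check_data` pass and instead parses and validates each label line inside `__getitem__`, logging the offending line and falling back to another index when it fails. If you still want to audit a label file up front, outside the dataset class, a standalone check along these lines works. It is a sketch: the paths are placeholders, and the tab-separated `image_path\tlabel` layout is the same one the new `__getitem__` splits on.

```python
import os

def audit_label_file(label_path, data_dir, delimiter="\t"):
    """Collect label lines that are malformed or point at a missing image."""
    problems = []
    with open(label_path, "rb") as f:
        for lineno, raw in enumerate(f, 1):
            line = raw.decode("utf-8").strip("\n").strip("\r")
            parts = line.split(delimiter)
            if len(parts) < 2:
                problems.append((lineno, "malformed line: {!r}".format(line)))
                continue
            img_path = os.path.join(data_dir, parts[0])
            if not os.path.exists(img_path):
                problems.append((lineno, "missing image: {}".format(img_path)))
    return problems

if __name__ == "__main__":
    # Placeholder paths; point these at your own label file and image directory.
    for lineno, reason in audit_label_file("./train_data/train_icdar2015_label.txt",
                                           "./train_data/icdar_c4_train_imgs/"):
        print(lineno, reason)
```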
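On the `distillation_model.py` change above: dropping the `model =` reassignment is the right call if `load_pretrained_params` updates the layer in place and returns something other than the model, in which case the old code replaced `model` with that return value. Below is a minimal sketch of the in-place pattern under that assumption; the helper is a simplified stand-in, not the actual PaddleOCR utility.

```python
import paddle
from paddle import nn

def load_pretrained_params(model, path):
    # Stand-in helper: load weights into `model` in place and return a status
    # flag rather than the model (assumed behaviour of the real utility).
    state_dict = paddle.load(path)
    model.set_state_dict(state_dict)
    return True

# Save some weights so the sketch runs end to end.
source = nn.Linear(4, 2)
paddle.save(source.state_dict(), "pretrained_demo.pdparams")

model = nn.Linear(4, 2)
load_pretrained_params(model, "pretrained_demo.pdparams")  # updates `model` in place
# model = load_pretrained_params(model, "pretrained_demo.pdparams")  # old bug: `model` becomes True
```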
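The `gen_label.py` change above opens the ICDAR ground-truth files with `encoding='utf-8-sig'` while keeping the existing `\xef\xbb\xbf` replacement. The reason is that these gt files typically start with a UTF-8 BOM; read as plain `utf-8` it shows up as `\ufeff` glued to the first coordinate, while `utf-8-sig` strips it. A self-contained demonstration with a made-up file name and contents:

```python
# Why gen_label.py reads ICDAR gt files with encoding='utf-8-sig'.
path = "gt_img_demo.txt"

# Write a BOM-prefixed file, the way many ICDAR gt files are stored.
with open(path, "w", encoding="utf-8-sig") as f:
    f.write("377,117,463,117,465,130,378,130,Genaxis Theatre\n")

with open(path, "r", encoding="utf-8") as f:      # plain utf-8 keeps the BOM
    print(repr(f.readline().split(",")[0]))       # '\ufeff377', which breaks numeric parsing

with open(path, "r", encoding="utf-8-sig") as f:  # utf-8-sig strips it
    print(repr(f.readline().split(",")[0]))       # '377'
```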
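With the `test1` to `ppstructure` rename above, the table pipeline is now imported from `ppstructure.table.predict_table`. The sketch below drives it from Python instead of the `table/predict_system.py` command line. It assumes `TableSystem` is built from the parsed args and returns the table's HTML when called on an image, and that `to_excel` takes that HTML plus an output path; check `predict_table.py` for the exact interfaces. Model paths are placeholders.

```python
import cv2
from ppstructure.utility import init_args
from ppstructure.table.predict_table import TableSystem, to_excel

# Build the same argument namespace the CLI tools use (paths are placeholders).
args = init_args().parse_args([
    "--det_model_dir", "path/to/det_model_dir",
    "--rec_model_dir", "path/to/rec_model_dir",
    "--table_model_dir", "path/to/table_model_dir",
    "--rec_char_dict_path", "../ppocr/utils/dict/table_dict.txt",
    "--table_char_dict_path", "../ppocr/utils/dict/table_structure_dict.txt",
])

table_sys = TableSystem(args)
img = cv2.imread("../doc/table/1.png")
html = table_sys(img)                    # assumed: returns the predicted table as an HTML string
to_excel(html, "./output/table/1.xlsx")  # assumed: writes the HTML table to an xlsx file
```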