diff --git a/README.md b/README.md index b1d464879bdbe64c8812a7ce335023ba5cca9727..4a938f2f049e4a6e05c1f4f4a795f0d898944ea9 100644 --- a/README.md +++ b/README.md @@ -152,7 +152,7 @@ For a new language request, please refer to [Guideline for new language_requests [1] PP-OCR is a practical ultra-lightweight OCR system. It is mainly composed of three parts: DB text detection, detection frame correction and CRNN text recognition. The system adopts 19 effective strategies from 8 aspects including backbone network selection and adjustment, prediction head design, data augmentation, learning rate transformation strategy, regularization parameter selection, pre-training model use, and automatic model tailoring and quantization to optimize and slim down the models of each module (as shown in the green box above). The final results are an ultra-lightweight Chinese and English OCR model with an overall size of 3.5M and a 2.8M English digital OCR model. For more details, please refer to the PP-OCR technical article (https://arxiv.org/abs/2009.09941). -[2] On the basis of PP-OCR, PP-OCRv2 is further optimized in five aspects. The detection model adopts CML(Collaborative Mutual Learning) knowledge distillation strategy and CopyPaste data expansion strategy. The recognition model adopts LCNet lightweight backbone network, U-DML knowledge distillation strategy and enhanced CTC loss function improvement (as shown in the red box above), which further improves the inference speed and prediction effect. For more details, please refer to the technical report of PP-OCRv2 (arXiv link is coming soon). +[2] On the basis of PP-OCR, PP-OCRv2 is further optimized in five aspects. The detection model adopts CML(Collaborative Mutual Learning) knowledge distillation strategy and CopyPaste data expansion strategy. The recognition model adopts LCNet lightweight backbone network, U-DML knowledge distillation strategy and enhanced CTC loss function improvement (as shown in the red box above), which further improves the inference speed and prediction effect. For more details, please refer to the technical report of PP-OCRv2 (https://arxiv.org/abs/2109.03144). 
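The PP-OCR and PP-OCRv2 pipelines described above are packaged in the `paddleocr` whl. Below is a minimal usage sketch, assuming the package is installed with `pip install paddleocr` and that `./doc/imgs/11.jpg` is a placeholder path to any local test image; the exact models downloaded on first run depend on the installed release.

```python
# Minimal sketch: run the lightweight PP-OCR / PP-OCRv2 pipeline through the paddleocr whl package.
# Assumes `pip install paddleocr`; the image path below is a placeholder.
from paddleocr import PaddleOCR

ocr = PaddleOCR(use_angle_cls=True, lang='ch')  # downloads the default det/rec/cls models on first use
result = ocr.ocr('./doc/imgs/11.jpg', cls=True)

for line in result:
    # in recent releases each line is [bounding_box_points, (recognized_text, confidence)]
    print(line)
```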
diff --git a/configs/vqa/re/layoutlmv2.yml b/configs/vqa/re/layoutlmv2.yml new file mode 100644 index 0000000000000000000000000000000000000000..2fa5fd1165c20bbfa8d8505bbb53d48744daebef --- /dev/null +++ b/configs/vqa/re/layoutlmv2.yml @@ -0,0 +1,123 @@ +Global: + use_gpu: True + epoch_num: &epoch_num 200 + log_smooth_window: 10 + print_batch_step: 10 + save_model_dir: ./output/re_layoutlmv2/ + save_epoch_step: 2000 + # evaluation is run every 10 iterations after the 0th iteration + eval_batch_step: [ 0, 19 ] + cal_metric_during_train: False + save_inference_dir: + use_visualdl: False + seed: 2048 + infer_img: doc/vqa/input/zh_val_21.jpg + save_res_path: ./output/re/ + +Architecture: + model_type: vqa + algorithm: &algorithm "LayoutLMv2" + Transform: + Backbone: + name: LayoutLMv2ForRe + pretrained: True + checkpoints: + +Loss: + name: LossFromOutput + key: loss + reduction: mean + +Optimizer: + name: AdamW + beta1: 0.9 + beta2: 0.999 + clip_norm: 10 + lr: + learning_rate: 0.00005 + warmup_epoch: 10 + regularizer: + name: L2 + factor: 0.00000 + +PostProcess: + name: VQAReTokenLayoutLMPostProcess + +Metric: + name: VQAReTokenMetric + main_indicator: hmean + +Train: + dataset: + name: SimpleDataSet + data_dir: train_data/XFUND/zh_train/image + label_file_list: + - train_data/XFUND/zh_train/xfun_normalize_train.json + ratio_list: [ 1.0 ] + transforms: + - DecodeImage: # load image + img_mode: RGB + channel_first: False + - VQATokenLabelEncode: # Class handling label + contains_re: True + algorithm: *algorithm + class_path: &class_path ppstructure/vqa/labels/labels_ser.txt + - VQATokenPad: + max_seq_len: &max_seq_len 512 + return_attention_mask: True + - VQAReTokenRelation: + - VQAReTokenChunk: + max_seq_len: *max_seq_len + - Resize: + size: [224,224] + - NormalizeImage: + scale: 1./255. + mean: [0.485, 0.456, 0.406] + std: [0.229, 0.224, 0.225] + order: 'hwc' + - ToCHWImage: + - KeepKeys: + keep_keys: [ 'input_ids', 'bbox', 'image', 'attention_mask', 'token_type_ids','entities', 'relations'] # dataloader will return list in this order + loader: + shuffle: True + drop_last: False + batch_size_per_card: 8 + num_workers: 8 + collate_fn: ListCollator + +Eval: + dataset: + name: SimpleDataSet + data_dir: train_data/XFUND/zh_val/image + label_file_list: + - train_data/XFUND/zh_val/xfun_normalize_val.json + transforms: + - DecodeImage: # load image + img_mode: RGB + channel_first: False + - VQATokenLabelEncode: # Class handling label + contains_re: True + algorithm: *algorithm + class_path: *class_path + - VQATokenPad: + max_seq_len: *max_seq_len + return_attention_mask: True + - VQAReTokenRelation: + - VQAReTokenChunk: + max_seq_len: *max_seq_len + - Resize: + size: [224,224] + - NormalizeImage: + scale: 1./255. 
+ mean: [0.485, 0.456, 0.406] + std: [0.229, 0.224, 0.225] + order: 'hwc' + - ToCHWImage: + - KeepKeys: + keep_keys: [ 'input_ids', 'bbox', 'image', 'attention_mask', 'token_type_ids','entities', 'relations'] # dataloader will return list in this order + loader: + shuffle: False + drop_last: False + batch_size_per_card: 8 + num_workers: 8 + collate_fn: ListCollator diff --git a/configs/vqa/re/layoutxlm.yml b/configs/vqa/re/layoutxlm.yml index ca6b0d29db534eb1189e305d1f033ece24c368b9..ff16120ac1be92e989ebfda6af3ccf346dde89cd 100644 --- a/configs/vqa/re/layoutxlm.yml +++ b/configs/vqa/re/layoutxlm.yml @@ -21,7 +21,7 @@ Architecture: Backbone: name: LayoutXLMForRe pretrained: True - checkpoints: + checkpoints: Loss: name: LossFromOutput @@ -35,6 +35,7 @@ Optimizer: clip_norm: 10 lr: learning_rate: 0.00005 + warmup_epoch: 10 regularizer: name: L2 factor: 0.00000 @@ -81,7 +82,7 @@ Train: shuffle: True drop_last: False batch_size_per_card: 8 - num_workers: 4 + num_workers: 8 collate_fn: ListCollator Eval: @@ -118,5 +119,5 @@ Eval: shuffle: False drop_last: False batch_size_per_card: 8 - num_workers: 4 + num_workers: 8 collate_fn: ListCollator diff --git a/configs/vqa/ser/layoutlmv2.yml b/configs/vqa/ser/layoutlmv2.yml new file mode 100644 index 0000000000000000000000000000000000000000..33406252b31adf4175d7ea2f57772b0faf33cdab --- /dev/null +++ b/configs/vqa/ser/layoutlmv2.yml @@ -0,0 +1,121 @@ +Global: + use_gpu: True + epoch_num: &epoch_num 200 + log_smooth_window: 10 + print_batch_step: 10 + save_model_dir: ./output/ser_layoutlmv2/ + save_epoch_step: 2000 + # evaluation is run every 10 iterations after the 0th iteration + eval_batch_step: [ 0, 19 ] + cal_metric_during_train: False + save_inference_dir: + use_visualdl: False + seed: 2022 + infer_img: doc/vqa/input/zh_val_0.jpg + save_res_path: ./output/ser/ + +Architecture: + model_type: vqa + algorithm: &algorithm "LayoutLMv2" + Transform: + Backbone: + name: LayoutLMv2ForSer + pretrained: True + checkpoints: + num_classes: &num_classes 7 + +Loss: + name: VQASerTokenLayoutLMLoss + num_classes: *num_classes + +Optimizer: + name: AdamW + beta1: 0.9 + beta2: 0.999 + lr: + name: Linear + learning_rate: 0.00005 + epochs: *epoch_num + warmup_epoch: 2 + regularizer: + + name: L2 + factor: 0.00000 + +PostProcess: + name: VQASerTokenLayoutLMPostProcess + class_path: &class_path ppstructure/vqa/labels/labels_ser.txt + +Metric: + name: VQASerTokenMetric + main_indicator: hmean + +Train: + dataset: + name: SimpleDataSet + data_dir: train_data/XFUND/zh_train/image + label_file_list: + - train_data/XFUND/zh_train/xfun_normalize_train.json + transforms: + - DecodeImage: # load image + img_mode: RGB + channel_first: False + - VQATokenLabelEncode: # Class handling label + contains_re: False + algorithm: *algorithm + class_path: *class_path + - VQATokenPad: + max_seq_len: &max_seq_len 512 + return_attention_mask: True + - VQASerTokenChunk: + max_seq_len: *max_seq_len + - Resize: + size: [224,224] + - NormalizeImage: + scale: 1 + mean: [ 123.675, 116.28, 103.53 ] + std: [ 58.395, 57.12, 57.375 ] + order: 'hwc' + - ToCHWImage: + - KeepKeys: + keep_keys: [ 'input_ids','labels', 'bbox', 'image', 'attention_mask', 'token_type_ids'] # dataloader will return list in this order + loader: + shuffle: True + drop_last: False + batch_size_per_card: 8 + num_workers: 4 + +Eval: + dataset: + name: SimpleDataSet + data_dir: train_data/XFUND/zh_val/image + label_file_list: + - train_data/XFUND/zh_val/xfun_normalize_val.json + transforms: + - DecodeImage: # load image + 
img_mode: RGB + channel_first: False + - VQATokenLabelEncode: # Class handling label + contains_re: False + algorithm: *algorithm + class_path: *class_path + - VQATokenPad: + max_seq_len: *max_seq_len + return_attention_mask: True + - VQASerTokenChunk: + max_seq_len: *max_seq_len + - Resize: + size: [224,224] + - NormalizeImage: + scale: 1 + mean: [ 123.675, 116.28, 103.53 ] + std: [ 58.395, 57.12, 57.375 ] + order: 'hwc' + - ToCHWImage: + - KeepKeys: + keep_keys: [ 'input_ids', 'labels', 'bbox', 'image', 'attention_mask', 'token_type_ids'] # dataloader will return list in this order + loader: + shuffle: False + drop_last: False + batch_size_per_card: 8 + num_workers: 4 diff --git a/doc/doc_ch/thirdparty.md b/doc/doc_ch/thirdparty.md index 7d9d820890c92021b1b040b4576232e544dfcb00..960aa1146ea3a174bead1f422bae26e1731031e8 100644 --- a/doc/doc_ch/thirdparty.md +++ b/doc/doc_ch/thirdparty.md @@ -39,6 +39,7 @@ PaddleOCR希望可以通过AI的力量助力任何一位有梦想的开发者实 | 应用部署 | [PaddleOCR-Paddlejs-Vue-Demo](https://github.com/Lovely-Pig/PaddleOCR-Paddlejs-Vue-Demo) | 使用Paddle.js和Vue部署PaddleOCR | [Lovely-Pig](https://github.com/Lovely-Pig) | | 应用部署 | [PaddleOCR-Paddlejs-React-Demo](https://github.com/Lovely-Pig/PaddleOCR-Paddlejs-React-Demo) | 使用Paddle.js和React部署PaddleOCR | [Lovely-Pig](https://github.com/Lovely-Pig) | | 学术前沿模型训练与推理 | [AI Studio项目](https://aistudio.baidu.com/aistudio/projectdetail/3397137) | StarNet-MobileNetV3算法–中文训练 | [xiaoyangyang2](https://github.com/xiaoyangyang2) | +| 学术前沿模型训练与推理 | [ABINet-paddle](https://github.com/Huntersdeng/abinet-paddle) | ABINet算法前向运算的paddle实现以及模型各部分的实现细节分析 | [Huntersdeng](https://github.com/Huntersdeng) | ### 1.2 为PaddleOCR新增功能 @@ -55,7 +56,7 @@ PaddleOCR希望可以通过AI的力量助力任何一位有梦想的开发者实 ### 1.4 文档优化与翻译 -- 非常感谢 **[RangeKing](https://github.com/RangeKing)** 贡献翻译《动手学OCR》notebook[电子书英文版](https://github.com/PaddlePaddle/PaddleOCR/tree/dygraph/notebook/notebook_en)。 +- 非常感谢 **[RangeKing](https://github.com/RangeKing),[HustBestCat](https://github.com/HustBestCat),[v3fc](https://github.com/v3fc)** 贡献翻译《动手学OCR》notebook[电子书英文版](https://github.com/PaddlePaddle/PaddleOCR/tree/dygraph/notebook/notebook_en)。 - 非常感谢 [thunderstudying](https://github.com/thunderstudying),[RangeKing](https://github.com/RangeKing),[livingbody](https://github.com/livingbody), [WZMIAOMIAO](https://github.com/WZMIAOMIAO),[haigang1975](https://github.com/haigang1975) 补充多个英文markdown文档。 - 非常感谢 **[fanruinet](https://github.com/fanruinet)** 润色和修复35篇英文文档([#5205](https://github.com/PaddlePaddle/PaddleOCR/pull/5205))。 - 非常感谢 [Khanh Tran](https://github.com/xxxpsyduck) 和 [Karl Horky](https://github.com/karlhorky) 贡献修改英文文档。 diff --git a/doc/joinus.PNG b/doc/joinus.PNG index d156f078237d09f33974daf34a65ee1d7a541371..b8d084f5a5897f050a07c0375ac40007df8f6035 100644 Binary files a/doc/joinus.PNG and b/doc/joinus.PNG differ diff --git a/ppocr/data/imaug/label_ops.py b/ppocr/data/imaug/label_ops.py index 786647f1f655dd40be1117df912f59c42108539e..ef962b17850b17517b37a754c63a77feb412c45a 100644 --- a/ppocr/data/imaug/label_ops.py +++ b/ppocr/data/imaug/label_ops.py @@ -799,7 +799,7 @@ class VQATokenLabelEncode(object): ocr_engine=None, **kwargs): super(VQATokenLabelEncode, self).__init__() - from paddlenlp.transformers import LayoutXLMTokenizer, LayoutLMTokenizer + from paddlenlp.transformers import LayoutXLMTokenizer, LayoutLMTokenizer, LayoutLMv2Tokenizer from ppocr.utils.utility import load_vqa_bio_label_maps tokenizer_dict = { 'LayoutXLM': { @@ -809,6 +809,10 @@ class VQATokenLabelEncode(object): 'LayoutLM': { 'class': 
LayoutLMTokenizer, 'pretrained_model': 'layoutlm-base-uncased' + }, + 'LayoutLMv2': { + 'class': LayoutLMv2Tokenizer, + 'pretrained_model': 'layoutlmv2-base-uncased' } } self.contains_re = contains_re diff --git a/ppocr/data/imaug/vqa/token/vqa_token_chunk.py b/ppocr/data/imaug/vqa/token/vqa_token_chunk.py index deb55b4d55b81d5949ed834693e45c3b40c4b762..1fa949e688289b320c6a7c121c944708febe2c9d 100644 --- a/ppocr/data/imaug/vqa/token/vqa_token_chunk.py +++ b/ppocr/data/imaug/vqa/token/vqa_token_chunk.py @@ -12,6 +12,8 @@ # See the License for the specific language governing permissions and # limitations under the License. +from collections import defaultdict + class VQASerTokenChunk(object): def __init__(self, max_seq_len=512, infer_mode=False, **kwargs): @@ -39,6 +41,8 @@ class VQASerTokenChunk(object): encoded_inputs_example[key] = data[key] encoded_inputs_all.append(encoded_inputs_example) + if len(encoded_inputs_all) == 0: + return None return encoded_inputs_all[0] @@ -101,17 +105,18 @@ class VQAReTokenChunk(object): "entities": self.reformat(entities_in_this_span), "relations": self.reformat(relations_in_this_span), }) - item['entities']['label'] = [ - self.entities_labels[x] for x in item['entities']['label'] - ] - encoded_inputs_all.append(item) + if len(item['entities']) > 0: + item['entities']['label'] = [ + self.entities_labels[x] for x in item['entities']['label'] + ] + encoded_inputs_all.append(item) + if len(encoded_inputs_all) == 0: + return None return encoded_inputs_all[0] def reformat(self, data): - new_data = {} + new_data = defaultdict(list) for item in data: for k, v in item.items(): - if k not in new_data: - new_data[k] = [] new_data[k].append(v) return new_data diff --git a/ppocr/modeling/backbones/__init__.py b/ppocr/modeling/backbones/__init__.py index a7db52d26704e0c8426e313b8788b656085983d6..b34b75507cbf047e9adb5f79a2cc2c061ffdab0e 100755 --- a/ppocr/modeling/backbones/__init__.py +++ b/ppocr/modeling/backbones/__init__.py @@ -45,8 +45,11 @@ def build_backbone(config, model_type): from .table_mobilenet_v3 import MobileNetV3 support_dict = ["ResNet", "MobileNetV3"] elif model_type == 'vqa': - from .vqa_layoutlm import LayoutLMForSer, LayoutXLMForSer, LayoutXLMForRe - support_dict = ["LayoutLMForSer", "LayoutXLMForSer", 'LayoutXLMForRe'] + from .vqa_layoutlm import LayoutLMForSer, LayoutLMv2ForSer, LayoutLMv2ForRe, LayoutXLMForSer, LayoutXLMForRe + support_dict = [ + "LayoutLMForSer", "LayoutLMv2ForSer", 'LayoutLMv2ForRe', + "LayoutXLMForSer", 'LayoutXLMForRe' + ] else: raise NotImplementedError diff --git a/ppocr/modeling/backbones/vqa_layoutlm.py b/ppocr/modeling/backbones/vqa_layoutlm.py index 0e98155514cdd055680f32b529fdce631384a37f..ede5b7a35af65fac351277cefccd89b251f5cdb7 100644 --- a/ppocr/modeling/backbones/vqa_layoutlm.py +++ b/ppocr/modeling/backbones/vqa_layoutlm.py @@ -21,12 +21,14 @@ from paddle import nn from paddlenlp.transformers import LayoutXLMModel, LayoutXLMForTokenClassification, LayoutXLMForRelationExtraction from paddlenlp.transformers import LayoutLMModel, LayoutLMForTokenClassification +from paddlenlp.transformers import LayoutLMv2Model, LayoutLMv2ForTokenClassification, LayoutLMv2ForRelationExtraction __all__ = ["LayoutXLMForSer", 'LayoutLMForSer'] pretrained_model_dict = { LayoutXLMModel: 'layoutxlm-base-uncased', - LayoutLMModel: 'layoutlm-base-uncased' + LayoutLMModel: 'layoutlm-base-uncased', + LayoutLMv2Model: 'layoutlmv2-base-uncased' } @@ -58,12 +60,34 @@ class NLPBaseModel(nn.Layer): self.out_channels = 1 -class 
LayoutXLMForSer(NLPBaseModel): +class LayoutLMForSer(NLPBaseModel): def __init__(self, num_classes, pretrained=True, checkpoints=None, **kwargs): - super(LayoutXLMForSer, self).__init__( - LayoutXLMModel, - LayoutXLMForTokenClassification, + super(LayoutLMForSer, self).__init__( + LayoutLMModel, + LayoutLMForTokenClassification, + 'ser', + pretrained, + checkpoints, + num_classes=num_classes) + + def forward(self, x): + x = self.model( + input_ids=x[0], + bbox=x[2], + attention_mask=x[4], + token_type_ids=x[5], + position_ids=None, + output_hidden_states=False) + return x + + +class LayoutLMv2ForSer(NLPBaseModel): + def __init__(self, num_classes, pretrained=True, checkpoints=None, + **kwargs): + super(LayoutLMv2ForSer, self).__init__( + LayoutLMv2Model, + LayoutLMv2ForTokenClassification, 'ser', pretrained, checkpoints, @@ -82,12 +106,12 @@ class LayoutXLMForSer(NLPBaseModel): return x[0] -class LayoutLMForSer(NLPBaseModel): +class LayoutXLMForSer(NLPBaseModel): def __init__(self, num_classes, pretrained=True, checkpoints=None, **kwargs): - super(LayoutLMForSer, self).__init__( - LayoutLMModel, - LayoutLMForTokenClassification, + super(LayoutXLMForSer, self).__init__( + LayoutXLMModel, + LayoutXLMForTokenClassification, 'ser', pretrained, checkpoints, @@ -97,10 +121,33 @@ class LayoutLMForSer(NLPBaseModel): x = self.model( input_ids=x[0], bbox=x[2], + image=x[3], attention_mask=x[4], token_type_ids=x[5], position_ids=None, - output_hidden_states=False) + head_mask=None, + labels=None) + return x[0] + + +class LayoutLMv2ForRe(NLPBaseModel): + def __init__(self, pretrained=True, checkpoints=None, **kwargs): + super(LayoutLMv2ForRe, self).__init__(LayoutLMv2Model, + LayoutLMv2ForRelationExtraction, + 're', pretrained, checkpoints) + + def forward(self, x): + x = self.model( + input_ids=x[0], + bbox=x[1], + labels=None, + image=x[2], + attention_mask=x[3], + token_type_ids=x[4], + position_ids=None, + head_mask=None, + entities=x[5], + relations=x[6]) return x diff --git a/ppocr/optimizer/__init__.py b/ppocr/optimizer/__init__.py index e0c6b90371cb4b09fb894ceeaeb8595e51c6c557..4110fb47678583cff826a9bc855b3fb378a533f9 100644 --- a/ppocr/optimizer/__init__.py +++ b/ppocr/optimizer/__init__.py @@ -25,11 +25,8 @@ __all__ = ['build_optimizer'] def build_lr_scheduler(lr_config, epochs, step_each_epoch): from . import learning_rate lr_config.update({'epochs': epochs, 'step_each_epoch': step_each_epoch}) - if 'name' in lr_config: - lr_name = lr_config.pop('name') - lr = getattr(learning_rate, lr_name)(**lr_config)() - else: - lr = lr_config['learning_rate'] + lr_name = lr_config.pop('name', 'Const') + lr = getattr(learning_rate, lr_name)(**lr_config)() return lr diff --git a/ppocr/optimizer/learning_rate.py b/ppocr/optimizer/learning_rate.py index b1879f3ee509761043c1797d8b67e4e0988af130..fe251f36e736bb1eac8a71a8115c941cbd7443e6 100644 --- a/ppocr/optimizer/learning_rate.py +++ b/ppocr/optimizer/learning_rate.py @@ -275,4 +275,36 @@ class OneCycle(object): start_lr=0.0, end_lr=self.max_lr, last_epoch=self.last_epoch) - return learning_rate \ No newline at end of file + return learning_rate + + +class Const(object): + """ + Const learning rate decay + Args: + learning_rate(float): initial learning rate + step_each_epoch(int): steps each epoch + last_epoch (int, optional): The index of last epoch. Can be set to restart training. Default: -1, means initial learning rate. 
+ """ + + def __init__(self, + learning_rate, + step_each_epoch, + warmup_epoch=0, + last_epoch=-1, + **kwargs): + super(Const, self).__init__() + self.learning_rate = learning_rate + self.last_epoch = last_epoch + self.warmup_epoch = round(warmup_epoch * step_each_epoch) + + def __call__(self): + learning_rate = self.learning_rate + if self.warmup_epoch > 0: + learning_rate = lr.LinearWarmup( + learning_rate=learning_rate, + warmup_steps=self.warmup_epoch, + start_lr=0.0, + end_lr=self.learning_rate, + last_epoch=self.last_epoch) + return learning_rate diff --git a/ppstructure/README.md b/ppstructure/README.md index b4c1ec8d828fd521601c97f9f5d0754eecd13152..bf5a7dcf74cdd6865497daebfc719c40de3eb550 100644 --- a/ppstructure/README.md +++ b/ppstructure/README.md @@ -13,20 +13,18 @@ English | [简体中文](README_ch.md) - [6.1.2 Table recognition](#612-table-recognition) - [6.2 DOC-VQA](#62-doc-vqa) - [7. Model List](#7-model-list) - - + - [7.1 Layout analysis model](#71-layout-analysis-model) + - [7.2 OCR and table recognition model](#72-ocr-and-table-recognition-model) + - [7.3 DOC-VQA model](#73-doc-vqa-model) ## 1. Introduction PP-Structure is an OCR toolkit that can be used for document analysis and processing with complex structures, designed to help developers better complete document understanding tasks - - ## 2. Update log +* 2022.02.12 DOC-VQA add LayoutLMv2 model。 * 2021.12.07 add [DOC-VQA SER and RE tasks](vqa/README.md)。 - - ## 3. Features The main features of PP-Structure are as follows: @@ -38,21 +36,14 @@ The main features of PP-Structure are as follows: - Support custom training for layout analysis and table structure tasks - Support Document Visual Question Answering (DOC-VQA) tasks: Semantic Entity Recognition (SER) and Relation Extraction (RE) - - - ## 4. Results - - ### 4.1 Layout analysis and table recognition The figure shows the pipeline of layout analysis + table recognition. The image is first divided into four areas of image, text, title and table by layout analysis, and then OCR detection and recognition is performed on the three areas of image, text and title, and the table is performed table recognition, where the image will also be stored for use. - - ### 4.2 DOC-VQA * SER @@ -77,19 +68,12 @@ The corresponding category and OCR recognition results are also marked at the to In the figure, the red box represents the question, the blue box represents the answer, and the question and answer are connected by green lines. The corresponding category and OCR recognition results are also marked at the top left of the OCR detection box. - - - ## 5. Quick start Start from [Quick Installation](./docs/quickstart.md) - - ## 6. PP-Structure System - - ### 6.1 Layout analysis and table recognition ![pipeline](../doc/table/pipeline.jpg) @@ -104,39 +88,33 @@ Layout analysis classifies image by region, including the use of Python scripts Table recognition converts table images into excel documents, which include the detection and recognition of table text and the prediction of table structure and cell coordinates. For detailed instructions, please refer to [document](table/README.md) - - ### 6.2 DOC-VQA Document Visual Question Answering (DOC-VQA) if a type of Visual Question Answering (VQA), which includes Semantic Entity Recognition (SER) and Relation Extraction (RE) tasks. Based on SER task, text recognition and classification in images can be completed. Based on THE RE task, we can extract the relation of the text content in the image, such as judge the problem pair. 
For details, please refer to [document](vqa/README.md) - - - ## 7. Model List -PP-Structure系列模型列表(更新中) +PP-Structure Series Model List (Updating) -* Layout analysis model +### 7.1 Layout analysis model |model name|description|download| | --- | --- | --- | | ppyolov2_r50vd_dcn_365e_publaynet | The layout analysis model trained on the PubLayNet dataset can divide image into 5 types of areas **text, title, table, picture, and list** | [PubLayNet](https://paddle-model-ecology.bj.bcebos.com/model/layout-parser/ppyolov2_r50vd_dcn_365e_publaynet.tar) | - -* OCR and table recognition model +### 7.2 OCR and table recognition model |model name|description|model size|download| | --- | --- | --- | --- | -|ch_ppocr_mobile_slim_v2.0_det|Slim pruned lightweight model, supporting Chinese, English, multilingual text detection|2.6M|[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/slim/ch_ppocr_mobile_v2.0_det_prune_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/slim/ch_ppocr_mobile_v2.0_det_prune_infer.tar) | -|ch_ppocr_mobile_slim_v2.0_rec|Slim pruned and quantized lightweight model, supporting Chinese, English and number recognition|6M|[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_slim_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_slim_train.tar) | -|en_ppocr_mobile_v2.0_table_structure|Table structure prediction of English table scene trained on PubLayNet dataset|[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_structure_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.1/table/en_ppocr_mobile_v2.0_table_structure_train.tar) | +|ch_PP-OCRv2_det_slim|[New] Slim quantization with distillation lightweight model, supporting Chinese, English, multilingual text detection| 3M |[inference model](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_slim_quant_infer.tar)| +|ch_PP-OCRv2_rec_slim|[New] Slim quantization with distillation lightweight model, supporting Chinese, English, multilingual text recognition| 9M |[inference model](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_slim_quant_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_slim_quant_train.tar) | +|en_ppocr_mobile_v2.0_table_structure|Table structure prediction of English table scene trained on PubLayNet dataset| 18.6M |[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_structure_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.1/table/en_ppocr_mobile_v2.0_table_structure_train.tar) | -* DOC-VQA model +### 7.3 DOC-VQA model |model name|description|model size|download| | --- | --- | --- | --- | -|PP-Layout_v1.0_ser_pretrained|SER model trained on xfun Chinese dataset based on LayoutXLM|1.4G|[inference model coming soon]() / [trained model](https://paddleocr.bj.bcebos.com/pplayout/PP-Layout_v1.0_ser_pretrained.tar) | -|PP-Layout_v1.0_re_pretrained|RE model trained on xfun Chinese dataset based on LayoutXLM|1.4G|[inference model coming soon]() / [trained model](https://paddleocr.bj.bcebos.com/pplayout/PP-Layout_v1.0_re_pretrained.tar) | +|ser_LayoutXLM_xfun_zh|SER model trained on xfun Chinese dataset based on LayoutXLM|1.4G|[inference model coming soon]() / [trained model](https://paddleocr.bj.bcebos.com/pplayout/ser_LayoutXLM_xfun_zh.tar) | +|re_LayoutXLM_xfun_zh|RE model trained on xfun Chinese
dataset based on LayoutXLM|1.4G|[inference model coming soon]() / [trained model](https://paddleocr.bj.bcebos.com/pplayout/re_LayoutXLM_xfun_zh.tar) | If you need to use other models, you can download the model in [PPOCR model_list](../doc/doc_en/models_list_en.md) and [PPStructure model_list](./docs/model_list.md) diff --git a/ppstructure/README_ch.md b/ppstructure/README_ch.md index a449028dff29739e621bfa2aa77eac63b43e6c84..1013c619bf706a9d653371f48c566c087668b812 100644 --- a/ppstructure/README_ch.md +++ b/ppstructure/README_ch.md @@ -13,18 +13,17 @@ - [6.1.2 表格识别](#612-表格识别) - [6.2 DOC-VQA](#62-doc-vqa) - [7. 模型库](#7-模型库) + - [7.1 版面分析模型](#71-版面分析模型) + - [7.2 OCR和表格识别模型](#72-ocr和表格识别模型) + - [7.2 DOC-VQA 模型](#72-doc-vqa-模型) - ## 1. 简介 PP-Structure是一个可用于复杂文档结构分析和处理的OCR工具包,旨在帮助开发者更好的完成文档理解相关任务。 - - ## 2. 近期更新 -* 2021.12.07 新增DOC-[VQA任务SER和RE](vqa/README.md)。 - - +* 2022.02.12 DOC-VQA增加LayoutLMv2模型。 +* 2021.12.07 新增[DOC-VQA任务SER和RE](vqa/README.md)。 ## 3. 特性 @@ -36,22 +35,14 @@ PP-Structure的主要特性如下: - 支持版面分析和表格结构化两类任务自定义训练 - 支持文档视觉问答(Document Visual Question Answering,DOC-VQA)任务-语义实体识别(Semantic Entity Recognition,SER)和关系抽取(Relation Extraction,RE) - - - ## 4. 效果展示 - - ### 4.1 版面分析和表格识别 图中展示了版面分析+表格识别的整体流程,图片先有版面分析划分为图像、文本、标题和表格四种区域,然后对图像、文本和标题三种区域进行OCR的检测识别,对表格进行表格识别,其中图像还会被存储下来以便使用。 - - - ### 4.2 DOC-VQA * SER @@ -75,18 +66,12 @@ PP-Structure的主要特性如下: 图中红色框表示问题,蓝色框表示答案,问题和答案之间使用绿色线连接。在OCR检测框的左上方也标出了对应的类别和OCR识别结果。 - - ## 5. 快速体验 请参考[快速安装](./docs/quickstart.md)教程。 - - ## 6. PP-Structure 介绍 - - ### 6.1 版面分析+表格识别 ![pipeline](../doc/table/pipeline.jpg) @@ -101,39 +86,34 @@ PP-Structure的主要特性如下: 表格识别将表格图片转换为excel文档,其中包含对于表格文本的检测和识别以及对于表格结构和单元格坐标的预测,详细说明参考[文档](table/README_ch.md)。 - - ### 6.2 DOC-VQA DOC-VQA指文档视觉问答,其中包括语义实体识别 (Semantic Entity Recognition, SER) 和关系抽取 (Relation Extraction, RE) 任务。基于 SER 任务,可以完成对图像中的文本识别与分类;基于 RE 任务,可以完成对图象中的文本内容的关系提取,如判断问题对(pair),详细说明参考[文档](vqa/README.md)。 - - ## 7. 
模型库 PP-Structure系列模型列表(更新中) -* 版面分析模型 +### 7.1 版面分析模型 |模型名称|模型简介|下载地址| | --- | --- | --- | | ppyolov2_r50vd_dcn_365e_publaynet | PubLayNet 数据集训练的版面分析模型,可以划分**文字、标题、表格、图片以及列表**5类区域 | [PubLayNet](https://paddle-model-ecology.bj.bcebos.com/model/layout-parser/ppyolov2_r50vd_dcn_365e_publaynet.tar) | - -* OCR和表格识别模型 +### 7.2 OCR和表格识别模型 |模型名称|模型简介|模型大小|下载地址| | --- | --- | --- | --- | -|ch_ppocr_mobile_slim_v2.0_det|slim裁剪版超轻量模型,支持中英文、多语种文本检测|2.6M|[推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/slim/ch_ppocr_mobile_v2.0_det_prune_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/slim/ch_ppocr_mobile_v2.0_det_prune_infer.tar) | -|ch_ppocr_mobile_slim_v2.0_rec|slim裁剪量化版超轻量模型,支持中英文、数字识别|6M|[推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_slim_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_slim_train.tar) | +|ch_PP-OCRv2_det_slim|【最新】slim量化+蒸馏版超轻量模型,支持中英文、多语种文本检测| 3M |[推理模型](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_slim_quant_infer.tar)| +|ch_PP-OCRv2_rec_slim|【最新】slim量化版超轻量模型,支持中英文、数字识别| 9M |[推理模型](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_slim_quant_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_slim_quant_train.tar) | |en_ppocr_mobile_v2.0_table_structure|PubLayNet数据集训练的英文表格场景的表格结构预测|18.6M|[推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_structure_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.1/table/en_ppocr_mobile_v2.0_table_structure_train.tar) | -* DOC-VQA 模型 +### 7.2 DOC-VQA 模型 |模型名称|模型简介|模型大小|下载地址| | --- | --- | --- | --- | -|PP-Layout_v1.0_ser_pretrained|基于LayoutXLM在xfun中文数据集上训练的SER模型|1.4G|[推理模型 coming soon]() / [训练模型](https://paddleocr.bj.bcebos.com/pplayout/PP-Layout_v1.0_ser_pretrained.tar) | -|PP-Layout_v1.0_re_pretrained|基于LayoutXLM在xfun中文数据集上训练的RE模型|1.4G|[推理模型 coming soon]() / [训练模型](https://paddleocr.bj.bcebos.com/pplayout/PP-Layout_v1.0_re_pretrained.tar) | +|ser_LayoutXLM_xfun_zh|基于LayoutXLM在xfun中文数据集上训练的SER模型|1.4G|[推理模型 coming soon]() / [训练模型](https://paddleocr.bj.bcebos.com/pplayout/ser_LayoutXLM_xfun_zh.tar) | +|re_LayoutXLM_xfun_zh|基于LayoutXLM在xfun中文数据集上训练的RE模型|1.4G|[推理模型 coming soon]() / [训练模型](https://paddleocr.bj.bcebos.com/pplayout/re_LayoutXLM_xfun_zh.tar) | -更多模型下载,可以参考 [PPOCR model_list](../doc/doc_en/models_list.md) and [PPStructure model_list](./docs/model_list.md) +更多模型下载,可以参考 [PP-OCR model_list](../doc/doc_en/models_list.md) and [PP-Structure model_list](./docs/models_list.md) diff --git a/ppstructure/docs/installation.md b/ppstructure/docs/installation.md index 30c25d5dc92f6ccdb0d93dafe9707f30eca0c0a9..155baf29de5701b58c9342cf82897b23f4ab7e45 100644 --- a/ppstructure/docs/installation.md +++ b/ppstructure/docs/installation.md @@ -1,3 +1,9 @@ +- [快速安装](#快速安装) + - [1. PaddlePaddle 和 PaddleOCR](#1-paddlepaddle-和-paddleocr) + - [2. 安装其他依赖](#2-安装其他依赖) + - [2.1 版面分析所需 Layout-Parser](#21-版面分析所需--layout-parser) + - [2.2 VQA所需依赖](#22--vqa所需依赖) + # 快速安装 ## 1. PaddlePaddle 和 PaddleOCR diff --git a/ppstructure/docs/kie.md b/ppstructure/docs/kie.md index 21854b0d24b0b2bbe6a4612b1112b201c5df255d..35498b33478d1010fd2548dfcb8586b4710723a1 100644 --- a/ppstructure/docs/kie.md +++ b/ppstructure/docs/kie.md @@ -1,4 +1,8 @@ - +- [关键信息提取(Key Information Extraction)](#关键信息提取key-information-extraction) + - [1. 快速使用](#1-快速使用) + - [2. 执行训练](#2-执行训练) + - [3. 执行评估](#3-执行评估) + - [4.
参考文献](#4-参考文献) # 关键信息提取(Key Information Extraction) @@ -7,11 +11,6 @@ SDMGR是一个关键信息提取算法,将每个检测到的文本区域分类为预定义的类别,如订单ID、发票号码,金额等。 -* [1. 快速使用](#1-----) -* [2. 执行训练](#2-----) -* [3. 执行评估](#3-----) - - ## 1. 快速使用 训练和测试的数据采用wildreceipt数据集,通过如下指令下载数据集: @@ -36,7 +35,6 @@ python3.7 tools/infer_kie.py -c configs/kie/kie_unet_sdmgr.yml -o Global.checkpo - ## 2. 执行训练 创建数据集软链到PaddleOCR/train_data目录下: @@ -50,7 +48,6 @@ ln -s ../../wildreceipt ./ ``` python3.7 tools/train.py -c configs/kie/kie_unet_sdmgr.yml -o Global.save_model_dir=./output/kie/ ``` - ## 3. 执行评估 ``` @@ -58,7 +55,7 @@ python3.7 tools/eval.py -c configs/kie/kie_unet_sdmgr.yml -o Global.checkpoints= ``` -**参考文献:** +## 4. 参考文献 diff --git a/ppstructure/docs/kie_en.md b/ppstructure/docs/kie_en.md index a424968a9b5a33132afe52a4850cfe541919ae1c..1fe38b0b399e9290526dafa5409673dc87026db7 100644 --- a/ppstructure/docs/kie_en.md +++ b/ppstructure/docs/kie_en.md @@ -1,4 +1,8 @@ - +- [Key Information Extraction(KIE)](#key-information-extractionkie) + - [1. Quick Use](#1-quick-use) + - [2. Model Training](#2-model-training) + - [3. Model Evaluation](#3-model-evaluation) + - [4. Reference](#4-reference) # Key Information Extraction(KIE) @@ -6,13 +10,6 @@ This section provides a tutorial example on how to quickly use, train, and evalu [SDMGR(Spatial Dual-Modality Graph Reasoning)](https://arxiv.org/abs/2103.14470) is a KIE algorithm that classifies each detected text region into predefined categories, such as order ID, invoice number, amount, and etc. - -* [1. Quick Use](#1-----) -* [2. Model Training](#2-----) -* [3. Model Evaluation](#3-----) - - - ## 1. Quick Use [Wildreceipt dataset](https://paperswithcode.com/dataset/wildreceipt) is used for this tutorial. It contains 1765 photos, with 25 classes, and 50000 text boxes, which can be downloaded by wget: @@ -37,7 +34,6 @@ The visualization results are shown in the figure below: - ## 2. Model Training Create a softlink to the folder, `PaddleOCR/train_data`: @@ -51,7 +47,6 @@ The configuration file used for training is `configs/kie/kie_unet_sdmgr.yml`. Th ```shell python3.7 tools/train.py -c configs/kie/kie_unet_sdmgr.yml -o Global.save_model_dir=./output/kie/ ``` - ## 3. Model Evaluation @@ -61,7 +56,7 @@ After training, you can execute the model evaluation with the following command: python3.7 tools/eval.py -c configs/kie/kie_unet_sdmgr.yml -o Global.checkpoints=./output/kie/best_accuracy ``` -**Reference:** +## 4. Reference diff --git a/ppstructure/docs/model_list.md b/ppstructure/docs/models_list.md similarity index 53% rename from ppstructure/docs/model_list.md rename to ppstructure/docs/models_list.md index baec2a2fd08a5b8d51e4c68bc62902feb04de977..d966e18f2a7fd6d76a0fd491058539173b5d9690 100644 --- a/ppstructure/docs/model_list.md +++ b/ppstructure/docs/models_list.md @@ -1,4 +1,13 @@ -# Model List +- [PP-Structure 系列模型列表](#pp-structure-系列模型列表) + - [1. LayoutParser 模型](#1-layoutparser-模型) + - [2. OCR和表格识别模型](#2-ocr和表格识别模型) + - [2.1 OCR](#21-ocr) + - [2.2 表格识别模型](#22-表格识别模型) + - [3. VQA模型](#3-vqa模型) + - [4. KIE模型](#4-kie模型) + +# PP-Structure 系列模型列表 + ## 1. LayoutParser 模型 @@ -10,25 +19,33 @@ ## 2. 
OCR和表格识别模型 +### 2.1 OCR + |模型名称|模型简介|推理模型大小|下载地址| | --- | --- | --- | --- | -|ch_ppocr_mobile_slim_v2.0_det|slim裁剪版超轻量模型,支持中英文、多语种文本检测|2.6M|[推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/slim/ch_ppocr_mobile_v2.0_det_prune_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/slim/ch_ppocr_mobile_v2.0_det_prune_infer.tar) | -|ch_ppocr_mobile_slim_v2.0_rec|slim裁剪量化版超轻量模型,支持中英文、数字识别|6M|[推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_slim_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_slim_train.tar) | |en_ppocr_mobile_v2.0_table_det|PubLayNet数据集训练的英文表格场景的文字检测|4.7M|[推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_det_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.1/table/en_ppocr_mobile_v2.0_table_det_train.tar) | |en_ppocr_mobile_v2.0_table_rec|PubLayNet数据集训练的英文表格场景的文字识别|6.9M|[推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_rec_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.1/table/en_ppocr_mobile_v2.0_table_rec_train.tar) | -|en_ppocr_mobile_v2.0_table_structure|PubLayNet数据集训练的英文表格场景的表格结构预测|18.6M|[推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_structure_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.1/table/en_ppocr_mobile_v2.0_table_structure_train.tar) | -如需要使用其他OCR模型,可以在 [model_list](../../doc/doc_ch/models_list.md) 下载模型或者使用自己训练好的模型配置到`det_model_dir`,`rec_model_dir`两个字段即可。 +如需要使用其他OCR模型,可以在 [PP-OCR model_list](../../doc/doc_ch/models_list.md) 下载模型或者使用自己训练好的模型配置到 `det_model_dir`, `rec_model_dir`两个字段即可。 + +### 2.2 表格识别模型 + +|模型名称|模型简介|推理模型大小|下载地址| +| --- | --- | --- | --- | +|en_ppocr_mobile_v2.0_table_structure|PubLayNet数据集训练的英文表格场景的表格结构预测|18.6M|[推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_structure_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.1/table/en_ppocr_mobile_v2.0_table_structure_train.tar) | ## 3. VQA模型 |模型名称|模型简介|推理模型大小|下载地址| | --- | --- | --- | --- | -|PP-Layout_v1.0_ser_pretrained|基于LayoutXLM在xfun中文数据集上训练的SER模型|1.4G|[推理模型 coming soon]() / [训练模型](https://paddleocr.bj.bcebos.com/pplayout/re_LayoutXLM_xfun_zh.tar) | -|PP-Layout_v1.0_re_pretrained|基于LayoutXLM在xfun中文数据集上训练的RE模型|1.4G|[推理模型 coming soon]() / [训练模型](https://paddleocr.bj.bcebos.com/pplayout/ser_LayoutXLM_xfun_zh.tar) | +|ser_LayoutXLM_xfun_zh|基于LayoutXLM在xfun中文数据集上训练的SER模型|1.4G|[推理模型 coming soon]() / [训练模型](https://paddleocr.bj.bcebos.com/pplayout/re_LayoutXLM_xfun_zh.tar) | +|re_LayoutXLM_xfun_zh|基于LayoutXLM在xfun中文数据集上训练的RE模型|1.4G|[推理模型 coming soon]() / [训练模型](https://paddleocr.bj.bcebos.com/pplayout/ser_LayoutXLM_xfun_zh.tar) | +|ser_LayoutLMv2_xfun_zh|基于LayoutLMv2在xfun中文数据集上训练的SER模型|778M|[推理模型 coming soon]() / [训练模型](https://paddleocr.bj.bcebos.com/pplayout/ser_LayoutLMv2_xfun_zh.tar) | +|re_LayoutLMv2_xfun_zh|基于LayoutLMv2在xfun中文数据集上训练的RE模型|765M|[推理模型 coming soon]() / [训练模型](https://paddleocr.bj.bcebos.com/pplayout/re_LayoutLMv2_xfun_zh.tar) | +|ser_LayoutLM_xfun_zh|基于LayoutLM在xfun中文数据集上训练的SER模型|430M|[推理模型 coming soon]() / [训练模型](https://paddleocr.bj.bcebos.com/pplayout/ser_LayoutLM_xfun_zh.tar) | -## 3. KIE模型 +## 4. 
KIE模型 |模型名称|模型简介|模型大小|下载地址| | --- | --- | --- | --- | -|SDMGR|关键信息提取模型|-|[推理模型 coming soon]() / [训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.1/kie/kie_vgg16.tar)| +|SDMGR|关键信息提取模型|78M|[推理模型 coming soon]() / [训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.1/kie/kie_vgg16.tar)| diff --git a/ppstructure/docs/quickstart.md b/ppstructure/docs/quickstart.md index 668775c6da2b06d973f69a9ce81a37396460cbdf..7016f0fcb6c10176cf6f9d30457a5ff98d2b06e1 100644 --- a/ppstructure/docs/quickstart.md +++ b/ppstructure/docs/quickstart.md @@ -1,15 +1,13 @@ # PP-Structure 快速开始 -* [1. 安装PaddleOCR whl包](#1) -* [2. 便捷使用](#2) - + [2.1 命令行使用](#21) - + [2.2 Python脚本使用](#22) - + [2.3 返回结果说明](#23) - + [2.4 参数说明](#24) -* [3. Python脚本使用](#3) - - - +- [PP-Structure 快速开始](#pp-structure-快速开始) + - [1. 安装依赖包](#1-安装依赖包) + - [2. 便捷使用](#2-便捷使用) + - [2.1 命令行使用](#21-命令行使用) + - [2.2 Python脚本使用](#22-python脚本使用) + - [2.3 返回结果说明](#23-返回结果说明) + - [2.4 参数说明](#24-参数说明) + - [3. Python脚本使用](#3-python脚本使用) ## 1. 安装依赖包 @@ -24,12 +22,8 @@ pip3 install -e . ``` - - ## 2. 便捷使用 - - ### 2.1 命令行使用 * 版面分析+表格识别 @@ -41,8 +35,6 @@ paddleocr --image_dir=../doc/table/1.png --type=structure 请参考:[文档视觉问答](../vqa/README.md)。 - - ### 2.2 Python脚本使用 * 版面分析+表格识别 @@ -76,8 +68,6 @@ im_show.save('result.jpg') 请参考:[文档视觉问答](../vqa/README.md)。 - - ### 2.3 返回结果说明 PP-Structure的返回结果为一个dict组成的list,示例如下 @@ -103,8 +93,6 @@ dict 里各个字段说明如下 请参考:[文档视觉问答](../vqa/README.md)。 - - ### 2.4 参数说明 | 字段 | 说明 | 默认值 | @@ -122,8 +110,6 @@ dict 里各个字段说明如下 运行完成后,每张图片会在`output`字段指定的目录下有一个同名目录,图片里的每个表格会存储为一个excel,图片区域会被裁剪之后保存下来,excel文件和图片名名为表格在图片里的坐标。 - - ## 3. Python脚本使用 * 版面分析+表格识别 diff --git a/ppstructure/layout/README.md b/ppstructure/layout/README.md index 74cb928e30c012d5b469d685fd63b443a7d22613..0931702a7cf411e6589a1375e014a7374442f9f0 100644 --- a/ppstructure/layout/README.md +++ b/ppstructure/layout/README.md @@ -1,28 +1,19 @@ English | [简体中文](README_ch.md) - +- [Getting Started](#getting-started) + - [1. Install whl package](#1--install-whl-package) + - [2. Quick Start](#2-quick-start) + - [3. PostProcess](#3-postprocess) + - [4. Results](#4-results) + - [5. Training](#5-training) # Getting Started -[1. Install whl package](#Install) - -[2. Quick Start](#QuickStart) - -[3. PostProcess](#PostProcess) - -[4. Results](#Results) - -[5. Training](#Training) - - - ## 1. Install whl package ```bash wget https://paddleocr.bj.bcebos.com/whl/layoutparser-0.0.0-py3-none-any.whl pip install -U layoutparser-0.0.0-py3-none-any.whl ``` - - ## 2. Quick Start Use LayoutParser to identify the layout of a document: @@ -77,8 +68,6 @@ The following model configurations and label maps are currently supported, which * TableBank word and TableBank latex are trained on datasets of word documents and latex documents respectively; * Download TableBank dataset contains both word and latex。 - - ## 3. PostProcess Layout parser contains multiple categories, if you only want to get the detection box for a specific category (such as the "Text" category), you can use the following code: @@ -119,7 +108,6 @@ Displays results with only the "Text" category:
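The PostProcess step referenced above filters the detected layout down to a single category, but the code block itself falls outside the visible hunks. The sketch below illustrates that kind of filtering, assuming layoutparser's `lp.Layout` API and a `layout` object returned by `model.detect(image)`; the helper name `keep_text_blocks` is illustrative.

```python
import layoutparser as lp

def keep_text_blocks(layout):
    # Keep only the blocks predicted as "Text".
    text_blocks = lp.Layout([b for b in layout if b.type == 'Text'])
    # Optionally drop text blocks that fall inside a detected "Figure" region.
    figure_blocks = lp.Layout([b for b in layout if b.type == 'Figure'])
    return lp.Layout([b for b in text_blocks
                      if not any(b.is_in(f) for f in figure_blocks)])
```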