diff --git "a/applications/\345\217\221\347\245\250\345\205\263\351\224\256\344\277\241\346\201\257\346\212\275\345\217\226.md" "b/applications/\345\217\221\347\245\250\345\205\263\351\224\256\344\277\241\346\201\257\346\212\275\345\217\226.md" new file mode 100644 index 0000000000000000000000000000000000000000..cd7fa1a0b3c988b21b33fe8f123e7d7c3e851ca5 --- /dev/null +++ "b/applications/\345\217\221\347\245\250\345\205\263\351\224\256\344\277\241\346\201\257\346\212\275\345\217\226.md" @@ -0,0 +1,337 @@ + +# 基于VI-LayoutXLM的发票关键信息抽取 + +- [1. 项目背景及意义](#1-项目背景及意义) +- [2. 项目内容](#2-项目内容) +- [3. 安装环境](#3-安装环境) +- [4. 关键信息抽取](#4-关键信息抽取) + - [4.1 文本检测](#41-文本检测) + - [4.2 文本识别](#42-文本识别) + - [4.3 语义实体识别](#43-语义实体识别) + - [4.4 关系抽取](#44-关系抽取) + + + +## 1. 项目背景及意义 + +关键信息抽取在文档场景中被广泛使用,如身份证中的姓名、住址信息抽取,快递单中的姓名、联系方式等关键字段内容的抽取。传统基于模板匹配的方案需要针对不同的场景制定模板并进行适配,较为繁琐,不够鲁棒。基于该问题,我们借助飞桨提供的PaddleOCR套件中的关键信息抽取方案,实现对增值税发票场景的关键信息抽取。 + +## 2. 项目内容 + +本项目基于PaddleOCR开源套件,以VI-LayoutXLM多模态关键信息抽取模型为基础,针对增值税发票场景进行适配,提取该场景的关键信息。 + +## 3. 安装环境 + +```bash +# 首先git官方的PaddleOCR项目,安装需要的依赖 +# 第一次运行打开该注释 +git clone https://gitee.com/PaddlePaddle/PaddleOCR.git +cd PaddleOCR +# 安装PaddleOCR的依赖 +pip install -r requirements.txt +# 安装关键信息抽取任务的依赖 +pip install -r ./ppstructure/vqa/requirements.txt +``` + +## 4. 关键信息抽取 + +基于文档图像的关键信息抽取包含3个部分:(1)文本检测(2)文本识别(3)关键信息抽取方法,包括语义实体识别或者关系抽取,下面分别进行介绍。 + +### 4.1 文本检测 + + +本文重点关注发票的关键信息抽取模型训练与预测过程,因此在关键信息抽取过程中,直接使用标注的文本检测与识别标注信息进行测试,如果你希望自定义该场景的文本检测模型,完成端到端的关键信息抽取部分,请参考[文本检测模型训练教程](../doc/doc_ch/detection.md),按照训练数据格式准备数据,并完成该场景下垂类文本检测模型的微调过程。 + + +### 4.2 文本识别 + +本文重点关注发票的关键信息抽取模型训练与预测过程,因此在关键信息抽取过程中,直接使用提供的文本检测与识别标注信息进行测试,如果你希望自定义该场景的文本检测模型,完成端到端的关键信息抽取部分,请参考[文本识别模型训练教程](../doc/doc_ch/recognition.md),按照训练数据格式准备数据,并完成该场景下垂类文本识别模型的微调过程。 + +### 4.3 语义实体识别 (Semantic Entity Recognition) + +语义实体识别指的是给定一段文本行,确定其类别(如`姓名`、`住址`等类别)。PaddleOCR中提供了基于VI-LayoutXLM的多模态语义实体识别方法,融合文本、位置与版面信息,相比LayoutXLM多模态模型,去除了其中的视觉骨干网络特征提取部分,引入符合阅读顺序的文本行排序方法,同时使用UDML联合互蒸馏方法进行训练,最终在精度与速度方面均超越LayoutXLM。更多关于VI-LayoutXLM的算法介绍与精度指标,请参考:[VI-LayoutXLM算法介绍](../doc/doc_ch/algorithm_kie_vi_layoutxlm.md)。 + +#### 4.3.1 准备数据 + +发票场景为例,我们首先需要标注出其中的关键字段,我们将其标注为`问题-答案`的key-value pair,如下,编号No为12270830,则`No`字段标注为question,`12270830`字段标注为answer。如下图所示。 + +
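+
+完成标注(或直接使用下文提供的发票数据集)后,可以用类似下面的小脚本粗略校验标注文件:检查每行是否符合`图片名\t标注JSON`的格式、每个标注项是否都包含`transcription`、`label`、`points`字段,并统计各类别的数量。脚本仅作示意,其中的函数名与文件路径均为假设值,请按实际数据路径修改。
+
+```python
+import json
+from collections import Counter
+
+def check_ser_label_file(label_path):
+    """粗略校验SER标注文件:检查必需字段并统计各类别标注数量。"""
+    counter = Counter()
+    with open(label_path, "r", encoding="utf-8") as f:
+        for idx, line in enumerate(f):
+            img_name, annos = line.rstrip("\n").split("\t", 1)
+            for anno in json.loads(annos):
+                assert {"transcription", "label", "points"} <= set(anno), \
+                    f"第{idx}行({img_name})缺少必需字段: {anno}"
+                counter[anno["label"].lower()] += 1
+    print(counter)
+
+# 用法示例(路径为假设值)
+# check_ser_label_file("train_data/zzsfp/train.json")
+```
+
+标注示例如下图所示。
+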
+ +
+ +**注意:** + +* 如果文本检测模型数据标注过程中,没有标注 **非关键信息内容** 的检测框,那么在标注关键信息抽取任务的时候,也不需要标注该部分,如上图所示;如果标注的过程,如果同时标注了**非关键信息内容** 的检测框,那么我们需要将该部分的label记为other。 +* 标注过程中,需要以文本行为单位进行标注,无需标注单个字符的位置信息。 + + +已经处理好的增值税发票数据集从这里下载:[增值税发票数据集下载链接](https://aistudio.baidu.com/aistudio/datasetdetail/165561)。 + +下载好发票数据集,并解压在train_data目录下,目录结构如下所示。 + +``` +train_data + |--zzsfp + |---class_list.txt + |---imgs/ + |---train.json + |---val.json +``` + +其中`class_list.txt`是包含`other`, `question`, `answer`,3个种类的的类别列表(不区分大小写),`imgs`目录底下,`train.json`与`val.json`分别表示训练与评估集合的标注文件。训练集中包含30张图片,验证集中包含8张图片。部分标注如下所示。 + +```py +b33.jpg [{"transcription": "No", "label": "question", "points": [[2882, 472], [3026, 472], [3026, 588], [2882, 588]], }, {"transcription": "12269563", "label": "answer", "points": [[3066, 448], [3598, 448], [3598, 576], [3066, 576]], ]}] +``` + +相比于OCR检测的标注,仅多了`label`字段。 + + +#### 4.3.2 开始训练 + + +VI-LayoutXLM的配置为[ser_vi_layoutxlm_xfund_zh_udml.yml](../configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh_udml.yml),需要修改数据、类别数目以及配置文件。 + +```yml +Architecture: + model_type: &model_type "vqa" + name: DistillationModel + algorithm: Distillation + Models: + Teacher: + pretrained: + freeze_params: false + return_all_feats: true + model_type: *model_type + algorithm: &algorithm "LayoutXLM" + Transform: + Backbone: + name: LayoutXLMForSer + pretrained: True + # one of base or vi + mode: vi + checkpoints: + # 定义类别数目 + num_classes: &num_classes 5 + ... + +PostProcess: + name: DistillationSerPostProcess + model_name: ["Student", "Teacher"] + key: backbone_out + # 定义类别文件 + class_path: &class_path train_data/zzsfp/class_list.txt + +Train: + dataset: + name: SimpleDataSet + # 定义训练数据目录与标注文件 + data_dir: train_data/zzsfp/imgs + label_file_list: + - train_data/zzsfp/train.json + ... + +Eval: + dataset: + # 定义评估数据目录与标注文件 + name: SimpleDataSet + data_dir: train_data/zzsfp/imgs + label_file_list: + - train_data/zzsfp/val.json + ... +``` + +LayoutXLM与VI-LayoutXLM针对该场景的训练结果如下所示。 + +| 模型 | 迭代轮数 | Hmean | +| :---: | :---: | :---: | +| LayoutXLM | 50 | 100% | +| VI-LayoutXLM | 50 | 100% | + +可以看出,由于当前数据量较少,场景比较简单,因此2个模型的Hmean均达到了100%。 + + +#### 4.3.3 模型评估 + +模型训练过程中,使用的是知识蒸馏的策略,最终保留了学生模型的参数,在评估时,我们需要针对学生模型的配置文件进行修改: [ser_vi_layoutxlm_xfund_zh.yml](../configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml),修改内容与训练配置相同,包括**类别数、类别映射文件、数据目录**。 + +修改完成后,执行下面的命令完成评估过程。 + +```bash +# 注意:需要根据你的配置文件地址与保存的模型地址,对评估命令进行修改 +python3 tools/eval.py -c ./fapiao/ser_vi_layoutxlm.yml -o Architecture.Backbone.checkpoints=fapiao/models/ser_vi_layoutxlm_fapiao_udml/best_accuracy +``` + +输出结果如下所示。 + +``` +[2022/08/18 08:49:58] ppocr INFO: metric eval *************** +[2022/08/18 08:49:58] ppocr INFO: precision:1.0 +[2022/08/18 08:49:58] ppocr INFO: recall:1.0 +[2022/08/18 08:49:58] ppocr INFO: hmean:1.0 +[2022/08/18 08:49:58] ppocr INFO: fps:1.9740402401574881 +``` + +#### 4.3.4 模型预测 + +使用下面的命令进行预测。 + +```bash +python3 tools/infer_vqa_token_ser.py -c fapiao/ser_vi_layoutxlm.yml -o Architecture.Backbone.checkpoints=fapiao/models/ser_vi_layoutxlm_fapiao_udml/best_accuracy Global.infer_img=./train_data/XFUND/zh_val/val.json Global.infer_mode=False +``` + +预测结果会保存在配置文件中的`Global.save_res_path`目录中。 + +部分预测结果如下所示。 + +
+ +
+ + +* 注意:在预测时,使用的文本检测与识别结果为标注的结果,直接从json文件里面进行读取。 + +如果希望使用OCR引擎结果得到的结果进行推理,则可以使用下面的命令进行推理。 + + +```bash +python3 tools/infer_vqa_token_ser.py -c fapiao/ser_vi_layoutxlm.yml -o Architecture.Backbone.checkpoints=fapiao/models/ser_vi_layoutxlm_fapiao_udml/best_accuracy Global.infer_img=./train_data/zzsfp/imgs/b25.jpg Global.infer_mode=True +``` + +结果如下所示。 + +
+ +
+ +它会使用PP-OCRv3的文本检测与识别模型进行获取文本位置与内容信息。 + +可以看出,由于训练的过程中,没有标注额外的字段为other类别,所以大多数检测出来的字段被预测为question或者answer。 + +如果希望构建基于你在垂类场景训练得到的OCR检测与识别模型,可以使用下面的方法传入检测与识别的inference 模型路径,即可完成OCR文本检测与识别以及SER的串联过程。 + +```bash +python3 tools/infer_vqa_token_ser.py -c fapiao/ser_vi_layoutxlm.yml -o Architecture.Backbone.checkpoints=fapiao/models/ser_vi_layoutxlm_fapiao_udml/best_accuracy Global.infer_img=./train_data/zzsfp/imgs/b25.jpg Global.infer_mode=True Global.kie_rec_model_dir="your_rec_model" Global.kie_det_model_dir="your_det_model" +``` + +### 4.4 关系抽取(Relation Extraction) + +使用SER模型,可以获取图像中所有的question与answer的字段,继续这些字段的类别,我们需要进一步获取question与answer之间的连接,因此需要进一步训练关系抽取模型,解决该问题。本文也基于VI-LayoutXLM多模态预训练模型,进行下游RE任务的模型训练。 + +#### 4.4.1 准备数据 + +以发票场景为例,相比于SER任务,RE中还需要标记每个文本行的id信息以及链接关系linking,如下所示。 + +
+ +
+ + +标注文件的部分内容如下所示。 + +```py +b33.jpg [{"transcription": "No", "label": "question", "points": [[2882, 472], [3026, 472], [3026, 588], [2882, 588]], "id": 0, "linking": [[0, 1]]}, {"transcription": "12269563", "label": "answer", "points": [[3066, 448], [3598, 448], [3598, 576], [3066, 576]], "id": 1, "linking": [[0, 1]]}] +``` + +相比与SER的标注,多了`id`与`linking`的信息,分别表示唯一标识以及连接关系。 + +已经处理好的增值税发票数据集从这里下载:[增值税发票数据集下载链接](https://aistudio.baidu.com/aistudio/datasetdetail/165561)。 + +#### 4.4.2 开始训练 + +基于VI-LayoutXLM的RE任务配置为[re_vi_layoutxlm_xfund_zh_udml.yml](../configs/kie/vi_layoutxlm/re_vi_layoutxlm_xfund_zh_udml.yml),需要修改**数据路径、类别列表文件**。 + +```yml +Train: + dataset: + name: SimpleDataSet + # 定义训练数据目录与标注文件 + data_dir: train_data/zzsfp/imgs + label_file_list: + - train_data/zzsfp/train.json + transforms: + - DecodeImage: # load image + img_mode: RGB + channel_first: False + - VQATokenLabelEncode: # Class handling label + contains_re: True + algorithm: *algorithm + class_path: &class_path train_data/zzsfp/class_list.txt + ... + +Eval: + dataset: + # 定义评估数据目录与标注文件 + name: SimpleDataSet + data_dir: train_data/zzsfp/imgs + label_file_list: + - train_data/zzsfp/val.json + ... + +``` + +LayoutXLM与VI-LayoutXLM针对该场景的训练结果如下所示。 + +| 模型 | 迭代轮数 | Hmean | +| :---: | :---: | :---: | +| LayoutXLM | 50 | 98.0% | +| VI-LayoutXLM | 50 | 99.3% | + +可以看出,对于VI-LayoutXLM相比LayoutXLM的Hmean高了1.3%。 + + +#### 4.4.3 模型评估 + +模型训练过程中,使用的是知识蒸馏的策略,最终保留了学生模型的参数,在评估时,我们需要针对学生模型的配置文件进行修改: [re_vi_layoutxlm_xfund_zh.yml](../configs/kie/vi_layoutxlm/re_vi_layoutxlm_xfund_zh.yml),修改内容与训练配置相同,包括**类别映射文件、数据目录**。 + +修改完成后,执行下面的命令完成评估过程。 + +```bash +# 注意:需要根据你的配置文件地址与保存的模型地址,对评估命令进行修改 +python3 tools/eval.py -c ./fapiao/re_vi_layoutxlm.yml -o Architecture.Backbone.checkpoints=fapiao/models/re_vi_layoutxlm_fapiao_udml/best_accuracy +``` + +输出结果如下所示。 + +```py +[2022/08/18 12:17:14] ppocr INFO: metric eval *************** +[2022/08/18 12:17:14] ppocr INFO: precision:1.0 +[2022/08/18 12:17:14] ppocr INFO: recall:0.9873417721518988 +[2022/08/18 12:17:14] ppocr INFO: hmean:0.9936305732484078 +[2022/08/18 12:17:14] ppocr INFO: fps:2.765963539771157 +``` + +#### 4.4.4 模型预测 + +使用下面的命令进行预测。 + +```bash +# -c 后面的是RE任务的配置文件 +# -o 后面的字段是RE任务的配置 +# -c_ser 后面的是SER任务的配置文件 +# -c_ser 后面的字段是SER任务的配置 +python3 tools/infer_vqa_token_ser_re.py -c fapiao/re_vi_layoutxlm.yml -o Architecture.Backbone.checkpoints=fapiao/models/re_vi_layoutxlm_fapiao_udml/best_accuracy Global.infer_img=./train_data/zzsfp/val.json Global.infer_mode=False -c_ser fapiao/ser_vi_layoutxlm.yml -o_ser Architecture.Backbone.checkpoints=fapiao/models/ser_vi_layoutxlm_fapiao_udml/best_accuracy +``` + +预测结果会保存在配置文件中的`Global.save_res_path`目录中。 + +部分预测结果如下所示。 + +
+ +
+ + +* 注意:在预测时,使用的文本检测与识别结果为标注的结果,直接从json文件里面进行读取。 + +如果希望使用OCR引擎结果得到的结果进行推理,则可以使用下面的命令进行推理。 + +```bash +python3 tools/infer_vqa_token_ser_re.py -c fapiao/re_vi_layoutxlm.yml -o Architecture.Backbone.checkpoints=fapiao/models/re_vi_layoutxlm_fapiao_udml/best_accuracy Global.infer_img=./train_data/zzsfp/val.json Global.infer_mode=True -c_ser fapiao/ser_vi_layoutxlm.yml -o_ser Architecture.Backbone.checkpoints=fapiao/models/ser_vi_layoutxlm_fapiao_udml/best_accuracy +``` + +如果希望构建基于你在垂类场景训练得到的OCR检测与识别模型,可以使用下面的方法传入,即可完成SER + RE的串联过程。 + +```bash +python3 tools/infer_vqa_token_ser_re.py -c fapiao/re_vi_layoutxlm.yml -o Architecture.Backbone.checkpoints=fapiao/models/re_vi_layoutxlm_fapiao_udml/best_accuracy Global.infer_img=./train_data/zzsfp/val.json Global.infer_mode=True -c_ser fapiao/ser_vi_layoutxlm.yml -o_ser Architecture.Backbone.checkpoints=fapiao/models/ser_vi_layoutxlm_fapiao_udml/best_accuracy Global.kie_rec_model_dir="your_rec_model" Global.kie_det_model_dir="your_det_model" +``` diff --git a/configs/kie/layoutlm_series/re_layoutlmv2_xfund_zh.yml b/configs/kie/layoutlm_series/re_layoutlmv2_xfund_zh.yml index 4b330d8d58bef2d549ec7e0fea5986746a23fbe4..3e3578d8cac1aadd484f583dbe0955f7c47fca73 100644 --- a/configs/kie/layoutlm_series/re_layoutlmv2_xfund_zh.yml +++ b/configs/kie/layoutlm_series/re_layoutlmv2_xfund_zh.yml @@ -11,11 +11,11 @@ Global: save_inference_dir: use_visualdl: False seed: 2022 - infer_img: ppstructure/docs/vqa/input/zh_val_21.jpg + infer_img: ppstructure/docs/kie/input/zh_val_21.jpg save_res_path: ./output/re_layoutlmv2_xfund_zh/res/ Architecture: - model_type: vqa + model_type: kie algorithm: &algorithm "LayoutLMv2" Transform: Backbone: diff --git a/configs/kie/layoutlm_series/re_layoutxlm_xfund_zh.yml b/configs/kie/layoutlm_series/re_layoutxlm_xfund_zh.yml index a092106eea10e0457419e5551dd75819adeddf1b..2401cf317987c5614a476065191e750587bc09b5 100644 --- a/configs/kie/layoutlm_series/re_layoutxlm_xfund_zh.yml +++ b/configs/kie/layoutlm_series/re_layoutxlm_xfund_zh.yml @@ -11,11 +11,11 @@ Global: save_inference_dir: use_visualdl: False seed: 2022 - infer_img: ppstructure/docs/vqa/input/zh_val_21.jpg + infer_img: ppstructure/docs/kie/input/zh_val_21.jpg save_res_path: ./output/re_layoutxlm_xfund_zh/res/ Architecture: - model_type: vqa + model_type: kie algorithm: &algorithm "LayoutXLM" Transform: Backbone: diff --git a/configs/kie/layoutlm_series/ser_layoutlm_xfund_zh.yml b/configs/kie/layoutlm_series/ser_layoutlm_xfund_zh.yml index 8c754dd8c542b12de4ee493052407bb0da687fd0..34c7d4114062e9227d48ad5684024e2776e68447 100644 --- a/configs/kie/layoutlm_series/ser_layoutlm_xfund_zh.yml +++ b/configs/kie/layoutlm_series/ser_layoutlm_xfund_zh.yml @@ -11,11 +11,11 @@ Global: save_inference_dir: use_visualdl: False seed: 2022 - infer_img: ppstructure/docs/vqa/input/zh_val_42.jpg + infer_img: ppstructure/docs/kie/input/zh_val_42.jpg save_res_path: ./output/re_layoutlm_xfund_zh/res Architecture: - model_type: vqa + model_type: kie algorithm: &algorithm "LayoutLM" Transform: Backbone: diff --git a/configs/kie/layoutlm_series/ser_layoutlmv2_xfund_zh.yml b/configs/kie/layoutlm_series/ser_layoutlmv2_xfund_zh.yml index 3c0ffabe4465e36e5699a135a9ed0b6254cbf20b..c5e833524011b03110db3bd6f4bf845db8473922 100644 --- a/configs/kie/layoutlm_series/ser_layoutlmv2_xfund_zh.yml +++ b/configs/kie/layoutlm_series/ser_layoutlmv2_xfund_zh.yml @@ -11,11 +11,11 @@ Global: save_inference_dir: use_visualdl: False seed: 2022 - infer_img: ppstructure/docs/vqa/input/zh_val_42.jpg + infer_img: 
ppstructure/docs/kie/input/zh_val_42.jpg save_res_path: ./output/ser_layoutlmv2_xfund_zh/res/ Architecture: - model_type: vqa + model_type: kie algorithm: &algorithm "LayoutLMv2" Transform: Backbone: diff --git a/configs/kie/layoutlm_series/ser_layoutxlm_xfund_zh.yml b/configs/kie/layoutlm_series/ser_layoutxlm_xfund_zh.yml index 18f87bdebc249940ef3ec1897af3ad1a240f3705..abcfec2d16f13d4b4266633dbb509e0fba6d931f 100644 --- a/configs/kie/layoutlm_series/ser_layoutxlm_xfund_zh.yml +++ b/configs/kie/layoutlm_series/ser_layoutxlm_xfund_zh.yml @@ -11,11 +11,11 @@ Global: save_inference_dir: use_visualdl: False seed: 2022 - infer_img: ppstructure/docs/vqa/input/zh_val_42.jpg + infer_img: ppstructure/docs/kie/input/zh_val_42.jpg save_res_path: ./output/ser_layoutxlm_xfund_zh/res Architecture: - model_type: vqa + model_type: kie algorithm: &algorithm "LayoutXLM" Transform: Backbone: diff --git a/configs/kie/vi_layoutxlm/re_vi_layoutxlm_xfund_zh.yml b/configs/kie/vi_layoutxlm/re_vi_layoutxlm_xfund_zh.yml index 89f7d5c3cb74854bb9fe7e28fdc8365ed37655be..ea9f50ef56ec8b169333263c1d5e96586f9472b3 100644 --- a/configs/kie/vi_layoutxlm/re_vi_layoutxlm_xfund_zh.yml +++ b/configs/kie/vi_layoutxlm/re_vi_layoutxlm_xfund_zh.yml @@ -11,11 +11,13 @@ Global: save_inference_dir: use_visualdl: False seed: 2022 - infer_img: ppstructure/docs/vqa/input/zh_val_21.jpg + infer_img: ppstructure/docs/kie/input/zh_val_21.jpg save_res_path: ./output/re/xfund_zh/with_gt + kie_rec_model_dir: + kie_det_model_dir: Architecture: - model_type: vqa + model_type: kie algorithm: &algorithm "LayoutXLM" Transform: Backbone: diff --git a/configs/kie/vi_layoutxlm/re_vi_layoutxlm_xfund_zh_udml.yml b/configs/kie/vi_layoutxlm/re_vi_layoutxlm_xfund_zh_udml.yml index c1bfdb6c6cee1c9618602016fec6cc1ec0a7b3bf..b96528d2738e7cfb2575feca4146af1eed0c5d2f 100644 --- a/configs/kie/vi_layoutxlm/re_vi_layoutxlm_xfund_zh_udml.yml +++ b/configs/kie/vi_layoutxlm/re_vi_layoutxlm_xfund_zh_udml.yml @@ -11,11 +11,11 @@ Global: save_inference_dir: use_visualdl: False seed: 2022 - infer_img: ppstructure/docs/vqa/input/zh_val_21.jpg + infer_img: ppstructure/docs/kie/input/zh_val_21.jpg save_res_path: ./output/re/xfund_zh/with_gt Architecture: - model_type: &model_type "vqa" + model_type: &model_type "kie" name: DistillationModel algorithm: Distillation Models: diff --git a/configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml b/configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml index d54125db64cef289457c4b855fe9bded3fa4149f..b8aa44dde8fd3fdc4ff14bbca20513b95178cdb0 100644 --- a/configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml +++ b/configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml @@ -11,16 +11,18 @@ Global: save_inference_dir: use_visualdl: False seed: 2022 - infer_img: ppstructure/docs/vqa/input/zh_val_42.jpg + infer_img: ppstructure/docs/kie/input/zh_val_42.jpg # if you want to predict using the groundtruth ocr info, # you can use the following config # infer_img: train_data/XFUND/zh_val/val.json # infer_mode: False save_res_path: ./output/ser/xfund_zh/res + kie_rec_model_dir: + kie_det_model_dir: Architecture: - model_type: vqa + model_type: kie algorithm: &algorithm "LayoutXLM" Transform: Backbone: diff --git a/configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh_udml.yml b/configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh_udml.yml index 6f0961c8e80312ab26a8d1649bf2bb10f8792efb..238bbd2b2c7083b5534062afd3e6c11a87494a56 100644 --- a/configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh_udml.yml +++ 
b/configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh_udml.yml @@ -11,12 +11,12 @@ Global: save_inference_dir: use_visualdl: False seed: 2022 - infer_img: ppstructure/docs/vqa/input/zh_val_42.jpg + infer_img: ppstructure/docs/kie/input/zh_val_42.jpg save_res_path: ./output/ser_layoutxlm_xfund_zh/res Architecture: - model_type: &model_type "vqa" + model_type: &model_type "kie" name: DistillationModel algorithm: Distillation Models: diff --git a/doc/doc_ch/algorithm_kie_layoutxlm.md b/doc/doc_ch/algorithm_kie_layoutxlm.md index 8b50e98c1c4680809287472baca4f1c88d115704..e693be49b7bc89e04b169fe74cf76525b2494948 100644 --- a/doc/doc_ch/algorithm_kie_layoutxlm.md +++ b/doc/doc_ch/algorithm_kie_layoutxlm.md @@ -66,10 +66,10 @@ LayoutXLM模型基于SER任务进行推理,可以执行如下命令: ```bash cd ppstructure -python3 vqa/predict_vqa_token_ser.py \ - --vqa_algorithm=LayoutXLM \ +python3 kie/predict_kie_token_ser.py \ + --kie_algorithm=LayoutXLM \ --ser_model_dir=../inference/ser_layoutxlm_infer \ - --image_dir=./docs/vqa/input/zh_val_42.jpg \ + --image_dir=./docs/kie/input/zh_val_42.jpg \ --ser_dict_path=../train_data/XFUND/class_list_xfun.txt \ --vis_font_path=../doc/fonts/simfang.ttf ``` @@ -77,7 +77,7 @@ python3 vqa/predict_vqa_token_ser.py \ SER可视化结果默认保存到`./output`文件夹里面,结果示例如下:
- +
diff --git a/doc/doc_ch/algorithm_kie_vi_layoutxlm.md b/doc/doc_ch/algorithm_kie_vi_layoutxlm.md index 155849a6c91bbd94be89a5f59e1a77bc68609d98..f1bb4b1e62736e88594196819dcc41980f1716bf 100644 --- a/doc/doc_ch/algorithm_kie_vi_layoutxlm.md +++ b/doc/doc_ch/algorithm_kie_vi_layoutxlm.md @@ -59,10 +59,10 @@ VI-LayoutXLM模型基于SER任务进行推理,可以执行如下命令: ```bash cd ppstructure -python3 vqa/predict_vqa_token_ser.py \ - --vqa_algorithm=LayoutXLM \ +python3 kie/predict_kie_token_ser.py \ + --kie_algorithm=LayoutXLM \ --ser_model_dir=../inference/ser_vi_layoutxlm_infer \ - --image_dir=./docs/vqa/input/zh_val_42.jpg \ + --image_dir=./docs/kie/input/zh_val_42.jpg \ --ser_dict_path=../train_data/XFUND/class_list_xfun.txt \ --vis_font_path=../doc/fonts/simfang.ttf \ --ocr_order_method="tb-yx" @@ -71,7 +71,7 @@ python3 vqa/predict_vqa_token_ser.py \ SER可视化结果默认保存到`./output`文件夹里面,结果示例如下:
- +
diff --git a/doc/doc_ch/dataset/kie_datasets.md b/doc/doc_ch/dataset/kie_datasets.md index 4535ae5f8a1ac6d2dc3d4585f33a3ec290e2373e..7f8d14cbc4ad724621f28c7d6ca1f8c2ac79f097 100644 --- a/doc/doc_ch/dataset/kie_datasets.md +++ b/doc/doc_ch/dataset/kie_datasets.md @@ -1,6 +1,7 @@ -# 信息抽取数据集 +# 关键信息抽取数据集 这里整理了常见的DocVQA数据集,持续更新中,欢迎各位小伙伴贡献数据集~ + - [FUNSD数据集](#funsd) - [XFUND数据集](#xfund) - [wildreceipt数据集](#wildreceipt) diff --git a/doc/doc_ch/kie.md b/doc/doc_ch/kie.md index da86797a21648d9b987a55493b714f6b21f21c01..b6f38a662fd98597011c5a51ff29c417d880ca17 100644 --- a/doc/doc_ch/kie.md +++ b/doc/doc_ch/kie.md @@ -64,7 +64,7 @@ zh_train_1.jpg [{"transcription": "中国人体器官捐献", "label": "other" 验证集构建方式与训练集相同。 -* 字典文件 +**(3)字典文件** 训练集与验证集中的文本行包含标签信息,所有标签的列表存在字典文件中(如`class_list.txt`),字典文件中的每一行表示为一个类别名称。 @@ -103,7 +103,7 @@ HEADER ## 1.3. 数据下载 -如果你没有本地数据集,可以从[XFUND](https://github.com/doc-analysis/XFUND)或者[FUNSD](https://guillaumejaume.github.io/FUNSD/)官网下载数据,然后使用XFUND与FUNSD的处理脚本([XFUND](../../ppstructure/vqa/tools/trans_xfun_data.py), [FUNSD](../../ppstructure/vqa/tools/trans_funsd_label.py)),生成用于PaddleOCR训练的数据格式,并使用公开数据集快速体验关键信息抽取的流程。 +如果你没有本地数据集,可以从[XFUND](https://github.com/doc-analysis/XFUND)或者[FUNSD](https://guillaumejaume.github.io/FUNSD/)官网下载数据,然后使用XFUND与FUNSD的处理脚本([XFUND](../../ppstructure/kie/tools/trans_xfun_data.py), [FUNSD](../../ppstructure/kie/tools/trans_funsd_label.py)),生成用于PaddleOCR训练的数据格式,并使用公开数据集快速体验关键信息抽取的流程。 更多关于公开数据集的介绍,请参考[关键信息抽取数据集说明文档](./dataset/kie_datasets.md)。 @@ -209,7 +209,7 @@ Architecture: num_classes: &num_classes 7 PostProcess: - name: VQASerTokenLayoutLMPostProcess + name: kieSerTokenLayoutLMPostProcess # 修改字典文件的路径为你自定义的数据集的字典路径 class_path: &class_path train_data/XFUND/class_list_xfun.txt @@ -347,25 +347,25 @@ output/ser_vi_layoutxlm_xfund_zh/ ```bash -python3 tools/infer_vqa_token_ser.py -c configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml -o Architecture.Backbone.checkpoints=./output/ser_vi_layoutxlm_xfund_zh/best_accuracy Global.infer_img=./ppstructure/docs/vqa/input/zh_val_42.jpg +python3 tools/infer_kie_token_ser.py -c configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml -o Architecture.Backbone.checkpoints=./output/ser_vi_layoutxlm_xfund_zh/best_accuracy Global.infer_img=./ppstructure/docs/kie/input/zh_val_42.jpg ``` 预测图片如下所示,图片会存储在`Global.save_res_path`路径中。
- +
预测过程中,默认会加载PP-OCRv3的检测识别模型,用于OCR的信息抽取,如果希望加载预先获取的OCR结果,可以使用下面的方式进行预测,指定`Global.infer_img`为标注文件,其中包含图片路径以及OCR信息,同时指定`Global.infer_mode`为False,表示此时不使用OCR预测引擎。 ```bash -python3 tools/infer_vqa_token_ser.py -c configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml -o Architecture.Backbone.checkpoints=./output/ser_vi_layoutxlm_xfund_zh/best_accuracy Global.infer_img=./train_data/XFUND/zh_val/val.json Global.infer_mode=False +python3 tools/infer_kie_token_ser.py -c configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml -o Architecture.Backbone.checkpoints=./output/ser_vi_layoutxlm_xfund_zh/best_accuracy Global.infer_img=./train_data/XFUND/zh_val/val.json Global.infer_mode=False ``` 对于上述图片,如果使用标注的OCR结果进行信息抽取,预测结果如下。
- +
可以看出,部分检测框信息更加准确,但是整体信息抽取识别结果基本一致。 @@ -375,20 +375,26 @@ python3 tools/infer_vqa_token_ser.py -c configs/kie/vi_layoutxlm/ser_vi_layoutxl ```bash -python3 ./tools/infer_vqa_token_ser_re.py -c configs/kie/vi_layoutxlm/re_vi_layoutxlm_xfund_zh.yml -o Architecture.Backbone.checkpoints=./pretrain_models/re_vi_layoutxlm_udml_xfund_zh/re_layoutxlm_xfund_zh_v4_udml/best_accuracy/ Global.infer_img=./train_data/XFUND/zh_val/image/ -c_ser configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml -o_ser Architecture.Backbone.checkpoints=pretrain_models/ser_vi_layoutxlm_udml_xfund_zh/best_accuracy/ +python3 ./tools/infer_kie_token_ser_re.py \ + -c configs/kie/vi_layoutxlm/re_vi_layoutxlm_xfund_zh.yml \ + -o Architecture.Backbone.checkpoints=./pretrain_models/re_vi_layoutxlm_udml_xfund_zh/best_accuracy/ \ + Global.infer_img=./train_data/XFUND/zh_val/image/ \ + -c_ser configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml \ + -o_ser Architecture.Backbone.checkpoints=pretrain_models/ \ + ser_vi_layoutxlm_udml_xfund_zh/best_accuracy/ ``` 预测结果如下所示。
- +
如果希望使用标注或者预先获取的OCR信息进行关键信息抽取,同上,可以指定`Global.infer_mode`为False,指定`Global.infer_img`为标注文件。 ```bash -python3 ./tools/infer_vqa_token_ser_re.py -c configs/kie/vi_layoutxlm/re_vi_layoutxlm_xfund_zh.yml -o Architecture.Backbone.checkpoints=./pretrain_models/re_vi_layoutxlm_udml_xfund_zh/re_layoutxlm_xfund_zh_v4_udml/best_accuracy/ Global.infer_img=./train_data/XFUND/zh_val/val.json Global.infer_mode=False -c_ser configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml -o_ser Architecture.Backbone.checkpoints=pretrain_models/ser_vi_layoutxlm_udml_xfund_zh/best_accuracy/ +python3 ./tools/infer_kie_token_ser_re.py -c configs/kie/vi_layoutxlm/re_vi_layoutxlm_xfund_zh.yml -o Architecture.Backbone.checkpoints=./pretrain_models/re_vi_layoutxlm_udml_xfund_zh/re_layoutxlm_xfund_zh_v4_udml/best_accuracy/ Global.infer_img=./train_data/XFUND/zh_val/val.json Global.infer_mode=False -c_ser configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml -o_ser Architecture.Backbone.checkpoints=pretrain_models/ser_vi_layoutxlm_udml_xfund_zh/best_accuracy/ ``` 其中`c_ser`表示SER的配置文件,`o_ser` 后面需要加上待修改的SER模型与配置文件,如预训练权重等。 @@ -397,7 +403,7 @@ python3 ./tools/infer_vqa_token_ser_re.py -c configs/kie/vi_layoutxlm/re_vi_layo 预测结果如下所示。
- +
可以看出,直接使用标注的OCR结果的RE预测结果要更加准确一些。 @@ -417,8 +423,8 @@ inference 模型(`paddle.jit.save`保存的模型) ```bash # -c 后面设置训练算法的yml配置文件 # -o 配置可选参数 -# Global.pretrained_model 参数设置待转换的训练模型地址。 -# Global.save_inference_dir参数设置转换的模型将保存的地址。 +# Architecture.Backbone.checkpoints 参数设置待转换的训练模型地址 +# Global.save_inference_dir 参数设置转换的模型将保存的地址 python3 tools/export_model.py -c configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml -o Architecture.Backbone.checkpoints=./output/ser_vi_layoutxlm_xfund_zh/best_accuracy Global.save_inference_dir=./inference/ser_vi_layoutxlm ``` @@ -440,10 +446,10 @@ VI-LayoutXLM模型基于SER任务进行推理,可以执行如下命令: ```bash cd ppstructure -python3 vqa/predict_vqa_token_ser.py \ - --vqa_algorithm=LayoutXLM \ +python3 kie/predict_kie_token_ser.py \ + --kie_algorithm=LayoutXLM \ --ser_model_dir=../inference/ser_vi_layoutxlm \ - --image_dir=./docs/vqa/input/zh_val_42.jpg \ + --image_dir=./docs/kie/input/zh_val_42.jpg \ --ser_dict_path=../train_data/XFUND/class_list_xfun.txt \ --vis_font_path=../doc/fonts/simfang.ttf \ --ocr_order_method="tb-yx" @@ -452,7 +458,7 @@ python3 vqa/predict_vqa_token_ser.py \ 可视化SER结果结果默认保存到`./output`文件夹里面。结果示例如下:
- +
diff --git a/doc/doc_en/algorithm_kie_layoutxlm_en.md b/doc/doc_en/algorithm_kie_layoutxlm_en.md new file mode 100644 index 0000000000000000000000000000000000000000..910c1f4d497a6e503f0a7a5ec26dbeceb2d321a1 --- /dev/null +++ b/doc/doc_en/algorithm_kie_layoutxlm_en.md @@ -0,0 +1,162 @@ +# KIE Algorithm - LayoutXLM + + +- [1. Introduction](#1-introduction) +- [2. Environment](#2-environment) +- [3. Model Training / Evaluation / Prediction](#3-model-training--evaluation--prediction) +- [4. Inference and Deployment](#4-inference-and-deployment) + - [4.1 Python Inference](#41-python-inference) + - [4.2 C++ Inference](#42-c-inference) + - [4.3 Serving](#43-serving) + - [4.4 More](#44-more) +- [5. FAQ](#5-faq) +- [Citation](#Citation) + + +## 1. Introduction + +Paper: + +> [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) +> +> Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei +> +> 2021 + +On XFUND_zh dataset, the algorithm reproduction Hmean is as follows. + +|Model|Backbone|Task |Cnnfig|Hmean|Download link| +| --- | --- |--|--- | --- | --- | +|LayoutXLM|LayoutXLM-base|SER |[ser_layoutxlm_xfund_zh.yml](../../configs/kie/layoutlm_series/ser_layoutxlm_xfund_zh.yml)|90.38%|[trained model](https://paddleocr.bj.bcebos.com/pplayout/ser_LayoutXLM_xfun_zh.tar)/[inference model](https://paddleocr.bj.bcebos.com/pplayout/ser_LayoutXLM_xfun_zh_infer.tar)| +|LayoutXLM|LayoutXLM-base|RE | [re_layoutxlm_xfund_zh.yml](../../configs/kie/layoutlm_series/re_layoutxlm_xfund_zh.yml)|74.83%|[trained model](https://paddleocr.bj.bcebos.com/pplayout/re_LayoutXLM_xfun_zh.tar)/[inference model(coming soon)]()| + + +## 2. Environment + +Please refer to ["Environment Preparation"](./environment_en.md) to configure the PaddleOCR environment, and refer to ["Project Clone"](./clone_en.md) to clone the project code. + + +## 3. Model Training / Evaluation / Prediction + +Please refer to [KIE tutorial](./kie_en.md)。PaddleOCR has modularized the code structure, so that you only need to **replace the configuration file** to train different models. + + + +## 4. Inference and Deployment + +### 4.1 Python Inference + +**Note:** Currently, the RE model inference process is still in the process of adaptation. We take SER model as an example to introduce the KIE process based on LayoutXLM model. + +First, we need to export the trained model into inference model. Take LayoutXLM model trained on XFUND_zh as an example ([trained model download link](https://paddleocr.bj.bcebos.com/pplayout/ser_LayoutXLM_xfun_zh.tar)). Use the following command to export. + + +``` bash +wget https://paddleocr.bj.bcebos.com/pplayout/ser_LayoutXLM_xfun_zh.tar +tar -xf ser_LayoutXLM_xfun_zh.tar +python3 tools/export_model.py -c configs/kie/layoutlm_series/ser_layoutxlm_xfund_zh.yml -o Architecture.Backbone.checkpoints=./ser_LayoutXLM_xfun_zh/best_accuracy Global.save_inference_dir=./inference/ser_layoutxlm +``` + +Use the following command to infer using LayoutXLM SER model. + +```bash +cd ppstructure +python3 kie/predict_kie_token_ser.py \ + --kie_algorithm=LayoutXLM \ + --ser_model_dir=../inference/ser_layoutxlm_infer \ + --image_dir=./docs/kie/input/zh_val_42.jpg \ + --ser_dict_path=../train_data/XFUND/class_list_xfun.txt \ + --vis_font_path=../doc/fonts/simfang.ttf +``` + +The SER visualization results are saved in the `./output` directory by default. The results are as follows. + + +
+ +
+ + +### 4.2 C++ Inference + +Not supported + +### 4.3 Serving + +Not supported + +### 4.4 More + +Not supported + +## 5. FAQ + +## Citation + +```bibtex +@article{DBLP:journals/corr/abs-2104-08836, + author = {Yiheng Xu and + Tengchao Lv and + Lei Cui and + Guoxin Wang and + Yijuan Lu and + Dinei Flor{\^{e}}ncio and + Cha Zhang and + Furu Wei}, + title = {LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich + Document Understanding}, + journal = {CoRR}, + volume = {abs/2104.08836}, + year = {2021}, + url = {https://arxiv.org/abs/2104.08836}, + eprinttype = {arXiv}, + eprint = {2104.08836}, + timestamp = {Thu, 14 Oct 2021 09:17:23 +0200}, + biburl = {https://dblp.org/rec/journals/corr/abs-2104-08836.bib}, + bibsource = {dblp computer science bibliography, https://dblp.org} +} + +@article{DBLP:journals/corr/abs-1912-13318, + author = {Yiheng Xu and + Minghao Li and + Lei Cui and + Shaohan Huang and + Furu Wei and + Ming Zhou}, + title = {LayoutLM: Pre-training of Text and Layout for Document Image Understanding}, + journal = {CoRR}, + volume = {abs/1912.13318}, + year = {2019}, + url = {http://arxiv.org/abs/1912.13318}, + eprinttype = {arXiv}, + eprint = {1912.13318}, + timestamp = {Mon, 01 Jun 2020 16:20:46 +0200}, + biburl = {https://dblp.org/rec/journals/corr/abs-1912-13318.bib}, + bibsource = {dblp computer science bibliography, https://dblp.org} +} + +@article{DBLP:journals/corr/abs-2012-14740, + author = {Yang Xu and + Yiheng Xu and + Tengchao Lv and + Lei Cui and + Furu Wei and + Guoxin Wang and + Yijuan Lu and + Dinei A. F. Flor{\^{e}}ncio and + Cha Zhang and + Wanxiang Che and + Min Zhang and + Lidong Zhou}, + title = {LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding}, + journal = {CoRR}, + volume = {abs/2012.14740}, + year = {2020}, + url = {https://arxiv.org/abs/2012.14740}, + eprinttype = {arXiv}, + eprint = {2012.14740}, + timestamp = {Tue, 27 Jul 2021 09:53:52 +0200}, + biburl = {https://dblp.org/rec/journals/corr/abs-2012-14740.bib}, + bibsource = {dblp computer science bibliography, https://dblp.org} +} +``` diff --git a/doc/doc_en/algorithm_kie_sdmgr_en.md b/doc/doc_en/algorithm_kie_sdmgr_en.md new file mode 100644 index 0000000000000000000000000000000000000000..5b12b8c959e830015ffb173626ac5752ee9ecee0 --- /dev/null +++ b/doc/doc_en/algorithm_kie_sdmgr_en.md @@ -0,0 +1,130 @@ + +# KIE Algorithm - SDMGR + +- [1. Introduction](#1-introduction) +- [2. Environment](#2-environment) +- [3. Model Training / Evaluation / Prediction](#3-model-training--evaluation--prediction) +- [4. Inference and Deployment](#4-inference-and-deployment) + - [4.1 Python Inference](#41-python-inference) + - [4.2 C++ Inference](#42-c-inference) + - [4.3 Serving](#43-serving) + - [4.4 More](#44-more) +- [5. FAQ](#5-faq) +- [Citation](#Citation) + +## 1. Introduction + +Paper: + +> [Spatial Dual-Modality Graph Reasoning for Key Information Extraction](https://arxiv.org/abs/2103.14470) +> +> Hongbin Sun and Zhanghui Kuang and Xiaoyu Yue and Chenhao Lin and Wayne Zhang +> +> 2021 + +On wildreceipt dataset, the algorithm reproduction Hmean is as follows. + +|Model|Backbone |Cnnfig|Hmean|Download link| +| --- | --- | --- | --- | --- | +|SDMGR|VGG6|[configs/kie/sdmgr/kie_unet_sdmgr.yml](../../configs/kie/sdmgr/kie_unet_sdmgr.yml)|86.7%|[trained model]( https://paddleocr.bj.bcebos.com/dygraph_v2.1/kie/kie_vgg16.tar)/[inference model(coming soon)]()| + + + +## 2. 
环境配置 + +Please refer to ["Environment Preparation"](./environment_en.md) to configure the PaddleOCR environment, and refer to ["Project Clone"](./clone_en.md) to clone the project code. + + + +## 3. Model Training / Evaluation / Prediction + +SDMGR is a key information extraction algorithm that classifies each detected textline into predefined categories, such as order ID, invoice number, amount, etc. + +The training and test data are collected in the wildreceipt dataset, use following command to downloaded the dataset. + + +```bash +wget https://paddleocr.bj.bcebos.com/ppstructure/dataset/wildreceipt.tar && tar xf wildreceipt.tar +``` + +Create dataset soft link to `PaddleOCR/train_data` directory. + +```bash +cd PaddleOCR/ && mkdir train_data && cd train_data +ln -s ../../wildreceipt ./ +``` + + +### 3.1 Model training + +The config file is `configs/kie/sdmgr/kie_unet_sdmgr.yml`, the default dataset path is `train_data/wildreceipt`. + +Use the following command to train the model. + +```bash +python3 tools/train.py -c configs/kie/sdmgr/kie_unet_sdmgr.yml -o Global.save_model_dir=./output/kie/ +``` + +### 3.2 Model evaluation + +Use the following command to evaluate the model. + +```bash +python3 tools/eval.py -c configs/kie/sdmgr/kie_unet_sdmgr.yml -o Global.checkpoints=./output/kie/best_accuracy +``` + +An example of output information is shown below. + +```py +[2022/08/10 05:22:23] ppocr INFO: metric eval *************** +[2022/08/10 05:22:23] ppocr INFO: hmean:0.8670120239257812 +[2022/08/10 05:22:23] ppocr INFO: fps:10.18816520530961 +``` + +### 3.3 Model prediction + +Use the following command to load the model and predict. During the prediction, the text file storing the image path and OCR information needs to be loaded in advance. Use `Global.infer_img` to assign. + +```bash +python3 tools/infer_kie.py -c configs/kie/kie_unet_sdmgr.yml -o Global.checkpoints=kie_vgg16/best_accuracy Global.infer_img=./train_data/wildreceipt/1.txt +``` + +The visualization results and texts are saved in the `./output/sdmgr_kie/` directory by default. The results are as follows. + + +
+ +
+ +## 4. Inference and Deployment + +### 4.1 Python Inference + +Not supported + +### 4.2 C++ Inference + +Not supported + +### 4.3 Serving + +Not supported + +### 4.4 More + +Not supported + +## 5. FAQ + +## Citation + +```bibtex +@misc{sun2021spatial, + title={Spatial Dual-Modality Graph Reasoning for Key Information Extraction}, + author={Hongbin Sun and Zhanghui Kuang and Xiaoyu Yue and Chenhao Lin and Wayne Zhang}, + year={2021}, + eprint={2103.14470}, + archivePrefix={arXiv}, + primaryClass={cs.CV} +} +``` diff --git a/doc/doc_en/algorithm_kie_vi_layoutxlm_en.md b/doc/doc_en/algorithm_kie_vi_layoutxlm_en.md new file mode 100644 index 0000000000000000000000000000000000000000..12b6e1bddbd03b820ce33ba86de3d430a44f8987 --- /dev/null +++ b/doc/doc_en/algorithm_kie_vi_layoutxlm_en.md @@ -0,0 +1,156 @@ +# KIE Algorithm - VI-LayoutXLM + + +- [1. Introduction](#1-introduction) +- [2. Environment](#2-environment) +- [3. Model Training / Evaluation / Prediction](#3-model-training--evaluation--prediction) +- [4. Inference and Deployment](#4-inference-and-deployment) + - [4.1 Python Inference](#41-python-inference) + - [4.2 C++ Inference](#42-c-inference) + - [4.3 Serving](#43-serving) + - [4.4 More](#44-more) +- [5. FAQ](#5-faq) +- [Citation](#Citation) + + +## 1. Introduction + +VI-LayoutXLM is improved based on LayoutXLM. In the process of downstream finetuning, the visual backbone network module is removed, and the model infernce speed is further improved on the basis of almost lossless accuracy. + +On XFUND_zh dataset, the algorithm reproduction Hmean is as follows. + +|Model|Backbone|Task |Cnnfig|Hmean|Download link| +| --- | --- |---| --- | --- | --- | +|VI-LayoutXLM |VI-LayoutXLM-base | SER |[ser_vi_layoutxlm_xfund_zh_udml.yml](../../configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh_udml.yml)|93.19%|[trained model](https://paddleocr.bj.bcebos.com/ppstructure/models/vi_layoutxlm/ser_vi_layoutxlm_xfund_pretrained.tar)/[inference model](https://paddleocr.bj.bcebos.com/ppstructure/models/vi_layoutxlm/ser_vi_layoutxlm_xfund_infer.tar)| +|VI-LayoutXLM |VI-LayoutXLM-base |RE | [re_vi_layoutxlm_xfund_zh_udml.yml](../../configs/kie/vi_layoutxlm/re_vi_layoutxlm_xfund_zh_udml.yml)|83.92%|[trained model](https://paddleocr.bj.bcebos.com/ppstructure/models/vi_layoutxlm/re_vi_layoutxlm_xfund_pretrained.tar)/[inference model(coming soon)]()| + + +Please refer to ["Environment Preparation"](./environment_en.md) to configure the PaddleOCR environment, and refer to ["Project Clone"](./clone_en.md) to clone the project code. + + +## 3. Model Training / Evaluation / Prediction + +Please refer to [KIE tutorial](./kie_en.md)。PaddleOCR has modularized the code structure, so that you only need to **replace the configuration file** to train different models. + + +## 4. Inference and Deployment + +### 4.1 Python Inference + +**Note:** Currently, the RE model inference process is still in the process of adaptation. We take SER model as an example to introduce the KIE process based on VI-LayoutXLM model. + +First, we need to export the trained model into inference model. Take VI-LayoutXLM model trained on XFUND_zh as an example ([trained model download link](https://paddleocr.bj.bcebos.com/ppstructure/models/vi_layoutxlm/ser_vi_layoutxlm_xfund_pretrained.tar)). Use the following command to export. 
+ + +``` bash +wget https://paddleocr.bj.bcebos.com/ppstructure/models/vi_layoutxlm/ser_vi_layoutxlm_xfund_pretrained.tar +tar -xf ser_vi_layoutxlm_xfund_pretrained.tar +python3 tools/export_model.py -c configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml -o Architecture.Backbone.checkpoints=./ser_vi_layoutxlm_xfund_pretrained/best_accuracy Global.save_inference_dir=./inference/ser_vi_layoutxlm_infer +``` + +Use the following command to infer using VI-LayoutXLM SER model. + + +```bash +cd ppstructure +python3 kie/predict_kie_token_ser.py \ + --kie_algorithm=LayoutXLM \ + --ser_model_dir=../inference/ser_vi_layoutxlm_infer \ + --image_dir=./docs/kie/input/zh_val_42.jpg \ + --ser_dict_path=../train_data/XFUND/class_list_xfun.txt \ + --vis_font_path=../doc/fonts/simfang.ttf \ + --ocr_order_method="tb-yx" +``` + +The SER visualization results are saved in the `./output` folder by default. The results are as follows. + + +
+ +
+ + +### 4.2 C++ Inference + +Not supported + +### 4.3 Serving + +Not supported + +### 4.4 More + +Not supported + +## 5. FAQ + +## Citation + + +```bibtex +@article{DBLP:journals/corr/abs-2104-08836, + author = {Yiheng Xu and + Tengchao Lv and + Lei Cui and + Guoxin Wang and + Yijuan Lu and + Dinei Flor{\^{e}}ncio and + Cha Zhang and + Furu Wei}, + title = {LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich + Document Understanding}, + journal = {CoRR}, + volume = {abs/2104.08836}, + year = {2021}, + url = {https://arxiv.org/abs/2104.08836}, + eprinttype = {arXiv}, + eprint = {2104.08836}, + timestamp = {Thu, 14 Oct 2021 09:17:23 +0200}, + biburl = {https://dblp.org/rec/journals/corr/abs-2104-08836.bib}, + bibsource = {dblp computer science bibliography, https://dblp.org} +} + +@article{DBLP:journals/corr/abs-1912-13318, + author = {Yiheng Xu and + Minghao Li and + Lei Cui and + Shaohan Huang and + Furu Wei and + Ming Zhou}, + title = {LayoutLM: Pre-training of Text and Layout for Document Image Understanding}, + journal = {CoRR}, + volume = {abs/1912.13318}, + year = {2019}, + url = {http://arxiv.org/abs/1912.13318}, + eprinttype = {arXiv}, + eprint = {1912.13318}, + timestamp = {Mon, 01 Jun 2020 16:20:46 +0200}, + biburl = {https://dblp.org/rec/journals/corr/abs-1912-13318.bib}, + bibsource = {dblp computer science bibliography, https://dblp.org} +} + +@article{DBLP:journals/corr/abs-2012-14740, + author = {Yang Xu and + Yiheng Xu and + Tengchao Lv and + Lei Cui and + Furu Wei and + Guoxin Wang and + Yijuan Lu and + Dinei A. F. Flor{\^{e}}ncio and + Cha Zhang and + Wanxiang Che and + Min Zhang and + Lidong Zhou}, + title = {LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding}, + journal = {CoRR}, + volume = {abs/2012.14740}, + year = {2020}, + url = {https://arxiv.org/abs/2012.14740}, + eprinttype = {arXiv}, + eprint = {2012.14740}, + timestamp = {Tue, 27 Jul 2021 09:53:52 +0200}, + biburl = {https://dblp.org/rec/journals/corr/abs-2012-14740.bib}, + bibsource = {dblp computer science bibliography, https://dblp.org} +} +``` diff --git a/doc/doc_en/algorithm_overview_en.md b/doc/doc_en/algorithm_overview_en.md index 3412ccbf76f6c04b61420a6abd91a55efb383db6..3f59bf9c829920fb43fa7f89858b4586ceaac26f 100755 --- a/doc/doc_en/algorithm_overview_en.md +++ b/doc/doc_en/algorithm_overview_en.md @@ -1,17 +1,17 @@ -# OCR Algorithms +# Algorithms -- [1. Two-stage Algorithms](#1) +- [1. Two-stage OCR Algorithms](#1) - [1.1 Text Detection Algorithms](#11) - [1.2 Text Recognition Algorithms](#12) -- [2. End-to-end Algorithms](#2) +- [2. End-to-end OCR Algorithms](#2) - [3. Table Recognition Algorithms](#3) - +- [4. Key Information Extraction Algorithms](#4) This tutorial lists the OCR algorithms supported by PaddleOCR, as well as the models and metrics of each algorithm on **English public datasets**. It is mainly used for algorithm introduction and algorithm performance comparison. For more models on other datasets including Chinese, please refer to [PP-OCR v2.0 models list](./models_list_en.md). -## 1. Two-stage Algorithms +## 1. Two-stage OCR Algorithms @@ -98,11 +98,12 @@ Refer to [DTRB](https://arxiv.org/abs/1904.01906), the training and evaluation r -## 2. End-to-end Algorithms +## 2. End-to-end OCR Algorithms Supported end-to-end algorithms (Click the link to get the tutorial): - [x] [PGNet](./algorithm_e2e_pgnet_en.md) + ## 3. 
Table Recognition Algorithms @@ -114,3 +115,34 @@ On the PubTabNet dataset, the algorithm result is as follows: |Model|Backbone|Config|Acc|Download link| |---|---|---|---|---| |TableMaster|TableResNetExtra|[configs/table/table_master.yml](../../configs/table/table_master.yml)|77.47%|[trained](https://paddleocr.bj.bcebos.com/ppstructure/models/tablemaster/table_structure_tablemaster_train.tar) / [inference model](https://paddleocr.bj.bcebos.com/ppstructure/models/tablemaster/table_structure_tablemaster_infer.tar)| + + + + +## 4. Key Information Extraction Algorithms + +Supported KIE algorithms (Click the link to get the tutorial): + +- [x] [VI-LayoutXLM](./algorithm_kie_vi_laoutxlm_en.md) +- [x] [LayoutLM](./algorithm_kie_laoutxlm_en.md) +- [x] [LayoutLMv2](./algorithm_kie_laoutxlm_en.md) +- [x] [LayoutXLM](./algorithm_kie_laoutxlm_en.md) +- [x] [SDMGR](./algorithm_kie_sdmgr_en.md) + +On wildreceipt dataset, the algorithm result is as follows: + +|Model|Backbone|Config|Hmean|Download link| +| --- | --- | --- | --- | --- | +|SDMGR|VGG6|[configs/kie/sdmgr/kie_unet_sdmgr.yml](../../configs/kie/sdmgr/kie_unet_sdmgr.yml)|86.7%|[trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.1/kie/kie_vgg16.tar)| + +On XFUND_zh dataset, the algorithm result is as follows: + +|Model|Backbone|Task|Config|Hmean|Download link| +| --- | --- | --- | --- | --- | --- | +|VI-LayoutXLM| VI-LayoutXLM-base | SER | [ser_vi_layoutxlm_xfund_zh_udml.yml](../../configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh_udml.yml)|**93.19%**|[trained model](https://paddleocr.bj.bcebos.com/ppstructure/models/vi_layoutxlm/ser_vi_layoutxlm_xfund_pretrained.tar)| +|LayoutXLM| LayoutXLM-base | SER | [ser_layoutxlm_xfund_zh.yml](../../configs/kie/layoutlm_series/ser_layoutxlm_xfund_zh.yml)|90.38%|[trained model](https://paddleocr.bj.bcebos.com/pplayout/ser_LayoutXLM_xfun_zh.tar)| +|LayoutLM| LayoutLM-base | SER | [ser_layoutlm_xfund_zh.yml](../../configs/kie/layoutlm_series/ser_layoutlm_xfund_zh.yml)|77.31%|[trained model](https://paddleocr.bj.bcebos.com/pplayout/ser_LayoutLM_xfun_zh.tar)| +|LayoutLMv2| LayoutLMv2-base | SER | [ser_layoutlmv2_xfund_zh.yml](../../configs/kie/layoutlm_series/ser_layoutlmv2_xfund_zh.yml)|85.44%|[trained model](https://paddleocr.bj.bcebos.com/pplayout/ser_LayoutLMv2_xfun_zh.tar)| +|VI-LayoutXLM| VI-LayoutXLM-base | RE | [re_vi_layoutxlm_xfund_zh_udml.yml](../../configs/kie/vi_layoutxlm/re_vi_layoutxlm_xfund_zh_udml.yml)|**83.92%**|[trained model](https://paddleocr.bj.bcebos.com/ppstructure/models/vi_layoutxlm/re_vi_layoutxlm_xfund_pretrained.tar)| +|LayoutXLM| LayoutXLM-base | RE | [re_layoutxlm_xfund_zh.yml](../../configs/kie/layoutlm_series/re_layoutxlm_xfund_zh.yml)|74.83%|[trained model](https://paddleocr.bj.bcebos.com/pplayout/re_LayoutXLM_xfun_zh.tar)| +|LayoutLMv2| LayoutLMv2-base | RE | [re_layoutlmv2_xfund_zh.yml](../../configs/kie/layoutlm_series/re_layoutlmv2_xfund_zh.yml)|67.77%|[trained model](https://paddleocr.bj.bcebos.com/pplayout/re_LayoutLMv2_xfun_zh.tar)| diff --git a/doc/doc_en/dataset/docvqa_datasets_en.md b/doc/doc_en/dataset/kie_datasets_en.md similarity index 63% rename from doc/doc_en/dataset/docvqa_datasets_en.md rename to doc/doc_en/dataset/kie_datasets_en.md index 820462c324318a391abe409412e8996f11b36279..3a8b744fc0b2653aab5c1435996a2ef73dd336e4 100644 --- a/doc/doc_en/dataset/docvqa_datasets_en.md +++ b/doc/doc_en/dataset/kie_datasets_en.md @@ -1,7 +1,9 @@ -## DocVQA dataset -Here are the common DocVQA datasets, which are being updated continuously. 
Welcome to contribute datasets~ +## Key Imnformation Extraction dataset + +Here are the common DocVQA datasets, which are being updated continuously. Welcome to contribute datasets. - [FUNSD dataset](#funsd) - [XFUND dataset](#xfund) +- [wildreceipt dataset](#wildreceipt数据集) #### 1. FUNSD dataset @@ -25,3 +27,21 @@ Here are the common DocVQA datasets, which are being updated continuously. Welco - **Download address**: https://github.com/doc-analysis/XFUND/releases/tag/v1.0 + + + +## 3. wildreceipt dataset + +- **Data source**: https://arxiv.org/abs/2103.14470 +- **Data introduction**: XFUND is an English receipt dataset, which contains 26 different categories. There are 1267 training images and 472 evaluation images, in which 50,000 textlines and boxes are annotated. Part of the image and the annotation box visualization are shown below. + +
+ + +
+ +**Note:** Boxes with category `Ignore` or `Others` are not visualized here. + +- **Download address**: + - Offical dataset: [link](https://download.openmmlab.com/mmocr/data/wildreceipt.tar) + - Dataset converted for PaddleOCR training process: [link](https://paddleocr.bj.bcebos.com/ppstructure/dataset/wildreceipt.tar) diff --git a/doc/doc_en/kie_en.md b/doc/doc_en/kie_en.md new file mode 100644 index 0000000000000000000000000000000000000000..0c335a5ceb8991b80bc0cab6facdf402878abb50 --- /dev/null +++ b/doc/doc_en/kie_en.md @@ -0,0 +1,491 @@ +# Key Information Extraction + +This tutorial provides a guide to the whole process of key information extraction using PaddleOCR, including data preparation, model training, optimization, evaluation, prediction of semantic entity recognition (SER) and relationship extraction (RE) tasks. + + +- [1. Data Preparation](#Data-Preparation) + - [1.1. Prepare for dataset](#11-Prepare-for-dataset) + - [1.2. Custom Dataset](#12-Custom-Dataset) + - [1.3. Download data](#13-Download-data) +- [2. Training](#2-Training) + - [2.1. Start Training](#21-start-training) + - [2.2. Resume Training](#22-Resume-Training) + - [2.3. Mixed Precision Training](#23-Mixed-Precision-Training) + - [2.4. Distributed Training](#24-Distributed-Training) + - [2.5. Train using knowledge distillation](#25-Train-using-knowledge-distillation) + - [2.6. Training on other platform](#26-Training-on-other-platform) +- [3. Evaluation and Test](#3-Evaluation-and-Test) + - [3.1. Evaluation](#31-指标评估) + - [3.2. Test](#32-Test) +- [4. Model inference](#4-Model-inference) +- [5. FAQ](#5-faq) + + +# 1. Data Preparation + +## 1.1. Prepare for dataset + +PaddleOCR supports the following data format when training KIE models. + +- `general data` is used to train a dataset whose annotation is stored in a text file (SimpleDataset). + + +The default storage path of training data is `PaddleOCR/train_data`. If you already have datasets on your disk, you only need to create a soft link to the dataset directory. + +``` +# linux and mac os +ln -sf /train_data/dataset +# windows +mklink /d /train_data/dataset +``` + +## 1.2. Custom Dataset + +The training process generally includes the training set and the evaluation set. The data formats of the two sets are same. + +**(1) Training set** + +It is recommended to put the training images into the same folder, record the path and annotation of images in a text file. The contents of the text file are as follows: + + +```py +" image path annotation information " +zh_train_0.jpg [{"transcription": "汇丰晋信", "label": "other", "points": [[104, 114], [530, 114], [530, 175], [104, 175]], "id": 1, "linking": []}, {"transcription": "受理时间:", "label": "question", "points": [[126, 267], [266, 267], [266, 305], [126, 305]], "id": 7, "linking": [[7, 13]]}, {"transcription": "2020.6.15", "label": "answer", "points": [[321, 239], [537, 239], [537, 285], [321, 285]], "id": 13, "linking": [[7, 13]]}] +zh_train_1.jpg [{"transcription": "中国人体器官捐献", "label": "other", "points": [[544, 459], [954, 459], [954, 517], [544, 517]], "id": 1, "linking": []}, {"transcription": ">编号:MC545715483585", "label": "other", "points": [[1462, 470], [2054, 470], [2054, 543], [1462, 543]], "id": 10, "linking": []}, {"transcription": "CHINAORGANDONATION", "label": "other", "points": [[543, 516], [958, 516], [958, 551], [543, 551]], "id": 14, "linking": []}, {"transcription": "中国人体器官捐献志愿登记表", "label": "header", "points": [[635, 793], [1892, 793], [1892, 904], [635, 904]], "id": 18, "linking": []}] +... 
+``` + +**Note:** In the text file, please split the image path and annotation with `\t`. Otherwise, error will happen when training. + +The annotation can be parsed by `json` into a list of sub-annotations. Each element in the list is a dict, which stores the required information of each text line. The required fields are as follows. + +- transcription: stores the text content of the text line +- label: the category of the text line content +- points: stores the four point position information of the text line +- id: stores the ID information of the text line for RE model training +- linking: stores the connection information between text lines for RE model training + +**(2) Evaluation set** + +The evaluation set is constructed in the same way as the training set. + +**(3) Dictionary file** + +The textlines in the training set and the evaluation set contain label information. The list of all labels is stored in the dictionary file (such as `class_list.txt`). Each line in the dictionary file is represented as a label name. + +For example, FUND_zh data contains four categories. The contents of the dictionary file are as follows. + +``` +OTHER +QUESTION +ANSWER +HEADER +``` + +In the annotation file, the annotation information of the `label` field of the text line content of each annotation needs to belong to the dictionary content. + + +The final dataset shall have the following file structure. + +``` +|-train_data + |-data_name + |- train.json + |- train + |- zh_train_0.png + |- zh_train_1.jpg + | ... + |- val.json + |- val + |- zh_val_0.png + |- zh_val_1.jpg + | ... +``` + +**Note:** + +-The category information in the annotation file is not case sensitive. For example, 'HEADER' and 'header' will be seen as the same category ID. +- In the dictionary file, it is recommended to put the `other` category (other textlines that need not be paid attention to can be labeled as `other`) on the first line. When parsing, the category ID of the 'other' category will be resolved to 0, and the textlines predicted as `other` will not be visualized later. + +## 1.3. Download data + +If you do not have local dataset, you can donwload the source files of [XFUND](https://github.com/doc-analysis/XFUND) or [FUNSD](https://guillaumejaume.github.io/FUNSD) and use the scripts of [XFUND](../../ppstructure/kie/tools/trans_xfun_data.py) or [FUNSD](../../ppstructure/kie/tools/trans_funsd_label.py) for tranform them into PaddleOCR format. Then you can use the public dataset to quick experience KIE. + +For more information about public KIE datasets, please refer to [KIE dataset tutorial](./dataset/kie_datasets_en.md). + +PaddleOCR also supports the annotation of KIE models. Please refer to [PPOCRLabel tutorial](../../PPOCRLabel/README.md). + +# 2. Training + +PaddleOCR provides training scripts, evaluation scripts and inference scripts. We will introduce based on VI-LayoutXLM model in this section. +This section will take the VI layoutxlm multimodal pre training model as an example to explain. + +> If you want to use the SDMGR based KIE algorithm, please refer to: [SDMGR tutorial](./algorithm_kie_sdmgr_en.md). + + +## 2.1. Start Training + +If you do not use a custom dataset, you can use XFUND_zh that has been processed in PaddleOCR dataset for quick experience. + + +```bash +mkdir train_data +cd train_data +wget https://paddleocr.bj.bcebos.com/ppstructure/dataset/XFUND.tar && tar -xf XFUND.tar +cd .. 
+``` + +If you don't want to train, and want to directly experience the process of model evaluation, prediction, and inference, you can download the training model provided in PaddleOCR and skip section 2.1. + + +Use the following command to download the trained model. + +```bash +mkdir pretrained_model +cd pretrained_model +# download and uncompress SER model +wget https://paddleocr.bj.bcebos.com/ppstructure/models/vi_layoutxlm/ser_vi_layoutxlm_xfund_pretrained.tar & tar -xf ser_vi_layoutxlm_xfund_pretrained.tar + +# download and uncompress RE model +wget https://paddleocr.bj.bcebos.com/ppstructure/models/vi_layoutxlm/re_vi_layoutxlm_xfund_pretrained.tar & tar -xf re_vi_layoutxlm_xfund_pretrained.tar +``` + +Start training: + +- If your paddlepaddle version is `CPU`, you need to set `Global.use_gpu=False` in your config file. +- During training, PaddleOCR will download the VI-LayoutXLM pretraining model by default. There is no need to download it in advance. + +```bash +# GPU training, support single card and multi-cards +# The training log will be save in "{Global.save_model_dir}/train.log" + +# train SER model using single card +python3 tools/train.py -c configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml + +# train SER model using multi-cards, you can use --gpus to assign the GPU ids. +python3 -m paddle.distributed.launch --gpus '0,1,2,3' tools/train.py -c configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml + +# train RE model using single card +python3 tools/train.py -c configs/kie/vi_layoutxlm/re_vi_layoutxlm_xfund_zh.yml +``` + +Take the SER model training as an example. After the training is started, you will see the following log output. + +``` +[2022/08/08 16:28:28] ppocr INFO: epoch: [1/200], global_step: 10, lr: 0.000006, loss: 1.871535, avg_reader_cost: 0.28200 s, avg_batch_cost: 0.82318 s, avg_samples: 8.0, ips: 9.71838 samples/s, eta: 0:51:59 +[2022/08/08 16:28:33] ppocr INFO: epoch: [1/200], global_step: 19, lr: 0.000018, loss: 1.461939, avg_reader_cost: 0.00042 s, avg_batch_cost: 0.32037 s, avg_samples: 6.9, ips: 21.53773 samples/s, eta: 0:37:55 +[2022/08/08 16:28:39] ppocr INFO: cur metric, precision: 0.11526348939743859, recall: 0.19776657060518732, hmean: 0.14564265817747712, fps: 34.008392345050055 +[2022/08/08 16:28:45] ppocr INFO: save best model is to ./output/ser_vi_layoutxlm_xfund_zh/best_accuracy +[2022/08/08 16:28:45] ppocr INFO: best metric, hmean: 0.14564265817747712, precision: 0.11526348939743859, recall: 0.19776657060518732, fps: 34.008392345050055, best_epoch: 1 +[2022/08/08 16:28:51] ppocr INFO: save model in ./output/ser_vi_layoutxlm_xfund_zh/latest +``` + +The following information will be automatically printed. + + +|Field | meaning| +| :----: | :------: | +|epoch | current iteration round| +|iter | current iteration times| +|lr | current learning rate| +|loss | current loss function| +| reader_cost | current batch data processing time| +| batch_ Cost | total current batch time| +|samples | number of samples in the current batch| +|ips | number of samples processed per second| + + +PaddleOCR supports evaluation during training. you can modify `eval_batch_step` in the config file `configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml` (default as 19 iters). Trained model with best hmean will be saved as `output/ser_vi_layoutxlm_xfund_zh/best_accuracy/`. + +If the evaluation dataset is very large, it's recommended to enlarge the eval interval or evaluate the model after training. 
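+
+For reference, the relevant snippet of that config file looks roughly like the following (the interval value below is only an example; pick it according to your dataset size and evaluation cost):
+
+```yaml
+Global:
+  ...
+  # run evaluation every 200 iterations, starting from iteration 0 (example values)
+  eval_batch_step: [0, 200]
+```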
+ +**Note:** for more KIE models training and configuration files, you can go into `configs/kie/` or refer to [Frontier KIE algorithms](./algorithm_overview_en.md). + + +If you want to train model on your own dataset, you need to modify the data path, dictionary file and category number in the configuration file. + + +Take `configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml` as an example, contents we need to fix is as follows. + +```yaml +Architecture: + # ... + Backbone: + name: LayoutXLMForSer + pretrained: True + mode: vi + # Assuming that n categroies are included in the dictionary file (other is included), the the num_classes is set as 2n-1 + num_classes: &num_classes 7 + +PostProcess: + name: kieSerTokenLayoutLMPostProcess + # Modify the dictionary file path for your custom dataset + class_path: &class_path train_data/XFUND/class_list_xfun.txt + +Train: + dataset: + name: SimpleDataSet + # Modify the data path for your training dataset + data_dir: train_data/XFUND/zh_train/image + # Modify the data annotation path for your training dataset + label_file_list: + - train_data/XFUND/zh_train/train.json + ... + loader: + # batch size for single card when training + batch_size_per_card: 8 + ... + +Eval: + dataset: + name: SimpleDataSet + # Modify the data path for your evaluation dataset + data_dir: train_data/XFUND/zh_val/image + # Modify the data annotation path for your evaluation dataset + label_file_list: + - train_data/XFUND/zh_val/val.json + ... + loader: + # batch size for single card when evaluation + batch_size_per_card: 8 +``` + +**Note that the configuration file for prediction/evaluation must be consistent with the training file.** + + +## 2.2. Resume Training + +If the training process is interrupted and you want to load the saved model to resume training, you can specify the path of the model to be loaded by specifying `Architecture.Backbone.checkpoints`. + + +```bash +python3 tools/train.py -c configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml -o Architecture.Backbone.checkpoints=./output/ser_vi_layoutxlm_xfund_zh/best_accuracy +``` + +**Note:** + +- Priority of `Architecture.Backbone.checkpoints` is higher than` Architecture.Backbone.pretrained`. You need to set `Architecture.Backbone.checkpoints` for model finetuning, resume and evalution. If you want to train with the NLP pretrained model, you need to set `Architecture.Backbone.pretrained` as `True` and set `Architecture.Backbone.checkpoints` as null (`null`). +- PaddleNLP pretrained models are used here for LayoutXLM series models, the model loading and saving logic is same as those in PaddleNLP. Therefore we do not need to set `Global.pretrained_model` or `Global.checkpoints` here. +- If you use knowledge distillation to train the LayoutXLM series models, resuming training is not supported now. + +## 2.3. Mixed Precision Training + +coming soon! + +## 2.4. Distributed Training + +During multi-machine multi-gpu training, use the `--ips` parameter to set the used machine IP address, and the `--gpus` parameter to set the used GPU ID: + +```bash +python3 -m paddle.distributed.launch --ips="xx.xx.xx.xx,xx.xx.xx.xx" --gpus '0,1,2,3' tools/train.py -c configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml +``` + +**Note:** (1) When using multi-machine and multi-gpu training, you need to replace the ips value in the above command with the address of your machine, and the machines need to be able to ping each other. (2) Training needs to be launched separately on multiple machines. 
**Note:** (1) When using multi-machine multi-gpu training, you need to replace the ips value in the above command with the addresses of your machines, and the machines must be able to ping each other. (2) Training needs to be launched separately on each machine; the command to view the ip address of a machine is `ifconfig`. (3) For more details about the distributed training speedup ratio, please refer to [Distributed Training Tutorial](./distributed_training_en.md).


## 2.5. Train with Knowledge Distillation

Knowledge distillation is supported for the KIE model training process in PaddleOCR. The configuration file is [ser_vi_layoutxlm_xfund_zh_udml.yml](../../configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh_udml.yml). For more information, please refer to [doc](./knowledge_distillation_en.md).

**Note:** The saving and loading logic of the LayoutXLM series KIE models in PaddleOCR is consistent with PaddleNLP, so only the parameters of the student model are saved in the distillation process. If you want to use the saved model for evaluation, you need to use the configuration of the student model (the student configuration corresponding to the distillation file above is [ser_vi_layoutxlm_xfund_zh.yml](../../configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml)).



## 2.6. Training on other platforms

- Windows GPU/CPU
The Windows platform is slightly different from the Linux platform:
the Windows platform only supports `single gpu` training and inference, and the GPU used for training is specified with `set CUDA_VISIBLE_DEVICES=0`;
on the Windows platform, DataLoader only supports single-process mode, so you need to set `num_workers` to 0.

- macOS
GPU mode is not supported; you need to set `use_gpu` to False in the configuration file, and the rest of the training, evaluation and prediction commands are exactly the same as for Linux GPU.

- Linux DCU
Running on a DCU device requires setting the environment variable `export HIP_VISIBLE_DEVICES=0,1,2,3`, and the rest of the training, evaluation and prediction commands are exactly the same as for Linux GPU.

# 3. Evaluation and Test

## 3.1. Evaluation

The trained model will be saved in `Global.save_model_dir`. For evaluation, you need to set `Architecture.Backbone.checkpoints` to your model directory. The evaluation dataset can be set by modifying the `Eval.dataset.label_file_list` field in the `configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml` file.


```bash
# GPU evaluation, Architecture.Backbone.checkpoints is the weight to be tested
python3 tools/eval.py -c configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml -o Architecture.Backbone.checkpoints=./output/ser_vi_layoutxlm_xfund_zh/best_accuracy
```

Metrics such as precision, recall and hmean will be printed as follows.

```
[2022/08/09 07:59:28] ppocr INFO: metric eval ***************
[2022/08/09 07:59:28] ppocr INFO: precision:0.697476609016161
[2022/08/09 07:59:28] ppocr INFO: recall:0.8861671469740634
[2022/08/09 07:59:28] ppocr INFO: hmean:0.7805806758686339
[2022/08/09 07:59:28] ppocr INFO: fps:17.367364606899105
```


## 3.2. Test

Using the model trained by PaddleOCR, we can quickly get predictions through the following script.

The default prediction image is specified by `Global.infer_img`, and the trained model weights are specified via `-o Architecture.Backbone.checkpoints`.

According to the `Global.save_model_dir` and `save_epoch_step` fields set in the configuration file, the following parameters will be saved.


```
output/ser_vi_layoutxlm_xfund_zh/
├── best_accuracy
│   ├── metric.states
│   ├── model_config.json
│   └── model_state.pdparams
├── best_accuracy.pdopt
├── config.yml
├── train.log
├── latest
│   ├── metric.states
│   ├── model_config.json
│   └── model_state.pdparams
└── latest.pdopt
```

Among them, `best_accuracy.*` is the best model on the evaluation set, and `latest.*` is the model of the last epoch.

The configuration file for prediction must be consistent with the training file. If you finished training with `python3 tools/train.py -c configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml`, you can use the following command for prediction.


```bash
python3 tools/infer_kie_token_ser.py -c configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml -o Architecture.Backbone.checkpoints=./output/ser_vi_layoutxlm_xfund_zh/best_accuracy Global.infer_img=./ppstructure/docs/kie/input/zh_val_42.jpg
```

The output image is shown below; it is also saved in the `Global.save_res_path` directory.
+ +


During the prediction process, the PP-OCRv3 text detection and recognition models are loaded by default to provide the OCR results. If you want to load OCR results obtained in advance, specify `Global.infer_img` as the annotation file, which contains the image path and the OCR information, and set `Global.infer_mode` to False, indicating that the OCR inference engine is not used in this case.

```bash
python3 tools/infer_kie_token_ser.py -c configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml -o Architecture.Backbone.checkpoints=./output/ser_vi_layoutxlm_xfund_zh/best_accuracy Global.infer_img=./train_data/XFUND/zh_val/val.json Global.infer_mode=False
```

For the above image, if information extraction is performed using the labeled OCR results, the prediction results are as follows.
+ +


It can be seen that some of the detection results are more accurate when the labeled OCR results are used, but the overall information extraction results are basically the same.

In RE model prediction, the SER model results need to be given first, so the configuration file and model weights of the SER model need to be loaded at the same time, as shown in the following example.

```bash
python3 ./tools/infer_kie_token_ser_re.py \
  -c configs/kie/vi_layoutxlm/re_vi_layoutxlm_xfund_zh.yml \
  -o Architecture.Backbone.checkpoints=./pretrain_models/re_vi_layoutxlm_udml_xfund_zh/best_accuracy/ \
  Global.infer_img=./train_data/XFUND/zh_val/image/ \
  -c_ser configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml \
  -o_ser Architecture.Backbone.checkpoints=pretrain_models/ser_vi_layoutxlm_udml_xfund_zh/best_accuracy/
```

The result is as follows.
+ +



If you want to load the OCR results obtained in advance, specify `Global.infer_img` as the annotation file, which contains the image path and the OCR information, and set `Global.infer_mode` to False, indicating that the OCR inference engine is not used in this case.

```bash
python3 ./tools/infer_kie_token_ser_re.py \
  -c configs/kie/vi_layoutxlm/re_vi_layoutxlm_xfund_zh.yml \
  -o Architecture.Backbone.checkpoints=./pretrain_models/re_vi_layoutxlm_udml_xfund_zh/best_accuracy/ \
  Global.infer_img=./train_data/XFUND/zh_val/val.json \
  Global.infer_mode=False \
  -c_ser configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml \
  -o_ser Architecture.Backbone.checkpoints=pretrain_models/ser_vi_layoutxlm_udml_xfund_zh/best_accuracy/
```

`-c_ser` denotes the SER configuration file, and `-o_ser` denotes the SER options that override the corresponding content in that file.


The result is as follows.
+ +



It can be seen that the RE prediction results obtained directly from the annotated OCR results are more accurate.


# 4. Model inference


## 4.1 Export the model

The inference model (the model saved by `paddle.jit.save`) is a frozen model saved after training is completed, and is mostly used for prediction in deployment.

The model saved during the training process is the checkpoints model, which only saves the parameters of the model and is mostly used to resume training.

Compared with the checkpoints model, the inference model additionally saves the structural information of the model. Since both the model structure and the model parameters are frozen in the inference model file, it is easier to deploy and is suitable for integration with actual systems.

The SER model can be converted to an inference model using the following command.


```bash
# -c Set the training algorithm yml configuration file.
# -o Set optional parameters.
# Architecture.Backbone.checkpoints Set the trained model address.
# Global.save_inference_dir Set the address where the converted model will be saved.
python3 tools/export_model.py -c configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml -o Architecture.Backbone.checkpoints=./output/ser_vi_layoutxlm_xfund_zh/best_accuracy Global.save_inference_dir=./inference/ser_vi_layoutxlm
```

After the conversion is successful, there are three files in the model save directory:

```
inference/ser_vi_layoutxlm/
    ├── inference.pdiparams         # The parameter file of the SER inference model
    ├── inference.pdiparams.info    # The parameter information of the SER inference model, which can be ignored
    └── inference.pdmodel           # The program file of the SER inference model
```

Export of the RE model is still being adapted.

## 4.2 Model inference

The VI-LayoutXLM inference model performs prediction for the SER task. You can run the following commands:

```bash
cd ppstructure
python3 kie/predict_kie_token_ser.py \
  --kie_algorithm=LayoutXLM \
  --ser_model_dir=../inference/ser_vi_layoutxlm \
  --image_dir=./docs/kie/input/zh_val_42.jpg \
  --ser_dict_path=../train_data/XFUND/class_list_xfun.txt \
  --vis_font_path=../doc/fonts/simfang.ttf \
  --ocr_order_method="tb-yx"
```

The visualized result will be saved in `./output`, which is shown as follows.
+ +
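If you want to run inference on a whole folder of images rather than a single file, `--image_dir` can also point to a directory. This mirrors how other PaddleOCR prediction scripts treat `--image_dir` and is an assumption here rather than something stated above; the folder used below is simply the sample-image directory of this repo. Run it from the `ppstructure` directory as in the command above.

```bash
# a usage sketch: run SER inference over every image in a folder;
# one visualized result per input image is written to ./output
python3 kie/predict_kie_token_ser.py \
  --kie_algorithm=LayoutXLM \
  --ser_model_dir=../inference/ser_vi_layoutxlm \
  --image_dir=./docs/kie/input/ \
  --ser_dict_path=../train_data/XFUND/class_list_xfun.txt \
  --vis_font_path=../doc/fonts/simfang.ttf \
  --ocr_order_method="tb-yx"
```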
+ + +# 5. FAQ + +Q1: After the training model is transferred to the inference model, the prediction effect is inconsistent? + +**A**:The problems are mostly caused by inconsistent preprocessing and postprocessing parameters when the trained model predicts and the preprocessing and postprocessing parameters when the inference model predicts. You can compare whether there are differences in preprocessing, postprocessing, and prediction in the configuration files used for training. diff --git a/paddleocr.py b/paddleocr.py index 8e34c4fbc331f798618fc5f33bc00963a577e25a..d78046802eb8b8af42ae2718697a5cfc1e7186de 100644 --- a/paddleocr.py +++ b/paddleocr.py @@ -35,7 +35,7 @@ from tools.infer import predict_system from ppocr.utils.logging import get_logger logger = get_logger() -from ppocr.utils.utility import check_and_read_gif, get_image_file_list +from ppocr.utils.utility import check_and_read, get_image_file_list from ppocr.utils.network import maybe_download, download_with_progressbar, is_link, confirm_model_dir_url from tools.infer.utility import draw_ocr, str2bool, check_gpu from ppstructure.utility import init_args, draw_structure_result @@ -289,7 +289,8 @@ MODEL_URLS = { 'ch': { 'url': 'https://paddleocr.bj.bcebos.com/ppstructure/models/layout/picodet_lcnet_x1_0_layout_infer.tar', - 'dict_path': 'ppocr/utils/dict/layout_publaynet_dict.txt' + 'dict_path': + 'ppocr/utils/dict/layout_dict/layout_publaynet_dict.txt' } } } @@ -490,7 +491,7 @@ class PaddleOCR(predict_system.TextSystem): download_with_progressbar(img, 'tmp.jpg') img = 'tmp.jpg' image_file = img - img, flag = check_and_read_gif(image_file) + img, flag, _ = check_and_read(image_file) if not flag: with open(image_file, 'rb') as f: np_arr = np.frombuffer(f.read(), dtype=np.uint8) @@ -584,7 +585,7 @@ class PPStructure(StructureSystem): download_with_progressbar(img, 'tmp.jpg') img = 'tmp.jpg' image_file = img - img, flag = check_and_read_gif(image_file) + img, flag, _ = check_and_read(image_file) if not flag: with open(image_file, 'rb') as f: np_arr = np.frombuffer(f.read(), dtype=np.uint8) diff --git a/ppocr/modeling/backbones/__init__.py b/ppocr/modeling/backbones/__init__.py index f5d54150bc325521698c43662895287e5640fb3d..6fdcc4a759e59027b1457d1e46757c64c4dcad9e 100755 --- a/ppocr/modeling/backbones/__init__.py +++ b/ppocr/modeling/backbones/__init__.py @@ -52,17 +52,15 @@ def build_backbone(config, model_type): support_dict = ['ResNet'] elif model_type == 'kie': from .kie_unet_sdmgr import Kie_backbone - support_dict = ['Kie_backbone'] + from .vqa_layoutlm import LayoutLMForSer, LayoutLMv2ForSer, LayoutLMv2ForRe, LayoutXLMForSer, LayoutXLMForRe + support_dict = [ + 'Kie_backbone', 'LayoutLMForSer', 'LayoutLMv2ForSer', + 'LayoutLMv2ForRe', 'LayoutXLMForSer', 'LayoutXLMForRe' + ] elif model_type == 'table': from .table_resnet_vd import ResNet from .table_mobilenet_v3 import MobileNetV3 support_dict = ['ResNet', 'MobileNetV3'] - elif model_type == 'vqa': - from .vqa_layoutlm import LayoutLMForSer, LayoutLMv2ForSer, LayoutLMv2ForRe, LayoutXLMForSer, LayoutXLMForRe - support_dict = [ - 'LayoutLMForSer', 'LayoutLMv2ForSer', 'LayoutLMv2ForRe', - 'LayoutXLMForSer', 'LayoutXLMForRe' - ] else: raise NotImplementedError diff --git a/ppocr/utils/save_load.py b/ppocr/utils/save_load.py index 7ccadb005a8ad591d9927c0e028887caacb3e37b..0c652c8fdc88bd066d7202bb57c046aefbc20cc4 100644 --- a/ppocr/utils/save_load.py +++ b/ppocr/utils/save_load.py @@ -54,13 +54,15 @@ def load_model(config, model, optimizer=None, model_type='det'): pretrained_model = 
global_config.get('pretrained_model') best_model_dict = {} is_float16 = False + is_nlp_model = model_type == 'kie' and config["Architecture"][ + "algorithm"] not in ["SDMGR"] - if model_type == 'vqa': - # NOTE: for vqa model dsitillation, resume training is not supported now + if is_nlp_model is True: + # NOTE: for kie model dsitillation, resume training is not supported now if config["Architecture"]["algorithm"] in ["Distillation"]: return best_model_dict checkpoints = config['Architecture']['Backbone']['checkpoints'] - # load vqa method metric + # load kie method metric if checkpoints: if os.path.exists(os.path.join(checkpoints, 'metric.states')): with open(os.path.join(checkpoints, 'metric.states'), @@ -153,7 +155,7 @@ def load_pretrained_params(model, path): new_state_dict = {} is_float16 = False - + for k1 in params.keys(): if k1 not in state_dict.keys(): @@ -192,10 +194,10 @@ def save_model(model, _mkdir_if_not_exist(model_path, logger) model_prefix = os.path.join(model_path, prefix) paddle.save(optimizer.state_dict(), model_prefix + '.pdopt') - if config['Architecture']["model_type"] != 'vqa': + if is_nlp_model is not True: paddle.save(model.state_dict(), model_prefix + '.pdparams') metric_prefix = model_prefix - else: # for vqa system, we follow the save/load rules in NLP + else: # for kie system, we follow the save/load rules in NLP if config['Global']['distributed']: arch = model._layers else: diff --git a/ppstructure/README.md b/ppstructure/README.md index 856de5a4306de987378dafc65e582f280be4bef3..cff057e81909e620eaa86ffe464433cc3a5d6f21 100644 --- a/ppstructure/README.md +++ b/ppstructure/README.md @@ -5,25 +5,25 @@ English | [简体中文](README_ch.md) - [3. Features](#3-features) - [4. Results](#4-results) - [4.1 Layout analysis and table recognition](#41-layout-analysis-and-table-recognition) - - [4.2 DOC-VQA](#42-doc-vqa) + - [4.2 KIE](#42-kie) - [5. Quick start](#5-quick-start) - [6. PP-Structure System](#6-pp-structure-system) - [6.1 Layout analysis and table recognition](#61-layout-analysis-and-table-recognition) - [6.1.1 Layout analysis](#611-layout-analysis) - [6.1.2 Table recognition](#612-table-recognition) - - [6.2 DOC-VQA](#62-doc-vqa) + - [6.2 KIE](#62-kie) - [7. Model List](#7-model-list) - [7.1 Layout analysis model](#71-layout-analysis-model) - [7.2 OCR and table recognition model](#72-ocr-and-table-recognition-model) - - [7.3 DOC-VQA model](#73-doc-vqa-model) + - [7.3 KIE model](#73-kie-model) ## 1. Introduction PP-Structure is an OCR toolkit that can be used for document analysis and processing with complex structures, designed to help developers better complete document understanding tasks ## 2. Update log -* 2022.02.12 DOC-VQA add LayoutLMv2 model。 -* 2021.12.07 add [DOC-VQA SER and RE tasks](vqa/README.md)。 +* 2022.02.12 KIE add LayoutLMv2 model。 +* 2021.12.07 add [KIE SER and RE tasks](kie/README.md)。 ## 3. Features @@ -34,7 +34,7 @@ The main features of PP-Structure are as follows: - Support to extract excel files from the table areas - Support python whl package and command line usage, easy to use - Support custom training for layout analysis and table structure tasks -- Support Document Visual Question Answering (DOC-VQA) tasks: Semantic Entity Recognition (SER) and Relation Extraction (RE) +- Support Document Key Information Extraction (KIE) tasks: Semantic Entity Recognition (SER) and Relation Extraction (RE) ## 4. 
Results @@ -44,11 +44,11 @@ The main features of PP-Structure are as follows: The figure shows the pipeline of layout analysis + table recognition. The image is first divided into four areas of image, text, title and table by layout analysis, and then OCR detection and recognition is performed on the three areas of image, text and title, and the table is performed table recognition, where the image will also be stored for use. -### 4.2 DOC-VQA +### 4.2 KIE * SER * -![](docs/vqa/result_ser/zh_val_0_ser.jpg) | ![](docs/vqa/result_ser/zh_val_42_ser.jpg) +![](docs/kie/result_ser/zh_val_0_ser.jpg) | ![](docs/kie/result_ser/zh_val_42_ser.jpg) ---|--- Different colored boxes in the figure represent different categories. For xfun dataset, there are three categories: query, answer and header: @@ -62,7 +62,7 @@ The corresponding category and OCR recognition results are also marked at the to * RE -![](docs/vqa/result_re/zh_val_21_re.jpg) | ![](docs/vqa/result_re/zh_val_40_re.jpg) +![](docs/kie/result_re/zh_val_21_re.jpg) | ![](docs/kie/result_re/zh_val_40_re.jpg) ---|--- @@ -88,9 +88,9 @@ Layout analysis classifies image by region, including the use of Python scripts Table recognition converts table images into excel documents, which include the detection and recognition of table text and the prediction of table structure and cell coordinates. For detailed instructions, please refer to [document](table/README.md) -### 6.2 DOC-VQA +### 6.2 KIE -Document Visual Question Answering (DOC-VQA) if a type of Visual Question Answering (VQA), which includes Semantic Entity Recognition (SER) and Relation Extraction (RE) tasks. Based on SER task, text recognition and classification in images can be completed. Based on THE RE task, we can extract the relation of the text content in the image, such as judge the problem pair. For details, please refer to [document](vqa/README.md) +Multi-modal based Key Information Extraction (KIE) methods include Semantic Entity Recognition (SER) and Relation Extraction (RE) tasks. Based on SER task, text recognition and classification in images can be completed. Based on THE RE task, we can extract the relation of the text content in the image, such as judge the problem pair. For details, please refer to [document](kie/README.md) ## 7. Model List @@ -110,7 +110,7 @@ PP-Structure Series Model List (Updating) |ch_PP-OCRv3_rec_slim |[New] Slim qunatization with distillation lightweight model, supporting Chinese, English text recognition| 4.9M |[inference model](https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_slim_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_slim_train.tar) | |ch_ppstructure_mobile_v2.0_SLANet|Chinese table recognition model trained on PubTabNet dataset based on SLANet|9.3M|[inference model](https://paddleocr.bj.bcebos.com/ppstructure/models/slanet/ch_ppstructure_mobile_v2.0_SLANet_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/ppstructure/models/slanet/ch_ppstructure_mobile_v2.0_SLANet_train.tar) | -### 7.3 DOC-VQA model +### 7.3 KIE model |model name|description|model size|download| | --- | --- | --- | --- | diff --git a/ppstructure/README_ch.md b/ppstructure/README_ch.md index 64af0cbe53265c85fd9027fe48e82102f4b5ea57..efd25eb2cbda585c3fc2e192cd8184ccc7e10c0d 100644 --- a/ppstructure/README_ch.md +++ b/ppstructure/README_ch.md @@ -7,17 +7,17 @@ - [3. 特性](#3) - [4. 效果展示](#4) - [4.1 版面分析和表格识别](#41) - - [4.2 DocVQA](#42) + - [4.2 关键信息抽取](#42) - [5. 快速体验](#5) - [6. 
PP-Structure 介绍](#6) - [6.1 版面分析+表格识别](#61) - [6.1.1 版面分析](#611) - [6.1.2 表格识别](#612) - - [6.2 DocVQA](#62) + - [6.2 关键信息抽取](#62) - [7. 模型库](#7) - [7.1 版面分析模型](#71) - [7.2 OCR和表格识别模型](#72) - - [7.3 DocVQA 模型](#73) + - [7.3 关键信息抽取模型](#73) ## 1. 简介 @@ -25,8 +25,8 @@ PP-Structure是一个可用于复杂文档结构分析和处理的OCR工具包 ## 2. 近期更新 -* 2022.02.12 DocVQA增加LayoutLMv2模型。 -* 2021.12.07 新增[DOC-VQA任务SER和RE](vqa/README.md)。 +* 2022.02.12 KIE增加LayoutLMv2模型。 +* 2021.12.07 新增[关键信息抽取任务SER和RE](kie/README.md)。 ## 3. 特性 @@ -37,7 +37,7 @@ PP-Structure的主要特性如下: - 支持表格区域进行结构化分析,最终结果输出Excel文件 - 支持python whl包和命令行两种方式,简单易用 - 支持版面分析和表格结构化两类任务自定义训练 -- 支持文档视觉问答(Document Visual Question Answering,DocVQA)任务-语义实体识别(Semantic Entity Recognition,SER)和关系抽取(Relation Extraction,RE) +- 支持基于多模态的关键信息抽取(Key Information Extraction,KIE)任务-语义实体识别(Semantic Entity Recognition,SER)和关系抽取(Relation Extraction,RE) ## 4. 效果展示 @@ -50,11 +50,11 @@ PP-Structure的主要特性如下: 图中展示了版面分析+表格识别的整体流程,图片先有版面分析划分为图像、文本、标题和表格四种区域,然后对图像、文本和标题三种区域进行OCR的检测识别,对表格进行表格识别,其中图像还会被存储下来以便使用。 -### 4.2 DOC-VQA +### 4.2 关键信息抽取 * SER -![](./docs/vqa/result_ser/zh_val_0_ser.jpg) | ![](./docs/vqa/result_ser/zh_val_42_ser.jpg) +![](./docs/kie/result_ser/zh_val_0_ser.jpg) | ![](./docs/kie/result_ser/zh_val_42_ser.jpg) ---|--- 图中不同颜色的框表示不同的类别,对于XFUN数据集,有`QUESTION`, `ANSWER`, `HEADER` 3种类别 @@ -67,7 +67,7 @@ PP-Structure的主要特性如下: * RE -![](./docs/vqa/result_re/zh_val_21_re.jpg) | ![](./docs/vqa/result_re/zh_val_40_re.jpg) +![](./docs/kie/result_re/zh_val_21_re.jpg) | ![](./docs/kie/result_re/zh_val_40_re.jpg) ---|--- @@ -99,9 +99,9 @@ PP-Structure的主要特性如下: 表格识别将表格图片转换为excel文档,其中包含对于表格文本的检测和识别以及对于表格结构和单元格坐标的预测,详细说明参考[文档](table/README_ch.md)。 -### 6.2 DocVQA +### 6.2 关键信息抽取 -DocVQA指文档视觉问答,其中包括语义实体识别 (Semantic Entity Recognition, SER) 和关系抽取 (Relation Extraction, RE) 任务。基于 SER 任务,可以完成对图像中的文本识别与分类;基于 RE 任务,可以完成对图象中的文本内容的关系提取,如判断问题对(pair),详细说明参考[文档](vqa/README.md)。 +关键信息抽取包括语义实体识别 (Semantic Entity Recognition, SER) 和关系抽取 (Relation Extraction, RE) 任务。基于 SER 任务,可以完成对图像中的文本识别与分类;基于 RE 任务,可以完成对图象中的文本内容的关系提取,如判断问题对(pair),详细说明参考[文档](kie/README.md)。 ## 7. 模型库 @@ -126,7 +126,7 @@ PP-Structure系列模型列表(更新中) -### 7.3 DocVQA 模型 +### 7.3 KIE 模型 |模型名称|模型简介|模型大小|下载地址| | --- | --- | --- | --- | diff --git a/ppstructure/docs/inference.md b/ppstructure/docs/inference.md index 7604246da5a79b0ee2939c9fb4c91602531ec7de..b050900760067402b2b738ed8d0e94d6788aca4f 100644 --- a/ppstructure/docs/inference.md +++ b/ppstructure/docs/inference.md @@ -72,9 +72,9 @@ mkdir inference && cd inference wget https://paddleocr.bj.bcebos.com/pplayout/PP-Layout_v1.0_ser_pretrained.tar && tar xf PP-Layout_v1.0_ser_pretrained.tar cd .. -python3 predict_system.py --model_name_or_path=vqa/PP-Layout_v1.0_ser_pretrained/ \ - --mode=vqa \ - --image_dir=vqa/images/input/zh_val_0.jpg \ +python3 predict_system.py --model_name_or_path=kie/PP-Layout_v1.0_ser_pretrained/ \ + --mode=kie \ + --image_dir=kie/images/input/zh_val_0.jpg \ --vis_font_path=../doc/fonts/simfang.ttf ``` -运行完成后,每张图片会在`output`字段指定的目录下的`vqa`目录下存放可视化之后的图片,图片名和输入图片名一致。 +运行完成后,每张图片会在`output`字段指定的目录下的`kie`目录下存放可视化之后的图片,图片名和输入图片名一致。 diff --git a/ppstructure/docs/inference_en.md b/ppstructure/docs/inference_en.md index 2a0fb30543eaa06c4ede5f82a443135c959db37d..ad16f048e3b08a45d6e6d76e630ba48483f263d4 100644 --- a/ppstructure/docs/inference_en.md +++ b/ppstructure/docs/inference_en.md @@ -73,9 +73,9 @@ mkdir inference && cd inference wget https://paddleocr.bj.bcebos.com/pplayout/PP-Layout_v1.0_ser_pretrained.tar && tar xf PP-Layout_v1.0_ser_pretrained.tar cd .. 
-python3 predict_system.py --model_name_or_path=vqa/PP-Layout_v1.0_ser_pretrained/ \ - --mode=vqa \ - --image_dir=vqa/images/input/zh_val_0.jpg \ +python3 predict_system.py --model_name_or_path=kie/PP-Layout_v1.0_ser_pretrained/ \ + --mode=kie \ + --image_dir=kie/images/input/zh_val_0.jpg \ --vis_font_path=../doc/fonts/simfang.ttf ``` -After the operation is completed, each image will store the visualized image in the `vqa` directory under the directory specified by the `output` field, and the image name is the same as the input image name. +After the operation is completed, each image will store the visualized image in the `kie` directory under the directory specified by the `output` field, and the image name is the same as the input image name. diff --git a/ppstructure/docs/installation.md b/ppstructure/docs/installation.md index 3f564cb2ddfe546642e6f92e2c024bbe3a1f7ffc..3649e729d04ec83ba2d97571af993d75358eec73 100644 --- a/ppstructure/docs/installation.md +++ b/ppstructure/docs/installation.md @@ -1,7 +1,7 @@ - [快速安装](#快速安装) - [1. PaddlePaddle 和 PaddleOCR](#1-paddlepaddle-和-paddleocr) - [2. 安装其他依赖](#2-安装其他依赖) - - [2.1 VQA所需依赖](#21--vqa所需依赖) + - [2.1 VQA所需依赖](#21--kie所需依赖) # 快速安装 diff --git a/ppstructure/docs/vqa/input/zh_val_0.jpg b/ppstructure/docs/kie/input/zh_val_0.jpg similarity index 100% rename from ppstructure/docs/vqa/input/zh_val_0.jpg rename to ppstructure/docs/kie/input/zh_val_0.jpg diff --git a/ppstructure/docs/vqa/input/zh_val_21.jpg b/ppstructure/docs/kie/input/zh_val_21.jpg similarity index 100% rename from ppstructure/docs/vqa/input/zh_val_21.jpg rename to ppstructure/docs/kie/input/zh_val_21.jpg diff --git a/ppstructure/docs/vqa/input/zh_val_40.jpg b/ppstructure/docs/kie/input/zh_val_40.jpg similarity index 100% rename from ppstructure/docs/vqa/input/zh_val_40.jpg rename to ppstructure/docs/kie/input/zh_val_40.jpg diff --git a/ppstructure/docs/vqa/input/zh_val_42.jpg b/ppstructure/docs/kie/input/zh_val_42.jpg similarity index 100% rename from ppstructure/docs/vqa/input/zh_val_42.jpg rename to ppstructure/docs/kie/input/zh_val_42.jpg diff --git a/ppstructure/docs/vqa/result_re/zh_val_21_re.jpg b/ppstructure/docs/kie/result_re/zh_val_21_re.jpg similarity index 100% rename from ppstructure/docs/vqa/result_re/zh_val_21_re.jpg rename to ppstructure/docs/kie/result_re/zh_val_21_re.jpg diff --git a/ppstructure/docs/vqa/result_re/zh_val_40_re.jpg b/ppstructure/docs/kie/result_re/zh_val_40_re.jpg similarity index 100% rename from ppstructure/docs/vqa/result_re/zh_val_40_re.jpg rename to ppstructure/docs/kie/result_re/zh_val_40_re.jpg diff --git a/ppstructure/docs/vqa/result_re/zh_val_42_re.jpg b/ppstructure/docs/kie/result_re/zh_val_42_re.jpg similarity index 100% rename from ppstructure/docs/vqa/result_re/zh_val_42_re.jpg rename to ppstructure/docs/kie/result_re/zh_val_42_re.jpg diff --git a/ppstructure/docs/vqa/result_re_with_gt_ocr/zh_val_42_re.jpg b/ppstructure/docs/kie/result_re_with_gt_ocr/zh_val_42_re.jpg similarity index 100% rename from ppstructure/docs/vqa/result_re_with_gt_ocr/zh_val_42_re.jpg rename to ppstructure/docs/kie/result_re_with_gt_ocr/zh_val_42_re.jpg diff --git a/ppstructure/docs/vqa/result_ser/zh_val_0_ser.jpg b/ppstructure/docs/kie/result_ser/zh_val_0_ser.jpg similarity index 100% rename from ppstructure/docs/vqa/result_ser/zh_val_0_ser.jpg rename to ppstructure/docs/kie/result_ser/zh_val_0_ser.jpg diff --git a/ppstructure/docs/vqa/result_ser/zh_val_42_ser.jpg b/ppstructure/docs/kie/result_ser/zh_val_42_ser.jpg similarity index 100% rename from 
ppstructure/docs/vqa/result_ser/zh_val_42_ser.jpg rename to ppstructure/docs/kie/result_ser/zh_val_42_ser.jpg diff --git a/ppstructure/docs/vqa/result_ser_with_gt_ocr/zh_val_42_ser.jpg b/ppstructure/docs/kie/result_ser_with_gt_ocr/zh_val_42_ser.jpg similarity index 100% rename from ppstructure/docs/vqa/result_ser_with_gt_ocr/zh_val_42_ser.jpg rename to ppstructure/docs/kie/result_ser_with_gt_ocr/zh_val_42_ser.jpg diff --git a/ppstructure/docs/models_list.md b/ppstructure/docs/models_list.md index ef2994cabea38709464780d25b5f32c3b9801b4c..0b2f41deb5588c82238e93d835dc8c606e4fde2e 100644 --- a/ppstructure/docs/models_list.md +++ b/ppstructure/docs/models_list.md @@ -34,8 +34,8 @@ |模型名称|模型简介|推理模型大小|下载地址| | --- | --- | --- | --- | -|en_ppocr_mobile_v2.0_table_structure|基于TableRec-RARE在PubTabNet数据集上训练的英文表格识别模型|18.6M|[推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_structure_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.1/table/en_ppocr_mobile_v2.0_table_structure_train.tar) | -|en_ppstructure_mobile_v2.0_SLANet|基于SLANet在PubTabNet数据集上训练的英文表格识别模型|9M|[推理模型](https://paddleocr.bj.bcebos.com/ppstructure/models/slanet/en_ppstructure_mobile_v2.0_SLANet_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/ppstructure/models/slanet/en_ppstructure_mobile_v2.0_SLANet_train.tar) | +|en_ppocr_mobile_v2.0_table_structure|基于TableRec-RARE在PubTabNet数据集上训练的英文表格识别模型|6.8M|[推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_structure_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.1/table/en_ppocr_mobile_v2.0_table_structure_train.tar) | +|en_ppstructure_mobile_v2.0_SLANet|基于SLANet在PubTabNet数据集上训练的英文表格识别模型|9.2M|[推理模型](https://paddleocr.bj.bcebos.com/ppstructure/models/slanet/en_ppstructure_mobile_v2.0_SLANet_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/ppstructure/models/slanet/en_ppstructure_mobile_v2.0_SLANet_train.tar) | |ch_ppstructure_mobile_v2.0_SLANet|基于SLANet在PubTabNet数据集上训练的中文表格识别模型|9.3M|[推理模型](https://paddleocr.bj.bcebos.com/ppstructure/models/slanet/ch_ppstructure_mobile_v2.0_SLANet_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/ppstructure/models/slanet/ch_ppstructure_mobile_v2.0_SLANet_train.tar) | diff --git a/ppstructure/docs/models_list_en.md b/ppstructure/docs/models_list_en.md index 64a7cdebc3e3c7ac18ae2f61013aa4e8a7c3ead8..7ba1d30464287eaf67a0265464fcc261e3b4407f 100644 --- a/ppstructure/docs/models_list_en.md +++ b/ppstructure/docs/models_list_en.md @@ -4,7 +4,7 @@ - [2. OCR and Table Recognition](#2-ocr-and-table-recognition) - [2.1 OCR](#21-ocr) - [2.2 Table Recognition](#22-table-recognition) -- [3. VQA](#3-vqa) +- [3. VQA](#3-kie) - [4. 
KIE](#4-kie) @@ -35,8 +35,8 @@ If you need to use other OCR models, you can download the model in [PP-OCR model |model| description |inference model size|download| | --- |-----------------------------------------------------------------------------| --- | --- | -|en_ppocr_mobile_v2.0_table_structure| English table recognition model trained on PubTabNet dataset based on TableRec-RARE |18.6M|[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_structure_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.1/table/en_ppocr_mobile_v2.0_table_structure_train.tar) | -|en_ppstructure_mobile_v2.0_SLANet|English table recognition model trained on PubTabNet dataset based on SLANet|9M|[inference model](https://paddleocr.bj.bcebos.com/ppstructure/models/slanet/en_ppstructure_mobile_v2.0_SLANet_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/ppstructure/models/slanet/en_ppstructure_mobile_v2.0_SLANet_train.tar) | +|en_ppocr_mobile_v2.0_table_structure| English table recognition model trained on PubTabNet dataset based on TableRec-RARE |6.8M|[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_structure_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.1/table/en_ppocr_mobile_v2.0_table_structure_train.tar) | +|en_ppstructure_mobile_v2.0_SLANet|English table recognition model trained on PubTabNet dataset based on SLANet|9.2M|[inference model](https://paddleocr.bj.bcebos.com/ppstructure/models/slanet/en_ppstructure_mobile_v2.0_SLANet_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/ppstructure/models/slanet/en_ppstructure_mobile_v2.0_SLANet_train.tar) | |ch_ppstructure_mobile_v2.0_SLANet|Chinese table recognition model trained on PubTabNet dataset based on SLANet|9.3M|[inference model](https://paddleocr.bj.bcebos.com/ppstructure/models/slanet/ch_ppstructure_mobile_v2.0_SLANet_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/ppstructure/models/slanet/ch_ppstructure_mobile_v2.0_SLANet_train.tar) | diff --git a/ppstructure/docs/quickstart.md b/ppstructure/docs/quickstart.md index f4645bdfe011a12370bedc7bd7a125b28ded41ff..9a538a6f11d99e9caa4c3483421aaccc344079de 100644 --- a/ppstructure/docs/quickstart.md +++ b/ppstructure/docs/quickstart.md @@ -7,16 +7,16 @@ - [2.1.2 版面分析+表格识别](#212-版面分析表格识别) - [2.1.3 版面分析](#213-版面分析) - [2.1.4 表格识别](#214-表格识别) - - [2.1.5 DocVQA](#215-docvqa) + - [2.1.5 DocVQA](#215-dockie) - [2.2 代码使用](#22-代码使用) - [2.2.1 图像方向分类版面分析表格识别](#221-图像方向分类版面分析表格识别) - [2.2.2 版面分析+表格识别](#222-版面分析表格识别) - [2.2.3 版面分析](#223-版面分析) - [2.2.4 表格识别](#224-表格识别) - - [2.2.5 DocVQA](#225-docvqa) + - [2.2.5 DocVQA](#225-dockie) - [2.3 返回结果说明](#23-返回结果说明) - [2.3.1 版面分析+表格识别](#231-版面分析表格识别) - - [2.3.2 DocVQA](#232-docvqa) + - [2.3.2 DocVQA](#232-dockie) - [2.4 参数说明](#24-参数说明) @@ -64,7 +64,7 @@ paddleocr --image_dir=PaddleOCR/ppstructure/docs/table/table.jpg --type=structur #### 2.1.5 DocVQA -请参考:[文档视觉问答](../vqa/README.md)。 +请参考:[文档视觉问答](../kie/README.md)。 ### 2.2 代码使用 @@ -172,7 +172,7 @@ for line in result: #### 2.2.5 DocVQA -请参考:[文档视觉问答](../vqa/README.md)。 +请参考:[文档视觉问答](../kie/README.md)。 ### 2.3 返回结果说明 @@ -210,7 +210,7 @@ dict 里各个字段说明如下 #### 2.3.2 DocVQA -请参考:[文档视觉问答](../vqa/README.md)。 +请参考:[文档视觉问答](../kie/README.md)。 ### 2.4 参数说明 @@ -226,10 +226,10 @@ dict 里各个字段说明如下 | layout_dict_path | 版面分析模型字典| ../ppocr/utils/dict/layout_publaynet_dict.txt | | layout_score_threshold | 版面分析模型检测框阈值| 0.5| | layout_nms_threshold | 版面分析模型nms阈值| 0.5| -| vqa_algorithm | 
vqa模型算法| LayoutXLM| +| kie_algorithm | kie模型算法| LayoutXLM| | ser_model_dir | ser模型 inference 模型地址| None| | ser_dict_path | ser模型字典| ../train_data/XFUND/class_list_xfun.txt| -| mode | structure or vqa | structure | +| mode | structure or kie | structure | | image_orientation | 前向中是否执行图像方向分类 | False | | layout | 前向中是否执行版面分析 | True | | table | 前向中是否执行表格识别 | True | diff --git a/ppstructure/docs/quickstart_en.md b/ppstructure/docs/quickstart_en.md index b4dee3f02d3c2762ef71720995f4da697ae43622..cf9d12ff9c1dadef95fedd3a02acb2146607aa96 100644 --- a/ppstructure/docs/quickstart_en.md +++ b/ppstructure/docs/quickstart_en.md @@ -7,16 +7,16 @@ - [2.1.2 layout analysis + table recognition](#212-layout-analysis--table-recognition) - [2.1.3 layout analysis](#213-layout-analysis) - [2.1.4 table recognition](#214-table-recognition) - - [2.1.5 DocVQA](#215-docvqa) + - [2.1.5 DocVQA](#215-dockie) - [2.2 Use by code](#22-use-by-code) - [2.2.1 image orientation + layout analysis + table recognition](#221-image-orientation--layout-analysis--table-recognition) - [2.2.2 layout analysis + table recognition](#222-layout-analysis--table-recognition) - [2.2.3 layout analysis](#223-layout-analysis) - [2.2.4 table recognition](#224-table-recognition) - - [2.2.5 DocVQA](#225-docvqa) + - [2.2.5 DocVQA](#225-dockie) - [2.3 Result description](#23-result-description) - [2.3.1 layout analysis + table recognition](#231-layout-analysis--table-recognition) - - [2.3.2 DocVQA](#232-docvqa) + - [2.3.2 DocVQA](#232-dockie) - [2.4 Parameter Description](#24-parameter-description) @@ -64,7 +64,7 @@ paddleocr --image_dir=PaddleOCR/ppstructure/docs/table/table.jpg --type=structur #### 2.1.5 DocVQA -Please refer to: [Documentation Visual Q&A](../vqa/README.md) . +Please refer to: [Documentation Visual Q&A](../kie/README.md) . ### 2.2 Use by code @@ -172,7 +172,7 @@ for line in result: #### 2.2.5 DocVQA -Please refer to: [Documentation Visual Q&A](../vqa/README.md) . +Please refer to: [Documentation Visual Q&A](../kie/README.md) . ### 2.3 Result description @@ -210,7 +210,7 @@ After the recognition is completed, each image will have a directory with the sa #### 2.3.2 DocVQA -Please refer to: [Documentation Visual Q&A](../vqa/README.md) . +Please refer to: [Documentation Visual Q&A](../kie/README.md) . ### 2.4 Parameter Description @@ -226,10 +226,10 @@ Please refer to: [Documentation Visual Q&A](../vqa/README.md) . 
| layout_dict_path | The dictionary path of layout analysis model| ../ppocr/utils/dict/layout_publaynet_dict.txt | | layout_score_threshold | The box threshold path of layout analysis model| 0.5| | layout_nms_threshold | The nms threshold path of layout analysis model| 0.5| -| vqa_algorithm | vqa model algorithm| LayoutXLM| +| kie_algorithm | kie model algorithm| LayoutXLM| | ser_model_dir | Ser model inference model path| None| | ser_dict_path | The dictionary path of Ser model| ../train_data/XFUND/class_list_xfun.txt| -| mode | structure or vqa | structure | +| mode | structure or kie | structure | | image_orientation | Whether to perform image orientation classification in forward | False | | layout | Whether to perform layout analysis in forward | True | | table | Whether to perform table recognition in forward | True | diff --git a/ppstructure/docs/table/layout.jpg b/ppstructure/docs/table/layout.jpg index db7246b314556d73cd49d049b9b480887b6ef994..c5c39dac7267d8c76121ee686a5931a551903d6f 100644 Binary files a/ppstructure/docs/table/layout.jpg and b/ppstructure/docs/table/layout.jpg differ diff --git a/ppstructure/docs/table/paper-image.jpg b/ppstructure/docs/table/paper-image.jpg index db7246b314556d73cd49d049b9b480887b6ef994..c5c39dac7267d8c76121ee686a5931a551903d6f 100644 Binary files a/ppstructure/docs/table/paper-image.jpg and b/ppstructure/docs/table/paper-image.jpg differ diff --git a/ppstructure/kie/README.md b/ppstructure/kie/README.md new file mode 100644 index 0000000000000000000000000000000000000000..9e1b72e772f03a9dadd202268c39cba11f8f121e --- /dev/null +++ b/ppstructure/kie/README.md @@ -0,0 +1,260 @@ +English | [简体中文](README_ch.md) + +- [1. Introduction](#1-introduction) + +- [2. Accuracy and performance](#2-Accuracy-and-performance) +- [3. Visualization](#3-Visualization) + - [3.1 SER](#31-ser) + - [3.2 RE](#32-re) +- [4. Usage](#4-usage) + - [4.1 Prepare for the environment](#41-Prepare-for-the-environment) + - [4.2 Quick start](#42-Quick-start) + - [4.3 More](#43-More) +- [5. Reference](#5-Reference) +- [6. License](#6-License) + + +## 1. Introduction + +Key information extraction (KIE) refers to extracting key information from text or images. As downstream task of OCR, the key information extraction task of document image has many practical application scenarios, such as form recognition, ticket information extraction, ID card information extraction, etc. + +PP-Structure conducts research based on the LayoutXLM multi-modal, and proposes the VI-LayoutXLM, which gets rid of visual features when finetuning the downstream tasks. An textline sorting method is also utilized to fit in reading order. What's more, UDML knowledge distillation is used for higher accuracy. Finally, the accuracy and inference speed of VI-LayoutXLM surpass those of LayoutXLM. + +The main features of the key information extraction module in PP-Structure are as follows. + + +- Integrate multi-modal methods such as [LayoutXLM](https://arxiv.org/pdf/2104.08836.pdf), VI-LayoutXLM, and PP-OCR inference engine. +- Supports Semantic Entity Recognition (SER) and Relation Extraction (RE) tasks based on multimodal methods. Based on the SER task, the text recognition and classification in the image can be completed; based on the RE task, the relationship extraction of the text content in the image can be completed, such as judging the problem pair (pair). +- Supports custom training for SER tasks and RE tasks. +- Supports end-to-end system prediction and evaluation of OCR+SER. 
+- Supports end-to-end system prediction of OCR+SER+RE. +- Support SER model export and inference using PaddleInference. + + +## 2. Accuracy and performance + +We evaluate the methods on the Chinese dataset of [XFUND](https://github.com/doc-analysis/XFUND), and the performance is as follows + +|Model | Backbone | Task | Config file | Hmean | Inference time (ms) | Download link| +| --- | --- | --- | --- | --- | --- | --- | +|VI-LayoutXLM| VI-LayoutXLM-base | SER | [ser_vi_layoutxlm_xfund_zh_udml.yml](../../configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh_udml.yml)|**93.19%**| 15.49|[trained model](https://paddleocr.bj.bcebos.com/ppstructure/models/vi_layoutxlm/ser_vi_layoutxlm_xfund_pretrained.tar)| +|LayoutXLM| LayoutXLM-base | SER | [ser_layoutxlm_xfund_zh.yml](../../configs/kie/layoutlm_series/ser_layoutxlm_xfund_zh.yml)|90.38%| 19.49 | [trained model](https://paddleocr.bj.bcebos.com/pplayout/ser_LayoutXLM_xfun_zh.tar)| +|VI-LayoutXLM| VI-LayoutXLM-base | RE | [re_vi_layoutxlm_xfund_zh_udml.yml](../../configs/kie/vi_layoutxlm/re_vi_layoutxlm_xfund_zh_udml.yml)|**83.92%**| 15.49|[trained model](https://paddleocr.bj.bcebos.com/ppstructure/models/vi_layoutxlm/re_vi_layoutxlm_xfund_pretrained.tar)| +|LayoutXLM| LayoutXLM-base | RE | [re_layoutxlm_xfund_zh.yml](../../configs/kie/layoutlm_series/re_layoutxlm_xfund_zh.yml)|74.83%| 19.49|[trained model](https://paddleocr.bj.bcebos.com/pplayout/re_LayoutXLM_xfun_zh.tar)| + + +* Note:Inference environment:V100 GPU + cuda10.2 + cudnn8.1.1 + TensorRT 7.2.3.4,tested using fp16. + +For more KIE models in PaddleOCR, please refer to [KIE model zoo](../../doc/doc_en/algorithm_overview_en.md). + + +## 3. Visualization + +There are two main solutions to the key information extraction task based on VI-LayoutXLM series model. + +(1) Text detection + text recognition + semantic entity recognition (SER) + +(2) Text detection + text recognition + semantic entity recognition (SER) + relationship extraction (RE) + + +The following images are demo results of the SER and RE models. For more detailed introduction to the above solutions, please refer to [KIE Guide](./how_to_do_kie.md). + +### 3.1 SER + +Demo results for SER task are as follows. + +
+ +
+ +
+ +
+ +
+ +
+ +
+ +
+
+
+**Note:** test pictures are from the [xfund dataset](https://github.com/doc-analysis/XFUND), the [invoice dataset](https://aistudio.baidu.com/aistudio/datasetdetail/165561) and a composite ID card dataset.
+
+
+Boxes of different colors in the images represent different categories.
+
+The invoice and application form images have three categories: `question`, `answer` and `header`. The recognized `question` and `answer` fields can then be used to extract the relationship between them.
+
+For the ID card image, the model can directly identify key information such as `name`, `gender` and `nationality`, so the subsequent relation extraction process is not required, and the key information extraction task can be completed with only one model.
+
+### 3.2 RE
+
+Demo results for the RE task are as follows.
+
+
+ +
+ +
+ +
+ +
+ +
+
+
+Red boxes are questions and blue boxes are answers. The green lines mean that the two connected objects form a pair.
+
+
+## 4. Usage
+
+### 4.1 Prepare for the environment
+
+
+Use the following command to install KIE dependencies.
+
+
+```bash
+git clone https://github.com/PaddlePaddle/PaddleOCR.git
+cd PaddleOCR
+pip install -r requirements.txt
+pip install -r ppstructure/kie/requirements.txt
+# install the paddleocr package, which provides the OCR engine used for prediction
+pip install paddleocr -U
+```
+
+The visualized results of SER are saved in the `./output` folder by default. Examples of the results are as follows.
+
+
+ +
+ + +### 4.2 Quick start + +Here we use XFUND dataset to quickly experience the SER model and RE model. + + +#### 4.2.1 Prepare for the dataset + +```bash +mkdir train_data +cd train_data +# download and uncompress the dataset +wget https://paddleocr.bj.bcebos.com/ppstructure/dataset/XFUND.tar && tar -xf XFUND.tar +cd .. +``` + +#### 4.2.2 Predict images using the trained model + +Use the following command to download the models. + +```bash +mkdir pretrained_model +cd pretrained_model +# download and uncompress the SER trained model +wget https://paddleocr.bj.bcebos.com/ppstructure/models/vi_layoutxlm/ser_vi_layoutxlm_xfund_pretrained.tar && tar -xf ser_vi_layoutxlm_xfund_pretrained.tar + +# download and uncompress the RE trained model +wget https://paddleocr.bj.bcebos.com/ppstructure/models/vi_layoutxlm/re_vi_layoutxlm_xfund_pretrained.tar && tar -xf re_vi_layoutxlm_xfund_pretrained.tar +``` + + +If you want to use OCR engine to obtain end-to-end prediction results, you can use the following command to predict. + +```bash +# just predict using SER trained model +python3 tools/infer_kie_token_ser.py \ + -c configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml \ + -o Architecture.Backbone.checkpoints=./pretrain_models/ser_vi_layoutxlm_xfund_pretrained/best_accuracy \ + Global.infer_img=./ppstructure/docs/kie/input/zh_val_42.jpg + +# predict using SER and RE trained model at the same time +python3 ./tools/infer_kie_token_ser_re.py \ + -c configs/kie/vi_layoutxlm/re_vi_layoutxlm_xfund_zh.yml \ + -o Architecture.Backbone.checkpoints=./pretrain_models/re_vi_layoutxlm_xfund_pretrained/best_accuracy \ + Global.infer_img=./train_data/XFUND/zh_val/image/zh_val_42.jpg \ + -c_ser configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml \ + -o_ser Architecture.Backbone.checkpoints=./pretrain_models/ser_vi_layoutxlm_xfund_pretrained/best_accuracy +``` + +The visual result images and the predicted text file will be saved in the `Global.save_res_path` directory. + + +If you want to load the text detection and recognition results collected before, you can use the following command to predict. + +```bash +# just predict using SER trained model +python3 tools/infer_kie_token_ser.py \ + -c configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml \ + -o Architecture.Backbone.checkpoints=./pretrain_models/ser_vi_layoutxlm_xfund_pretrained/best_accuracy \ + Global.infer_img=./train_data/XFUND/zh_val/val.json \ + Global.infer_mode=False + +# predict using SER and RE trained model at the same time +python3 ./tools/infer_kie_token_ser_re.py \ + -c configs/kie/vi_layoutxlm/re_vi_layoutxlm_xfund_zh.yml \ + -o Architecture.Backbone.checkpoints=./pretrain_models/re_vi_layoutxlm_xfund_pretrained/best_accuracy \ + Global.infer_img=./train_data/XFUND/zh_val/val.json \ + Global.infer_mode=False \ + -c_ser configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml \ + -o_ser Architecture.Backbone.checkpoints=./pretrain_models/ser_vi_layoutxlm_xfund_pretrained/best_accuracy +``` + +#### 4.2.3 Inference using PaddleInference + +At present, only SER model supports inference using PaddleInference. + +Firstly, download the inference SER inference model. + + +```bash +mkdir inference +cd inference +wget https://paddleocr.bj.bcebos.com/ppstructure/models/vi_layoutxlm/ser_vi_layoutxlm_xfund_infer.tar && tar -xf ser_vi_layoutxlm_xfund_infer.tar +``` + +Use the following command for inference. 
+ + +```bash +cd ppstructure +python3 kie/predict_kie_token_ser.py \ + --kie_algorithm=LayoutXLM \ + --ser_model_dir=../inference/ser_vi_layoutxlm_xfund_infer \ + --image_dir=./docs/kie/input/zh_val_42.jpg \ + --ser_dict_path=../train_data/XFUND/class_list_xfun.txt \ + --vis_font_path=../doc/fonts/simfang.ttf \ + --ocr_order_method="tb-yx" +``` + +The visual results and text file will be saved in directory `output`. + + +### 4.3 More + +For training, evaluation and inference tutorial for KIE models, please refer to [KIE doc](../../doc/doc_en/kie_en.md). + +For training, evaluation and inference tutorial for text detection models, please refer to [text detection doc](../../doc/doc_en/detection_en.md). + +For training, evaluation and inference tutorial for text recognition models, please refer to [text recognition doc](../../doc/doc_en/recognition.md). + +If you want to finish the KIE tasks in your scene, and don't know what to prepare, please refer to [End cdoc](../../doc/doc_en/recognition.md). + +关于怎样在自己的场景中完成关键信息抽取任务,请参考:[Guide to End-to-end KIE](./how_to_do_kie_en.md)。 + + +## 5. Reference + +- LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding, https://arxiv.org/pdf/2104.08836.pdf +- microsoft/unilm/layoutxlm, https://github.com/microsoft/unilm/tree/master/layoutxlm +- XFUND dataset, https://github.com/doc-analysis/XFUND + +## 6. License + +The content of this project itself is licensed under the [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/) diff --git a/ppstructure/kie/README_ch.md b/ppstructure/kie/README_ch.md new file mode 100644 index 0000000000000000000000000000000000000000..56c99ab73abe2b33ccfa18d4181312cd5f4d3622 --- /dev/null +++ b/ppstructure/kie/README_ch.md @@ -0,0 +1,241 @@ +[English](README.md) | 简体中文 + +# 关键信息抽取 + +- [1. 简介](#1-简介) +- [2. 精度与性能](#2-精度与性能) +- [3. 效果演示](#3-效果演示) + - [3.1 SER](#31-ser) + - [3.2 RE](#32-re) +- [4. 使用](#4-使用) + - [4.1 准备环境](#41-准备环境) + - [4.2 快速开始](#42-快速开始) + - [4.3 更多](#43-更多) +- [5. 参考链接](#5-参考链接) +- [6. License](#6-License) + + +## 1. 简介 + +关键信息抽取 (Key Information Extraction, KIE)指的是是从文本或者图像中,抽取出关键的信息。针对文档图像的关键信息抽取任务作为OCR的下游任务,存在非常多的实际应用场景,如表单识别、车票信息抽取、身份证信息抽取等。 + +PP-Structure 基于 LayoutXLM 文档多模态系列方法进行研究与优化,设计了视觉特征无关的多模态模型结构VI-LayoutXLM,同时引入符合阅读顺序的文本行排序方法以及UDML联合互学习蒸馏方法,最终在精度与速度均超越LayoutXLM。 + +PP-Structure中关键信息抽取模块的主要特性如下: + +- 集成[LayoutXLM](https://arxiv.org/pdf/2104.08836.pdf)、VI-LayoutXLM等多模态模型以及PP-OCR预测引擎。 +- 支持基于多模态方法的语义实体识别 (Semantic Entity Recognition, SER) 以及关系抽取 (Relation Extraction, RE) 任务。基于 SER 任务,可以完成对图像中的文本识别与分类;基于 RE 任务,可以完成对图象中的文本内容的关系提取,如判断问题对(pair)。 +- 支持SER任务和RE任务的自定义训练。 +- 支持OCR+SER的端到端系统预测与评估。 +- 支持OCR+SER+RE的端到端系统预测。 +- 支持SER模型的动转静导出与基于PaddleInfernece的模型推理。 + + +## 2. 
精度与性能 + + +我们在 [XFUND](https://github.com/doc-analysis/XFUND) 的中文数据集上对算法进行了评估,SER与RE上的任务性能如下 + +|模型|骨干网络|任务|配置文件|hmean|预测耗时(ms)|下载链接| +| --- | --- | --- | --- | --- | --- | --- | +|VI-LayoutXLM| VI-LayoutXLM-base | SER | [ser_vi_layoutxlm_xfund_zh_udml.yml](../../configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh_udml.yml)|**93.19%**| 15.49|[训练模型](https://paddleocr.bj.bcebos.com/ppstructure/models/vi_layoutxlm/ser_vi_layoutxlm_xfund_pretrained.tar)| +|LayoutXLM| LayoutXLM-base | SER | [ser_layoutxlm_xfund_zh.yml](../../configs/kie/layoutlm_series/ser_layoutxlm_xfund_zh.yml)|90.38%| 19.49 | [训练模型](https://paddleocr.bj.bcebos.com/pplayout/ser_LayoutXLM_xfun_zh.tar)| +|VI-LayoutXLM| VI-LayoutXLM-base | RE | [re_vi_layoutxlm_xfund_zh_udml.yml](../../configs/kie/vi_layoutxlm/re_vi_layoutxlm_xfund_zh_udml.yml)|**83.92%**| 15.49|[训练模型](https://paddleocr.bj.bcebos.com/ppstructure/models/vi_layoutxlm/re_vi_layoutxlm_xfund_pretrained.tar)| +|LayoutXLM| LayoutXLM-base | RE | [re_layoutxlm_xfund_zh.yml](../../configs/kie/layoutlm_series/re_layoutxlm_xfund_zh.yml)|74.83%| 19.49|[训练模型](https://paddleocr.bj.bcebos.com/pplayout/re_LayoutXLM_xfun_zh.tar)| + + +* 注:预测耗时测试条件:V100 GPU + cuda10.2 + cudnn8.1.1 + TensorRT 7.2.3.4,使用FP16进行测试。 + +更多关于PaddleOCR中关键信息抽取模型的介绍,请参考[关键信息抽取模型库](../../doc/doc_ch/algorithm_overview.md)。 + + +## 3. 效果演示 + +基于多模态模型的关键信息抽取任务有2种主要的解决方案。 + +(1)文本检测 + 文本识别 + 语义实体识别(SER) +(2)文本检测 + 文本识别 + 语义实体识别(SER) + 关系抽取(RE) + +下面给出SER与RE任务的示例效果,关于上述解决方案的详细介绍,请参考[关键信息抽取全流程指南](./how_to_do_kie.md)。 + +### 3.1 SER + +对于SER任务,效果如下所示。 + +
+ +
+ +
+ +
+ +
+ +
+ +
+ +
+ +**注意:** 测试图片来源于[XFUND数据集](https://github.com/doc-analysis/XFUND)、[发票数据集](https://aistudio.baidu.com/aistudio/datasetdetail/165561)以及合成的身份证数据集。 + + +图中不同颜色的框表示不同的类别。 + +图中的发票以及申请表图像,有`QUESTION`, `ANSWER`, `HEADER` 3种类别,识别的`QUESTION`, `ANSWER`可以用于后续的问题与答案的关系抽取。 + +图中的身份证图像,则直接识别出其中的`姓名`、`性别`、`民族`等关键信息,这样就无需后续的关系抽取过程,一个模型即可完成关键信息抽取。 + + +### 3.2 RE + +对于RE任务,效果如下所示。 + +
+ +
+ +
+ +
+ +
+ +
+ + +红色框是问题,蓝色框是答案。绿色线条表示连接的两端为一个key-value的pair。 + +## 4. 使用 + +### 4.1 准备环境 + +使用下面的命令安装运行SER与RE关键信息抽取的依赖。 + +```bash +git clone https://github.com/PaddlePaddle/PaddleOCR.git +cd PaddleOCR +pip install -r requirements.txt +pip install -r ppstructure/kie/requirements.txt +# 安装PaddleOCR引擎用于预测 +pip install paddleocr -U +``` + +### 4.2 快速开始 + +下面XFUND数据集,快速体验SER模型与RE模型。 + +#### 4.2.1 准备数据 + +```bash +mkdir train_data +cd train_data +# 下载与解压数据 +wget https://paddleocr.bj.bcebos.com/ppstructure/dataset/XFUND.tar && tar -xf XFUND.tar +cd .. +``` + +#### 4.2.2 基于动态图的预测 + +首先下载模型。 + +```bash +mkdir pretrained_model +cd pretrained_model +# 下载并解压SER预训练模型 +wget https://paddleocr.bj.bcebos.com/ppstructure/models/vi_layoutxlm/ser_vi_layoutxlm_xfund_pretrained.tar && tar -xf ser_vi_layoutxlm_xfund_pretrained.tar + +# 下载并解压RE预训练模型 +wget https://paddleocr.bj.bcebos.com/ppstructure/models/vi_layoutxlm/re_vi_layoutxlm_xfund_pretrained.tar && tar -xf re_vi_layoutxlm_xfund_pretrained.tar +``` + +如果希望使用OCR引擎,获取端到端的预测结果,可以使用下面的命令进行预测。 + +```bash +# 仅预测SER模型 +python3 tools/infer_kie_token_ser.py \ + -c configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml \ + -o Architecture.Backbone.checkpoints=./pretrain_models/ser_vi_layoutxlm_xfund_pretrained/best_accuracy \ + Global.infer_img=./ppstructure/docs/kie/input/zh_val_42.jpg + +# SER + RE模型串联 +python3 ./tools/infer_kie_token_ser_re.py \ + -c configs/kie/vi_layoutxlm/re_vi_layoutxlm_xfund_zh.yml \ + -o Architecture.Backbone.checkpoints=./pretrain_models/re_vi_layoutxlm_xfund_pretrained/best_accuracy \ + Global.infer_img=./train_data/XFUND/zh_val/image/zh_val_42.jpg \ + -c_ser configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml \ + -o_ser Architecture.Backbone.checkpoints=./pretrain_models/ser_vi_layoutxlm_xfund_pretrained/best_accuracy +``` + +`Global.save_res_path`目录中会保存可视化的结果图像以及预测的文本文件。 + + +如果希望加载标注好的文本检测与识别结果,仅预测可以使用下面的命令进行预测。 + +```bash +# 仅预测SER模型 +python3 tools/infer_kie_token_ser.py \ + -c configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml \ + -o Architecture.Backbone.checkpoints=./pretrain_models/ser_vi_layoutxlm_xfund_pretrained/best_accuracy \ + Global.infer_img=./train_data/XFUND/zh_val/val.json \ + Global.infer_mode=False + +# SER + RE模型串联 +python3 ./tools/infer_kie_token_ser_re.py \ + -c configs/kie/vi_layoutxlm/re_vi_layoutxlm_xfund_zh.yml \ + -o Architecture.Backbone.checkpoints=./pretrain_models/re_vi_layoutxlm_xfund_pretrained/best_accuracy \ + Global.infer_img=./train_data/XFUND/zh_val/val.json \ + Global.infer_mode=False \ + -c_ser configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml \ + -o_ser Architecture.Backbone.checkpoints=./pretrain_models/ser_vi_layoutxlm_xfund_pretrained/best_accuracy +``` + +#### 4.2.3 基于PaddleInference的预测 + +目前仅SER模型支持PaddleInference推理。 + +首先下载SER的推理模型。 + + +```bash +mkdir inference +cd inference +wget https://paddleocr.bj.bcebos.com/ppstructure/models/vi_layoutxlm/ser_vi_layoutxlm_xfund_infer.tar && tar -xf ser_vi_layoutxlm_xfund_infer.tar +``` + +执行下面的命令进行预测。 + +```bash +cd ppstructure +python3 kie/predict_kie_token_ser.py \ + --kie_algorithm=LayoutXLM \ + --ser_model_dir=../inference/ser_vi_layoutxlm_xfund_infer \ + --image_dir=./docs/kie/input/zh_val_42.jpg \ + --ser_dict_path=../train_data/XFUND/class_list_xfun.txt \ + --vis_font_path=../doc/fonts/simfang.ttf \ + --ocr_order_method="tb-yx" +``` + +可视化结果保存在`output`目录下。 + +### 4.3 更多 + +关于KIE模型的训练评估与推理,请参考:[关键信息抽取教程](../../doc/doc_ch/kie.md)。 + +关于文本检测模型的训练评估与推理,请参考:[文本检测教程](../../doc/doc_ch/detection.md)。 + 
+关于文本识别模型的训练评估与推理,请参考:[文本识别教程](../../doc/doc_ch/recognition.md)。 + +关于怎样在自己的场景中完成关键信息抽取任务,请参考:[关键信息抽取全流程指南](./how_to_do_kie.md)。 + + +## 5. 参考链接 + +- LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding, https://arxiv.org/pdf/2104.08836.pdf +- microsoft/unilm/layoutxlm, https://github.com/microsoft/unilm/tree/master/layoutxlm +- XFUND dataset, https://github.com/doc-analysis/XFUND + +## 6. License + +The content of this project itself is licensed under the [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/) diff --git a/ppstructure/vqa/how_to_do_kie.md b/ppstructure/kie/how_to_do_kie.md similarity index 100% rename from ppstructure/vqa/how_to_do_kie.md rename to ppstructure/kie/how_to_do_kie.md diff --git a/ppstructure/kie/how_to_do_kie_en.md b/ppstructure/kie/how_to_do_kie_en.md new file mode 100644 index 0000000000000000000000000000000000000000..23b2394f5aa3911a1311d3bc3be8f362861d34af --- /dev/null +++ b/ppstructure/kie/how_to_do_kie_en.md @@ -0,0 +1,179 @@ + +# Key Information Extraction Pipeline + +- [1. Introduction](#1-Introduction) + - [1.1 Background](#11-Background) + - [1.2 Mainstream Deep-learning Solutions](#12-Mainstream-Deep-learning-Solutions) +- [2. KIE Pipeline](#2-KIE-Pipeline) + - [2.1 Train OCR Models](#21-Train-OCR-Models) + - [2.2 Train KIE Models](#22-Train-KIE-Models) +- [3. Reference](#3-Reference) + + +## 1. Introduction + +### 1.1 Background + +Key information extraction (KIE) refers to extracting key information from text or images. As the downstream task of OCR, KIE of document image has many practical application scenarios, such as form recognition, ticket information extraction, ID card information extraction, etc. However, it is time-consuming and laborious to extract key information from these document images by manpower. It's challengable but also valuable to combine multi-modal features (visual, layout, text, etc) together and complete KIE tasks. + +For the document images in a specific scene, the position and layout of the key information are relatively fixed. Therefore, in the early stage of the research, there are many methods based on template matching to extract the key information. This method is still widely used in many simple scenarios at present. However, it takes long time to adjut the template for different scenarios. + + +The KIE in the document image generally contains 2 subtasks, which is as shown follows. + +* (1) SER: semantic entity recognition, which classifies each detected textline, such as dividing it into name and ID card. As shown in the red boxes in the following figure. + +* (2) RE: relationship extraction, which matches the question and answer based on SER results. As shown in the figure below, the yellow arrows match the question and answer. + +
+ +
+
+
+### 1.2 Mainstream Deep-learning Solutions
+
+General KIE methods are based on Named Entity Recognition (NER), but such methods only use text information and ignore location and visual feature information, which leads to limited accuracy. In recent years, most scholars have started to combine multi-modal features to improve the accuracy of KIE models. The main methods are as follows:
+
+* (1) Grid-based methods. These methods mainly focus on the fusion of multi-modal information at the image level. Most texts are of character granularity, and the text and structure information embedding method is simple, such as the chargrid algorithm [1].
+
+* (2) Token-based methods. These methods follow NLP methods such as BERT, encode the position, vision and other feature information into a multi-modal model, and conduct pre-training on large-scale datasets, so that in downstream tasks only a small amount of annotated data is required to obtain excellent results. Representative algorithms are LayoutLM [2], LayoutLMv2 [3], LayoutXLM [4], StrucTexT [5], etc.
+
+* (3) GCN-based methods. These methods try to learn the structural information between images and characters, so as to solve the problem of extracting open-set information (templates not seen in the training set), such as GCN [6], SDMGR [7] and other algorithms.
+
+* (4) End-to-end methods. These methods put the existing OCR text recognition and KIE information extraction tasks into a unified network for joint learning, so that the two tasks strengthen each other in the learning process, such as TRIE [8].
+
+For a more detailed introduction to these algorithms, please refer to Chapter 6 of [Diving into OCR](https://aistudio.baidu.com/aistudio/education/group/info/25207).
+
+## 2. KIE Pipeline
+
+Token-based methods such as LayoutXLM are implemented in PaddleOCR. Moreover, in PP-Structurev2, we simplified the LayoutXLM model and proposed VI-LayoutXLM, in which the visual feature extraction module is removed for speed-up. A textline sorting strategy conforming to the human reading order and the UDML knowledge distillation strategy are utilized for higher model accuracy.
+
+In the non end-to-end KIE methods, KIE needs at least **2 steps**. Firstly, the OCR model is used to extract the text and its position. Secondly, the KIE model is used to extract the key information according to the image, the text position and the text content.
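+
+Assuming the `paddleocr` pip package is installed, the two steps above can be sketched as follows. This is only an illustrative sketch: the image path is a placeholder, the exact nesting of the returned OCR result may differ slightly across paddleocr versions, and SER / RE inference in this repository is driven by config-based scripts such as tools/infer_kie_token_ser.py rather than by this snippet.
+
+```python
+# Minimal sketch of the two-step (non end-to-end) KIE pipeline described above.
+from paddleocr import PaddleOCR
+
+# Step 1: run text detection + recognition to get every textline and its position.
+ocr_engine = PaddleOCR(use_angle_cls=False, lang="ch")
+ocr_result = ocr_engine.ocr("./train_data/XFUND/zh_val/image/zh_val_42.jpg", cls=False)
+
+# Each entry contains a detected box and the recognized (text, score) pair;
+# the exact nesting depends on the installed paddleocr version.
+for item in ocr_result:
+    print(item)
+
+# Step 2 (not shown here): feed the image together with the textline positions and
+# contents into the SER / RE model, e.g. via tools/infer_kie_token_ser.py or
+# ppstructure/kie/predict_kie_token_ser.py, to extract the key information.
+```
+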
+### 2.1 Train OCR Models
+
+#### 2.1.1 Text Detection
+
+**(1) Data**
+
+Most of the models provided in PaddleOCR are general models. In the process of text detection, the detection of adjacent text lines is generally based on the distance of their positions. As shown in the figure above, when using the PP-OCRv3 general English detection model for text detection, it is easy to detect two fields representing different properties as one. Therefore, it is suggested to first finetune a detection model according to your scenario during the KIE task.
+
+During data annotation, the different key information fields need to be separated. Otherwise, it will increase the difficulty of the subsequent KIE task.
+
+For downstream tasks, generally speaking, `200~300` training images can guarantee the basic training effect. If there is not too much prior knowledge, **`200~300`** images can be labeled first for subsequent text detection model training.
+
+**(2) Model**
+
+In terms of model selection, the PP-OCRv3 detection model is recommended. For more information about the training methods of the detection model, please refer to: [Text detection tutorial](../../doc/doc_en/detection_en.md) and [PP-OCRv3 detection model tutorial](../../doc/doc_ch/PPOCRv3_det_train.md).
+
+#### 2.1.2 Text Recognition
+
+Compared with natural scenes, text recognition in document images is generally easier (the background is not too complex), so **it is suggested to** first try the PP-OCRv3 general text recognition model provided in PaddleOCR ([PP-OCRv3 model list](../../doc/doc_en/models_list_en.md)).
+
+**(1) Data**
+
+However, there are also some challenges in certain document scenarios, such as rare words in ID card scenarios and special fonts in invoice and other scenarios. These problems will increase the difficulty of text recognition. At this time, if you want to ensure or further improve the model accuracy, it is recommended to load the PP-OCRv3 model and finetune it on a text recognition dataset of the specific document scenario.
+
+In the process of model finetuning, it is recommended to prepare at least `5000` vertical scene text recognition images to ensure the basic fine-tuning effect. If you want to improve the accuracy and generalization ability of the model, you can synthesize more text recognition images similar to the scene, collect general real text recognition data from public datasets, and add them to the text recognition training process. In the training process, it is suggested that the ratio of real data, synthetic data and general data in each epoch should be around `1:1:1`, which can be controlled by setting the sampling ratio of the different data sources. If there are 3 training text files, containing 10k, 20k and 50k pieces of data respectively, the data can be set in the configuration file as follows:
+
+```yml
+Train:
+  dataset:
+    name: SimpleDataSet
+    data_dir: ./train_data/
+    label_file_list:
+    - ./train_data/train_list_10k.txt
+    - ./train_data/train_list_20k.txt
+    - ./train_data/train_list_50k.txt
+    ratio_list: [1.0, 0.5, 0.2]
+    ...
+```
+
+With `ratio_list: [1.0, 0.5, 0.2]`, roughly 10k samples are drawn from each file in every epoch (10k x 1.0, 20k x 0.5, 50k x 0.2), which gives the suggested `1:1:1` mix.
+
+**(2) Model**
+
+In terms of model selection, the PP-OCRv3 recognition model is recommended. For more information about the training methods of the recognition model, please refer to: [Text recognition tutorial](../../doc/doc_en/recognition_en.md) and [PP-OCRv3 model list](../../doc/doc_en/models_list_en.md).
+
+### 2.2 Train KIE Models
+
+There are two main methods to extract the key information from the recognized texts.
+
+(1) Directly use the SER model to obtain the key information category. For example, in the ID card scenario, we mark "name" and "Geoff Sample" as "name_key" and "name_value", respectively. The **text field** finally identified with the category "name_value" is the key information we need.
+
+(2) Jointly use the SER and RE models. In this case, we first use the SER model to obtain all questions (keys) and answers (values) in the image, and then use the RE model to match all keys and values to find the relationship, so as to complete the extraction of key information.
+
+#### 2.2.1 SER
+
+Take the ID card scenario as an example. The key information generally includes `name`, `DOB`, etc. We can directly mark the corresponding fields as specific categories, as shown in the following figure.
+
+ +
+
+**Note:**
+
+- In the labeling process, text content without key information shall be labeled as `other`, which is equivalent to background information. For example, in the ID card scenario, if we do not pay attention to the `DOB` information, we can mark the categories of `DOB` and `Area manager` as `other`.
+- In the annotation process, it is required to annotate the position of the **textline** rather than of single characters.
+
+In terms of data, generally speaking, for relatively fixed scenes, **50** training images can achieve acceptable effects. You can refer to [PPOCRLabel](../../PPOCRLabel/README.md) to finish the labeling process.
+
+In terms of model, it is recommended to use the VI-LayoutXLM model proposed in PP-Structurev2. It is improved based on the LayoutXLM model by removing the visual feature extraction module, which further improves the model inference speed without a significant reduction in model accuracy. For more tutorials, please refer to [VI-LayoutXLM introduction](../../doc/doc_en/algorithm_kie_vi_layoutxlm_en.md) and [KIE tutorial](../../doc/doc_en/kie_en.md).
+
+#### 2.2.2 SER + RE
+
+The SER model is mainly used to identify all keys and values in the document image, and the RE model is mainly used to match all keys and values.
+
+Taking the ID card scenario as an example, the key information generally includes fields such as `name`, `DOB`, etc. In the SER stage, we need to identify all questions (keys) and answers (values). The demo annotation is as follows: all keys can be annotated as `question`, and all values can be annotated as `answer`.
+
+ +
+ + +In the RE stage, the ID and connection information of each field need to be marked, as shown in the following figure. + +
+ +
+
+For each textline, you need to add 'id' and 'linking' field information. The 'id' records the unique identifier of the textline; different textlines in the same image cannot share the same 'id'. The 'linking' field is a list that records the connection information between different texts. If the 'id' of the field "name" is 0 and the 'id' of the field "Geoff Sample" is 1, then they both have the 'linking' mark [[0, 1]], indicating that the fields with `id=0` and `id=1` form a key-value relationship (the fields such as DOB and Expires are similar, and will not be repeated here).
+
+**Note:**
+
+- During annotation, if the value consists of multiple textlines, a key-value pair can be added to 'linking' for each of them, such as `[[0, 1], [0, 2]]`. A minimal annotation sketch is given after the reference list at the end of this document.
+
+In terms of data, generally speaking, for relatively fixed scenes, about **50** training images can achieve acceptable effects.
+
+In terms of model, it is recommended to use the VI-LayoutXLM model proposed in PP-Structurev2. It is improved based on the LayoutXLM model by removing the visual feature extraction module, which further improves the model inference speed without a significant reduction in model accuracy. For more tutorials, please refer to [VI-LayoutXLM introduction](../../doc/doc_en/algorithm_kie_vi_layoutxlm_en.md) and [KIE tutorial](../../doc/doc_en/kie_en.md).
+
+## 3. Reference
+
+[1] Katti A R, Reisswig C, Guder C, et al. Chargrid: Towards understanding 2d documents[J]. arXiv preprint arXiv:1809.08799, 2018.
+
+[2] Xu Y, Li M, Cui L, et al. LayoutLM: Pre-training of text and layout for document image understanding[C]//Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2020: 1192-1200.
+
+[3] Xu Y, Xu Y, Lv T, et al. LayoutLMv2: Multi-modal pre-training for visually-rich document understanding[J]. arXiv preprint arXiv:2012.14740, 2020.
+
+[4] Xu Y, Lv T, Cui L, et al. LayoutXLM: Multimodal pre-training for multilingual visually-rich document understanding[J]. arXiv preprint arXiv:2104.08836, 2021.
+
+[5] Li Y, Qian Y, Yu Y, et al. StrucTexT: Structured Text Understanding with Multi-Modal Transformers[C]//Proceedings of the 29th ACM International Conference on Multimedia. 2021: 1912-1920.
+
+[6] Liu X, Gao F, Zhang Q, et al. Graph convolution for multimodal information extraction from visually rich documents[J]. arXiv preprint arXiv:1903.11279, 2019.
+
+[7] Sun H, Kuang Z, Yue X, et al. Spatial Dual-Modality Graph Reasoning for Key Information Extraction[J]. arXiv preprint arXiv:2103.14470, 2021.
+
+[8] Zhang P, Xu Y, Cheng Z, et al. TRIE: End-to-end text reading and information extraction for document understanding[C]//Proceedings of the 28th ACM International Conference on Multimedia. 2020: 1413-1422.
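+
+To make the 'id' / 'linking' annotation described in Section 2.2.2 concrete, the following is a minimal, hypothetical sketch. The field names, ids and coordinates are illustrative assumptions rather than a normative schema; refer to the dataset conversion tools shipped with this change (e.g. ppstructure/kie/tools/trans_xfun_data.py) for the exact format expected by the training scripts.
+
+```python
+# Minimal, hypothetical sketch of two annotated textlines forming one key-value pair.
+import json
+
+textlines = [
+    {
+        "transcription": "name",          # the question (key), id = 0
+        "label": "question",
+        "points": [[10, 20], [80, 20], [80, 45], [10, 45]],  # illustrative box
+        "id": 0,
+        "linking": [[0, 1]],              # linked to the answer with id = 1
+    },
+    {
+        "transcription": "Geoff Sample",  # the answer (value), id = 1
+        "label": "answer",
+        "points": [[90, 20], [260, 20], [260, 45], [90, 45]],
+        "id": 1,
+        "linking": [[0, 1]],
+    },
+]
+
+# If the value spanned two textlines (ids 1 and 2), the linking list of the question
+# would become [[0, 1], [0, 2]], as noted in Section 2.2.2.
+print(json.dumps(textlines, ensure_ascii=False, indent=2))
+```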
diff --git a/ppstructure/vqa/predict_vqa_token_ser.py b/ppstructure/kie/predict_kie_token_ser.py similarity index 98% rename from ppstructure/vqa/predict_vqa_token_ser.py rename to ppstructure/kie/predict_kie_token_ser.py index 7647af9d10684bc6621b32e95d55e05948cb59b7..48cfc528a28e0a2bdfb51d3a537f26e891ae3286 100644 --- a/ppstructure/vqa/predict_vqa_token_ser.py +++ b/ppstructure/kie/predict_kie_token_ser.py @@ -30,7 +30,7 @@ from ppocr.data import create_operators, transform from ppocr.postprocess import build_post_process from ppocr.utils.logging import get_logger from ppocr.utils.visual import draw_ser_results -from ppocr.utils.utility import get_image_file_list, check_and_read_gif +from ppocr.utils.utility import get_image_file_list, check_and_read from ppstructure.utility import parse_args from paddleocr import PaddleOCR @@ -49,7 +49,7 @@ class SerPredictor(object): pre_process_list = [{ 'VQATokenLabelEncode': { - 'algorithm': args.vqa_algorithm, + 'algorithm': args.kie_algorithm, 'class_path': args.ser_dict_path, 'contains_re': False, 'ocr_engine': self.ocr_engine, @@ -138,7 +138,7 @@ def main(args): os.path.join(args.output, 'infer.txt'), mode='w', encoding='utf-8') as f_w: for image_file in image_file_list: - img, flag = check_and_read_gif(image_file) + img, flag, _ = check_and_read(image_file) if not flag: img = cv2.imread(image_file) img = img[:, :, ::-1] diff --git a/ppstructure/kie/requirements.txt b/ppstructure/kie/requirements.txt new file mode 100644 index 0000000000000000000000000000000000000000..53a7315d051704640b9a692ffaa52ce05fd16274 --- /dev/null +++ b/ppstructure/kie/requirements.txt @@ -0,0 +1,7 @@ +sentencepiece +yacs +seqeval +git+https://github.com/PaddlePaddle/PaddleNLP +pypandoc +attrdict +python_docx diff --git a/ppstructure/vqa/tools/eval_with_label_end2end.py b/ppstructure/kie/tools/eval_with_label_end2end.py similarity index 100% rename from ppstructure/vqa/tools/eval_with_label_end2end.py rename to ppstructure/kie/tools/eval_with_label_end2end.py diff --git a/ppstructure/vqa/tools/trans_funsd_label.py b/ppstructure/kie/tools/trans_funsd_label.py similarity index 100% rename from ppstructure/vqa/tools/trans_funsd_label.py rename to ppstructure/kie/tools/trans_funsd_label.py diff --git a/ppstructure/vqa/tools/trans_xfun_data.py b/ppstructure/kie/tools/trans_xfun_data.py similarity index 100% rename from ppstructure/vqa/tools/trans_xfun_data.py rename to ppstructure/kie/tools/trans_xfun_data.py diff --git a/ppstructure/layout/predict_layout.py b/ppstructure/layout/predict_layout.py index a58a63f4931336686cf7e7b4841819b17c31fdbf..9f8c884e144654901737191141622abfaa872d24 100755 --- a/ppstructure/layout/predict_layout.py +++ b/ppstructure/layout/predict_layout.py @@ -28,12 +28,13 @@ import tools.infer.utility as utility from ppocr.data import create_operators, transform from ppocr.postprocess import build_post_process from ppocr.utils.logging import get_logger -from ppocr.utils.utility import get_image_file_list, check_and_read_gif +from ppocr.utils.utility import get_image_file_list, check_and_read from ppstructure.utility import parse_args from picodet_postprocess import PicoDetPostProcess logger = get_logger() + class LayoutPredictor(object): def __init__(self, args): pre_process_list = [{ @@ -109,7 +110,7 @@ def main(args): repeats = 50 for image_file in image_file_list: - img, flag = check_and_read_gif(image_file) + img, flag, _ = check_and_read(image_file) if not flag: img = cv2.imread(image_file) if img is None: diff --git a/ppstructure/predict_system.py 
b/ppstructure/predict_system.py index 68a84a53e6572d393e98260e1f180fe39645ad2c..d63ab3b3daf018af7d0872e42bd14b8823d193ae 100644 --- a/ppstructure/predict_system.py +++ b/ppstructure/predict_system.py @@ -74,7 +74,7 @@ class StructureSystem(object): else: self.table_system = TableSystem(args) - elif self.mode == 'vqa': + elif self.mode == 'kie': raise NotImplementedError def __call__(self, img, img_idx=0, return_ocr_result_in_table=False): @@ -85,7 +85,7 @@ class StructureSystem(object): 'table_match': 0, 'det': 0, 'rec': 0, - 'vqa': 0, + 'kie': 0, 'all': 0 } start = time.time() @@ -174,7 +174,7 @@ class StructureSystem(object): end = time.time() time_dict['all'] = end - start return res_list, time_dict - elif self.mode == 'vqa': + elif self.mode == 'kie': raise NotImplementedError return None, None @@ -233,7 +233,7 @@ def main(args): save_structure_res(res, save_folder, img_name) draw_img = draw_structure_result(img, res, args.vis_font_path) img_save_path = os.path.join(save_folder, img_name, 'show.jpg') - elif structure_sys.mode == 'vqa': + elif structure_sys.mode == 'kie': raise NotImplementedError # draw_img = draw_ser_results(img, res, args.vis_font_path) # img_save_path = os.path.join(save_folder, img_name + '.jpg') @@ -263,7 +263,7 @@ def main(args): args.vis_font_path) img_save_path = os.path.join(save_folder, img_name, 'show_{}.jpg'.format(index)) - elif structure_sys.mode == 'vqa': + elif structure_sys.mode == 'kie': raise NotImplementedError # draw_img = draw_ser_results(img, res, args.vis_font_path) # img_save_path = os.path.join(save_folder, img_name + '.jpg') diff --git a/ppstructure/table/README.md b/ppstructure/table/README.md index 3732a89c54b3686a6d8cf390d3b9043826c4f459..a5d0da3ccd7b1893d826f026609ec39b804218da 100644 --- a/ppstructure/table/README.md +++ b/ppstructure/table/README.md @@ -33,8 +33,8 @@ We evaluated the algorithm on the PubTabNet[1] eval dataset, and the |Method|Acc|[TEDS(Tree-Edit-Distance-based Similarity)](https://github.com/ibm-aur-nlp/PubTabNet/tree/master/src)|Speed| | --- | --- | --- | ---| | EDD[2] |x| 88.3 |x| -| TableRec-RARE(ours) |73.8%| 95.3% |1550ms| -| SLANet(ours) | 76.2%| 95.85% |766ms| +| TableRec-RARE(ours) | 71.73%| 93.88% |779ms| +| SLANet(ours) | 76.31%| 95.89%|766ms| The performance indicators are explained as follows: - Acc: The accuracy of the table structure in each image, a wrong token is considered an error. 
diff --git a/ppstructure/table/README_ch.md b/ppstructure/table/README_ch.md index cc73f8bcec727f6eff1bf412fb877373d405e489..e83c81befbea95ab1e0a1f532901e39a4d80bd9d 100644 --- a/ppstructure/table/README_ch.md +++ b/ppstructure/table/README_ch.md @@ -39,8 +39,8 @@ |算法|Acc|[TEDS(Tree-Edit-Distance-based Similarity)](https://github.com/ibm-aur-nlp/PubTabNet/tree/master/src)|Speed| | --- | --- | --- | ---| | EDD[2] |x| 88.3% |x| -| TableRec-RARE(ours) |73.8%| 95.3% |1550ms| -| SLANet(ours) | 76.2%| 95.85% |766ms| +| TableRec-RARE(ours) | 71.73%| 93.88% |779ms| +| SLANet(ours) |76.31%| 95.89%|766ms| 性能指标解释如下: - Acc: 模型对每张图像里表格结构的识别准确率,错一个token就算错误。 diff --git a/ppstructure/table/predict_structure.py b/ppstructure/table/predict_structure.py index a580947aad428a0744e3da4b8302f047c6b11bee..45cbba3e298004d3711b05e6fb7cffecae637601 100755 --- a/ppstructure/table/predict_structure.py +++ b/ppstructure/table/predict_structure.py @@ -29,7 +29,7 @@ import tools.infer.utility as utility from ppocr.data import create_operators, transform from ppocr.postprocess import build_post_process from ppocr.utils.logging import get_logger -from ppocr.utils.utility import get_image_file_list, check_and_read_gif +from ppocr.utils.utility import get_image_file_list, check_and_read from ppocr.utils.visual import draw_rectangle from ppstructure.utility import parse_args @@ -133,7 +133,7 @@ def main(args): os.path.join(args.output, 'infer.txt'), mode='w', encoding='utf-8') as f_w: for image_file in image_file_list: - img, flag = check_and_read_gif(image_file) + img, flag, _ = check_and_read(image_file) if not flag: img = cv2.imread(image_file) if img is None: diff --git a/ppstructure/table/predict_table.py b/ppstructure/table/predict_table.py index e94347d86144cd66474546e99a2c9dffee4978d9..aeec66deca62f648df249a5833dbfa678d2da612 100644 --- a/ppstructure/table/predict_table.py +++ b/ppstructure/table/predict_table.py @@ -31,7 +31,7 @@ import tools.infer.predict_rec as predict_rec import tools.infer.predict_det as predict_det import tools.infer.utility as utility from tools.infer.predict_system import sorted_boxes -from ppocr.utils.utility import get_image_file_list, check_and_read_gif +from ppocr.utils.utility import get_image_file_list, check_and_read from ppocr.utils.logging import get_logger from ppstructure.table.matcher import TableMatch from ppstructure.table.table_master_match import TableMasterMatcher @@ -194,7 +194,7 @@ def main(args): for i, image_file in enumerate(image_file_list): logger.info("[{}/{}] {}".format(i, img_num, image_file)) - img, flag = check_and_read_gif(image_file) + img, flag, _ = check_and_read(image_file) excel_path = os.path.join( args.output, os.path.basename(image_file).split('.')[0] + '.xlsx') if not flag: diff --git a/ppstructure/utility.py b/ppstructure/utility.py index 2cf20eb53f87a8f8fbe2bdb4c3ead77f40120370..270ee3aef9ced40f47eaa5dd9aac3054469d69a8 100644 --- a/ppstructure/utility.py +++ b/ppstructure/utility.py @@ -49,8 +49,8 @@ def init_args(): type=float, default=0.5, help="Threshold of nms.") - # params for vqa - parser.add_argument("--vqa_algorithm", type=str, default='LayoutXLM') + # params for kie + parser.add_argument("--kie_algorithm", type=str, default='LayoutXLM') parser.add_argument("--ser_model_dir", type=str) parser.add_argument( "--ser_dict_path", @@ -63,7 +63,7 @@ def init_args(): "--mode", type=str, default='structure', - help='structure and vqa is supported') + help='structure and kie is supported') parser.add_argument( "--image_orientation", type=bool, @@ -90,10 
+90,7 @@ def init_args(): default=False, help='Whether to enable layout of recovery') parser.add_argument( - "--save_pdf", - type=bool, - default=False, - help='Whether to save pdf file') + "--save_pdf", type=bool, default=False, help='Whether to save pdf file') return parser diff --git a/ppstructure/vqa/README.md b/ppstructure/vqa/README.md deleted file mode 100644 index 28b794383bceccf655bdf00df5ee0c98841e2e95..0000000000000000000000000000000000000000 --- a/ppstructure/vqa/README.md +++ /dev/null @@ -1,285 +0,0 @@ -English | [简体中文](README_ch.md) - -- [1 Introduction](#1-introduction) -- [2. Performance](#2-performance) -- [3. Effect demo](#3-effect-demo) - - [3.1 SER](#31-ser) - - [3.2 RE](#32-re) -- [4. Install](#4-install) - - [4.1 Install dependencies](#41-install-dependencies) - - [5.3 RE](#53-re) -- [6. Reference Links](#6-reference-links) -- [License](#license) - -# Document Visual Question Answering - -## 1 Introduction - -VQA refers to visual question answering, which mainly asks and answers image content. DOC-VQA is one of the VQA tasks. DOC-VQA mainly asks questions about the text content of text images. - -The DOC-VQA algorithm in PP-Structure is developed based on the PaddleNLP natural language processing algorithm library. - -The main features are as follows: - -- Integrate [LayoutXLM](https://arxiv.org/pdf/2104.08836.pdf) model and PP-OCR prediction engine. -- Supports Semantic Entity Recognition (SER) and Relation Extraction (RE) tasks based on multimodal methods. Based on the SER task, the text recognition and classification in the image can be completed; based on the RE task, the relationship extraction of the text content in the image can be completed, such as judging the problem pair (pair). -- Supports custom training for SER tasks and RE tasks. -- Supports end-to-end system prediction and evaluation of OCR+SER. -- Supports end-to-end system prediction of OCR+SER+RE. - - -This project is an open source implementation of [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/pdf/2104.08836.pdf) on Paddle 2.2, -Included fine-tuning code on [XFUND dataset](https://github.com/doc-analysis/XFUND). - -## 2. Performance - -We evaluate the algorithm on the Chinese dataset of [XFUND](https://github.com/doc-analysis/XFUND), and the performance is as follows - -| Model | Task | hmean | Model download address | -|:---:|:---:|:---:| :---:| -| LayoutXLM | SER | 0.9038 | [link](https://paddleocr.bj.bcebos.com/pplayout/ser_LayoutXLM_xfun_zh.tar) | -| LayoutXLM | RE | 0.7483 | [link](https://paddleocr.bj.bcebos.com/pplayout/re_LayoutXLM_xfun_zh.tar) | -| LayoutLMv2 | SER | 0.8544 | [link](https://paddleocr.bj.bcebos.com/pplayout/ser_LayoutLMv2_xfun_zh.tar) -| LayoutLMv2 | RE | 0.6777 | [link](https://paddleocr.bj.bcebos.com/pplayout/re_LayoutLMv2_xfun_zh.tar) | -| LayoutLM | SER | 0.7731 | [link](https://paddleocr.bj.bcebos.com/pplayout/ser_LayoutLM_xfun_zh.tar) | - -## 3. Effect demo - -**Note:** The test images are from the XFUND dataset. - - -### 3.1 SER - -![](../docs/vqa/result_ser/zh_val_0_ser.jpg) | ![](../docs/vqa/result_ser/zh_val_42_ser.jpg) ----|--- - -Boxes with different colors in the figure represent different categories. For the XFUND dataset, there are 3 categories: `QUESTION`, `ANSWER`, `HEADER` - -* Dark purple: HEADER -* Light purple: QUESTION -* Army Green: ANSWER - -The corresponding categories and OCR recognition results are also marked on the upper left of the OCR detection frame. 
- - -### 3.2 RE - -![](../docs/vqa/result_re/zh_val_21_re.jpg) | ![](../docs/vqa/result_re/zh_val_40_re.jpg) ----|--- - - -The red box in the figure represents the question, the blue box represents the answer, and the question and the answer are connected by a green line. The corresponding categories and OCR recognition results are also marked on the upper left of the OCR detection frame. - -## 4. Install - -### 4.1 Install dependencies - -- **(1) Install PaddlePaddle** - -```bash -python3 -m pip install --upgrade pip - -# GPU installation -python3 -m pip install "paddlepaddle-gpu>=2.2" -i https://mirror.baidu.com/pypi/simple - -# CPU installation -python3 -m pip install "paddlepaddle>=2.2" -i https://mirror.baidu.com/pypi/simple - -```` -For more requirements, please refer to the instructions in [Installation Documentation](https://www.paddlepaddle.org.cn/install/quick). - -### 4.2 Install PaddleOCR - -- **(1) pip install PaddleOCR whl package quickly (prediction only)** - -```bash -python3 -m pip install paddleocr -```` - -- **(2) Download VQA source code (prediction + training)** - -```bash -[Recommended] git clone https://github.com/PaddlePaddle/PaddleOCR - -# If the pull cannot be successful due to network problems, you can also choose to use the hosting on the code cloud: -git clone https://gitee.com/paddlepaddle/PaddleOCR - -# Note: Code cloud hosting code may not be able to synchronize the update of this github project in real time, there is a delay of 3 to 5 days, please use the recommended method first. -```` - -- **(3) Install VQA's `requirements`** - -```bash -python3 -m pip install -r ppstructure/vqa/requirements.txt -```` - -## 5. Usage - -### 5.1 Data and Model Preparation - -If you want to experience the prediction process directly, you can download the pre-training model provided by us, skip the training process, and just predict directly. - -* Download the processed dataset - -The download address of the processed XFUND Chinese dataset: [link](https://paddleocr.bj.bcebos.com/ppstructure/dataset/XFUND.tar). - - -Download and unzip the dataset, and place the dataset in the current directory after unzipping. - -```shell -wget https://paddleocr.bj.bcebos.com/ppstructure/dataset/XFUND.tar -```` - -* Convert the dataset - -If you need to train other XFUND datasets, you can use the following commands to convert the datasets - -```bash -python3 ppstructure/vqa/tools/trans_xfun_data.py --ori_gt_path=path/to/json_path --output_path=path/to/save_path -```` - -* Download the pretrained models -```bash -mkdir pretrain && cd pretrain -#download the SER model -wget https://paddleocr.bj.bcebos.com/pplayout/ser_LayoutXLM_xfun_zh.tar && tar -xvf ser_LayoutXLM_xfun_zh.tar -#download the RE model -wget https://paddleocr.bj.bcebos.com/pplayout/re_LayoutXLM_xfun_zh.tar && tar -xvf re_LayoutXLM_xfun_zh.tar -cd ../ -```` - - -### 5.2 SER - -Before starting training, you need to modify the following four fields - -1. `Train.dataset.data_dir`: point to the directory where the training set images are stored -2. `Train.dataset.label_file_list`: point to the training set label file -3. `Eval.dataset.data_dir`: refers to the directory where the validation set images are stored -4. `Eval.dataset.label_file_list`: point to the validation set label file - -* start training -```shell -CUDA_VISIBLE_DEVICES=0 python3 tools/train.py -c configs/vqa/ser/layoutxlm.yml -```` - -Finally, `precision`, `recall`, `hmean` and other indicators will be printed. 
-In the `./output/ser_layoutxlm/` folder will save the training log, the optimal model and the model for the latest epoch. - -* resume training - -To resume training, assign the folder path of the previously trained model to the `Architecture.Backbone.checkpoints` field. - -```shell -CUDA_VISIBLE_DEVICES=0 python3 tools/train.py -c configs/vqa/ser/layoutxlm.yml -o Architecture.Backbone.checkpoints=path/to/model_dir -```` - -* evaluate - -Evaluation requires assigning the folder path of the model to be evaluated to the `Architecture.Backbone.checkpoints` field. - -```shell -CUDA_VISIBLE_DEVICES=0 python3 tools/eval.py -c configs/vqa/ser/layoutxlm.yml -o Architecture.Backbone.checkpoints=path/to/model_dir -```` -Finally, `precision`, `recall`, `hmean` and other indicators will be printed - -* `OCR + SER` tandem prediction based on training engine - -Use the following command to complete the series prediction of `OCR engine + SER`, taking the SER model based on LayoutXLM as an example:: - -```shell -python3.7 tools/export_model.py -c configs/vqa/ser/layoutxlm.yml -o Architecture.Backbone.checkpoints=pretrain/ser_LayoutXLM_xfun_zh/ Global.save_inference_dir=output/ser/infer -```` - -Finally, the prediction result visualization image and the prediction result text file will be saved in the directory configured by the `config.Global.save_res_path` field. The prediction result text file is named `infer_results.txt`. - -* End-to-end evaluation of `OCR + SER` prediction system - -First use the `tools/infer_vqa_token_ser.py` script to complete the prediction of the dataset, then use the following command to evaluate. - -```shell -export CUDA_VISIBLE_DEVICES=0 -python3 tools/eval_with_label_end2end.py --gt_json_path XFUND/zh_val/xfun_normalize_val.json --pred_json_path output_res/infer_results.txt -```` -* export model - -Use the following command to complete the model export of the SER model, taking the SER model based on LayoutXLM as an example: - -```shell -python3.7 tools/export_model.py -c configs/vqa/ser/layoutxlm.yml -o Architecture.Backbone.checkpoints=pretrain/ser_LayoutXLM_xfun_zh/ Global.save_inference_dir=output/ser/infer -``` -The converted model will be stored in the directory specified by the `Global.save_inference_dir` field. - -* `OCR + SER` tandem prediction based on prediction engine - -Use the following command to complete the tandem prediction of `OCR + SER` based on the prediction engine, taking the SER model based on LayoutXLM as an example: - -```shell -cd ppstructure -CUDA_VISIBLE_DEVICES=0 python3.7 vqa/predict_vqa_token_ser.py --vqa_algorithm=LayoutXLM --ser_model_dir=../output/ser/infer --ser_dict_path=../train_data/XFUND/class_list_xfun.txt --vis_font_path=../doc/fonts/simfang.ttf --image_dir=docs/vqa/input/zh_val_42.jpg --output=output -``` -After the prediction is successful, the visualization images and results will be saved in the directory specified by the `output` field - - -### 5.3 RE - -* start training - -Before starting training, you need to modify the following four fields - -1. `Train.dataset.data_dir`: point to the directory where the training set images are stored -2. `Train.dataset.label_file_list`: point to the training set label file -3. `Eval.dataset.data_dir`: refers to the directory where the validation set images are stored -4. 
`Eval.dataset.label_file_list`: point to the validation set label file - -```shell -CUDA_VISIBLE_DEVICES=0 python3 tools/train.py -c configs/vqa/re/layoutxlm.yml -```` - -Finally, `precision`, `recall`, `hmean` and other indicators will be printed. -In the `./output/re_layoutxlm/` folder will save the training log, the optimal model and the model for the latest epoch. - -* resume training - -To resume training, assign the folder path of the previously trained model to the `Architecture.Backbone.checkpoints` field. - -```shell -CUDA_VISIBLE_DEVICES=0 python3 tools/train.py -c configs/vqa/re/layoutxlm.yml -o Architecture.Backbone.checkpoints=path/to/model_dir -```` - -* evaluate - -Evaluation requires assigning the folder path of the model to be evaluated to the `Architecture.Backbone.checkpoints` field. - -```shell -CUDA_VISIBLE_DEVICES=0 python3 tools/eval.py -c configs/vqa/re/layoutxlm.yml -o Architecture.Backbone.checkpoints=path/to/model_dir -```` -Finally, `precision`, `recall`, `hmean` and other indicators will be printed - -* Use `OCR engine + SER + RE` tandem prediction - -Use the following command to complete the series prediction of `OCR engine + SER + RE`, taking the pretrained SER and RE models as an example: -```shell -export CUDA_VISIBLE_DEVICES=0 -python3 tools/infer_vqa_token_ser_re.py -c configs/vqa/re/layoutxlm.yml -o Architecture.Backbone.checkpoints=pretrain/re_LayoutXLM_xfun_zh/Global.infer_img=ppstructure/docs/vqa/input/zh_val_21.jpg -c_ser configs/vqa/ser/layoutxlm. yml -o_ser Architecture.Backbone.checkpoints=pretrain/ser_LayoutXLM_xfun_zh/ -```` - -Finally, the prediction result visualization image and the prediction result text file will be saved in the directory configured by the `config.Global.save_res_path` field. The prediction result text file is named `infer_results.txt`. - -* export model - -cooming soon - -* `OCR + SER + RE` tandem prediction based on prediction engine - -cooming soon - -## 6. Reference Links - -- LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding, https://arxiv.org/pdf/2104.08836.pdf -- microsoft/unilm/layoutxlm, https://github.com/microsoft/unilm/tree/master/layoutxlm -- XFUND dataset, https://github.com/doc-analysis/XFUND - -## License - -The content of this project itself is licensed under the [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/) diff --git a/ppstructure/vqa/README_ch.md b/ppstructure/vqa/README_ch.md deleted file mode 100644 index f168110ed9b2e750b3b2ee6f5ab0116daebc3e77..0000000000000000000000000000000000000000 --- a/ppstructure/vqa/README_ch.md +++ /dev/null @@ -1,283 +0,0 @@ -[English](README.md) | 简体中文 - -- [1. 简介](#1-简介) -- [2. 性能](#2-性能) -- [3. 效果演示](#3-效果演示) - - [3.1 SER](#31-ser) - - [3.2 RE](#32-re) -- [4. 安装](#4-安装) - - [4.1 安装依赖](#41-安装依赖) - - [4.2 安装PaddleOCR(包含 PP-OCR 和 VQA)](#42-安装paddleocr包含-pp-ocr-和-vqa) -- [5. 使用](#5-使用) - - [5.1 数据和预训练模型准备](#51-数据和预训练模型准备) - - [5.2 SER](#52-ser) - - [5.3 RE](#53-re) -- [6. 参考链接](#6-参考链接) -- [License](#license) - -# 文档视觉问答(DOC-VQA) - -## 1. 
简介 - -VQA指视觉问答,主要针对图像内容进行提问和回答,DOC-VQA是VQA任务中的一种,DOC-VQA主要针对文本图像的文字内容提出问题。 - -PP-Structure 里的 DOC-VQA算法基于PaddleNLP自然语言处理算法库进行开发。 - -主要特性如下: - -- 集成[LayoutXLM](https://arxiv.org/pdf/2104.08836.pdf)模型以及PP-OCR预测引擎。 -- 支持基于多模态方法的语义实体识别 (Semantic Entity Recognition, SER) 以及关系抽取 (Relation Extraction, RE) 任务。基于 SER 任务,可以完成对图像中的文本识别与分类;基于 RE 任务,可以完成对图象中的文本内容的关系提取,如判断问题对(pair)。 -- 支持SER任务和RE任务的自定义训练。 -- 支持OCR+SER的端到端系统预测与评估。 -- 支持OCR+SER+RE的端到端系统预测。 - -本项目是 [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/pdf/2104.08836.pdf) 在 Paddle 2.2上的开源实现, -包含了在 [XFUND数据集](https://github.com/doc-analysis/XFUND) 上的微调代码。 - -## 2. 性能 - -我们在 [XFUND](https://github.com/doc-analysis/XFUND) 的中文数据集上对算法进行了评估,性能如下 - -| 模型 | 任务 | hmean | 模型下载地址 | -|:---:|:---:|:---:| :---:| -| LayoutXLM | SER | 0.9038 | [链接](https://paddleocr.bj.bcebos.com/pplayout/ser_LayoutXLM_xfun_zh.tar) | -| LayoutXLM | RE | 0.7483 | [链接](https://paddleocr.bj.bcebos.com/pplayout/re_LayoutXLM_xfun_zh.tar) | -| LayoutLMv2 | SER | 0.8544 | [链接](https://paddleocr.bj.bcebos.com/pplayout/ser_LayoutLMv2_xfun_zh.tar) -| LayoutLMv2 | RE | 0.6777 | [链接](https://paddleocr.bj.bcebos.com/pplayout/re_LayoutLMv2_xfun_zh.tar) | -| LayoutLM | SER | 0.7731 | [链接](https://paddleocr.bj.bcebos.com/pplayout/ser_LayoutLM_xfun_zh.tar) | - -## 3. 效果演示 - -**注意:** 测试图片来源于XFUND数据集。 - -### 3.1 SER - -![](../docs/vqa/result_ser/zh_val_0_ser.jpg) | ![](../docs/vqa/result_ser/zh_val_42_ser.jpg) ----|--- - -图中不同颜色的框表示不同的类别,对于XFUND数据集,有`QUESTION`, `ANSWER`, `HEADER` 3种类别 - -* 深紫色:HEADER -* 浅紫色:QUESTION -* 军绿色:ANSWER - -在OCR检测框的左上方也标出了对应的类别和OCR识别结果。 - -### 3.2 RE - -![](../docs/vqa/result_re/zh_val_21_re.jpg) | ![](../docs/vqa/result_re/zh_val_40_re.jpg) ----|--- - - -图中红色框表示问题,蓝色框表示答案,问题和答案之间使用绿色线连接。在OCR检测框的左上方也标出了对应的类别和OCR识别结果。 - -## 4. 安装 - -### 4.1 安装依赖 - -- **(1) 安装PaddlePaddle** - -```bash -python3 -m pip install --upgrade pip - -# GPU安装 -python3 -m pip install "paddlepaddle-gpu>=2.2" -i https://mirror.baidu.com/pypi/simple - -# CPU安装 -python3 -m pip install "paddlepaddle>=2.2" -i https://mirror.baidu.com/pypi/simple - -``` -更多需求,请参照[安装文档](https://www.paddlepaddle.org.cn/install/quick)中的说明进行操作。 - -### 4.2 安装PaddleOCR(包含 PP-OCR 和 VQA) - -- **(1)pip快速安装PaddleOCR whl包(仅预测)** - -```bash -python3 -m pip install paddleocr -``` - -- **(2)下载VQA源码(预测+训练)** - -```bash -【推荐】git clone https://github.com/PaddlePaddle/PaddleOCR - -# 如果因为网络问题无法pull成功,也可选择使用码云上的托管: -git clone https://gitee.com/paddlepaddle/PaddleOCR - -# 注:码云托管代码可能无法实时同步本github项目更新,存在3~5天延时,请优先使用推荐方式。 -``` - -- **(3)安装VQA的`requirements`** - -```bash -python3 -m pip install -r ppstructure/vqa/requirements.txt -``` - -## 5. 使用 - -### 5.1 数据和预训练模型准备 - -如果希望直接体验预测过程,可以下载我们提供的预训练模型,跳过训练过程,直接预测即可。 - -* 下载处理好的数据集 - -处理好的XFUND中文数据集下载地址:[链接](https://paddleocr.bj.bcebos.com/ppstructure/dataset/XFUND.tar)。 - - -下载并解压该数据集,解压后将数据集放置在当前目录下。 - -```shell -wget https://paddleocr.bj.bcebos.com/ppstructure/dataset/XFUND.tar -``` - -* 转换数据集 - -若需进行其他XFUND数据集的训练,可使用下面的命令进行数据集的转换 - -```bash -python3 ppstructure/vqa/tools/trans_xfun_data.py --ori_gt_path=path/to/json_path --output_path=path/to/save_path -``` - -* 下载预训练模型 -```bash -mkdir pretrain && cd pretrain -#下载SER模型 -wget https://paddleocr.bj.bcebos.com/pplayout/ser_LayoutXLM_xfun_zh.tar && tar -xvf ser_LayoutXLM_xfun_zh.tar -#下载RE模型 -wget https://paddleocr.bj.bcebos.com/pplayout/re_LayoutXLM_xfun_zh.tar && tar -xvf re_LayoutXLM_xfun_zh.tar -cd ../ -``` - -### 5.2 SER - -启动训练之前,需要修改下面的四个字段 - -1. `Train.dataset.data_dir`:指向训练集图片存放目录 -2. 
`Train.dataset.label_file_list`:指向训练集标注文件 -3. `Eval.dataset.data_dir`:指指向验证集图片存放目录 -4. `Eval.dataset.label_file_list`:指向验证集标注文件 - -* 启动训练 -```shell -CUDA_VISIBLE_DEVICES=0 python3 tools/train.py -c configs/vqa/ser/layoutxlm.yml -``` - -最终会打印出`precision`, `recall`, `hmean`等指标。 -在`./output/ser_layoutxlm/`文件夹中会保存训练日志,最优的模型和最新epoch的模型。 - -* 恢复训练 - -恢复训练需要将之前训练好的模型所在文件夹路径赋值给 `Architecture.Backbone.checkpoints` 字段。 - -```shell -CUDA_VISIBLE_DEVICES=0 python3 tools/train.py -c configs/vqa/ser/layoutxlm.yml -o Architecture.Backbone.checkpoints=path/to/model_dir -``` - -* 评估 - -评估需要将待评估的模型所在文件夹路径赋值给 `Architecture.Backbone.checkpoints` 字段。 - -```shell -CUDA_VISIBLE_DEVICES=0 python3 tools/eval.py -c configs/vqa/ser/layoutxlm.yml -o Architecture.Backbone.checkpoints=path/to/model_dir -``` -最终会打印出`precision`, `recall`, `hmean`等指标 - -* 基于训练引擎的`OCR + SER`串联预测 - -使用如下命令即可完成基于训练引擎的`OCR + SER`的串联预测, 以基于LayoutXLM的SER模型为例: -```shell -CUDA_VISIBLE_DEVICES=0 python3 tools/infer_vqa_token_ser.py -c configs/vqa/ser/layoutxlm.yml -o Architecture.Backbone.checkpoints=pretrain/ser_LayoutXLM_xfun_zh/ Global.infer_img=doc/vqa/input/zh_val_42.jpg -``` - -最终会在`config.Global.save_res_path`字段所配置的目录下保存预测结果可视化图像以及预测结果文本文件,预测结果文本文件名为`infer_results.txt`。 - -* 对`OCR + SER`预测系统进行端到端评估 - -首先使用 `tools/infer_vqa_token_ser.py` 脚本完成数据集的预测,然后使用下面的命令进行评估。 - -```shell -export CUDA_VISIBLE_DEVICES=0 -python3 tools/eval_with_label_end2end.py --gt_json_path XFUND/zh_val/xfun_normalize_val.json --pred_json_path output_res/infer_results.txt -``` -* 模型导出 - -使用如下命令即可完成SER模型的模型导出, 以基于LayoutXLM的SER模型为例: - -```shell -python3.7 tools/export_model.py -c configs/vqa/ser/layoutxlm.yml -o Architecture.Backbone.checkpoints=pretrain/ser_LayoutXLM_xfun_zh/ Global.save_inference_dir=output/ser/infer -``` -转换后的模型会存放在`Global.save_inference_dir`字段指定的目录下。 - -* 基于预测引擎的`OCR + SER`串联预测 - -使用如下命令即可完成基于预测引擎的`OCR + SER`的串联预测, 以基于LayoutXLM的SER模型为例: - -```shell -cd ppstructure -CUDA_VISIBLE_DEVICES=0 python3.7 vqa/predict_vqa_token_ser.py --vqa_algorithm=LayoutXLM --ser_model_dir=../output/ser/infer --ser_dict_path=../train_data/XFUND/class_list_xfun.txt --vis_font_path=../doc/fonts/simfang.ttf --image_dir=docs/vqa/input/zh_val_42.jpg --output=output -``` -预测成功后,可视化图片和结果会保存在`output`字段指定的目录下 - -### 5.3 RE - -* 启动训练 - -启动训练之前,需要修改下面的四个字段 - -1. `Train.dataset.data_dir`:指向训练集图片存放目录 -2. `Train.dataset.label_file_list`:指向训练集标注文件 -3. `Eval.dataset.data_dir`:指指向验证集图片存放目录 -4. 
`Eval.dataset.label_file_list`:指向验证集标注文件 - -```shell -CUDA_VISIBLE_DEVICES=0 python3 tools/train.py -c configs/vqa/re/layoutxlm.yml -``` - -最终会打印出`precision`, `recall`, `hmean`等指标。 -在`./output/re_layoutxlm/`文件夹中会保存训练日志,最优的模型和最新epoch的模型。 - -* 恢复训练 - -恢复训练需要将之前训练好的模型所在文件夹路径赋值给 `Architecture.Backbone.checkpoints` 字段。 - -```shell -CUDA_VISIBLE_DEVICES=0 python3 tools/train.py -c configs/vqa/re/layoutxlm.yml -o Architecture.Backbone.checkpoints=path/to/model_dir -``` - -* 评估 - -评估需要将待评估的模型所在文件夹路径赋值给 `Architecture.Backbone.checkpoints` 字段。 - -```shell -CUDA_VISIBLE_DEVICES=0 python3 tools/eval.py -c configs/vqa/re/layoutxlm.yml -o Architecture.Backbone.checkpoints=path/to/model_dir -``` -最终会打印出`precision`, `recall`, `hmean`等指标 - -* 基于训练引擎的`OCR + SER + RE`串联预测 - -使用如下命令即可完成基于训练引擎的`OCR + SER + RE`串联预测, 以基于LayoutXLMSER和RE模型为例: -```shell -export CUDA_VISIBLE_DEVICES=0 -python3 tools/infer_vqa_token_ser_re.py -c configs/vqa/re/layoutxlm.yml -o Architecture.Backbone.checkpoints=pretrain/re_LayoutXLM_xfun_zh/ Global.infer_img=ppstructure/docs/vqa/input/zh_val_21.jpg -c_ser configs/vqa/ser/layoutxlm.yml -o_ser Architecture.Backbone.checkpoints=pretrain/ser_LayoutXLM_xfun_zh/ -``` - -最终会在`config.Global.save_res_path`字段所配置的目录下保存预测结果可视化图像以及预测结果文本文件,预测结果文本文件名为`infer_results.txt`。 - -* 模型导出 - -cooming soon - -* 基于预测引擎的`OCR + SER + RE`串联预测 - -cooming soon - -## 6. 参考链接 - -- LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding, https://arxiv.org/pdf/2104.08836.pdf -- microsoft/unilm/layoutxlm, https://github.com/microsoft/unilm/tree/master/layoutxlm -- XFUND dataset, https://github.com/doc-analysis/XFUND - -## License - -The content of this project itself is licensed under the [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/) diff --git a/ppstructure/vqa/requirements.txt b/ppstructure/vqa/requirements.txt deleted file mode 100644 index fcd882274c4402ba2a1d34f20ee6e2befa157121..0000000000000000000000000000000000000000 --- a/ppstructure/vqa/requirements.txt +++ /dev/null @@ -1,7 +0,0 @@ -sentencepiece -yacs -seqeval -paddlenlp>=2.2.1 -pypandoc -attrdict -python_docx \ No newline at end of file diff --git a/test_tipc/configs/layoutxlm_ser/train_infer_python.txt b/test_tipc/configs/layoutxlm_ser/train_infer_python.txt index 5284ffabe2de4eb8bb000e7fb745ef2846ed6b64..549a31e69e367237ec0396778162a5f91c8b7412 100644 --- a/test_tipc/configs/layoutxlm_ser/train_infer_python.txt +++ b/test_tipc/configs/layoutxlm_ser/train_infer_python.txt @@ -9,7 +9,7 @@ Global.save_model_dir:./output/ Train.loader.batch_size_per_card:lite_train_lite_infer=4|whole_train_whole_infer=8 Architecture.Backbone.checkpoints:null train_model_name:latest -train_infer_img_dir:ppstructure/docs/vqa/input/zh_val_42.jpg +train_infer_img_dir:ppstructure/docs/kie/input/zh_val_42.jpg null:null ## trainer:norm_train @@ -37,7 +37,7 @@ export2:null infer_model:null infer_export:null infer_quant:False -inference:ppstructure/vqa/predict_vqa_token_ser.py --vqa_algorithm=LayoutXLM --ser_dict_path=train_data/XFUND/class_list_xfun.txt --output=output +inference:ppstructure/kie/predict_kie_token_ser.py --kie_algorithm=LayoutXLM --ser_dict_path=train_data/XFUND/class_list_xfun.txt --output=output --use_gpu:True|False --enable_mkldnn:False --cpu_threads:6 @@ -45,7 +45,7 @@ inference:ppstructure/vqa/predict_vqa_token_ser.py --vqa_algorithm=LayoutXLM - --use_tensorrt:False --precision:fp32 --ser_model_dir: 
---image_dir:./ppstructure/docs/vqa/input/zh_val_42.jpg +--image_dir:./ppstructure/docs/kie/input/zh_val_42.jpg null:null --benchmark:False null:null diff --git a/test_tipc/configs/vi_layoutxlm_ser/train_infer_python.txt b/test_tipc/configs/vi_layoutxlm_ser/train_infer_python.txt index 59d347461171487c186c052e290f6b13236aa5c9..adad78bb76e34635a632ef7c1b55e212bc4b636a 100644 --- a/test_tipc/configs/vi_layoutxlm_ser/train_infer_python.txt +++ b/test_tipc/configs/vi_layoutxlm_ser/train_infer_python.txt @@ -9,7 +9,7 @@ Global.save_model_dir:./output/ Train.loader.batch_size_per_card:lite_train_lite_infer=4|whole_train_whole_infer=8 Architecture.Backbone.checkpoints:null train_model_name:latest -train_infer_img_dir:ppstructure/docs/vqa/input/zh_val_42.jpg +train_infer_img_dir:ppstructure/docs/kie/input/zh_val_42.jpg null:null ## trainer:norm_train @@ -37,7 +37,7 @@ export2:null infer_model:null infer_export:null infer_quant:False -inference:ppstructure/vqa/predict_vqa_token_ser.py --vqa_algorithm=LayoutXLM --ser_dict_path=train_data/XFUND/class_list_xfun.txt --output=output --ocr_order_method=tb-yx +inference:ppstructure/kie/predict_kie_token_ser.py --kie_algorithm=LayoutXLM --ser_dict_path=train_data/XFUND/class_list_xfun.txt --output=output --ocr_order_method=tb-yx --use_gpu:True|False --enable_mkldnn:False --cpu_threads:6 @@ -45,7 +45,7 @@ inference:ppstructure/vqa/predict_vqa_token_ser.py --vqa_algorithm=LayoutXLM - --use_tensorrt:False --precision:fp32 --ser_model_dir: ---image_dir:./ppstructure/docs/vqa/input/zh_val_42.jpg +--image_dir:./ppstructure/docs/kie/input/zh_val_42.jpg null:null --benchmark:False null:null diff --git a/test_tipc/prepare.sh b/test_tipc/prepare.sh index 259a1159cb326760384645b2aff313b75da6084a..31bbbe30befe727e9e2a132e6ab4f8515035af79 100644 --- a/test_tipc/prepare.sh +++ b/test_tipc/prepare.sh @@ -107,7 +107,7 @@ if [ ${MODE} = "benchmark_train" ];then cd ../ fi if [ ${model_name} == "layoutxlm_ser" ] || [ ${model_name} == "vi_layoutxlm_ser" ]; then - pip install -r ppstructure/vqa/requirements.txt + pip install -r ppstructure/kie/requirements.txt pip install paddlenlp\>=2.3.5 --force-reinstall -i https://mirrors.aliyun.com/pypi/simple/ wget -nc -P ./train_data/ https://paddleocr.bj.bcebos.com/ppstructure/dataset/XFUND.tar --no-check-certificate cd ./train_data/ && tar xf XFUND.tar @@ -221,7 +221,7 @@ if [ ${MODE} = "lite_train_lite_infer" ];then cd ./pretrain_models/ && tar xf rec_r32_gaspin_bilstm_att_train.tar && cd ../ fi if [ ${model_name} == "layoutxlm_ser" ] || [ ${model_name} == "vi_layoutxlm_ser" ]; then - pip install -r ppstructure/vqa/requirements.txt + pip install -r ppstructure/kie/requirements.txt pip install paddlenlp\>=2.3.5 --force-reinstall -i https://mirrors.aliyun.com/pypi/simple/ wget -nc -P ./train_data/ https://paddleocr.bj.bcebos.com/ppstructure/dataset/XFUND.tar --no-check-certificate cd ./train_data/ && tar xf XFUND.tar diff --git a/tools/infer/predict_cls.py b/tools/infer/predict_cls.py index ed2f47c04de6f4ab6a874db052e953a1ce4e0b76..d2b7108ca35666acfa53e785686fd7b9dfc21ed5 100755 --- a/tools/infer/predict_cls.py +++ b/tools/infer/predict_cls.py @@ -30,7 +30,7 @@ import traceback import tools.infer.utility as utility from ppocr.postprocess import build_post_process from ppocr.utils.logging import get_logger -from ppocr.utils.utility import get_image_file_list, check_and_read_gif +from ppocr.utils.utility import get_image_file_list, check_and_read logger = get_logger() @@ -128,7 +128,7 @@ def main(args): valid_image_file_list = [] 
img_list = [] for image_file in image_file_list: - img, flag = check_and_read_gif(image_file) + img, flag, _ = check_and_read(image_file) if not flag: img = cv2.imread(image_file) if img is None: diff --git a/tools/infer/predict_det.py b/tools/infer/predict_det.py index 394a48948b1f284bd405532769b76eeb298668bd..9f5c480d3c55367a02eacb48bed6ae3d38282f05 100755 --- a/tools/infer/predict_det.py +++ b/tools/infer/predict_det.py @@ -27,7 +27,7 @@ import sys import tools.infer.utility as utility from ppocr.utils.logging import get_logger -from ppocr.utils.utility import get_image_file_list, check_and_read_gif +from ppocr.utils.utility import get_image_file_list, check_and_read from ppocr.data import create_operators, transform from ppocr.postprocess import build_post_process import json @@ -289,7 +289,7 @@ if __name__ == "__main__": os.makedirs(draw_img_save) save_results = [] for image_file in image_file_list: - img, flag = check_and_read_gif(image_file) + img, flag, _ = check_and_read(image_file) if not flag: img = cv2.imread(image_file) if img is None: diff --git a/tools/infer/predict_e2e.py b/tools/infer/predict_e2e.py index fb2859f0c7e0d3aa0b87dbe11123dfc88f4b4e8e..de315d701c7172ded4d30e48e79abee367f42239 100755 --- a/tools/infer/predict_e2e.py +++ b/tools/infer/predict_e2e.py @@ -27,7 +27,7 @@ import sys import tools.infer.utility as utility from ppocr.utils.logging import get_logger -from ppocr.utils.utility import get_image_file_list, check_and_read_gif +from ppocr.utils.utility import get_image_file_list, check_and_read from ppocr.data import create_operators, transform from ppocr.postprocess import build_post_process @@ -148,7 +148,7 @@ if __name__ == "__main__": if not os.path.exists(draw_img_save): os.makedirs(draw_img_save) for image_file in image_file_list: - img, flag = check_and_read_gif(image_file) + img, flag, _ = check_and_read(image_file) if not flag: img = cv2.imread(image_file) if img is None: diff --git a/tools/infer/predict_rec.py b/tools/infer/predict_rec.py index 53dab6f26d8b84a224360f2fa6fe5f411eea751f..7c46e17bacdf1fff464322d284e4549bd8edacf2 100755 --- a/tools/infer/predict_rec.py +++ b/tools/infer/predict_rec.py @@ -30,7 +30,7 @@ import paddle import tools.infer.utility as utility from ppocr.postprocess import build_post_process from ppocr.utils.logging import get_logger -from ppocr.utils.utility import get_image_file_list, check_and_read_gif +from ppocr.utils.utility import get_image_file_list, check_and_read logger = get_logger() @@ -68,7 +68,7 @@ class TextRecognizer(object): 'name': 'SARLabelDecode', "character_dict_path": args.rec_char_dict_path, "use_space_char": args.use_space_char - } + } elif self.rec_algorithm == "VisionLAN": postprocess_params = { 'name': 'VLLabelDecode', @@ -399,7 +399,9 @@ class TextRecognizer(object): norm_img_batch.append(norm_img) elif self.rec_algorithm == "RobustScanner": norm_img, _, _, valid_ratio = self.resize_norm_img_sar( - img_list[indices[ino]], self.rec_image_shape, width_downsample_ratio=0.25) + img_list[indices[ino]], + self.rec_image_shape, + width_downsample_ratio=0.25) norm_img = norm_img[np.newaxis, :] valid_ratio = np.expand_dims(valid_ratio, axis=0) valid_ratios = [] @@ -484,12 +486,8 @@ class TextRecognizer(object): elif self.rec_algorithm == "RobustScanner": valid_ratios = np.concatenate(valid_ratios) word_positions_list = np.concatenate(word_positions_list) - inputs = [ - norm_img_batch, - valid_ratios, - word_positions_list - ] - + inputs = [norm_img_batch, valid_ratios, word_positions_list] + if self.use_onnx: 
input_dict = {} input_dict[self.input_tensor.name] = norm_img_batch @@ -555,7 +553,7 @@ def main(args): res = text_recognizer([img] * int(args.rec_batch_num)) for image_file in image_file_list: - img, flag = check_and_read_gif(image_file) + img, flag, _ = check_and_read(image_file) if not flag: img = cv2.imread(image_file) if img is None: diff --git a/tools/infer/predict_sr.py b/tools/infer/predict_sr.py index b10d90bf1d6ce3de6d2947e9cc1f73443736518d..ca99f6819f4b207ecc0f0d1383fe1d26d07fbf50 100755 --- a/tools/infer/predict_sr.py +++ b/tools/infer/predict_sr.py @@ -30,7 +30,7 @@ import paddle import tools.infer.utility as utility from ppocr.postprocess import build_post_process from ppocr.utils.logging import get_logger -from ppocr.utils.utility import get_image_file_list, check_and_read_gif +from ppocr.utils.utility import get_image_file_list, check_and_read logger = get_logger() @@ -120,7 +120,7 @@ def main(args): res = text_recognizer([img] * int(args.sr_batch_num)) for image_file in image_file_list: - img, flag = check_and_read_gif(image_file) + img, flag, _ = check_and_read(image_file) if not flag: img = Image.open(image_file).convert("RGB") if img is None: diff --git a/tools/infer/predict_system.py b/tools/infer/predict_system.py index 3f322ecc6f44da58491f2fef06cda71f00da4f46..e0f2c41fa2aba23491efee920afbd76db1ec84e0 100755 --- a/tools/infer/predict_system.py +++ b/tools/infer/predict_system.py @@ -32,7 +32,7 @@ import tools.infer.utility as utility import tools.infer.predict_rec as predict_rec import tools.infer.predict_det as predict_det import tools.infer.predict_cls as predict_cls -from ppocr.utils.utility import get_image_file_list, check_and_read_gif +from ppocr.utils.utility import get_image_file_list, check_and_read from ppocr.utils.logging import get_logger from tools.infer.utility import draw_ocr_box_txt, get_rotate_crop_image logger = get_logger() @@ -159,7 +159,7 @@ def main(args): count = 0 for idx, image_file in enumerate(image_file_list): - img, flag = check_and_read_gif(image_file) + img, flag, _ = check_and_read(image_file) if not flag: img = cv2.imread(image_file) if img is None: diff --git a/tools/infer/utility.py b/tools/infer/utility.py index 3d9bdee0ee13af2017deab05e6844a0dd2fb34d3..a547bbdba4332bfbd3a7f18e5187f356c2df0964 100644 --- a/tools/infer/utility.py +++ b/tools/infer/utility.py @@ -181,14 +181,21 @@ def create_predictor(args, mode, logger): return sess, sess.get_inputs()[0], None, None else: - model_file_path = model_dir + "/inference.pdmodel" - params_file_path = model_dir + "/inference.pdiparams" + file_names = ['model', 'inference'] + for file_name in file_names: + model_file_path = '{}/{}.pdmodel'.format(model_dir, file_name) + params_file_path = '{}/{}.pdiparams'.format(model_dir, file_name) + if os.path.exists(model_file_path) and os.path.exists( + params_file_path): + break if not os.path.exists(model_file_path): - raise ValueError("not find model file path {}".format( - model_file_path)) + raise ValueError( + "not find model.pdmodel or inference.pdmodel in {}".format( + model_dir)) if not os.path.exists(params_file_path): - raise ValueError("not find params file path {}".format( - params_file_path)) + raise ValueError( + "not find model.pdiparams or inference.pdiparams in {}".format( + model_dir)) config = inference.Config(model_file_path, params_file_path) diff --git a/tools/infer_vqa_token_ser.py b/tools/infer_kie_token_ser.py similarity index 97% rename from tools/infer_vqa_token_ser.py rename to tools/infer_kie_token_ser.py index 
a15d83b17cc738a5c3349d461c3bce119c2355e7..2fc5749b9c10b9c89bc16e561fbe9c5ce58eb13c 100755 --- a/tools/infer_vqa_token_ser.py +++ b/tools/infer_kie_token_ser.py @@ -75,6 +75,8 @@ class SerPredictor(object): self.ocr_engine = PaddleOCR( use_angle_cls=False, show_log=False, + rec_model_dir=global_config.get("kie_rec_model_dir", None), + det_model_dir=global_config.get("kie_det_model_dir", None), use_gpu=global_config['use_gpu']) # create data ops diff --git a/tools/infer_vqa_token_ser_re.py b/tools/infer_kie_token_ser_re.py similarity index 98% rename from tools/infer_vqa_token_ser_re.py rename to tools/infer_kie_token_ser_re.py index 51378bdaeb03d4ec6d7684de80625c5029963745..40784e39be4784621c56ae84c1819720231c032f 100755 --- a/tools/infer_vqa_token_ser_re.py +++ b/tools/infer_kie_token_ser_re.py @@ -205,9 +205,7 @@ if __name__ == '__main__': result = ser_re_engine(data) result = result[0] fout.write(img_path + "\t" + json.dumps( - { - "ser_result": result, - }, ensure_ascii=False) + "\n") + result, ensure_ascii=False) + "\n") img_res = draw_re_results(img_path, result) cv2.imwrite(save_img_path, img_res) diff --git a/tools/program.py b/tools/program.py index 5a4d3ea4d2ec6832e6735d15096d46fbb62f86dd..7af1fe7354106f06b4384abb56de7675e4dbe053 100755 --- a/tools/program.py +++ b/tools/program.py @@ -173,7 +173,7 @@ def to_float32(preds): elif isinstance(preds[k], paddle.Tensor): preds[k] = preds[k].astype(paddle.float32) elif isinstance(preds, paddle.Tensor): - preds = preds.astype(paddle.float32) + preds = preds.astype(paddle.float32) return preds @@ -278,11 +278,13 @@ def train(config, model_average = True # use amp if scaler: - custom_black_list = config['Global'].get('amp_custom_black_list',[]) - with paddle.amp.auto_cast(level=amp_level, custom_black_list=custom_black_list): + custom_black_list = config['Global'].get( + 'amp_custom_black_list', []) + with paddle.amp.auto_cast( + level=amp_level, custom_black_list=custom_black_list): if model_type == 'table' or extra_input: preds = model(images, data=batch[1:]) - elif model_type in ["kie", 'vqa']: + elif model_type in ["kie"]: preds = model(batch) else: preds = model(images) @@ -295,7 +297,7 @@ def train(config, else: if model_type == 'table' or extra_input: preds = model(images, data=batch[1:]) - elif model_type in ["kie", 'vqa', 'sr']: + elif model_type in ["kie", 'sr']: preds = model(batch) else: preds = model(images) @@ -499,7 +501,7 @@ def eval(model, with paddle.amp.auto_cast(level='O2'): if model_type == 'table' or extra_input: preds = model(images, data=batch[1:]) - elif model_type in ["kie", 'vqa']: + elif model_type in ["kie"]: preds = model(batch) elif model_type in ['sr']: preds = model(batch) @@ -511,7 +513,7 @@ def eval(model, else: if model_type == 'table' or extra_input: preds = model(images, data=batch[1:]) - elif model_type in ["kie", 'vqa']: + elif model_type in ["kie"]: preds = model(batch) elif model_type in ['sr']: preds = model(batch) @@ -529,11 +531,12 @@ def eval(model, # Obtain usable results from post-processing methods total_time += time.time() - start # Evaluate the results of the current batch - if model_type in ['kie']: - eval_class(preds, batch_numpy) - elif model_type in ['table', 'vqa']: - post_result = post_process_class(preds, batch_numpy) - eval_class(post_result, batch_numpy) + if model_type in ['table', 'kie']: + if post_process_class is None: + eval_class(preds, batch_numpy) + else: + post_result = post_process_class(preds, batch_numpy) + eval_class(post_result, batch_numpy) elif model_type in ['sr']: 
eval_class(preds, batch_numpy) else: