{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. PP-StructureV2模型简介\n", "\n", "PP-StructureV2在PP-StructureV1的基础上进一步改进,主要有以下3个方面升级:\n", "\n", " * **系统功能升级** :新增图像矫正和版面恢复模块,图像转word/pdf、关键信息抽取能力全覆盖!\n", " * **系统性能优化** :\n", "\t * 版面分析:发布轻量级版面分析模型,速度提升**11倍**,平均CPU耗时仅需**41ms**!\n", "\t * 表格识别:设计3大优化策略,预测耗时不变情况下,模型精度提升**6%**。\n", "\t * 关键信息抽取:设计视觉无关模型结构,语义实体识别精度提升**2.8%**,关系抽取精度提升**9.1%**。\n", " * **中文场景适配** :完成对版面分析与表格识别的中文场景适配,开源**开箱即用**的中文场景版面结构化模型!\n", "\n", "PP-StructureV2系统流程图如下所示,文档图像首先经过图像矫正模块,判断整图方向并完成转正,随后可以完成版面信息分析与关键信息抽取2类任务。版面分析任务中,图像首先经过版面分析模型,将图像划分为文本、表格、图像等不同区域,随后对这些区域分别进行识别,如,将表格区域送入表格识别模块进行结构化识别,将文本区域送入OCR引擎进行文字识别,最后使用版面恢复模块将其恢复为与原始图像布局一致的word或者pdf格式的文件;关键信息抽取任务中,首先使用OCR引擎提取文本内容,然后由语义实体识别模块获取图像中的语义实体,最后经关系抽取模块获取语义实体之间的对应关系,从而提取需要的关键信息。\n", "\n", "
\n", "\n", "
\n", "\n", "\n", "从算法改进思路来看,对系统中的3个关键子模块,共进行了8个方面的改进。\n", "\n", "* 版面分析\n", "\t* PP-PicoDet: 轻量级版面分析模型\n", "\t* FGD: 兼顾全局与局部特征的模型蒸馏算法\n", "\n", "* 表格识别\n", "\t* PP-LCNet: CPU友好型轻量级骨干网络\n", "\t* CSP-PAN: 轻量级高低层特征融合模块\n", "\t* SLAHead: 结构与位置信息对齐的特征解码模块\n", "\n", "* 关键信息抽取\n", "\t* VI-LayoutXLM: 视觉特征无关的多模态预训练模型结构\n", "\t* TB-YX: 考虑阅读顺序的文本行排序逻辑\n", "\t* UDML: 联合互学习知识蒸馏策略\n", "\n", "最终,与PP-StructureV1相比:\n", "\n", "- 版面分析模型参数量减少95.6%,推理速度提升11倍,精度提升0.4%;\n", "- 表格识别预测耗时不变,模型精度提升6%,端到端TEDS提升2%;\n", "- 关键信息抽取模型速度提升2.8倍,语义实体识别模型精度提升2.8%;关系抽取模型精度提升9.1%。\n", "\n", "\n", "更详细的优化细节可参考技术报告:https://arxiv.org/abs/2210.05391v2 。\n", "\n", "更多关于PaddleOCR的内容,可以点击 https://github.com/PaddlePaddle/PaddleOCR 进行了解。\n", "\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. 模型效果\n", "\n", "PP-StructureV2的效果如下:\n", "\n", "- 版面分析\n", " \n", "
\n", "\n", "
\n", "\n", "- 表格识别\n", " \n", "
\n", "\n", "
\n", "\n", "- 版面恢复\n", " \n", "
\n", "\n", "
\n", "\n", "\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. 模型如何使用\n", "\n", "### 3.1 模型推理:\n", "* 安装PaddleOCR whl包" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false }, "scrolled": true, "tags": [] }, "outputs": [], "source": [ "! pip install paddleocr>=2.6.1.0 -i http://mirrors.cloud.tencent.com/pypi/simple" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* 快速体验\n", " \n", "图像方向分类+版面分析+表格识别" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true, "tags": [] }, "outputs": [], "source": [ "! wget https://raw.githubusercontent.com/PaddlePaddle/PaddleOCR/release/2.6/ppstructure/docs/table/1.png\n", "! paddleocr --image_dir=1.png --type=structure --image_orientation=true" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "版面分析+表格识别" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "! wget https://raw.githubusercontent.com/PaddlePaddle/PaddleOCR/release/2.6/ppstructure/docs/table/1.png\n", "! paddleocr --image_dir=1.png --type=structure" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "版面分析" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "! wget https://raw.githubusercontent.com/PaddlePaddle/PaddleOCR/release/2.6/ppstructure/docs/table/1.png\n", "! paddleocr --image_dir=1.png --type=structure --table=false --ocr=false" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "表格识别" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "! wget https://raw.githubusercontent.com/PaddlePaddle/PaddleOCR/release/2.6/ppstructure/docs/table/table.jpg\n", "! paddleocr --image_dir=table.jpg --type=structure --layout=false" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 3.2 模型训练\n", "PP-StructureV2系统由版面分析模型、文本检测模型、文本识别模型和表格识别模型构成,四个模型训练教程可参考如下文档:\n", "1. 版面分析模型: [版面分析模型训练教程](https://github.com/PaddlePaddle/PaddleOCR/blob/release%2F2.6/ppstructure/layout/README_ch.md)\n", "2. 文本检测模型: [文本检测训练教程](https://github.com/PaddlePaddle/PaddleOCR/blob/release%2F2.6/doc/doc_ch/detection.md)\n", "3. 文本识别模型: [文本识别训练教程](https://github.com/PaddlePaddle/PaddleOCR/blob/release%2F2.6/doc/doc_ch/recognition.md)\n", "3. 表格识别模型: [表格识别训练教程](https://github.com/PaddlePaddle/PaddleOCR/blob/release%2F2.6/doc/doc_ch/table_recognition.md)\n", "\n", "模型训练完成后,可以通过指定模型路径的方式串联使用\n", "命令参考如下:\n", "```python\n", "paddleocr --image_dir 11.jpg --layout_model_dir=/path/to/layout_inference_model --det_model_dir=/path/to/det_inference_model --rec_model_dir=/path/to/rec_inference_model --table_model_dir=/path/to/table_inference_model\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4. 原理\n", "\n", "各模块优化思路具体如下\n", "\n", "1. 整图方向矫正\n", " \n", "由于训练集一般以正方向图像为主,旋转过的文档图像直接输入模型会增加识别难度,影响识别效果。PP-StructureV2引入了整图方向矫正模块来判断含文字图像的方向,并将其进行方向调整。\n", "\n", "我们直接调用PaddleClas中提供的文字图像方向分类模型-[PULC_text_image_orientation](https://github.com/PaddlePaddle/PaddleClas/blob/develop/docs/zh_CN/PULC/PULC_text_image_orientation.md),该模型部分数据集图像如下所示。不同于文本行方向分类器,文字图像方向分类模型针对整图进行方向判别。文字图像方向分类模型在验证集上精度高达99%,单张图像CPU预测耗时仅为`2.16ms`。\n", "\n", "
\n", " \n", "
\n", "\n", "2. 版面分析\n", "\n", "版面分析指的是对图片形式的文档进行区域划分,定位其中的关键区域,如文字、标题、表格、图片等,PP-StructureV1使用了PaddleDetection中开源的高效检测算法PP-YOLOv2完成版面分析的任务。\n", "\n", "**(1)轻量级版面分析模型PP-PicoDet**\n", "\n", "`PP-PicoDet`是PaddleDetection中提出的轻量级目标检测模型,通过使用PP-LCNet骨干网络、CSP-PAN特征融合模块、SimOTA标签分配方法等优化策略,最终在CPU与移动端具有卓越的性能。我们将PP-StructureV1中采用的PP-YOLOv2模型替换为`PP-PicoDet`,同时针对版面分析场景优化预测尺度,从针对目标检测设计的`640*640`调整为更适配文档图像的`800*608`,在`1.0x`配置下,模型精度与PP-YOLOv2相当,CPU平均预测速度可提升11倍。\n", "\n", "**(2)FGD知识蒸馏**\n", "\n", "FGD(Focal and Global Knowledge Distillation for Detectors),是一种兼顾局部全局特征信息的模型蒸馏方法,分为Focal蒸馏和Global蒸馏2个部分。Focal蒸馏分离图像的前景和背景,让学生模型分别关注教师模型的前景和背景部分特征的关键像素;Global蒸馏部分重建不同像素之间的关系并将其从教师转移到学生,以补偿Focal蒸馏中丢失的全局信息。我们基于FGD蒸馏策略,使用教师模型PP-PicoDet-LCNet2.5x(mAP=94.2%)蒸馏学生模型PP-PicoDet-LCNet1.0x(mAP=93.5%),可将学生模型精度提升0.5%,和教师模型仅差0.2%,而预测速度比教师模型快1倍。\n", "\n", "\n", "3. 表格识别\n", "\n", "PP-StructureV2中,我们对模型结构和损失函数等5个方面进行升级,提出了 SLANet (Structure Location Alignment Network) ,优化点如下:\n", "\n", "**(1) CPU友好型轻量级骨干网络PP-LCNet**\n", "\n", "PP-LCNet是结合Intel-CPU端侧推理特性而设计的轻量高性能骨干网络,该方案在图像分类任务上取得了比ShuffleNetV2、MobileNetV3、GhostNet等轻量级模型更优的“精度-速度”均衡。PP-StructureV2中,我们采用PP-LCNet作为骨干网络,表格识别模型精度从71.73%提升至72.98%;同时加载通过SSLD知识蒸馏方案训练得到的图像分类模型权重作为表格识别的预训练模型,最终精度进一步提升2.95%至74.71%。\n", "\n", "**(2)轻量级高低层特征融合模块CSP-PAN**\n", "\n", "对骨干网络提取的特征进行融合,可以有效解决尺度变化较大等复杂场景中的模型预测问题。早期,FPN模块被提出并用于特征融合,但是它的特征融合过程仅包含单向(高->低),融合不够充分。CSP-PAN基于PAN进行改进,在保证特征融合更为充分的同时,使用CSP block、深度可分离卷积等策略减小了计算量。在表格识别场景中,我们进一步将CSP-PAN的通道数从128降低至96以降低模型大小。最终表格识别模型精度提升0.97%至75.68%,预测速度提升10%。\n", "\n", "**(3)结构与位置信息对齐的特征解码模块SLAHead**\n", "\n", "TableRec-RARE的TableAttentionHead如下图a所示,TableAttentionHead在执行完全部step的计算后拿到最终隐藏层状态表征(hiddens),随后hiddens经由SDM(Structure Decode Module)和CLDM(Cell Location Decode Module)模块生成全部的表格结构token和单元格坐标。但是这种设计忽略了单元格token和坐标之间一一对应的关系。\n", "\n", "PP-StructureV2中,我们设计SLAHead模块,对单元格token和坐标之间做了对齐操作,如下图b所示。在SLAHead中,每一个step的隐藏层状态表征会分别送入SDM和CLDM来得到当前step的token和坐标,每个step的token和坐标输出分别进行concat得到表格的html表达和全部单元格的坐标。此外,考虑到表格识别模型的单元格准确率依赖于表格结构的识别准确,我们将损失函数中表格结构分支与单元格定位分支的权重比从1:1提升到8:1,并使用收敛更稳定的Smoothl1 Loss替换定位分支中的MSE Loss。最终模型精度从75.68%提高至77.7%。\n", "\n", "\n", "
\n", " \n", "
\n", "\n", "\n", "**(4)其他**\n", "\n", "TableRec-RARE算法中,我们使用``和``两个单独的token来表示一个非跨行列单元格,这种表示方式限制了网络对于单元格数量较多表格的处理能力。\n", "\n", "PP-StructureV2中,我们参考TableMaster中的token处理方法,将``和``合并为一个token-``。合并token后,验证集中token长度大于500的图片也参与模型评估,最终模型精度降低为76.31%,但是端到端TEDS提升1.04%。\n", "\n", "\n", "4. 版面恢复\n", "\n", "版面恢复指的是文档图像经过OCR识别、版面分析、表格识别等方法处理后的内容可以与原始文档保持相同的排版方式,并输出到word等文档中。PP-StructureV2中,我们版面恢复系统,包含版面分析、表格识别、OCR文本检测与识别等子模块。\n", "下图展示了版面恢复的结果:\n", "\n", "
\n", " \n", "
\n", "\n", "5. 关键信息抽取\n", "\n", "关键信息抽取指的是针对文档图像的文字内容,提取出用户关注的关键信息,如身份证中的姓名、住址等字段。PP-Structure中支持了基于多模态LayoutLM系列模型的语义实体识别 (Semantic Entity Recognition, SER) 以及关系抽取 (Relation Extraction, RE) 任务。PP-StructureV2中,我们对模型结构以及下游任务训练方法进行升级,提出了VI-LayoutXLM(Visual-feature Independent LayoutXLM),具体流程图如下所示。\n", "\n", "\n", "
\n", " \n", "
\n", "\n", "\n", "具体优化策略如下:\n", "\n", "**(1) VI-LayoutXLM(Visual-feature Independent LayoutXLM)**\n", "\n", "LayoutLMv2以及LayoutXLM中引入视觉骨干网络,用于提取视觉特征,并与后续的text embedding进行联合,作为多模态的输入embedding。但是该模块为基于`ResNet_x101_64x4d`的特征提取网络,特征抽取阶段耗时严重,因此我们将其去除,同时仍然保留文本、位置以及布局等信息,最终发现针对LayoutXLM进行改进,下游SER任务精度无损,针对LayoutLMv2进行改进,下游SER任务精度仅降低`2.1%`,而模型大小减小了约`340M`。同时,基于XFUND数据集,VI-LayoutXLM在RE任务上的精度也进一步提升了`1.06%`。\n", "\n", "**(2) TB-YX排序方法(Threshold-Based YX sorting algorithm)**\n", "\n", "文本阅读顺序对于信息抽取与文本理解等任务至关重要,传统多模态模型中,没有考虑不同OCR工具可能产生的不正确阅读顺序,而模型输入中包含位置编码,阅读顺序会直接影响预测结果,在预处理中,我们对文本行按照从上到下,从左到右(YX)的顺序进行排序,为防止文本行位置轻微干扰带来的排序结果不稳定问题,在排序的过程中,引入位置偏移阈值Th,对于Y方向距离小于Th的2个文本内容,使用x方向的位置从左到右进行排序。\n", "\n", "不同排序方法的结果对比如下所示,可以看出引入偏离阈值之后,排序结果更加符合人类的阅读顺序。\n", "\n", "
\n", " \n", "
\n", "\n", "\n", "使用该策略,最终XFUND数据集上,SER任务F1指标提升`2.06%`,RE任务F1指标提升`7.04%`。\n", "\n", "**(3) 互学习蒸馏策略**\n", "\n", "UDML(Unified-Deep Mutual Learning)联合互学习是PP-OCRv2与PP-OCRv3中采用的对于文本识别非常有效的提升模型效果的策略。在训练时,引入2个完全相同的模型进行互学习,计算2个模型之间的互蒸馏损失函数(DML loss),同时对transformer中间层的输出结果计算距离损失函数(L2 loss)。使用该策略,最终XFUND数据集上,SER任务F1指标提升`0.6%`,RE任务F1指标提升`5.01%`。\n", "\n", "最终优化后模型基于SER任务的可视化结果如下所示。\n", "\n", "
\n", " \n", "
\n", "\n", "
\n", " \n", "
\n", "\n", "\n", "RE任务的可视化结果如下所示。\n", "\n", "\n", "
\n", " \n", "
\n", "\n", "
\n", " \n", "
\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5. 注意事项\n", "\n", "1. PP-StructureV2系列模型训练过程中均公开数据集,如在实际场景中表现不满意,可标注少量数据进行finetune。\n", "2. 在线体验目前仅开放表格识别的体验,如需版面分析和版面恢复,请参考`3.1 模型推理`。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 6. 相关论文以及引用信息\n", "```\n", "@article{li2022pp,\n", " title={PP-StructureV2: A Stronger Document Analysis System},\n", " author={Li, Chenxia and Guo, Ruoyu and Zhou, Jun and An, Mengtao and Du, Yuning and Zhu, Lingfeng and Liu, Yi and Hu, Xiaoguang and Yu, Dianhai},\n", " journal={arXiv preprint arXiv:2210.05391},\n", " year={2022}\n", "}\n", "```\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3.8.13 ('py38')", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.13" }, "vscode": { "interpreter": { "hash": "58fd1890da6594cebec461cf98c6cb9764024814357f166387d10d267624ecd6" } } }, "nbformat": 4, "nbformat_minor": 4 }