{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. PP-StructureV2 Introduction\n", "\n", "PP-StructureV2 is further improved on the basis of PP-StructureV1, mainly in the following three aspects:\n", "\n", " * **System function upgrade**: Added image correction and layout restoration modules, image conversion to word/pdf, and key information extraction capabilities!\n", " * **System performance optimization** :\n", "\t * Layout analysis: released a lightweight layout analysis model, the speed is increased by **11 times**, and the average CPU time is only **41ms**!\n", "\t * Table recognition: three optimization strategies are designed, and the model accuracy is improved by **6%** when the prediction time is constant.\n", "\t * Key information extraction: designing a visually irrelevant model structure, the accuracy of semantic entity recognition is improved by **2.8%**, and the accuracy of relation extraction is improved by **9.1%**.\n", " * **Chinese scene adaptation**: Complete the Chinese scene adaptation for layout analysis and table recognition, open source **out-of-the-box** Chinese scene layout structure model!\n", "\n", "The PP-StructureV2 framework is shown in the figure below. Firstly, the input document image direction is corrected by the Image Direction Correction module. For the Layout Information Extraction subsystem, as shown in the upper branch, the corrected image is firstly divided into different areas such as text, table and image through the layout analysis module, and then these areas are recognized respectively. For example, the table area is sent to the table recognition module for structural recognition, and the text area is sent to the OCR engine for text recognition. Finally, the layout recovery module is used to restore the image to an editable Word file consistent with the original image layout. For the Key Information Extraction subsystem, as shown in the lower branch, OCR engine is used to extract the text content, then the Semantic Entity Recognition module and Relation Extraction module are used to obtain the entities and their relationship in the image, respectively, so as to extract the required key information.\n", "\n", "
\n", "\n", "
\n", "\n", "\n", "We made 8 improvements to 3 key sub-modules in the system.\n", "\n", "* Layout analysis\n", "\t* PP-PicoDet: A better real-time object detector on mobile devices\n", "\t* FGD: Focal and Global Knowledge Distillation\n", "\n", "* Table Recognition\n", "\t* PP-LCNet: CPU-friendly Lightweight Backbone\n", "\t* CSP-PAN: Lightweight Multi-level Feature Fusion Module\n", "\t* SLAHead: Structure and Location Alignment Module\n", "\n", "* Key Information Extraction\n", "\t* VI-LayoutXLM: Visual-feature Independent LayoutXLM\n", "\t* TB-YX: Threshold-Based YX sorting algorithm\n", "\t* UDML: Unified-Deep Mutual Learning\n", "\n", "Finally, compared to PP-StructureV1:\n", "\n", "- The number of parameters of the layout analysis model is reduced by 95.6%, the inference speed is increased by 11 times, and the accuracy is increased by 0.4%;\n", "- The table recognition model improves the model accuracy by 6% and the end-to-end TEDS by 2% without changing the prediction time.\n", "- The speed of the key information extraction model is increased by 2.8 times, the accuracy of the semantic entity recognition model is increased by 2.8%, and the accuracy of the relationship extraction model is increased by 9.1%.\n", "\n", "\n", "For more details, please refer to the technical report: https://arxiv.org/abs/2210.05391v2 .\n", "\n", "For more information about PaddleOCR, you can click https://github.com/PaddlePaddle/PaddleOCR to learn more.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. Model Effects\n", "\n", "The results of PP-StructureV2 are as follows:\n", "\n", "- Layout analysis\n", " \n", "
\n", "\n", "
\n", "\n", "- Table recognition\n", " \n", "
\n", "\n", "
\n", "\n", "- Layout recovery\n", " \n", "
\n", "\n", "
\n", "\n", "\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. How to Use the Model\n", "\n", "### 3.1 Inference\n", "* Install PaddleOCR whl package" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false }, "scrolled": true, "tags": [] }, "outputs": [], "source": [ "! pip install \"paddleocr>=2.6.1.0\" --user" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Quick experience\n", " \n", "image orientation + layout analysis + table recognition" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true, "tags": [] }, "outputs": [], "source": [ "! wget https://raw.githubusercontent.com/PaddlePaddle/PaddleOCR/release/2.6/ppstructure/docs/table/1.png\n", "! pip install paddleclas\n", "! paddleocr --image_dir=1.png --type=structure --image_orientation=true" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "layout analysis + table recognition" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "! wget https://raw.githubusercontent.com/PaddlePaddle/PaddleOCR/release/2.6/ppstructure/docs/table/1.png\n", "! paddleocr --image_dir=1.png --type=structure" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "layout analysis" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "! wget https://raw.githubusercontent.com/PaddlePaddle/PaddleOCR/release/2.6/ppstructure/docs/table/1.png\n", "! paddleocr --image_dir=1.png --type=structure --table=false --ocr=false" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "table recognition" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "! wget https://raw.githubusercontent.com/PaddlePaddle/PaddleOCR/release/2.6/ppstructure/docs/table/table.jpg\n", "! paddleocr --image_dir=table.jpg --type=structure --layout=false" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 3.2 Train the model\n", "The PP-StructureV2 system consists of a layout analysis model, a text detection model, a text recognition model and a table recognition model. For the four model training tutorials, please refer to the following documents:\n", "1. Layout analysis model: [Layout analysis model training tutorial](https://github.com/PaddlePaddle/PaddleOCR/blob/release%2F2.6/ppstructure/layout/README_ch.md)\n", "2. text detection model: [text detection training tutorial](https://github.com/PaddlePaddle/PaddleOCR/blob/release%2F2.6/doc/doc_ch/detection.md)\n", "3. text recognition model: [text recognition training tutorial](https://github.com/PaddlePaddle/PaddleOCR/blob/release%2F2.6/doc/doc_ch/recognition.md)\n", "3. table recognition model: [table recognition training tutorial](https://github.com/PaddlePaddle/PaddleOCR/blob/release%2F2.6/doc/doc_ch/table_recognition.md)\n", "\n", "After the model training is completed, it can be used in series by specifying the model path. The command reference is as follows:\n", "```python\n", "paddleocr --image_dir 11.jpg --layout_model_dir=/path/to/layout_inference_model --det_model_dir=/path/to/det_inference_model --rec_model_dir=/path/to/rec_inference_model --table_model_dir=/path/to/table_inference_model\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4. Model Principles\n", "\n", "The enhancement strategies of each module are as follows\n", "\n", "1. 
Image Direction Correction Module\n", " \n", "Since training sets are generally dominated by 0-degree images, information extraction from rotated images is often compromised. In PP-StructureV2, the direction of the input image is first corrected by the PULC text image direction model provided by PaddleClas. Some demo images from the dataset are shown below. Unlike the text line direction classifier, the text image direction classifier performs direction classification on the entire image. The text image direction classification model achieves 99% accuracy on the validation set at 463 FPS on a CPU device.\n", "\n", "
\n", " \n", "
\n", "\n", "1. Layout Analysis\n", "\n", "Layout Analysis refers to dividing document images into predefined areas such as text, title, table, and figure. In PP-Structure, we adopted the object detection algorithm PP-YOLOv2 as the layout detector.\n", "\n", "**(1)PP-PicoDet: A better real-time object detector on mobile devices**\n", "\n", "PaddleDetection proposed a new family of real-time object detectors, named PP-PicoDet, which achieves superior performance on mobile devices. PP-PicoDet adopts the CSP structure to constructure CSP-PAN as the neck, SimOTA as label assignment strategy, PP-LCNet as the backbone, and an improved detection One-shot Neural Architecture Search(NAS) is proposed to find the optimal architecture automatically for object detection. We replace PP-YOLOv2 adopted by PP-Structure with PP-PicoDet, and adjust the input scale from 640*640 to 800*608, which is more suitable for document images. With 1.0x configuration, the accuracy is comparable to PP-YOLOv2, and the CPU inference speed is 11 times faster.\n", "\n", "**(2) FGD: Focal and Global Knowledge Distillation**\n", "\n", "FGD, a knowledge distillation algorithm for object detection, takes into account local and global feature maps, combining focal distillation and global distillation. Focal distillation separates the foreground and background of the image, forcing the student to focus on the teacher’s critical pixels and channels. Global distillation rebuilds the relation between different pixels and transfers it from teachers to students, compensating for missing global information in focal distillation. Based on the FGD distillation strategy, the student model (LCNet1.0x based PP-PicoDet) gets 0.5% mAP improvement with the knowledge from the teacher model (LCNet2.5x based PP-PicoDet). Finally the student model is only 0.2% lower than the teacher model on mAP, but 100% faster.\n", "\n", "1. Table Recognition\n", "\n", "In PP-StructureV2, we propose an efficient Table Recognition algorithm named SLANet (Structure Location Alignment Network). Compared with TableRec-RARE, SLANet has been upgraded in terms of model structure and loss. The enhancement strategies are as follows:\n", "\n", "**(1) PP-LCNet: CPU-friendly Lightweight Backbone**\n", "\n", "PP-LCNet is a lightweight CPU network based on the MKLDNN acceleration strategy, which achieves better performance on multiple tasks than lightweight models such as ShuffleNetV2, MobileNetV3, and GhostNet. Additionally, pre-trained weights trained by SSLD on ImageNet are used for Table Recognition model training process for higher accuracy.\n", "\n", "**(2) CSP-PAN: Lightweight Multi-level Feature Fusion Module**\n", "\n", "Fusion of the features extracted by the backbone network can effectively alleviate problems brought by scale changes in complex scenes. In the early days, the FPN module was proposed and used for feature fusion, but its feature fusion process was one-way (from high-level to low-level), which was not sufficient. CSP-PAN is improved based on PAN. While ensuring more sufficient feature fusion, strategies such as CSP block and depthwise separable convolution are used to reduce the computational cost. 
In SLANet, we reduce the output channels of CSP-PAN from 128 to 96 in order to reduce the model size.\n", "\n", "\n", "**(3) SLAHead: Structure and Location Alignment Module**\n", "\n", "In the TableRec-RARE head, the outputs of all steps are concatenated and then fed into SDM (Structure Decode Module) and CLDM (Cell Location Decode Module) to generate all cell tokens and coordinates, which ignores the one-to-one correspondence between cell tokens and coordinates. Therefore, we propose SLAHead to align cell tokens and coordinates. In SLAHead, the output of each step is fed into SDM and CLDM to obtain the token and coordinates of the current step; the tokens and coordinates of all steps are then concatenated to obtain the HTML table representation and the coordinates of all cells.\n", "\n", "
\n", " \n", "
\n", "\n", "\n", "**(4) Merge Token**\n", "\n", "In TableRec-RARE, we use two separate tokens `` and `` to represent a non-cross-row-column cell, which limits the network’s ability to handle tables with a large number of cells. Inspired by TableMaster, we regard `` and `` as one token `` in SLANet.\n", "\n", "\n", "1. Layout Recovery\n", "\n", "Layout Recovery a newly added module which is responsible for restoring the image to an editable Word file according to the analysis results. The following figure shows the result of layout restoration:\n", "\n", "
\n", " \n", "
\n", "\n", "1. Key Information Extraction\n", "\n", "Key Information Extraction (KIE) is usually used to extract the specific information such as name, address and other fields in the ID card or forms. Semantic Entity Recognition (SER) and Relationship Extraction (RE) are two subtasks in KIE, which have been supported in PP-Structure. In PP-StructureV2, we design a visual-feature independent LayoutXLM structure for less inference time cost. TB-YX sorting algorithm and U-DML knowledge distillation are utilized for higher accuracy. The following figure shows the KIE framework.\n", "\n", "\n", "
\n", " \n", "
\n", "\n", "\n", "The enhancement strategies are as follows:\n", "\n", "**(1) VI-LayoutXLM(Visual-feature Independent LayoutXLM)**\n", "\n", "Visual backbone network is introduced in LayoutLMv2 and LayoutXLM to extract visual features and combine with subsequent text embedding as multi-modal input embedding. Considering that the visual backbone is base on ResNet x101 64x4d, which takes much time during the visual feature extraction process, we remove this submodule from LayoutXLM. Surprisingly, we found that Hmean of SER and RE tasks based on LayoutXLM is not decreased, and Hmean of SER task based on LayoutLMv2 is just reduced by 2.1%, while the model size is reduced by about 340MB. At the same time, based on the XFUND dataset, the accuracy of VI-LayoutXLM on the RE task is improved by `1.06%`.\n", "\n", "**(2) TB-YX: Threshold-Based YX sorting algorithm**\n", "\n", "Text reading order is important for KIE tasks. In traditional multi-modal KIE methods, incorrect reading order that may be generated by different OCR engines is not considered, which will directly affect the position embedding and final inference result. Generally, we sort the OCR results from top to bottom and then left to right according to the absolute coordinates of the detected text boxes (YX). The obtained order is usually unstable and not consistent with the reading order. We introduce a position offset threshold th to address this problem (TB-YX). The text boxes are still sorted from top to bottom first, but when the distance between the two text boxes in the Y direction is less than the threshold th, their order is determined by the order in the X direction.\n", "\n", "
\n", " \n", "
\n", "\n", "\n", "Using this strategy, on the XFUND dataset, the F1 index of the SER task increased by `2.06%`, and the F1 index of the RE task increased by `7.04%`.\n", "\n", "**(3) U-DML: Unified-Deep Mutual Learning**\n", "\n", "U-DML is a distillation method proposed in PP-OCRv2 which can effectively improve the accuracy without increasing model size. In PP-StructureV2, we apply U-DML to the training process of SER and RE tasks, and Hmean is increased by 0.6% and 5.1%, repectively.\n", "\n", "\n", "The visualization results of VI-LayoutXLM based on the SER task are shown below.\n", "\n", "
\n", " \n", "
\n", "\n", "
\n", " \n", "
\n", "\n", "\n", "The visualization results of VI-LayoutXLM based on the RE task are shown below.\n", "\n", "\n", "
\n", " \n", "
\n", "\n", "
\n", " \n", "
\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5. Attention\n", "\n", "1. The PP-StructureV2 series of models have public data sets during the training process. If the performance is not satisfactory in the actual scene, a small amount of data can be marked for finetune.\n", "2. The online experience currently only supports table recognition. For layout analysis and layout recovery, please refer to `3.1 Model Inference`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 6. Related papers and citations\n", "```\n", "@article{li2022pp,\n", " title={PP-StructureV2: A Stronger Document Analysis System},\n", " author={Li, Chenxia and Guo, Ruoyu and Zhou, Jun and An, Mengtao and Du, Yuning and Zhu, Lingfeng and Liu, Yi and Hu, Xiaoguang and Yu, Dianhai},\n", " journal={arXiv preprint arXiv:2210.05391},\n", " year={2022}\n", "}\n", "```\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3.8.13 ('py38')", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.13" }, "vscode": { "interpreter": { "hash": "58fd1890da6594cebec461cf98c6cb9764024814357f166387d10d267624ecd6" } } }, "nbformat": 4, "nbformat_minor": 4 }