{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. PP-StructureV2 Introduction\n",
    "\n",
    "PP-StructureV2 is further improved on the basis of PP-StructureV1, mainly in the following three aspects:\n",
    "\n",
    " * **System function upgrade**: Added image correction and layout restoration modules, image conversion to word/pdf, and key information extraction capabilities!\n",
    " * **System performance optimization** :\n",
    "\t * Layout analysis: released a lightweight layout analysis model, the speed is increased by **11 times**, and the average CPU time is only **41ms**!\n",
    "\t * Table recognition: three optimization strategies are designed, and the model accuracy is improved by **6%** when the prediction time is constant.\n",
    "\t * Key information extraction: designing a visually irrelevant model structure, the accuracy of semantic entity recognition is improved by **2.8%**, and the accuracy of relation extraction is improved by **9.1%**.\n",
    " * **Chinese scene adaptation**: Complete the Chinese scene adaptation for layout analysis and table recognition, open source **out-of-the-box** Chinese scene layout structure model!\n",
    "\n",
    "The PP-StructureV2 framework is shown in the figure below. Firstly, the input document image direction is corrected by the Image Direction Correction module. For the Layout Information Extraction subsystem, as shown in the upper branch, the corrected image is firstly divided into different areas such as text, table and image through the layout analysis module, and then these areas are recognized respectively. For example, the table area is sent to the table recognition module for structural recognition, and the text area is sent to the OCR engine for text recognition. Finally, the layout recovery module is used to restore the image to an editable Word file consistent with the original image layout. For the Key Information Extraction subsystem, as shown in the lower branch, OCR engine is used to extract the text content, then the Semantic Entity Recognition module and Relation Extraction module are used to obtain the entities and their relationship in the image, respectively, so as to extract the required key information.\n",
    "\n",
    "<div align=\"center\">\n",
    "<img src=\"https://user-images.githubusercontent.com/14270174/185939247-57e53254-399c-46c4-a610-da4fa79232f5.png\"  width = \"80%\"  />\n",
    "</div>\n",
    "\n",
    "\n",
    "We made 8 improvements to 3 key sub-modules in the system.\n",
    "\n",
    "* Layout analysis\n",
    "\t* PP-PicoDet: A better real-time object detector on mobile devices\n",
    "\t* FGD: Focal and Global Knowledge Distillation\n",
    "\n",
    "* Table Recognition\n",
    "\t* PP-LCNet: CPU-friendly Lightweight Backbone\n",
    "\t* CSP-PAN: Lightweight Multi-level Feature Fusion Module\n",
    "\t* SLAHead: Structure and Location Alignment Module\n",
    "\n",
    "* Key Information Extraction\n",
    "\t* VI-LayoutXLM: Visual-feature Independent LayoutXLM\n",
    "\t* TB-YX: Threshold-Based YX sorting algorithm\n",
    "\t* UDML: Unified-Deep Mutual Learning\n",
    "\n",
    "Finally, compared to PP-StructureV1:\n",
    "\n",
    "- The number of parameters of the layout analysis model is reduced by 95.6%, the inference speed is increased by 11 times, and the accuracy is increased by 0.4%;\n",
    "- The table recognition model improves the model accuracy by 6% and the end-to-end TEDS by 2% without changing the prediction time.\n",
    "- The speed of the key information extraction model is increased by 2.8 times, the accuracy of the semantic entity recognition model is increased by 2.8%, and the accuracy of the relationship extraction model is increased by 9.1%.\n",
    "\n",
    "\n",
    "For more details, please refer to the technical report: https://arxiv.org/abs/2210.05391v2 .\n",
    "\n",
    "For more information about PaddleOCR, you can click https://github.com/PaddlePaddle/PaddleOCR to learn more.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. Model Effects\n",
    "\n",
    "The results of PP-StructureV2 are as follows:\n",
    "\n",
    "- Layout analysis\n",
    "  \n",
    "<div align=\"center\">\n",
    "<img src=\"https://user-images.githubusercontent.com/14270174/185940654-956ef614-888a-4779-bf63-a6c2b61b97fa.png\"  width = \"60%\"  />\n",
    "</div>\n",
    "\n",
    "- Table recognition\n",
    "  \n",
    "<div align=\"center\">\n",
    "<img src=\"https://user-images.githubusercontent.com/14270174/185941221-c94e3d45-524c-4073-9644-21ba6a9fd93e.png\"  width = \"60%\"  />\n",
    "</div>\n",
    "\n",
    "- Layout recovery\n",
    "  \n",
    "<div align=\"center\">\n",
    "<img src=\"https://user-images.githubusercontent.com/14270174/185941816-4dabb3e8-a0db-4094-98ea-52e0a0fda8e8.png\"  width = \"60%\"  />\n",
    "</div>\n",
    "\n",
    "\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. How to Use the Model\n",
    "\n",
    "### 3.1 Inference\n",
    "* Install PaddleOCR whl package"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    },
    "scrolled": true,
    "tags": []
   },
   "outputs": [],
   "source": [
    "! pip install \"paddleocr>=2.6.1.0\" --user"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "* Quick experience\n",
    "  \n",
    "image orientation + layout analysis + table recognition"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true,
    "tags": []
   },
   "outputs": [],
   "source": [
    "! wget https://raw.githubusercontent.com/PaddlePaddle/PaddleOCR/release/2.6/ppstructure/docs/table/1.png\n",
    "! pip install paddleclas\n",
    "! paddleocr --image_dir=1.png --type=structure --image_orientation=true"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "layout analysis + table recognition"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "! wget https://raw.githubusercontent.com/PaddlePaddle/PaddleOCR/release/2.6/ppstructure/docs/table/1.png\n",
    "! paddleocr --image_dir=1.png --type=structure"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "layout analysis"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "! wget https://raw.githubusercontent.com/PaddlePaddle/PaddleOCR/release/2.6/ppstructure/docs/table/1.png\n",
    "! paddleocr --image_dir=1.png --type=structure --table=false --ocr=false"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "table recognition"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "! wget https://raw.githubusercontent.com/PaddlePaddle/PaddleOCR/release/2.6/ppstructure/docs/table/table.jpg\n",
    "! paddleocr --image_dir=table.jpg --type=structure --layout=false"
   ]
  },
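  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "* Python API\n",
    "\n",
    "The same pipeline can also be called from Python. The cell below is a minimal sketch based on the PaddleOCR whl usage; the image path and output folder are placeholders."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import cv2\n",
    "from paddleocr import PPStructure, save_structure_res\n",
    "\n",
    "# Layout analysis + table recognition on the demo image downloaded above.\n",
    "engine = PPStructure(show_log=True)\n",
    "img = cv2.imread('1.png')\n",
    "result = engine(img)\n",
    "\n",
    "# Save per-region results (cropped figures, table HTML/Excel, OCR text).\n",
    "save_structure_res(result, './output', 'demo')\n",
    "\n",
    "for region in result:\n",
    "    region.pop('img', None)  # drop the image array so printing stays readable\n",
    "    print(region['type'], region['bbox'])"
   ]
  },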
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.2 Train the model\n",
    "The PP-StructureV2 system consists of a layout analysis model, a text detection model, a text recognition model and a table recognition model. For the four model training tutorials, please refer to the following documents:\n",
    "1. Layout analysis model: [Layout analysis model training tutorial](https://github.com/PaddlePaddle/PaddleOCR/blob/release%2F2.6/ppstructure/layout/README_ch.md)\n",
    "2. text detection model: [text detection training tutorial](https://github.com/PaddlePaddle/PaddleOCR/blob/release%2F2.6/doc/doc_ch/detection.md)\n",
    "3. text recognition model: [text recognition training tutorial](https://github.com/PaddlePaddle/PaddleOCR/blob/release%2F2.6/doc/doc_ch/recognition.md)\n",
    "3. table recognition model: [table recognition training tutorial](https://github.com/PaddlePaddle/PaddleOCR/blob/release%2F2.6/doc/doc_ch/table_recognition.md)\n",
    "\n",
    "After the model training is completed, it can be used in series by specifying the model path. The command reference is as follows:\n",
    "```python\n",
    "paddleocr --image_dir 11.jpg --layout_model_dir=/path/to/layout_inference_model --det_model_dir=/path/to/det_inference_model --rec_model_dir=/path/to/rec_inference_model --table_model_dir=/path/to/table_inference_model\n",
    "```"
   ]
  },
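  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The equivalent Python API call is sketched below; the model directories are placeholders that should point to your exported inference models, and the keyword arguments mirror the CLI flags above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import cv2\n",
    "from paddleocr import PPStructure\n",
    "\n",
    "# Placeholder paths: replace them with your exported inference model directories.\n",
    "engine = PPStructure(\n",
    "    layout_model_dir='/path/to/layout_inference_model',\n",
    "    det_model_dir='/path/to/det_inference_model',\n",
    "    rec_model_dir='/path/to/rec_inference_model',\n",
    "    table_model_dir='/path/to/table_inference_model')\n",
    "\n",
    "img = cv2.imread('11.jpg')\n",
    "result = engine(img)\n",
    "for region in result:\n",
    "    print(region['type'], region['bbox'])"
   ]
  },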
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4. Model Principles\n",
    "\n",
    "The enhancement strategies of each module are as follows\n",
    "\n",
    "1. Image Direction Correction Module\n",
    "   \n",
    "Since the training set is generally dominated by 0-degree images, the information extraction effect of rotated images is often compromised. In PP-StructureV2, the input image direction is firstly corrected by the PULC text image direction model provided by PaddleClas. Some demo images in the dataset are shown below. Different from the text line direction classifier, the text image direction classifier performs direction classification for the entire image. The text image direction classification model achieves 99% accuracy on the validation set with 463 FPS on CPU device.\n",
    "\n",
    "<div align=\"center\">\n",
    "    <img src=\"https://user-images.githubusercontent.com/14270174/185939683-f6465473-3303-4a0c-95be-51f04fb9f387.png\" width=\"600\">\n",
    "</div>\n",
    "\n",
    "1. Layout Analysis\n",
    "\n",
    "Layout Analysis refers to dividing document images into predefined areas such as text, title, table, and figure. In PP-Structure, we adopted the object detection algorithm PP-YOLOv2 as the layout detector.\n",
    "\n",
    "**(1)PP-PicoDet: A better real-time object detector on mobile devices**\n",
    "\n",
    "PaddleDetection proposed a new family of real-time object detectors, named PP-PicoDet, which achieves superior performance on mobile devices. PP-PicoDet adopts the CSP structure to constructure CSP-PAN as the neck, SimOTA as label assignment strategy, PP-LCNet as the backbone, and an improved detection One-shot Neural Architecture Search(NAS) is proposed to find the optimal architecture automatically for object detection. We replace PP-YOLOv2 adopted by PP-Structure with PP-PicoDet, and adjust the input scale from 640*640 to 800*608, which is more suitable for document images. With 1.0x configuration, the accuracy is comparable to PP-YOLOv2, and the CPU inference speed is 11 times faster.\n",
    "\n",
    "**(2) FGD: Focal and Global Knowledge Distillation**\n",
    "\n",
    "FGD, a knowledge distillation algorithm for object detection, takes into account local and global feature maps, combining focal distillation and global distillation. Focal distillation separates the foreground and background of the image, forcing the student to focus on the teacher’s critical pixels and channels. Global distillation rebuilds the relation between different pixels and transfers it from teachers to students, compensating for missing global information in focal distillation. Based on the FGD distillation strategy, the student model (LCNet1.0x based PP-PicoDet) gets 0.5% mAP improvement with the knowledge from the teacher model (LCNet2.5x based PP-PicoDet). Finally the student model is only 0.2% lower than the teacher model on mAP, but 100% faster.\n",
    "\n",
    "1. Table Recognition\n",
    "\n",
    "In PP-StructureV2, we propose an efficient Table Recognition algorithm named SLANet (Structure Location Alignment Network). Compared with TableRec-RARE, SLANet has been upgraded in terms of model structure and loss. The enhancement strategies are as follows:\n",
    "\n",
    "**(1) PP-LCNet: CPU-friendly Lightweight Backbone**\n",
    "\n",
    "PP-LCNet is a lightweight CPU network based on the MKLDNN acceleration strategy, which achieves better performance on multiple tasks than lightweight models such as ShuffleNetV2, MobileNetV3, and GhostNet. Additionally, pre-trained weights trained by SSLD on ImageNet are used for Table Recognition model training process for higher accuracy.\n",
    "\n",
    "**(2) CSP-PAN: Lightweight Multi-level Feature Fusion Module**\n",
    "\n",
    "Fusion of the features extracted by the backbone network can effectively alleviate problems brought by scale changes in complex scenes. In the early days, the FPN module was proposed and used for feature fusion, but its feature fusion process was one-way (from high-level to low-level), which was not sufficient. CSP-PAN is improved based on PAN. While ensuring more sufficient feature fusion, strategies such as CSP block and depthwise separable convolution are used to reduce the computational cost. In SLANet, we reduce the output channels of CSP-PAN from 128 to 96 in order to reduce the model size.\n",
    "\n",
    "\n",
    "**(3) SLAHead: Structure and Location Alignment Module**\n",
    "\n",
    "In the TableRec-RARE head, output of each step is concatenated and fed into SDM (Structure Decode Module) and CLDM (Cell Location Decode Module) to generate all cell tokens and coordinates, which ignores the one-to-one correspondence between cell token and coordinates. Therefore, we propose the SLAHead to align cell token and coordinates. In SLAHead, output of each step is fed into SDM and CLDM to get the token and coordinates of the current step, the token and coordinates of all steps are concatenated to get the HTML table representation and coordinates of all cells.\n",
    "\n",
    "<div align=\"center\">\n",
    "    <img src=\"https://user-images.githubusercontent.com/14270174/185940968-e3a2fbac-78d7-4b74-af54-a1dab860f470.png\" width=\"1200\">\n",
    "</div>\n",
    "\n",
    "\n",
    "**(4) Merge Token**\n",
    "\n",
    "In TableRec-RARE, we use two separate tokens `<td>` and `</td>` to represent a non-cross-row-column cell, which limits the network’s ability to handle tables with a large number of cells. Inspired by TableMaster, we regard `<td>` and `</td>` as one token `<td></td>` in SLANet.\n",
    "\n",
    "\n",
    "1. Layout Recovery\n",
    "\n",
    "Layout Recovery a newly added module which is responsible for restoring the image to an editable Word file according to the analysis results. The following figure shows the result of layout restoration:\n",
    "\n",
    "<div align=\"center\">\n",
    "    <img src=\"https://user-images.githubusercontent.com/14270174/185941816-4dabb3e8-a0db-4094-98ea-52e0a0fda8e8.png\" width=\"1200\">\n",
    "</div>\n",
    "\n",
    "1. Key Information Extraction\n",
    "\n",
    "Key Information Extraction (KIE) is usually used to extract the specific information such as name, address and other fields in the ID card or forms. Semantic Entity Recognition (SER) and Relationship Extraction (RE) are two subtasks in KIE, which have been supported in PP-Structure. In PP-StructureV2, we design a visual-feature independent LayoutXLM structure for less inference time cost. TB-YX sorting algorithm and U-DML knowledge distillation are utilized for higher accuracy. The following figure shows the KIE framework.\n",
    "\n",
    "\n",
    "<div align=\"center\">\n",
    "    <img src=\"https://user-images.githubusercontent.com/14270174/185941978-abec7d4a-5e3a-4141-83f8-088d04ef898e.png\" width=\"1000\">\n",
    "</div>\n",
    "\n",
    "\n",
    "The enhancement strategies are as follows:\n",
    "\n",
    "**(1) VI-LayoutXLM(Visual-feature Independent LayoutXLM)**\n",
    "\n",
    "Visual backbone network is introduced in LayoutLMv2 and LayoutXLM to extract visual features and combine with subsequent text embedding as multi-modal input embedding. Considering that the visual backbone is base on ResNet x101 64x4d, which takes much time during the visual feature extraction process, we remove this submodule from LayoutXLM. Surprisingly, we found that Hmean of SER and RE tasks based on LayoutXLM is not decreased, and Hmean of SER task based on LayoutLMv2 is just reduced by 2.1%, while the model size is reduced by about 340MB. At the same time, based on the XFUND dataset, the accuracy of VI-LayoutXLM on the RE task is improved by `1.06%`.\n",
    "\n",
    "**(2) TB-YX: Threshold-Based YX sorting algorithm**\n",
    "\n",
    "Text reading order is important for KIE tasks. In traditional multi-modal KIE methods, incorrect reading order that may be generated by different OCR engines is not considered, which will directly affect the position embedding and final inference result. Generally, we sort the OCR results from top to bottom and then left to right according to the absolute coordinates of the detected text boxes (YX). The obtained order is usually unstable and not consistent with the reading order. We introduce a position offset threshold th to address this problem (TB-YX). The text boxes are still sorted from top to bottom first, but when the distance between the two text boxes in the Y direction is less than the threshold th, their order is determined by the order in the X direction.\n",
    "\n",
    "<div align=\"center\">\n",
    "    <img src=\"https://user-images.githubusercontent.com/14270174/185942080-9d4bafc9-fa7f-4da4-b139-b2bd703dc76d.png\" width=\"800\">\n",
    "</div>\n",
    "\n",
    "\n",
    "Using this strategy, on the XFUND dataset, the F1 index of the SER task increased by `2.06%`, and the F1 index of the RE task increased by `7.04%`.\n",
    "\n",
    "**(3) U-DML: Unified-Deep Mutual Learning**\n",
    "\n",
    "U-DML is a distillation method proposed in PP-OCRv2 which can effectively improve the accuracy without increasing model size. In PP-StructureV2, we apply U-DML to the training process of SER and RE tasks, and Hmean is increased by 0.6% and 5.1%, repectively.\n",
    "\n",
    "\n",
    "The visualization results of VI-LayoutXLM based on the SER task are shown below.\n",
    "\n",
    "<div align=\"center\">\n",
    "    <img src=\"https://user-images.githubusercontent.com/14270174/185942213-0909135b-3bcd-4d79-9e69-847dfb1c3b82.png\" width=\"800\">\n",
    "</div>\n",
    "\n",
    "<div align=\"center\">\n",
    "    <img src=\"https://user-images.githubusercontent.com/14270174/185942237-72923b42-8590-42eb-b687-fa819b1c3afd.png\" width=\"800\">\n",
    "</div>\n",
    "\n",
    "\n",
    "The visualization results of VI-LayoutXLM based on the RE task are shown below.\n",
    "\n",
    "\n",
    "<div align=\"center\">\n",
    "    <img src=\"https://user-images.githubusercontent.com/14270174/185942400-8920dc3c-de7f-46d0-b0bc-baca9536e0e1.png\" width=\"800\">\n",
    "</div>\n",
    "\n",
    "<div align=\"center\">\n",
    "    <img src=\"https://user-images.githubusercontent.com/14270174/185942416-ca4fd8b0-9227-4c65-b969-0afbda525b85.png\" width=\"800\">\n",
    "</div>\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 5. Attention\n",
    "\n",
    "1. The PP-StructureV2 series of models have public data sets during the training process. If the performance is not satisfactory in the actual scene, a small amount of data can be marked for finetune.\n",
    "2. The online experience currently only supports table recognition. For layout analysis and layout recovery, please refer to `3.1 Model Inference`."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 6. Related papers and citations\n",
    "```\n",
    "@article{li2022pp,\n",
    "  title={PP-StructureV2: A Stronger Document Analysis System},\n",
    "  author={Li, Chenxia and Guo, Ruoyu and Zhou, Jun and An, Mengtao and Du, Yuning and Zhu, Lingfeng and Liu, Yi and Hu, Xiaoguang and Yu, Dianhai},\n",
    "  journal={arXiv preprint arXiv:2210.05391},\n",
    "  year={2022}\n",
    "}\n",
    "```\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3.8.13 ('py38')",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.13"
  },
  "vscode": {
   "interpreter": {
    "hash": "58fd1890da6594cebec461cf98c6cb9764024814357f166387d10d267624ecd6"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}