{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. PP-OCRv3 Introduction\n", "\n", "PP-OCRv3 is further upgraded on the basis of PP-OCRv2. The pipeline is the same as PP-OCRv2, optimized for detection model and recognition model. Among them, the detection module is still optimized based on the DB algorithm, while the recognition module uses CVRT to replace CRNN, and makes industrial adaptation to it. The pipeline of PP-OCRv3 is as follows (the new strategy for PP-OCRv3 is in the pink box):\n", "\n", "
\n", "\n", "
\n", "\n", "\n", "PP-OCRv3 upgrades the text detection model and text recognition model in 9 aspects based on PP-OCRv2. \n", "\n", "- Text detection:\n", " - LK-PAN:LK-PAN: a PAN module with large receptive field;\n", " - DML: deep mutual learning for teacher model;\n", " - RSE-FPN: a FPN module with residual attention mechanism;\n", "\n", "\n", "- Text recognition\n", " - SVTR-LCNet: lightweight text recognition network;\n", " - GTC: Guided rraining of CTC by attention;\n", " - TextConAug: data augmentation for mining text context information;\n", " - TextRotNet: self-supervised pre-trained model;\n", " - U-DML: unified-deep mutual learning;\n", " - UIM: unlabeled images mining;\n", "\n", "In the case of comparable speeds, the accuracy of various scenarios has been greatly improved:\n", "- Compared with the PP-OCRv2 Chinese model, the Chinese scene is improved by more than 5%;\n", "- Compared with the PP-OCRv2 English model in the English digital scene, it is improved by 11%;\n", "- In multi-language scenarios, the recognition performance of 80+ languages is optimized, and the average accuracy rate is increased by more than 5%.\n", "\n", "\n", "\n", "For more details, please refer to the technical report: https://arxiv.org/abs/2206.03001 .\n", "\n", "For more information about PaddleOCR, you can click https://github.com/PaddlePaddle/PaddleOCR to learn more.\n", "\n", "\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. Model Effects\n", "\n", "The results of PP-OCRv3 are as follows:\n", "\n", "
\n", "\n", "
\n", "
\n", "\n", "
\n", "\n", "\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. How to Use the Model\n", "\n", "### 3.1 Inference\n", "* Install PaddleOCR whl package" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false }, "scrolled": true, "tags": [] }, "outputs": [], "source": [ "! pip install paddleocr" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Quick experience" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true, "tags": [] }, "outputs": [], "source": [ "# command line usage\n", "! wget https://raw.githubusercontent.com/PaddlePaddle/PaddleOCR/dygraph/doc/imgs/11.jpg\n", "! paddleocr --image_dir 11.jpg --use_angle_cls true" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After the operation is complete, the following results will be output in the terminal:\n", "```log\n", "[[[28.0, 37.0], [302.0, 39.0], [302.0, 72.0], [27.0, 70.0]], ('纯臻营养护发素', 0.96588134765625)]\n", "[[[26.0, 81.0], [172.0, 83.0], [172.0, 104.0], [25.0, 101.0]], ('产品信息/参数', 0.9113278985023499)]\n", "[[[28.0, 115.0], [330.0, 115.0], [330.0, 132.0], [28.0, 132.0]], ('(45元/每公斤,100公斤起订)', 0.8843421936035156)]\n", "......\n", "```\n", "\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 3.2 Train the model.\n", "The PP-OCR system consists of a text detection model, an angle classifier and a text recognition model. For the three model training tutorials, please refer to the following documents:\n", "1. text detection model: [text detection training tutorial](https://github.com/PaddlePaddle/PaddleOCR/blob/release%2F2.6/doc/doc_ch/detection.md)\n", "1. angle classifier: [angle classifier training tutorial](https://github.com/PaddlePaddle/PaddleOCR/blob/release%2F2.6/doc/doc_ch/angle_class.md)\n", "1. text recognition model: [text recognition training tutorial](https://github.com/PaddlePaddle/PaddleOCR/blob/release%2F2.6/doc/doc_ch/recognition.md)\n", "\n", "After the model training is completed, it can be used in series by specifying the model path. The command reference is as follows:\n", "```python\n", "paddleocr --image_dir 11.jpg --use_angle_cls true --ocr_version PP-OCRv2 --det_model_dir=/path/to/det_inference_model --cls_model_dir=/path/to/cls_inference_model --rec_model_dir=/path/to/rec_inference_model\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4. Model Principles\n", "\n", "The optimization ideas are as follows\n", "\n", "1. Text detection enhancement strategies\n", "- LK-PAN: a PAN module with large receptive field\n", " \n", "LK-PAN (Large Kernel PAN) is a lightweight PAN structure with a larger receptive field. The core is to change the convolution kernel in the path augmentation of the PAN structure from 3*3 to 9*9. By increasing the convolution kernel, the receptive field covered by each position of the feature map is improved, and it is easier to detect text in large fonts and text with extreme aspect ratios. Using the LK-PAN structure, the hmean of the teacher model can be improved from 83.2% to 85.0%.\n", "\n", "
\n", " \n", "
\n", "\n", "- DML: deep mutual learning for teacher model\n", "\n", "[DML](https://arxiv.org/abs/1706.00384) (Deep Mutual Learning) The mutual learning distillation method, as shown in the figure below, can effectively improve the accuracy of the text detection model by learning from each other with two models with the same structure. The teacher model adopts the DML strategy, and the hmean is increased from 85% to 86%. By updating the teacher model of CML in PP-OCRv2 to the above higher-accuracy teacher model, the hmean of the student model can be further improved from 83.2% to 84.3%.\n", "
\n", " \n", "
\n", "\n", "- RSE-FPN: a FPN module with residual attention mechanism\n", "\n", "RSE-FPN (Residual Squeeze-and-Excitation FPN), as shown in the figure below, introduces a residual structure and a channel attention structure, and replaces the convolutional layer in the FPN with the RSEConv layer of the channel attention structure to further improve the representation of the feature map ability. Considering that the number of FPN channels in the detection model of PP-OCRv2 is very small, only 96, if SEblock is directly used to replace the convolution in FPN, the features of some channels will be suppressed, and the accuracy will be reduced. The introduction of residual structure in RSEConv will alleviate the above problems and improve the text detection effect. By further updating the FPN structure of the student model of CML in PP-OCRv2 to RSE-FPN, the hmean of the student model can be further improved from 84.3% to 85.4%.\n", "\n", "
\n", "\n", "
\n", "\n", "1. Text recognition enhancement strategies\n", "- SVTR_LCNet: lightweight text recognition network\n", "\n", "SVTR_LCNet is a lightweight text recognition network that integrates Transformer-based SVTR network and lightweight CNN network PP-LCNet for text recognition tasks. Using this network, the prediction speed is 20% better than the recognition model of PP-OCRv2, but the effect of the recognition model is slightly worse because the distillation strategy is not adopted. In addition, the normalization height of the input image is further increased from 32 to 48, and the prediction speed is slightly slower, but the model effect is greatly improved, and the recognition accuracy rate reaches 73.98% (+2.08%), which is close to the recognition model effect of PP-OCRv2 using the distillation strategy.\n", "\n", "- GTC: Guided rraining of CTC by attention\n", "\n", "[GTC](https://arxiv.org/pdf/2002.01276.pdf) (Guided Training of CTC), which uses the Attention module CTC training and integrates the expression of multiple text features is an effective strategy to improve text recognition. Using this strategy, the Attention module is completely removed during prediction, and no time-consuming is added in the inference stage, and the accuracy of the recognition model is further improved to 75.8% (+1.82%). The training process is as follows:\n", "\n", "
\n", "\n", "
\n", "\n", "- TextConAug: data augmentation for mining text context information\n", "\n", "TextConAug is a data augmentation strategy for mining textual context information. The main idea comes from the paper [ConCLR](https://www.cse.cuhk.edu.hk/~byu/papers/C139-AAAI2022-ConCLR.pdf) , the author proposes ConAug data augmentation to connect 2 different images in a batch to form new images and perform self-supervised comparative learning. PP-OCRv3 applies this method to supervised learning tasks, and designs the TextConAug data augmentation method, which can enrich the context information of training data and improve the diversity of training data. Using this strategy, the accuracy of the recognition model is further improved to 76.3% (+0.5%). The schematic diagram of TextConAug is as follows:\n", "\n", "
\n", "\n", "
\n", "\n", "- TextRotNet: self-supervised pre-trained model\n", "\n", "TextRotNet is a pre-training model that uses a large amount of unlabeled text line data and is trained in a self-supervised manner. Refer to the paper [STR-Fewer-Labels](https://github.com/ku21fan/STR-Fewer-Labels). This model can initialize the initial weights of SVTR_LCNet, which helps the text recognition model to converge to a better position. Using this strategy, the accuracy of the recognition model is further improved to 76.9% (+0.6%). The TextRotNet training process is shown in the following figure:\n", "\n", "
\n", "\n", "
\n", "\n", "- U-DML: unified-deep mutual learning\n", "\n", "UDML (Unified-Deep Mutual Learning) joint mutual learning is a strategy adopted in PP-OCRv2 that is very effective for text recognition to improve the model effect. In PP-OCRv3, for two different SVTR_LCNet and Attention structures, the feature map of PP-LCNet, the output of the SVTR module and the output of the Attention module between them are simultaneously supervised and trained. Using this strategy, the accuracy of the recognition model is further improved to 78.4% (+1.5%).\n", "\n", "- UIM: unlabeled images mining\n", "\n", "UIM (Unlabeled Images Mining) is a very simple unlabeled data mining scheme. The core idea is to use a high-precision text recognition model to predict unlabeled data, obtain pseudo-labels, and select samples with high prediction confidence as training data for training small models. Using this strategy, the accuracy of the recognition model is further improved to 79.4% (+1%).\n", "\n", "
\n", "\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5. Attention\n", "\n", "\n", "General data are used in the training process of PP-OCR series models. If the performance is not satisfactory in the actual scene, a small amount of data can be marked for finetune." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 6. Related papers and citations\n", "```\n", "@article{du2021pp,\n", " title={PP-OCRv2: bag of tricks for ultra lightweight OCR system},\n", " author={Du, Yuning and Li, Chenxia and Guo, Ruoyu and Cui, Cheng and Liu, Weiwei and Zhou, Jun and Lu, Bin and Yang, Yehua and Liu, Qiwen and Hu, Xiaoguang and others},\n", " journal={arXiv preprint arXiv:2109.03144},\n", " year={2021}\n", "}\n", "\n", "@inproceedings{zhang2018deep,\n", " title={Deep mutual learning},\n", " author={Zhang, Ying and Xiang, Tao and Hospedales, Timothy M and Lu, Huchuan},\n", " booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},\n", " pages={4320--4328},\n", " year={2018}\n", "}\n", "\n", "@inproceedings{hu2020gtc,\n", " title={Gtc: Guided training of ctc towards efficient and accurate scene text recognition},\n", " author={Hu, Wenyang and Cai, Xiaocong and Hou, Jun and Yi, Shuai and Lin, Zhiping},\n", " booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},\n", " volume={34},\n", " number={07},\n", " pages={11005--11012},\n", " year={2020}\n", "}\n", "\n", "@inproceedings{zhang2022context,\n", " title={Context-based Contrastive Learning for Scene Text Recognition},\n", " author={Zhang, Xinyun and Zhu, Binwu and Yao, Xufeng and Sun, Qi and Li, Ruiyu and Yu, Bei},\n", " year={2022},\n", " organization={AAAI}\n", "}\n", "```\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.8" } }, "nbformat": 4, "nbformat_minor": 4 }