
    English | 简体中文


    Introduction

    PaddleOCR aims to create multilingual, awesome, leading, and practical OCR tools that help users train better models and apply them in practice.

    Recent updates

    • The PaddleOCR R&D team will share the key points of PP-OCRv2 at 20:15 on September 8th: Live Address.

    • 2021.9.7 Released PaddleOCR v2.3, in which PP-OCRv2 is proposed. The inference speed of PP-OCRv2 is 220% higher than that of PP-OCR server on CPU, and its F-score is 7% higher than that of PP-OCR mobile. (arXiv paper)

    • 2021.8.3 Released PaddleOCR v2.2, adding a new structured document analysis toolkit, PP-Structure, which supports layout analysis and table recognition (one-click export of chart images to Excel files).

    • 2021.4.8 Released the end-to-end text recognition algorithm PGNet, published in AAAI 2021 (find the tutorial here); released multi-language recognition models supporting more than 80 languages; in particular, the performance of the English recognition model has been optimized.

    • more

    Features

    • PP-OCR series of high-quality pre-trained models, comparable to commercial effects
      • Ultra lightweight PP-OCRv2 series models: detection (3.1M) + direction classifier (1.4M) + recognition (8.5M) = 13.0M
      • Ultra lightweight PP-OCR mobile series models: detection (3.0M) + direction classifier (1.4M) + recognition (5.0M) = 9.4M
      • General PP-OCR server series models: detection (47.1M) + direction classifier (1.4M) + recognition (94.9M) = 143.4M
      • Support Chinese, English, and digit recognition, vertical text recognition, and long text recognition
      • Support multi-language recognition: Korean, Japanese, German, French
    • Rich toolkits related to the OCR areas
      • Semi-automatic data annotation tool, i.e., PPOCRLabel: support fast and efficient data annotation
      • Data synthesis tool, i.e., Style-Text: easy to synthesize a large number of images which are similar to the target scene image
    • Support user-defined training and provide rich inference and deployment solutions
    • Support PIP installation, easy to use (see the minimal usage sketch after this list)
    • Support Linux, Windows, macOS, and other systems
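
    As a quick check of the PIP installation mentioned above, the following minimal sketch runs the full pipeline on one image through the Python API. It assumes the paddleocr package from PyPI and the PaddleOCR class with use_angle_cls/lang arguments as documented in the quick start; the image path is only an example, and the result layout can differ slightly between versions.

        # Install the framework and the toolkit first (CPU build shown):
        #   pip install paddlepaddle
        #   pip install paddleocr
        from paddleocr import PaddleOCR

        # Downloads the ultra-lightweight models on first use; enables the direction classifier.
        ocr = PaddleOCR(use_angle_cls=True, lang="en")

        # Detection + direction classification + recognition on a single image.
        result = ocr.ocr("doc/imgs_en/img_12.jpg", cls=True)
        for box, (text, confidence) in result:   # each line: [bounding box, (text, score)]
            print(text, confidence)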

    Visualization

    The above pictures are visualizations of the general ppocr_server model. For more visualization results, please see More visualizations.

    Community

    • Scan the QR code below with WeChat to join the official technical exchange group. We look forward to your participation.

    Quick Experience

    You can also quickly experience the ultra-lightweight OCR: Online Experience

    Mobile DEMO experience (based on EasyEdge and Paddle-Lite, supports iOS and Android systems): Sign in to the website to obtain the QR code for installing the App

    Also, you can scan the QR code below to install the App (Android support only)

    PP-OCR Series Model List (updated September 8th)

    Model introduction | Model name | Recommended scene | Detection model | Direction classifier | Recognition model
    Chinese and English ultra-lightweight PP-OCRv2 model (11.6M) | ch_PP-OCRv2_xx | Mobile & Server | inference model / pre-trained model | inference model / pre-trained model | inference model / pre-trained model
    Chinese and English ultra-lightweight PP-OCR model (9.4M) | ch_ppocr_mobile_v2.0_xx | Mobile & Server | inference model / pre-trained model | inference model / pre-trained model | inference model / pre-trained model
    Chinese and English general PP-OCR model (143.4M) | ch_ppocr_server_v2.0_xx | Server | inference model / pre-trained model | inference model / pre-trained model | inference model / pre-trained model

    For more model downloads (including multiple languages), please refer to PP-OCR series model downloads.
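
    Each entry above offers both an inference model (for deployment) and a pre-trained model (for fine-tuning). As a sketch of how locally downloaded inference models are typically wired into the Python API (the det_model_dir/cls_model_dir/rec_model_dir arguments are taken from the whl-package documentation; the directory paths below are placeholders, not official names):

        from paddleocr import PaddleOCR

        # Point the pipeline at locally downloaded and unpacked inference models.
        ocr = PaddleOCR(
            det_model_dir="./models/det_infer",
            cls_model_dir="./models/cls_infer",
            rec_model_dir="./models/rec_infer",
            use_angle_cls=True,
        )
        result = ocr.ocr("doc/imgs/11.jpg", cls=True)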

    For a new language request, please refer to the Guideline for new language requests.

    Tutorials

    PP-OCRv2 Pipeline

    [1] PP-OCR is a practical ultra-lightweight OCR system, mainly composed of three parts: DB text detection, detection box correction, and CRNN text recognition. The system adopts 19 effective strategies from 8 aspects, including backbone network selection and adjustment, prediction head design, data augmentation, learning rate transformation strategy, regularization parameter selection, pre-trained model use, and automatic model tailoring and quantization, to optimize and slim down the models of each module (as shown in the green box above). The final results are a 3.5M ultra-lightweight Chinese and English OCR model and a 2.8M English and digit OCR model. For more details, please refer to the PP-OCR technical article (https://arxiv.org/abs/2009.09941).
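
    The three stages can also be exercised separately through the Python API, which may help when inspecting the pipeline described above. This is a sketch assuming the det/rec/cls keyword arguments of the ocr() method in the pip package; exact behavior may vary by version.

        from paddleocr import PaddleOCR

        ocr = PaddleOCR(use_angle_cls=True, lang="ch")

        # Stage 1 only: DB text detection -> quadrilateral boxes around text regions.
        boxes = ocr.ocr("doc/imgs/11.jpg", det=True, rec=False)

        # Full pipeline: each detected box is cropped, its direction is classified
        # and corrected, and CRNN recognition returns (text, confidence) per region.
        lines = ocr.ocr("doc/imgs/11.jpg", det=True, rec=True, cls=True)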

    [2] On the basis of PP-OCR, PP-OCRv2 is further optimized in five aspects. The detection model adopts the CML (Collaborative Mutual Learning) knowledge distillation strategy and the CopyPaste data augmentation strategy. The recognition model adopts the LCNet lightweight backbone network, the U-DML knowledge distillation strategy, and an enhanced CTC loss function (as shown in the red box above), which further improve inference speed and prediction accuracy. For more details, please refer to the technical report of PP-OCRv2.
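
    As a rough, framework-agnostic illustration of the mutual-learning idea behind CML and U-DML (not PaddleOCR's actual implementation), two peer recognizers are trained jointly, and each adds a KL-divergence term that pulls its output distribution toward its peer's on top of its own CTC task loss:

        import numpy as np

        def softmax(logits, axis=-1):
            z = logits - logits.max(axis=axis, keepdims=True)
            e = np.exp(z)
            return e / e.sum(axis=axis, keepdims=True)

        def kl_div(p, q, eps=1e-8):
            # KL(p || q), averaged over time steps.
            return float(np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)))

        # Toy per-timestep output distributions of two peer recognizers
        # over a small character set (36 characters + CTC blank).
        rng = np.random.default_rng(0)
        p_a = softmax(rng.normal(size=(20, 37)))
        p_b = softmax(rng.normal(size=(20, 37)))

        # Placeholder values standing in for the real CTC losses computed elsewhere.
        ctc_loss_a, ctc_loss_b = 1.7, 1.9
        dml_weight = 1.0

        # Each model's total loss = its own CTC loss + a term pulling it toward the peer.
        loss_a = ctc_loss_a + dml_weight * kl_div(p_b, p_a)
        loss_b = ctc_loss_b + dml_weight * kl_div(p_a, p_b)
        print(loss_a, loss_b)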

    Visualization (more)

    • Chinese OCR model
    • English OCR model
    • Multilingual OCR model

    Guideline for New Language Requests

    If you want to request support for a new language, a PR with the following 2 files is needed:

    1. In the folder ppocr/utils/dict, submit a dict text file named {language}_dict.txt that contains a list of all characters. Please see the other files in that folder for the format.

    2. In the folder ppocr/utils/corpus, submit a corpus file named {language}_corpus.txt that contains a list of words in your language. At least 50,000 words per language are recommended; of course, the more, the better. A sketch of both files follows this list.
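
    As a concrete sketch of the two files (the one-character-per-line dict layout is assumed from the existing files in ppocr/utils/dict, and "xx" is a hypothetical language code with placeholder contents):

        # Writes example versions of the two files described above.
        chars = ["a", "b", "c", "0", "1", "!"]      # every character the model may output
        words = ["abc", "cab", "b01!"]              # corpus: one word or phrase per line

        # ppocr/utils/dict/xx_dict.txt: one character per line.
        with open("xx_dict.txt", "w", encoding="utf-8") as f:
            f.write("\n".join(chars) + "\n")

        # ppocr/utils/corpus/xx_corpus.txt: one word per line; the guideline above
        # suggests at least 50,000 real words.
        with open("xx_corpus.txt", "w", encoding="utf-8") as f:
            f.write("\n".join(words) + "\n")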

    If your language has unique elements, please let us know in advance in any way you like, for example via useful links or Wikipedia articles.

    For more details, please refer to the Multilingual OCR Development Plan.

    License

    This project is released under the Apache 2.0 license.

    Contribution

    We welcome all contributions to PaddleOCR and greatly appreciate your feedback.

    • Many thanks to Khanh Tran and Karl Horky for contributing and revising the English documentation.
    • Many thanks to zhangxin for contributing the new visualization function, adding .gitignore, and removing the need to set PYTHONPATH manually.
    • Many thanks to lyl120117 for contributing the code for printing the network structure.
    • Thanks xiangyubo for contributing the handwritten Chinese OCR datasets.
    • Thanks authorfu for contributing the Android demo and xiadeye for contributing the iOS demo.
    • Thanks BeyondYourself for contributing many great suggestions and simplifying part of the code style.
    • Thanks tangmq for contributing Dockerized deployment services to PaddleOCR and supporting the rapid release of callable Restful API services.
    • Thanks lijinhan for contributing a new way, i.e., Java SpringBoot, to make requests to the Hubserving deployment.
    • Thanks Mejans for contributing the Occitan corpus and character set.
    • Thanks LKKlein for contributing a new deploying package with the Golang program language.
    • Thanks Evezerest, ninetailskim, edencfc, BeyondYourself and 1084667371 for contributing a new data annotation tool, i.e., PPOCRLabel.
