English | [简体中文](README_cn.md)

## Introduction
PaddleOCR aims to create rich, leading, and practical OCR tools that help users train better models and put them into practice.

**Recent updates**
- 2020.8.16, Release text detection algorithm [SAST](https://arxiv.org/abs/1908.05498) and text recognition algorithm [SRN](https://arxiv.org/abs/2003.12294)
- 2020.7.23, Release the playback video and PPT of the PaddleOCR introduction live course on Bilibili, [address](https://aistudio.baidu.com/aistudio/course/introduce/1519)
- 2020.7.15, Add mobile App demo, supporting both iOS and Android (based on EasyEdge and Paddle Lite)
- 2020.7.15, Improve deployment ability: add C++ inference and serving deployment. In addition, benchmarks of the ultra-lightweight OCR model are provided.
- 2020.7.15, Add several related datasets, data annotation and synthesis tools.
- [more](./doc/doc_en/update_en.md)

## Features
- Ultra-lightweight OCR model, total model size is only 8.6M
    - A single model supports combined Chinese/English and digit recognition, vertical text recognition, and long text recognition
    - Detection model DB (4.1M) + recognition model CRNN (4.5M)
- Various text detection algorithms: EAST, DB
- Various text recognition algorithms: Rosetta, CRNN, STAR-Net, RARE
- Support for Linux, Windows, macOS and other systems.

## Visualization

![](doc/imgs_results/11.jpg)

![](doc/imgs_results/img_10.jpg)

[More visualization](./doc/doc_en/visualization_en.md)

You can also quickly experience the ultra-lightweight OCR model: [Online Experience](https://www.paddlepaddle.org.cn/hub/scene/ocr)

Mobile DEMO experience (based on EasyEdge and Paddle-Lite, supports iOS and Android systems): [Sign in to the website to obtain the QR code for installing the App](https://ai.baidu.com/easyedge/app/openSource?from=paddlelite)

Also, you can scan the QR code below to install the App (**Android only**)

<div align="center">
<img src="./doc/ocr-android-easyedge.png"  width = "200" height = "200" />
</div>

- [**OCR Quick Start**](./doc/doc_en/quickstart_en.md)

<a name="Supported-Chinese-model-list"></a>

### Supported Models:

|Model Name|Description|Detection Model link|Recognition Model link|Space-supporting Recognition Model link|
|-|-|-|-|-|
|db_crnn_mobile|ultra-lightweight OCR model|[inference model](https://paddleocr.bj.bcebos.com/ch_models/ch_det_mv3_db_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/ch_models/ch_det_mv3_db.tar)|[inference model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn.tar)|[inference model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn_enhance_infer.tar) / [pre-train model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn_enhance.tar)
|db_crnn_server|General OCR model|[inference model](https://paddleocr.bj.bcebos.com/ch_models/ch_det_r50_vd_db_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/ch_models/ch_det_r50_vd_db.tar)|[inference model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_r34_vd_crnn_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_r34_vd_crnn.tar)|[inference model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_r34_vd_crnn_enhance_infer.tar) / [pre-train model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_r34_vd_crnn_enhance.tar)
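
For convenience, the sketch below shows one possible way to fetch and unpack one of the model archives listed above with plain Python. The URL is copied from the table; the local directory name is an arbitrary choice made for this example, and the archives typically contain the serialized model and parameter files.

```python
# Minimal sketch: download and unpack an inference model archive from the table above.
# "inference_models" is an arbitrary local directory chosen for this example.
import os
import tarfile
import urllib.request

url = "https://paddleocr.bj.bcebos.com/ch_models/ch_det_mv3_db_infer.tar"
target_dir = "inference_models"
os.makedirs(target_dir, exist_ok=True)

archive_path = os.path.join(target_dir, os.path.basename(url))
urllib.request.urlretrieve(url, archive_path)      # download the .tar archive

with tarfile.open(archive_path) as tar:
    tar.extractall(path=target_dir)                # unpack the model files
print("Extracted to", os.path.abspath(target_dir))
```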


## Tutorials
- [Installation](./doc/doc_en/installation_en.md)
- [Quick Start](./doc/doc_en/quickstart_en.md)
- Algorithm introduction
    - [Text Detection Algorithm](#TEXTDETECTIONALGORITHM)
    - [Text Recognition Algorithm](#TEXTRECOGNITIONALGORITHM)
    - [END-TO-END OCR Algorithm](#ENDENDOCRALGORITHM)
- Model training/evaluation
    - [Text Detection](./doc/doc_en/detection_en.md)
    - [Text Recognition](./doc/doc_en/recognition_en.md)
    - [Yml Configuration](./doc/doc_en/config_en.md)
    - [Tricks](./doc/doc_en/tricks_en.md)
- Deployment
    - [Python Inference](./doc/doc_en/inference_en.md)
    - [C++ Inference](./deploy/cpp_infer/readme_en.md)
    - [Serving](./doc/doc_en/serving_en.md)
    - [Mobile](./deploy/lite/readme_en.md)
    - Model Quantization and Compression (coming soon)
    - [Benchmark](./doc/doc_en/benchmark_en.md)
- Datasets
    - [General OCR Datasets(Chinese/English)](./doc/doc_en/datasets_en.md)
    - [HandWritten_OCR_Datasets(Chinese)](./doc/doc_en/handwritten_datasets_en.md)
    - [Various OCR Datasets(multilingual)](./doc/doc_en/vertical_and_multilingual_datasets_en.md)
    - [Data Annotation Tools](./doc/doc_en/data_annotation_en.md)
    - [Data Synthesis Tools](./doc/doc_en/data_synthesis_en.md)
- [FAQ](#FAQ)
- Visualization
    - [Ultra-lightweight Chinese/English OCR Visualization](#UCOCRVIS)
    - [General Chinese/English OCR Visualization](#GeOCRVIS)
    - [Chinese/English OCR Visualization (with Space Recognition Support)](#SpaceOCRVIS)
- [Community](#Community)
- [References](./doc/doc_en/reference_en.md)
- [License](#LICENSE)
- [Contribution](#CONTRIBUTION)

<a name="TEXTDETECTIONALGORITHM"></a>
## Text Detection Algorithm

PaddleOCR open-sources the following text detection algorithms:
- [x]  EAST([paper](https://arxiv.org/abs/1704.03155))
- [x]  DB([paper](https://arxiv.org/abs/1911.08947))
- [x]  SAST([paper](https://arxiv.org/abs/1908.05498))(Baidu Self-Research)

On the ICDAR2015 dataset, the text detection results are as follows:

|Model|Backbone|Precision|Recall|Hmean|Download link|
|-|-|-|-|-|-|
|EAST|ResNet50_vd|88.18%|85.51%|86.82%|[Download link](https://paddleocr.bj.bcebos.com/det_r50_vd_east.tar)|
|EAST|MobileNetV3|81.67%|79.83%|80.74%|[Download link](https://paddleocr.bj.bcebos.com/det_mv3_east.tar)|
|DB|ResNet50_vd|83.79%|80.65%|82.19%|[Download link](https://paddleocr.bj.bcebos.com/det_r50_vd_db.tar)|
|DB|MobileNetV3|75.92%|73.18%|74.53%|[Download link](https://paddleocr.bj.bcebos.com/det_mv3_db.tar)|
|SAST|ResNet50_vd|92.18%|82.96%|87.33%|[Download link](https://paddleocr.bj.bcebos.com/SAST/sast_r50_vd_icdar2015.tar)|

On the Total-Text dataset, the text detection results are as follows:

|Model|Backbone|Precision|Recall|Hmean|Download link|
|-|-|-|-|-|-|
|SAST|ResNet50_vd|88.74%|79.80%|84.03%|[Download link](https://paddleocr.bj.bcebos.com/SAST/sast_r50_vd_total_text.tar)|

For the [LSVT](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_en/datasets_en.md#1-icdar2019-lsvt) street view dataset, with a total of 30,000 training images, the related configuration and pre-trained models for the text detection task are as follows:
|Model|Backbone|Configuration file|Pre-trained model|
|-|-|-|-|
|ultra-lightweight OCR model|MobileNetV3|det_mv3_db.yml|[Download link](https://paddleocr.bj.bcebos.com/ch_models/ch_det_mv3_db.tar)|
|General OCR model|ResNet50_vd|det_r50_vd_db.yml|[Download link](https://paddleocr.bj.bcebos.com/ch_models/ch_det_r50_vd_db.tar)|

* Note: For the training and evaluation of the above DB models, the post-processing parameters box_thresh=0.6 and unclip_ratio=1.5 need to be set. If you train with different datasets or different models, these two parameters can be adjusted for better results.
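
To illustrate what these two parameters control, here is a minimal, self-contained sketch of DB-style box post-processing. It is written for this note and is not the project's actual post-processing code: box_thresh discards candidate boxes whose score on the probability map is too low, and unclip_ratio controls how far each detected polygon is expanded (unclipped) before the final box is produced.

```python
# Illustrative sketch only, not PaddleOCR's implementation.
# box_thresh: discard candidate boxes whose probability score is too low.
# unclip_ratio: expand (unclip) the shrunk text polygon back toward full text size.
import numpy as np
import pyclipper
from shapely.geometry import Polygon

def unclip(box, unclip_ratio=1.5):
    """Expand a 4-point box outward by distance = area * ratio / perimeter (DB formulation)."""
    poly = Polygon(box)
    distance = poly.area * unclip_ratio / poly.length
    offset = pyclipper.PyclipperOffset()
    offset.AddPath(np.round(box).astype(int).tolist(),
                   pyclipper.JT_ROUND, pyclipper.ET_CLOSEDPOLYGON)
    expanded = offset.Execute(distance)
    return np.array(expanded[0]) if expanded else box

def filter_and_expand(boxes, prob_map, box_thresh=0.6, unclip_ratio=1.5):
    """Keep boxes whose score exceeds box_thresh, then unclip each of them."""
    h, w = prob_map.shape
    kept = []
    for box in boxes:                              # box: (4, 2) array of pixel coordinates
        xs = np.clip(box[:, 0].astype(int), 0, w - 1)
        ys = np.clip(box[:, 1].astype(int), 0, h - 1)
        # Crude score: mean over the corner points (a full implementation averages the box region).
        score = float(prob_map[ys, xs].mean())
        if score >= box_thresh:
            kept.append(unclip(box, unclip_ratio))
    return kept
```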

For the training guide and use of PaddleOCR text detection algorithms, please refer to the document [Text detection model training/evaluation/prediction](./doc/doc_en/detection_en.md)

<a name="TEXTRECOGNITIONALGORITHM"></a>
## Text Recognition Algorithm

PaddleOCR open-sources the following text recognition algorithms:
- [x]  CRNN([paper](https://arxiv.org/abs/1507.05717))
- [x]  Rosetta([paper](https://arxiv.org/abs/1910.05085))
- [x]  STAR-Net([paper](http://www.bmva.org/bmvc/2016/papers/paper043/index.html))
- [x]  RARE([paper](https://arxiv.org/abs/1603.03915v1))
- [x]  SRN([paper](https://arxiv.org/abs/2003.12294))(Baidu Self-Research)

Following the evaluation protocol of [DTRB](https://arxiv.org/abs/1904.01906), the training and evaluation results of the above text recognition algorithms (trained on MJSynth and SynthText, evaluated on IIIT, SVT, IC03, IC13, IC15, SVTP and CUTE) are as follows:

|Model|Backbone|Avg Accuracy|Module combination|Download link|
|-|-|-|-|-|
|Rosetta|Resnet34_vd|80.24%|rec_r34_vd_none_none_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_r34_vd_none_none_ctc.tar)|
|Rosetta|MobileNetV3|78.16%|rec_mv3_none_none_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_mv3_none_none_ctc.tar)|
|CRNN|Resnet34_vd|82.20%|rec_r34_vd_none_bilstm_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_r34_vd_none_bilstm_ctc.tar)|
|CRNN|MobileNetV3|79.37%|rec_mv3_none_bilstm_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_mv3_none_bilstm_ctc.tar)|
|STAR-Net|Resnet34_vd|83.93%|rec_r34_vd_tps_bilstm_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_r34_vd_tps_bilstm_ctc.tar)|
|STAR-Net|MobileNetV3|81.56%|rec_mv3_tps_bilstm_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_mv3_tps_bilstm_ctc.tar)|
|RARE|Resnet34_vd|84.90%|rec_r34_vd_tps_bilstm_attn|[Download link](https://paddleocr.bj.bcebos.com/rec_r34_vd_tps_bilstm_attn.tar)|
|RARE|MobileNetV3|83.32%|rec_mv3_tps_bilstm_attn|[Download link](https://paddleocr.bj.bcebos.com/rec_mv3_tps_bilstm_attn.tar)|
|SRN|Resnet50_vd_fpn|88.33%|rec_r50fpn_vd_none_srn|[Download link](https://paddleocr.bj.bcebos.com/SRN/rec_r50fpn_vd_none_srn.tar)|

**Note:** The SRN model uses a data augmentation method to expand the two training sets mentioned above, and the expanded data can be downloaded from [Baidu Drive](todo).

The average accuracy of the two-stage training in the original paper is 89.74%, while that of the one-stage training in PaddleOCR is 88.33%. Both pre-trained weights can be downloaded [here](https://paddleocr.bj.bcebos.com/SRN/rec_r50fpn_vd_none_srn.tar).

We use the [LSVT](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_en/datasets_en.md#1-icdar2019-lsvt) dataset and crop 300,000 training images from the original photos using the position ground truth, with some necessary calibration. In addition, 5 million synthetic images are generated from the LSVT corpus to train the model. The related configuration and pre-trained models are as follows:
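
As a rough illustration of this cropping step, the sketch below cuts axis-aligned text crops out of a full image given polygon ground truth. The annotation format (a list of point lists per image) and the file names are assumptions made for this example only, not the actual LSVT preprocessing script.

```python
# Illustrative sketch: crop annotated text regions from a street-view photo.
# The annotation format and file names below are assumed for this example.
from PIL import Image
import numpy as np

def crop_text_regions(image_path, polygons, out_prefix="crop"):
    """Save an axis-aligned crop for each annotated text polygon."""
    image = Image.open(image_path).convert("RGB")
    for i, polygon in enumerate(polygons):
        points = np.asarray(polygon, dtype=np.float32)      # shape (N, 2)
        left, top = points.min(axis=0)
        right, bottom = points.max(axis=0)
        crop = image.crop((int(left), int(top), int(right), int(bottom)))
        crop.save(f"{out_prefix}_{i}.jpg")

# Hypothetical usage with two annotated text instances:
# crop_text_regions("street.jpg",
#                   [[(10, 20), (200, 20), (200, 60), (10, 60)],
#                    [(50, 100), (180, 110), (175, 150), (45, 140)]])
```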

|Model|Backbone|Configuration file|Pre-trained model|Space-supporting model (inference / pre-trained)|
|-|-|-|-|-|
|ultra-lightweight OCR model|MobileNetV3|rec_chinese_lite_train.yml|[Download link](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn.tar)|[inference model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn_enhance_infer.tar) & [pre-trained model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn_enhance.tar)|
|General OCR model|Resnet34_vd|rec_chinese_common_train.yml|[Download link](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_r34_vd_crnn.tar)|[inference model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_r34_vd_crnn_enhance_infer.tar) & [pre-trained model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_r34_vd_crnn_enhance.tar)|

For the training guide and usage of PaddleOCR text recognition algorithms, please refer to the document [Text recognition model training/evaluation/prediction](./doc/doc_en/recognition_en.md)

<a name="ENDENDOCRALGORITHM"></a>
## END-TO-END OCR Algorithm
- [ ]  [End2End-PSL](https://arxiv.org/abs/1909.07808)(Baidu Self-Research, coming soon)

## Visualization

<a name="UCOCRVIS"></a>
### 1. Ultra-lightweight Chinese/English OCR Visualization [more](./doc/doc_en/visualization_en.md)

<div align="center">
    <img src="doc/imgs_results/1.jpg" width="800">
</div>

<a name="GeOCRVIS"></a>
### 2. General Chinese/English OCR Visualization [more](./doc/doc_en/visualization_en.md)

<div align="center">
    <img src="doc/imgs_results/chinese_db_crnn_server/11.jpg" width="800">
</div>

<a name="SpaceOCRVIS"></a>
### 3. Chinese/English OCR Visualization (with space support) [more](./doc/doc_en/visualization_en.md)

<div align="center">
    <img src="doc/imgs_results/chinese_db_crnn_server/en_paper.jpg" width="800">
</div>

<a name="FAQ"></a>

## FAQ
1. Error when using the attention-based recognition model: `KeyError: 'predict'`

    The inference of the attention-based recognition model is still being debugged. For Chinese text recognition, it is recommended to choose the CTC-based recognition model first. In practice, we have also found that the attention-based recognition model is not as effective as the CTC-based one.

2. About inference speed

    When there are many text regions in the image, the prediction time will increase. You can use `--rec_batch_num` to set a smaller prediction batch size. The default value is 30, which can be changed to 10 or another value (a minimal batching sketch is shown at the end of this FAQ section).

3. Service deployment and mobile deployment

    It is expected that the service deployment based on Serving and the mobile deployment based on Paddle Lite will be released successively in mid-to-late June. Stay tuned for more updates.

4. Release time of self-developed algorithms

    Baidu self-developed algorithms such as SAST, SRN and End2End-PSL will be released in June or July. Please be patient.

[more](./doc/doc_en/FAQ_en.md)
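
As referenced in FAQ item 2, the sketch below shows what a smaller recognition batch size means in practice: cropped text images are recognized in chunks of `rec_batch_num` instead of all at once, which lowers peak memory usage at a small cost in throughput. The `run_recognition` callable is a hypothetical stand-in for the real recognition call, not a PaddleOCR API.

```python
# Minimal sketch: recognize cropped text images in batches of rec_batch_num.
# "run_recognition" is a hypothetical stand-in, not a real PaddleOCR function.
from typing import Callable, List, Sequence

def recognize_in_batches(crops: Sequence, run_recognition: Callable[[List], List],
                         rec_batch_num: int = 30) -> List:
    """Split the crops into chunks and run recognition once per chunk."""
    results: List = []
    for start in range(0, len(crops), rec_batch_num):
        batch = list(crops[start:start + rec_batch_num])
        results.extend(run_recognition(batch))     # one forward pass per batch
    return results

# Example: with rec_batch_num=10, 35 crops are processed as batches of 10 + 10 + 10 + 5.
if __name__ == "__main__":
    fake_crops = list(range(35))
    texts = recognize_in_batches(fake_crops,
                                 run_recognition=lambda batch: [str(x) for x in batch],
                                 rec_batch_num=10)
    print(len(texts))  # 35
```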

<a name="Community"></a>
## Community
Scan the QR code below with your WeChat and complete the questionnaire to join the official technical exchange group.

<div align="center">
<img src="./doc/joinus.jpg"  width = "200" height = "200" />
</div>

<a name="LICENSE"></a>
## License
This project is released under the <a href="https://github.com/PaddlePaddle/PaddleOCR/blob/master/LICENSE">Apache 2.0 license</a>.

<a name="CONTRIBUTION"></a>
## Contribution
We welcome all contributions to PaddleOCR and greatly appreciate your feedback.

- Many thanks to [Khanh Tran](https://github.com/xxxpsyduck) for contributing the English documentation.
- Many thanks to [zhangxin](https://github.com/ZhangXinNan) for contributing the new visualization function, adding .gitignore, and removing the need to set PYTHONPATH manually.
- Many thanks to [lyl120117](https://github.com/lyl120117) for contributing the code for printing the network structure.
- Thanks to [xiangyubo](https://github.com/xiangyubo) for contributing the handwritten Chinese OCR datasets.
- Thanks to [authorfu](https://github.com/authorfu) for contributing the Android demo and [xiadeye](https://github.com/xiadeye) for contributing the iOS demo.
- Thanks to [BeyondYourself](https://github.com/BeyondYourself) for contributing many great suggestions and simplifying part of the code style.
- Thanks to [tangmq](https://github.com/tangmq/PaddleOCR) for contributing Dockerized deployment services to PaddleOCR and supporting the rapid release of callable RESTful API services.