Unverified commit 3954d466, authored by dyning, committed by GitHub

Merge pull request #9 from tink2123/update_doc

Update doc
# PaddleOCR
OCR algorithms with PaddlePaddle (still under development)

# Introduction
PaddleOCR aims to provide a rich, leading, and practical OCR toolkit that helps users train better models and put them into real applications.
## Features
- Ultra-lightweight models
  - (4.1M detection model + 4.5M recognition model = 8.6M in total)
- Vertical text recognition
  - (a single model handles both horizontal and vertical text)
- Long-text recognition
- Mixed Chinese / English / digit recognition
- Training code provided
- Model deployment supported
## Tutorials
- [Quick installation](./doc/installation.md)
- [Text detection model training/evaluation/prediction](./doc/detection.md)
- [Text recognition model training/evaluation/prediction](./doc/recognition.md)
- [Prediction with inference models](./doc/)
### **Quick start**

Download the inference models:
```
# create directories for the inference models
mkdir inference && cd inference && mkdir det && mkdir rec
# download the detection / recognition inference models
wget -P ./inference https://paddleocr.bj.bcebos.com/inference.tar
```
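For reference, the same download step can be scripted in Python. This is a minimal sketch only: the archive URL comes from the commands above, while the assumption that the archive unpacks into `det/` and `rec/` model directories is not guaranteed by this document.

```python
import tarfile
import urllib.request
from pathlib import Path

# URL taken from the shell commands above; the internal layout of
# inference.tar (det/ and rec/ subdirectories) is an assumption.
INFERENCE_URL = "https://paddleocr.bj.bcebos.com/inference.tar"


def download_inference_models(target_dir: str = "./inference") -> None:
    target = Path(target_dir)
    target.mkdir(parents=True, exist_ok=True)
    archive_path = target / "inference.tar"

    # Download the combined detection + recognition inference archive.
    urllib.request.urlretrieve(INFERENCE_URL, archive_path)

    # Unpack next to the archive so the predict script can be pointed at
    # ./inference/det/ and ./inference/rec/.
    with tarfile.open(archive_path) as tar:
        tar.extractall(path=target)


if __name__ == "__main__":
    download_inference_models()
```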
Run chained text detection and recognition inference on the single image specified by $image_dir$:
```
export PYTHONPATH=.
python tools/infer/predict_eval.py --image_dir="/Demo.jpg" --det_model_dir="./inference/det/" --rec_model_dir="./inference/rec/"
```
When running prediction, use the det_model_dir and rec_model_dir arguments to point to the directories that hold the inference models.

Run chained text detection and recognition inference on all images in the folder specified by $image_dir$:
```
python tools/infer/predict_eval.py --image_dir="/test_imgs/" --det_model_dir="./inference/det/" --rec_model_dir="./inference/rec/"
```
## Text detection algorithms
PaddleOCR open-sources the following text detection algorithms:
- [x] [EAST](https://arxiv.org/abs/1704.03155)
- [x] [DB](https://arxiv.org/abs/1911.08947)
- [ ] [SAST](https://arxiv.org/abs/1908.05498)

Results:

|Model|Backbone|Hmean|
|-|-|-|
|EAST|[ResNet50_vd](https://paddleocr.bj.bcebos.com/det_r50_vd_east.tar)|85.85%|
|EAST|[MobileNetV3](https://paddleocr.bj.bcebos.com/det_mv3_east.tar)|79.08%|
|DB|[ResNet50_vd](https://paddleocr.bj.bcebos.com/det_r50_vd_db.tar)|83.30%|
|DB|[MobileNetV3](https://paddleocr.bj.bcebos.com/det_mv3_db.tar)|73.00%|

For training and using the PaddleOCR text detection algorithms, please refer to the [documentation](./doc/detection.md).
## Text recognition algorithms
PaddleOCR open-sources the following text recognition algorithms:
- [x] [CRNN](https://arxiv.org/abs/1507.05717)
- [x] [DTRB](https://arxiv.org/abs/1904.01906)
- [ ] [Rosetta](https://arxiv.org/abs/1910.05085)
- [ ] [STAR-Net](http://www.bmva.org/bmvc/2016/papers/paper043/index.html)
- [ ] [RARE](https://arxiv.org/abs/1603.03915v1)
- [ ] [SRN](https://arxiv.org/abs/2003.12294) (developed by Baidu)

The results below report accuracy averaged over the IIIT, SVT, IC03, IC13, IC15, SVTP, and CUTE datasets.

|Model|Backbone|ACC|
|-|-|-|
|Rosetta|[Resnet34_vd](https://paddleocr.bj.bcebos.com/rec_r34_vd_none_none_ctc.tar)|80.24%|
|Rosetta|[MobileNetV3](https://paddleocr.bj.bcebos.com/rec_mv3_none_none_ctc.tar)|78.16%|
|CRNN|[Resnet34_vd](https://paddleocr.bj.bcebos.com/rec_r34_vd_none_bilstm_ctc.tar)|82.20%|
|CRNN|[MobileNetV3](https://paddleocr.bj.bcebos.com/rec_mv3_none_bilstm_ctc.tar)|79.37%|
|STAR-Net|[Resnet34_vd](https://paddleocr.bj.bcebos.com/rec_r34_vd_tps_bilstm_ctc.tar)|83.93%|
|STAR-Net|[MobileNetV3](https://paddleocr.bj.bcebos.com/rec_mv3_tps_bilstm_ctc.tar)|81.56%|
|RARE|[Resnet34_vd](https://paddleocr.bj.bcebos.com/rec_r34_vd_tps_bilstm_attn.tar)|84.90%|
|RARE|[MobileNetV3](https://paddleocr.bj.bcebos.com/rec_mv3_tps_bilstm_attn.tar)|83.32%|

For training and using the PaddleOCR text recognition algorithms, please refer to the [documentation](./doc/recognition.md).
## TODO
**End-to-end OCR algorithms**

PaddleOCR will soon open-source [End2End-PSL](https://arxiv.org/abs/1909.07808), an end-to-end OCR model developed by Baidu. Stay tuned.
- [ ] End2End-PSL (coming soon)

# References
```
1. EAST:
@inproceedings{zhou2017east,
title={EAST: an efficient and accurate scene text detector},
author={Zhou, Xinyu and Yao, Cong and Wen, He and Wang, Yuzhi and Zhou, Shuchang and He, Weiran and Liang, Jiajun},
booktitle={Proceedings of the IEEE conference on Computer Vision and Pattern Recognition},
pages={5551--5560},
year={2017}
}
2. DB:
@article{liao2019real,
title={Real-time Scene Text Detection with Differentiable Binarization},
author={Liao, Minghui and Wan, Zhaoyi and Yao, Cong and Chen, Kai and Bai, Xiang},
journal={arXiv preprint arXiv:1911.08947},
year={2019}
}
3. DTRB:
@inproceedings{baek2019wrong,
title={What is wrong with scene text recognition model comparisons? dataset and model analysis},
author={Baek, Jeonghun and Kim, Geewook and Lee, Junyeop and Park, Sungrae and Han, Dongyoon and Yun, Sangdoo and Oh, Seong Joon and Lee, Hwalsuk},
booktitle={Proceedings of the IEEE International Conference on Computer Vision},
pages={4715--4723},
year={2019}
}
4. SAST:
@inproceedings{wang2019single,
title={A Single-Shot Arbitrarily-Shaped Text Detector based on Context Attended Multi-Task Learning},
author={Wang, Pengfei and Zhang, Chengquan and Qi, Fei and Huang, Zuming and En, Mengyi and Han, Junyu and Liu, Jingtuo and Ding, Errui and Shi, Guangming},
booktitle={Proceedings of the 27th ACM International Conference on Multimedia},
pages={1277--1285},
year={2019}
}
5. SRN:
@article{yu2020towards,
title={Towards Accurate Scene Text Recognition with Semantic Reasoning Networks},
author={Yu, Deli and Li, Xuan and Zhang, Chengquan and Han, Junyu and Liu, Jingtuo and Ding, Errui},
journal={arXiv preprint arXiv:2003.12294},
year={2020}
}
6. end2end-psl:
@inproceedings{sun2019chinese,
title={Chinese Street View Text: Large-scale Chinese Text Reading with Partially Supervised Learning},
author={Sun, Yipeng and Liu, Jiaming and Liu, Wei and Han, Junyu and Ding, Errui and Liu, Jingtuo},
booktitle={Proceedings of the IEEE International Conference on Computer Vision},
pages={9086--9095},
year={2019}
}
```
```yaml
Global:
  algorithm: CRNN
  use_gpu: true
  epoch_num: 3000
  log_smooth_window: 20
  print_batch_step: 10
  save_model_dir: ./output/rec_CRNN
  save_epoch_step: 3
  eval_batch_step: 2000
  train_batch_size_per_card: 256
  test_batch_size_per_card: 256
  image_shape: [3, 32, 100]
  max_text_length: 25
  character_type: ch
  character_dict_path: ./ppocr/utils/ppocr_keys_v1.txt
  loss_type: ctc
  reader_yml: ./configs/rec/rec_chinese_reader.yml
  pretrain_weights: ./pretrain_models/CRNN/best_accuracy
  checkpoints:
  save_inference_dir:

Architecture:
  function: ppocr.modeling.architectures.rec_model,RecModel

Backbone:
  function: ppocr.modeling.backbones.rec_mobilenet_v3,MobileNetV3
  scale: 0.5
  model_name: small

Head:
  function: ppocr.modeling.heads.rec_ctc_head,CTCPredict
  encoder_type: rnn
  SeqRNN:
    hidden_size: 48

Loss:
  function: ppocr.modeling.losses.rec_ctc_loss,CTCLoss

Optimizer:
  function: ppocr.optimizer,AdamDecay
  base_lr: 0.0005
  beta1: 0.9
  beta2: 0.999

TrainReader:
  reader_function: ppocr.data.rec.dataset_traversal,SimpleReader
  num_workers: 8
  img_set_dir: ./train_data
  label_file_path: ./train_data/rec_gt_train.txt

EvalReader:
  reader_function: ppocr.data.rec.dataset_traversal,SimpleReader
  img_set_dir: ./train_data
  label_file_path: ./train_data/rec_gt_test.txt

TestReader:
  reader_function: ppocr.data.rec.dataset_traversal,SimpleReader
  infer_img: ./infer_img
```
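As an illustration only, a config like the one above can be loaded and inspected with a few lines of Python. PyYAML and the file name used below are assumptions for the sketch, not part of the original document.

```python
import yaml  # PyYAML, assumed to be installed in the environment

# Hypothetical path; substitute the actual config file you are using.
CONFIG_PATH = "configs/rec/rec_chinese_lite_train.yml"

with open(CONFIG_PATH, "r", encoding="utf-8") as f:
    config = yaml.safe_load(f)

# Top-level sections mirror the YAML above: Global, Architecture, Backbone, ...
global_cfg = config["Global"]
print(global_cfg["algorithm"])         # e.g. CRNN
print(global_cfg["image_shape"])       # e.g. [3, 32, 100]
print(config["Optimizer"]["base_lr"])  # e.g. 0.0005
```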
```diff
@@ -11,8 +11,8 @@ Global:
   test_batch_size_per_card: 256
   image_shape: [3, 32, 100]
   max_text_length: 25
-  character_type: ch
-  character_dict_path: ./ppocr/utils/ic15_dict.txt
+  character_type: en
+  character_dict_path: /workspace/PaddleOCR/train_data/ic15_dict.txt
   loss_type: ctc
   reader_yml: ./configs/rec/rec_icdar15_reader.yml
   pretrain_weights: ./pretrain_models/CRNN/best_accuracy
@@ -24,13 +24,13 @@ Architecture:
 Backbone:
   function: ppocr.modeling.backbones.rec_mobilenet_v3,MobileNetV3
   scale: 0.5
-  model_name: small
+  model_name: large
 Head:
   function: ppocr.modeling.heads.rec_ctc_head,CTCPredict
   encoder_type: rnn
   SeqRNN:
-    hidden_size: 48
+    hidden_size: 96
 Loss:
   function: ppocr.modeling.losses.rec_ctc_loss,CTCLoss
```
# Optional parameters
The following options can also be viewed with `--help`:

| FLAG | Supported scripts | Purpose | Default | Notes |
| :----------------------: | :------------: | :---------------: | :--------------: | :-----------------: |
| -c | ALL | Specify the configuration file | None | **For an explanation of the configuration modules, see the parameter descriptions below** |
| -o | ALL | Override settings from the configuration file | None | Options passed with -o take priority over the file selected with -c, e.g. `-o Global.use_gpu=false` |
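To make the `-o` behavior concrete, here is a minimal, hypothetical sketch of how dotted overrides such as `Global.use_gpu=false` could be merged into a loaded config dictionary. It is not PaddleOCR's actual implementation; the helper name and parsing details are assumptions.

```python
import yaml  # PyYAML, assumed available; used only to parse literal values


def apply_overrides(config, overrides):
    """Merge '-o Section.key=value' style overrides into a nested config dict."""
    for item in overrides:
        dotted_key, raw_value = item.split("=", 1)
        value = yaml.safe_load(raw_value)  # 'false' -> False, '0.0001' -> 0.0001, etc.
        node = config
        keys = dotted_key.split(".")
        for key in keys[:-1]:
            node = node.setdefault(key, {})
        node[keys[-1]] = value
    return config


# Example: mimic `-c some_config.yml -o Global.use_gpu=false Optimizer.base_lr=0.0001`
config = {"Global": {"use_gpu": True}, "Optimizer": {"base_lr": 0.0005}}
apply_overrides(config, ["Global.use_gpu=false", "Optimizer.base_lr=0.0001"])
print(config)  # {'Global': {'use_gpu': False}, 'Optimizer': {'base_lr': 0.0001}}
```

In a scheme like this, the override value simply replaces the value loaded from the file, which matches the documented rule that `-o` takes priority over the file selected with `-c`.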
## Global parameters in the configuration file

| Field | Purpose | Default | Notes |
| :----------------------: | :---------------------: | :--------------: | :--------------------: |
| algorithm | Algorithm to use | CRNN | Choose a model; for supported models see the [introduction]() |
| use_gpu | Whether to run on GPU | true | \ |
| epoch_num | Maximum number of training epochs | 3000 | \ |
| log_smooth_window | Sliding window size for log smoothing | 20 | \ |
| print_batch_step | Logging interval (in steps) | 10 | \ |
| save_model_dir | Directory for saving models | output/rec_CRNN | \ |
| save_epoch_step | Model saving interval (in epochs) | 3 | \ |
| eval_batch_step | Evaluation interval (in steps) | 2000 | \ |
| train_batch_size_per_card | Per-card batch size during training | 256 | \ |
| test_batch_size_per_card | Per-card batch size during evaluation | 256 | \ |
| image_shape | Input image shape | [3, 32, 100] | \ |
| max_text_length | Maximum text length | 25 | \ |
| character_type | Character set type | ch | en/ch; en uses the default dict, ch uses a custom dict |
| character_dict_path | Path to the character dictionary | ./ppocr/utils/ic15_dict.txt | \ |
| loss_type | Loss type | ctc | Two losses are supported: ctc / attention |
| reader_yml | Reader configuration file | ./configs/rec/rec_icdar15_reader.yml | \ |
| pretrain_weights | Path of the pretrained model to load | ./pretrain_models/CRNN/best_accuracy | \ |
| checkpoints | Path of model parameters to load | None | Used to resume training after an interruption |
| save_inference_dir | Directory for saving the inference model | None | Used to export the inference model |
````diff
@@ -8,8 +8,8 @@ The icdar2015 dataset can be downloaded from the [official website](https://rrc.cvc.uab.es/?ch=4&com=downloads)
 Extract the downloaded dataset into the working directory, assuming it is extracted under /PaddleOCR/train_data/. In addition, PaddleOCR consolidates the scattered annotation files into separate annotation files
 , which you can download via wget.
 ```
-wget -P /PaddleOCR/train_data/ <training label file link>
-wget -P /PaddleOCR/train_data/ <test label file link>
+wget -P /PaddleOCR/train_data/ https://paddleocr.bj.bcebos.com/dataset%2Ftrain_icdar2015_label.txt
+wget -P /PaddleOCR/train_data/ https://paddleocr.bj.bcebos.com/dataset%2Ftest_icdar2015_label.txt
 ```
 After extracting the dataset and downloading the annotation files, /PaddleOCR/train_data/ contains two folders and two files:
@@ -38,9 +38,9 @@ $transcription$ is the text of the current text box; the text detection task does not ...
 ```
 cd PaddleOCR/
 # Download the MobileNetV3 pretrained model
-wget -P /PaddleOCR/pretrain_models/ <model link>
+wget -P /PaddleOCR/pretrain_models/ https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV3_large_x0_5_pretrained.tar
 # Download the ResNet50 pretrained model
-wget -P /PaddleOCR/pretrain_models/ <model link>
+wget -P /PaddleOCR/pretrain_models/ https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_ssld_pretrained.tar
 ```
 **Start training**
@@ -49,7 +49,7 @@ python3 tools/train.py -c configs/det/det_db_mv3.yml
 ```
 In the command above, -c selects configs/det/det_db_mv3.yml as the training configuration file.
-For a detailed explanation of the configuration file, see the [link]().
+For a detailed explanation of the configuration file, see the [link](./doc/config.md).
 You can also use the -o option to change training parameters without modifying the yml file, for example, set the training learning rate to 0.0001:
 ```
````
```diff
@@ -64,7 +64,7 @@ class CharacterOps(object):
                 [sum(text_lengths)] = [text_index_0 + text_index_1 + ... + text_index_(n - 1)]
             length: length of each text. [batch_size]
         """
-        if self.character_type == "en" or text.encode('UTF-8').isalpha():
+        if self.character_type == "en":
            text = text.lower()
        text_list = []
```
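The change above makes lowercasing depend only on `character_type`. As a rough illustration (not the actual `CharacterOps` code), a simplified label-encoding step along these lines could look like the sketch below, assuming a prebuilt character-to-index mapping:

```python
def encode_text(text, char2idx, character_type="en"):
    """Map a text string to a list of dictionary indices for CTC-style training.

    Simplified sketch: unknown characters are skipped, and only the English
    setting lowercases the input, mirroring the diff above.
    """
    if character_type == "en":
        # English dictionaries here contain only lowercase letters and digits,
        # so the input is normalized to lowercase first.
        text = text.lower()
    return [char2idx[ch] for ch in text if ch in char2idx]


# Example with a tiny dictionary in the spirit of ic15_dict.txt (digits first, then letters).
chars = [str(d) for d in range(10)] + [chr(c) for c in range(ord("a"), ord("z") + 1)]
char2idx = {ch: idx for idx, ch in enumerate(chars)}
print(encode_text("PaddleOCR 2020", char2idx))
```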
```diff
@@ -1,3 +1,13 @@
+0
+1
+2
+3
+4
+5
+6
+7
+8
+9
 a
 b
 c
@@ -24,13 +34,3 @@ w
 x
 y
 z
-0
-1
-2
-3
-4
-5
-6
-7
-8
-9
```
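For context, character dictionaries like this assign each character the index of its line, so moving the digits to the top of the file changes which index every character receives. A minimal, hypothetical loader (the function name and empty-line handling are assumptions):

```python
def load_char_dict(dict_path):
    """Read a character dictionary file where line i defines the character with index i."""
    char2idx = {}
    with open(dict_path, "r", encoding="utf-8") as f:
        for idx, line in enumerate(f):
            char = line.rstrip("\n")
            if char:  # skip empty lines defensively
                char2idx[char] = idx
    return char2idx


# With digits placed before the letters, '0' maps to index 0 and 'a' to index 10.
# char2idx = load_char_dict("./ppocr/utils/ic15_dict.txt")
```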