# HOW TO MAKE YOUR OWN LIGHTWEIGHT OCR MODEL?

The process of building a customized ultra-lightweight OCR model can be divided into three steps: training a text detection model, training a text recognition model, and concatenating the predictions from the previous two steps.

## STEP1: TRAIN TEXT DETECTION MODEL

PaddleOCR provides two text detection algorithms: EAST and DB. Both support the MobileNetV3 and ResNet50_vd backbone networks; select the corresponding configuration file as needed and start training. For example, to train a DB detection model with MobileNetV3 as the backbone network:
```
python3 tools/train.py -c configs/det/det_mv3_db.yml 2>&1 | tee det_db.log
```
For more details about data preparation and training, refer to the documentation [Text detection model training/evaluation/prediction](./detection_en.md).
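DB (Differentiable Binarization) learns a per-pixel text probability map, which is binarized at inference time to recover text regions. As a rough, framework-free illustration of that idea (not PaddleOCR's actual post-processing, which also unclips the shrunk regions; the map, threshold, and helper name here are hypothetical):

```python
import numpy as np

def prob_map_to_box(prob_map, thresh=0.3):
    """Binarize a text probability map and return the axis-aligned
    bounding box (x_min, y_min, x_max, y_max) of the region above the
    threshold, plus its mean score. Returns None if nothing fires.
    Illustrative helper, not PaddleOCR's DB post-process."""
    mask = prob_map > thresh
    if not mask.any():
        return None
    ys, xs = np.where(mask)
    box = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
    score = float(prob_map[mask].mean())
    return box, score

# A tiny 5x5 "probability map" with one hot region.
pm = np.zeros((5, 5))
pm[1:3, 1:4] = 0.9
print(prob_map_to_box(pm))  # box (1, 1, 3, 2) with mean score ~0.9
```

The real DB post-processing additionally expands (unclips) each shrunk region and fits a rotated quadrilateral, but the threshold-then-score step above is the core of how a probability map becomes a detection.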

## STEP2: TRAIN TEXT RECOGNITION MODEL

PaddleOCR provides four text recognition algorithms: CRNN, Rosetta, STAR-Net, and RARE. All of them support two backbone networks, MobileNetV3 and ResNet34_vd; select the corresponding configuration file as needed to start training. For example, to train a CRNN recognition model with MobileNetV3 as the backbone network:
```
python3 tools/train.py -c configs/rec/rec_chinese_lite_train.yml 2>&1 | tee rec_ch_lite.log
```
For more details about data preparation and training, refer to the documentation [Text recognition model training/evaluation/prediction](./recognition_en.md).
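CRNN-style recognizers are typically trained with a CTC loss, and the simplest way to turn their per-timestep predictions into text is greedy CTC decoding: take the argmax label at each timestep, collapse consecutive repeats, then drop the blank token. A minimal, framework-free sketch of that decoding rule (the character set and blank index below are illustrative assumptions, not PaddleOCR's actual dictionary):

```python
def ctc_greedy_decode(label_ids, charset, blank=0):
    """Collapse consecutive repeated labels, then remove the blank.
    `label_ids` is the argmax label per timestep; non-blank id i maps
    to charset[i - 1]. Illustrative only."""
    out = []
    prev = None
    for idx in label_ids:
        if idx != prev and idx != blank:
            out.append(charset[idx - 1])
        prev = idx
    return "".join(out)

# Hypothetical timestep argmaxes over charset "abct" with blank id 0:
print(ctc_greedy_decode([3, 3, 0, 1, 0, 4, 4], "abct"))  # → "cat"
```

Collapsing repeats before removing the blank is what lets CTC emit genuinely doubled characters: "oo" must be predicted as `o`, blank, `o` across timesteps.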

## STEP3: CONCATENATE PREDICTIONS

PaddleOCR provides a concatenation tool for detection and recognition models, which can connect any trained detection model with any trained recognition model to form a two-stage text recognition system. The input image passes through four main stages (text detection, text rectification, text recognition, and score filtering) to output the text positions and recognition results; you can also choose to visualize the results.

When performing prediction, specify the path of a single image or an image folder with the parameter `image_dir`; the parameter `det_model_dir` specifies the path of the detection model, and the parameter `rec_model_dir` specifies the path of the recognition model. The visualized results are saved to the `./inference_results` folder by default.

```
python3 tools/infer/predict_system.py --image_dir="./doc/imgs/11.jpg" --det_model_dir="./inference/det/"  --rec_model_dir="./inference/rec/"
```
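The score-filtering stage mentioned above simply drops detections whose recognized text falls below a confidence threshold, so low-quality crops never reach the final output. A minimal sketch of that idea, with hypothetical `(box, text, score)` tuples and a threshold value that is an assumption for illustration, not PaddleOCR's default:

```python
def filter_by_score(results, drop_score=0.5):
    """Keep only (box, text, score) results whose recognition
    confidence is at least `drop_score`. Data shapes are hypothetical:
    each box is a list of four (x, y) corner points."""
    return [(box, text, score)
            for box, text, score in results
            if score >= drop_score]

results = [
    ([[10, 10], [80, 10], [80, 30], [10, 30]], "hello", 0.92),
    ([[15, 40], [60, 40], [60, 55], [15, 55]], "??", 0.31),
]
kept = filter_by_score(results)
print([text for _, text, _ in kept])  # → ['hello']
```

Raising the threshold trades recall for precision: fewer spurious strings survive, at the cost of discarding genuinely hard-to-read text.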
For more details about text detection and recognition concatenation, please refer to the document [Inference](./inference_en.md).