- [3. Model Training / Evaluation / Prediction](#3)
- [3.1 Training](#3-1)
- [3.2 Evaluation](#3-2)
- [3.3 Prediction](#3-3)
- [4. Inference and Deployment](#4)
- [4.1 Python Inference](#4-1)
- [4.2 C++ Inference](#4-2)
- [4.3 Serving](#4-3)
- [4.4 More](#4-4)
- [5. FAQ](#5)
<aname="1"></a>
## 1. Introduction to the algorithm
## 1. Introduction
Paper information:
> [Robust Scene Text Recognition with Automatic Rectification](https://arxiv.org/abs/1603.03915v2)
...
...
<aname="2"></a>
## 2. Environment configuration
## 2. Environment
Please refer to [Operating Environment Preparation](./environment_en.md) to set up the PaddleOCR environment, and refer to [Project Clone](./clone_en.md) to clone the project code.
<aname="3"></a>
## 3. Model training, evaluation, prediction
## 3. Model Training / Evaluation / Prediction
Please refer to the [Text Recognition Training Tutorial](./recognition_en.md). PaddleOCR modularizes the code, so training a different recognition model only requires **changing the configuration file**. The following takes the model with the Resnet34_vd backbone network as an example:
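As a concrete illustration, a minimal training sketch following the standard PaddleOCR recipe is shown below. The config path `configs/rec/rec_r34_vd_tps_bilstm_att.yml` is an assumption inferred from the name of the pretrained model linked in the Python Inference section below; verify it against the `configs/rec/` directory of your PaddleOCR checkout.

```bash
# Single-GPU training (slow; mainly useful for debugging)
# NOTE: the config path below is assumed, check configs/rec/ for the exact file name
python3 tools/train.py -c configs/rec/rec_r34_vd_tps_bilstm_att.yml

# Multi-GPU training; select the cards with --gpus
python3 -m paddle.distributed.launch --gpus '0,1,2,3' tools/train.py \
    -c configs/rec/rec_r34_vd_tps_bilstm_att.yml
```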
<a name="4"></a>
## 4. Inference and Deployment
<a name="4-1"></a>
### 4.1 Python Inference
First, convert the model saved during the RARE text recognition training process into an inference model. Taking the model trained on the MJSynth and SynthText text recognition datasets with the Resnet34_vd backbone as an example ([model download address](https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/rec_r34_vd_tps_bilstm_att_v2.0_train.tar)), the conversion can be done with the following command:
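The original command is not reproduced in this excerpt, so the following is a minimal sketch based on the standard `tools/export_model.py` workflow; the config path and the `./inference/rec_rare/` output directory are assumptions and may need to be adjusted.

```bash
# Download and unpack the training checkpoint
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/rec_r34_vd_tps_bilstm_att_v2.0_train.tar
tar -xf rec_r34_vd_tps_bilstm_att_v2.0_train.tar

# Convert the training checkpoint into an inference model
# (config path assumed; the output directory is illustrative)
python3 tools/export_model.py -c configs/rec/rec_r34_vd_tps_bilstm_att.yml \
    -o Global.pretrained_model=./rec_r34_vd_tps_bilstm_att_v2.0_train/best_accuracy \
       Global.save_inference_dir=./inference/rec_rare/
```

The exported model in `./inference/rec_rare/` can then be used with PaddleOCR's Python prediction script (`tools/infer/predict_rec.py`) for single-image recognition.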
- [3. Model Training / Evaluation / Prediction](#3)
- [3.1 Training](#3-1)
- [3.2 Evaluation](#3-2)
- [3.3 Prediction](#3-3)
- [4. Inference and Deployment](#4)
- [4.1 Python Inference](#4-1)
- [4.2 C++ Inference](#4-2)
- [4.3 Serving](#4-3)
- [4.4 More](#4-4)
- [5. FAQ](#5)
<aname="1"></a>
## 1. Introduction to the algorithm
## 1. Introduction
Paper information:
> [Rosetta: Large Scale System for Text Detection and Recognition in Images](https://arxiv.org/abs/1910.05085)
...
...
<aname="2"></a>
## 2. Environment configuration
## 2. Environment
Please refer to [Operating Environment Preparation](./environment_en.md) to set up the PaddleOCR environment, and refer to [Project Clone](./clone_en.md) to clone the project code.
<aname="3"></a>
## 3. Model training, evaluation, prediction
## 3. Model Training / Evaluation / Prediction
Please refer to the [Text Recognition Training Tutorial](./recognition_en.md). PaddleOCR modularizes the code, so training a different recognition model only requires **changing the configuration file**. The following takes the model with the Resnet34_vd backbone network as an example:
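As with the model above, a minimal training sketch is given below. The config path `configs/rec/rec_r34_vd_none_none_ctc.yml` is an assumption inferred from the name of the pretrained model linked in the Python Inference section below; verify it under `configs/rec/`.

```bash
# Single-GPU training (slow; mainly useful for debugging)
# NOTE: the config path below is assumed, check configs/rec/ for the exact file name
python3 tools/train.py -c configs/rec/rec_r34_vd_none_none_ctc.yml

# Multi-GPU training; select the cards with --gpus
python3 -m paddle.distributed.launch --gpus '0,1,2,3' tools/train.py \
    -c configs/rec/rec_r34_vd_none_none_ctc.yml
```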
...
...
<aname="4"></a>
## 4. Inference Deployment
## 4. Inference and Deployment
<aname="4-1"></a>
### 4.1 Python Reasoning
### 4.1 Python Inference
First, convert the model saved during the Rosetta text recognition training process into an inference model. Taking the model trained on the MJSynth and SynthText text recognition datasets with the Resnet34_vd backbone as an example ([model download address](https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/rec_r34_vd_none_none_ctc_v2.0_train.tar)), the conversion can be done with the following command:
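Again, the original command is not reproduced in this excerpt; the sketch below follows the standard `tools/export_model.py` workflow, with the config path and the `./inference/rec_rosetta/` output directory as assumptions.

```bash
# Download and unpack the training checkpoint
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/rec_r34_vd_none_none_ctc_v2.0_train.tar
tar -xf rec_r34_vd_none_none_ctc_v2.0_train.tar

# Convert the training checkpoint into an inference model
# (config path assumed; the output directory is illustrative)
python3 tools/export_model.py -c configs/rec/rec_r34_vd_none_none_ctc.yml \
    -o Global.pretrained_model=./rec_r34_vd_none_none_ctc_v2.0_train/best_accuracy \
       Global.save_inference_dir=./inference/rec_rosetta/
```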