`Style-Text` is an improvement on the SRNet network proposed in Baidu's self-developed text editing algorithm "Editing Text in the Wild". Unlike commonly used GAN-based methods, this tool decomposes the text synthesis task into three sub-modules to improve the quality of the synthetic data: a text style transfer module, a background extraction module and a fusion module.
### Contents
- [1. Introduction](#Introduction)
- [2. Preparation](#Preparation)
- [3. Demo](#Demo)
- [4. Advanced Usage](#Advanced_Usage)
- [5. Code Structure](#Code_structure)
The following figures show some example results. In addition, the actual `nameplate text recognition` and `Korean text recognition` scenarios verify the effectiveness of the synthesis tool, as follows.
<aname="Introduction"></a>
### Introduction
<divalign="center">
<imgsrc="doc/images/3.png"width="800">
</div>
<divalign="center">
<imgsrc="doc/images/1.png"width="600">
</div>
The Style-Text data synthesis tool is based on Baidu's self-developed text editing algorithm "Editing Text in the Wild" [https://arxiv.org/abs/1908.03047](https://arxiv.org/abs/1908.03047).
Different from the commonly used GAN-based data synthesis tools, the main framework of Style-Text includes:
* (1) Text foreground style transfer module.
* (2) Background extraction module.
* (3) Fusion module.
After these three steps, you can quickly realize image text style transfer. The following figure shows some results of the data synthesis tool.
<divalign="center">
<imgsrc="doc/images/2.png"width="1000">
</div>
<aname="Preparation"></a>
#### Preparation
1. Please refer to the [QUICK INSTALLATION](../doc/doc_en/installation_en.md) to install PaddlePaddle. A Python 3 environment is strongly recommended.
2. You can download the pretrained models [here](https://paddleocr.bj.bcebos.com/dygraph_v2.0/style_text/style_text_models.zip). If you save the model files in another location, please modify the model paths in `configs/config.yml` accordingly; note that all three of the following configurations need to be changed at the same time:
```
bg_generator:
...
...
fusion_generator:
  pretrain: style_text_models/fusion_generator
```
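For reference, below is a minimal sketch of downloading and unpacking the models so that the default relative paths in `configs/config.yml` resolve; it assumes `wget` and `unzip` are available, that you run it from the `StyleText` directory, and that the archive unpacks to a `style_text_models/` folder.

```bash
# Fetch the pretrained Style-Text models and unpack them next to configs/,
# so default paths such as style_text_models/fusion_generator resolve as-is.
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/style_text/style_text_models.zip
unzip style_text_models.zip
```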
<aname="Demo"></a>
### Demo
#### Synthesis single image
#### Demo
1. You can run `tools/synth_image` and generate the demo image.
1. You can use the following commands to run a demo:
```bash
python3 -m tools.synth_image -c configs/config.yml
```
2. The results are `fake_bg.jpg`, `fake_text.jpg` and `fake_fusion.jpg`, as shown in the figure above. Among them, `fake_text.jpg` is the output of the text style transfer module, `fake_bg.jpg` is the output of the background extraction module, and `fake_fusion.jpg` is the final synthesized image.
3. We also provide the `batch_synth_images` method, which can combine corpus and pictures in pairs to generate a batch of data.
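Whether synthesizing one image or a batch, a style image and a corpus string are the two inputs being paired. For quick single-image experiments, a hedged invocation sketch is shown below; the `--style_image`, `--text_corpus` and `--language` flags and the example image path are assumptions about the CLI (some PaddleOCR releases expose them), so confirm them with `python3 -m tools.synth_image --help`.

```bash
# Hypothetical invocation: render the word "PaddleOCR" in the style of one image.
# Flag names and the example path are assumptions; verify against your checkout.
python3 -m tools.synth_image -c configs/config.yml \
    --style_image examples/style_images/2.jpg \
    --text_corpus PaddleOCR \
    --language en
```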
<a name="Advanced_Usage"></a>
### Advanced Usage
#### Components
`Style Text Rec` mainly contains the following components:

* `style_samplers`: It can sample `Style Input` from a dataset. Currently, we only provide `DatasetSampler`.
* `corpus_generators`: It can generate corpus. Currently, we only provide two `corpus_generators`:
  * `EnNumCorpus`: It can generate a random string of a given length, including uppercase and lowercase English letters, numbers and spaces.
  * `FileCorpus`: It can read a text file and randomly return the words in it.
* `text_drawers`: It can generate `Text Input` (a text picture in a standard font rendered from the input corpus). Note that you have to modify the language information according to the corpus when using it.
* `predictors`: It can call the deep learning model to generate new data based on the `Style Input` and `Text Input`.
* `writers`: It can write the generated pictures (`fake_bg.jpg`, `fake_text.jpg` and `fake_fusion.jpg`) and label information to disk.
* `synthesisers`: It can call all the modules above to complete the work.
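To make the division of labor concrete, here is a schematic sketch of how a synthesiser might wire these components together for one sample; all names and signatures below are hypothetical, and the real wiring lives in the `synthesisers` module.

```python
# Schematic sketch only: hypothetical names/signatures, not the actual API.
def synthesise_one(style_sampler, corpus_generator, text_drawer, predictor, writer):
    style_input = style_sampler.sample()       # pick a Style Input image, e.g. via DatasetSampler
    corpus = corpus_generator.generate()       # produce a text string, e.g. via FileCorpus
    text_input = text_drawer.draw(corpus)      # render the corpus in a standard font
    fake = predictor.predict(style_input, text_input)  # model yields fake_bg/fake_text/fake_fusion
    writer.save(fake, corpus)                  # persist the images and label information
```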
#### Generate Dataset

Before starting, you need to prepare some data as materials.
First, you should have the style reference data for the synthesis task, which is generally drawn from datasets used for OCR recognition tasks.
...
...
* `language`: The language of the corpus. Needed if method is not `EnNumCorpus`.
* `corpus_file`: The corpus file path. Needed if method is not `EnNumCorpus`.
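To make these fields concrete, below is a hedged excerpt of what a dataset configuration might look like; the section names (`StyleSampler`, `CorpusGenerator`) and the example paths are assumptions, so check them against the shipped `configs/dataset_config.yml`.

```yaml
# Hypothetical excerpt of configs/dataset_config.yml; section names and paths
# are assumptions based on the components described above.
StyleSampler:
  method: DatasetSampler            # sample Style Input images from your dataset
  image_home: examples              # root directory of the style images
  label_file: examples/image_list.txt
CorpusGenerator:
  method: FileCorpus                # or EnNumCorpus for random English/number strings
  language: en                      # needed if method is not EnNumCorpus
  corpus_file: examples/corpus/english_corpus.txt  # needed if method is not EnNumCorpus
```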
We provide a general dataset containing Chinese, English and Korean (50,000 images in all) for your trial ([download link](https://paddleocr.bj.bcebos.com/dygraph_v2.0/style_text/chkoen_5w.tar)); some examples are given below:
<divalign="center">
<imgsrc="doc/images/5.png"width="800">
</div>
2. You can run the following command to start the synthesis task:
```bash
...
...
```
After completing the above operations, you can get a synthetic dataset for OCR recognition. Next, please complete the training by referring to the [OCR Recognition Document](https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_ch/recognition.md#%E5%90%AF%E5%8A%A8%E8%AE%AD%E7%BB%83).
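Before launching training, it can be worth checking that the synthesized label file matches the tab-separated `image path<TAB>transcription` format that the PaddleOCR recognition trainer expects; a hypothetical excerpt is shown below (the paths and texts are made up for illustration only).

```
output_data/images/00000.jpg	PaddleOCR
output_data/images/00001.jpg	StyleText
```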
<aname="Advanced_Usage"></a>
### Advanced Usage
We take two scenarios as examples, metal surface English number recognition and general Korean recognition, to illustrate practical cases of using StyleText to synthesize data to improve text recognition. The following figure shows some examples of real scene images and composite images:
<divalign="center">
<imgsrc="doc/images/6.png"width="800">
</div>
After adding the above synthetic data for training, the accuracy of the recognition model is improved, which is shown in the following table:
| Scenario | Characters | Raw Data | Test Data | Recognition Accuracy<br/>(Raw Data Only) | New Synthetic Data | Recognition Accuracy<br/>(Raw + Synthetic Data) | Accuracy Improvement |
| -------- | ---------- | -------- | --------- | ---------------------------------------- | ------------------ | ------------------------------------------------ | -------------------- |