diff --git a/PPOCRLabel/README.md b/PPOCRLabel/README.md
index 624e9324a8573074a169200894b10d161a820c7c..f72ae96679d9ca3ba1585c49ce8762cd73e97d24 100644
--- a/PPOCRLabel/README.md
+++ b/PPOCRLabel/README.md
@@ -1,21 +1,27 @@
+English | [简体中文](README_ch.md)
+
 # PPOCRLabel
-PPOCRLabel是一款适用于OCR领域的半自动化图形标注工具,使用python3和pyqt5编写,支持矩形框标注和四点标注模式,导出格式可直接用于PPOCR检测和识别模型的训练。
+PPOCRLabel is a semi-automatic graphic annotation tool for the OCR field. It is written in Python 3 and PyQt5 and supports rectangular box annotation and four-point annotation modes. The exported annotations can be used directly to train PPOCR detection and recognition models.
+
+
+
+## Installation
-
+### 1. Install PaddleOCR
-## 安装
+Refer to the [PaddleOCR installation document](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_ch/installation.md) to prepare PaddleOCR.
-### 1. 安装PaddleOCR
-参考[PaddleOCR安装文档](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_ch/installation.md)准备好PaddleOCR
+### 2. Install PPOCRLabel
-### 2. 安装PPOCRLabel
 #### Windows + Anaconda
+Download and install [Anaconda](https://www.anaconda.com/download/#download) (Python 3+).
+
 ```
 pip install pyqt5
-cd ./PPOCRLabel # 将目录切换到PPOCRLabel文件夹下
-python PPOCRLabel.py --lang ch
+cd ./PPOCRLabel # Change the directory to the PPOCRLabel folder
+python PPOCRLabel.py
 ```

 #### Ubuntu Linux
@@ -23,78 +29,97 @@ python PPOCRLabel.py --lang ch
 ```
 pip3 install pyqt5
 pip3 install trash-cli
-cd ./PPOCRLabel # 将目录切换到PPOCRLabel文件夹下
-python3 PPOCRLabel.py --lang ch
+cd ./PPOCRLabel # Change the directory to the PPOCRLabel folder
+python3 PPOCRLabel.py
 ```

 #### macOS
 ```
 pip3 install pyqt5
-pip3 uninstall opencv-python # 由于mac版本的opencv与pyqt有冲突,需先手动卸载opencv
-pip3 install opencv-contrib-python-headless # 安装headless版本的open-cv
-cd ./PPOCRLabel # 将目录切换到PPOCRLabel文件夹下
-python3 PPOCRLabel.py --lang ch
+pip3 uninstall opencv-python # Uninstall opencv manually as it conflicts with pyqt
+pip3 install opencv-contrib-python-headless # Install the headless version of opencv
+cd ./PPOCRLabel # Change the directory to the PPOCRLabel folder
+python3 PPOCRLabel.py
 ```
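+
+Whichever platform you installed on, you can optionally check that the PaddleOCR engine prepared in step 1 works before launching the GUI. The following is only a minimal sketch, assuming the `paddleocr` wheel is installed and `test.jpg` is any local image (both names are placeholders, not files shipped with PPOCRLabel):
+
+```python
+from paddleocr import PaddleOCR
+
+# Default Chinese/English ultra-lightweight model; weights are downloaded on first use.
+ocr = PaddleOCR(use_angle_cls=True, lang="ch")
+
+# Each entry is expected to be [bounding_box, (text, confidence)].
+for box, (text, confidence) in ocr.ocr("test.jpg", cls=True):
+    print(text, confidence)
+```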
-## 使用
+## Usage
+
+### Steps
+
+1. Install and launch the program using the commands above.
+
+2. Click 'Open Dir' in Menu/File to select the folder of the pictures to be annotated[1].
+
+3. Click 'Auto recognition': the PPOCR model automatically annotates all images whose status[2], shown in front of the file name, is 'X'.
+
+4. Create Box:
+
+    4.1 Click 'Create RectBox' or press 'W' (in English keyboard mode) to draw a new rectangular detection box: click and release the left mouse button to select the text region to annotate.
-### 操作步骤
+    4.2 Press 'P' to enter four-point labeling mode, which lets you create an arbitrary four-point shape: click four points in succession with the left mouse button, then DOUBLE CLICK the left button to finish labeling.
-1. 安装与运行:使用上述命令安装与运行程序。
-2. 打开文件夹:在菜单栏点击 “文件” - "打开目录" 选择待标记图片的文件夹[1].
-3. 自动标注:点击 ”自动标注“,使用PPOCR超轻量模型对图片文件名前图片状态[2]为 “X” 的图片进行自动标注。
-4. 手动标注:点击 “矩形标注”(推荐直接在英文模式下点击键盘中的 “W”),用户可对当前图片中模型未检出的部分进行手动绘制标记框。点击键盘P,则使用四点标注模式(或点击“编辑” - “四点标注”),用户依次点击4个点后,双击左键表示标注完成。
-5. 标记框绘制完成后,用户点击 “确认”,检测框会先被预分配一个 “待识别” 标签。
-6. 重新识别:将图片中的所有检测画绘制/调整完成后,点击 “重新识别”,PPOCR模型会对当前图片中的**所有检测框**重新识别[3]。
-7. 内容更改:双击识别结果,对不准确的识别结果进行手动更改。
-8. 确认标记:点击 “确认”,图片状态切换为 “√”,跳转至下一张(此时不会直接将结果写入文件)。
-9. 删除:点击 “删除图像”,图片将会被删除至回收站。
-10. 保存结果:用户可以通过菜单中“文件-保存标记结果”手动保存,同时程序也会在用户每确认10张图片后自动保存一次。手动确认过的标记将会被存放在所打开图片文件夹下的*Label.txt*中。在菜单栏点击 “文件” - "保存识别结果"后,会将此类图片的识别训练数据保存在*crop_img*文件夹下,识别标签保存在*rec_gt.txt*中[4]。
+5. After the marking box is drawn, click "OK"; the detection box is first pre-assigned a "TEMPORARY" label.
-### 注意
+6. Click 're-Recognition': the model re-recognizes and overwrites the results of ALL detection boxes in the current image[3].
-[1] PPOCRLabel以文件夹为基本标记单位,打开待标记的图片文件夹后,不会在窗口栏中显示图片,而是在点击 "选择文件夹" 之后直接将文件夹下的图片导入到程序中。
+7. Double click a result in the 'recognition result' list to manually correct inaccurate recognition results.
-[2] 图片状态表示本张图片用户是否手动保存过,未手动保存过即为 “X”,手动保存过为 “√”。点击 “自动标注”按钮后,PPOCRLabel不会对状态为 “√” 的图片重新标注。
+8. Click "Check": the image status switches to "√" and the program automatically jumps to the next image (the results are not written to file at this point).
-[3] 点击“重新识别”后,模型会对图片中的识别结果进行覆盖。因此如果在此之前手动更改过识别结果,有可能在重新识别后产生变动。
+9. Click "Delete Image" and the image will be moved to the recycle bin.
-[4] PPOCRLabel产生的文件放置于标记图片文件夹下,包括一下几种,请勿手动更改其中内容,否则会引起程序出现异常。
+10. Labeling result: the user can save manually through the menu "File - Save Label", and the program also saves automatically after every 10 images confirmed by the user. The manually checked labels are stored in *Label.txt* under the opened picture folder.
+    Click "PaddleOCR"-"Save Recognition Results" in the menu bar to save the recognition training data of these pictures in the *crop_img* folder and the recognition labels in *rec_gt.txt*[4].
-| 文件名 | 说明 |
+### Note
+
+[1] PPOCRLabel uses the opened folder as the project. After the image folder is opened, the pictures are not shown in a pop-up window; instead, all pictures under the folder are imported into the program directly after clicking "Open Dir".
+
+[2] The image status indicates whether the user has saved the image manually: it is "X" if the image has not been saved manually and "√" otherwise. PPOCRLabel will not re-annotate pictures whose status is "√".
+
+[3] After clicking "Re-recognize", the model overwrites ALL recognition results in the picture.
+Therefore, recognition results that were changed manually beforehand may change after re-recognition.
+
+[4] The files produced by PPOCRLabel are placed under the opened picture folder and include the following. Please do not change their contents manually, otherwise the program may behave abnormally.
+
+| File name | Description |
 | :-----------: | :----------------------------------------------------------: |
-| Label.txt | 检测标签,可直接用于PPOCR检测模型训练。用户每保存10张检测结果后,程序会进行自动写入。当用户关闭应用程序或切换文件路径后同样会进行写入。 |
-| fileState.txt | 图片状态标记文件,保存当前文件夹下已经被用户手动确认过的图片名称。 |
-| Cache.cach | 缓存文件,保存模型自动识别的结果。 |
-| rec_gt.txt | 识别标签。可直接用于PPOCR识别模型训练。需用户手动点击菜单栏“文件” - "保存识别结果"后产生。 |
-| crop_img | 识别数据。按照检测框切割后的图片。与rec_gt.txt同时产生。 |
+| Label.txt | The detection label file, which can be used directly for PPOCR detection model training. It is written automatically after the user saves 10 label results, and also when the user closes the application or changes the file folder. |
+| fileState.txt | The picture status file, which saves the names of the images in the current folder that have already been manually confirmed by the user. |
+| Cache.cach | Cache file that stores the results of automatic model recognition. |
+| rec_gt.txt | The recognition label file, which can be used directly for PPOCR recognition model training. It is generated after the user clicks "File" - "Save recognition result" in the menu bar. |
+| crop_img | The recognition data: images cropped according to the detection boxes, generated at the same time as *rec_gt.txt*. |
+
+## Explanation
+
+### Built-in Model
+
+- Default model: PPOCRLabel uses the Chinese and English ultra-lightweight OCR model in PaddleOCR by default, which supports Chinese, English and digit recognition as well as multi-language detection.
+
+- Model language switching: the built-in model language can be switched by clicking "PaddleOCR"-"Choose OCR Model" in the menu bar. Currently supported languages include French, German, Korean, and Japanese.
+  For specific model download links, please refer to the [PaddleOCR Model List](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_en/models_list_en.md#multilingual-recognition-modelupdating).
-## 说明
-### 内置模型
+- Custom model: a model trained by the user can be plugged in by modifying the [PaddleOCR class instantiation](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/PPOCRLabel/PPOCRLabel.py#L110) in PPOCRLabel.py, referring to the [Custom Model Code](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_en/whl_en.md#use-custom-model); see the sketch below for what such an instantiation might look like.
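+
+A minimal sketch of such an instantiation, assuming the paddleocr whl interface; every path below is a placeholder for your own exported inference model, not a file shipped with this repo:
+
+```python
+from paddleocr import PaddleOCR
+
+# Point the engine at self-trained models instead of the built-in ones.
+ocr = PaddleOCR(det_model_dir="./my_models/det_infer/",
+                rec_model_dir="./my_models/rec_infer/",
+                rec_char_dict_path="./my_models/my_dict.txt",
+                cls_model_dir="./my_models/cls_infer/",
+                use_angle_cls=True)
+```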
- - 默认模型:PPOCRLabel默认使用PaddleOCR中的中英文超轻量OCR模型,支持中英文与数字识别,多种语言检测。
+### Export partial recognition results
- - 模型语言切换:用户可通过菜单栏中 "PaddleOCR" - "选择模型" 切换内置模型语言,目前支持的语言包括法文、德文、韩文、日文。具体模型下载链接可参考[PaddleOCR模型列表](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_ch/models_list.md).
+For data that are difficult to recognize, **uncheck** the corresponding entries in the recognition results; those recognition results will not be exported.
- - 自定义模型:用户可根据[自定义模型代码使用](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_ch/whl.md#%E8%87%AA%E5%AE%9A%E4%B9%89%E6%A8%A1%E5%9E%8B),通过修改PPOCRLabel.py中针对[PaddleOCR类的实例化](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/PPOCRLabel/PPOCRLabel.py#L110)替换成自己训练的模型。
+*Note: the checkbox status in the recognition results is only kept after the user clicks the Save button manually.*
-### 导出部分识别结果
+### Error message
-针对部分难以识别的数据,通过在识别结果的复选框中**取消勾选**相应的标记,其识别结果不会被导出。
+- If paddleocr is installed with the whl package, it takes priority over calling the PaddleOCR class through paddleocr.py, which may cause an exception if the whl package is not updated.
-*注意:识别结果中的复选框状态仍需用户手动点击保存后才能保留*
+- For Linux users: if you get an error starting with **objc[XXXXX]** when opening the software, your opencv version is too high; installing version 4.2 is recommended:
-### 错误提示
-- 如果同时使用whl包安装了paddleocr,其优先级大于通过paddleocr.py调用PaddleOCR类,whl包未更新时会导致程序异常。
-- PPOCRLabel**不支持对中文文件名**的图片进行自动标注。
-- 针对Linux用户::如果您在打开软件过程中出现**objc[XXXXX]**开头的错误,证明您的opencv版本太高,建议安装4.2版本:
-    ```
-    pip install opencv-python==4.2.0.32
-    ```
-- 如果出现''Missing string id '开头的错误,需要重新编译资源:
-    ```
-    pyrcc5 -o libs/resources.py resources.qrc
-    ```
-### 参考资料
+
+    ```
+    pip install opencv-python==4.2.0.32
+    ```
+- If you get an error starting with **Missing string id**, you need to recompile the resources:
+    ```
+    pyrcc5 -o libs/resources.py resources.qrc
+    ```
+### Related

 1.[Tzutalin. LabelImg. Git code (2015)](https://github.com/tzutalin/labelImg)
diff --git a/PPOCRLabel/README_ch.md b/PPOCRLabel/README_ch.md
new file mode 100644
index 0000000000000000000000000000000000000000..334cb2860848dd7def22c537b0e10cd5a9435289
--- /dev/null
+++ b/PPOCRLabel/README_ch.md
@@ -0,0 +1,102 @@
+[English](README.md) | 简体中文
+
+# PPOCRLabel
+
+PPOCRLabel是一款适用于OCR领域的半自动化图形标注工具,使用python3和pyqt5编写,支持矩形框标注和四点标注模式,导出格式可直接用于PPOCR检测和识别模型的训练。
+
+
+
+## 安装
+
+### 1. 
安装PaddleOCR +参考[PaddleOCR安装文档](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_ch/installation.md)准备好PaddleOCR + +### 2. 安装PPOCRLabel +#### Windows + Anaconda + +``` +pip install pyqt5 +cd ./PPOCRLabel # 将目录切换到PPOCRLabel文件夹下 +python PPOCRLabel.py --lang ch +``` + +#### Ubuntu Linux + +``` +pip3 install pyqt5 +pip3 install trash-cli +cd ./PPOCRLabel # 将目录切换到PPOCRLabel文件夹下 +python3 PPOCRLabel.py --lang ch +``` + +#### macOS +``` +pip3 install pyqt5 +pip3 uninstall opencv-python # 由于mac版本的opencv与pyqt有冲突,需先手动卸载opencv +pip3 install opencv-contrib-python-headless # 安装headless版本的open-cv +cd ./PPOCRLabel # 将目录切换到PPOCRLabel文件夹下 +python3 PPOCRLabel.py --lang ch +``` + +## 使用 + +### 操作步骤 + +1. 安装与运行:使用上述命令安装与运行程序。 +2. 打开文件夹:在菜单栏点击 “文件” - "打开目录" 选择待标记图片的文件夹[1]. +3. 自动标注:点击 ”自动标注“,使用PPOCR超轻量模型对图片文件名前图片状态[2]为 “X” 的图片进行自动标注。 +4. 手动标注:点击 “矩形标注”(推荐直接在英文模式下点击键盘中的 “W”),用户可对当前图片中模型未检出的部分进行手动绘制标记框。点击键盘P,则使用四点标注模式(或点击“编辑” - “四点标注”),用户依次点击4个点后,双击左键表示标注完成。 +5. 标记框绘制完成后,用户点击 “确认”,检测框会先被预分配一个 “待识别” 标签。 +6. 重新识别:将图片中的所有检测画绘制/调整完成后,点击 “重新识别”,PPOCR模型会对当前图片中的**所有检测框**重新识别[3]。 +7. 内容更改:双击识别结果,对不准确的识别结果进行手动更改。 +8. 确认标记:点击 “确认”,图片状态切换为 “√”,跳转至下一张(此时不会直接将结果写入文件)。 +9. 删除:点击 “删除图像”,图片将会被删除至回收站。 +10. 保存结果:用户可以通过菜单中“文件-保存标记结果”手动保存,同时程序也会在用户每确认10张图片后自动保存一次。手动确认过的标记将会被存放在所打开图片文件夹下的*Label.txt*中。在菜单栏点击 “文件” - "保存识别结果"后,会将此类图片的识别训练数据保存在*crop_img*文件夹下,识别标签保存在*rec_gt.txt*中[4]。 + +### 注意 + +[1] PPOCRLabel以文件夹为基本标记单位,打开待标记的图片文件夹后,不会在窗口栏中显示图片,而是在点击 "选择文件夹" 之后直接将文件夹下的图片导入到程序中。 + +[2] 图片状态表示本张图片用户是否手动保存过,未手动保存过即为 “X”,手动保存过为 “√”。点击 “自动标注”按钮后,PPOCRLabel不会对状态为 “√” 的图片重新标注。 + +[3] 点击“重新识别”后,模型会对图片中的识别结果进行覆盖。因此如果在此之前手动更改过识别结果,有可能在重新识别后产生变动。 + +[4] PPOCRLabel产生的文件放置于标记图片文件夹下,包括一下几种,请勿手动更改其中内容,否则会引起程序出现异常。 + +| 文件名 | 说明 | +| :-----------: | :----------------------------------------------------------: | +| Label.txt | 检测标签,可直接用于PPOCR检测模型训练。用户每保存10张检测结果后,程序会进行自动写入。当用户关闭应用程序或切换文件路径后同样会进行写入。 | +| fileState.txt | 图片状态标记文件,保存当前文件夹下已经被用户手动确认过的图片名称。 | +| Cache.cach | 缓存文件,保存模型自动识别的结果。 | +| rec_gt.txt | 识别标签。可直接用于PPOCR识别模型训练。需用户手动点击菜单栏“文件” - "保存识别结果"后产生。 | +| crop_img | 识别数据。按照检测框切割后的图片。与rec_gt.txt同时产生。 | + +## 说明 +### 内置模型 + + - 默认模型:PPOCRLabel默认使用PaddleOCR中的中英文超轻量OCR模型,支持中英文与数字识别,多种语言检测。 + + - 模型语言切换:用户可通过菜单栏中 "PaddleOCR" - "选择模型" 切换内置模型语言,目前支持的语言包括法文、德文、韩文、日文。具体模型下载链接可参考[PaddleOCR模型列表](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_ch/models_list.md). + + - 自定义模型:用户可根据[自定义模型代码使用](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_ch/whl.md#%E8%87%AA%E5%AE%9A%E4%B9%89%E6%A8%A1%E5%9E%8B),通过修改PPOCRLabel.py中针对[PaddleOCR类的实例化](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/PPOCRLabel/PPOCRLabel.py#L110)替换成自己训练的模型。 + +### 导出部分识别结果 + +针对部分难以识别的数据,通过在识别结果的复选框中**取消勾选**相应的标记,其识别结果不会被导出。 + +*注意:识别结果中的复选框状态仍需用户手动点击保存后才能保留* + +### 错误提示 +- 如果同时使用whl包安装了paddleocr,其优先级大于通过paddleocr.py调用PaddleOCR类,whl包未更新时会导致程序异常。 +- PPOCRLabel**不支持对中文文件名**的图片进行自动标注。 +- 针对Linux用户::如果您在打开软件过程中出现**objc[XXXXX]**开头的错误,证明您的opencv版本太高,建议安装4.2版本: + ``` + pip install opencv-python==4.2.0.32 + ``` +- 如果出现''Missing string id '开头的错误,需要重新编译资源: + ``` + pyrcc5 -o libs/resources.py resources.qrc + ``` +### 参考资料 + +1.[Tzutalin. LabelImg. Git code (2015)](https://github.com/tzutalin/labelImg) diff --git a/PPOCRLabel/README_en.md b/PPOCRLabel/README_en.md deleted file mode 100644 index 42ded6b0eacb643469eb6869fa6ff5dddf85f9b7..0000000000000000000000000000000000000000 --- a/PPOCRLabel/README_en.md +++ /dev/null @@ -1,123 +0,0 @@ -# PPOCRLabel - -PPOCRLabel is a semi-automatic graphic annotation tool suitable for OCR field. 
It is written in python3 and pyqt5, supporting rectangular box annotation and four-point annotation modes. Annotations can be directly used for the training of PPOCR detection and recognition models. - - - -## Installation - -### 1. Install PaddleOCR - -Refer to [PaddleOCR installation document](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_ch/installation.md) to prepare PaddleOCR - -### 2. Install PPOCRLabel - -#### Windows + Anaconda - -Download and install [Anaconda](https://www.anaconda.com/download/#download) (Python 3+) - -``` -pip install pyqt5 -cd ./PPOCRLabel # Change the directory to the PPOCRLabel folder -python PPOCRLabel.py -``` - -#### Ubuntu Linux - -``` -pip3 install pyqt5 -pip3 install trash-cli -cd ./PPOCRLabel # Change the directory to the PPOCRLabel folder -python3 PPOCRLabel.py -``` - -#### macOS -``` -pip3 install pyqt5 -pip3 uninstall opencv-python # Uninstall opencv manually as it conflicts with pyqt -pip3 install opencv-contrib-python-headless # Install the headless version of opencv -cd ./PPOCRLabel # Change the directory to the PPOCRLabel folder -python3 PPOCRLabel.py -``` - -## Usage - -### Steps - -1. Build and launch using the instructions above. - -2. Click 'Open Dir' in Menu/File to select the folder of the picture.[1] - -3. Click 'Auto recognition', use PPOCR model to automatically annotate images which marked with 'X' [2]before the file name. - -4. Create Box: - - 4.1 Click 'Create RectBox' or press 'W' in English keyboard mode to draw a new rectangle detection box. Click and release left mouse to select a region to annotate the text area. - - 4.2 Press 'P' to enter four-point labeling mode which enables you to create any four-point shape by clicking four points with the left mouse button in succession and DOUBLE CLICK the left mouse as the signal of labeling completion. - -5. After the marking frame is drawn, the user clicks "OK", and the detection frame will be pre-assigned a "TEMPORARY" label. - -6. Click 're-Recognition', model will rewrite ALL recognition results in ALL detection box[3]. - -7. Double click the result in 'recognition result' list to manually change inaccurate recognition results. - -8. Click "Check", the image status will switch to "√",then the program automatically jump to the next(The results will not be written directly to the file at this time). - -9. Click "Delete Image" and the image will be deleted to the recycle bin. - -10. Labeling result: the user can save manually through the menu "File - Save Label", while the program will also save automatically after every 10 images confirmed by the user.the manually checked label will be stored in *Label.txt* under the opened picture folder. - Click "PaddleOCR"-"Save Recognition Results" in the menu bar, the recognition training data of such pictures will be saved in the *crop_img* folder, and the recognition label will be saved in *rec_gt.txt*[4]. - -### Note - -[1] PPOCRLabel uses the opened folder as the project. After opening the image folder, the picture will not be displayed in the dialog. Instead, the pictures under the folder will be directly imported into the program after clicking "Open Dir". - -[2] The image status indicates whether the user has saved the image manually. If it has not been saved manually it is "X", otherwise it is "√", PPOCRLabel will not relabel pictures with a status of "√". - -[3] After clicking "Re-recognize", the model will overwrite ALL recognition results in the picture. 
-Therefore, if the recognition result has been manually changed before, it may change after re-recognition. - -[4] The files produced by PPOCRLabel can be found under the opened picture folder including the following, please do not manually change the contents, otherwise it will cause the program to be abnormal. - -| File name | Description | -| :-----------: | :----------------------------------------------------------: | -| Label.txt | The detection label file can be directly used for PPOCR detection model training. After the user saves 10 label results, the file will be automatically saved. It will also be written when the user closes the application or changes the file folder. | -| fileState.txt | The picture status file save the image in the current folder that has been manually confirmed by the user. | -| Cache.cach | Cache files to save the results of model recognition. | -| rec_gt.txt | The recognition label file, which can be directly used for PPOCR identification model training, is generated after the user clicks on the menu bar "File"-"Save recognition result". | -| crop_img | The recognition data, generated at the same time with *rec_gt.txt* | - -## Explanation - -### Built-in Model - -- Default model: PPOCRLabel uses the Chinese and English ultra-lightweight OCR model in PaddleOCR by default, supports Chinese, English and number recognition, and multiple language detection. - -- Model language switching: Changing the built-in model language is supportable by clicking "PaddleOCR"-"Choose OCR Model" in the menu bar. Currently supported languages​include French, German, Korean, and Japanese. - For specific model download links, please refer to [PaddleOCR Model List](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_en/models_list_en.md#multilingual-recognition-modelupdating) - -- Custom model: The model trained by users can be replaced by modifying PPOCRLabel.py in [PaddleOCR class instantiation](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/PPOCRLabel/PPOCRLabel.py#L110) referring [Custom Model Code](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_en/whl_en.md#use-custom-model) - -### Export partial recognition results - -For some data that are difficult to recognize, the recognition results will not be exported by **unchecking** the corresponding tags in the recognition results checkbox. - -*Note: The status of the checkboxes in the recognition results still needs to be saved manually by clicking Save Button.* - -### Error message - -- If paddleocr is installed with whl, it has a higher priority than calling PaddleOCR class with paddleocr.py, which may cause an exception if whl package is not updated. - -- For Linux users, if you get an error starting with **objc[XXXXX]** when opening the software, it proves that your opencv version is too high. It is recommended to install version 4.2: - - ``` - pip install opencv-python==4.2.0.32 - ``` -- If you get an error starting with **Missing string id **,you need to recompile resources: - ``` - pyrcc5 -o libs/resources.py resources.qrc - ``` -### Related - -1.[Tzutalin. LabelImg. 
Git code (2015)](https://github.com/tzutalin/labelImg) diff --git a/PPOCRLabel/libs/autoDialog.py b/PPOCRLabel/libs/autoDialog.py index daefe6e63af2651ff1604c47762b6a6bba4cb001..fe5ec9bcbd71a82d9e4fa5635d1f32e0ddd4b4ca 100644 --- a/PPOCRLabel/libs/autoDialog.py +++ b/PPOCRLabel/libs/autoDialog.py @@ -41,11 +41,14 @@ class Worker(QThread): print('Can not recognise file is : ', Imgpath) pass else: + strs = '' for res in self.result_dic: chars = res[1][0] cond = res[1][1] posi = res[0] - self.listValue.emit("Transcription: " + chars + " Probability: " + str(cond) + " Location: " + json.dumps(posi)) + strs += "Transcription: " + chars + " Probability: " + str( + cond) + " Location: " + json.dumps(posi) + '\n' + self.listValue.emit(strs) self.mainThread.result_dic = self.result_dic self.mainThread.filePath = Imgpath # 保存 diff --git a/README.md b/README.md index c72395d2f85f9f2844b895a12bb51a783a679f63..bc75fe9ee68a093c53a7a26a1eefd12740e0f6ab 100644 --- a/README.md +++ b/README.md @@ -4,7 +4,9 @@ English | [简体中文](README_ch.md) PaddleOCR aims to create multilingual, awesome, leading, and practical OCR tools that help users train better models and apply them into practice. **Recent updates** -- 2020.11.25 Update a new data annotation tool, i.e., [PPOCRLabel](./PPOCRLabel/README_en.md), which is helpful to improve the labeling efficiency. Moreover, the labeling results can be used in training of the PP-OCR system directly. +- 2020.12.15 update Data synthesis tool, i.e., [Style-Text](./StyleText/README.md),easy to synthesize a large number of images which are similar to the target scene image. +- 2020.12.15 Release the branch of the release/2.0-rc1, support both the dynamic graph development (more convenient for training and debugging) and the static graph deployment (higher prediction efficiency). +- 2020.11.25 Update a new data annotation tool, i.e., [PPOCRLabel](./PPOCRLabel/README.md), which is helpful to improve the labeling efficiency. Moreover, the labeling results can be used in training of the PP-OCR system directly. - 2020.9.22 Update the PP-OCR technical article, https://arxiv.org/abs/2009.09941 - 2020.9.19 Update the ultra lightweight compressed ppocr_mobile_slim series models, the overall model size is 3.5M (see [PP-OCR Pipeline](#PP-OCR-Pipeline)), suitable for mobile deployment. [Model Downloads](#Supported-Chinese-model-list) - 2020.9.17 Update the ultra lightweight ppocr_mobile series and general ppocr_server series Chinese and English ocr models, which are comparable to commercial effects. 
[Model Downloads](#Supported-Chinese-model-list) @@ -15,11 +17,13 @@ PaddleOCR aims to create multilingual, awesome, leading, and practical OCR tools ## Features - PPOCR series of high-quality pre-trained models, comparable to commercial effects - - Ultra lightweight ppocr_mobile series models: detection (2.6M) + direction classifier (0.9M) + recognition (4.6M) = 8.1M - - General ppocr_server series models: detection (47.2M) + direction classifier (0.9M) + recognition (107M) = 155.1M - - Ultra lightweight compression ppocr_mobile_slim series models: detection (1.4M) + direction classifier (0.5M) + recognition (1.6M) = 3.5M -- Support Chinese, English, and digit recognition, vertical text recognition, and long text recognition -- Support multi-language recognition: Korean, Japanese, German, French + - Ultra lightweight ppocr_mobile series models: detection (3.0M) + direction classifier (1.4M) + recognition (5.0M) = 9.4M + - General ppocr_server series models: detection (47.1M) + direction classifier (1.4M) + recognition (94.9M) = 143.4M + - Support Chinese, English, and digit recognition, vertical text recognition, and long text recognition + - Support multi-language recognition: Korean, Japanese, German, French +- rich toolkits related to the OCR areas + - Semi-automatic data annotation tool, i.e., PPOCRLabel: support fast and efficient data annotation + - Data synthesis tool, i.e., Style-Text: easy to synthesize a large number of images which are similar to the target scene image - Support user-defined training, provides rich predictive inference deployment solutions - Support PIP installation, easy to use - Support Linux, Windows, MacOS and other systems @@ -63,8 +67,8 @@ Mobile DEMO experience (based on EasyEdge and Paddle-Lite, supports iOS and Andr | Model introduction | Model name | Recommended scene | Detection model | Direction classifier | Recognition model | | ------------------------------------------------------------ | ---------------------------- | ----------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | -| Chinese and English ultra-lightweight OCR model (8.1M) | ch_ppocr_mobile_v2.0_xx | Mobile & server |[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_train.tar)|[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_train.tar) |[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_pre.tar) | -| Chinese and English general OCR model (143M) | ch_ppocr_server_v2.0_xx | Server |[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_det_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_det_train.tar) |[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_traingit.tar) |[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_rec_infer.tar) / 
[pre-trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_rec_pre.tar) | +| Chinese and English ultra-lightweight OCR model (9.4M) | ch_ppocr_mobile_v2.0_xx | Mobile & server |[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_train.tar)|[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_train.tar) |[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_pre.tar) | +| Chinese and English general OCR model (143.4M) | ch_ppocr_server_v2.0_xx | Server |[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_det_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_det_train.tar) |[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_traingit.tar) |[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_rec_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_rec_pre.tar) | For more model downloads (including multiple languages), please refer to [PP-OCR v2.0 series model downloads](./doc/doc_en/models_list_en.md). @@ -90,13 +94,12 @@ For a new language request, please refer to [Guideline for new language_requests - [C++ Inference](./deploy/cpp_infer/readme_en.md) - [Serving](./deploy/hubserving/readme_en.md) - [Mobile](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/deploy/lite/readme_en.md) - - [Model Quantization](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/deploy/slim/quantization/README_en.md) - - [Model Compression](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/deploy/slim/prune/README_en.md) - [Benchmark](./doc/doc_en/benchmark_en.md) - Data Annotation and Synthesis - - [Semi-automatic Annotation Tool](./PPOCRLabel/README_en.md) - - [Data Annotation Tools](./doc/doc_en/data_annotation_en.md) - - [Data Synthesis Tools](./doc/doc_en/data_synthesis_en.md) + - [Semi-automatic Annotation Tool: PPOCRLabel](./PPOCRLabel/README.md) + - [Data Synthesis Tool: Style_Edit](./StyleTextRec/README.md) + - [Other Data Annotation Tools](./doc/doc_en/data_annotation_en.md) + - [Other Data Synthesis Tools](./doc/doc_en/data_synthesis_en.md) - Datasets - [General OCR Datasets(Chinese/English)](./doc/doc_en/datasets_en.md) - [HandWritten_OCR_Datasets(Chinese)](./doc/doc_en/handwritten_datasets_en.md) @@ -109,10 +112,6 @@ For a new language request, please refer to [Guideline for new language_requests - [License](#LICENSE) - [Contribution](#CONTRIBUTION) -***Note: The dynamic graphs branch is still under development. -Currently, only dynamic graph training, python-end prediction, and C++ prediction are supported. -If you need mobile-end deployment cases or quantitative demo, -please use the static graph branch.*** @@ -153,10 +152,10 @@ PP-OCR is a practical ultra-lightweight OCR system. It is mainly composed of thr If you want to request a new language support, a PR with 2 following files are needed: -1. 
In folder [ppocr/utils/dict](https://github.com/PaddlePaddle/PaddleOCR/tree/develop/ppocr/utils/dict), +1. In folder [ppocr/utils/dict](./ppocr/utils/dict), it is necessary to submit the dict text to this path and name it with `{language}_dict.txt` that contains a list of all characters. Please see the format example from other files in that folder. -2. In folder [ppocr/utils/corpus](https://github.com/PaddlePaddle/PaddleOCR/tree/develop/ppocr/utils/corpus), +2. In folder [ppocr/utils/corpus](./ppocr/utils/corpus), it is necessary to submit the corpus to this path and name it with `{language}_corpus.txt` that contains a list of words in your language. Maybe, 50000 words per language is necessary at least. Of course, the more, the better. diff --git a/README_ch.md b/README_ch.md index 7ed48165fcdb0122cba82d0a42a853d5b7fc28a8..c738524a3617f113352fc4519939e138a1bccaf9 100644 --- a/README_ch.md +++ b/README_ch.md @@ -4,8 +4,10 @@ PaddleOCR旨在打造一套丰富、领先、且实用的OCR工具库,助力使用者训练出更好的模型,并应用落地。 **近期更新** +- 2020.12.15 更新数据合成工具[Style-Text](./StyleText/README_ch.md),可以批量合成大量与目标场景类似的图像,在多个场景验证,效果明显提升。 +- 2020.12.15 发布release/2.0-rc1分支,支持动态图开发(训练调试更方便),静态图部署(预测效率更高)。 - 2020.12.07 [FAQ](./doc/doc_ch/FAQ.md)新增5个高频问题,总数124个,并且计划以后每周一都会更新,欢迎大家持续关注。 -- 2020.11.25 更新半自动标注工具[PPOCRLabel](./PPOCRLabel/README.md),辅助开发者高效完成标注任务,输出格式与PP-OCR训练任务完美衔接。 +- 2020.11.25 更新半自动标注工具[PPOCRLabel](./PPOCRLabel/README_ch.md),辅助开发者高效完成标注任务,输出格式与PP-OCR训练任务完美衔接。 - 2020.9.22 更新PP-OCR技术文章,https://arxiv.org/abs/2009.09941 - 2020.9.19 更新超轻量压缩ppocr_mobile_slim系列模型,整体模型3.5M(详见[PP-OCR Pipeline](#PP-OCR)),适合在移动端部署使用。[模型下载](#模型下载) - 2020.9.17 更新超轻量ppocr_mobile系列和通用ppocr_server系列中英文ocr模型,媲美商业效果。[模型下载](#模型下载) @@ -19,11 +21,13 @@ PaddleOCR旨在打造一套丰富、领先、且实用的OCR工具库,助力 ## 特性 - PPOCR系列高质量预训练模型,准确的识别效果 - - 超轻量ppocr_mobile移动端系列:检测(2.6M)+方向分类器(0.9M)+ 识别(4.6M)= 8.1M - - 通用ppocr_server系列:检测(47.2M)+方向分类器(0.9M)+ 识别(107M)= 155.1M - - 超轻量压缩ppocr_mobile_slim系列:检测(1.4M)+方向分类器(0.5M)+ 识别(1.6M)= 3.5M -- 支持中英文数字组合识别、竖排文本识别、长文本识别 -- 支持多语言识别:韩语、日语、德语、法语 + - 超轻量ppocr_mobile移动端系列:检测(3.0M)+方向分类器(1.4M)+ 识别(5.0M)= 9.4M + - 通用ppocr_server系列:检测(47.1M)+方向分类器(1.4M)+ 识别(94.9M)= 143.4M + - 支持中英文数字组合识别、竖排文本识别、长文本识别 + - 支持多语言识别:韩语、日语、德语、法语 +- 丰富易用的OCR相关工具组件 + - 半自动数据标注工具PPOCRLabel:支持快速高效的数据标注 + - 数据合成工具Style-Text:批量合成大量与目标场景类似的图像 - 支持用户自定义训练,提供丰富的预测推理部署方案 - 支持PIP快速安装使用 - 可运行于Linux、Windows、MacOS等多种系统 @@ -54,8 +58,8 @@ PaddleOCR旨在打造一套丰富、领先、且实用的OCR工具库,助力 | 模型简介 | 模型名称 |推荐场景 | 检测模型 | 方向分类器 | 识别模型 | | ------------ | --------------- | ----------------|---- | ---------- | -------- | -| 中英文超轻量OCR模型(8.1M) | ch_ppocr_mobile_v2.0_xx |移动端&服务器端|[推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_infer.tar) / [预训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_train.tar)|[推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) / [预训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_train.tar) |[推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_infer.tar) / [预训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_pre.tar) | -| 中英文通用OCR模型(143M) |ch_ppocr_server_v2.0_xx|服务器端 |[推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_det_infer.tar) / [预训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_det_train.tar) |[推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) / 
[预训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_train.tar) |[推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_rec_infer.tar) / [预训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_rec_pre.tar) | +| 中英文超轻量OCR模型(9.4M) | ch_ppocr_mobile_v2.0_xx |移动端&服务器端|[推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_infer.tar) / [预训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_train.tar)|[推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) / [预训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_train.tar) |[推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_infer.tar) / [预训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_pre.tar) | +| 中英文通用OCR模型(143.4M) |ch_ppocr_server_v2.0_xx|服务器端 |[推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_det_infer.tar) / [预训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_det_train.tar) |[推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) / [预训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_train.tar) |[推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_rec_infer.tar) / [预训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_rec_pre.tar) | 更多模型下载(包括多语言),可以参考[PP-OCR v2.0 系列模型下载](./doc/doc_ch/models_list.md) @@ -78,27 +82,26 @@ PaddleOCR旨在打造一套丰富、领先、且实用的OCR工具库,助力 - [基于C++预测引擎推理](./deploy/cpp_infer/readme.md) - [服务化部署](./deploy/hubserving/readme.md) - [端侧部署](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/deploy/lite/readme.md) - - [模型量化](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/deploy/slim/quantization/README.md) - - [模型裁剪](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/deploy/slim/prune/README.md) - [Benchmark](./doc/doc_ch/benchmark.md) - 数据集 - [通用中英文OCR数据集](./doc/doc_ch/datasets.md) - [手写中文OCR数据集](./doc/doc_ch/handwritten_datasets.md) - [垂类多语言OCR数据集](./doc/doc_ch/vertical_and_multilingual_datasets.md) - - [常用数据标注工具](./doc/doc_ch/data_annotation.md) - - [常用数据合成工具](./doc/doc_ch/data_synthesis.md) +- 数据标注与合成 + - [半自动标注工具PPOCRLabel](./PPOCRLabel/README_ch.md) + - [数据合成工具Style-Text](./StyleTextRec/README_ch.md) + - [其它数据标注工具](./doc/doc_ch/data_annotation.md) + - [其它数据合成工具](./doc/doc_ch/data_synthesis.md) - [效果展示](#效果展示) - FAQ - [【精选】OCR精选10个问题](./doc/doc_ch/FAQ.md) - - [【理论篇】OCR通用21个问题](./doc/doc_ch/FAQ.md) - - [【实战篇】PaddleOCR实战53个问题](./doc/doc_ch/FAQ.md) + - [【理论篇】OCR通用30个问题](./doc/doc_ch/FAQ.md) + - [【实战篇】PaddleOCR实战84个问题](./doc/doc_ch/FAQ.md) - [技术交流群](#欢迎加入PaddleOCR技术交流群) - [参考文献](./doc/doc_ch/reference.md) - [许可证书](#许可证书) - [贡献代码](#贡献代码) -***注意:动态图端侧部署仍在开发中,目前仅支持动态图训练、python端预测,C++预测, -如果您有需要移动端部署案例或者量化裁剪,请切换到静态图分支;*** ## PP-OCR Pipline diff --git a/StyleText/README_ch.md b/StyleText/README_ch.md index 4c5802b9ebeda2a8501206746dbdf5300607cfe1..c2b6add7918896edddcfa55bb5e83ba2e9ca1588 100644 --- a/StyleText/README_ch.md +++ b/StyleText/README_ch.md @@ -5,6 +5,7 @@ - [二、环境配置](#环境配置) - [三、快速上手](#快速上手) - [四、应用案例](#应用案例) +- [五、代码结构](#代码结构) ### 一、工具简介 @@ -54,38 +55,43 @@ fusion_generator: ### 三、快速上手 #### 合成单张图 - -1. 
运行tools/synth_image,生成示例图片: - +输入一张风格图和一段文字语料,运行tools/synth_image,合成单张图片,结果图像保存在当前目录下: ```python -python3 -m tools.synth_image -c configs/config.yml +python3 -m tools.synth_image -c configs/config.yml --style_image examples/style_images/2.jpg --text_corpus PaddleOCR --language en ``` +* 注意:语言选项和语料相对应,目前该工具只支持英文、简体中文和韩语。 + +例如,输入如下图片和语料"PaddleOCR": + +
+ +
-2. 运行后,会生成`fake_busion.jpg`,即为最终结果。 +生成合成数据`fake_fusion.jpg`:
-除此之外,程序还会生成并保存中间结果: - * `fake_bg.jpg`:为风格参考图去掉文字后的背景; - * `fake_text.jpg`:是用提供的字符串,仿照风格参考图中文字的风格,生成在灰色背景上的文字图片。 -3. 如果您想尝试其他风格图像和文字的效果,可以添加style_image,text_corpus和language参数: -```python -python3 -m tools.synth_image -c configs/config.yml --style_image examples/style_images/2.jpg --text_corpus PaddleOCR --language en -``` - * 注意:语言选项和语料相对应,目前我们支持英文、简体中文和韩语。 +除此之外,程序还会生成并保存中间结果`fake_bg.jpg`:为风格参考图去掉文字后的背景; + +
+ +
-4. 在`tools/synth_image.py`中,我们还提供了一个`batch_synth_images`方法,可以两两组合语料和图片,批量生成一批数据。 +`fake_text.jpg`:是用提供的字符串,仿照风格参考图中文字的风格,生成在灰色背景上的文字图片。 + +
+ +
#### 批量合成 +在实际应用场景中,经常需要批量合成图片,补充到训练集中。StyleText可以使用一批风格图片和语料,批量合成数据。合成过程如下: -在开始合成数据前,需要准备一些素材。 - -首先,需要风格图片作为合成图片的参考依据,这些数据可以是用作训练OCR识别模型的数据集。本例中使用带有标注文件的数据集作为风格图片. +1. 在`configs/dataset_config.yml`中配置目标场景风格图像和语料的路径,具体如下: -1. 在`configs/dataset_config.yml`中配置输入数据路径。 + * `Global`: + * `output_dir:`:保存合成数据的目录。 * `StyleSampler`: - * `method`:使用的风格图片采样方法; * `image_home`:风格图片目录; * `label_file`:风格图片路径列表文件,如果所用数据集有label,则label_file为label文件路径; * `with_label`:标志`label_file`是否为label文件。 @@ -94,23 +100,20 @@ python3 -m tools.synth_image -c configs/config.yml --style_image examples/style_ * `language`:语料的语种; * `corpus_file`: 语料文件路径。 - 我们提供了一批中英韩5w通用数据供您试用 ([下载地址](https://paddleocr.bj.bcebos.com/dygraph_v2.0/style_text/chkoen_5w.tar) ),下面给出了一些示例: + StyleText也提供了一批中英韩5万张通用场景数据用作文本风格图像,便于合成场景丰富的文本图像,下图给出了一些示例。 + + 中英韩5万张通用场景数据: [下载地址](https://paddleocr.bj.bcebos.com/dygraph_v2.0/style_text/chkoen_5w.tar) +
+ 2. 运行`tools/synth_dataset`合成数据: ``` bash python -m tools.synth_dataset -c configs/dataset_config.yml ``` -3. 如果您想使用并行方式来快速合成数据,可以通过启动多个进程,在启动时需要指定不同的`tag`(`-t`),如下所示: - - ```bash - python3 -m tools.synth_dataset -t 0 -c configs/dataset_config.yml - python3 -m tools.synth_dataset -t 1 -c configs/dataset_config.yml - ``` - ### 四、应用案例 下面以金属表面英文数字识别和通用韩语识别两个场景为例,说明使用StyleText合成数据,来提升文本识别效果的实际案例。下图给出了一些真实场景图像和合成图像的示例: @@ -127,7 +130,8 @@ python3 -m tools.synth_image -c configs/config.yml --style_image examples/style_ | 随机背景 | 韩语 | 5631 | 1230 | 0.3012 | 100000 | 0.5057 | 20% | -### 项目结构 + +### 五、代码结构 ``` style_text_rec |-- arch diff --git a/StyleText/doc/images/7.jpg b/StyleText/doc/images/7.jpg new file mode 100644 index 0000000000000000000000000000000000000000..60a4e0ee6ae3d42cc43c43747d72a837bc170f9d Binary files /dev/null and b/StyleText/doc/images/7.jpg differ diff --git a/StyleText/doc/images/8.jpg b/StyleText/doc/images/8.jpg new file mode 100644 index 0000000000000000000000000000000000000000..fbed5a7bb5368090e612933bba8f57ec1a74a4c4 Binary files /dev/null and b/StyleText/doc/images/8.jpg differ diff --git a/configs/det/det_r50_vd_sast_icdar15.yml b/configs/det/det_r50_vd_sast_icdar15.yml old mode 100644 new mode 100755 index 7ca93cecec3dbf7152c1d509c4d8ca614dec388f..a989bc8fc754ca88e3bff2de2a6db1060301fdd5 --- a/configs/det/det_r50_vd_sast_icdar15.yml +++ b/configs/det/det_r50_vd_sast_icdar15.yml @@ -61,8 +61,8 @@ Train: dataset: name: SimpleDataSet data_dir: ./train_data/ - label_file_path: [./train_data/art_latin_icdar_14pt/train_no_tt_test/train_label_json.txt, ./train_data/total_text_icdar_14pt/train_label_json.txt] - data_ratio_list: [0.5, 0.5] + label_file_list: [./train_data/icdar2013/train_label_json.txt, ./train_data/icdar2015/train_label_json.txt, ./train_data/icdar17_mlt_latin/train_label_json.txt, ./train_data/coco_text_icdar_4pts/train_label_json.txt] + ratio_list: [0.1, 0.45, 0.3, 0.15] transforms: - DecodeImage: # load image img_mode: BGR diff --git a/configs/det/det_r50_vd_sast_totaltext.yml b/configs/det/det_r50_vd_sast_totaltext.yml old mode 100644 new mode 100755 index a9a037c8bd940d04d08d0dae3a90139ead68cdba..257ecf2490bdde6280cf4b20bb66f2457b4b833b --- a/configs/det/det_r50_vd_sast_totaltext.yml +++ b/configs/det/det_r50_vd_sast_totaltext.yml @@ -60,8 +60,8 @@ Metric: Train: dataset: name: SimpleDataSet - label_file_list: [./train_data/icdar2013/train_label_json.txt, ./train_data/icdar2015/train_label_json.txt, ./train_data/icdar17_mlt_latin/train_label_json.txt, ./train_data/coco_text_icdar_4pts/train_label_json.txt] - ratio_list: [0.1, 0.45, 0.3, 0.15] + label_file_path: [./train_data/art_latin_icdar_14pt/train_no_tt_test/train_label_json.txt, ./train_data/total_text_icdar_14pt/train_label_json.txt] + data_ratio_list: [0.5, 0.5] transforms: - DecodeImage: # load image img_mode: BGR diff --git a/doc/doc_ch/algorithm_overview.md b/doc/doc_ch/algorithm_overview.md index 997bc1f1978b93e45472b5698341fa91dd9e7e84..a23bfcb112d54719298709d5e253f609ec9dea74 100755 --- a/doc/doc_ch/algorithm_overview.md +++ b/doc/doc_ch/algorithm_overview.md @@ -21,7 +21,7 @@ PaddleOCR开源的文本检测算法列表: |EAST|MobileNetV3|78.24%|79.15%|78.69%|[下载链接](https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/det_mv3_east_v2.0_train.tar)| |DB|ResNet50_vd|86.41%|78.72%|82.38%|[下载链接](https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/det_r50_vd_db_v2.0_train.tar)| |DB|MobileNetV3|77.29%|73.08%|75.12%|[下载链接](https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/det_mv3_db_v2.0_train.tar)| 
-|SAST|ResNet50_vd|91.83%|81.80%|86.52%|[下载链接](https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/det_r50_vd_sast_icdar15_v2.0_train.tar))| +|SAST|ResNet50_vd|91.83%|81.80%|86.52%|[下载链接](https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/det_r50_vd_sast_icdar15_v2.0_train.tar)| 在Total-text文本检测公开数据集上,算法效果如下: @@ -40,7 +40,7 @@ PaddleOCR文本检测算法的训练和使用请参考文档教程中[模型训 PaddleOCR基于动态图开源的文本识别算法列表: - [x] CRNN([paper](https://arxiv.org/abs/1507.05717) )(ppocr推荐) - [x] Rosetta([paper](https://arxiv.org/abs/1910.05085)) -- [ ] STAR-Net([paper](http://www.bmva.org/bmvc/2016/papers/paper043/index.html)) +- [ ] STAR-Net([paper](http://www.bmva.org/bmvc/2016/papers/paper043/index.html)) coming soon - [ ] RARE([paper](https://arxiv.org/abs/1603.03915v1)) coming soon - [ ] SRN([paper](https://arxiv.org/abs/2003.12294)) coming soon diff --git a/doc/doc_ch/models_list.md b/doc/doc_ch/models_list.md index 285c9899d867c0365c98be8ca9844aa555389356..f25c642e6d9b7d96fca315c31dce986efefd1960 100644 --- a/doc/doc_ch/models_list.md +++ b/doc/doc_ch/models_list.md @@ -66,8 +66,3 @@ PaddleOCR提供的可下载模型包括`推理模型`、`训练模型`、`预训 | --- | --- | --- | --- | --- | |ch_ppocr_mobile_slim_v2.0_cls|slim量化版模型|[cls_mv3.yml](../../configs/cls/cls_mv3.yml)| |[推理模型 (coming soon)](link) / [训练模型](link) / [slim模型](link) | |ch_ppocr_mobile_v2.0_cls|原始模型|[cls_mv3.yml](../../configs/cls/cls_mv3.yml)|1.38M|[推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_train.tar) | - - -## OCR模型列表(V2.0,2020年12月15日更新) - -[2.0系列模型地址](https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_ch/models_list.md) diff --git a/doc/doc_en/algorithm_overview_en.md b/doc/doc_en/algorithm_overview_en.md index 6e4140e22813c0965009ffdef2eefddcec77489d..7f1afd027b9b56ad9a1f7a10f3f6b1fc34587252 100755 --- a/doc/doc_en/algorithm_overview_en.md +++ b/doc/doc_en/algorithm_overview_en.md @@ -23,7 +23,7 @@ On the ICDAR2015 dataset, the text detection result is as follows: |EAST|MobileNetV3|78.24%|79.15%|78.69%|[Download link](https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/det_mv3_east_v2.0_train.tar)| |DB|ResNet50_vd|86.41%|78.72%|82.38%|[Download link](https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/det_r50_vd_db_v2.0_train.tar)| |DB|MobileNetV3|77.29%|73.08%|75.12%|[Download link](https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/det_mv3_db_v2.0_train.tar)| -|SAST|ResNet50_vd|91.83%|81.80%|86.52%|[Download link](https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/det_r50_vd_sast_icdar15_v2.0_train.tar))| +|SAST|ResNet50_vd|91.83%|81.80%|86.52%|[Download link](https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/det_r50_vd_sast_icdar15_v2.0_train.tar)| On Total-Text dataset, the text detection result is as follows: @@ -33,7 +33,7 @@ On Total-Text dataset, the text detection result is as follows: **Note:** Additional data, like icdar2013, icdar2017, COCO-Text, ArT, was added to the model training of SAST. Download English public dataset in organized format used by PaddleOCR from [Baidu Drive](https://pan.baidu.com/s/12cPnZcVuV1zn5DOd4mqjVw) (download code: 2bpi). -For the training guide and use of PaddleOCR text detection algorithms, please refer to the document [Text detection model training/evaluation/prediction](./doc/doc_en/detection_en.md) +For the training guide and use of PaddleOCR text detection algorithms, please refer to the document [Text detection model training/evaluation/prediction](./detection_en.md) ### 2. 
Text Recognition Algorithm @@ -41,7 +41,7 @@ For the training guide and use of PaddleOCR text detection algorithms, please re PaddleOCR open-source text recognition algorithms list: - [x] CRNN([paper](https://arxiv.org/abs/1507.05717)) - [x] Rosetta([paper](https://arxiv.org/abs/1910.05085)) -- [ ] STAR-Net([paper](http://www.bmva.org/bmvc/2016/papers/paper043/index.html)) +- [ ] STAR-Net([paper](http://www.bmva.org/bmvc/2016/papers/paper043/index.html)) coming soon - [ ] RARE([paper](https://arxiv.org/abs/1603.03915v1)) coming soon - [ ] SRN([paper](https://arxiv.org/abs/2003.12294) )(Baidu Self-Research) coming soon @@ -54,4 +54,4 @@ Refer to [DTRB](https://arxiv.org/abs/1904.01906), the training and evaluation r |CRNN|Resnet34_vd|82.76%|rec_r34_vd_none_bilstm_ctc|[Download link](https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/rec_r34_vd_none_bilstm_ctc_v2.0_train.tar)| |CRNN|MobileNetV3|79.97%|rec_mv3_none_bilstm_ctc|[Download link](https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/rec_mv3_none_bilstm_ctc_v2.0_train.tar)| -Please refer to the document for training guide and use of PaddleOCR text recognition algorithms [Text recognition model training/evaluation/prediction](./doc/doc_en/recognition_en.md) +Please refer to the document for training guide and use of PaddleOCR text recognition algorithms [Text recognition model training/evaluation/prediction](./recognition_en.md) diff --git a/doc/doc_en/models_list_en.md b/doc/doc_en/models_list_en.md index f92820232e140ec255ca1fe2a62e85280263dc67..dc760fd7cba0d891e3b6844ef0530f8dbbd384cc 100644 --- a/doc/doc_en/models_list_en.md +++ b/doc/doc_en/models_list_en.md @@ -65,7 +65,3 @@ The downloadable models provided by PaddleOCR include `inference model`, `traine |ch_ppocr_mobile_slim_v2.0_cls|Slim quantized model|[cls_mv3.yml](../../configs/cls/cls_mv3.yml)| |[inference model (coming soon)](link) / [trained model](link) / [slim model](link) | |ch_ppocr_mobile_v2.0_cls|Original model|[cls_mv3.yml](../../configs/cls/cls_mv3.yml)|1.38M|[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_train.tar) | - -## OCR model list (V2.0,updated on 2020.12.15) - -[2.0 series model address](https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_ch/models_list.md) diff --git a/doc/doc_en/visualization_en.md b/doc/doc_en/visualization_en.md index 732ca9ea6240f3b00d6c902b24296c53fe6af245..f9c455e5b3510a9f262c6bf59b8adfbaef3fa01d 100644 --- a/doc/doc_en/visualization_en.md +++ b/doc/doc_en/visualization_en.md @@ -29,6 +29,6 @@ ## (multilingual)_ppocr_mobile_2.0
- - + +