diff --git a/doc/doc_ch/dataset/ocr_datasets.md b/doc/doc_ch/dataset/ocr_datasets.md
index 7d71880e878ab00b1edc03b32543909b5e98c84c..1a60d0b338abd24b2014cdc40646a2da85abbe73
--- a/doc/doc_ch/dataset/ocr_datasets.md
+++ b/doc/doc_ch/dataset/ocr_datasets.md
@@ -1,12 +1,13 @@
 ## OCR datasets

 - [1. Text detection](#1)
+  - [1.1 ICDAR 2015](#11)
 - [2. Text recognition](#2)

 Here is a list of public datasets commonly used in OCR. It is continuously updated, and contributions of new datasets are welcome~

-#### 1. Text detection
+### 1. Text detection

 | dataset | image download link | PPOCR annotation download link |
 |---|---|---|
@@ -14,8 +15,37 @@
 | ctw1500 |https://paddleocr.bj.bcebos.com/dataset/ctw1500.zip| Included in the downloaded image zip |
 | total text |https://paddleocr.bj.bcebos.com/dataset/total_text.tar| Included in the downloaded image zip |

+
+#### 1.1 ICDAR 2015
+The icdar2015 dataset contains 1000 training images and 500 test images. It can be downloaded from the link in the table above; registration is required for the first download.
+After registering and logging in, download the parts marked in the red box in the figure below. Save the content downloaded via `Training Set Images` in the `icdar_c4_train_imgs` folder, and the content downloaded via `Test Set Images` in the `ch4_test_images` folder.
+
+[Figure: ICDAR 2015 download page, with the items to download marked in red]
+
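+As an optional quick check (a minimal sketch that is not part of PaddleOCR; `data_dir` is a placeholder for wherever you saved the two folders), you can confirm that the two folders contain the expected 1000 and 500 images:
+
+```python
+import os
+
+# Point this at the directory where the two downloaded folders were saved.
+data_dir = "./"
+
+for folder, expected in [("icdar_c4_train_imgs", 1000), ("ch4_test_images", 500)]:
+    num_images = len(os.listdir(os.path.join(data_dir, folder)))
+    print("{}: {} images (expected {})".format(folder, num_images, expected))
+```
+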
+Extract the downloaded dataset into your working directory, assuming it is extracted under PaddleOCR/train_data/. Then download the converted annotation files from the table above.
+
+PaddleOCR also provides a data format conversion script that converts the labels downloaded from the official website into the supported data format. The conversion tool is `ppocr/utils/gen_label.py`; here the training set is used as an example:
+
+```
+# Convert the label files downloaded from the official website into train_icdar2015_label.txt
+python gen_label.py --mode="det" --root_path="/path/to/icdar_c4_train_imgs/"  \
+                    --input_path="/path/to/ch4_training_localization_transcription_gt" \
+                    --output_label="/path/to/train_icdar2015_label.txt"
+```
+
+After extracting the dataset and downloading the annotation files, PaddleOCR/train_data/ contains two folders and two files. Organize the icdar2015 dataset as follows:
+```
+/PaddleOCR/train_data/icdar2015/text_localization/
+  └─ icdar_c4_train_imgs/         training data of the icdar 2015 dataset
+  └─ ch4_test_images/             test data of the icdar 2015 dataset
+  └─ train_icdar2015_label.txt    training annotations of the icdar 2015 dataset
+  └─ test_icdar2015_label.txt     test annotations of the icdar 2015 dataset
+```
+

-#### 2. Text recognition
+### 2. Text recognition

 | dataset | image download link | PPOCR annotation download link |
 |---|---|---------------------------------------------------------------------|
diff --git a/doc/doc_ch/detection.md b/doc/doc_ch/detection.md
index 9dc910c5cdc5fcca522dfa418bb34591a46faf26..8c4a9100a740af639d2d92388156a5570b131a16
--- a/doc/doc_ch/detection.md
+++ b/doc/doc_ch/detection.md
@@ -1,10 +1,12 @@
-
 # Text detection

 This section uses the icdar2015 dataset as an example to introduce how to train, evaluate, and test a detection model in PaddleOCR.

 - [1. Data and model preparation](#1--------)
   * [1.1 Data preparation](#11-----)
+    * [1.1.1 Public dataset](#111-----)
+    * [1.1.2 Custom dataset](#112-----)
   * [1.2 Download the pre-trained model](#12--------)
 - [2. Training](#2-----)
   * [2.1 Start training](#21-----)
@@ -26,42 +28,15 @@
 ## 1.1 Data preparation

-The icdar2015 TextLocalization dataset is a text detection dataset that contains 1000 training images and 500 test images.
-The icdar2015 dataset can be downloaded from the [official website](https://rrc.cvc.uab.es/?ch=4&com=downloads); registration is required for the first download.
-After registering and logging in, download the parts marked in the red box in the figure below. Save the content downloaded via `Training Set Images` in the `icdar_c4_train_imgs` folder, and the content downloaded via `Test Set Images` in the `ch4_test_images` folder.
-
-[Figure: ICDAR 2015 download page, with the items to download marked in red]
-
+
+### 1.1.1 Public dataset
-Extract the downloaded dataset into your working directory, assuming it is extracted under PaddleOCR/train_data/. In addition, PaddleOCR consolidates the scattered annotation files into standalone annotation files, which you can download with wget:
-```shell
-# Under the PaddleOCR path
-cd PaddleOCR/
-wget -P ./train_data/  https://paddleocr.bj.bcebos.com/dataset/train_icdar2015_label.txt
-wget -P ./train_data/  https://paddleocr.bj.bcebos.com/dataset/test_icdar2015_label.txt
-```
+
+Public datasets can be downloaded and prepared by referring to [ocr_datasets](./dataset/ocr_datasets.md).
-PaddleOCR also provides a data format conversion script that converts the labels downloaded from the official website into the supported data format. The conversion tool is `ppocr/utils/gen_label.py`; here the training set is used as an example:
+
+### 1.1.2 Custom dataset
-
-```
-# Convert the label files downloaded from the official website into train_icdar2015_label.txt
-python gen_label.py --mode="det" --root_path="/path/to/icdar_c4_train_imgs/"  \
-                    --input_path="/path/to/ch4_training_localization_transcription_gt" \
-                    --output_label="/path/to/train_icdar2015_label.txt"
-```
-
-After extracting the dataset and downloading the annotation files, PaddleOCR/train_data/ contains two folders and two files. Organize the icdar2015 dataset as follows:
-```
-/PaddleOCR/train_data/icdar2015/text_localization/
-  └─ icdar_c4_train_imgs/         training data of the icdar dataset
-  └─ ch4_test_images/             test data of the icdar dataset
-  └─ train_icdar2015_label.txt    training annotations of the icdar dataset
-  └─ test_icdar2015_label.txt     test annotations of the icdar dataset
-```
-
-The provided annotation file format is as follows, with fields separated by "\t":
+The annotation file format supported by the PaddleOCR text detection algorithms is as follows, with fields separated by "\t":
 ```
 " Image file name                    Image annotation information encoded by json.dumps"
 ch4_test_images/img_61.jpg    [{"transcription": "MASA", "points": [[310, 104], [416, 141], [418, 216], [312, 179]]}, {...}]
@@ -69,7 +44,7 @@ ch4_test_images/img_61.jpg    [{"transcription": "MASA", "points": [[310, 104],
 ```

 The image annotation before json.dumps encoding is a list containing multiple dictionaries. `points` holds the coordinates (x, y) of the four corners of the text box, starting from the point at the top-left corner and proceeding clockwise. `transcription` is the text of the current text box. **When its content is "###", the text box is invalid and is skipped during training.**
-If you want to train on other datasets, you can build annotation files in the above format.
+If you want to train on a dataset that we do not provide, you can build an annotation file in the above format.
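+
+For illustration only (this sketch is not part of PaddleOCR, and the image and file names in it are placeholders), such an annotation file can be written with `json.dumps`:
+
+```python
+import json
+
+# One entry per image: the image path plus the boxes it contains. Each box stores the
+# four corner points (x, y) clockwise from the top-left corner and the box text;
+# "###" marks a box that should be ignored during training.
+custom_annotations = {
+    "custom_images/img_001.jpg": [
+        {"transcription": "HELLO", "points": [[10, 20], [120, 20], [120, 60], [10, 60]]},
+        {"transcription": "###", "points": [[200, 30], [260, 30], [260, 70], [200, 70]]},
+    ],
+}
+
+with open("train_custom_label.txt", "w", encoding="utf-8") as f:
+    for image_path, boxes in custom_annotations.items():
+        f.write("{}\t{}\n".format(image_path, json.dumps(boxes, ensure_ascii=False)))
+```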
+ +### 1.1.1 公开数据集 -将下载到的数据集解压到工作目录下,假设解压在 PaddleOCR/train_data/下。另外,PaddleOCR将零散的标注文件整理成单独的标注文件 -,您可以通过wget的方式进行下载。 -```shell -# 在PaddleOCR路径下 -cd PaddleOCR/ -wget -P ./train_data/ https://paddleocr.bj.bcebos.com/dataset/train_icdar2015_label.txt -wget -P ./train_data/ https://paddleocr.bj.bcebos.com/dataset/test_icdar2015_label.txt -``` +公开数据集可参考 [ocr_datasets](./dataset/ocr_datasets.md) 进行下载和准备。 -PaddleOCR 也提供了数据格式转换脚本,可以将官网 label 转换支持的数据格式。 数据转换工具在 `ppocr/utils/gen_label.py`, 这里以训练集为例: + +### 1.1.2 自定义数据集 -``` -# 将官网下载的标签文件转换为 train_icdar2015_label.txt -python gen_label.py --mode="det" --root_path="/path/to/icdar_c4_train_imgs/" \ - --input_path="/path/to/ch4_training_localization_transcription_gt" \ - --output_label="/path/to/train_icdar2015_label.txt" -``` - -解压数据集和下载标注文件后,PaddleOCR/train_data/ 有两个文件夹和两个文件,按照如下方式组织icdar2015数据集: -``` -/PaddleOCR/train_data/icdar2015/text_localization/ - └─ icdar_c4_train_imgs/ icdar数据集的训练数据 - └─ ch4_test_images/ icdar数据集的测试数据 - └─ train_icdar2015_label.txt icdar数据集的训练标注 - └─ test_icdar2015_label.txt icdar数据集的测试标注 -``` - -提供的标注文件格式如下,中间用"\t"分隔: +PaddleOCR 文本检测算法支持的标注文件格式如下,中间用"\t"分隔: ``` " 图像文件名 json.dumps编码的图像标注信息" ch4_test_images/img_61.jpg [{"transcription": "MASA", "points": [[310, 104], [416, 141], [418, 216], [312, 179]]}, {...}] @@ -69,7 +44,7 @@ ch4_test_images/img_61.jpg [{"transcription": "MASA", "points": [[310, 104], json.dumps编码前的图像标注信息是包含多个字典的list,字典中的 `points` 表示文本框的四个点的坐标(x, y),从左上角的点开始顺时针排列。 `transcription` 表示当前文本框的文字,**当其内容为“###”时,表示该文本框无效,在训练时会跳过。** -如果您想在其他数据集上训练,可以按照上述形式构建标注文件。 +如果您想在我们未提供的数据集上训练,可以按照上述形式构建标注文件。 ## 1.2 下载预训练模型 @@ -178,7 +153,7 @@ args1: args1 ## 2.4 混合精度训练 如果您想进一步加快训练速度,可以使用[自动混合精度训练](https://www.paddlepaddle.org.cn/documentation/docs/zh/guides/01_paddle2.0_introduction/basic_concept/amp_cn.html), 以单机单卡为例,命令如下: - + ```shell python3 tools/train.py -c configs/det/det_mv3_db.yml \ -o Global.pretrained_model=./pretrain_models/MobileNetV3_large_x0_5_pretrained \ @@ -197,7 +172,7 @@ python3 -m paddle.distributed.launch --ips="xx.xx.xx.xx,xx.xx.xx.xx" --gpus '0,1 **注意:** 采用多机多卡训练时,需要替换上面命令中的ips值为您机器的地址,机器之间需要能够相互ping通。另外,训练时需要在多个机器上分别启动命令。查看机器ip地址的命令为`ifconfig`。 - + ## 2.6 知识蒸馏训练 @@ -211,12 +186,12 @@ PaddleOCR支持了基于知识蒸馏的检测模型训练过程,更多内容 ## 2.7 其他训练环境 - Windows GPU/CPU - + - macOS - + - Linux DCU - - + + # 3. 模型评估与预测 diff --git a/doc/doc_en/dataset/ocr_datasets_en.md b/doc/doc_en/dataset/ocr_datasets_en.md index ddddd3abd761a830ede8497df924e34db8853162..30c5c0b787656be9d3a9b5c85e4e4d4e92bfadf7 100644 --- a/doc/doc_en/dataset/ocr_datasets_en.md +++ b/doc/doc_en/dataset/ocr_datasets_en.md @@ -1,6 +1,7 @@ ## OCR datasets - [1. text detection](#1) + - [1.1 ICDAR 2015](#11) - [2. text recognition](#2) Here is a list of public datasets commonly used in OCR, which are being continuously updated. Welcome to contribute datasets~ @@ -14,6 +15,38 @@ Here is a list of public datasets commonly used in OCR, which are being continuo | ctw1500 | https://paddleocr.bj.bcebos.com/dataset/ctw1500.zip | Included in the downloaded image zip | | total text | https://paddleocr.bj.bcebos.com/dataset/total_text.tar | Included in the downloaded image zip | + +#### 1.1 ICDAR 2015 + +The icdar2015 dataset contains train set which has 1000 images obtained with wearable cameras and test set which has 500 images obtained with wearable cameras. The icdar2015 dataset can be downloaded from the link in the table above. Registration is required for downloading. + + +After registering and logging in, download the part marked in the red box in the figure below. 
And, the content downloaded by `Training Set Images` should be saved as the folder `icdar_c4_train_imgs`, and the content downloaded by `Test Set Images` is saved as the folder `ch4_test_images` + +
+ +
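+
+For reference, the sketch below illustrates the mapping such a conversion performs: each line of an ICDAR 2015 ground-truth file (`x1,y1,x2,y2,x3,y3,x4,y4,transcription`) becomes one `image_path\t<json list>` line in the PPOCR label file. It is a simplified illustration rather than the actual `gen_label.py` implementation, and it assumes one ground-truth file per image, named like `gt_img_1.txt` for `img_1.jpg`:
+
+```python
+import json
+import os
+
+def convert_icdar2015(gt_dir, img_dir, output_label):
+    """Write a PPOCR-style detection label file from ICDAR 2015 ground-truth files."""
+    with open(output_label, "w", encoding="utf-8") as out:
+        for gt_name in sorted(os.listdir(gt_dir)):
+            # Assumed naming: gt_img_1.txt describes img_1.jpg.
+            img_name = gt_name.replace("gt_", "").replace(".txt", ".jpg")
+            boxes = []
+            with open(os.path.join(gt_dir, gt_name), encoding="utf-8-sig") as f:
+                for line in f:
+                    if not line.strip():
+                        continue
+                    parts = line.strip().split(",")
+                    points = [[int(parts[i]), int(parts[i + 1])] for i in range(0, 8, 2)]
+                    transcription = ",".join(parts[8:])  # the text itself may contain commas
+                    boxes.append({"transcription": transcription, "points": points})
+            out.write("{}\t{}\n".format(img_dir.rstrip("/") + "/" + img_name, json.dumps(boxes)))
+```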
+
+After decompressing the dataset and downloading the annotation files, PaddleOCR/train_data/ has two folders and two files, which are:
+```
+/PaddleOCR/train_data/icdar2015/text_localization/
+  └─ icdar_c4_train_imgs/         Training data of the icdar dataset
+  └─ ch4_test_images/             Testing data of the icdar dataset
+  └─ train_icdar2015_label.txt    Training annotation of the icdar dataset
+  └─ test_icdar2015_label.txt     Test annotation of the icdar dataset
+```
+

 #### 2. text recognition
diff --git a/doc/doc_en/detection_en.md b/doc/doc_en/detection_en.md
index 618e20fb5e2a9a7afd67bb7d15646971b88365ee..87f0855559467de089358e9474a948a8b0d345e4
--- a/doc/doc_en/detection_en.md
+++ b/doc/doc_en/detection_en.md
@@ -4,6 +4,8 @@ This section uses the icdar2015 dataset as an example to introduce the training,

 - [1. Data and Weights Preparation](#1-data-and-weights-preparatio)
   * [1.1 Data Preparation](#11-data-preparation)
+    * [1.1.1 Public dataset](#111-public-dataset)
+    * [1.1.2 Custom dataset](#112-custom-dataset)
   * [1.2 Download Pre-trained Model](#12-download-pretrained-model)
 - [2. Training](#2-training)
   * [2.1 Start Training](#21-start-training)
@@ -20,33 +22,12 @@ This section uses the icdar2015 dataset as an example to introduce the training,

 ### 1.1 Data Preparation

-The icdar2015 dataset contains train set which has 1000 images obtained with wearable cameras and test set which has 500 images obtained with wearable cameras. The icdar2015 can be obtained from [official website](https://rrc.cvc.uab.es/?ch=4&com=downloads). Registration is required for downloading.
+### 1.1.1 Public dataset
+
+Public datasets can be downloaded and prepared by referring to [ocr_datasets](./dataset/ocr_datasets_en.md).
+
+### 1.1.2 Custom dataset

-After registering and logging in, download the part marked in the red box in the figure below. And, the content downloaded by `Training Set Images` should be saved as the folder `icdar_c4_train_imgs`, and the content downloaded by `Test Set Images` is saved as the folder `ch4_test_images`
-
-[Figure: ICDAR 2015 download page, with the items to download marked in red]
-
-Decompress the downloaded dataset to the working directory, assuming it is decompressed under PaddleOCR/train_data/. In addition, PaddleOCR organizes many scattered annotation files into two separate annotation files for train and test respectively, which can be downloaded by wget:
-```shell
-# Under the PaddleOCR path
-cd PaddleOCR/
-wget -P ./train_data/  https://paddleocr.bj.bcebos.com/dataset/train_icdar2015_label.txt
-wget -P ./train_data/  https://paddleocr.bj.bcebos.com/dataset/test_icdar2015_label.txt
-```
-
-After decompressing the data set and downloading the annotation file, PaddleOCR/train_data/ has two folders and two files, which are:
-```
-/PaddleOCR/train_data/icdar2015/text_localization/
-  └─ icdar_c4_train_imgs/         Training data of icdar dataset
-  └─ ch4_test_images/             Testing data of icdar dataset
-  └─ train_icdar2015_label.txt    Training annotation of icdar dataset
-  └─ test_icdar2015_label.txt     Test annotation of icdar dataset
-```
-
-The provided annotation file format is as follow, separated by "\t":
+The annotation file format supported by the PaddleOCR text detection algorithm is as follows, with fields separated by "\t":
 ```
 " Image file name                    Image annotation information encoded by json.dumps"
 ch4_test_images/img_61.jpg    [{"transcription": "MASA", "points": [[310, 104], [416, 141], [418, 216], [312, 179]]}, {...}]