Unverified · Commit 122454f2 · authored by zhoujun, committed by GitHub

Merge pull request #7307 from WenmuZhou/table_pr

add eg of TEDS
...@@ -7,7 +7,7 @@ English | [简体中文](README_ch.md)
- [3. Result](#3-result)
- [4. How to use](#4-how-to-use)
  - [4.1 Quick start](#41-quick-start)
  - [4.2 Training, Evaluation and Inference](#42-training-evaluation-and-inference)
  - [4.3 Calculate TEDS](#43-calculate-teds)
- [5. Reference](#5-reference)
...@@ -51,6 +51,8 @@ The performance indicators are explained as follows:
### 4.1 Quick start
PP-Structure currently provides table recognition models in both Chinese and English. For the model link, see [models_list](../docs/models_list.md). The following takes the Chinese table recognition model as an example to introduce how to recognize a table.
Use the following commands to quickly recognize a table.
```bash
...@@ -79,7 +81,11 @@ python3.7 table/predict_table.py \
After the run completes, the Excel table for each image is saved to the directory specified by the `output` field, and an HTML file is generated in the same directory for visually checking the cell coordinates and the recognized table.
**NOTE**
1. To use the English table recognition model, download the English text detection and recognition models and the English table recognition model from [models_list](../docs/models_list_en.md), and replace `table_structure_dict_ch.txt` with `table_structure_dict.txt`.
2. To use the TableRec-RARE model, replace `table_structure_dict_ch.txt` with `table_structure_dict.txt`, and add the parameter `--merge_no_span_structure=False`.
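The `--merge_no_span_structure` flag decides whether a non-spanning cell's `<td>` and `</td>` are handled as one combined structure token or as two separate tokens (as TableRec-RARE expects). The sketch below illustrates the merging idea only; the function name and token handling are assumptions for illustration, not PaddleOCR's actual implementation:

```python
def merge_no_span(tokens):
    # Merge each "<td>" immediately followed by "</td>" into a single
    # "<td></td>" token; spanning cells like '<td colspan="2">' are kept as-is.
    out, i = [], 0
    while i < len(tokens):
        if tokens[i] == "<td>" and i + 1 < len(tokens) and tokens[i + 1] == "</td>":
            out.append("<td></td>")
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

print(merge_no_span(["<tr>", "<td>", "</td>", '<td colspan="2">', "</td>", "</tr>"]))
# → ['<tr>', '<td></td>', '<td colspan="2">', '</td>', '</tr>']
```

Merging shortens the structure sequence the model must predict, which is why disabling it requires the matching (unmerged) dictionary file.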
### 4.2 Training, Evaluation and Inference
The training, evaluation and inference process of the text detection model is described in [detection](../../doc/doc_en/detection_en.md)
...@@ -114,9 +120,35 @@ python3 table/eval_table.py \
    --gt_path=path/to/gt.txt
```
To evaluate on the PubTabNet dataset with the English table recognition model:
```bash
cd PaddleOCR/ppstructure
# Download the model
mkdir inference && cd inference
# Download the text detection model trained on the PubTabNet dataset and unzip it
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_det_infer.tar && tar xf en_ppocr_mobile_v2.0_table_det_infer.tar
# Download the text recognition model trained on the PubTabNet dataset and unzip it
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_rec_infer.tar && tar xf en_ppocr_mobile_v2.0_table_rec_infer.tar
# Download the table recognition model trained on the PubTabNet dataset and unzip it
wget https://paddleocr.bj.bcebos.com/ppstructure/models/slanet/en_ppstructure_mobile_v2.0_SLANet_infer.tar && tar xf en_ppstructure_mobile_v2.0_SLANet_infer.tar
cd ..
python3 table/eval_table.py \
--det_model_dir=inference/en_ppocr_mobile_v2.0_table_det_infer \
--rec_model_dir=inference/en_ppocr_mobile_v2.0_table_rec_infer \
--table_model_dir=inference/en_ppstructure_mobile_v2.0_SLANet_infer \
--image_dir=train_data/table/pubtabnet/val/ \
--rec_char_dict_path=../ppocr/utils/dict/table_dict.txt \
--table_char_dict_path=../ppocr/utils/dict/table_structure_dict.txt \
--det_limit_side_len=736 \
--det_limit_type=min \
--gt_path=path/to/gt.txt
```
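In the command above, `--det_limit_side_len=736` together with `--det_limit_type=min` asks the detector's preprocessor to upscale any image whose shorter side is below 736 px. A rough sketch of that resizing rule follows; it ignores the rounding of the resized shape to multiples of 32 that the real preprocessor also applies, so treat it as an approximation:

```python
def det_resize_ratio(h, w, limit_side_len=736, limit_type="min"):
    # "min": upscale so the shorter side reaches limit_side_len;
    # "max": downscale so the longer side does not exceed it.
    if limit_type == "min":
        return limit_side_len / min(h, w) if min(h, w) < limit_side_len else 1.0
    return limit_side_len / max(h, w) if max(h, w) > limit_side_len else 1.0

# A 480x640 page scan: the shorter side (480) is scaled up toward 736.
print(round(det_resize_ratio(480, 640), 3))
```

Using `min` here keeps small table crops large enough for the detector, at the cost of more computation on already-large images.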
The output is:
```bash
teds: 95.89
```
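The `teds` value is the Tree-Edit-Distance-based Similarity from the PubTabNet work: TEDS(Ta, Tb) = 1 − EditDist(Ta, Tb) / max(|Ta|, |Tb|), computed over the HTML trees of the predicted and ground-truth tables. As a rough, self-contained illustration, the sketch below applies the same normalized-edit-distance idea to flat sequences of structure tags instead of running the full tree-edit-distance algorithm, and ignores cell text, which the real metric also compares:

```python
import re

def structure_tokens(html):
    # Extract structure tags (<table>, <tr>, <td>, </td>, ...), ignoring cell text.
    return re.findall(r"</?[a-z]+[^>]*>", html)

def edit_distance(a, b):
    # Levenshtein distance over token sequences, with a rolling 1-D table.
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[len(b)]

def teds_like(pred_html, gt_html):
    p, g = structure_tokens(pred_html), structure_tokens(gt_html)
    if not p and not g:
        return 1.0
    return 1.0 - edit_distance(p, g) / max(len(p), len(g))

pred = "<table><tr><td></td><td></td></tr></table>"        # one missing cell
gt   = "<table><tr><td></td><td></td><td></td></tr></table>"
print(round(teds_like(pred, gt), 4))  # → 0.8
```

A prediction missing one of three cells scores 0.8 here; the real metric would weigh the edit against the tree structure rather than a flat token list.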
## 5. Reference
...
...@@ -7,7 +7,7 @@
- [3. Demo](#3-效果演示)
- [4. How to use](#4-使用)
  - [4.1 Quick start](#41-快速开始)
  - [4.2 Model training, evaluation and inference](#42-模型训练评估与推理)
  - [4.3 Calculate TEDS](#43-计算teds)
- [5. Reference](#5-reference)
...@@ -57,6 +57,8 @@
### 4.1 Quick start
PP-Structure currently provides table recognition models for both Chinese and English. For the model links, see [models_list](../docs/models_list.md). The following takes the Chinese table recognition model as an example to show how to recognize a table.
Use the following commands to quickly recognize a table.
```bash
cd PaddleOCR/ppstructure
...@@ -67,7 +69,7 @@ mkdir inference && cd inference
wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar && tar xf ch_PP-OCRv3_det_infer.tar
# Download the PP-OCRv3 text recognition model and unzip it
wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_infer.tar && tar xf ch_PP-OCRv3_rec_infer.tar
# Download the PP-Structurev2 Chinese table recognition model and unzip it
wget https://paddleocr.bj.bcebos.com/ppstructure/models/slanet/ch_ppstructure_mobile_v2.0_SLANet_infer.tar && tar xf ch_ppstructure_mobile_v2.0_SLANet_infer.tar
cd ..
# Run table recognition
...@@ -82,7 +84,11 @@ python table/predict_table.py \
```
After the run completes, the Excel table for each image is saved to the directory specified by the `output` field, and an HTML file is generated in the same directory for visually checking the cell coordinates and the recognized table.
**NOTE**
1. To use the English model, download the English text detection and recognition models and the English table recognition model from [models_list](../docs/models_list.md), and replace `table_structure_dict_ch.txt` with `table_structure_dict.txt`.
2. To use the TableRec-RARE model, replace `table_structure_dict_ch.txt` with `table_structure_dict.txt`, and set the parameter `--merge_no_span_structure=False`.
### 4.2 Model training, evaluation and inference
The training, evaluation and inference process of the text detection model is described in [detection](../../doc/doc_ch/detection.md)
...@@ -117,9 +123,36 @@ python3 table/eval_table.py \
    --det_limit_type=min \
    --gt_path=path/to/gt.txt
```
To evaluate on the PubTabNet dataset with the English table recognition model:
```bash
cd PaddleOCR/ppstructure
# Download the models
mkdir inference && cd inference
# Download the text detection model trained on the PubTabNet dataset and unzip it
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_det_infer.tar && tar xf en_ppocr_mobile_v2.0_table_det_infer.tar
# Download the text recognition model trained on the PubTabNet dataset and unzip it
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_rec_infer.tar && tar xf en_ppocr_mobile_v2.0_table_rec_infer.tar
# Download the table recognition model trained on the PubTabNet dataset and unzip it
wget https://paddleocr.bj.bcebos.com/ppstructure/models/slanet/en_ppstructure_mobile_v2.0_SLANet_infer.tar && tar xf en_ppstructure_mobile_v2.0_SLANet_infer.tar
cd ..
python3 table/eval_table.py \
--det_model_dir=inference/en_ppocr_mobile_v2.0_table_det_infer \
--rec_model_dir=inference/en_ppocr_mobile_v2.0_table_rec_infer \
--table_model_dir=inference/en_ppstructure_mobile_v2.0_SLANet_infer \
--image_dir=train_data/table/pubtabnet/val/ \
--rec_char_dict_path=../ppocr/utils/dict/table_dict.txt \
--table_char_dict_path=../ppocr/utils/dict/table_structure_dict.txt \
--det_limit_side_len=736 \
--det_limit_type=min \
--gt_path=path/to/gt.txt
```
The output is:
```bash
teds: 95.89
```
## 5. Reference
...