# PP-Structure Quick Start

- [1. Install package](#1-install-package)
- [2. Use](#2-use)
  - [2.1 Use by command line](#21-use-by-command-line)
    - [2.1.1 image orientation + layout analysis + table recognition](#211-image-orientation--layout-analysis--table-recognition)
    - [2.1.2 layout analysis + table recognition](#212-layout-analysis--table-recognition)
    - [2.1.3 layout analysis](#213-layout-analysis)
    - [2.1.4 table recognition](#214-table-recognition)
    - [2.1.5 DocVQA](#215-docvqa)
  - [2.2 Use by code](#22-use-by-code)
    - [2.2.1 image orientation + layout analysis + table recognition](#221-image-orientation--layout-analysis--table-recognition)
    - [2.2.2 layout analysis + table recognition](#222-layout-analysis--table-recognition)
    - [2.2.3 layout analysis](#223-layout-analysis)
    - [2.2.4 table recognition](#224-table-recognition)
    - [2.2.5 DocVQA](#225-docvqa)
  - [2.3 Result description](#23-result-description)
    - [2.3.1 layout analysis + table recognition](#231-layout-analysis--table-recognition)
    - [2.3.2 DocVQA](#232-docvqa)
  - [2.4 Parameter Description](#24-parameter-description)

<a name="1"></a>
## 1. Install package

```bash
# Install paddleocr, version 2.5+ is recommended
pip3 install "paddleocr>=2.5"
# Install the DocVQA dependency package paddlenlp (if you do not use DocVQA, you can skip this step)
pip install paddlenlp

```

<a name="2"></a>
## 2. Use

<a name="21"></a>
### 2.1 Use by command line

<a name="211"></a>
#### 2.1.1 image orientation + layout analysis + table recognition
```bash
paddleocr --image_dir=PaddleOCR/ppstructure/docs/table/1.png --type=structure --image_orientation=true
```

<a name="212"></a>
#### 2.1.2 layout analysis + table recognition
```bash
paddleocr --image_dir=PaddleOCR/ppstructure/docs/table/1.png --type=structure
```

<a name="213"></a>
#### 2.1.3 layout analysis
```bash
paddleocr --image_dir=PaddleOCR/ppstructure/docs/table/1.png --type=structure --table=false --ocr=false
```

<a name="214"></a>
#### 2.1.4 table recognition
```bash
paddleocr --image_dir=PaddleOCR/ppstructure/docs/table/table.jpg --type=structure --layout=false
```

<a name="215"></a>
#### 2.1.5 DocVQA

Please refer to [Documentation Visual Q&A](../vqa/README.md).

<a name="22"></a>
### 2.2 Use by code

<a name="221"></a>
#### 2.2.1 image orientation + layout analysis + table recognition

```python
import os
import cv2
from paddleocr import PPStructure, draw_structure_result, save_structure_res

table_engine = PPStructure(show_log=True, image_orientation=True)

save_folder = './output'
img_path = 'PaddleOCR/ppstructure/docs/table/1.png'
img = cv2.imread(img_path)
result = table_engine(img)
save_structure_res(result, save_folder, os.path.basename(img_path).split('.')[0])

for line in result:
    line.pop('img')
    print(line)

from PIL import Image

font_path = 'PaddleOCR/doc/fonts/simfang.ttf' # font file provided in PaddleOCR
image = Image.open(img_path).convert('RGB')
im_show = draw_structure_result(image, result, font_path=font_path)
im_show = Image.fromarray(im_show)
im_show.save('result.jpg')
```

<a name="222"></a>
#### 2.2.2 layout analysis + table recognition

```python
import os
import cv2
from paddleocr import PPStructure, draw_structure_result, save_structure_res

table_engine = PPStructure(show_log=True)

save_folder = './output'
img_path = 'PaddleOCR/ppstructure/docs/table/1.png'
img = cv2.imread(img_path)
result = table_engine(img)
save_structure_res(result, save_folder, os.path.basename(img_path).split('.')[0])

for line in result:
    line.pop('img')
    print(line)

from PIL import Image

font_path = 'PaddleOCR/doc/fonts/simfang.ttf' # font file provided in PaddleOCR
image = Image.open(img_path).convert('RGB')
im_show = draw_structure_result(image, result, font_path=font_path)
im_show = Image.fromarray(im_show)
im_show.save('result.jpg')
```

<a name="223"></a>
#### 2.2.3 layout analysis

```python
import os
import cv2
from paddleocr import PPStructure, save_structure_res

table_engine = PPStructure(table=False, ocr=False, show_log=True)

save_folder = './output'
img_path = 'PaddleOCR/ppstructure/docs/table/1.png'
img = cv2.imread(img_path)
result = table_engine(img)
save_structure_res(result, save_folder, os.path.basename(img_path).split('.')[0])

for line in result:
    line.pop('img')
    print(line)
```

<a name="224"></a>
#### 2.2.4 table recognition

```python
import os
import cv2
from paddleocr import PPStructure, save_structure_res

table_engine = PPStructure(layout=False, show_log=True)

save_folder = './output'
img_path = 'PaddleOCR/ppstructure/docs/table/table.jpg'
img = cv2.imread(img_path)
result = table_engine(img)
save_structure_res(result, save_folder, os.path.basename(img_path).split('.')[0])

for line in result:
    line.pop('img')
    print(line)
```

<a name="225"></a>
#### 2.2.5 DocVQA

Please refer to [Documentation Visual Q&A](../vqa/README.md).

<a name="23"></a>
### 2.3 Result description

The return of PP-Structure is a list of dicts. An example is as follows:

<a name="231"></a>
#### 2.3.1 layout analysis + table recognition
```shell
[
  {   'type': 'Text',
      'bbox': [34, 432, 345, 462],
      'res': ([[36.0, 437.0, 341.0, 437.0, 341.0, 446.0, 36.0, 447.0], [41.0, 454.0, 125.0, 453.0, 125.0, 459.0, 41.0, 460.0]],
                [('Tigure-6. The performance of CNN and IPT models using difforen', 0.90060663), ('Tent  ', 0.465441)])
  }
]
```
Each field in the dict is described as follows:

| field | description  |
| --- |---|
|type| Type of the image area. |
|bbox| The coordinates of the image area in the original image, in the form [upper-left x, upper-left y, lower-right x, lower-right y]. |
|res| OCR or table recognition result of the image area. <br> table: a dict with the following fields: <br>&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; `html`: HTML string of the table.<br>&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; In the code usage mode, set `return_ocr_result_in_table=True` when calling to also get the detection and recognition results of each text in the table area, returned in the following fields: <br>&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; `boxes`: text detection boxes.<br>&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; `rec_res`: text recognition results.<br> OCR: a tuple containing the detection boxes and recognition results of each single text. |
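
For example, based on the result format described above, a minimal sketch (not part of the official examples) that reads the table HTML and the per-text OCR results might look like this. `return_ocr_result_in_table=True` is only needed if you also want `boxes` and `rec_res` for table regions:

```python
import cv2
from paddleocr import PPStructure

table_engine = PPStructure(show_log=True)

img = cv2.imread('PaddleOCR/ppstructure/docs/table/1.png')
# pass return_ocr_result_in_table=True to also get per-text results for table regions
result = table_engine(img, return_ocr_result_in_table=True)

for region in result:
    if region['type'].lower() == 'table':
        res = region['res']
        print(res['html'])        # table structure as an HTML string
        print(len(res['boxes']))  # detection boxes of the texts inside the table
    else:
        # non-table regions: a tuple of (detection boxes, recognition results)
        boxes, rec_res = region['res']
        for text, score in rec_res:
            print(text, score)
```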

After the recognition is completed, each image will have a directory with the same name under the directory specified by the `output` field. Each table in the image will be stored as an Excel file, and each picture area will be cropped and saved. The filenames of the Excel files and pictures are their coordinates in the image.
  ```
  /output/table/1/
    └─ res.txt
    └─ [454, 360, 824, 658].xlsx        table recognition result
    └─ [16, 2, 828, 305].jpg            picture area cropped from the image
    └─ [17, 361, 404, 711].xlsx         table recognition result
  ```
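
To load a saved table back into Python for further processing, a minimal sketch could use pandas (pandas and openpyxl are assumed to be installed separately; they are not paddleocr dependencies), pointing at the example output directory above:

```python
import glob

import pandas as pd  # assumed extra dependency: pip install pandas openpyxl

# every .xlsx file in the output directory is one recognized table,
# named after its bounding-box coordinates in the original image
for xlsx_path in glob.glob('./output/table/1/*.xlsx'):
    df = pd.read_excel(xlsx_path)
    print(xlsx_path)
    print(df.head())
```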

<a name="232"></a>
#### 2.3.2 DocVQA

Please refer to [Documentation Visual Q&A](../vqa/README.md).

<a name="24"></a>
### 2.4 Parameter Description

| field | description | default |
|---|---|---|
| output | result save path | ./output/table |
| table_max_len | The long side length to which the image is resized for the table structure model | 488 |
| table_model_dir | Inference model path of the table structure model | None |
| table_char_dict_path | The dictionary path of the table structure model | ../ppocr/utils/dict/table_structure_dict.txt |
| merge_no_span_structure | In the table recognition model, whether to merge '\<td>' and '\</td>' | False |
| layout_model_dir  | Inference model path of the layout analysis model | None |
| layout_dict_path  | The dictionary path of the layout analysis model | ../ppocr/utils/dict/layout_publaynet_dict.txt |
| layout_score_threshold  | The box score threshold of the layout analysis model | 0.5 |
| layout_nms_threshold  | The NMS threshold of the layout analysis model | 0.5 |
| vqa_algorithm  | VQA model algorithm | LayoutXLM |
| ser_model_dir  | Inference model path of the SER model | None |
| ser_dict_path  | The dictionary path of the SER model | ../train_data/XFUND/class_list_xfun.txt |
| mode | structure or vqa  | structure |
| image_orientation | Whether to perform image orientation classification in forward  | False |
| layout | Whether to perform layout analysis in forward  | True |
| table  | Whether to perform table recognition in forward  | True |
| ocr    | Whether to perform OCR for non-table areas in layout analysis; when layout is False, this is automatically set to False | True |
| recovery    | Whether to perform layout recovery in forward | False |
| structure_version | Structure version, options are PP-structure and PP-structurev2 | PP-structure |
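
Most of these fields can be passed either as command-line flags (for example `--layout_score_threshold=0.5`) or as keyword arguments to `PPStructure`. A minimal sketch of the code form, using the default values from the table above:

```python
from paddleocr import PPStructure

# a minimal sketch: override a few of the parameters listed above
# (the values shown here are the defaults from the table)
table_engine = PPStructure(
    table_max_len=488,           # long side used to resize the image for the table structure model
    layout_score_threshold=0.5,  # box score threshold of the layout analysis model
    layout_nms_threshold=0.5,    # NMS threshold of the layout analysis model
    image_orientation=False,     # set True to run orientation classification first
    recovery=False,              # set True to enable layout recovery
    show_log=True,
)
```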

Most of the parameters are consistent with the PaddleOCR whl package; see the [whl package documentation](../../doc/doc_en/whl.md) for details.