Unverified · Commit 3e2c7306 · authored by Daniel Yang · committed by GitHub

Merge pull request #3509 from grasswolfs/update_ppstructure_0802

update_ppstructure_readme
English | [简体中文](README_ch.md)

# PP-Structure

PP-Structure is an OCR toolkit for analyzing documents with complex layouts. The main features are as follows:

- Supports layout analysis of documents, dividing them into five types of regions: **text, title, table, image and list** (used in conjunction with Layout-Parser)
- Supports extracting the text from the text, title, picture and list regions (used in conjunction with PP-OCR)
- Supports exporting table regions as Excel files
- Supports both the Python whl package and command-line usage; easy to use
- Supports custom training for the layout analysis and table structure tasks
- The total model size is only about 18.6M (under continuous optimization)

## 1. Visualization

<img src="../doc/table/ppstructure.GIF" width="100%"/>

## 2. Installation

### 2.1 Install requirements

- **(1) Install PaddlePaddle**

```bash
pip3 install --upgrade pip

# GPU
python3 -m pip install paddlepaddle-gpu==2.1.2 -i https://mirror.baidu.com/pypi/simple

# CPU
python3 -m pip install paddlepaddle==2.1.2 -i https://mirror.baidu.com/pypi/simple

# For more version requirements, refer to the installation document: https://www.paddlepaddle.org.cn/install/quick
```
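Optionally, you can verify the PaddlePaddle installation before continuing. This quick check is not part of the original guide; it only uses PaddlePaddle's built-in self-test:

```python
# Quick sanity check for the PaddlePaddle installation (optional).
import paddle

print(paddle.__version__)   # should print 2.1.2 if the pinned version was installed
paddle.utils.run_check()    # runs a small program to verify the CPU/GPU setup
```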
- **(2) Install Layout-Parser**

```bash
pip3 install -U premailer paddleocr https://paddleocr.bj.bcebos.com/whl/layoutparser-0.0.0-py3-none-any.whl
```

### 2.2 Install PaddleOCR (including PP-OCR and PP-Structure)

- **(1) PIP install the PaddleOCR whl package (inference only)**

```bash
pip install "paddleocr>=2.0.6"
```

- **(2) Clone PaddleOCR (inference + training)**

```bash
git clone https://github.com/PaddlePaddle/PaddleOCR
```

## 3. Quick Start

### 3.1 Use by command line

```bash
paddleocr --image_dir=../doc/table/1.png --type=structure
```

### 3.2 Use by Python API

```python
import os
...
im_show = draw_structure_result(image, result, font_path=font_path)
im_show = Image.fromarray(im_show)
im_show.save('result.jpg')
```
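The Python snippet above is abridged in this diff. For reference, a minimal end-to-end sketch of the whl API usually looks like the following; it assumes a paddleocr version that ships `PPStructure`, `save_structure_res` and `draw_structure_result`, and the image, output and font paths are placeholders:

```python
import os
import cv2
from PIL import Image
from paddleocr import PPStructure, draw_structure_result, save_structure_res

table_engine = PPStructure(show_log=True)

save_folder = './output/table'          # placeholder output directory
img_path = '../doc/table/1.png'         # placeholder input image
img = cv2.imread(img_path)
result = table_engine(img)

# save the Excel files and cropped figures for every detected region
save_structure_res(result, save_folder, os.path.basename(img_path).split('.')[0])

# print a short summary of each region
for line in result:
    line.pop('img', None)               # drop the raw crop to keep the log readable
    print(line)

# visualize the layout result on top of the original image
font_path = '../doc/fonts/simfang.ttf'  # placeholder font file
image = Image.open(img_path).convert('RGB')
im_show = draw_structure_result(image, result, font_path=font_path)
im_show = Image.fromarray(im_show)
im_show.save('result.jpg')
```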
### 3.3 Return results

The return result of PP-Structure is a list of dicts. An example is shown below:

```shell
[
  { 'type': 'Text',
    'bbox': [34, 432, 345, 462],
    'res': ([[36.0, 437.0, 341.0, 437.0, 341.0, 446.0, 36.0, 447.0], [41.0, 454.0, 125.0, 453.0, 125.0, 459.0, 41.0, 460.0]],
            [('Tigure-6. The performance of CNN and IPT models using difforen', 0.90060663), ('Tent ', 0.465441)])
  }
]
```

The fields of each dict are described as follows:

| Parameter | Description |
| --------------- | ------------- |
| type | type of the image region |
| bbox | coordinates of the image region in the original image, as [top-left x, top-left y, bottom-right x, bottom-right y] |
| res | OCR or table recognition result of the image region.<br> Table: HTML string of the table; <br> OCR: a tuple containing the detection coordinates and recognition results of each text line |
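Based on the field semantics above, a small helper like the one below (an illustration, not part of the package) can split a result list into plain text lines and table HTML strings:

```python
def split_regions(result):
    """Separate PP-Structure output into recognized text lines and table HTML strings.

    `result` is the list of dicts described above: each dict has 'type', 'bbox' and 'res'.
    This helper only illustrates the documented structure.
    """
    text_lines, tables = [], []
    for region in result:
        if region['type'].lower() == 'table':
            # for tables, 'res' holds the HTML string of the table
            tables.append(region['res'])
        else:
            # for text-like regions, 'res' is (boxes, [(text, confidence), ...])
            _, rec_results = region['res']
            text_lines.extend(text for text, _score in rec_results)
    return text_lines, tables
```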
### 3.4 Parameter description

| Parameter | Description | Default value |
| --------------- | ---------------------------------------- | ------------------------------------------- |
...

Most of the parameters are consistent with the paddleocr whl package; see the documentation of the whl package.

After running, each image has a directory with the same name under the directory specified by the `output` field. Each table in the image is stored as an Excel file, each figure region is cropped and saved, and the Excel and image files are named after the coordinates of the region in the original image.

## 4. PP-Structure Pipeline

The process is as follows:

![pipeline](../doc/table/pipeline_en.jpg)

In PP-Structure, the image is first analyzed by Layout-Parser. The layout analysis classifies the regions in the image into five categories: **text, title, image, list and table**. For the first four types of regions, PP-OCR is used directly to detect and recognize the text. Table regions are converted into Excel files with the same table style via Table OCR.

### 4.1 LayoutParser

Layout analysis classifies document data into regions. It covers the Python script usage of the layout analysis tool, extracting detection boxes of specified categories, performance metrics, and custom training of the layout analysis model. For details, please refer to the [document](layout/README_en.md).

### 4.2 Table Structure

Table OCR converts a table image into an Excel document. It includes detection and recognition of the table text as well as prediction of the table structure and cell coordinates. For details, please refer to the [document](table/README.md).
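PP-Structure already writes the Excel files for table regions itself. Purely as an illustration of the data flow described in 3.3, the HTML string of a table region could also be converted to Excel manually, for example with pandas; this assumes `lxml` and `openpyxl` are installed and is not how the toolkit does it internally:

```python
import pandas as pd

def table_html_to_excel(table_html: str, xlsx_path: str) -> None:
    """Convert the HTML string returned for a 'Table' region into an .xlsx file."""
    # read_html returns a list of DataFrames, one per <table> element in the string
    df = pd.read_html(table_html)[0]
    df.to_excel(xlsx_path, index=False)

# example: table_html_to_excel(region['res'], 'output/table_34_432_345_462.xlsx')
```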
## 5. Prediction with the inference engine

Use the following commands to run inference:

```bash
cd PaddleOCR/ppstructure
...
```

After running, each image will have a directory with the same name under the directory specified in the output field.

| Model name | Description | Config | Model size | Download |
| --- | --- | --- | --- | --- |
| en_ppocr_mobile_v2.0_table_structure | Table structure prediction for English table scenarios | [table_mv3.yml](../configs/table/table_mv3.yml) | 18.6M | [inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_structure_infer.tar) |
[English](README.md) | 简体中文

# PP-Structure

PP-Structure is an OCR toolkit for analyzing and processing documents with complex structures. The main features are as follows:

- Supports layout analysis of image-form documents, dividing them into five types of regions: **text, title, table, image and list** (used together with Layout-Parser)
- Supports extracting the text, title, image and list regions as text fields (used together with PP-OCR)
- Supports structural analysis of table regions, with the result exported as an Excel file
- Supports both the Python whl package and command-line usage; easy to use
- Supports custom training for the layout analysis and table structuring tasks
- The total model size is only about 18.6M (under continuous optimization)

## 1. Visualization

<img src="../doc/table/ppstructure.GIF" width="100%"/>

## 2. Installation

### 2.1 Install dependencies

- **(1) Install PaddlePaddle**

```bash
pip3 install --upgrade pip

# GPU
python3 -m pip install paddlepaddle-gpu==2.1.2 -i https://mirror.baidu.com/pypi/simple

# CPU
python3 -m pip install paddlepaddle==2.1.2 -i https://mirror.baidu.com/pypi/simple

# For more version requirements, follow the instructions in the installation document: https://www.paddlepaddle.org.cn/install/quick
```

- **(2) Install Layout-Parser**

```bash
pip3 install -U premailer paddleocr https://paddleocr.bj.bcebos.com/whl/layoutparser-0.0.0-py3-none-any.whl
```

### 2.2 Install PaddleOCR (including PP-OCR and PP-Structure)

- **(1) PIP install the PaddleOCR whl package (inference only)**

```bash
pip install "paddleocr>=2.0.6"
```

- **(2) Clone the full PaddleOCR source code (inference + training)**

```bash
# Recommended
git clone https://github.com/PaddlePaddle/PaddleOCR

# If the pull fails because of network problems, you can also use the Gitee mirror:
git clone https://gitee.com/paddlepaddle/PaddleOCR

# Note: the Gitee mirror may lag 3-5 days behind this GitHub project; please prefer the recommended way.
```

## 3. PP-Structure Quick Start

### 3.1 Use by command line (default parameters, minimal)

```bash
paddleocr --image_dir=../doc/table/1.png --type=structure
```

### 3.2 Use by Python script (custom parameters, flexible)

```python
import os
...
im_show = Image.fromarray(im_show)
im_show.save('result.jpg')
```

### 3.3 Return results

The return result of PP-Structure is a list of dicts. An example is shown below:

```shell
[
  { 'type': 'Text',
    'bbox': [34, 432, 345, 462],
    'res': ([[36.0, 437.0, 341.0, 437.0, 341.0, 446.0, 36.0, 447.0], [41.0, 454.0, 125.0, 453.0, 125.0, 459.0, 41.0, 460.0]],
            [('Tigure-6. The performance of CNN and IPT models using difforen', 0.90060663), ('Tent ', 0.465441)])
  }
]
```

The fields of each dict are described as follows:

| Field | Description |
| --------------- | ------------- |
| type | type of the image region |
| bbox | coordinates of the image region in the original image, as [top-left x, top-left y, bottom-right x, bottom-right y] |
| res | OCR or table recognition result of the image region.<br> Table: HTML string of the table; <br> OCR: a tuple containing the detection coordinates and recognition results of each text line |

### 3.4 Parameter description

| Field | Description | Default value |
| --------------- | ---------------------------------------- | ------------------------------------------- |
...

After running, each image has a directory with the same name under the directory specified by the `output` field. Each table in the image is stored as an Excel file, each figure region is cropped and saved, and the Excel and image files are named after the coordinates of the region in the image.

## 4. The PP-Structure Pipeline

![pipeline](../doc/table/pipeline.jpg)

In PP-Structure, the image is first analyzed by Layout-Parser. The layout analysis classifies the regions in the image into five categories: **text, title, image, list and table**. For the first four types of regions, PP-OCR is used directly to detect and recognize the text. Table regions are converted into Excel files with the same table style after table structuring.

### 4.1 Layout analysis

Layout analysis classifies document data into regions. It covers the Python script usage of the layout analysis tool, extracting detection boxes of specified categories, performance metrics, and custom training of the layout analysis model. For details, please refer to the [document](layout/README_ch.md).

### 4.2 Table structuring

Table structuring converts a table image into an Excel document. It includes detection and recognition of the table text as well as prediction of the table structure and cell coordinates. For details, please refer to the [document](table/README_ch.md).

## 5. Inference with the prediction engine (same results as the whl package)

Use the following command to run inference with the prediction engine:

```bash
python3 table/predict_system.py --det_model_dir=inference/ch_ppocr_mobile_v2.0_d...
```

| Model name | Description | Config | Model size | Download |
| --- | --- | --- | --- | --- |
| en_ppocr_mobile_v2.0_table_structure | Table structure prediction for English table scenarios | [table_mv3.yml](../configs/table/table_mv3.yml) | 18.6M | [inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_structure_infer.tar) |
English | [简体中文](README_ch.md)

# Getting Started

[1. Install whl package](#Install whl package)

[2. Quick Start](#Quick Start)

[3. PostProcess](#PostProcess)

[4. Results](#Results)

[5. Training](#Training)

<a name="Install whl package"></a>

## 1. Install whl package

```bash
wget https://paddleocr.bj.bcebos.com/whl/layoutparser-0.0.0-py3-none-any.whl
pip install -U layoutparser-0.0.0-py3-none-any.whl
```

<a name="Quick Start"></a>

## 2. Quick Start

Use LayoutParser to identify the layout of a given document:

```python
import cv2
import layoutparser as lp

image = cv2.imread("doc/table/layout.jpg")
image = image[..., ::-1]

# load model
model = lp.PaddleDetectionLayoutModel(config_path="lp://PubLayNet/ppyolov2_r50vd_dcn_365e_publaynet/config",
                                      threshold=0.5,
                                      label_map={0: "Text", 1: "Title", 2: "List", 3: "Table", 4: "Figure"},
                                      enforce_cpu=False,
                                      enable_mkldnn=True)
# detect
layout = model.detect(image)

# show result
show_img = lp.draw_box(image, layout, box_width=3, show_element_type=True)
show_img.show()
```

The following figure shows the result. Detection boxes of different colors represent different categories, and with `show_element_type=True` the specific category is displayed in the upper-left corner of each box:

<div align="center">
<img src="../../doc/table/result_all.jpg" width = "600" />
</div>

The parameters of `PaddleDetectionLayoutModel` are described as follows:

| parameter | description | default | remark |
| :------------: | :------------------------------------------------------: | :---------: | :----------------------------------------------------------: |
| config_path | model config path | None | specifying config_path downloads the model automatically (only the first time; afterwards the model exists locally and is not downloaded again) |
| model_path | model path | None | local model path; one of config_path and model_path must be set, they cannot both be None |
| threshold | threshold of the prediction score | 0.5 | \ |
| input_shape | image size after reshape | [3,640,640] | \ |
| batch_size | test batch size | 1 | \ |
| label_map | category mapping table | None | can be None when config_path is set; the label_map is then obtained automatically from the dataset name |
| enforce_cpu | whether to run on CPU | False | False means use GPU, True forces CPU |
| enforce_mkldnn | whether to enable MKL-DNN acceleration for CPU prediction | True | \ |
| thread_num | number of CPU threads | 10 | \ |

The following model configurations and label maps are currently supported; you can use them by modifying `--config_path` and `--label_map` to detect different types of content:

| dataset | config_path | label_map |
| ------------------------------------------------------------ | ------------------------------------------------------------ | --------------------------------------------------------- |
...
| TableBank latex | lp://TableBank/ppyolov2_r50vd_dcn_365e_tableBank_latex/config | {0:"Table"} |
| [PubLayNet](https://github.com/ibm-aur-nlp/PubLayNet) | lp://PubLayNet/ppyolov2_r50vd_dcn_365e_publaynet/config | {0: "Text", 1: "Title", 2: "List", 3:"Table", 4:"Figure"} |

* TableBank word and TableBank latex are trained on the word-document and latex-document datasets respectively;
* the downloadable TableBank dataset contains both the word and latex parts.
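For example, to detect only tables with the TableBank configuration listed above, the same API can be pointed at that config and its label map; this sketch only recombines parameters already documented here:

```python
import cv2
import layoutparser as lp

# switch to the TableBank latex config and its single-class label map from the table above
table_model = lp.PaddleDetectionLayoutModel(
    config_path="lp://TableBank/ppyolov2_r50vd_dcn_365e_tableBank_latex/config",
    threshold=0.5,
    label_map={0: "Table"},
    enforce_cpu=False,
    enable_mkldnn=True)

image = cv2.imread("doc/table/layout.jpg")[..., ::-1]
layout = table_model.detect(image)
table_blocks = lp.Layout([b for b in layout if b.type == "Table"])
print(f"found {len(table_blocks)} table regions")
```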
<a name="后处理"></a> <a name="PostProcess"></a>
## 3. 后处理 ## 3. PostProcess
版面分析检测包含多个类别,如果只想获取指定类别(如"Text"类别)的检测框、可以使用下述代码: Layout parser contains multiple categories, if you only want to get the detection box for a specific category (such as the "Text" category), you can use the following code:
```python ```python
# 接上面代码 # follow the above code
# 首先过滤特定文本类型的区域 # filter areas for a specific text type
text_blocks = lp.Layout([b for b in layout if b.type=='Text']) text_blocks = lp.Layout([b for b in layout if b.type=='Text'])
figure_blocks = lp.Layout([b for b in layout if b.type=='Figure']) figure_blocks = lp.Layout([b for b in layout if b.type=='Figure'])
# 因为在图像区域内可能检测到文本区域,所以只需要删除它们 # text areas may be detected within the image area, delete these areas
text_blocks = lp.Layout([b for b in text_blocks \ text_blocks = lp.Layout([b for b in text_blocks \
if not any(b.is_in(b_fig) for b_fig in figure_blocks)]) if not any(b.is_in(b_fig) for b_fig in figure_blocks)])
# 对文本区域排序并分配id # sort text areas and assign ID
h, w = image.shape[:2] h, w = image.shape[:2]
left_interval = lp.Interval(0, w/2*1.05, axis='x').put_on_canvas(image) left_interval = lp.Interval(0, w/2*1.05, axis='x').put_on_canvas(image)
...@@ -101,40 +104,38 @@ left_blocks.sort(key = lambda b:b.coordinates[1]) ...@@ -101,40 +104,38 @@ left_blocks.sort(key = lambda b:b.coordinates[1])
right_blocks = [b for b in text_blocks if b not in left_blocks] right_blocks = [b for b in text_blocks if b not in left_blocks]
right_blocks.sort(key = lambda b:b.coordinates[1]) right_blocks.sort(key = lambda b:b.coordinates[1])
# 最终合并两个列表,并按顺序添加索引 # the two lists are merged and the indexes are added in order
text_blocks = lp.Layout([b.set(id = idx) for idx, b in enumerate(left_blocks + right_blocks)]) text_blocks = lp.Layout([b.set(id = idx) for idx, b in enumerate(left_blocks + right_blocks)])
# 显示结果 # display result
show_img = lp.draw_box(image, text_blocks, show_img = lp.draw_box(image, text_blocks,
box_width=3, box_width=3,
show_element_id=True) show_element_id=True)
show_img.show() show_img.show()
``` ```
显示只有"Text"类别的结果 Displays results with only the "Text" category
<div align="center"> <div align="center">
<img src="../../doc/table/result_text.jpg" width = "600" /> <img src="../../doc/table/result_text.jpg" width = "600" />
</div> </div>
<a name="Results"></a>
<a name="指标"></a> ## 4. Results
## 4. 指标
| Dataset | mAP | CPU time cost | GPU time cost | | Dataset | mAP | CPU time cost | GPU time cost |
| --------- | ---- | ------------- | ------------- | | --------- | ---- | ------------- | ------------- |
| PubLayNet | 93.6 | 1713.7ms | 66.6ms | | PubLayNet | 93.6 | 1713.7ms | 66.6ms |
| TableBank | 96.2 | 1968.4ms | 65.1ms | | TableBank | 96.2 | 1968.4ms | 65.1ms |
**Envrionment:** **Envrionment:**
**CPU:** Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz,24core
**GPU:** a single NVIDIA Tesla P40 **CPU:** Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz,24core
<a name="训练版面分析模型"></a> **GPU:** a single NVIDIA Tesla P40
## 5. 训练版面分析模型 <a name="Training"></a>
上述模型基于[PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection) 训练,如果您想训练自己的版面分析模型,请参考:[train_layoutparser_model](train_layoutparser_model.md) ## 5. Training
The above model is based on PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection) ,if you want to train your own layout parser model,please refer to:[train_layoutparser_model](train_layoutparser_model.md)
# Training a layout analysis model

[1. Installation](#Installation)

[1.1 Requirements](#Requirements)

[1.2 Install PaddleDetection](#Install PaddleDetection)

[2. Data preparation](#Data preparation)

[3. Configuration](#Configuration)

[4. Training](#Training)

[5. Prediction](#Prediction)

[6. Deployment](#Deployment)

[6.1 Export model](#Export model)

[6.2 Inference](#Inference)

<a name="Installation"></a>

## 1. Installation

<a name="Requirements"></a>

### 1.1 Requirements

- PaddlePaddle 2.1
- OS 64 bit
...
- CUDA >= 10.1
- cuDNN >= 7.6

<a name="Install PaddleDetection"></a>

### 1.2 Install PaddleDetection

```bash
# Clone the PaddleDetection repository
cd <path/to/clone/PaddleDetection>
git clone https://github.com/PaddlePaddle/PaddleDetection.git
cd PaddleDetection

# Install other dependencies
pip install -r requirements.txt
```

For more installation tutorials, please refer to the [Install doc](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.1/docs/tutorials/INSTALL_cn.md).

<a name="Data preparation"></a>

## 2. Data preparation

Download the [PubLayNet](https://github.com/ibm-aur-nlp/PubLayNet) dataset:

```bash
cd PaddleDetection/dataset/
mkdir publaynet

# download PubLayNet
wget -O publaynet.tar.gz https://dax-cdn.cdn.appdomain.cloud/dax-publaynet/1.0.0/publaynet.tar.gz?_ga=2.104193024.1076900768.1622560733-649911202.1622560733

# unpack
tar -xvf publaynet.tar.gz
```

The PubLayNet directory structure after decompression:

| File or Folder | Description | num |
| :------------- | :----------------------------------------------- | ------- |
| `train/` | Images in the training subset | 335,703 |
| `val/` | Images in the validation subset | 11,245 |
| `test/` | Images in the testing subset | 11,405 |
| `train.json` | Annotations for training images | 1 |
| `val.json` | Annotations for validation images | 1 |
| `LICENSE.txt` | Plaintext version of the CDLA-Permissive license | 1 |
| `README.txt` | Text file with the file names and description | 1 |

For other datasets, please refer to [PrepareDataSet](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.1/docs/tutorials/PrepareDataSet.md).
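The PubLayNet annotations are in COCO format, so a quick sanity check of the files is straightforward. The snippet below is only an illustration and assumes the default paths created by the commands above:

```python
import json
from collections import Counter

# quick sanity check of the COCO-style PubLayNet annotations (illustrative only)
with open("PaddleDetection/dataset/publaynet/train.json") as f:
    coco = json.load(f)

print("images:", len(coco["images"]))
print("annotations:", len(coco["annotations"]))

# map category ids to names and count the boxes per layout category
id_to_name = {c["id"]: c["name"] for c in coco["categories"]}
counts = Counter(id_to_name[a["category_id"]] for a in coco["annotations"])
for name, n in counts.most_common():
    print(f"{name}: {n}")
```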
<a name="配置文件改动和说明"></a> <a name="Configuration"></a>
## 3. 配置文件改动和说明 ## 3. Configuration
我们使用 `configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml`配置进行训练,配置文件摘要如下: We use the `configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml` configuration for training,the configuration file is as follows
```bash ```bash
_BASE_: [ _BASE_: [
...@@ -98,96 +98,96 @@ _BASE_: [ ...@@ -98,96 +98,96 @@ _BASE_: [
snapshot_epoch: 8 snapshot_epoch: 8
weights: output/ppyolov2_r50vd_dcn_365e_coco/model_final weights: output/ppyolov2_r50vd_dcn_365e_coco/model_final
``` ```
从中可以看到 `ppyolov2_r50vd_dcn_365e_coco.yml` 配置需要依赖其他的配置文件,在该例子中需要依赖: The `ppyolov2_r50vd_dcn_365e_coco.yml` configuration depends on other configuration files, in this case:
- coco_detection.yml:主要说明了训练数据和验证数据的路径 - coco_detection.yml:mainly explains the path of training data and verification data
- runtime.yml:主要说明了公共的运行参数,比如是否使用GPU、每多少个epoch存储checkpoint等 - runtime.yml:mainly describes the common parameters, such as whether to use the GPU and how many epoch to save model etc.
- optimizer_365e.yml:主要说明了学习率和优化器的配置 - optimizer_365e.yml:mainly explains the learning rate and optimizer configuration
- ppyolov2_r50vd_dcn.yml:主要说明模型和主干网络的情况 - ppyolov2_r50vd_dcn.yml:mainly describes the model and the network
- ppyolov2_reader.yml:主要说明数据读取器配置,如batch size,并发加载子进程数等,同时包含读取后预处理操作,如resize、数据增强等等 - ppyolov2_reader.yml:mainly describes the configuration of data readers, such as batch size and number of concurrent loading child processes, and also includes post preprocessing, such as resize and data augmention etc.
根据实际情况,修改上述文件,比如数据集路径、batch size等。 Modify the preceding files, such as the dataset path and batch size etc.
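As an illustration of such a change, the dataset section of `coco_detection.yml` might be pointed at the PubLayNet folder roughly as follows; the exact field names can differ between PaddleDetection versions, so treat this as a sketch rather than a drop-in file:

```yml
# illustrative fragment of configs/datasets/coco_detection.yml adapted to PubLayNet
TrainDataset:
  !COCODataSet
    image_dir: train
    anno_path: train.json
    dataset_dir: dataset/publaynet

EvalDataset:
  !COCODataSet
    image_dir: val
    anno_path: val.json
    dataset_dir: dataset/publaynet
```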
<a name="训练"></a> <a name="Training"></a>
## 4. PaddleDetection训练 ## 4. Training
PaddleDetection提供了单卡/多卡训练模式,满足用户多种训练需求 PaddleDetection provides single-card/multi-card training mode to meet various training needs of users:
* GPU 单卡训练 * GPU single card training
```bash ```bash
export CUDA_VISIBLE_DEVICES=0 #windows和Mac下不需要执行该命令 export CUDA_VISIBLE_DEVICES=0 #Don't need to run this command on Windows and Mac
python tools/train.py -c configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml python tools/train.py -c configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml
``` ```
* GPU多卡训练 * GPU multi-card training
```bash ```bash
export CUDA_VISIBLE_DEVICES=0,1,2,3 export CUDA_VISIBLE_DEVICES=0,1,2,3
python -m paddle.distributed.launch --gpus 0,1,2,3 tools/train.py -c configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml --eval python -m paddle.distributed.launch --gpus 0,1,2,3 tools/train.py -c configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml --eval
``` ```
--eval:表示边训练边验证 --eval: training while verifying
* 模型恢复训练 * Model recovery training
在日常训练过程中,有的用户由于一些原因导致训练中断,用户可以使用-r的命令恢复训练: During the daily training, if training is interrupted due to some reasons, you can use the -r command to resume the training:
```bash ```bash
export CUDA_VISIBLE_DEVICES=0,1,2,3 export CUDA_VISIBLE_DEVICES=0,1,2,3
python -m paddle.distributed.launch --gpus 0,1,2,3 tools/train.py -c configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml --eval -r output/ppyolov2_r50vd_dcn_365e_coco/10000 python -m paddle.distributed.launch --gpus 0,1,2,3 tools/train.py -c configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml --eval -r output/ppyolov2_r50vd_dcn_365e_coco/10000
``` ```
注意:如果遇到 "`Out of memory error`" 问题, 尝试在 `ppyolov2_reader.yml` 文件中调小`batch_size` Note: If you encounter "`Out of memory error`" , try reducing `batch_size` in the `ppyolov2_reader.yml` file
<a name="预测"></a> prediction<a name="Prediction"></a>
## 5. PaddleDetection预测 ## 5. Prediction
设置参数,使用PaddleDetection预测 Set parameters and use PaddleDetection to predict
```bash ```bash
export CUDA_VISIBLE_DEVICES=0 export CUDA_VISIBLE_DEVICES=0
python tools/infer.py -c configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml --infer_img=images/paper-image.jpg --output_dir=infer_output/ --draw_threshold=0.5 -o weights=output/ppyolov2_r50vd_dcn_365e_coco/model_final --use_vdl=Ture python tools/infer.py -c configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml --infer_img=images/paper-image.jpg --output_dir=infer_output/ --draw_threshold=0.5 -o weights=output/ppyolov2_r50vd_dcn_365e_coco/model_final --use_vdl=Ture
``` ```
`--draw_threshold` 是个可选参数. 根据 [NMS](https://ieeexplore.ieee.org/document/1699659) 的计算,不同阈值会产生不同的结果 `keep_top_k`表示设置输出目标的最大数量,默认值为100,用户可以根据自己的实际情况进行设定 `--draw_threshold` is an optional parameter. According to the calculation of [NMS](https://ieeexplore.ieee.org/document/1699659), different threshold will produce different results, ` keep_top_k ` represent the maximum amount of output target, the default value is 10. You can set different value according to your own actual situation
<a name="预测部署"></a> <a name="Deployment"></a>
## 6. 预测部署 ## 6. Deployment
在layout parser中使用自己训练好的模型。 Use your trained model in Layout Parser
<a name="模型导出"></a> <a name="Export model"></a>
### 6.1 模型导出 ### 6.1 Export model
在模型训练过程中保存的模型文件是包含前向预测和反向传播的过程,在实际的工业部署则不需要反向传播,因此需要将模型进行导成部署需要的模型格式。 在PaddleDetection中提供了 `tools/export_model.py`脚本来导出模型。 n the process of model training, the model file saved contains the process of forward prediction and back propagation. In the actual industrial deployment, there is no need for back propagation. Therefore, the model should be translated into the model format required by the deployment. The `tools/export_model.py` script is provided in PaddleDetection to export the model.
导出模型名称默认是`model.*`,layout parser代码模型名称是`inference.*`, 所以修改[PaddleDetection/ppdet/engine/trainer.py ](https://github.com/PaddlePaddle/PaddleDetection/blob/b87a1ea86fa18ce69e44a17ad1b49c1326f19ff9/ppdet/engine/trainer.py#L512) (点开链接查看详细代码行),将`model`改为`inference`即可。 The exported model name defaults to `model.*`, Layout Parser's code model is `inference.*`, So change [PaddleDetection/ppdet/engine/trainer. Py ](https://github.com/PaddlePaddle/PaddleDetection/blob/b87a1ea86fa18ce69e44a17ad1b49c1326f19ff9/ppdet/engine/trainer.py# L512) (click on the link to see the detailed line of code), change 'model' to 'inference'.
执行导出模型脚本: Execute the script to export model:
```bash ```bash
python tools/export_model.py -c configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml --output_dir=./inference -o weights=output/ppyolov2_r50vd_dcn_365e_coco/model_final.pdparams python tools/export_model.py -c configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml --output_dir=./inference -o weights=output/ppyolov2_r50vd_dcn_365e_coco/model_final.pdparams
``` ```
预测模型会导出到`inference/ppyolov2_r50vd_dcn_365e_coco`目录下,分别为`infer_cfg.yml`(预测不需要), `inference.pdiparams`, `inference.pdiparams.info`,`inference.pdmodel` The prediction model is exported to `inference/ppyolov2_r50vd_dcn_365e_coco` ,including:`infer_cfg.yml`(prediction not required), `inference.pdiparams`, `inference.pdiparams.info`,`inference.pdmodel`
更多模型导出教程,请参考[EXPORT_MODEL](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.1/deploy/EXPORT_MODEL.md) More model export tutorials, please refer to[EXPORT_MODEL](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.1/deploy/EXPORT_MODEL.md)
<a name="layout parser预测"></a> <a name="Inference"></a>
### 6.2 layout_parser预测 ### 6.2 Inference
`model_path`指定训练好的模型路径,使用layout parser进行预测: `model_path` represent the trained model path, and layoutparser is used to predict:
```bash ```bash
import layoutparser as lp import layoutparser as lp
...@@ -198,7 +198,6 @@ model = lp.PaddleDetectionLayoutModel(model_path="inference/ppyolov2_r50vd_dcn_3 ...@@ -198,7 +198,6 @@ model = lp.PaddleDetectionLayoutModel(model_path="inference/ppyolov2_r50vd_dcn_3
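The remaining arguments are elided in this diff. A minimal completion, reusing only the parameters documented in the layout analysis README above, might look like this:

```python
import cv2
import layoutparser as lp

# load the exported model; the parameters mirror those documented for the built-in configs
model = lp.PaddleDetectionLayoutModel(model_path="inference/ppyolov2_r50vd_dcn_365e_coco",
                                      threshold=0.5,
                                      label_map={0: "Text", 1: "Title", 2: "List", 3: "Table", 4: "Figure"},
                                      enforce_cpu=False,
                                      enable_mkldnn=True)

image = cv2.imread("images/paper-image.jpg")[..., ::-1]
layout = model.detect(image)
lp.draw_box(image, layout, box_width=3, show_element_type=True).show()
```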
***

For more PaddleDetection training tutorials, please refer to [PaddleDetection Training](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.1/docs/tutorials/GETTING_STARTED_cn.md).

***