# Commit 3de86afd (s920243400/PaddleOCR, fork of PaddlePaddle/PaddleOCR)

Authored Aug 29, 2022 by LDOUBLEV

> Merge branch 'release/2.6' of https://github.com/PaddlePaddle/PaddleOCR into 26_doc

Parents: ebd31950, 350e68cf
5 changed files with 22 additions and 22 deletions:

- README.md (+1 -1)
- README_ch.md (+1 -1)
- applications/发票关键信息抽取.md (+8 -8)
- ppstructure/kie/README.md (+6 -6)
- ppstructure/kie/README_ch.md (+6 -6)
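Taken together, the diffs below are a mechanical rename: the legacy `vqa` naming becomes `kie` in script names, config values, and paths. As a rough sketch of how such a rename can be applied and checked with `sed` and `grep` (the directory and `run.sh` file below are throwaway samples made up for illustration, not files from the repository):

```shell
# Hypothetical demo of the commit's rename pattern (vqa -> kie).
# /tmp/vqa_rename_demo and run.sh are throwaway samples, not repository files.
mkdir -p /tmp/vqa_rename_demo
printf 'python3 tools/infer_vqa_token_ser.py -c fapiao/ser_vi_layoutxlm.yml\n' \
  > /tmp/vqa_rename_demo/run.sh
# Rewrite the vqa-named entrypoint to its kie-named counterpart.
sed 's/infer_vqa_token_ser/infer_kie_token_ser/g' \
  /tmp/vqa_rename_demo/run.sh > /tmp/vqa_rename_demo/run_kie.sh
cat /tmp/vqa_rename_demo/run_kie.sh
```

After a rename like this, a `grep -rn infer_vqa` over the docs is a quick way to catch stragglers.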
## README.md

````diff
@@ -128,7 +128,7 @@ PaddleOCR support a variety of cutting-edge algorithms related to OCR, and devel
 - [Inference and Deployment](./deploy/README.md)
 - [Python Inference](./ppstructure/docs/inference_en.md)
 - [C++ Inference](./deploy/cpp_infer/readme.md)
-- [Serving](./deploy/pdserving/README.md)
+- [Serving](./deploy/hubserving/readme_en.md)
 - [Academic Algorithms](./doc/doc_en/algorithm_overview_en.md)
 - [Text detection](./doc/doc_en/algorithm_overview_en.md)
 - [Text recognition](./doc/doc_en/algorithm_overview_en.md)
````
## README_ch.md

````diff
@@ -140,7 +140,7 @@ PaddleOCR旨在打造一套丰富、领先、且实用的OCR工具库,助力
 - [推理部署](./deploy/README_ch.md)
 - [基于Python预测引擎推理](./ppstructure/docs/inference.md)
 - [基于C++预测引擎推理](./deploy/cpp_infer/readme_ch.md)
-- [服务化部署](./deploy/pdserving/README_CN.md)
+- [服务化部署](./deploy/hubserving/readme.md)
 - [前沿算法与模型🚀](./doc/doc_ch/algorithm_overview.md)
 - [文本检测算法](./doc/doc_ch/algorithm_overview.md)
 - [文本识别算法](./doc/doc_ch/algorithm_overview.md)
````
## applications/发票关键信息抽取.md

````diff
@@ -30,7 +30,7 @@ cd PaddleOCR
 # 安装PaddleOCR的依赖
 pip install -r requirements.txt
 # 安装关键信息抽取任务的依赖
-pip install -r ./ppstructure/vqa/requirements.txt
+pip install -r ./ppstructure/kie/requirements.txt
 ```
 
 ## 4. 关键信息抽取
````
````diff
@@ -94,7 +94,7 @@ VI-LayoutXLM的配置为[ser_vi_layoutxlm_xfund_zh_udml.yml](../configs/kie/vi_l
 ```yml
 Architecture:
-  model_type: &model_type "vqa"
+  model_type: &model_type "kie"
   name: DistillationModel
   algorithm: Distillation
   Models:
````
````diff
@@ -177,7 +177,7 @@ python3 tools/eval.py -c ./fapiao/ser_vi_layoutxlm.yml -o Architecture.Backbone.
 使用下面的命令进行预测。
 
 ```bash
-python3 tools/infer_vqa_token_ser.py -c fapiao/ser_vi_layoutxlm.yml -o Architecture.Backbone.checkpoints=fapiao/models/ser_vi_layoutxlm_fapiao_udml/best_accuracy Global.infer_img=./train_data/XFUND/zh_val/val.json Global.infer_mode=False
+python3 tools/infer_kie_token_ser.py -c fapiao/ser_vi_layoutxlm.yml -o Architecture.Backbone.checkpoints=fapiao/models/ser_vi_layoutxlm_fapiao_udml/best_accuracy Global.infer_img=./train_data/XFUND/zh_val/val.json Global.infer_mode=False
 ```
 
 预测结果会保存在配置文件中的`Global.save_res_path`目录中。
````
````diff
@@ -195,7 +195,7 @@ python3 tools/infer_vqa_token_ser.py -c fapiao/ser_vi_layoutxlm.yml -o Architect
 ```bash
-python3 tools/infer_vqa_token_ser.py -c fapiao/ser_vi_layoutxlm.yml -o Architecture.Backbone.checkpoints=fapiao/models/ser_vi_layoutxlm_fapiao_udml/best_accuracy Global.infer_img=./train_data/zzsfp/imgs/b25.jpg Global.infer_mode=True
+python3 tools/infer_kie_token_ser.py -c fapiao/ser_vi_layoutxlm.yml -o Architecture.Backbone.checkpoints=fapiao/models/ser_vi_layoutxlm_fapiao_udml/best_accuracy Global.infer_img=./train_data/zzsfp/imgs/b25.jpg Global.infer_mode=True
 ```
 
 结果如下所示。
````
````diff
@@ -211,7 +211,7 @@ python3 tools/infer_vqa_token_ser.py -c fapiao/ser_vi_layoutxlm.yml -o Architect
 如果希望构建基于你在垂类场景训练得到的OCR检测与识别模型,可以使用下面的方法传入检测与识别的inference 模型路径,即可完成OCR文本检测与识别以及SER的串联过程。
 
 ```bash
-python3 tools/infer_vqa_token_ser.py -c fapiao/ser_vi_layoutxlm.yml -o Architecture.Backbone.checkpoints=fapiao/models/ser_vi_layoutxlm_fapiao_udml/best_accuracy Global.infer_img=./train_data/zzsfp/imgs/b25.jpg Global.infer_mode=True Global.kie_rec_model_dir="your_rec_model" Global.kie_det_model_dir="your_det_model"
+python3 tools/infer_kie_token_ser.py -c fapiao/ser_vi_layoutxlm.yml -o Architecture.Backbone.checkpoints=fapiao/models/ser_vi_layoutxlm_fapiao_udml/best_accuracy Global.infer_img=./train_data/zzsfp/imgs/b25.jpg Global.infer_mode=True Global.kie_rec_model_dir="your_rec_model" Global.kie_det_model_dir="your_det_model"
 ```
 
 ### 4.4 关系抽取(Relation Extraction)
````
````diff
@@ -316,7 +316,7 @@ python3 tools/eval.py -c ./fapiao/re_vi_layoutxlm.yml -o Architecture.Backbone.c
 # -o 后面的字段是RE任务的配置
 # -c_ser 后面的是SER任务的配置文件
 # -c_ser 后面的字段是SER任务的配置
-python3 tools/infer_vqa_token_ser_re.py -c fapiao/re_vi_layoutxlm.yml -o Architecture.Backbone.checkpoints=fapiao/models/re_vi_layoutxlm_fapiao_udml/best_accuracy Global.infer_img=./train_data/zzsfp/val.json Global.infer_mode=False -c_ser fapiao/ser_vi_layoutxlm.yml -o_ser Architecture.Backbone.checkpoints=fapiao/models/ser_vi_layoutxlm_fapiao_udml/best_accuracy
+python3 tools/infer_kie_token_ser_re.py -c fapiao/re_vi_layoutxlm.yml -o Architecture.Backbone.checkpoints=fapiao/models/re_vi_layoutxlm_fapiao_trained/best_accuracy Global.infer_img=./train_data/zzsfp/val.json Global.infer_mode=False -c_ser fapiao/ser_vi_layoutxlm.yml -o_ser Architecture.Backbone.checkpoints=fapiao/models/ser_vi_layoutxlm_fapiao_trained/best_accuracy
 ```
 
 预测结果会保存在配置文件中的`Global.save_res_path`目录中。
````
````diff
@@ -333,11 +333,11 @@ python3 tools/infer_vqa_token_ser_re.py -c fapiao/re_vi_layoutxlm.yml -o Archite
 如果希望使用OCR引擎结果得到的结果进行推理,则可以使用下面的命令进行推理。
 
 ```bash
-python3 tools/infer_vqa_token_ser_re.py -c fapiao/re_vi_layoutxlm.yml -o Architecture.Backbone.checkpoints=fapiao/models/re_vi_layoutxlm_fapiao_udml/best_accuracy Global.infer_img=./train_data/zzsfp/val.json Global.infer_mode=True -c_ser fapiao/ser_vi_layoutxlm.yml -o_ser Architecture.Backbone.checkpoints=fapiao/models/ser_vi_layoutxlm_fapiao_udml/best_accuracy
+python3 tools/infer_kie_token_ser_re.py -c fapiao/re_vi_layoutxlm.yml -o Architecture.Backbone.checkpoints=fapiao/models/re_vi_layoutxlm_fapiao_udml/best_accuracy Global.infer_img=./train_data/zzsfp/val.json Global.infer_mode=True -c_ser fapiao/ser_vi_layoutxlm.yml -o_ser Architecture.Backbone.checkpoints=fapiao/models/ser_vi_layoutxlm_fapiao_udml/best_accuracy
 ```
 
 如果希望构建基于你在垂类场景训练得到的OCR检测与识别模型,可以使用下面的方法传入,即可完成SER + RE的串联过程。
 
 ```bash
-python3 tools/infer_vqa_token_ser_re.py -c fapiao/re_vi_layoutxlm.yml -o Architecture.Backbone.checkpoints=fapiao/models/re_vi_layoutxlm_fapiao_udml/best_accuracy Global.infer_img=./train_data/zzsfp/val.json Global.infer_mode=True -c_ser fapiao/ser_vi_layoutxlm.yml -o_ser Architecture.Backbone.checkpoints=fapiao/models/ser_vi_layoutxlm_fapiao_udml/best_accuracy Global.kie_rec_model_dir="your_rec_model" Global.kie_det_model_dir="your_det_model"
+python3 tools/infer_kie_token_ser_re.py -c fapiao/re_vi_layoutxlm.yml -o Architecture.Backbone.checkpoints=fapiao/models/re_vi_layoutxlm_fapiao_udml/best_accuracy Global.infer_img=./train_data/zzsfp/val.json Global.infer_mode=True -c_ser fapiao/ser_vi_layoutxlm.yml -o_ser Architecture.Backbone.checkpoints=fapiao/models/ser_vi_layoutxlm_fapiao_udml/best_accuracy Global.kie_rec_model_dir="your_rec_model" Global.kie_det_model_dir="your_det_model"
 ```
````
## ppstructure/kie/README.md

````diff
@@ -172,16 +172,16 @@ If you want to use OCR engine to obtain end-to-end prediction results, you can u
 # just predict using SER trained model
 python3 tools/infer_kie_token_ser.py \
   -c configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml \
-  -o Architecture.Backbone.checkpoints=./pretrain_models/ser_vi_layoutxlm_xfund_pretrained/best_accuracy \
+  -o Architecture.Backbone.checkpoints=./pretrained_model/ser_vi_layoutxlm_xfund_pretrained/best_accuracy \
   Global.infer_img=./ppstructure/docs/kie/input/zh_val_42.jpg
 
 # predict using SER and RE trained model at the same time
 python3 ./tools/infer_kie_token_ser_re.py \
   -c configs/kie/vi_layoutxlm/re_vi_layoutxlm_xfund_zh.yml \
-  -o Architecture.Backbone.checkpoints=./pretrain_models/re_vi_layoutxlm_xfund_pretrained/best_accuracy \
+  -o Architecture.Backbone.checkpoints=./pretrained_model/re_vi_layoutxlm_xfund_pretrained/best_accuracy \
   Global.infer_img=./train_data/XFUND/zh_val/image/zh_val_42.jpg \
   -c_ser configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml \
-  -o_ser Architecture.Backbone.checkpoints=./pretrain_models/ser_vi_layoutxlm_xfund_pretrained/best_accuracy
+  -o_ser Architecture.Backbone.checkpoints=./pretrained_model/ser_vi_layoutxlm_xfund_pretrained/best_accuracy
 ```
 
 The visual result images and the predicted text file will be saved in the `Global.save_res_path` directory.
````
````diff
@@ -193,18 +193,18 @@ If you want to load the text detection and recognition results collected before,
 # just predict using SER trained model
 python3 tools/infer_kie_token_ser.py \
   -c configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml \
-  -o Architecture.Backbone.checkpoints=./pretrain_models/ser_vi_layoutxlm_xfund_pretrained/best_accuracy \
+  -o Architecture.Backbone.checkpoints=./pretrained_model/ser_vi_layoutxlm_xfund_pretrained/best_accuracy \
   Global.infer_img=./train_data/XFUND/zh_val/val.json \
   Global.infer_mode=False
 
 # predict using SER and RE trained model at the same time
 python3 ./tools/infer_kie_token_ser_re.py \
   -c configs/kie/vi_layoutxlm/re_vi_layoutxlm_xfund_zh.yml \
-  -o Architecture.Backbone.checkpoints=./pretrain_models/re_vi_layoutxlm_xfund_pretrained/best_accuracy \
+  -o Architecture.Backbone.checkpoints=./pretrained_model/re_vi_layoutxlm_xfund_pretrained/best_accuracy \
   Global.infer_img=./train_data/XFUND/zh_val/val.json \
   Global.infer_mode=False \
   -c_ser configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml \
-  -o_ser Architecture.Backbone.checkpoints=./pretrain_models/ser_vi_layoutxlm_xfund_pretrained/best_accuracy
+  -o_ser Architecture.Backbone.checkpoints=./pretrained_model/ser_vi_layoutxlm_xfund_pretrained/best_accuracy
 ```
 
 #### 4.2.3 Inference using PaddleInference
````
## ppstructure/kie/README_ch.md

````diff
@@ -156,16 +156,16 @@ wget https://paddleocr.bj.bcebos.com/ppstructure/models/vi_layoutxlm/re_vi_layou
 # 仅预测SER模型
 python3 tools/infer_kie_token_ser.py \
   -c configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml \
-  -o Architecture.Backbone.checkpoints=./pretrain_models/ser_vi_layoutxlm_xfund_pretrained/best_accuracy \
+  -o Architecture.Backbone.checkpoints=./pretrained_model/ser_vi_layoutxlm_xfund_pretrained/best_accuracy \
   Global.infer_img=./ppstructure/docs/kie/input/zh_val_42.jpg
 
 # SER + RE模型串联
 python3 ./tools/infer_kie_token_ser_re.py \
   -c configs/kie/vi_layoutxlm/re_vi_layoutxlm_xfund_zh.yml \
-  -o Architecture.Backbone.checkpoints=./pretrain_models/re_vi_layoutxlm_xfund_pretrained/best_accuracy \
+  -o Architecture.Backbone.checkpoints=./pretrained_model/re_vi_layoutxlm_xfund_pretrained/best_accuracy \
   Global.infer_img=./train_data/XFUND/zh_val/image/zh_val_42.jpg \
   -c_ser configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml \
-  -o_ser Architecture.Backbone.checkpoints=./pretrain_models/ser_vi_layoutxlm_xfund_pretrained/best_accuracy
+  -o_ser Architecture.Backbone.checkpoints=./pretrained_model/ser_vi_layoutxlm_xfund_pretrained/best_accuracy
 ```
 
 `Global.save_res_path`目录中会保存可视化的结果图像以及预测的文本文件。
````
````diff
@@ -177,18 +177,18 @@ python3 ./tools/infer_kie_token_ser_re.py \
 # 仅预测SER模型
 python3 tools/infer_kie_token_ser.py \
   -c configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml \
-  -o Architecture.Backbone.checkpoints=./pretrain_models/ser_vi_layoutxlm_xfund_pretrained/best_accuracy \
+  -o Architecture.Backbone.checkpoints=./pretrained_model/ser_vi_layoutxlm_xfund_pretrained/best_accuracy \
   Global.infer_img=./train_data/XFUND/zh_val/val.json \
   Global.infer_mode=False
 
 # SER + RE模型串联
 python3 ./tools/infer_kie_token_ser_re.py \
   -c configs/kie/vi_layoutxlm/re_vi_layoutxlm_xfund_zh.yml \
-  -o Architecture.Backbone.checkpoints=./pretrain_models/re_vi_layoutxlm_xfund_pretrained/best_accuracy \
+  -o Architecture.Backbone.checkpoints=./pretrained_model/re_vi_layoutxlm_xfund_pretrained/best_accuracy \
   Global.infer_img=./train_data/XFUND/zh_val/val.json \
   Global.infer_mode=False \
   -c_ser configs/kie/vi_layoutxlm/ser_vi_layoutxlm_xfund_zh.yml \
-  -o_ser Architecture.Backbone.checkpoints=./pretrain_models/ser_vi_layoutxlm_xfund_pretrained/best_accuracy
+  -o_ser Architecture.Backbone.checkpoints=./pretrained_model/ser_vi_layoutxlm_xfund_pretrained/best_accuracy
 ```
 
 #### 4.2.3 基于PaddleInference的预测
````