diff --git a/ppstructure/docs/inference.md b/ppstructure/docs/inference.md
index 3f92a6046e94d2eeba1bbea80a9663dabfd4b245..150471795a8c460badb4782fee3c906f985c881b 100644
--- a/ppstructure/docs/inference.md
+++ b/ppstructure/docs/inference.md
@@ -16,23 +16,26 @@ cd ppstructure
Download models
```bash
mkdir inference && cd inference
-# Download the PP-OCRv2 text detection model and unzip it
-wget https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_slim_quant_infer.tar && tar xf ch_PP-OCRv2_det_slim_quant_infer.tar
-# Download the PP-OCRv2 text recognition model and unzip it
-wget https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_slim_quant_infer.tar && tar xf ch_PP-OCRv2_rec_slim_quant_infer.tar
-# Download the ultra-lightweight English table structure model and unzip it
-wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_structure_infer.tar && tar xf en_ppocr_mobile_v2.0_table_structure_infer.tar
+# Download the PP-Structurev2 layout analysis model and unzip it
+wget https://paddleocr.bj.bcebos.com/ppstructure/models/layout/picodet_lcnet_x1_0_layout_infer.tar && tar xf picodet_lcnet_x1_0_layout_infer.tar
+# Download the PP-OCRv3 text detection model and unzip it
+wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_slim_infer.tar && tar xf ch_PP-OCRv3_det_slim_infer.tar
+# Download the PP-OCRv3 text recognition model and unzip it
+wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_slim_infer.tar && tar xf ch_PP-OCRv3_rec_slim_infer.tar
+# Download the PP-Structurev2 table recognition model and unzip it
+wget https://paddleocr.bj.bcebos.com/ppstructure/models/slanet/ch_ppstructure_mobile_v2.0_SLANet_infer.tar && tar xf ch_ppstructure_mobile_v2.0_SLANet_infer.tar
cd ..
```
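+
+Each tar above unpacks into a Paddle inference model directory. Before running prediction, you can optionally sanity-check the downloads with a minimal Python sketch like the one below (it assumes the standard Paddle inference export layout of `inference.pdmodel` + `inference.pdiparams`; adjust the paths if you extracted the models elsewhere):
+
+```python
+import os
+
+# Model directories downloaded above (relative to the ppstructure directory).
+model_dirs = [
+    "inference/picodet_lcnet_x1_0_layout_infer",
+    "inference/ch_PP-OCRv3_det_slim_infer",
+    "inference/ch_PP-OCRv3_rec_slim_infer",
+    "inference/ch_ppstructure_mobile_v2.0_SLANet_infer",
+]
+
+for d in model_dirs:
+    # A Paddle inference model usually consists of inference.pdmodel + inference.pdiparams.
+    expected = ["inference.pdmodel", "inference.pdiparams"]
+    missing = [f for f in expected if not os.path.exists(os.path.join(d, f))]
+    print(d, "OK" if not missing else "missing: %s" % missing)
+```
+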
### 1.1 layout analysis + table recognition
```bash
-python3 predict_system.py --det_model_dir=inference/ch_PP-OCRv2_det_slim_quant_infer \
- --rec_model_dir=inference/ch_PP-OCRv2_rec_slim_quant_infer \
- --table_model_dir=inference/en_ppocr_mobile_v2.0_table_structure_infer \
+python3 predict_system.py --det_model_dir=inference/ch_PP-OCRv3_det_slim_infer \
+ --rec_model_dir=inference/ch_PP-OCRv3_rec_slim_infer \
+ --table_model_dir=inference/ch_ppstructure_mobile_v2.0_SLANet_infer \
+ --layout_model_dir=inference/picodet_lcnet_x1_0_layout_infer \
--image_dir=./docs/table/1.png \
--rec_char_dict_path=../ppocr/utils/ppocr_keys_v1.txt \
- --table_char_dict_path=../ppocr/utils/dict/table_structure_dict.txt \
+ --table_char_dict_path=../ppocr/utils/dict/table_structure_dict_ch.txt \
--output=../output \
--vis_font_path=../doc/fonts/simfang.ttf
```
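+
+The same layout analysis + table recognition pipeline is also available from Python through the paddleocr whl package. A minimal sketch is shown below; it assumes `paddleocr` is installed via pip and uses the package's built-in default models (downloaded automatically on first run) rather than the directories prepared above:
+
+```python
+import os
+import cv2
+from paddleocr import PPStructure, save_structure_res
+
+# Initialize PP-Structure; by default it runs layout analysis, table recognition and OCR.
+engine = PPStructure(show_log=True)
+
+img_path = "./docs/table/1.png"
+result = engine(cv2.imread(img_path))
+
+# Save cropped regions, table results and res.txt under ../output/<image name>/
+save_structure_res(result, "../output", os.path.basename(img_path).split(".")[0])
+
+for region in result:
+    region.pop("img", None)  # drop the raw image array before printing
+    print(region["type"], region["bbox"])
+```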
@@ -41,19 +44,23 @@ python3 predict_system.py --det_model_dir=inference/ch_PP-OCRv2_det_slim_quant_i
### 1.2 layout analysis
```bash
-python3 predict_system.py --image_dir=./docs/table/1.png --table=false --ocr=false --output=../output/
+python3 predict_system.py --layout_model_dir=inference/picodet_lcnet_x1_0_layout_infer \
+ --image_dir=./docs/table/1.png \
+ --output=../output \
+ --table=false \
+ --ocr=false
```
After the run completes, each image will have a directory with the same name under the `structure` directory inside the directory specified by the `output` field. Each detected region is cropped and saved, and each crop is named after its coordinates in the original image. The layout analysis results are stored in the `res.txt` file.
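+
+A minimal sketch for inspecting that output (assuming the directory layout described above, with `--output=../output`):
+
+```python
+import os
+
+output_dir = "../output/structure"
+
+for name in sorted(os.listdir(output_dir)):
+    img_dir = os.path.join(output_dir, name)
+    if not os.path.isdir(img_dir):
+        continue
+    print("==", name, "==")
+    for f in sorted(os.listdir(img_dir)):
+        if f == "res.txt":
+            continue
+        # Cropped regions are named by their coordinates in the original image.
+        print("  region file:", f)
+    res_file = os.path.join(img_dir, "res.txt")
+    if os.path.exists(res_file):
+        with open(res_file, "r", encoding="utf-8") as fin:
+            for line in fin:
+                print("  layout result:", line.strip())
+```
+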
### 1.3 table recognition
```bash
-python3 predict_system.py --det_model_dir=inference/ch_PP-OCRv2_det_slim_quant_infer \
- --rec_model_dir=inference/ch_PP-OCRv2_rec_slim_quant_infer \
- --table_model_dir=inference/en_ppocr_mobile_v2.0_table_structure_infer \
+python3 predict_system.py --det_model_dir=inference/ch_PP-OCRv3_det_slim_infer \
+ --rec_model_dir=inference/ch_PP-OCRv3_rec_slim_infer \
+ --table_model_dir=inference/ch_ppstructure_mobile_v2.0_SLANet_infer \
--image_dir=./docs/table/table.jpg \
--rec_char_dict_path=../ppocr/utils/ppocr_keys_v1.txt \
- --table_char_dict_path=../ppocr/utils/dict/table_structure_dict.txt \
+ --table_char_dict_path=../ppocr/utils/dict/table_structure_dict_ch.txt \
--output=../output \
--vis_font_path=../doc/fonts/simfang.ttf \
--layout=false
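+# Note: with --layout=false, layout analysis is skipped and the whole input image is
+# treated as a single table region, which suits cropped table images such as table.jpg.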
diff --git a/ppstructure/docs/inference_en.md b/ppstructure/docs/inference_en.md
index 126878378d54932937054e2aa0503214f876bfbf..1e35e62f8e8090d6c25623dd8bc1e9d393c3ded3 100644
--- a/ppstructure/docs/inference_en.md
+++ b/ppstructure/docs/inference_en.md
@@ -18,23 +18,26 @@ download model
```bash
mkdir inference && cd inference
-# Download the PP-OCRv2 text detection model and unzip it
-wget https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_slim_quant_infer.tar && tar xf ch_PP-OCRv2_det_slim_quant_infer.tar
-# Download the PP-OCRv2 text recognition model and unzip it
-wget https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_slim_quant_infer.tar && tar xf ch_PP-OCRv2_rec_slim_quant_infer.tar
-# Download the ultra-lightweight English table structure model and unzip it
-wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_structure_infer.tar && tar xf en_ppocr_mobile_v2.0_table_structure_infer.tar
+# Download the PP-Structurev2 layout analysis model and unzip it
+wget https://paddleocr.bj.bcebos.com/ppstructure/models/layout/picodet_lcnet_x1_0_layout_infer.tar && tar xf picodet_lcnet_x1_0_layout_infer.tar
+# Download the PP-OCRv3 text detection model and unzip it
+wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_slim_infer.tar && tar xf ch_PP-OCRv3_det_slim_infer.tar
+# Download the PP-OCRv3 text recognition model and unzip it
+wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_slim_infer.tar && tar xf ch_PP-OCRv3_rec_slim_infer.tar
+# Download the PP-Structurev2 table recognition model and unzip it
+wget https://paddleocr.bj.bcebos.com/ppstructure/models/slanet/ch_ppstructure_mobile_v2.0_SLANet_infer.tar && tar xf ch_ppstructure_mobile_v2.0_SLANet_infer.tar
cd ..
```
### 1.1 layout analysis + table recognition
```bash
-python3 predict_system.py --det_model_dir=inference/ch_PP-OCRv2_det_slim_quant_infer \
- --rec_model_dir=inference/ch_PP-OCRv2_rec_slim_quant_infer \
- --table_model_dir=inference/en_ppocr_mobile_v2.0_table_structure_infer \
+python3 predict_system.py --det_model_dir=inference/ch_PP-OCRv3_det_slim_infer \
+ --rec_model_dir=inference/ch_PP-OCRv3_rec_slim_infer \
+ --table_model_dir=inference/ch_ppstructure_mobile_v2.0_SLANet_infer \
+ --layout_model_dir=inference/picodet_lcnet_x1_0_layout_infer \
--image_dir=./docs/table/1.png \
--rec_char_dict_path=../ppocr/utils/ppocr_keys_v1.txt \
- --table_char_dict_path=../ppocr/utils/dict/table_structure_dict.txt \
+ --table_char_dict_path=../ppocr/utils/dict/table_structure_dict_ch.txt \
--output=../output \
--vis_font_path=../doc/fonts/simfang.ttf
```
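+
+The visualization that `--vis_font_path` controls can also be produced from Python with the whl package's `draw_structure_result` helper. A minimal sketch, assuming `paddleocr` is installed via pip (it uses the package's default models, downloaded automatically on first run):
+
+```python
+import os
+import cv2
+from PIL import Image
+from paddleocr import PPStructure, draw_structure_result
+
+engine = PPStructure(show_log=True)
+
+img_path = "./docs/table/1.png"
+result = engine(cv2.imread(img_path))
+
+# Draw layout boxes and recognized text with the same font as the CLI example above.
+image = Image.open(img_path).convert("RGB")
+im_show = draw_structure_result(image, result, font_path="../doc/fonts/simfang.ttf")
+
+os.makedirs("../output", exist_ok=True)
+Image.fromarray(im_show).save("../output/structure_vis.jpg")
+```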
@@ -43,19 +46,23 @@ After the operation is completed, each image will have a directory with the same
### 1.2 layout analysis
```bash
-python3 predict_system.py --image_dir=./docs/table/1.png --table=false --ocr=false --output=../output/
+python3 predict_system.py --layout_model_dir=inference/picodet_lcnet_x1_0_layout_infer \
+ --image_dir=./docs/table/1.png \
+ --output=../output \
+ --table=false \
+ --ocr=false
```
After the run completes, each image will have a directory with the same name under the `structure` directory inside the directory specified by the `output` field. Each detected region in the image is cropped and saved, and each crop is named after its coordinates in the original image. The layout analysis results are stored in the `res.txt` file.
### 1.3 table recognition
```bash
-python3 predict_system.py --det_model_dir=inference/ch_PP-OCRv2_det_slim_quant_infer \
- --rec_model_dir=inference/ch_PP-OCRv2_rec_slim_quant_infer \
- --table_model_dir=inference/en_ppocr_mobile_v2.0_table_structure_infer \
+python3 predict_system.py --det_model_dir=inference/ch_PP-OCRv3_det_slim_infer \
+ --rec_model_dir=inference/ch_PP-OCRv3_rec_slim_infer \
+ --table_model_dir=inference/ch_ppstructure_mobile_v2.0_SLANet_infer \
--image_dir=./docs/table/table.jpg \
--rec_char_dict_path=../ppocr/utils/ppocr_keys_v1.txt \
- --table_char_dict_path=../ppocr/utils/dict/table_structure_dict.txt \
+ --table_char_dict_path=../ppocr/utils/dict/table_structure_dict_ch.txt \
--output=../output \
--vis_font_path=../doc/fonts/simfang.ttf \
--layout=false
diff --git a/ppstructure/utility.py b/ppstructure/utility.py
index 270ee3aef9ced40f47eaa5dd9aac3054469d69a8..3bc275eba3c09ff35b7993e1ec1ef6d4c1ecdd59 100644
--- a/ppstructure/utility.py
+++ b/ppstructure/utility.py
@@ -38,7 +38,7 @@ def init_args():
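+    # Label file for the layout model; the default lists the PubLayNet categories
+    # (text, title, list, table, figure).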
parser.add_argument(
"--layout_dict_path",
type=str,
- default="../ppocr/utils/dict/layout_dict/layout_pubalynet_dict.txt")
+ default="../ppocr/utils/dict/layout_dict/layout_publaynet_dict.txt")
parser.add_argument(
"--layout_score_threshold",
type=float,