Commit 24d917d2 authored by 文幕地方

update cpp infer doc

Parent 3867c8cc
@@ -171,6 +171,9 @@ inference/
|-- cls
| |--inference.pdiparams
| |--inference.pdmodel
|-- table
| |--inference.pdiparams
| |--inference.pdmodel
```
@@ -275,6 +278,22 @@ Specifically,
--cls=true \
```
##### 7. table
```shell
./build/ppocr --det_model_dir=inference/det_db \
--rec_model_dir=inference/rec_rcnn \
--cls_model_dir=inference/cls \
--table_model_dir=inference/table \
--image_dir=../../ppstructure/docs/table/table.jpg \
--use_angle_cls=true \
--det=true \
--rec=true \
--cls=true \
--type=structure \
--table=true
```
More parameters are as follows:
- Common parameters
@@ -293,9 +312,9 @@ More parameters are as follows,
|parameter|data type|default|meaning|
| :---: | :---: | :---: | :---: |
|det|bool|true|Whether to run text detection in the forward pass|
|rec|bool|true|Whether to run text recognition in the forward pass|
|cls|bool|false|Whether to run text direction classification in the forward pass|
- Detection related parameters
@@ -329,6 +348,15 @@ More parameters are as follows,
|rec_img_h|int|48|image height of recognition|
|rec_img_w|int|320|image width of recognition|
- Table recognition related parameters
|parameter|data type|default|meaning|
| :---: | :---: | :---: | :---: |
|table_model_dir|string|-|Address of table recognition inference model|
|table_char_dict_path|string|../../ppocr/utils/dict/table_structure_dict.txt|dictionary file|
|table_max_len|int|488|Long-side size of the input image for the table recognition model; the final network input size is (table_max_len, table_max_len)|
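For reference, a table-recognition run that also sets these table-specific options explicitly could look like the sketch below. It reuses the model directories from the command above, and the dictionary path simply restates the default listed in the table, so adjust both to wherever your exported models actually live.

```shell
./build/ppocr --det_model_dir=inference/det_db \
    --rec_model_dir=inference/rec_rcnn \
    --table_model_dir=inference/table \
    --table_char_dict_path=../../ppocr/utils/dict/table_structure_dict.txt \
    --table_max_len=488 \
    --image_dir=../../ppstructure/docs/table/table.jpg \
    --type=structure \
    --table=true
```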
* Multi-language inference is also supported in PaddleOCR; you can refer to the [recognition tutorial](../../doc/doc_en/recognition_en.md) for more supported languages and models. Specifically, to infer with a multi-language model, you only need to modify the values of `rec_char_dict_path` and `rec_model_dir`, as in the sketch below.
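For example, a French recognition model could be run roughly as follows. The `inference/rec_french` directory and the sample image path are placeholders for whichever multi-language model and test image you use, and `french_dict.txt` is assumed to be available under `ppocr/utils/dict/`.

```shell
./build/ppocr --det_model_dir=inference/det_db \
    --rec_model_dir=inference/rec_french \
    --rec_char_dict_path=../../ppocr/utils/dict/french_dict.txt \
    --image_dir=../../doc/imgs_words/french/1.jpg \
    --det=true \
    --rec=true
```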
@@ -344,6 +372,12 @@ predict img: ../../doc/imgs/12.jpg
The detection visualized image saved in ./output//12.jpg
```
- table
```bash
predict img: ../../ppstructure/docs/table/table.jpg
0 type: table, region: [0,0,371,293], res: <html><body><table><thead><tr><td>Methods</td><td>R</td><td>P</td><td>F</td><td>FPS</td></tr></thead><tbody><tr><td>SegLink [26]</td><td>70.0</td><td>86.0</td><td>77.0</td><td>8.9</td></tr><tr><td>PixelLink [4]</td><td>73.2</td><td>83.0</td><td>77.8</td><td>-</td></tr><tr><td>TextSnake [18]</td><td>73.9</td><td>83.2</td><td>78.3</td><td>1.1</td></tr><tr><td>TextField [37]</td><td>75.9</td><td>87.4</td><td>81.3</td><td>5.2 </td></tr><tr><td>MSR[38]</td><td>76.7</td><td>87.4</td><td>81.7</td><td>-</td></tr><tr><td>FTSN [3]</td><td>77.1</td><td>87.6</td><td>82.0</td><td>-</td></tr><tr><td>LSE[30]</td><td>81.7</td><td>84.2</td><td>82.9</td><td>-</td></tr><tr><td>CRAFT [2]</td><td>78.2</td><td>88.2</td><td>82.9</td><td>8.6</td></tr><tr><td>MCN [16]</td><td>79</td><td>88</td><td>83</td><td>-</td></tr><tr><td>ATRR[35]</td><td>82.1</td><td>85.2</td><td>83.6</td><td>-</td></tr><tr><td>PAN [34]</td><td>83.8</td><td>84.4</td><td>84.1</td><td>30.2</td></tr><tr><td>DB[12]</td><td>79.2</td><td>91.5</td><td>84.9</td><td>32.0</td></tr><tr><td>DRRG [41]</td><td>82.30</td><td>88.05</td><td>85.08</td><td>-</td></tr><tr><td>Ours (SynText)</td><td>80.68</td><td>85.40</td><td>82.97</td><td>12.68</td></tr><tr><td>Ours (MLT-17)</td><td>84.54</td><td>86.62</td><td>85.57</td><td>12.31</td></tr></tbody></table></body></html>
```
<a name="3"></a> <a name="3"></a>
## 3. FAQ ## 3. FAQ
......
@@ -181,6 +181,9 @@ inference/
|-- cls
| |--inference.pdiparams
| |--inference.pdmodel
|-- table
| |--inference.pdiparams
| |--inference.pdmodel
```
<a name="22"></a>
@@ -285,6 +288,21 @@ CUDNN_LIB_DIR=/your_cudnn_lib_dir
--cls=true \
```
##### 7. Table recognition
```shell
./build/ppocr --det_model_dir=inference/det_db \
--rec_model_dir=inference/rec_rcnn \
--cls_model_dir=inference/cls \
--table_model_dir=inference/table \
--image_dir=../../ppstructure/docs/table/table.jpg \
--use_angle_cls=true \
--det=true \
--rec=true \
--cls=true \
--type=structure \
--table=true
```
The supported adjustable parameters are explained in more detail below:
- Common parameters
@@ -328,21 +346,32 @@ CUDNN_LIB_DIR=/your_cudnn_lib_dir
|cls_thresh|float|0.9|Score threshold of the direction classifier|
|cls_batch_num|int|1|Batch size of the direction classifier|
- Text recognition model related parameters
|parameter|data type|default|meaning|
| :---: | :---: | :---: | :---: |
|rec_model_dir|string|-|Address of the text recognition inference model|
|rec_char_dict_path|string|../../ppocr/utils/ppocr_keys_v1.txt|dictionary file|
|rec_batch_num|int|6|Batch size of the text recognition model|
|rec_img_h|int|48|Input image height of the text recognition model|
|rec_img_w|int|320|Input image width of the text recognition model|
- Table recognition model related parameters
|parameter|data type|default|meaning|
| :---: | :---: | :---: | :---: |
|table_model_dir|string|-|Address of the table recognition inference model|
|table_char_dict_path|string|../../ppocr/utils/dict/table_structure_dict.txt|dictionary file|
|table_max_len|int|488|Long-side size of the input image for the table recognition model; the final network input size is (table_max_len, table_max_len)|
* PaddleOCR also supports multi-language inference; for more supported languages and models, see the multi-language dictionary and model section of the [recognition documentation](../../doc/doc_ch/recognition.md). To run multi-language inference, simply modify the `rec_char_dict_path` (dictionary file path) and `rec_model_dir` (inference model path) fields.
Finally, the detection results are printed to the screen as follows.
- ocr
```bash
predict img: ../../doc/imgs/12.jpg
../../doc/imgs/12.jpg
@@ -353,6 +382,13 @@ predict img: ../../doc/imgs/12.jpg
The detection visualized image saved in ./output//12.jpg
```
- table
```bash
predict img: ../../ppstructure/docs/table/table.jpg
0 type: table, region: [0,0,371,293], res: <html><body><table><thead><tr><td>Methods</td><td>R</td><td>P</td><td>F</td><td>FPS</td></tr></thead><tbody><tr><td>SegLink [26]</td><td>70.0</td><td>86.0</td><td>77.0</td><td>8.9</td></tr><tr><td>PixelLink [4]</td><td>73.2</td><td>83.0</td><td>77.8</td><td>-</td></tr><tr><td>TextSnake [18]</td><td>73.9</td><td>83.2</td><td>78.3</td><td>1.1</td></tr><tr><td>TextField [37]</td><td>75.9</td><td>87.4</td><td>81.3</td><td>5.2 </td></tr><tr><td>MSR[38]</td><td>76.7</td><td>87.4</td><td>81.7</td><td>-</td></tr><tr><td>FTSN [3]</td><td>77.1</td><td>87.6</td><td>82.0</td><td>-</td></tr><tr><td>LSE[30]</td><td>81.7</td><td>84.2</td><td>82.9</td><td>-</td></tr><tr><td>CRAFT [2]</td><td>78.2</td><td>88.2</td><td>82.9</td><td>8.6</td></tr><tr><td>MCN [16]</td><td>79</td><td>88</td><td>83</td><td>-</td></tr><tr><td>ATRR[35]</td><td>82.1</td><td>85.2</td><td>83.6</td><td>-</td></tr><tr><td>PAN [34]</td><td>83.8</td><td>84.4</td><td>84.1</td><td>30.2</td></tr><tr><td>DB[12]</td><td>79.2</td><td>91.5</td><td>84.9</td><td>32.0</td></tr><tr><td>DRRG [41]</td><td>82.30</td><td>88.05</td><td>85.08</td><td>-</td></tr><tr><td>Ours (SynText)</td><td>80.68</td><td>85.40</td><td>82.97</td><td>12.68</td></tr><tr><td>Ours (MLT-17)</td><td>84.54</td><td>86.62</td><td>85.57</td><td>12.31</td></tr></tbody></table></body></html>
```
<a name="3"></a> <a name="3"></a>
## 3. FAQ ## 3. FAQ
......
@@ -119,7 +119,7 @@ void structure(std::vector<cv::String> &cv_all_img_names) {
  std::vector<std::vector<StructurePredictResult>> structure_results =
      engine.structure(cv_all_img_names, false, FLAGS_table);
  for (int i = 0; i < cv_all_img_names.size(); i++) {
    cout << "predict img: " << cv_all_img_names[i] << endl;
    for (int j = 0; j < structure_results[i].size(); j++) {
      std::cout << j << "\ttype: " << structure_results[i][j].type
                << ", region: [";
@@ -127,7 +127,7 @@ void structure(std::vector<cv::String> &cv_all_img_names) {
                << structure_results[i][j].box[1] << ","
                << structure_results[i][j].box[2] << ","
                << structure_results[i][j].box[3] << "], res: ";
      if (structure_results[i][j].type == "table") {
        std::cout << structure_results[i][j].html << std::endl;
      } else {
        Utility::print_result(structure_results[i][j].text_res);
......
@@ -55,7 +55,7 @@ PaddleStructure::structure(std::vector<cv::String> cv_all_img_names,
    if (layout) {
    } else {
      StructurePredictResult res;
      res.type = "table";
      res.box = std::vector<int>(4, 0);
      res.box[2] = srcimg.cols;
      res.box[3] = srcimg.rows;
@@ -65,7 +65,7 @@ PaddleStructure::structure(std::vector<cv::String> cv_all_img_names,
    for (int i = 0; i < structure_result.size(); i++) {
      // crop image
      roi_img = Utility::crop_image(srcimg, structure_result[i].box);
      if (structure_result[i].type == "table") {
        this->table(roi_img, structure_result[i], time_info_table,
                    time_info_det, time_info_rec, time_info_cls);
      }
......