diff --git a/deploy/cpp_infer/docs/windows_vs2019_build.md b/deploy/cpp_infer/docs/windows_vs2019_build.md
index 24a1e55cd7e5728e9cd56da8a35a72892380d28b..7c813429fbc82da97de272f3ea6a073e0e8d7734 100644
--- a/deploy/cpp_infer/docs/windows_vs2019_build.md
+++ b/deploy/cpp_infer/docs/windows_vs2019_build.md
@@ -103,7 +103,7 @@ cd D:\projects\PaddleOCR\deploy\cpp_infer
 ### FAQ
 * When running the compiled .exe in a Windows terminal, the output may appear garbled. Enter `CHCP 65001` in the terminal to switch its encoding from GBK (the default) to UTF-8; a more detailed explanation is given in this blog post: [https://blog.csdn.net/qq_35038153/article/details/78430359](https://blog.csdn.net/qq_35038153/article/details/78430359).
-* If compilation fails with `error C1083: Cannot open include file: "dirent.h": No such file or directory`, you can follow this [document](https://blog.csdn.net/Dora_blank/article/details/117740837#41_C1083_direnthNo_such_file_or_directory_54) to create a `dirent.h` file and add it to the header includes of `utility.cpp`. Also change `lstat` to `stat` on line 70 of `utility.cpp`.
+* If compilation fails with `error C1083: Cannot open include file: "dirent.h": No such file or directory`, download the [dirent.h](https://paddleocr.bj.bcebos.com/deploy/cpp_infer/cpp_files/dirent.h) file and add it to the header includes of `utility.cpp`.
 * If compilation fails with `Autolog is undefined`, create an `autolog.h` file with the contents of [autolog.h](https://github.com/LDOUBLEV/AutoLog/blob/main/auto_log/autolog.h), add it to the header includes of `main.cpp`, and compile again.
diff --git a/deploy/cpp_infer/readme.md b/deploy/cpp_infer/readme.md
index 725197ad5cf9c7bf54be445f2bb3698096e7f9fb..8ca0e4a8c6c0eb7d09312645b70291d7e8c8016e 100644
--- a/deploy/cpp_infer/readme.md
+++ b/deploy/cpp_infer/readme.md
@@ -1,9 +1,3 @@
-# Server-side C++ Inference
-
-This chapter introduces the C++ deployment method for PaddleOCR models; the corresponding Python inference deployment is described in this [document](../../doc/doc_ch/inference.md).
-C++ outperforms Python in computational performance, so C++ deployment is preferred in most CPU and GPU scenarios. This section describes how to set up a C++ environment on Linux\Windows (CPU\GPU) and deploy
-PaddleOCR models.
-
 - [Server-side C++ Inference](#服务器端c预测)
   - [1. Prepare the Environment](#1-准备环境)
     - [1.0 Preparation](#10-运行准备)
@@ -18,6 +12,14 @@ PaddleOCR models.
       - [1. Detection only:](#1-只调用检测)
       - [2. Recognition only:](#2-只调用识别)
       - [3. Detection and recognition in series:](#3-调用串联)
+  - [3. FAQ](#3-faq)
+
+# Server-side C++ Inference
+
+This chapter introduces the C++ deployment method for PaddleOCR models; the corresponding Python inference deployment is described in this [document](../../doc/doc_ch/inference.md).
+C++ outperforms Python in computational performance, so C++ deployment is preferred in most CPU and GPU scenarios. This section describes how to set up a C++ environment on Linux\Windows (CPU\GPU) and deploy
+PaddleOCR models.
+
@@ -280,10 +282,10 @@ CUDNN_LIB_DIR=/your_cudnn_lib_dir
 |Parameter|Type|Default|Meaning|
 | :---: | :---: | :---: | :---: |
 |rec_model_dir|string|-|Path of the recognition inference model|
-|char_list_file|string|../../ppocr/utils/ppocr_keys_v1.txt|Dictionary file|
+|rec_char_dict_path|string|../../ppocr/utils/ppocr_keys_v1.txt|Dictionary file|
 
-* PaddleOCR also supports multi-language inference; see the multi-language dictionaries and models section of the [recognition documentation](../../doc/doc_ch/recognition.md) for more supported languages and models. To run multi-language inference, you only need to modify the `char_list_file` (dictionary file path) and `rec_model_dir` (inference model path) fields.
+* PaddleOCR also supports multi-language inference; see the multi-language dictionaries and models section of the [recognition documentation](../../doc/doc_ch/recognition.md) for more supported languages and models. To run multi-language inference, you only need to modify the `rec_char_dict_path` (dictionary file path) and `rec_model_dir` (inference model path) fields.
 
 The detection results will finally be printed to the screen, as shown below.
@@ -291,5 +293,6 @@ CUDNN_LIB_DIR=/your_cudnn_lib_dir
+## 3. FAQ
 
-**Note: When using the Paddle inference library, version 2.0.0 is recommended.**
+ 1. If you encounter the error `unable to access 'https://github.com/LDOUBLEV/AutoLog.git/': gnutls_handshake() failed: The TLS connection was non-properly terminated.`, change the GitHub address in `deploy/cpp_infer/external-cmake/auto-log.cmake` to https://gitee.com/Double_V/AutoLog.
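A note on the renamed parameter above: `rec_char_dict_path` points at a plain-text dictionary such as `ppocr_keys_v1.txt`, with one label per line. Below is a minimal sketch of how such a file is typically consumed; the `ReadDict` name and signature are illustrative only and are not necessarily the actual PaddleOCR helper.

```cpp
#include <fstream>
#include <string>
#include <vector>

// Sketch: load a dictionary file in which every line holds one label.
// The position in the vector becomes the class id that the recognizer's
// output indices are mapped back through.
std::vector<std::string> ReadDict(const std::string &path) {
  std::ifstream in(path);
  std::vector<std::string> labels;
  std::string line;
  while (std::getline(in, line)) {
    labels.push_back(line);
  }
  return labels;
}
```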
diff --git a/deploy/cpp_infer/readme_en.md b/deploy/cpp_infer/readme_en.md
index f4cfab24350c1a6be3d8ebebf6b47b0baaa4f26e..55160ede6bdd2f387124021f9ff25cdfb6b5a23a 100644
--- a/deploy/cpp_infer/readme_en.md
+++ b/deploy/cpp_infer/readme_en.md
@@ -1,3 +1,19 @@
+- [Server-side C++ Inference](#server-side-c-inference)
+  - [1. Prepare the Environment](#1-prepare-the-environment)
+    - [Environment](#environment)
+    - [1.1 Compile OpenCV](#11-compile-opencv)
+    - [1.2 Compile or Download the Paddle Inference Library](#12-compile-or-download-or-the-paddle-inference-library)
+      - [1.2.1 Direct download and installation](#121-direct-download-and-installation)
+      - [1.2.2 Compile the inference source code](#122-compile-the-inference-source-code)
+  - [2. Compile and Run the Demo](#2-compile-and-run-the-demo)
+    - [2.1 Export the inference model](#21-export-the-inference-model)
+    - [2.2 Compile PaddleOCR C++ inference demo](#22-compile-paddleocr-c-inference-demo)
+    - [Run the demo](#run-the-demo)
+      - [1. run det demo:](#1-run-det-demo)
+      - [2. run rec demo:](#2-run-rec-demo)
+      - [3. run system demo:](#3-run-system-demo)
+  - [3. FAQ](#3-faq)
+
 # Server-side C++ Inference
 
 This chapter introduces the C++ deployment steps of the PaddleOCR model; the corresponding Python inference deployment is described in this [document](../../doc/doc_ch/inference.md).
@@ -258,9 +274,9 @@ More parameters are as follows,
 |parameter|data type|default|meaning|
 | --- | --- | --- | --- |
 |rec_model_dir|string|-|Address of recognition inference model|
-|char_list_file|string|../../ppocr/utils/ppocr_keys_v1.txt|dictionary file|
+|rec_char_dict_path|string|../../ppocr/utils/ppocr_keys_v1.txt|dictionary file|
 
-* Multi-language inference is also supported in PaddleOCR; you can refer to the [recognition tutorial](../../doc/doc_en/recognition_en.md) for more supported languages and models. Specifically, if you want to infer using multi-language models, you just need to modify the values of `char_list_file` and `rec_model_dir`.
+* Multi-language inference is also supported in PaddleOCR; you can refer to the [recognition tutorial](../../doc/doc_en/recognition_en.md) for more supported languages and models. Specifically, if you want to infer using multi-language models, you just need to modify the values of `rec_char_dict_path` and `rec_model_dir`.
 
 The detection results will be shown on the screen, which is as follows.
 
@@ -270,6 +286,6 @@ The detection results will be shown on the screen, which is as follows.
 
-### 2.3 Notes
+## 3. FAQ
 
-* Paddle 2.0.0 inference model library is recommended for this tutorial.
+ 1. If you encounter the error `unable to access 'https://github.com/LDOUBLEV/AutoLog.git/': gnutls_handshake() failed: The TLS connection was non-properly terminated.`, change the GitHub address in `deploy/cpp_infer/external-cmake/auto-log.cmake` to https://gitee.com/Double_V/AutoLog.
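The table entry above corresponds one-to-one to a gflags definition in `main.cpp`, which the next diff renames. For readers unfamiliar with the gflags pattern, here is a self-contained sketch assuming only the gflags library itself: `DEFINE_string` creates the `--rec_char_dict_path` command-line flag with the documented default, and the generated `FLAGS_rec_char_dict_path` variable holds either that default or whatever was passed on the command line.

```cpp
#include <gflags/gflags.h>
#include <iostream>

// Defines a --rec_char_dict_path flag; the default mirrors the one
// documented in the parameter table above.
DEFINE_string(rec_char_dict_path, "../../ppocr/utils/ppocr_keys_v1.txt",
              "Path of dictionary.");

int main(int argc, char **argv) {
  // Fills FLAGS_rec_char_dict_path from argv, removing parsed flags.
  google::ParseCommandLineFlags(&argc, &argv, true);
  std::cout << "dictionary: " << FLAGS_rec_char_dict_path << std::endl;
  return 0;
}
```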
diff --git a/deploy/cpp_infer/src/main.cpp b/deploy/cpp_infer/src/main.cpp
index 664b10b2f579fd8681c65dcf1ded5ebe53d0424c..31d0685f543a1441eab8b9d2595d008ff65763f8 100644
--- a/deploy/cpp_infer/src/main.cpp
+++ b/deploy/cpp_infer/src/main.cpp
@@ -63,7 +63,7 @@ DEFINE_double(cls_thresh, 0.9, "Threshold of cls_thresh.");
 // recognition related
 DEFINE_string(rec_model_dir, "", "Path of rec inference model.");
 DEFINE_int32(rec_batch_num, 6, "rec_batch_num.");
-DEFINE_string(char_list_file, "../../ppocr/utils/ppocr_keys_v1.txt",
+DEFINE_string(rec_char_dict_path, "../../ppocr/utils/ppocr_keys_v1.txt",
               "Path of dictionary.");
 
 using namespace std;
@@ -130,14 +130,14 @@ int main_det(std::vector<cv::String> cv_all_img_names) {
 
 int main_rec(std::vector<cv::String> cv_all_img_names) {
   std::vector<double> time_info = {0, 0, 0};
-  std::string char_list_file = FLAGS_char_list_file;
+  std::string rec_char_dict_path = FLAGS_rec_char_dict_path;
   if (FLAGS_benchmark)
-    char_list_file = FLAGS_char_list_file.substr(6);
-  cout << "label file: " << char_list_file << endl;
+    rec_char_dict_path = FLAGS_rec_char_dict_path.substr(6);
+  cout << "label file: " << rec_char_dict_path << endl;
 
   CRNNRecognizer rec(FLAGS_rec_model_dir, FLAGS_use_gpu, FLAGS_gpu_id,
                      FLAGS_gpu_mem, FLAGS_cpu_threads, FLAGS_enable_mkldnn,
-                     char_list_file, FLAGS_use_tensorrt, FLAGS_precision,
+                     rec_char_dict_path, FLAGS_use_tensorrt, FLAGS_precision,
                      FLAGS_rec_batch_num);
 
   std::vector<cv::Mat> img_list;
@@ -186,14 +186,14 @@ int main_system(std::vector<cv::String> cv_all_img_names) {
     FLAGS_cls_thresh, FLAGS_use_tensorrt, FLAGS_precision);
   }
 
-  std::string char_list_file = FLAGS_char_list_file;
+  std::string rec_char_dict_path = FLAGS_rec_char_dict_path;
   if (FLAGS_benchmark)
-    char_list_file = FLAGS_char_list_file.substr(6);
-  cout << "label file: " << char_list_file << endl;
+    rec_char_dict_path = FLAGS_rec_char_dict_path.substr(6);
+  cout << "label file: " << rec_char_dict_path << endl;
 
   CRNNRecognizer rec(FLAGS_rec_model_dir, FLAGS_use_gpu, FLAGS_gpu_id,
                      FLAGS_gpu_mem, FLAGS_cpu_threads, FLAGS_enable_mkldnn,
-                     char_list_file, FLAGS_use_tensorrt, FLAGS_precision,
+                     rec_char_dict_path, FLAGS_use_tensorrt, FLAGS_precision,
                      FLAGS_rec_batch_num);
 
   for (int i = 0; i < cv_all_img_names.size(); ++i) {
diff --git a/deploy/cpp_infer/src/utility.cpp b/deploy/cpp_infer/src/utility.cpp
index c3c7b8485520579e8e2a23ae03543e3a9fc821bf..6952be54eed14d06ddcf3572d9bd2f4153894534 100644
--- a/deploy/cpp_infer/src/utility.cpp
+++ b/deploy/cpp_infer/src/utility.cpp
@@ -67,7 +67,7 @@ void Utility::GetAllFiles(const char *dir_name,
     return;
   }
   struct stat s;
-  lstat(dir_name, &s);
+  stat(dir_name, &s);
   if (!S_ISDIR(s.st_mode)) {
     std::cout << "dir_name is not a valid directory !" << std::endl;
     all_inputs.push_back(dir_name);
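The `lstat` to `stat` change matters for portability: `lstat` is POSIX-only (it inspects a symlink itself rather than its target), while `stat` is also available on Windows via `<sys/stat.h>`, which is what lets `utility.cpp` compile there once the downloaded `dirent.h` is in place. A hedged sketch of the directory check in isolation; note that MSVC may not predefine `S_ISDIR`, so the fallback macro below is an assumption about what `dirent.h` or equivalent would otherwise supply.

```cpp
#include <sys/stat.h>

#include <iostream>

#ifndef S_ISDIR
// MSVC does not define S_ISDIR; derive it from the mode bits.
#define S_ISDIR(mode) (((mode) & S_IFMT) == S_IFDIR)
#endif

// Returns true if path exists and is a directory. stat() follows
// symbolic links, so a symlink pointing at a directory also passes.
bool IsDirectory(const char *path) {
  struct stat s;
  if (stat(path, &s) != 0) {
    return false;  // path missing or inaccessible
  }
  return S_ISDIR(s.st_mode);
}

int main() {
  std::cout << IsDirectory(".") << std::endl;  // expect 1
  return 0;
}
```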