Commit 5c4810ad authored by 文幕地方

update doc and code

Parent f9a2b26a
# Server-side C++ Inference

This chapter introduces the C++ deployment method for PaddleOCR models; the corresponding Python inference deployment is covered in this [document](../../doc/doc_ch/inference.md).
C++ outperforms Python in compute performance, so C++ deployment is the usual choice in most CPU and GPU scenarios. This section describes how to set up the C++ environment on Linux/Windows (CPU/GPU) and complete the deployment of PaddleOCR models.
- [Server-side C++ Inference](#服务器端c预测)
- [1. Prepare the Environment](#1-准备环境)
- [1.0 Prerequisites](#10-运行准备)
@@ -18,6 +12,14 @@ PaddleOCR模型部署。
- [1. Run detection only:](#1-只调用检测)
- [2. Run recognition only:](#2-只调用识别)
- [3. Run detection and recognition in series:](#3-调用串联)
- [3. FAQ](#3-faq)
<a name="1"></a> <a name="1"></a>
@@ -280,10 +282,10 @@ CUDNN_LIB_DIR=/your_cudnn_lib_dir
|Parameter|Type|Default|Meaning|
| :---: | :---: | :---: | :---: |
|rec_model_dir|string|-|Path of the recognition inference model|
-|char_list_file|string|../../ppocr/utils/ppocr_keys_v1.txt|Dictionary file|
+|rec_char_dict_path|string|../../ppocr/utils/ppocr_keys_v1.txt|Dictionary file|

-* PaddleOCR also supports multilingual inference; for more supported languages and models, see the multilingual dictionaries and models section of the [recognition documentation](../../doc/doc_ch/recognition.md). To run multilingual inference, simply modify the `char_list_file` (dictionary file path) and `rec_model_dir` (inference model path) fields.
+* PaddleOCR also supports multilingual inference; for more supported languages and models, see the multilingual dictionaries and models section of the [recognition documentation](../../doc/doc_ch/recognition.md). To run multilingual inference, simply modify the `rec_char_dict_path` (dictionary file path) and `rec_model_dir` (inference model path) fields.
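As a rough illustration (an editor's sketch, not the loader this repository actually uses), the dictionary file referenced by `rec_char_dict_path` can be read as one character or token per line, with the line index corresponding to a recognition output class:

```cpp
// Minimal sketch: load a PaddleOCR-style dictionary file, assuming
// one character/token per line (e.g. ppocr_keys_v1.txt).
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

int main() {
  std::ifstream in("../../ppocr/utils/ppocr_keys_v1.txt");
  std::vector<std::string> dict;
  std::string line;
  while (std::getline(in, line))
    dict.push_back(line); // line index ~ output class id (assumed mapping)
  std::cout << "dictionary entries: " << dict.size() << std::endl;
  return 0;
}
```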
Finally, the detection results will be printed to the screen as follows.
@@ -291,5 +293,6 @@ CUDNN_LIB_DIR=/your_cudnn_lib_dir
<img src="./imgs/cpp_infer_pred_12.png" width="600">
</div>
## 3. FAQ
-**Note: when using the Paddle inference library, the 2.0.0 release is recommended.**
+1. If you encounter the error `unable to access 'https://github.com/LDOUBLEV/AutoLog.git/': gnutls_handshake() failed: The TLS connection was non-properly terminated.`, first import the `https://github.com/LDOUBLEV/AutoLog` project on Gitee, then change the GitHub address in `deploy/cpp_infer/external-cmake/auto-log.cmake` to the Gitee address.
- [Server-side C++ Inference](#server-side-c-inference)
- [1. Prepare the Environment](#1-prepare-the-environment)
- [Environment](#environment)
- [1.1 Compile OpenCV](#11-compile-opencv)
- [1.2 Compile or Download the Paddle Inference Library](#12-compile-or-download-or-the-paddle-inference-library)
- [1.2.1 Direct download and installation](#121-direct-download-and-installation)
- [1.2.2 Compile the inference source code](#122-compile-the-inference-source-code)
- [2. Compile and Run the Demo](#2-compile-and-run-the-demo)
- [2.1 Export the inference model](#21-export-the-inference-model)
- [2.2 Compile PaddleOCR C++ inference demo](#22-compile-paddleocr-c-inference-demo)
- [Run the demo](#run-the-demo)
- [1. run det demo:](#1-run-det-demo)
- [2. run rec demo:](#2-run-rec-demo)
- [3. run system demo:](#3-run-system-demo)
- [3. FAQ](#3-faq)
# Server-side C++ Inference

This chapter introduces the C++ deployment steps for the PaddleOCR model; the corresponding Python inference deployment method is described in this [document](../../doc/doc_ch/inference.md).
@@ -258,9 +274,9 @@ More parameters are as follows,
|parameter|data type|default|meaning|
| --- | --- | --- | --- |
|rec_model_dir|string|-|Address of recognition inference model|
-|char_list_file|string|../../ppocr/utils/ppocr_keys_v1.txt|dictionary file|
+|rec_char_dict_path|string|../../ppocr/utils/ppocr_keys_v1.txt|dictionary file|

-* Multi-language inference is also supported in PaddleOCR, you can refer to [recognition tutorial](../../doc/doc_en/recognition_en.md) for more supported languages and models in PaddleOCR. Specifically, if you want to infer using multi-language models, you just need to modify values of `char_list_file` and `rec_model_dir`.
+* Multi-language inference is also supported in PaddleOCR, you can refer to [recognition tutorial](../../doc/doc_en/recognition_en.md) for more supported languages and models in PaddleOCR. Specifically, if you want to infer using multi-language models, you just need to modify values of `rec_char_dict_path` and `rec_model_dir`.
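As an illustration, switching languages only changes two arguments of the recognizer construction shown in the `main.cpp` diff below; the French model directory and `french_dict.txt` path in this fragment are assumed example values, not verified paths:

```cpp
// Fragment mirroring the CRNNRecognizer construction in main.cpp; only the
// model directory and dictionary path (assumed example values) change.
CRNNRecognizer rec("./inference/french_rec_infer",           // rec_model_dir
                   FLAGS_use_gpu, FLAGS_gpu_id, FLAGS_gpu_mem,
                   FLAGS_cpu_threads, FLAGS_enable_mkldnn,
                   "../../ppocr/utils/dict/french_dict.txt", // rec_char_dict_path
                   FLAGS_use_tensorrt, FLAGS_precision, FLAGS_rec_batch_num);
```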
The detection results will be shown on the screen, which is as follows.
@@ -270,6 +286,6 @@ The detection results will be shown on the screen, which is as follows.
</div>
-### 2.3 Notes
-* Paddle 2.0.0 inference model library is recommended for this tutorial.
+## 3. FAQ
+1. If you encounter the error `unable to access 'https://github.com/LDOUBLEV/AutoLog.git/': gnutls_handshake() failed: The TLS connection was non-properly terminated.`, first import the `https://github.com/LDOUBLEV/AutoLog` project on Gitee, and then change the GitHub address in `deploy/cpp_infer/external-cmake/auto-log.cmake` to the Gitee address.
\ No newline at end of file
@@ -63,7 +63,7 @@ DEFINE_double(cls_thresh, 0.9, "Threshold of cls_thresh.");
// recognition related
DEFINE_string(rec_model_dir, "", "Path of rec inference model.");
DEFINE_int32(rec_batch_num, 6, "rec_batch_num.");
-DEFINE_string(char_list_file, "../../ppocr/utils/ppocr_keys_v1.txt",
+DEFINE_string(rec_char_dict_path, "../../ppocr/utils/ppocr_keys_v1.txt",
              "Path of dictionary.");

using namespace std;
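// A minimal standalone sketch (editor's illustration, assuming only the
// gflags library) of how a definition such as the renamed flag above is
// parsed from the command line and read back:
#include <gflags/gflags.h>
#include <iostream>
DEFINE_string(rec_char_dict_path, "../../ppocr/utils/ppocr_keys_v1.txt",
              "Path of dictionary.");
int main(int argc, char **argv) {
  // e.g. ./demo --rec_char_dict_path=../../ppocr/utils/dict/french_dict.txt
  google::ParseCommandLineFlags(&argc, &argv, true);
  std::cout << "dict: " << FLAGS_rec_char_dict_path << std::endl;
  return 0;
}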
@@ -130,14 +130,14 @@ int main_det(std::vector<cv::String> cv_all_img_names) {
int main_rec(std::vector<cv::String> cv_all_img_names) {
  std::vector<double> time_info = {0, 0, 0};

-  std::string char_list_file = FLAGS_char_list_file;
+  std::string rec_char_dict_path = FLAGS_rec_char_dict_path;
  if (FLAGS_benchmark)
-    char_list_file = FLAGS_char_list_file.substr(6);
-  cout << "label file: " << char_list_file << endl;
+    rec_char_dict_path = FLAGS_rec_char_dict_path.substr(6);
+  cout << "label file: " << rec_char_dict_path << endl;

  CRNNRecognizer rec(FLAGS_rec_model_dir, FLAGS_use_gpu, FLAGS_gpu_id,
                     FLAGS_gpu_mem, FLAGS_cpu_threads, FLAGS_enable_mkldnn,
-                     char_list_file, FLAGS_use_tensorrt, FLAGS_precision,
+                     rec_char_dict_path, FLAGS_use_tensorrt, FLAGS_precision,
                     FLAGS_rec_batch_num);

  std::vector<cv::Mat> img_list;
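// Note on substr(6) above (an editor's reading, not stated in this commit):
// the default path "../../ppocr/utils/ppocr_keys_v1.txt" starts with
// "../../", exactly six characters, so in --benchmark mode the dictionary is
// presumably resolved relative to the repository root rather than the build
// directory:
//   std::string("../../ppocr/utils/ppocr_keys_v1.txt").substr(6)
//       == "ppocr/utils/ppocr_keys_v1.txt"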
@@ -186,14 +186,14 @@ int main_system(std::vector<cv::String> cv_all_img_names) {
                       FLAGS_cls_thresh, FLAGS_use_tensorrt, FLAGS_precision);
  }

-  std::string char_list_file = FLAGS_char_list_file;
+  std::string rec_char_dict_path = FLAGS_rec_char_dict_path;
  if (FLAGS_benchmark)
-    char_list_file = FLAGS_char_list_file.substr(6);
-  cout << "label file: " << char_list_file << endl;
+    rec_char_dict_path = FLAGS_rec_char_dict_path.substr(6);
+  cout << "label file: " << rec_char_dict_path << endl;

  CRNNRecognizer rec(FLAGS_rec_model_dir, FLAGS_use_gpu, FLAGS_gpu_id,
                     FLAGS_gpu_mem, FLAGS_cpu_threads, FLAGS_enable_mkldnn,
-                     char_list_file, FLAGS_use_tensorrt, FLAGS_precision,
+                     rec_char_dict_path, FLAGS_use_tensorrt, FLAGS_precision,
                     FLAGS_rec_batch_num);

  for (int i = 0; i < cv_all_img_names.size(); ++i) {
......
@@ -67,7 +67,7 @@ void Utility::GetAllFiles(const char *dir_name,
    return;
  }

  struct stat s;
-  lstat(dir_name, &s);
+  stat(dir_name, &s);
  if (!S_ISDIR(s.st_mode)) {
    std::cout << "dir_name is not a valid directory !" << std::endl;
    all_inputs.push_back(dir_name);
......
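// Why stat() instead of lstat() above: stat() follows symbolic links, so a
// symlink that points at a directory now passes the S_ISDIR check instead of
// being pushed back as a plain file. A minimal standalone sketch of the same
// pattern (editor's illustration, assuming only POSIX <sys/stat.h>):
#include <sys/stat.h>

bool IsDirectory(const char *path) {
  struct stat s;
  if (stat(path, &s) != 0)
    return false;            // missing or inaccessible path
  return S_ISDIR(s.st_mode); // true for directories and symlinks to them
}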