English | [简体中文](readme_ch.md)
# Server-side C++ Inference
This chapter introduces the C++ deployment steps of the PaddleOCR model. For the corresponding Python inference deployment, please refer to the [Python inference document](../../doc/doc_ch/inference.md).
C++ performs better than Python, so C++ deployment is widely used in CPU and GPU deployment scenarios.
- [1. Prepare the Environment](#1)
  - [1.1 Environment](#11)
  - [1.2 Compile OpenCV](#12)
  - [1.3 Compile or Download the Paddle Inference Library](#13)
- [2. Compile and Run the Demo](#2)
  - [2.1 Export the inference model](#21)
  - [2.2 Compile PaddleOCR C++ inference demo](#22)
  - [2.3 Run the demo](#23)
- [3. FAQ](#3)
This section introduces how to configure the C++ environment and deploy PaddleOCR in a Linux (CPU/GPU) environment. For Windows deployment, please refer to the [Windows compilation guidelines](./docs/windows_vs2019_build.md).
<aname="1"></a>
## 1. Prepare the Environment
<aname="11"></a>
### 1.1 Environment
- Linux; Docker is recommended (a minimal Docker sketch follows this list).
- Windows.
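
A minimal sketch of the Docker route, assuming the public `paddlepaddle/paddle` image from Docker Hub (the tag and mount paths below are illustrative; choose a CPU or GPU tag that matches your machine):

```shell
# Illustrative only: start a PaddlePaddle development container for the C++ build.
# The image tag and mount paths are assumptions; adjust them to your environment.
docker pull paddlepaddle/paddle:2.4.2
docker run --name ppocr_cpp_build -v "$PWD":/paddle -w /paddle \
    -it paddlepaddle/paddle:2.4.2 /bin/bash
```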
<aname="12"></a>
### 1.2 Compile OpenCV
* First, download the OpenCV source package from the OpenCV official website and compile it in your Linux environment. Taking OpenCV 3.4.7 as an example, the download command is as follows.
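
A hedged sketch of this step, assuming the OpenCV 3.4.7 source archive from GitHub and an install prefix named `opencv3` (the exact URL, CMake flags, and prefix used in the original guide may differ):

```shell
# Illustrative only: download, build, and install OpenCV 3.4.7 from source.
wget https://github.com/opencv/opencv/archive/3.4.7.tar.gz -O opencv-3.4.7.tar.gz
tar -xf opencv-3.4.7.tar.gz
cd opencv-3.4.7 && mkdir -p build && cd build
cmake .. \
    -DCMAKE_INSTALL_PREFIX="$(pwd)/../../opencv3" \
    -DCMAKE_BUILD_TYPE=Release \
    -DBUILD_SHARED_LIBS=ON \
    -DWITH_IPP=OFF \
    -DBUILD_TESTS=OFF
make -j"$(nproc)"
make install
```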
...
...
```
opencv3/
|-- share
```
<aname="13"></a>
### 1.3 Compile or Download the Paddle Inference Library
* There are 2 ways to obtain the Paddle inference library, described in detail below.
#### 1.3.1 Direct download and installation
You can review and select the appropriate version of the inference library on the [Paddle inference library official website](https://paddleinference.paddlepaddle.org.cn/user_guides/download_lib.html#linux).
...
...
```shell
tar -xf paddle_inference.tgz
```
Finally, you will see the folder `paddle_inference/` in the current path.
#### 1.3.2 Compile the inference source code
* If you want the latest Paddle inference library features, you can download the latest code from the Paddle GitHub repository and compile the inference library from source. It is recommended to use a Paddle version greater than or equal to 2.0.1.
* You can refer to the [Paddle inference library documentation](https://www.paddlepaddle.org.cn/documentation/docs/en/advanced_guide/inference_deployment/inference/build_and_install_lib_en.html) to get the Paddle source code from GitHub and then compile it to generate the latest inference library. The method of obtaining the code with git is as follows.
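
A minimal sketch of the clone and build, assuming a release branch and the commonly documented inference-build flags (the branch name and flag values are assumptions; follow the documentation linked above for your configuration):

```shell
# Illustrative only: fetch Paddle and build the C++ inference library from source.
git clone https://github.com/PaddlePaddle/Paddle.git
cd Paddle
git checkout release/2.4            # pick a branch/tag >= 2.0.1 (release/2.4 is an assumption)

mkdir -p build && cd build
cmake .. -DWITH_MKL=ON -DWITH_GPU=OFF -DON_INFER=ON -DWITH_PYTHON=OFF
make -j"$(nproc)"
make inference_lib_dist -j"$(nproc)"   # produces build/paddle_inference_install_dir
```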
In the generated inference library, the `paddle` folder is the Paddle library required for C++ prediction later, and `version.txt` contains the version information of the current inference library.
<aname="2"></a>
## 2. Compile and Run the Demo
<aname="21"></a>
### 2.1 Export the inference model
* You can refer to [Model inference](../../doc/doc_ch/inference.md) to export the inference model. Assuming the exported models are placed in the `inference` directory, the directory structure is as follows.
...
...
```
inference/
```
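
If you prefer to start from the officially released PP-OCR inference models rather than exporting your own, here is a hedged sketch of that route (the model names and URLs follow the PP-OCR model zoo conventions and are assumptions here; check the current model list for the exact links):

```shell
# Illustrative only: fetch released detection / classification / recognition inference models.
mkdir -p inference && cd inference
wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar && tar -xf ch_PP-OCRv3_det_infer.tar
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar && tar -xf ch_ppocr_mobile_v2.0_cls_infer.tar
wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_infer.tar && tar -xf ch_PP-OCRv3_rec_infer.tar
cd ..
```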
<aname="22"></a>
### 2.2 Compile PaddleOCR C++ inference demo
* The compilation commands are as follows. The paths of the Paddle C++ inference library, OpenCV, and other dependencies need to be replaced with the actual paths on your own machine.
```shell
...
...
```

The Paddle inference library path can be either the downloaded library (the `paddle_inference/` folder) or the one you compiled yourself (the `build/paddle_inference_install_dir/` folder).
* After the compilation is completed, an executable file named `ppocr` will be generated in the `build` folder.
<aname="23"></a>
### 2.3 Run the demo
Execute the built executable file:
```shell
./build/ppocr [--param1][--param2][...]
...
...
The detection visualized image saved in ./output//12.jpg
```
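
For reference, here is a hedged example of a full detection + classification + recognition run (the flag names and model directories below are assumptions based on the demo's parameter conventions; consult the parameter list in this directory's documentation for the authoritative set):

```shell
# Hypothetical invocation: det + cls + rec on a single image.
# Flag names and model directories are assumptions; adjust to your exported models.
./build/ppocr --det_model_dir=./inference/det_db \
    --cls_model_dir=./inference/cls \
    --rec_model_dir=./inference/rec_crnn \
    --image_dir=../../doc/imgs/12.jpg \
    --use_angle_cls=true \
    --det=true --cls=true --rec=true
```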
<aname="3"></a>
## 3. FAQ
1. If you encounter the error `unable to access 'https://github.com/LDOUBLEV/AutoLog.git/': gnutls_handshake() failed: The TLS connection was non-properly terminated.`, change the GitHub address in `deploy/cpp_infer/external-cmake/auto-log.cmake` to the mirror at https://gitee.com/Double_V/AutoLog.
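
   A one-line sketch of that change (the paths and URLs are taken from the error message and FAQ entry above; verify the file contents before editing):

   ```shell
   # Point the AutoLog dependency at the Gitee mirror instead of GitHub.
   sed -i 's#https://github.com/LDOUBLEV/AutoLog.git#https://gitee.com/Double_V/AutoLog.git#' \
       deploy/cpp_infer/external-cmake/auto-log.cmake
   ```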
Please prepare your environment by referring to [prepare the environment](./environment_en.md) and [clone the repo](./clone_en.md).
<aname="3"></a>
## 3. Model Training / Evaluation / Prediction
Please refer to [text detection training tutorial](./detection_en.md). PaddleOCR has modularized the code structure, so that you only need to **replace the configuration file** to train different detection models.
<aname="4"></a>
## 4. Inference and Deployment
<aname="4-1"></a>
### 4.1 Python Inference
First, convert the model saved during DB text detection training into an inference model. Taking the model based on the ResNet50_vd backbone network and trained on the ICDAR2015 English dataset as an example ([model download link](https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/det_r50_vd_db_v2.0_train.tar)), you can use the following command to convert it:
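
A hedged sketch of the conversion and a follow-up detection run (the config path, weight directory, and test image below follow PaddleOCR's usual layout and are assumptions here; check the linked model inference tutorial for the exact command):

```shell
# Illustrative only: export the trained DB model, then run detection with it.
python3 tools/export_model.py -c configs/det/det_r50_vd_db.yml \
    -o Global.pretrained_model=./det_r50_vd_db_v2.0_train/best_accuracy \
       Global.save_inference_dir=./inference/det_db

python3 tools/infer/predict_det.py --det_algorithm="DB" \
    --det_model_dir=./inference/det_db \
    --image_dir=./doc/imgs_en/img_10.jpg
```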
The visualized text detection results are saved to the `./inference_results` folder by default, and the name of the result file is prefixed with 'det_res'. Examples of results are as follows:
![](../imgs_results/det_res_img_10_db.jpg)
**Note**: Since the ICDAR2015 dataset has only 1,000 training images, mainly for English scenes, the above model has very poor detection results on Chinese text images.
<aname="4-2"></a>
### 4.2 C++ Inference
With the inference model prepared, refer to the [cpp infer](../../deploy/cpp_infer/) tutorial for C++ inference.
<aname="4-3"></a>
### 4.3 Serving
With the inference model prepared, refer to the [pdserving](../../deploy/pdserving/) tutorial for service deployment by Paddle Serving.
<aname="4-4"></a>
### 4.4 More
More deployment schemes supported for DB:
- Paddle2ONNX: with the inference model prepared, please refer to the [paddle2onnx](../../deploy/paddle2onnx/) tutorial.
<aname="5"></a>
## 5. FAQ
## Citation
```bibtex
@inproceedings{liao2020real,
title={Real-time scene text detection with differentiable binarization},
author={Liao, Minghui and Wan, Zhaoyi and Yao, Cong and Chen, Kai and Bai, Xiang},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},