# Server-side C++ inference

This chapter introduces the C++ deployment method of the PaddleOCR model. For the corresponding Python inference deployment method, please refer to the [document](../../doc/doc_ch/inference.md).
C++ performs better than Python in terms of computation, so C++ is used in most CPU and GPU deployment scenarios. This section introduces how to configure the C++ environment and deploy the PaddleOCR model in Linux and Windows (CPU/GPU) environments.


## 1. Prepare the environment

### Environment

- Linux; Docker is recommended.


### 1.1 Compile opencv

* First, download the packaged opencv source code from the opencv official website in the Linux environment. Taking opencv 3.4.7 as an example, the download commands are as follows.

```shell
cd deploy/cpp_infer
wget https://github.com/opencv/opencv/archive/3.4.7.tar.gz
tar -xf 3.4.7.tar.gz
```

Finally, you can see the folder of `opencv-3.4.7/` in the current directory.

* Compile opencv. The opencv source path (`root_path`) and installation path (`install_path`) should be set by yourself. Enter the opencv source code path and compile it in the following way.


```shell
root_path=your_opencv_root_path
install_path=${root_path}/opencv3

rm -rf build
mkdir build
cd build

cmake .. \
    -DCMAKE_INSTALL_PREFIX=${install_path} \
    -DCMAKE_BUILD_TYPE=Release \
    -DBUILD_SHARED_LIBS=OFF \
    -DWITH_IPP=OFF \
    -DBUILD_IPP_IW=OFF \
    -DWITH_LAPACK=OFF \
    -DWITH_EIGEN=OFF \
    -DCMAKE_INSTALL_LIBDIR=lib64 \
    -DWITH_ZLIB=ON \
    -DBUILD_ZLIB=ON \
    -DWITH_JPEG=ON \
    -DBUILD_JPEG=ON \
    -DWITH_PNG=ON \
    -DBUILD_PNG=ON \
    -DWITH_TIFF=ON \
    -DBUILD_TIFF=ON

make -j
make install
```

Among them, `root_path` is the downloaded opencv source code path, and `install_path` is the installation path of opencv. After `make install` is completed, opencv header files and library files will be generated in this folder, which will be used in the later compilation of the OCR source code.



The final file structure under the opencv installation path is as follows.

```
opencv3/
|-- bin
|-- include
|-- lib
|-- lib64
|-- share
```

### 1.2 Compile or download the Paddle inference library

* There are 2 ways to obtain the Paddle inference library, described in detail below.

#### 1.2.1 Direct download and installation

You can view and select the appropriate version of the inference library on the [Paddle inference library official website](https://www.paddlepaddle.org.cn/documentation/docs/zh/2.0/guides/05_inference_deployment/inference/build_and_install_lib_cn.html).
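
The download generally looks like the following; the URL below is only an illustrative placeholder, so copy the actual link of the version you selected from the official website:

```shell
# Replace the URL with the download link of the inference library version you selected
wget https://paddle-inference-lib.bj.bcebos.com/<your_selected_version>/paddle_inference.tgz
```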


* After downloading, use the following method to uncompress.

```shell
tar -xf paddle_inference.tgz
```

Finally you can see the following files in the folder of `paddle_inference/`. A typical layout (it matches that of the source-compiled library described below) is:
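
```
paddle_inference/
|-- paddle
|-- third_party
|-- version.txt
```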

#### 1.2.2 Compile from the source code
* If you want to get the latest features of the Paddle inference library, you can download the latest code from the Paddle github repository and compile the inference library from the source code. It is recommended to use an inference library with Paddle version 2.0.1 or higher.
* You can refer to the [Paddle inference library](https://www.paddlepaddle.org.cn/documentation/docs/en/advanced_guide/inference_deployment/inference/build_and_install_lib_en.html) documentation to get the Paddle source code from github and then compile it to generate the latest inference library. The method of using git to access the code is as follows.


```shell
git clone https://github.com/PaddlePaddle/Paddle.git
git checkout release/2.1
```

* After entering the Paddle directory, the commands to compile the paddle inference library are as follows.

```shell
rm -rf build
mkdir build
cd build

cmake  .. \
    -DWITH_CONTRIB=OFF \
    -DWITH_MKL=ON \
    -DWITH_MKLDNN=ON  \
    -DWITH_TESTING=OFF \
    -DCMAKE_BUILD_TYPE=Release \
    -DWITH_INFERENCE_API_TEST=OFF \
    -DON_INFER=ON \
    -DWITH_PYTHON=ON
make -j
make inference_lib_dist
```

For more compilation parameter options, please refer to the [document](https://www.paddlepaddle.org.cn/documentation/docs/zh/2.0/guides/05_inference_deployment/inference/build_and_install_lib_cn.html#congyuanmabianyi).


* After the compilation process, you can see the following files in the folder of `build/paddle_inference_install_dir/`.

```
build/paddle_inference_install_dir/
|-- CMakeCache.txt
|-- paddle
|-- third_party
|-- version.txt
```

Among them, `paddle` is the Paddle library required for C++ prediction later, and `version.txt` contains the version information of the current inference library.
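
You can check `version.txt` to confirm how the library was built. Its content typically looks like the following; the exact values depend on your build options and environment, so treat these lines as an illustrative example only:

```
GIT COMMIT ID: <commit id of the Paddle source>
WITH_MKL: ON
WITH_MKLDNN: ON
WITH_GPU: ON
CUDA version: 10.2
CUDNN version: v8
```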


## 2. Compile and run the demo

### 2.1 Export the inference model

* You can refer to [Model inference](../../doc/doc_ch/inference.md) to export the inference model. After the model is exported, assuming it is placed in the `inference` directory, the directory structure is as follows.

```
inference/
|-- det_db
|   |--inference.pdiparams
|   |--inference.pdmodel
|-- rec_rcnn
|   |--inference.pdiparams
|   |--inference.pdmodel
```
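
If you have not trained a model yourself, you can also download the prepared lightweight Chinese inference models and unpack them into the `inference` directory. The links below follow the naming used in the PaddleOCR model list; please check the [models list](../../doc/doc_en/models_list_en.md) for the current download links:

```shell
mkdir inference && cd inference
# Download and unpack the text detection model
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_infer.tar && tar -xf ch_ppocr_mobile_v2.0_det_infer.tar
# Download and unpack the text recognition model
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_infer.tar && tar -xf ch_ppocr_mobile_v2.0_rec_infer.tar
# Download and unpack the text direction classifier model
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar && tar -xf ch_ppocr_mobile_v2.0_cls_infer.tar
cd ..
```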


### 2.2 Compile PaddleOCR C++ inference demo


* The compilation commands are as follows. The paths of the Paddle C++ inference library, opencv and other dependencies need to be replaced with the actual paths on your own machine.

```shell
sh tools/build.sh MODE
```
Here, `MODE` should be one of `det`, `rec` and `system`. Choose one according to your needs. The explanation is as follows:
```shell
sh tools/build.sh det # build demo for detection only
sh tools/build.sh rec # build demo for recognition only
sh tools/build.sh system # build demo for a system (including the text direction classifier)
```

Specifically, you should modify the paths in `tools/build.sh`. The related content is as follows.

```shell
OPENCV_DIR=your_opencv_dir
LIB_DIR=your_paddle_inference_dir
CUDA_LIB_DIR=your_cuda_lib_dir
CUDNN_LIB_DIR=your_cudnn_lib_dir
```

`OPENCV_DIR` is the opencv installation path; `LIB_DIR` is the path of the downloaded Paddle inference library (the `paddle_inference` folder) or the compiled one (the `build/paddle_inference_install_dir` folder); `CUDA_LIB_DIR` is the cuda library file path, which is `/usr/local/cuda/lib64` in docker; `CUDNN_LIB_DIR` is the cudnn library file path, which is `/usr/lib/x86_64-linux-gnu/` in docker.
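
For example, inside the recommended docker environment a filled-in configuration might look like the following; the first two paths are illustrative and must be replaced with the real locations on your machine:

```shell
OPENCV_DIR=/path/to/opencv-3.4.7/opencv3   # the install_path used when compiling opencv
LIB_DIR=/path/to/paddle_inference          # or /path/to/Paddle/build/paddle_inference_install_dir
CUDA_LIB_DIR=/usr/local/cuda/lib64
CUDNN_LIB_DIR=/usr/lib/x86_64-linux-gnu/
```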


* After the compilation is completed, an executable file named `ocr_det`, `ocr_rec` or `ocr_system` will be generated in the `build` folder.


### 2.3 Run the demo
* Execute the built executable file by `./build/ocr_***`. Running it without arguments prints the available command line parameters.
##### 1. run ocr_det demo:
```shell
./build/ocr_det \
    --det_model_dir=inference/ch_ppocr_mobile_v2.0_det_infer \
    --image_dir=../../doc/imgs/12.jpg
```
##### 2. run ocr_rec demo:
```shell
./build/ocr_rec \
    --rec_model_dir=inference/ch_ppocr_mobile_v2.0_rec_infer \
    --image_dir=../../doc/imgs_words/ch/
```
##### 3. run ocr_system demo:
```shell
# without text direction classifier
./build/ocr_system \
    --det_model_dir=inference/ch_ppocr_mobile_v2.0_det_infer \
    --rec_model_dir=inference/ch_ppocr_mobile_v2.0_rec_infer \
    --image_dir=../../doc/imgs/12.jpg
# with text direction classifier
./build/ocr_system \
    --det_model_dir=inference/ch_ppocr_mobile_v2.0_det_infer \
    --use_angle_cls=true \
    --cls_model_dir=inference/ch_ppocr_mobile_v2.0_cls_infer \
    --rec_model_dir=inference/ch_ppocr_mobile_v2.0_rec_infer \
    --image_dir=../../doc/imgs/12.jpg
```

More parameters are as follows:

|parameter|data type|default|meaning|
| --- | --- | --- | --- |
|use_gpu|bool|false|Whether to use GPU|
|gpu_id|int|0|GPU id when use_gpu is true|
|gpu_mem|int|4000|GPU memory requested|
|cpu_math_library_num_threads|int|10|Number of threads when using CPU inference. When the number of machine cores is sufficient, the larger the value, the faster the inference speed|
|use_mkldnn|bool|true|Whether to use the mkldnn library|
|**detection related**|
|det_model_dir|string|-|Address of the detection inference model|
|max_side_len|int|960|Limit the maximum image height and width to 960|
|det_db_thresh|float|0.3|Used to filter the binarized image of DB prediction; setting it in the range 0.1-0.3 has no obvious effect on the result|
|det_db_box_thresh|float|0.5|DB post-processing filter box threshold; if boxes are missed in detection, this value can be reduced as appropriate|
|det_db_unclip_ratio|float|1.6|Indicates the compactness of the text box; the smaller the value, the closer the text box is to the text|
|use_polygon_score|bool|false|Whether to use a polygon box to calculate the bbox score. false means using a rectangle box, which is faster to compute; a polygon box is more accurate for curved text areas.|
|visualize|bool|true|Whether to visualize the results. When set to true, the prediction result will be saved in the image file `./ocr_vis.png`.|
|**classification related**|
|use_angle_cls|bool|false|Whether to use the direction classifier|
|cls_model_dir|string|-|Address of the direction classifier inference model|
|cls_thresh|float|0.9|Score threshold of the direction classifier|
|**recognition related**|
|rec_model_dir|string|-|Address of the recognition inference model|
|char_list_file|string|../../ppocr/utils/ppocr_keys_v1.txt|dictionary file|
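
As a usage sketch, the parameters above can be combined freely on the command line. For example, running the system demo on CPU with mkldnn enabled might look like this (the model paths follow the directory layout from section 2.1):

```shell
./build/ocr_system \
    --use_gpu=false \
    --cpu_math_library_num_threads=10 \
    --use_mkldnn=true \
    --det_model_dir=inference/ch_ppocr_mobile_v2.0_det_infer \
    --rec_model_dir=inference/ch_ppocr_mobile_v2.0_rec_infer \
    --image_dir=../../doc/imgs/12.jpg
```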

* Multi-language inference is also supported in PaddleOCR. You can refer to the [recognition tutorial](../../doc/doc_en/recognition_en.md) for more supported languages and models in PaddleOCR. Specifically, if you want to run inference with multi-language models, you just need to modify the values of `char_list_file` and `rec_model_dir`, as shown in the sketch below.
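
For example, a hypothetical invocation with a German recognition model could look like the following; the model directory name is a placeholder, while `german_dict.txt` is the German dictionary shipped in `ppocr/utils/dict/`:

```shell
./build/ocr_rec \
    --rec_model_dir=inference/german_mobile_v2.0_rec_infer \
    --char_list_file=../../ppocr/utils/dict/german_dict.txt \
    --image_dir=../../doc/imgs_words/german/
```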


The detection results will be shown on the screen as follows.

<div align="center">
    <img src="./imgs/cpp_infer_pred_12.png" width="600">
</div>


### 2.4 Notes

* The Paddle 2.0.0 inference library is recommended for this tutorial.