Specifically, you should modify the paths in `tools/build.sh`. The related content is as follows.
```shell
OPENCV_DIR=your_opencv_dir
LIB_DIR=your_paddle_inference_dir
CUDA_LIB_DIR=your_cuda_lib_dir
CUDNN_LIB_DIR=your_cudnn_lib_dir

BUILD_DIR=build
rm -rf ${BUILD_DIR}
mkdir ${BUILD_DIR}
cd ${BUILD_DIR}
cmake .. \
    -DPADDLE_LIB=${LIB_DIR} \
    -DWITH_MKL=ON \
    -DDEMO_NAME=ppocr \
    -DWITH_GPU=OFF \
    -DWITH_STATIC_LIB=OFF \
    -DUSE_TENSORRT=OFF \
    -DOPENCV_DIR=${OPENCV_DIR} \
    -DCUDNN_LIB=${CUDNN_LIB_DIR} \
    -DCUDA_LIB=${CUDA_LIB_DIR}

make -j
```
`OPENCV_DIR` is the OpenCV installation path; `LIB_DIR` is the downloaded Paddle inference library path (`paddle_inference` folder) or the generated Paddle inference library path (`build/paddle_inference_install_dir` folder); `CUDA_LIB_DIR` is the CUDA library file path, in docker it is `/usr/local/cuda/lib64`; `CUDNN_LIB_DIR` is the cuDNN library file path, in docker it is `/usr/lib/x86_64-linux-gnu/`.
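For example, after editing, the top of `tools/build.sh` in the docker environment described above might look like the following; the OpenCV and Paddle inference paths here are placeholders for your own directories, not required locations.

```shell
# Hypothetical values for the paths at the top of tools/build.sh;
# adjust the first two lines to your own setup.
OPENCV_DIR=/work/opencv3                  # placeholder OpenCV install path
LIB_DIR=/work/paddle_inference            # placeholder Paddle inference library path
CUDA_LIB_DIR=/usr/local/cuda/lib64        # CUDA library path inside docker
CUDNN_LIB_DIR=/usr/lib/x86_64-linux-gnu/  # cuDNN library path inside docker
```

After saving the changes, run `sh tools/build.sh` from the directory that contains the `tools` folder to build the demo.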
* After the compilation is completed, an executable file named `ppocr` will be generated in the `build` folder.
### Run the demo
Execute the built executable file to run OCR detection and recognition on an image:
```shell
./build/ppocr <mode> [--param1] [--param2] [...]
```
Here, `mode` is a required parameter, and its value range is ['det', 'rec', 'system'], representing detection only, recognition only, and the end-to-end system, respectively. Specifically,
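the three modes might be invoked roughly as sketched below. This is a sketch rather than the exhaustive usage: the model directories reuse the example paths from this document, the image path is a placeholder, and the `--image_dir` flag is an assumption that is not listed in the parameter tables below.

```shell
# A hypothetical sketch of the three modes; the model directories, the image
# path, and the --image_dir flag are assumptions to be adapted to your setup.

# detection only
./build/ppocr det --det_model_dir=./inference/det_db --image_dir=your_img_path

# recognition only
./build/ppocr rec --rec_model_dir=./inference/rec_crnn --image_dir=your_img_path

# end-to-end system: detection + (optional) classification + recognition
./build/ppocr system \
    --det_model_dir=./inference/det_db \
    --cls_model_dir=./inference/cls \
    --rec_model_dir=./inference/rec_crnn \
    --image_dir=your_img_path
```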
* If you want to use the orientation classifier to correct the detected boxes, you can set the `use_angle_cls` parameter to true to enable the function.

More parameters are as follows.

- common parameters

|parameter|data type|default|meaning|
| --- | --- | --- | --- |
|use_gpu|bool|false|Whether to use GPU|
|gpu_id|int|0|GPU id when use_gpu is true|
|gpu_mem|int|4000|GPU memory requested|
|cpu_math_library_num_threads|int|10|Number of threads when using CPU inference. When the machine has enough cores, the larger the value, the faster the inference speed|
|use_mkldnn|bool|true|Whether to use the MKL-DNN library|

- detection related parameters

|parameter|data type|default|meaning|
| --- | --- | --- | --- |
|det_model_dir|string|-|Path of the detection inference model|
|max_side_len|int|960|Limit the maximum image height and width to 960|
|det_db_thresh|float|0.3|Used to filter the binarized image of DB prediction; setting it between 0 and 0.3 has no obvious effect on the result|
|det_db_box_thresh|float|0.5|DB post-processing filter box threshold; if boxes are missing in the detection result, it can be reduced as appropriate|
|det_db_unclip_ratio|float|1.6|Indicates the compactness of the text box; the smaller the value, the closer the text box is to the text|
|use_polygon_score|bool|false|Whether to use a polygon box to calculate the bbox score; false means a rectangle box is used. A rectangle box is faster to compute, while a polygon box is more accurate for curved text areas.|
|visualize|bool|true|Whether to visualize the results. When set to true, the prediction result is saved to the image file `./ocr_vis.png`.|

- classifier related parameters

|parameter|data type|default|meaning|
| --- | --- | --- | --- |
|use_angle_cls|bool|false|Whether to use the direction classifier|
|cls_model_dir|string|-|Path of the direction classifier inference model|
|cls_thresh|float|0.9|Score threshold of the direction classifier|

- recognition related parameters

|parameter|data type|default|meaning|
| --- | --- | --- | --- |
|rec_model_dir|string|-|Path of the recognition inference model|

* Multi-language inference is also supported in PaddleOCR. You can refer to the [recognition tutorial](../../doc/doc_en/recognition_en.md) for more supported languages and models in PaddleOCR. Specifically, if you want to run inference with multi-language models, you just need to modify the values of `char_list_file` and `rec_model_dir`.
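For instance, a multi-language recognition run might look roughly like the following; the model directory and dictionary file names are placeholders rather than actual release names, and the `--image_dir` flag is the same assumption as in the sketch above.

```shell
# Hypothetical multi-language recognition run; the paths below are placeholders.
./build/ppocr rec \
    --rec_model_dir=your_multilingual_rec_model_dir \
    --char_list_file=your_multilingual_dict.txt \
    --image_dir=your_img_path
```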
The detection results will be shown on the screen, which is as follows.