From ecacf4c0682be81ae75e5dbf4960dd33d08500d7 Mon Sep 17 00:00:00 2001
From: MissPenguin
Date: Wed, 11 Aug 2021 13:37:41 +0000
Subject: [PATCH] update readme

---
 deploy/cpp_infer/readme.md    | 35 +++++++++++++++++------------------
 deploy/cpp_infer/readme_en.md | 31 +++++++++++++++----------------
 2 files changed, 32 insertions(+), 34 deletions(-)

diff --git a/deploy/cpp_infer/readme.md b/deploy/cpp_infer/readme.md
index 49dd02cc..9bdd5466 100644
--- a/deploy/cpp_infer/readme.md
+++ b/deploy/cpp_infer/readme.md
@@ -154,18 +154,11 @@ inference/
 
 * The build command is as follows. The paths of the Paddle C++ inference library, opencv and the other dependencies must be replaced with the actual paths on your machine.
-
-```shell
-sh tools/build.sh MODE(['det', 'rec', 'system'])
-```
-Here `MODE` selects the demo to build. It supports 3 values; **choose the one you need**:
 ```shell
-sh tools/build.sh det     # build the detection demo
-sh tools/build.sh rec     # build the recognition demo
-sh tools/build.sh system  # build the end-to-end demo (including the direction classifier)
+sh tools/build.sh
 ```
-In addition, the environment paths in `tools/build.sh` need to be modified. The relevant part is as follows:
+* Specifically, the environment paths in `tools/build.sh` need to be modified. The relevant part is as follows:
 
 ```shell
 OPENCV_DIR=your_opencv_dir
@@ -177,32 +170,38 @@ CUDNN_LIB_DIR=/your_cudnn_lib_dir
 
 Here, `OPENCV_DIR` is the path where opencv was compiled and installed; `LIB_DIR` is the path of the downloaded (`paddle_inference` folder) or self-compiled (`build/paddle_inference_install_dir` folder) Paddle inference library; `CUDA_LIB_DIR` is the cuda library path, `/usr/local/cuda/lib64` in docker; `CUDNN_LIB_DIR` is the cudnn library path, `/usr/lib/x86_64-linux-gnu/` in docker. **Note: all of the above must be absolute paths, not relative paths.**
 
-* After compilation, an executable named `ocr_det`, `ocr_rec` or `ocr_system` is generated in the `build` folder.
+* After compilation, an executable named `ppocr` is generated in the `build` folder.
 
 ### Run the demo
-Run the built executable directly, ```./build/ocr_***```, to see a hint of the available parameters.
-##### 1. Run the detection demo:
+
+Usage:
+```shell
+./build/ppocr <mode> [--param1] [--param2] [...]
+```
+Here, `mode` is a required parameter that selects the function. Its value range is ['det', 'rec', 'system'], which run detection, recognition, and the detection-recognition pipeline (including the direction classifier) respectively. The concrete commands are as follows:
+
+##### 1. Detection only:
 ```shell
-./build/ocr_det \
+./build/ppocr det \
     --det_model_dir=inference/ch_ppocr_mobile_v2.0_det_infer \
     --image_dir=../../doc/imgs/12.jpg
 ```
-##### 2. Run the recognition demo:
+##### 2. Recognition only:
 ```shell
-./build/ocr_rec \
+./build/ppocr rec \
     --rec_model_dir=inference/ch_ppocr_mobile_v2.0_rec_infer \
     --image_dir=../../doc/imgs_words/ch/
 ```
-##### 3. Run the end-to-end demo:
+##### 3. End-to-end pipeline:
 ```shell
 # without the direction classifier
-./build/ocr_system \
+./build/ppocr system \
     --det_model_dir=inference/ch_ppocr_mobile_v2.0_det_infer \
     --rec_model_dir=inference/ch_ppocr_mobile_v2.0_rec_infer \
     --image_dir=../../doc/imgs/12.jpg
 # with the direction classifier
-./build/ocr_system \
+./build/ppocr system \
     --det_model_dir=inference/ch_ppocr_mobile_v2.0_det_infer \
     --use_angle_cls=true \
     --cls_model_dir=inference/ch_ppocr_mobile_v2.0_cls_infer \
diff --git a/deploy/cpp_infer/readme_en.md b/deploy/cpp_infer/readme_en.md
index cd98ad1c..039aecf1 100644
--- a/deploy/cpp_infer/readme_en.md
+++ b/deploy/cpp_infer/readme_en.md
@@ -159,13 +159,7 @@ inference/
 
 * The compilation commands are as follows. The addresses of the Paddle C++ inference library, opencv and the other dependencies need to be replaced with the actual addresses on your own machine.
 
 ```shell
-sh tools/build.sh MODE(['det', 'rec', 'system'])
-```
-Here, `MODE` should be one of ['det', 'rec', 'system']. Choose one as needed. The explanation is as follows:
-```shell
-sh tools/build.sh det     # build demo for detection only
-sh tools/build.sh rec     # build demo for recognition only
-sh tools/build.sh system  # build demo for the whole system (including the text direction classifier)
+sh tools/build.sh
 ```
 Specifically, you should modify the paths in `tools/build.sh`. The related content is as follows.
@@ -182,32 +176,37 @@ or the generated Paddle inference library path (`build/paddle_inference_install_
 `CUDA_LIB_DIR` is the cuda library file path; in docker it is `/usr/local/cuda/lib64`;
 `CUDNN_LIB_DIR` is the cudnn library file path; in docker it is `/usr/lib/x86_64-linux-gnu/`.
 
-* After the compilation is completed, an executable file named `ocr_det`, `ocr_rec` or `ocr_system` will be generated in the `build` folder.
+* After the compilation is completed, an executable file named `ppocr` will be generated in the `build` folder.
 
 ### Run the demo
-* Execute the built executable file by ```./build/ocr_***```. You'll get some command line parameter information.
-##### 1. run ocr_det demo:
+Execute the built executable file:
+```shell
+./build/ppocr <mode> [--param1] [--param2] [...]
+```
+Here, `mode` is a required parameter, and its value range is ['det', 'rec', 'system'], which use detection only, recognition only and the end-to-end system respectively. Specifically:
+
+##### 1. run det demo:
 ```shell
-./build/ocr_det \
+./build/ppocr det \
     --det_model_dir=inference/ch_ppocr_mobile_v2.0_det_infer \
     --image_dir=../../doc/imgs/12.jpg
 ```
-##### 2. run ocr_rec demo:
+##### 2. run rec demo:
 ```shell
-./build/ocr_rec \
+./build/ppocr rec \
     --rec_model_dir=inference/ch_ppocr_mobile_v2.0_rec_infer \
     --image_dir=../../doc/imgs_words/ch/
 ```
-##### 3. run ocr_system demo:
+##### 3. run system demo:
 ```shell
 # without the text direction classifier
-./build/ocr_system \
+./build/ppocr system \
     --det_model_dir=inference/ch_ppocr_mobile_v2.0_det_infer \
     --rec_model_dir=inference/ch_ppocr_mobile_v2.0_rec_infer \
     --image_dir=../../doc/imgs/12.jpg
 # with the text direction classifier
-./build/ocr_system \
+./build/ppocr system \
     --det_model_dir=inference/ch_ppocr_mobile_v2.0_det_infer \
     --use_angle_cls=true \
     --cls_model_dir=inference/ch_ppocr_mobile_v2.0_cls_infer \
--
GitLab
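For reference, the build-and-run flow documented by this patch can be exercised end to end with the commands below. This is only a sketch drawn from the two updated readmes: it assumes you work from `deploy/cpp_infer`, that the paths in `tools/build.sh` have already been set to absolute paths valid on your machine, and that the inference models have been downloaded into `inference/` as described earlier in the readmes.

```shell
# Sketch only: the tools/build.sh environment paths and the model directories
# below are assumed to exist on your machine, as described in the readmes.
cd deploy/cpp_infer

# Build the single unified executable (replaces ocr_det / ocr_rec / ocr_system).
sh tools/build.sh

# Detection only.
./build/ppocr det \
    --det_model_dir=inference/ch_ppocr_mobile_v2.0_det_infer \
    --image_dir=../../doc/imgs/12.jpg

# Full detection + recognition pipeline (without the direction classifier).
./build/ppocr system \
    --det_model_dir=inference/ch_ppocr_mobile_v2.0_det_infer \
    --rec_model_dir=inference/ch_ppocr_mobile_v2.0_rec_infer \
    --image_dir=../../doc/imgs/12.jpg
```

The direction-classifier variant adds `--use_angle_cls=true` and `--cls_model_dir=inference/ch_ppocr_mobile_v2.0_cls_infer`, as shown in the hunks above.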