Commit dbd27878 authored by LDOUBLEV

update cpp_infer readme

Parent c94428a8
@@ -122,10 +122,10 @@ build/paddle_inference_install_dir/
 * After downloading, uncompress it with the following command.
 ```
-tar -xf fluid_inference.tgz
+tar -xf paddle_inference.tgz
 ```
-A subfolder named `fluid_inference/` will be generated in the current directory.
+A subfolder named `paddle_inference/` will be generated in the current directory.
 ## 2 Getting started
@@ -137,11 +137,11 @@ tar -xf fluid_inference.tgz
 ```
 inference/
 |-- det_db
-|   |--model
-|   |--params
+|   |--inference.pdiparams
+|   |--inference.pdmodel
 |-- rec_rcnn
-|   |--model
-|   |--params
+|   |--inference.pdiparams
+|   |--inference.pdmodel
 ```
@@ -180,7 +180,7 @@ cmake .. \
 make -j
 ```
-`OPENCV_DIR` is the installation path of the compiled OpenCV; `LIB_DIR` is the path of the downloaded Paddle inference library (the `fluid_inference` folder) or the compiled one (the `build/fluid_inference_install_dir` folder); `CUDA_LIB_DIR` is the CUDA library path, `/usr/local/cuda/lib64` in docker; `CUDNN_LIB_DIR` is the cuDNN library path, `/usr/lib/x86_64-linux-gnu/` in docker.
+`OPENCV_DIR` is the installation path of the compiled OpenCV; `LIB_DIR` is the path of the downloaded Paddle inference library (the `paddle_inference` folder) or the compiled one (the `build/paddle_inference_install_dir` folder); `CUDA_LIB_DIR` is the CUDA library path, `/usr/local/cuda/lib64` in docker; `CUDNN_LIB_DIR` is the cuDNN library path, `/usr/lib/x86_64-linux-gnu/` in docker.
 * After compilation, an executable file named `ocr_system` will be generated under the `build` folder.
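For concreteness, the four variables above can be wired into the build roughly as follows. This is only a sketch: all paths are placeholders for your own install locations, and the `-DPADDLE_LIB`/`-DOPENCV_DIR`/`-DCUDA_LIB`/`-DCUDNN_LIB` flag spellings are assumptions, not taken from this diff.

```shell
# All paths below are placeholders (assumptions); substitute your own.
OPENCV_DIR=/usr/local/opencv3
LIB_DIR=$HOME/paddle_inference
CUDA_LIB_DIR=/usr/local/cuda/lib64
CUDNN_LIB_DIR=/usr/lib/x86_64-linux-gnu

# Assemble the cmake invocation from the variables; echo it instead of
# running it so the variable-to-flag mapping is visible.
CMAKE_CMD="cmake .. -DPADDLE_LIB=$LIB_DIR -DOPENCV_DIR=$OPENCV_DIR -DCUDA_LIB=$CUDA_LIB_DIR -DCUDNN_LIB=$CUDNN_LIB_DIR"
echo "$CMAKE_CMD"
```

Running the echoed command in a `build/` directory, followed by `make -j`, matches the flow described above.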
@@ -202,7 +202,6 @@ gpu_id 0 # GPU id, valid when using GPU
 gpu_mem 4000 # GPU memory requested
 cpu_math_library_num_threads 10 # Number of threads for CPU inference; when the machine has enough cores, the larger the value, the faster the inference
 use_mkldnn 1 # Whether to use the MKL-DNN library
-use_zero_copy_run 1 # Whether to use zero_copy_run for inference
 # det config
 max_side_len 960 # If the long side of the input image exceeds 960, scale the image proportionally so that its longest side is 960
......
@@ -130,10 +130,10 @@ Among them, `paddle` is the Paddle library required for C++ prediction later, an
 * After downloading, use the following command to uncompress it.
 ```
-tar -xf fluid_inference.tgz
+tar -xf paddle_inference.tgz
 ```
-Finally you can see the following files in the folder of `fluid_inference/`.
+Finally you can see the following files in the folder of `paddle_inference/`.
 ## 2. Compile and run the demo
@@ -145,11 +145,11 @@ Finally you can see the following files in the folder of `fluid_inference/`.
 ```
 inference/
 |-- det_db
-|   |--model
-|   |--params
+|   |--inference.pdiparams
+|   |--inference.pdmodel
 |-- rec_rcnn
-|   |--model
-|   |--params
+|   |--inference.pdiparams
+|   |--inference.pdmodel
 ```
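Before running the demo, it can help to verify that each model directory really contains both files from the layout above. A minimal sketch (the `check_model_dir` helper name is ours, not part of the repo):

```shell
# check_model_dir: report the first expected inference file missing from
# a model directory; returns non-zero if either file is absent.
check_model_dir() {
  dir=$1
  for f in inference.pdmodel inference.pdiparams; do
    if [ ! -f "$dir/$f" ]; then
      echo "missing: $dir/$f"
      return 1
    fi
  done
}

# Example usage:
#   check_model_dir inference/det_db && check_model_dir inference/rec_rcnn
```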
@@ -188,7 +188,9 @@ cmake .. \
 make -j
 ```
-`OPENCV_DIR` is the opencv installation path; `LIB_DIR` is the downloaded (`fluid_inference` folder) or generated Paddle inference library path (`build/fluid_inference_install_dir` folder); `CUDA_LIB_DIR` is the cuda library file path, which is `/usr/local/cuda/lib64` in docker; `CUDNN_LIB_DIR` is the cudnn library file path, which is `/usr/lib/x86_64-linux-gnu/` in docker.
+`OPENCV_DIR` is the opencv installation path; `LIB_DIR` is the downloaded (`paddle_inference` folder)
+or generated Paddle inference library path (`build/paddle_inference_install_dir` folder);
+`CUDA_LIB_DIR` is the cuda library file path, which is `/usr/local/cuda/lib64` in docker; `CUDNN_LIB_DIR` is the cudnn library file path, which is `/usr/lib/x86_64-linux-gnu/` in docker.
 * After the compilation is completed, an executable file named `ocr_system` will be generated in the `build` folder.
@@ -211,7 +213,6 @@ gpu_id 0 # GPU id when use_gpu is 1
 gpu_mem 4000 # GPU memory requested
 cpu_math_library_num_threads 10 # Number of threads for CPU inference; when the machine has enough cores, the larger the value, the faster the inference
 use_mkldnn 1 # Whether to use the MKL-DNN library
-use_zero_copy_run 1 # Whether to use zero_copy_run for inference
 max_side_len 960 # Limit the maximum image height and width to 960
 det_db_thresh 0.3 # Used to filter the binarized image of DB prediction; values in the 0.1-0.3 range have no obvious effect on the result
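Putting the documented keys together, a runtime configuration file for the demo might look like the fragment below. This is a sketch assembled from the parameters above, with the values shown in this diff; the file path (commonly something like `tools/config.txt`) and any keys not listed here are assumptions to adapt to your setup.

```
use_gpu 0
gpu_id 0
gpu_mem 4000
cpu_math_library_num_threads 10
use_mkldnn 1
max_side_len 960
det_db_thresh 0.3
```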
@@ -244,4 +245,4 @@ The detection results will be shown on the screen, which is as follows.
 ### 2.3 Notes
-* Paddle2.0.0-beta0 inference model library is recommanded for this tuturial.
+* Paddle2.0.0-beta0 inference model library is recommended for this tutorial.