From 693e4db5012ca2361f6a284e29eb30cb8345461f Mon Sep 17 00:00:00 2001
From: LDOUBLEV
Date: Thu, 28 Apr 2022 15:52:27 +0800
Subject: [PATCH] fix comment

---
 deploy/jeston/readme.md    | 26 +++++++++++++++++++-------
 deploy/jeston/readme_en.md | 26 +++++++++++++++++++-------
 2 files changed, 38 insertions(+), 14 deletions(-)

diff --git a/deploy/jeston/readme.md b/deploy/jeston/readme.md
index e1a2d04e..042932b6 100644
--- a/deploy/jeston/readme.md
+++ b/deploy/jeston/readme.md
@@ -1,5 +1,5 @@
-# Jeston
+# Jetson Deployment for PaddleOCR

This section introduces the deployment of PaddleOCR on Jetson NX, TX2, nano, AGX and other series of hardware.

@@ -8,7 +8,7 @@

You need to prepare a Jetson development board. If you need TensorRT inference, prepare the TensorRT environment first; TensorRT 7.1.3 is recommended.

-1. Install paddlepaddle on jeston
+1. Install PaddlePaddle on Jetson

Download PaddlePaddle from this [link](https://www.paddlepaddle.org.cn/inference/user_guides/download_lib.html#python) and select the installation package that matches your Jetpack version, CUDA version, and TensorRT version.

@@ -26,7 +26,7 @@ pip3 install -U paddlepaddle_gpu-*-cp36-cp36m-linux_aarch64.whl
git clone https://github.com/PaddlePaddle/PaddleOCR
```

-Secondly, install the dependencies:
+Then, install the dependencies:
```
cd PaddleOCR
pip3 install -r requirements.txt
```

@@ -50,23 +50,35 @@ tar xf ch_PP-OCRv3_rec_infer.tar
Run text detection inference:
```
cd PaddleOCR
-python3 tools/infer/predict_det.py --det_model_dir=./inference/ch_PP-OCRv2_det_infer/ --image_dir=./doc/imgs/ --use_gpu=True
+python3 tools/infer/predict_det.py --det_model_dir=./inference/ch_PP-OCRv2_det_infer/ --image_dir=./doc/imgs/french_0.jpg --use_gpu=True
```

+After the command is executed, the prediction results are printed in the terminal, and the visualization results are saved in the `./inference_results/` directory.
+![](./images/det_res_french_0.jpg)
+
+
Run text recognition inference:
```
-python3 tools/infer/predict_det.py --rec_model_dir=./inference/ch_PP-OCRv2_rec_infer/ --image_dir=./doc/imgs_words/ch/ --use_gpu=True
+python3 tools/infer/predict_rec.py --rec_model_dir=./inference/ch_PP-OCRv3_rec_infer/ --image_dir=./doc/imgs_words/en/word_2.png --use_gpu=True --rec_image_shape="3,48,320"
+```
+
+After the command is executed, the prediction results are printed in the terminal; the output is as follows:
+```
+[2022/04/28 15:41:45] root INFO: Predicts of ./doc/imgs_words/en/word_2.png:('yourself', 0.98084533)
```

Run the text detection + text recognition pipeline inference:
```
-python3 tools/infer/predict_system.py --det_model_dir=./inference/ch_PP-OCRv2_det_infer/ --rec_model_dir=./inference/ch_PP-OCRv2_rec_infer/ --image_dir=./doc/imgs/ --use_gpu=True
+python3 tools/infer/predict_system.py --det_model_dir=./inference/ch_PP-OCRv2_det_infer/ --rec_model_dir=./inference/ch_PP-OCRv3_rec_infer/ --image_dir=./doc/imgs/ --use_gpu=True --rec_image_shape="3,48,320"
```

+After the command is executed, the prediction results are printed in the terminal, and the visualization results are saved in the `./inference_results/` directory.
+![](./images/00057937.jpg)
+
To enable TensorRT inference, you only need to add `--use_tensorrt=True` to the command above:
```
-python3 tools/infer/predict_system.py --det_model_dir=./inference/ch_PP-OCRv2_det_infer/ --rec_model_dir=./inference/ch_PP-OCRv2_rec_infer/ --image_dir=./doc/imgs/ --use_gpu=True --use_tensorrt=True
+python3 tools/infer/predict_system.py --det_model_dir=./inference/ch_PP-OCRv2_det_infer/ --rec_model_dir=./inference/ch_PP-OCRv3_rec_infer/ --image_dir=./doc/imgs/00057937.jpg --use_gpu=True --use_tensorrt=True --rec_image_shape="3,48,320"
```

For more PP-OCR model inference examples, please refer to the [document](../../doc/doc_ch/inference_ppocr.md)

diff --git a/deploy/jeston/readme_en.md b/deploy/jeston/readme_en.md
index dd32e890..659ca3d3 100644
--- a/deploy/jeston/readme_en.md
+++ b/deploy/jeston/readme_en.md
@@ -1,14 +1,14 @@
-# Jeston
+# Jetson Deployment for PaddleOCR

This section introduces the deployment of PaddleOCR on Jetson NX, TX2, nano, AGX and other series of hardware.

-## 1. 环境准备
+## 1. Prepare Environment

You need to prepare a Jetson development board. If you need TensorRT, you need to prepare the TensorRT environment first; TensorRT version 7.1.3 is recommended.

-1. jeston install paddlepaddle
+1. Install PaddlePaddle on Jetson

Download PaddlePaddle from this [link](https://www.paddlepaddle.org.cn/inference/user_guides/download_lib.html#python) and please select the appropriate installation package for your Jetpack version, CUDA version, and TensorRT version.
@@ -49,23 +49,35 @@ tar xf ch_PP-OCRv3_rec_infer.tar
Run text detection inference:
```
cd PaddleOCR
-python3 tools/infer/predict_det.py --det_model_dir=./inference/ch_PP-OCRv2_det_infer/ --image_dir=./doc/imgs/ --use_gpu=True
+python3 tools/infer/predict_det.py --det_model_dir=./inference/ch_PP-OCRv2_det_infer/ --image_dir=./doc/imgs/french_0.jpg --use_gpu=True
```

+After executing the command, the predicted information will be printed out in the terminal, and the visualization results will be saved in the `./inference_results/` directory.
+![](./images/det_res_french_0.jpg)
+
+
Run text recognition inference:
```
-python3 tools/infer/predict_det.py --rec_model_dir=./inference/ch_PP-OCRv2_rec_infer/ --image_dir=./doc/imgs_words/ch/ --use_gpu=True
+python3 tools/infer/predict_rec.py --rec_model_dir=./inference/ch_PP-OCRv3_rec_infer/ --image_dir=./doc/imgs_words/en/word_2.png --use_gpu=True --rec_image_shape="3,48,320"
+```
+
+After executing the command, the predicted information will be printed on the terminal, and the output is as follows:
+```
+[2022/04/28 15:41:45] root INFO: Predicts of ./doc/imgs_words/en/word_2.png:('yourself', 0.98084533)
```

Run the text detection + text recognition pipeline inference:
```
-python3 tools/infer/predict_system.py --det_model_dir=./inference/ch_PP-OCRv2_det_infer/ --rec_model_dir=./inference/ch_PP-OCRv2_rec_infer/ --image_dir=./doc/imgs/ --use_gpu=True
+python3 tools/infer/predict_system.py --det_model_dir=./inference/ch_PP-OCRv2_det_infer/ --rec_model_dir=./inference/ch_PP-OCRv3_rec_infer/ --image_dir=./doc/imgs/00057937.jpg --use_gpu=True --rec_image_shape="3,48,320"
```

+After executing the command, the predicted information will be printed out in the terminal, and the visualization results will be saved in the `./inference_results/` directory.
+![](./images/00057937.jpg)
+
To enable TensorRT prediction, you only need to set `--use_tensorrt=True` on the basis of the above command:
```
-python3 tools/infer/predict_system.py --det_model_dir=./inference/ch_PP-OCRv2_det_infer/ --rec_model_dir=./inference/ch_PP-OCRv2_rec_infer/ --image_dir=./doc/imgs/ --use_gpu=True --use_tensorrt=True
+python3 tools/infer/predict_system.py --det_model_dir=./inference/ch_PP-OCRv2_det_infer/ --rec_model_dir=./inference/ch_PP-OCRv3_rec_infer/ --image_dir=./doc/imgs/ --rec_image_shape="3,48,320" --use_gpu=True --use_tensorrt=True
```

For more PP-OCR model predictions, please refer to the [document](../../doc/doc_en/inference_ppocr_en.md)
--
GitLab
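If you script on top of the commands this patch documents, the recognition log line shown in the readme (`Predicts of ...:('yourself', 0.98084533)`) can be parsed back into the image path, recognized text, and confidence. A minimal sketch, assuming that log format; the `parse_rec_log` helper and its regex are illustrative, not part of PaddleOCR:

```python
import re

# Matches lines like:
# [2022/04/28 15:41:45] root INFO: Predicts of ./doc/imgs_words/en/word_2.png:('yourself', 0.98084533)
_REC_LINE = re.compile(r"Predicts of (\S+):\('(.*)', ([0-9.]+)\)")

def parse_rec_log(line):
    """Return (image_path, text, confidence) from one log line, or None if it doesn't match."""
    m = _REC_LINE.search(line)
    if m is None:
        return None
    return m.group(1), m.group(2), float(m.group(3))

if __name__ == "__main__":
    sample = "[2022/04/28 15:41:45] root INFO: Predicts of ./doc/imgs_words/en/word_2.png:('yourself', 0.98084533)"
    print(parse_rec_log(sample))
```

This lets a wrapper script, for example, drop predictions below a confidence threshold before further processing.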