Commit 693e4db5 authored by LDOUBLEV

fix comment

Parent 00be3062
# Deploying PaddleOCR Models on Jetson

This section introduces the deployment of PaddleOCR on Jetson NX, TX2, Nano, AGX and other series hardware.
...@@ -8,7 +8,7 @@
You need to prepare a Jetson development board. If you want TensorRT inference, prepare the TensorRT environment as well; TensorRT 7.1.3 is recommended.
1. Install PaddlePaddle on Jetson
Download PaddlePaddle from this [link](https://www.paddlepaddle.org.cn/inference/user_guides/download_lib.html#python).
Please select the installation package matching your JetPack version, CUDA version, and TensorRT version.
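The right wheel is identified by the CPython tag and platform embedded in its file name (e.g. `cp36-cp36m-linux_aarch64`). Below is a small hypothetical helper, not part of any PaddlePaddle tooling, that computes the tag to look for; the `m` ABI suffix rule is standard CPython behavior:

```python
def cpython_wheel_tag(py_version, platform="linux_aarch64"):
    """Return the wheel tag (e.g. 'cp36-cp36m-linux_aarch64') matching a
    given Python version on Jetson. The trailing 'm' ABI flag only exists
    for CPython versions before 3.8."""
    major, minor = (int(x) for x in py_version.split("."))
    tag = f"cp{major}{minor}"
    abi = tag + ("m" if (major, minor) < (3, 8) else "")
    return f"{tag}-{abi}-{platform}"

print(cpython_wheel_tag("3.6"))  # cp36-cp36m-linux_aarch64
```

With the tag in hand, pick the matching `paddlepaddle_gpu-*-<tag>.whl` from the download page.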
...@@ -26,7 +26,7 @@ pip3 install -U paddlepaddle_gpu-*-cp36-cp36m-linux_aarch64.whl
git clone https://github.com/PaddlePaddle/PaddleOCR
```
Then, install the dependencies:
```
cd PaddleOCR
pip3 install -r requirements.txt
...@@ -50,23 +50,35 @@ tar xf ch_PP-OCRv3_rec_infer.tar
Run text detection inference:
```
cd PaddleOCR
python3 tools/infer/predict_det.py --det_model_dir=./inference/ch_PP-OCRv2_det_infer/ --image_dir=./doc/imgs/french_0.jpg --use_gpu=True
```
After running the command, prediction information is printed in the terminal and the visualized results are saved under `./inference_results/`.
![](./images/det_res_french_0.jpg)
Run text recognition inference:
```
python3 tools/infer/predict_rec.py --rec_model_dir=./inference/ch_PP-OCRv2_rec_infer/ --image_dir=./doc/imgs_words/en/word_2.png --use_gpu=True --rec_image_shape="3,48,320"
```
After running the command, prediction information is printed in the terminal; the output looks like:
```
[2022/04/28 15:41:45] root INFO: Predicts of ./doc/imgs_words/en/word_2.png:('yourself', 0.98084533)
```
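When scripting over many images, log lines in the format shown above can be parsed into structured results. A minimal sketch, assuming the log format stays as printed (the helper name is illustrative):

```python
import re

# Matches lines like:
# [2022/04/28 15:41:45] root INFO: Predicts of ./doc/imgs_words/en/word_2.png:('yourself', 0.98084533)
PREDICT_RE = re.compile(
    r"Predicts of (?P<path>\S+):\('(?P<text>.*)', (?P<score>[0-9.]+)\)"
)

def parse_predict_line(line):
    """Extract (image_path, text, score) from a recognition log line,
    or return None if the line is not a prediction."""
    m = PREDICT_RE.search(line)
    if m is None:
        return None
    return m.group("path"), m.group("text"), float(m.group("score"))

line = "[2022/04/28 15:41:45] root INFO: Predicts of ./doc/imgs_words/en/word_2.png:('yourself', 0.98084533)"
print(parse_predict_line(line))
```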
Run the cascaded text detection + text recognition inference:
```
python3 tools/infer/predict_system.py --det_model_dir=./inference/ch_PP-OCRv2_det_infer/ --rec_model_dir=./inference/ch_PP-OCRv2_rec_infer/ --image_dir=./doc/imgs/ --use_gpu=True --rec_image_shape="3,48,320"
```
After running the command, prediction information is printed in the terminal and the visualized results are saved under `./inference_results/`.
![](./images/00057937.jpg)
To enable TensorRT inference, just add `--use_tensorrt=True` to the above command:
```
python3 tools/infer/predict_system.py --det_model_dir=./inference/ch_PP-OCRv2_det_infer/ --rec_model_dir=./inference/ch_PP-OCRv2_rec_infer/ --image_dir=./doc/imgs/00057937.jpg --use_gpu=True --use_tensorrt=True --rec_image_shape="3,48,320"
```
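The `--rec_image_shape="3,48,320"` flag tells the recognizer its input is a 3-channel, 48-pixel-high, 320-pixel-wide tensor. The sketch below shows the typical resize-and-pad preprocessing such a shape implies; the interpolation method and normalization constants here are assumptions and may differ from PaddleOCR's actual implementation:

```python
import math
import numpy as np

def resize_norm_img(img, rec_image_shape=(3, 48, 320)):
    """Resize an HWC uint8 image to the recognizer's CHW input: scale to
    the target height keeping aspect ratio, normalize to [-1, 1], and
    right-pad with zeros up to the target width."""
    c, H, W = rec_image_shape
    h, w = img.shape[:2]
    resized_w = min(W, int(math.ceil(H * (w / float(h)))))
    # Nearest-neighbor resize via index sampling (avoids a cv2 dependency).
    ys = (np.arange(H) * h / H).astype(int)
    xs = (np.arange(resized_w) * w / resized_w).astype(int)
    resized = img[ys][:, xs]
    norm = resized.astype("float32").transpose(2, 0, 1) / 255.0
    norm = (norm - 0.5) / 0.5  # scale pixel values to [-1, 1]
    padded = np.zeros((c, H, W), dtype="float32")
    padded[:, :, :resized_w] = norm
    return padded

out = resize_norm_img(np.zeros((32, 100, 3), dtype="uint8"))
print(out.shape)  # (3, 48, 320)
```

A 32x100 crop is scaled to 48x150 and padded to width 320, matching the declared input shape.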
For more ppocr model inference usage, please refer to the [documentation](../../doc/doc_ch/inference_ppocr.md).
# Jetson Deployment for PaddleOCR

This section introduces the deployment of PaddleOCR on Jetson NX, TX2, Nano, AGX and other series hardware.
## 1. Prepare Environment
You need to prepare a Jetson development board. If you need TensorRT, prepare the TensorRT environment as well; TensorRT 7.1.3 is recommended.
1. Install PaddlePaddle on Jetson
Download PaddlePaddle from this [link](https://www.paddlepaddle.org.cn/inference/user_guides/download_lib.html#python).
Please select the appropriate installation package for your JetPack version, CUDA version, and TensorRT version.
...@@ -49,23 +49,35 @@ tar xf ch_PP-OCRv3_rec_infer.tar
Run text detection inference:
```
cd PaddleOCR
python3 tools/infer/predict_det.py --det_model_dir=./inference/ch_PP-OCRv2_det_infer/ --image_dir=./doc/imgs/french_0.jpg --use_gpu=True
```
After executing the command, the predicted information will be printed out in the terminal, and the visualization results will be saved in the `./inference_results/` directory.
![](./images/det_res_french_0.jpg)
Run text recognition inference:
```
python3 tools/infer/predict_rec.py --rec_model_dir=./inference/ch_PP-OCRv2_rec_infer/ --image_dir=./doc/imgs_words/en/word_2.png --use_gpu=True --rec_image_shape="3,48,320"
```
After executing the command, the predicted information will be printed on the terminal, and the output is as follows:
```
[2022/04/28 15:41:45] root INFO: Predicts of ./doc/imgs_words/en/word_2.png:('yourself', 0.98084533)
```
Run the cascaded text detection and text recognition inference:
```
python3 tools/infer/predict_system.py --det_model_dir=./inference/ch_PP-OCRv2_det_infer/ --rec_model_dir=./inference/ch_PP-OCRv2_rec_infer/ --image_dir=./doc/imgs/00057937.jpg --use_gpu=True --rec_image_shape="3,48,320"
```
After executing the command, the predicted information will be printed out in the terminal, and the visualization results will be saved in the `./inference_results/` directory.
![](./images/00057937.jpg)
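Before drawing results like the one above, the system pipeline discards text whose recognition confidence is too low (PaddleOCR exposes this as a `--drop_score` threshold; the 0.5 default used here should be treated as an assumption). A minimal sketch of that filtering step:

```python
def filter_rec_results(boxes, rec_results, drop_score=0.5):
    """Keep only the (box, (text, score)) pairs whose recognition score
    reaches drop_score, mirroring the low-confidence filter applied
    before visualization."""
    kept_boxes, kept_texts = [], []
    for box, (text, score) in zip(boxes, rec_results):
        if score >= drop_score:
            kept_boxes.append(box)
            kept_texts.append((text, score))
    return kept_boxes, kept_texts

boxes = [[(0, 0), (10, 0), (10, 10), (0, 10)],
         [(0, 20), (10, 20), (10, 30), (0, 30)]]
recs = [("yourself", 0.98), ("???", 0.12)]
kept_boxes, kept_texts = filter_rec_results(boxes, recs)
print(kept_texts)  # [('yourself', 0.98)]
```

Raising the threshold trades recall for cleaner output; lowering it keeps more uncertain text.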
To enable TensorRT inference, just set `--use_tensorrt=True` in the above command:
```
python3 tools/infer/predict_system.py --det_model_dir=./inference/ch_PP-OCRv2_det_infer/ --rec_model_dir=./inference/ch_PP-OCRv2_rec_infer/ --image_dir=./doc/imgs/ --rec_image_shape="3,48,320" --use_gpu=True --use_tensorrt=True
```
For more ppocr model inference usage, please refer to the [documentation](../../doc/doc_en/inference_ppocr_en.md).