# Jetson Deployment for PaddleOCR

This section introduces how to deploy PaddleOCR on NVIDIA Jetson NX, TX2, Nano, AGX and other similar hardware.


## 1. Prepare Environment

You need a Jetson development board. If you want to use TensorRT, prepare the TensorRT environment in advance; TensorRT 7.1.3 is recommended.

1. Install PaddlePaddle on Jetson

Download the PaddlePaddle wheel from this [link](https://www.paddlepaddle.org.cn/inference/user_guides/download_lib.html#python).
Please select the installation package that matches your JetPack version, CUDA version, and TensorRT version.

Install paddlepaddle:
```shell
pip3 install -U paddlepaddle_gpu-*-cp36-cp36m-linux_aarch64.whl
```
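
After installation, you can optionally confirm that PaddlePaddle imports correctly and was built with CUDA support. This is a minimal check using PaddlePaddle's built-in `run_check` utility:
```shell
# Runs PaddlePaddle's self-check (compiles and runs a small program on the available device)
python3 -c "import paddle; paddle.utils.run_check()"
# Should print True on a GPU build
python3 -c "import paddle; print(paddle.device.is_compiled_with_cuda())"
```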


2. Download PaddleOCR code and install dependencies

Clone the PaddleOCR code:
```
git clone https://github.com/PaddlePaddle/PaddleOCR
```

and install dependencies:
```
cd PaddleOCR
pip3 install -r requirements.txt
```

*Note: The CPU on Jetson devices is relatively weak, so installing the dependencies can be slow; please be patient.*

## 2. Perform Prediction

Obtain the PP-OCR models from the model list in this [document](../../doc/doc_en/ppocr_introduction_en.md). The following takes the PP-OCRv3 model as an example to introduce the use of PP-OCR models on Jetson:

Download and extract the PP-OCRv3 models:
```
wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar
wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_infer.tar
tar xf ch_PP-OCRv3_det_infer.tar
tar xf ch_PP-OCRv3_rec_infer.tar
```
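
The inference commands below reference the models under `./inference/` inside the PaddleOCR repository. As a minimal sketch (assuming the models were downloaded next to the `PaddleOCR` directory), you can arrange them like this:
```
# Place the extracted model directories where the commands below expect them
mkdir -p PaddleOCR/inference
mv ch_PP-OCRv3_det_infer ch_PP-OCRv3_rec_infer PaddleOCR/inference/
```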

Run text detection inference:
```
cd PaddleOCR
python3 tools/infer/predict_det.py --det_model_dir=./inference/ch_PP-OCRv3_det_infer/ --image_dir=./doc/imgs/french_0.jpg --use_gpu=True
```

After executing the command, the predicted information will be printed out in the terminal, and the visualization results will be saved in the `./inference_results/` directory.
![](./images/det_res_french_0.jpg)


Run text recognition inference:
```
python3 tools/infer/predict_rec.py --rec_model_dir=./inference/ch_PP-OCRv3_rec_infer/ --image_dir=./doc/imgs_words/en/word_2.png --use_gpu=True --rec_image_shape="3,48,320"
```

After executing the command, the predicted information will be printed on the terminal, and the output is as follows:
```
[2022/04/28 15:41:45] root INFO: Predicts of ./doc/imgs_words/en/word_2.png:('yourself', 0.98084533)
```

Run end-to-end text detection and recognition inference:

```
python3 tools/infer/predict_system.py --det_model_dir=./inference/ch_PP-OCRv3_det_infer/ --rec_model_dir=./inference/ch_PP-OCRv3_rec_infer/ --image_dir=./doc/imgs/00057937.jpg --use_gpu=True --rec_image_shape="3,48,320"
```

After executing the command, the predicted information will be printed out in the terminal, and the visualization results will be saved in the `./inference_results/` directory.
![](./images/00057937.jpg)

To enable TensorRT prediction, simply add `--use_tensorrt=True` to the above command:
```
python3 tools/infer/predict_system.py --det_model_dir=./inference/ch_PP-OCRv3_det_infer/ --rec_model_dir=./inference/ch_PP-OCRv3_rec_infer/ --image_dir=./doc/imgs/ --rec_image_shape="3,48,320" --use_gpu=True --use_tensorrt=True
```
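
Note that the first TensorRT run is usually slower because the engine has to be built; subsequent runs are faster. If your PaddleOCR version also exposes a `--precision` option (an assumption; check the available flags in `tools/infer/utility.py`), you can additionally try FP16 inference, which is often faster on Jetson GPUs:
```
# Same command as above, with half precision requested for the TensorRT engine
python3 tools/infer/predict_system.py --det_model_dir=./inference/ch_PP-OCRv3_det_infer/ --rec_model_dir=./inference/ch_PP-OCRv3_rec_infer/ --image_dir=./doc/imgs/ --rec_image_shape="3,48,320" --use_gpu=True --use_tensorrt=True --precision=fp16
```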

For more PP-OCR model inference usage, please refer to the [document](../../doc/doc_en/inference_ppocr_en.md).