Download the paddle whl package corresponding to your environment; it is recommended to install version 2.0.1.
2. The steps to prepare the PaddleServing operating environment are as follows:
...
...
```
3. Install the client to send requests to the service
In [download link](https://github.com/PaddlePaddle/Serving/blob/develop/doc/LATEST_PACKAGES.md) find the client installation package corresponding to the python version; the python3.7 version is recommended here. For example:
```
pip3 install paddle-serving-client==0.5.0
```
The same client package is used for both CPU and GPU environments.
**note:** If you want to install the latest version of PaddleServing, refer to [link](https://github.com/PaddlePaddle/Serving/blob/develop/doc/LATEST_PACKAGES.md).
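As an optional sanity check (a minimal sketch; it only verifies that the installed client package can be imported by the same python3 used above), you can run:
```
# Hypothetical quick check that paddle_serving_client imports without errors
python3 -c "import paddle_serving_client; print('paddle-serving-client is installed')"
```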
...
...
When using PaddleServing for service deployment, you need to convert the saved inference model into an easy-to-deploy model.
Firstly, download the [inference model](https://github.com/PaddlePaddle/PaddleOCR#pp-ocr-20-series-model-listupdate-on-dec-15) of PPOCR
```
# Download and unzip the OCR text detection model
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_infer.tar && tar xf ch_ppocr_mobile_v2.0_det_infer.tar
# Download and unzip the OCR text recognition model
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_infer.tar && tar xf ch_ppocr_mobile_v2.0_rec_infer.tar
```
Then, you can use the installed paddle_serving_client tool to convert the downloaded inference model into a model format that is easy to deploy with Paddle Serving.
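For example, the detection model can be converted with a command along the lines of the sketch below (the input directory matches the model downloaded above, the output directory names match the folders described next, and the `--model_filename`/`--params_filename` values assume the default file names inside the downloaded folder):
```
# Convert the detection inference model into Serving server and client model folders
python3 -m paddle_serving_client.convert --dirname ./ch_ppocr_mobile_v2.0_det_infer/ \
                                         --model_filename inference.pdmodel \
                                         --params_filename inference.pdiparams \
                                         --serving_server ./ppocr_det_mobile_2.0_serving/ \
                                         --serving_client ./ppocr_det_mobile_2.0_client/
```
The recognition model can be converted in the same way by changing the input and output paths accordingly.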
After the detection model is converted, additional folders `ppocr_det_mobile_2.0_serving` and `ppocr_det_mobile_2.0_client` will appear in the current directory, with the following structure:
```
|- ppocr_det_mobile_2.0_serving/
  |- __model__
  |- __params__
  |- serving_server_conf.prototxt
  |- serving_server_conf.stream.prototxt
|- ppocr_det_mobile_2.0_client
  |- serving_client_conf.prototxt
  |- serving_client_conf.stream.prototxt
...
...
After a successful run, the prediction results of the model will be printed in the terminal window. An example of the result is:
![](./imgs/results.png)
Adjust the concurrency in config.yml to obtain the highest QPS. Generally, the concurrency of detection and recognition is set at a ratio of about 2:1, for example:
```
det:
    concurrency: 8
    ...
rec:
    concurrency: 4
    ...
```
Multiple service requests can be sent at the same time if necessary.
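If you want to drive the service with several clients at once, a minimal sketch is shown below (it assumes the provided `pipeline_http_client.py` test script sends a single request to the running service; the script name and the number of parallel clients are assumptions for illustration):
```
# Hypothetical example: launch 8 test clients in parallel against the running pipeline service
for i in $(seq 1 8); do
    python3 pipeline_http_client.py &
done
wait
```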
The prediction performance data will be automatically written to the `PipelineServingLogs/pipeline.tracer` file.
Tested on 200 real images with the long side of detection limited to 960, the average QPS on a T4 GPU can reach around 23: