Download the paddle whl package corresponding to your environment; version 2.0.1 is recommended.
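For example, a pip-based install might look like the following; treat this as a sketch and pick the GPU build that matches your CUDA version:

```
# CPU-only environment
pip3 install paddlepaddle==2.0.1
# GPU environment (choose the build matching your CUDA version)
pip3 install paddlepaddle-gpu==2.0.1
```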
2. The steps to prepare the PaddleServing operating environment are as follows:
Install serving, which is used to start the service
```
pip3 install paddle-serving-server==0.6.1 # for CPU
pip3 install paddle-serving-server-gpu==0.6.1 # for GPU
# Other GPU environments need to confirm the environment and then choose to execute the following commands
pip3 install paddle-serving-server-gpu==0.6.1.post101 # GPU with CUDA10.1 + TensorRT6
pip3 install paddle-serving-server-gpu==0.6.1.post11 # GPU with CUDA11 + TensorRT7
```
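As a quick sanity check, you can confirm the server package installed cleanly; note that for the GPU wheel the module name may be `paddle_serving_server_gpu` depending on the release, so this is only a rough sketch:

```
# confirm the package imports cleanly
python3 -c "import paddle_serving_server"
# list the launcher's command-line options
python3 -m paddle_serving_server.serve --help
```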
3. Install the client, which is used to send requests to the service
In the [download link](https://github.com/PaddlePaddle/Serving/blob/develop/doc/LATEST_PACKAGES.md), find the client installation package corresponding to your python version; the python3.7 version is recommended here.

```
pip3 install paddle-serving-client==0.6.1
```
**note:** If you want to install the latest version of PaddleServing, refer to [link](https://github.com/PaddlePaddle/Serving/blob/develop/doc/LATEST_PACKAGES.md).
...

When using PaddleServing for service deployment, you need to convert the saved inference model into a model that is easy to deploy for serving.
First, download the [inference model](https://github.com/PaddlePaddle/PaddleOCR#pp-ocr-20-series-model-listupdate-on-dec-15) of PPOCR
```
# Download and unzip the OCR text detection model
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_infer.tar && tar xf ch_ppocr_mobile_v2.0_det_infer.tar
# Download and unzip the OCR text recognition model
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_infer.tar && tar xf ch_ppocr_mobile_v2.0_rec_infer.tar
```
Then, you can use the installed paddle_serving_client tool to convert the inference model into a serving model, as shown in the sketch below.
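The following is a minimal conversion sketch for the detection model; the `paddle_serving_client.convert` flags follow the PaddleServing documentation, and the model/params file names (`inference.pdmodel`, `inference.pdiparams`) are assumed to match the downloaded inference model:

```
# Detection model conversion: writes the serving and client configs
# into the two output directories named below
python3 -m paddle_serving_client.convert --dirname ./ch_ppocr_mobile_v2.0_det_infer/ \
                                         --model_filename inference.pdmodel \
                                         --params_filename inference.pdiparams \
                                         --serving_server ./ppocr_det_mobile_2.0_serving/ \
                                         --serving_client ./ppocr_det_mobile_2.0_client/
```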
After the detection model is converted, there will be additional folders of `ppocr_det_mobile_2.0_serving` and `ppocr_det_mobile_2.0_client` in the current folder, with the following format:
```
|- ppocr_det_mobile_2.0_serving/
  |- __model__
  |- __params__
  |- serving_server_conf.prototxt
  |- serving_server_conf.stream.prototxt

|- ppocr_det_mobile_2.0_client
  |- serving_client_conf.prototxt
  |- serving_client_conf.stream.prototxt
```
The recognition model is converted in the same way.

...
After successfully running, the predicted result of the model will be printed in the cmd window. An example of the result is:
![](./imgs/results.png)
Adjust the concurrency settings in config.yml to obtain the highest QPS. Generally, the ratio of detection concurrency to recognition concurrency should be 2:1:
```
det:
concurrency: 8
...
rec:
concurrency: 4
...
```
Multiple service requests can be sent at the same time if necessary.
The predicted performance data will be automatically written into the `PipelineServingLogs/pipeline.tracer` file.
Tested on 200 real images with the long side of the detection input limited to 960, the average QPS on a T4 GPU can reach around 23.
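To inspect the collected performance data, you can simply view the tail of the tracer log, for example:

```
# show the most recent performance records
tail -n 20 PipelineServingLogs/pipeline.tracer
```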
Windows does not support Pipeline Serving. If you want to launch Paddle Serving on Windows, you should use the Web Service instead; for more information, please refer to [Paddle Serving for Windows Users](https://github.com/PaddlePaddle/Serving/blob/develop/doc/WINDOWS_TUTORIAL.md)
1. Start the server
```
cd win
# for GPU users
python3 ocr_web_server.py gpu
# for CPU users
python3 ocr_web_server.py cpu
```
2. Send requests from the client
```
python3 ocr_web_client.py
```
<a name="faq"></a>
## FAQ
**Q1**: No result is returned after sending a request.
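**A1**: A common cause, per the PaddleServing troubleshooting docs, is an HTTP proxy interfering with local requests; disabling the proxy before starting the service and before sending requests often resolves it:

```
# disable proxies in the shell used for both the server and the client
unset https_proxy
unset http_proxy
```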