# OCR Pipeline WebService

(English|[简体中文](./README_CN.md))

PaddleOCR provides two service deployment methods:
- Based on **PaddleHub Serving**: Code path is "`./deploy/hubserving`". Please refer to the [tutorial](../../deploy/hubserving/readme_en.md)
- Based on **PaddleServing**: Code path is "`./deploy/pdserving`". Please follow this tutorial.

# Service deployment based on PaddleServing  

This document will introduce how to use [PaddleServing](https://github.com/PaddlePaddle/Serving/blob/develop/README.md) to deploy the PPOCR dynamic graph model as a pipeline online service.

Some key features of Paddle Serving:
- Seamless integration with the Paddle training pipeline; most Paddle models can be deployed with a single command.
- Support for industrial serving features such as model management, online loading, and online A/B testing.
- Support for highly concurrent and efficient communication between clients and servers.

For an introduction to the Paddle Serving deployment framework and further tutorials, refer to the [documentation](https://github.com/PaddlePaddle/Serving/blob/develop/README.md).


## Contents
- [Environmental preparation](#environmental-preparation)
- [Model conversion](#model-conversion)
- [Paddle Serving pipeline deployment](#paddle-serving-pipeline-deployment)
- [FAQ](#faq)

<a name="environmental-preparation"></a>
## Environmental preparation

Both the PaddleOCR operating environment and the Paddle Serving operating environment are needed.

1. Prepare the PaddleOCR operating environment by following this [link](../../doc/doc_ch/installation.md).
   Download the Paddle whl package that matches your environment; it is recommended to install version 2.0.1.
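
   For example, installing from PyPI might look like the following (these package names are the standard Paddle release names rather than commands taken from this document; choose the GPU build matching your CUDA version):

    ```
    pip3 install paddlepaddle==2.0.1      # CPU
    pip3 install paddlepaddle-gpu==2.0.1  # GPU
    ```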


2. Prepare the PaddleServing operating environment as follows:

    Install paddle-serving-server, which is used to start the service
    ```
    pip3 install paddle-serving-server==0.5.0 # for CPU
    pip3 install paddle-serving-server-gpu==0.5.0 # for GPU
    # For other GPU environments, confirm the CUDA/TensorRT versions and choose the matching command:
    pip3 install paddle-serving-server-gpu==0.5.0.post9 # GPU with CUDA9.0
    pip3 install paddle-serving-server-gpu==0.5.0.post10 # GPU with CUDA10.0
    pip3 install paddle-serving-server-gpu==0.5.0.post101 # GPU with CUDA10.1 + TensorRT6
    pip3 install paddle-serving-server-gpu==0.5.0.post11 # GPU with CUDA11 + TensorRT7
    ```

3. Install the client to send requests to the service
    Find the client installation package corresponding to your Python version at the [download link](https://github.com/PaddlePaddle/Serving/blob/develop/doc/LATEST_PACKAGES.md).
    The Python 3.7 version is recommended here:

    ```
    wget https://paddle-serving.bj.bcebos.com/whl/paddle_serving_client-0.0.0-cp37-none-any.whl
    pip3 install paddle_serving_client-0.0.0-cp37-none-any.whl
    ```

4. Install serving-app:
    ```
    pip3 install paddle-serving-app==0.3.1
    ```

   **Note:** If you want to install the latest version of PaddleServing, refer to this [link](https://github.com/PaddlePaddle/Serving/blob/develop/doc/LATEST_PACKAGES.md).


<a name="model-conversion"></a>
## Model conversion
When using PaddleServing for service deployment, you need to convert the saved inference model into a serving model that is easy to deploy.

First, download the [inference model](https://github.com/PaddlePaddle/PaddleOCR#pp-ocr-20-series-model-listupdate-on-dec-15) of PPOCR:
```
# Download and unzip the OCR text detection model
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_infer.tar && tar xf ch_ppocr_mobile_v2.0_det_infer.tar
# Download and unzip the OCR text recognition model
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_infer.tar && tar xf ch_ppocr_mobile_v2.0_rec_infer.tar

```
Then use the installed paddle_serving_client tool to convert the inference models into serving models:
```
#  Detection model conversion
python3 -m paddle_serving_client.convert --dirname ./ch_ppocr_mobile_v2.0_det_infer/ \
                                         --model_filename inference.pdmodel          \
                                         --params_filename inference.pdiparams       \
                                         --serving_server ./ppocr_det_mobile_2.0_serving/ \
                                         --serving_client ./ppocr_det_mobile_2.0_client/

#  Recognition model conversion
python3 -m paddle_serving_client.convert --dirname ./ch_ppocr_mobile_v2.0_rec_infer/ \
                                         --model_filename inference.pdmodel          \
                                         --params_filename inference.pdiparams       \
                                         --serving_server ./ppocr_rec_mobile_2.0_serving/  \
                                         --serving_client ./ppocr_rec_mobile_2.0_client/

```

After the detection model is converted, the additional folders `ppocr_det_mobile_2.0_serving` and `ppocr_det_mobile_2.0_client` will appear in the current directory, with the following structure:
```
|- ppocr_det_mobile_2.0_serving/
   |- __model__
   |- __params__
   |- serving_server_conf.prototxt
   |- serving_server_conf.stream.prototxt

|- ppocr_det_mobile_2.0_client
   |- serving_client_conf.prototxt
   |- serving_client_conf.stream.prototxt

```
The converted recognition model has the same structure.

<a name="paddle-serving-pipeline-deployment"></a>
## Paddle Serving pipeline deployment

1. Download the PaddleOCR code. If you have already downloaded it, you can skip this step.
    ```
    git clone https://github.com/PaddlePaddle/PaddleOCR

    # Enter the working directory  
    cd PaddleOCR/deploy/pdserving/
    ```

    The pdserving directory contains the code to start the pipeline service and send prediction requests, including:
    ```
    __init__.py
    config.yml # Start the service configuration file
    ocr_reader.py # OCR model pre-processing and post-processing code implementation
    pipeline_http_client.py # Script to send pipeline prediction request
    web_service.py # Start the script of the pipeline server
    ```

2. Run the following command to start the service.
    ```
    # Start the service and save the running log in log.txt
    python3 web_service.py &>log.txt &
    ```
    After the service is successfully started, a log similar to the following will be printed in log.txt
    ![](./imgs/start_server.png)

3. Send a service request:
    ```
    python3 pipeline_http_client.py
    ```
    After the script runs successfully, the prediction results of the model will be printed in the terminal. An example of the result is:
    ![](./imgs/results.png)  
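
    If you need to adapt the request script, a minimal client in the spirit of `pipeline_http_client.py` might look like the sketch below. The endpoint URL (port 9998, service name `ocr`) and the test image path are assumptions; check `config.yml` for the values used by your deployment.

    ```
    import base64
    import json

    import requests

    # Assumed endpoint: http://<host>:<http_port>/<service_name>/prediction
    url = "http://127.0.0.1:9998/ocr/prediction"

    # Hypothetical test image path; substitute any local image.
    with open("../../doc/imgs/1.jpg", "rb") as f:
        image = base64.b64encode(f.read()).decode("utf8")

    # The pipeline web service expects parallel "key" and "value" lists.
    data = {"key": ["image"], "value": [image]}
    response = requests.post(url=url, data=json.dumps(data))
    print(response.json())
    ```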

    Adjust the concurrency values in config.yml to maximize QPS. Generally, detection concurrency should be about twice recognition concurrency:

    ```
    det:
        concurrency: 8
        ...
    rec:
        concurrency: 4
        ...
    ```

    Multiple service requests can be sent at the same time if necessary; a toy load generator is sketched below.
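
    The sketch reuses the endpoint assumed in the client example above and fans requests out over a thread pool; the image path and request count are arbitrary:

    ```
    import base64
    import json
    from concurrent.futures import ThreadPoolExecutor

    import requests

    url = "http://127.0.0.1:9998/ocr/prediction"  # assumed endpoint, see config.yml

    def send_request(img_path):
        # Encode one image and post it to the pipeline service.
        with open(img_path, "rb") as f:
            image = base64.b64encode(f.read()).decode("utf8")
        data = {"key": ["image"], "value": [image]}
        return requests.post(url=url, data=json.dumps(data)).json()

    # Replay one hypothetical test image 8 times as a small concurrent load.
    img_paths = ["../../doc/imgs/1.jpg"] * 8
    with ThreadPoolExecutor(max_workers=8) as pool:
        for result in pool.map(send_request, img_paths):
            print(result)
    ```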

    The performance data of the prediction pipeline is automatically written to the `PipelineServingLogs/pipeline.tracer` file.

    Tested on 200 real images with the detection long side limited to 960, the average QPS on a T4 GPU can reach around 13:

    ```
    2021-05-12 10:03:24,812 ==================== TRACER ======================
    2021-05-12 10:03:24,904 Op(rec):
    2021-05-12 10:03:24,904         in[51.5634921875 ms]
    2021-05-12 10:03:24,904         prep[215.310859375 ms]
    2021-05-12 10:03:24,904         midp[33.1617109375 ms]
    2021-05-12 10:03:24,905         postp[10.451234375 ms]
    2021-05-12 10:03:24,905         out[9.736765625 ms]
    2021-05-12 10:03:24,905         idle[0.1914292677880819]
    2021-05-12 10:03:24,905 Op(det):
    2021-05-12 10:03:24,905         in[218.63487096774193 ms]
    2021-05-12 10:03:24,906         prep[357.35925 ms]
    2021-05-12 10:03:24,906         midp[31.47598387096774 ms]
    2021-05-12 10:03:24,906         postp[15.274870967741936 ms]
    2021-05-12 10:03:24,906         out[16.245693548387095 ms]
    2021-05-12 10:03:24,906         idle[0.3675805857279226]
    2021-05-12 10:03:24,906 DAGExecutor:
    2021-05-12 10:03:24,906         Query count[128]
    2021-05-12 10:03:24,906         QPS[12.8 q/s]
    2021-05-12 10:03:24,906         Succ[1.0]
    2021-05-12 10:03:24,907         Error req[]
    2021-05-12 10:03:24,907         Latency:
    2021-05-12 10:03:24,907                 ave[798.6557734374998 ms]
    2021-05-12 10:03:24,907                 .50[867.936 ms]
    2021-05-12 10:03:24,907                 .60[914.507 ms]
    2021-05-12 10:03:24,907                 .70[961.064 ms]
    2021-05-12 10:03:24,907                 .80[1043.264 ms]
    2021-05-12 10:03:24,907                 .90[1117.923 ms]
    2021-05-12 10:03:24,907                 .95[1207.056 ms]
    2021-05-12 10:03:24,908                 .99[1325.008 ms]
    2021-05-12 10:03:24,908 Channel (server worker num[10]):
    2021-05-12 10:03:24,909         chl0(In: ['@DAGExecutor'], Out: ['det']) size[0/0]
    2021-05-12 10:03:24,909         chl1(In: ['det'], Out: ['rec']) size[1/0]
    2021-05-12 10:03:24,910         chl2(In: ['rec'], Out: ['@DAGExecutor']) size[0/0]
    ```
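
    As a sanity check on these numbers: the DAGExecutor reports 128 queries at 12.8 q/s, i.e. roughly a 10-second tracer window, with an average end-to-end latency of about 799 ms and a 99th-percentile latency of about 1325 ms.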


<a name="faq"></a>
## FAQ
**Q1**: No result is returned after sending a request.

**A1**: Do not set a proxy when starting the service or sending requests. Unset the proxy before starting the service and before sending requests:
```
unset https_proxy
unset http_proxy
```