After the ResNet50_vd inference model is converted, two additional folders, `ResNet50_vd_serving` and `ResNet50_vd_client`, will appear in the current directory with the following structure:
```
|- ResNet50_vd_serving/
  |- __model__
  |- __params__
  |- serving_server_conf.prototxt
  |- serving_server_conf.stream.prototxt
|- ResNet50_vd_client/
  |- serving_client_conf.prototxt
  |- serving_client_conf.stream.prototxt
```
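For reference, this conversion is typically done with the `paddle_serving_client.convert` tool. The sketch below makes assumptions about the input paths and file names (`./ResNet50_vd_infer/` with `inference.pdmodel` / `inference.pdiparams`); adjust them to match your inference model:

```
python3 -m paddle_serving_client.convert --dirname ./ResNet50_vd_infer/ \
                                         --model_filename inference.pdmodel \
                                         --params_filename inference.pdiparams \
                                         --serving_server ./ResNet50_vd_serving/ \
                                         --serving_client ./ResNet50_vd_client/
```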
Once you have the model files for deployment, you need to change the alias names in `serving_server_conf.prototxt`: change `alias_name` in `feed_var` to `image`, and change `alias_name` in `fetch_var` to `prediction`.
The modified `serving_server_conf.prototxt` file is as follows:
```
feed_var {
  name: "inputs"
  alias_name: "image"
  is_lod_tensor: false
  feed_type: 1
  shape: 3
  shape: 224
  shape: 224
}
fetch_var {
  name: "save_infer_model/scale_0.tmp_1"
  alias_name: "prediction"
  is_lod_tensor: true
  fetch_type: 1
  shape: -1
}
```
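If you prefer to script this edit, here is a minimal `sed` sketch. It assumes the alias names generated by the converter equal the variable names (`inputs` and `save_infer_model/scale_0.tmp_1`); check your file and adjust the patterns if they differ:

```
# Assumes the generated alias_name values match the variable names; verify before running.
sed -i 's/alias_name: "inputs"/alias_name: "image"/' ResNet50_vd_serving/serving_server_conf.prototxt
sed -i 's|alias_name: "save_infer_model/scale_0.tmp_1"|alias_name: "prediction"|' ResNet50_vd_serving/serving_server_conf.prototxt
```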
<a name="paddle-serving-pipeline-deployment"></a>
## Paddle Serving pipeline deployment
1. Download the PaddleClas code; if you have already downloaded it, you can skip this step. For example:
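```
git clone https://github.com/PaddlePaddle/PaddleClas
cd PaddleClas
```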
Windows does not support Pipeline Serving. If you want to launch Paddle Serving on Windows, you should use Web Service instead; for more information, please refer to [Paddle Serving for Windows Users](https://github.com/PaddlePaddle/Serving/blob/develop/doc/WINDOWS_TUTORIAL.md).
**Windows users can only use version 0.5.0 in CPU mode.**
**Prepare Stage:**
```
pip3 install paddle-serving-server==0.5.0
pip3 install paddle-serving-app==0.3.1
```
1. Start the server
```
cd win
# for GPU users
python3 ocr_web_server.py gpu
# for CPU users
python3 ocr_web_server.py cpu
```
2. Send requests from the client
```
python3 ocr_web_client.py
```
<a name="faq"></a>
## FAQ
**Q1**: No result is returned after sending a request.
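One common cause (an assumption based on typical Paddle Serving setups, not stated in this document) is that proxy environment variables interfere with the request. Try clearing them before starting the service and sending requests:

```
unset https_proxy
unset http_proxy
```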