Commit c7a50dbd authored by littletomatodonkey

fix serving version

Parent ad3c1f98
@@ -34,29 +34,24 @@ PaddleOCR provides two service deployment methods:

 - Prepare the PaddleServing runtime environment as follows:
-1. Install serving, used to start the service
-   ```
-   pip3 install paddle-serving-server==0.6.1 # for CPU
-   pip3 install paddle-serving-server-gpu==0.6.1 # for GPU
-   # For other GPU environments, check your environment before choosing one of the following commands
-   pip3 install paddle-serving-server-gpu==0.6.1.post101 # GPU with CUDA10.1 + TensorRT6
-   pip3 install paddle-serving-server-gpu==0.6.1.post11 # GPU with CUDA11 + TensorRT7
-   ```
-2. Install the client, used to send requests to the service
-   Find the client package for your Python version at the [download link](https://github.com/PaddlePaddle/Serving/blob/develop/doc/LATEST_PACKAGES.md); Python 3.7 is recommended:
-   ```
-   wget https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_client-0.0.0-cp37-none-any.whl
-   pip3 install paddle_serving_client-0.0.0-cp37-none-any.whl
-   ```
-3. Install serving-app
-   ```
-   pip3 install paddle-serving-app==0.6.1
-   ```
-**Note:** To install the latest version of PaddleServing, see this [link](https://github.com/PaddlePaddle/Serving/blob/develop/doc/LATEST_PACKAGES.md).
+  ```bash
+  # Install serving, used to start the service
+  wget https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_gpu-0.7.0.post102-py3-none-any.whl
+  pip install paddle_serving_server_gpu-0.7.0.post102-py3-none-any.whl
+
+  # For a CUDA 10.1 environment, install paddle-serving-server with the following commands instead
+  # wget https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_gpu-0.7.0.post101-py3-none-any.whl
+  # pip install paddle_serving_server_gpu-0.7.0.post101-py3-none-any.whl
+
+  # Install the client, used to send requests to the service
+  wget https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_client-0.7.0-cp37-none-any.whl
+  pip install paddle_serving_client-0.7.0-cp37-none-any.whl
+
+  # Install serving-app
+  wget https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_app-0.7.0-py3-none-any.whl
+  pip install paddle_serving_app-0.7.0-py3-none-any.whl
+  ```
+**Note:** To install the latest version of PaddleServing, see this [link](https://github.com/PaddlePaddle/Serving/blob/v0.7.0/doc/Latest_Packages_CN.md).
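
Before converting any models, it is worth confirming that the wheels above installed cleanly. A minimal import check, assuming the top-level module names match the package names (the GPU server wheel has shipped under slightly different module names across releases, so adjust if needed):

```python
import importlib

# Module names are assumed from the wheel names above; adjust them if your
# installed release exposes a different top-level package.
for mod in ("paddle_serving_server", "paddle_serving_client", "paddle_serving_app"):
    try:
        importlib.import_module(mod)
        print(f"{mod}: OK")
    except ImportError as exc:
        print(f"{mod}: not importable ({exc})")
```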
<a name="模型转换"></a>
## Model Conversion
@@ -64,40 +59,41 @@ PaddleOCR provides two service deployment methods:

 When deploying with PaddleServing, the saved inference model must first be converted into a format that serving can deploy easily.

 First, download the PPOCR [inference models](https://github.com/PaddlePaddle/PaddleOCR#pp-ocr-20-series-model-listupdate-on-dec-15):
-```
+```bash
 # Download and extract the OCR text detection model
-wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_infer.tar && tar xf ch_ppocr_mobile_v2.0_det_infer.tar
+wget https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_infer.tar -O ch_PP-OCRv2_det_infer.tar && tar -xf ch_PP-OCRv2_det_infer.tar
 # Download and extract the OCR text recognition model
-wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_infer.tar && tar xf ch_ppocr_mobile_v2.0_rec_infer.tar
+wget https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_infer.tar -O ch_PP-OCRv2_rec_infer.tar && tar -xf ch_PP-OCRv2_rec_infer.tar
 ```

 Next, use the installed paddle_serving_client to convert the downloaded inference models into a format that the server can deploy easily.
-```
+```bash
 # Convert the detection model
-python3 -m paddle_serving_client.convert --dirname ./ch_ppocr_mobile_v2.0_det_infer/ \
+python -m paddle_serving_client.convert --dirname ./ch_PP-OCRv2_det_infer/ \
         --model_filename inference.pdmodel \
         --params_filename inference.pdiparams \
-        --serving_server ./ppocr_det_mobile_2.0_serving/ \
-        --serving_client ./ppocr_det_mobile_2.0_client/
+        --serving_server ./ppocrv2_det_serving/ \
+        --serving_client ./ppocrv2_det_client/
 # Convert the recognition model
-python3 -m paddle_serving_client.convert --dirname ./ch_ppocr_mobile_v2.0_rec_infer/ \
+python -m paddle_serving_client.convert --dirname ./ch_PP-OCRv2_rec_infer/ \
         --model_filename inference.pdmodel \
         --params_filename inference.pdiparams \
-        --serving_server ./ppocr_rec_mobile_2.0_serving/ \
-        --serving_client ./ppocr_rec_mobile_2.0_client/
+        --serving_server ./ppocrv2_rec_serving/ \
+        --serving_client ./ppocrv2_rec_client/
 ```

-After the detection model is converted, two new folders, `ppocr_det_mobile_2.0_serving` and `ppocr_det_mobile_2.0_client`, appear in the current directory, with the following layout:
+After the detection model is converted, two new folders, `ppocrv2_det_serving` and `ppocrv2_det_client`, appear in the current directory, with the following layout:
 ```
-|- ppocr_det_mobile_2.0_serving/
+|- ppocrv2_det_serving/
   |- __model__
   |- __params__
   |- serving_server_conf.prototxt
   |- serving_server_conf.stream.prototxt
-|- ppocr_det_mobile_2.0_client
+|- ppocrv2_det_client
   |- serving_client_conf.prototxt
   |- serving_client_conf.stream.prototxt
 ```
......
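
Before wiring these folders into the service config, a quick sanity check that both conversions produced the expected files can save debugging later. A minimal sketch, with directory names taken from the `--serving_server`/`--serving_client` flags above:

```python
from pathlib import Path

# Directory names follow the --serving_server / --serving_client flags above.
for d in ("ppocrv2_det_serving", "ppocrv2_det_client",
          "ppocrv2_rec_serving", "ppocrv2_rec_client"):
    print(d, sorted(p.name for p in Path(d).iterdir()))
```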
@@ -34,7 +34,7 @@ op:
       client_type: local_predictor
       # det model path
-      model_config: ./ppocr_det_mobile_2.0_serving
+      model_config: ./ppocrv2_det_serving
       # fetch list; use the alias_name of fetch_var in client_config
       fetch_list: ["save_infer_model/scale_0.tmp_1"]
@@ -60,7 +60,7 @@ op:
       client_type: local_predictor
       # rec model path
-      model_config: ./ppocr_rec_mobile_2.0_serving
+      model_config: ./ppocrv2_rec_serving
       # fetch list; use the alias_name of fetch_var in client_config
       fetch_list: ["save_infer_model/scale_0.tmp_1"]
......
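
Once the pipeline service is started with this config, it can be queried over HTTP. A hedged request sketch, assuming the repo's default `http_port: 9998` and the `/ocr/prediction` route (neither appears in the hunks above, so check your config.yml):

```python
import base64
import json

import requests

# Endpoint assumed from the repo's default config.yml; adjust port and route
# to match your deployment.
url = "http://127.0.0.1:9998/ocr/prediction"

with open("doc/imgs/1.jpg", "rb") as f:  # any test image will do
    image = base64.b64encode(f.read()).decode("utf8")

data = {"key": ["image"], "value": [image]}
resp = requests.post(url=url, data=json.dumps(data))
print(resp.json())
```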
@@ -54,7 +54,7 @@ class DetOp(Op):
         _, self.new_h, self.new_w = det_img.shape
         return {"x": det_img[np.newaxis, :].copy()}, False, None, ""

-    def postprocess(self, input_dicts, fetch_dict, log_id):
+    def postprocess(self, input_dicts, fetch_dict, data_id, log_id):
         det_out = fetch_dict["save_infer_model/scale_0.tmp_1"]
         ratio_list = [
             float(self.new_h) / self.ori_h, float(self.new_w) / self.ori_w
@@ -129,7 +129,7 @@ class RecOp(Op):
         return feed_list, False, None, ""

-    def postprocess(self, input_dicts, fetch_data, log_id):
+    def postprocess(self, input_dicts, fetch_data, data_id, log_id):
         res_list = []
         if isinstance(fetch_data, dict):
             if len(fetch_data) > 0:
......
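
The signature change above is the core of this commit: the 0.7.0 pipeline framework passes an extra `data_id` (the request's id inside the pipeline) ahead of `log_id`, so every custom Op has to accept both even when they go unused. A minimal sketch of a conforming hook, assuming the fetch variable name used throughout this diff:

```python
import numpy as np


class DetOpSketch:
    # Sketch only: a real Op subclasses the pipeline Op class from
    # paddle_serving_server and would do real postprocessing here.
    def postprocess(self, input_dicts, fetch_dict, data_id, log_id):
        # data_id identifies the request within the pipeline; log_id is the
        # tracing id. Both must be accepted even if unused.
        det_out = np.asarray(fetch_dict["save_infer_model/scale_0.tmp_1"])
        return {"det_out": det_out}, None, ""  # (outputs, error code, error info)
```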
@@ -54,7 +54,7 @@ class DetOp(Op):
         _, self.new_h, self.new_w = det_img.shape
         return {"x": det_img[np.newaxis, :].copy()}, False, None, ""

-    def postprocess(self, input_dicts, fetch_dict, log_id):
+    def postprocess(self, input_dicts, fetch_dict, data_id, log_id):
         det_out = fetch_dict["save_infer_model/scale_0.tmp_1"]
         ratio_list = [
             float(self.new_h) / self.ori_h, float(self.new_w) / self.ori_w
......
@@ -56,7 +56,7 @@ class RecOp(Op):
         feed_list.append(feed)
         return feed_list, False, None, ""

-    def postprocess(self, input_dicts, fetch_data, log_id):
+    def postprocess(self, input_dicts, fetch_data, data_id, log_id):
         res_list = []
         if isinstance(fetch_data, dict):
             if len(fetch_data) > 0:
......
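
The standalone det and rec services receive the same fix. For custom Ops that must run against both Serving releases during a migration, a tolerant signature can bridge the 0.6.x call (`log_id` only) and the 0.7.0 call (`data_id` plus `log_id`); a speculative sketch, not something this commit itself does:

```python
class CompatPostprocessMixin:
    # Speculative helper, not part of this commit: accept either calling
    # convention and pass the fetch results through unchanged.
    def postprocess(self, input_dicts, fetch_dict, *ids):
        log_id = ids[-1] if ids else 0               # log_id is always last
        data_id = ids[0] if len(ids) == 2 else None  # present only on 0.7.0
        return fetch_dict, None, ""
```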