Commit b0a5ccb3 authored by 文幕地方

add table to hub serving

Parent 1eaae43a
[English](readme_en.md) | 简体中文
- [Service deployment based on PaddleHub Serving](#基于paddlehub-serving的服务部署)
  - [Quick start service](#快速启动服务)
    - [1. Prepare the environment](#1-准备环境)
    - [2. Download the inference model](#2-下载推理模型)
    - [3. Install the service module](#3-安装服务模块)
    - [4. Start the service](#4-启动服务)
      - [Method 1. Start from the command line (CPU only)](#方式1-命令行命令启动仅支持cpu)
      - [Method 2. Start with a configuration file (CPU, GPU)](#方式2-配置文件启动支持cpugpu)
  - [Send prediction requests](#发送预测请求)
  - [Returned result format](#返回结果格式说明)
  - [User-defined service module modification](#自定义修改服务模块)
PaddleOCR provides 2 service deployment methods:
- Deployment based on PaddleHub Serving: the code is located at "`./deploy/hubserving`"; follow this tutorial.
- Deployment based on PaddleServing: the code is located at "`./deploy/pdserving`"; see the [tutorial](../../deploy/pdserving/README_CN.md) for usage.
@@ -149,10 +162,11 @@ hub serving start -c deploy/hubserving/ocr_system/config.json
`http://127.0.0.1:8866/predict/ocr_cls`
`http://127.0.0.1:8867/predict/ocr_rec`
`http://127.0.0.1:8868/predict/ocr_system`
- **image_dir**: test image path; it can be a single image path or a directory of images
- **visualize**: whether to visualize the results; the default value is False
Example:
```python tools/test_hubserving.py --server_url=http://127.0.0.1:8868/predict/ocr_system --image_dir=./doc/imgs/ --visualize=false```
## Returned result format
The returned result is a list. Each item in the list is a dict, which may contain three kinds of fields, as follows:
@@ -170,7 +184,7 @@ hub serving start -c deploy/hubserving/ocr_system/config.json
| ---- | ---- | ---- | ---- | ---- |
|angle| | ✔ | | ✔ |
|text| | |✔|✔|
|confidence| |✔ |✔| |
|text_region| ✔| | |✔ |
**Note:** If you need to add, delete, or modify the returned fields, make the changes in the `module.py` file of the corresponding module. For the complete process, refer to the next section on user-defined service module modification.
English | [简体中文](readme.md)
- [Service deployment based on PaddleHub Serving](#service-deployment-based-on-paddlehub-serving)
- [Quick start service](#quick-start-service)
- [1. Prepare the environment](#1-prepare-the-environment)
- [2. Download inference model](#2-download-inference-model)
- [3. Install Service Module](#3-install-service-module)
- [4. Start service](#4-start-service)
- [Way 1. Start with command line parameters (CPU only)](#way-1-start-with-command-line-parameters-cpu-only)
- [Way 2. Start with configuration file (CPU, GPU)](#way-2-start-with-configuration-filecpugpu)
- [Send prediction requests](#send-prediction-requests)
- [Returned result format](#returned-result-format)
- [User defined service module modification](#user-defined-service-module-modification)
PaddleOCR provides 2 service deployment methods:
- Based on **PaddleHub Serving**: Code path is "`./deploy/hubserving`". Please follow this tutorial.
- Based on **PaddleServing**: Code path is "`./deploy/pdserving`". Please refer to the [tutorial](../../deploy/pdserving/README.md) for usage.
@@ -154,11 +167,12 @@ For example, if the detection, recognition and 2-stage serial services are start
`http://127.0.0.1:8866/predict/ocr_cls`
`http://127.0.0.1:8867/predict/ocr_rec`
`http://127.0.0.1:8868/predict/ocr_system`
- **image_dir**: Test image path, can be a single image path or an image directory path
- **visualize**: Whether to visualize the results, the default value is False
**Eg.**
```shell
python tools/test_hubserving.py --server_url=http://127.0.0.1:8868/predict/ocr_system --image_dir=./doc/imgs/ --visualize=false
```
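For reference, here is a minimal sketch of the request that `tools/test_hubserving.py` sends under the hood: the serving module expects a JSON body containing a list of base64-encoded images, and PaddleHub Serving wraps the module's return value in a `results` field. The endpoint URL and image path below are illustrative, not fixed values.

```python
import base64
import json

import requests

# Illustrative endpoint and image path; point these at your own service and image.
url = "http://127.0.0.1:8868/predict/ocr_system"
with open("./doc/imgs/1.jpg", "rb") as f:
    img_base64 = base64.b64encode(f.read()).decode("utf-8")

headers = {"Content-type": "application/json"}
data = {"images": [img_base64]}
r = requests.post(url=url, headers=headers, data=json.dumps(data))

# One entry per submitted image.
print(r.json()["results"][0])
```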
## Returned result format
@@ -177,7 +191,7 @@ The fields returned by different modules are different. For example, the results
| ---- | ---- | ---- | ---- | ---- |
|angle| | ✔ | | ✔ |
|text| | |✔|✔|
|confidence| |✔ |✔| |
|text_region| ✔| | |✔ |
**Note:** If you need to add, delete or modify the returned fields, you can modify the file `module.py` of the corresponding module. For the complete process, refer to the user-defined modification service module in the next section.
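As a small client-side illustration (not part of this commit), the fields in the table above can be read from the parsed response like this; `res` is assumed to be one item of `r.json()["results"]`, i.e. the result list for a single image:

```python
from typing import Dict, List


def print_ocr_system_result(res: List[Dict]) -> None:
    # Each dict describes one detected text line; fields a module does not
    # return simply come back as None here.
    for item in res:
        print(item.get("text"), item.get("confidence"),
              item.get("text_region"), item.get("angle"))
```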
{
"modules_info": {
"structure_table": {
"init_args": {
"version": "1.0.0",
"use_gpu": true
},
"predict_args": {
}
}
},
"port": 8869,
"use_multiprocess": false,
"workers": 2
}
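One thing worth noting about this configuration: `use_gpu` in `init_args` has to match the machine, because the module's `_initialize` (below) raises an error when `use_gpu` is true and `CUDA_VISIBLE_DEVICES` is not set. A minimal sketch for flipping the config to CPU programmatically, assuming it lives at `deploy/hubserving/structure_table/config.json` like the other hubserving modules:

```python
import json

cfg_path = "deploy/hubserving/structure_table/config.json"  # assumed location

with open(cfg_path, "r", encoding="utf-8") as f:
    cfg = json.load(f)

# Disable GPU for machines without CUDA_VISIBLE_DEVICES configured.
cfg["modules_info"]["structure_table"]["init_args"]["use_gpu"] = False

with open(cfg_path, "w", encoding="utf-8") as f:
    json.dump(cfg, f, indent=4)
```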
# -*- coding:utf-8 -*-
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import sys
sys.path.insert(0, ".")
import copy
import time
import paddlehub
from paddlehub.common.logger import logger
from paddlehub.module.module import moduleinfo, runnable, serving
import cv2
import numpy as np
import paddlehub as hub
from tools.infer.utility import base64_to_cv2
from ppstructure.table.predict_table import TableSystem as _TableSystem
from ppstructure.predict_system import save_structure_res
from ppstructure.utility import parse_args
from deploy.hubserving.structure_table.params import read_params
@moduleinfo(
name="structure_table",
version="1.0.0",
summary="PP-Structure table service",
author="paddle-dev",
author_email="paddle-dev@baidu.com",
type="cv/table_recognition")
class TableSystem(hub.Module):
def _initialize(self, use_gpu=False, enable_mkldnn=False):
"""
initialize with the necessary elements
"""
cfg = self.merge_configs()
cfg.use_gpu = use_gpu
if use_gpu:
try:
_places = os.environ["CUDA_VISIBLE_DEVICES"]
int(_places[0])
print("use gpu: ", use_gpu)
print("CUDA_VISIBLE_DEVICES: ", _places)
cfg.gpu_mem = 8000
except:
raise RuntimeError(
"Environment Variable CUDA_VISIBLE_DEVICES is not set correctly. If you wanna use gpu, please set CUDA_VISIBLE_DEVICES via export CUDA_VISIBLE_DEVICES=cuda_device_id."
)
cfg.ir_optim = True
cfg.enable_mkldnn = enable_mkldnn
self.table_sys = _TableSystem(cfg)
    def merge_configs(self):
        # Build the default config from the inference CLI parser, then
        # overlay the hubserving-specific values from params.py.
        backup_argv = copy.deepcopy(sys.argv)
        sys.argv = sys.argv[:1]  # hide CLI args so parse_args() returns defaults
        cfg = parse_args()

        update_cfg_map = vars(read_params())

        for key in update_cfg_map:
            cfg.__setattr__(key, update_cfg_map[key])

        sys.argv = copy.deepcopy(backup_argv)
        return cfg
def read_images(self, paths=[]):
images = []
for img_path in paths:
assert os.path.isfile(
img_path), "The {} isn't a valid file.".format(img_path)
img = cv2.imread(img_path)
if img is None:
logger.info("error in loading image:{}".format(img_path))
continue
images.append(img)
return images
def predict(self, images=[], paths=[]):
"""
Get the chinese texts in the predicted images.
Args:
images (list(numpy.ndarray)): images data, shape of each is [H, W, C]. If images not paths
paths (list[str]): The paths of images. If paths not images
Returns:
res (list): The result of chinese texts and save path of images.
"""
if images != [] and isinstance(images, list) and paths == []:
predicted_data = images
elif images == [] and isinstance(paths, list) and paths != []:
predicted_data = self.read_images(paths)
else:
raise TypeError("The input data is inconsistent with expectations.")
assert predicted_data != [], "There is not any image to be predicted. Please check the input data."
all_results = []
for img in predicted_data:
if img is None:
logger.info("error in loading image")
all_results.append([])
continue
starttime = time.time()
pred_html = self.table_sys(img)
elapse = time.time() - starttime
logger.info("Predict time: {}".format(elapse))
all_results.append(pred_html)
return all_results
@serving
def serving_method(self, images, **kwargs):
"""
Run as a service.
"""
images_decode = [base64_to_cv2(image) for image in images]
results = self.predict(images_decode, **kwargs)
return results
if __name__ == '__main__':
table_system = TableSystem()
table_system._initialize()
image_path = ['./doc/table/table.jpg']
res = table_system.predict(paths=image_path)
print(res)
# -*- coding:utf-8 -*-
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from deploy.hubserving.ocr_system.params import read_params as pp_ocr_read_params
def read_params():
cfg = pp_ocr_read_params()
# params for table structure model
cfg.table_max_len = 488
cfg.table_model_dir = './inference/en_ppocr_mobile_v2.0_table_structure_infer/'
cfg.table_char_type = 'en'
cfg.table_char_dict_path = './ppocr/utils/dict/table_structure_dict.txt'
cfg.show_log = False
return cfg
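Since these params point at files on disk, a quick sanity check before installing the service module can save a confusing startup failure. This is only a sketch: it assumes the standard Paddle inference file names (`inference.pdmodel`, `inference.pdiparams`) and that it is run from the PaddleOCR repository root so the import resolves.

```python
import os

from deploy.hubserving.structure_table.params import read_params

cfg = read_params()
# Check the table structure model and its dictionary referenced above.
for path in [
        os.path.join(cfg.table_model_dir, "inference.pdmodel"),
        os.path.join(cfg.table_model_dir, "inference.pdiparams"),
        cfg.table_char_dict_path,
]:
    print(path, "OK" if os.path.exists(path) else "MISSING")
```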
@@ -25,7 +25,9 @@ import numpy as np
import time
from PIL import Image
from ppocr.utils.utility import get_image_file_list
from tools.infer.utility import draw_ocr, draw_boxes, str2bool
from ppstructure.utility import draw_structure_result
from ppstructure.predict_system import save_structure_res, to_excel
import requests
import json
@@ -69,8 +71,8 @@ def draw_server_result(image_file, res):
    return draw_img


def main(args):
    image_file_list = get_image_file_list(args.image_dir)
    is_visualize = False
    headers = {"Content-type": "application/json"}
    cnt = 0
@@ -80,18 +82,25 @@ def main(url, image_path):
        if img is None:
            logger.info("error in loading image:{}".format(image_file))
            continue
        img_name = os.path.basename(image_file)
        # send an HTTP request
        starttime = time.time()
        data = {'images': [cv2_to_base64(img)]}
        r = requests.post(
            url=args.server_url, headers=headers, data=json.dumps(data))
        elapse = time.time() - starttime
        total_time += elapse
        logger.info("Predict time of %s: %.3fs" % (image_file, elapse))
        res = r.json()["results"][0]
        logger.info(res)
        if args.visualize:
            draw_img = None
            if 'structure_table' in args.server_url:
                to_excel(res, './{}.xlsx'.format(img_name))
            elif 'structure_system' in args.server_url:
                pass
            else:
                draw_img = draw_server_result(image_file, res)
            if draw_img is not None:
                draw_img_save = "./server_results/"
@@ -108,10 +117,16 @@ def main(url, image_path):
    logger.info("avg time cost: {}".format(float(total_time) / cnt))
def parse_args():
import argparse
parser = argparse.ArgumentParser(description="args for hub serving")
parser.add_argument("--server_url", type=str, required=True)
parser.add_argument("--image_dir", type=str, required=True)
parser.add_argument("--visualize", type=str2bool, default=False)
args = parser.parse_args()
return args
if __name__ == '__main__':
    args = parse_args()
    main(args)