hub serving deployment does not support the batch_size parameter
Created by: BobDu
Following the README, my config.json is:
{
    "modules_info": {
        "chinese_ocr_db_crnn_mobile": {
            "init_args": {
                "version": "1.0.3"
            },
            "predict_args": {
                "use_gpu": true,
                "batch_size": 20
            }
        }
    },
    "port": 3306
}
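For reference, a request against this serving can be sent like the minimal sketch below. It assumes the documented PaddleHub serving client pattern (base64-encoded images posted as JSON); the image path is a placeholder and port 3306 is taken from the config above.

# Minimal serving client sketch (assumption: the documented PaddleHub serving
# protocol, which expects base64-encoded images in an "images" list).
import base64
import json

import cv2
import requests


def cv2_to_base64(image):
    # Encode an OpenCV image (numpy array) as a base64 JPEG string.
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')


# "test.jpg" is a placeholder image path.
data = {"images": [cv2_to_base64(cv2.imread("test.jpg"))]}
headers = {"Content-type": "application/json"}
# Port 3306 matches the "port" field in the config above.
url = "http://127.0.0.1:3306/predict/chinese_ocr_db_crnn_mobile"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
print(r.json())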
The service reports an error:
2020-08-21 20:08:55,054-INFO: 211.103.135.176 - - [21/Aug/2020 20:08:55] "POST /predict/chinese_ocr_db_crnn_mobile HTTP/1.1" 200 -
2020-08-21 20:09:08 - recognize_text() got an unexpected keyword argument 'batch_size'
Reading the PaddleHub source code, in
~/Projects/PaddleHub/hub_module/modules/image/text_recognition/chinese_ocr_db_crnn_mobile/module.py:
def recognize_text(self,
                   images=[],
                   paths=[],
                   use_gpu=False,
                   output_dir='ocr_result',
                   visualization=False,
                   box_thresh=0.5,
                   text_thresh=0.5):
    """
    Get the chinese texts in the predicted images.
    Args:
        images (list(numpy.ndarray)): images data, shape of each is [H, W, C]. If images not paths
        paths (list[str]): The paths of images. If paths not images
        use_gpu (bool): Whether to use gpu.
        batch_size(int): the program deals once with one
        output_dir (str): The directory to store output images.
        visualization (bool): Whether to save image or not.
        box_thresh(float): the threshold of the detected text box's confidence
        text_thresh(float): the threshold of the recognize chinese texts' confidence
    Returns:
        res (list): The result of chinese texts and save path of images.
    """
The recognize_text method does not actually accept a batch_size parameter, even though its docstring documents one.
If a production deployment cannot batch predictions via batch_size, and a GPU machine cannot use the multiprocessing worker option either, won't a GPU machine end up much slower than a CPU machine?
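As a stopgap, a hypothetical caller-side workaround is to chunk the inputs and call recognize_text once per chunk, using only the signature shown above; the helper name and the chunk size of 20 (mirroring the intended batch_size) are illustrative assumptions.

# Hypothetical caller-side batching sketch: split inputs into chunks and call
# recognize_text() once per chunk, since the method itself has no batch_size arg.
import paddlehub as hub

ocr = hub.Module(name="chinese_ocr_db_crnn_mobile")

def recognize_in_chunks(paths, chunk_size=20, use_gpu=True):
    # chunk_size stands in for the batch_size we wanted to pass via predict_args.
    results = []
    for i in range(0, len(paths), chunk_size):
        chunk = paths[i:i + chunk_size]
        results.extend(ocr.recognize_text(paths=chunk, use_gpu=use_gpu))
    return results

This does not restore true batched inference inside the module; it only bounds how many images each call handles.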