PaddlePaddle / models · Issue #3724

Closed · Opened October 23, 2019 by saxon_zh (Guest)

Paddle cannot run inference in a child process

Created by: mozpp

I want to start several processes, where one process first initializes the model and then keeps running inference. I found that Paddle conflicts with Python's multiprocessing module:

import multiprocessing as mp

import cv2

# `pd` is the module that provides head_detect_cls (defined below);
# `ct` provides the tic_toc_cls timing helper.

class exec_alg_process(mp.Process):
    def __init__(self, camera_rtsp):
        mp.Process.__init__(self)

        self.camera_rtsp = camera_rtsp

        # Initialize the head-detection algorithm instance in the parent process.
        self.instance_head_detect \
            = pd.head_detect_cls(model_path='person_detect_mod/model/PHS3_frz_model')

        print("instance_head_detect ok")

    def run(self):
        record_index = 0
        self.video_capture = cv2.VideoCapture(self.camera_rtsp)

        while True:
            instance_tic_toc = ct.tic_toc_cls("head_detect")
            instance_tic_toc.tic()

            ret, frame = self.video_capture.read()
            if not ret:
                # print("video_capture.read fail")
                # break
                continue

            # Run head detection on the current frame.
            # head_detect_result_list = np.zeros(0, dtype=int)
            head_detect_result_list = self.instance_head_detect(frame)
import cv2
import numpy as np
# Paddle 1.x inference API; adjust the import path to match your Paddle version.
from paddle.fluid.core import (AnalysisConfig, PaddleBuf, PaddleDType,
                               PaddleTensor, create_paddle_predictor)

def load_predictor(model_path):
    config = AnalysisConfig(model_path)
    print('gpu id: ', config.gpu_device_id())

    # Enable GPU: 1000 MB initial memory pool on device 0.
    config.enable_use_gpu(1000, 0)

    # config.disable_gpu()
    # config.switch_ir_optim(True)  # enable IR optimization
    # config.enable_mkldnn()        # enable MKLDNN
    # Create the PaddlePredictor.
    predictor = create_paddle_predictor(config)
    return predictor

def pre_process(ori_im, batch_size):
    # Build the "image" input tensor.
    image = PaddleTensor()
    image.name = "image"
    image.dtype = PaddleDType.FLOAT32

    img = ori_im
    h, w, _ = img.shape
    input_size = 320
    image.shape = [batch_size, 3, input_size, input_size]
    # Per-channel pixel mean and std values (ImageNet statistics).
    pixel_means = [0.485, 0.456, 0.406]
    pixel_stds = [0.229, 0.224, 0.225]

    img = img_reader1(img, input_size, pixel_means, pixel_stds)
    img = img.reshape(image.shape)
    image.data = PaddleBuf(img.flatten().astype("float32"))

    # Build the "im_shape" input tensor holding the original image size.
    im_shape = PaddleTensor()
    im_shape.name = "im_shape"
    im_shape.dtype = PaddleDType.INT32
    im_shape.shape = [batch_size, 2]
    im_shape.data = PaddleBuf(
        np.array([h, w]).flatten().astype("int32"))

    print(im_shape.data.int32_data())
    return [image, im_shape]

def img_reader1(im, size, mean, std):
    # Convert BGR->RGB, resize to size x size, normalize, and transpose to CHW.
    h, w, _ = im.shape
    im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
    im_scale_x = size / float(w)
    im_scale_y = size / float(h)
    out_img = cv2.resize(im, None, None,
                         fx=im_scale_x, fy=im_scale_y,
                         interpolation=cv2.INTER_CUBIC)
    mean = np.array(mean).reshape((1, 1, -1))
    std = np.array(std).reshape((1, 1, -1))
    out_img = (out_img / 255.0 - mean) / std
    out_img = out_img.transpose((2, 0, 1))
    return out_img

class head_detect_cls:
    def __init__(self,
                 model_path='/project/LFFD-Deepsort/face_detection/head_detect_mod/model/PHS3_frz_model',
                 det_thresh=0.05):
        self.predictor = load_predictor(model_path)
        self.det_thresh = det_thresh

    def __call__(self, frame):
        print('start call')
        frame_ = pre_process(frame, 1)
        outputs = self.predictor.run(frame_)
        output = outputs[0]
        output_data = output.data.float_data()

        b = np.array(output_data)
        if b.shape[0] < 6:
            print("No object found.")
            return

        # Each detection row is [class_id, score, xmin, ymin, xmax, ymax].
        bboxes = b.reshape([-1, 6])
        bboxes = bboxes[bboxes[:, 1] > self.det_thresh]

        cls_ids = bboxes[:, 0].astype('int32')
        cls_conf = bboxes[:, 1].astype('float32')
        boxes = bboxes[:, 2:].astype('float32')

        # Convert [xmin, ymin, xmax, ymax] to a width/height representation.
        bbox_xcycwh = boxes
        bbox_xcycwh[:, 2] = (boxes[:, 2] - boxes[:, 0])
        bbox_xcycwh[:, 3] = (boxes[:, 3] - boxes[:, 1])

        if bbox_xcycwh is not None:
            # Keep only the person class (class id 0).
            mask = cls_ids == 0
            bbox_xcycwh = bbox_xcycwh[mask]

        if bboxes.shape[0] == 0:  # nothing left after thresholding
            print("No object found after det_thresh.")
            return

        return bbox_xcycwh

TypeError: can't pickle paddle.fluid.core_avx.AnalysisPredictor objects
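
The traceback indicates that multiprocessing has to pickle the exec_alg_process instance when the child is started, and the AnalysisPredictor stored on it cannot be serialized. A minimal workaround sketch, assuming the detector is only ever used inside the child, is to keep just picklable configuration (the RTSP URL and the model path) on the process object and create head_detect_cls inside run(), after the child process has started. The module name person_detect_mod used for the import below is a hypothetical stand-in for whatever module actually defines head_detect_cls above.

import multiprocessing as mp

import cv2

# Hypothetical import: replace with the module that defines head_detect_cls.
import person_detect_mod as pd


class exec_alg_process(mp.Process):
    def __init__(self, camera_rtsp, model_path):
        mp.Process.__init__(self)
        # Only plain, picklable data lives on the object in the parent process.
        self.camera_rtsp = camera_rtsp
        self.model_path = model_path

    def run(self):
        # Build the predictor here, inside the child, so the non-picklable
        # AnalysisPredictor never has to cross the process boundary.
        instance_head_detect = pd.head_detect_cls(model_path=self.model_path)
        video_capture = cv2.VideoCapture(self.camera_rtsp)

        while True:
            ret, frame = video_capture.read()
            if not ret:
                continue
            head_detect_result_list = instance_head_detect(frame)

With this layout the parent only passes strings to the child, and the GPU context is created entirely inside the subprocess, which also sidesteps problems with an inherited GPU context when the default fork start method is used.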
