Opened August 31, 2018 by saxon_zh (Guest)

Bug in parallel Inference in Python Fluid v0.14.0

Created by: yangperasd

I found a bug when running inference in parallel. The error message:

```
TypeError: run() got multiple values for keyword argument 'fetch_list'
```

My environment: paddlepaddle 0.14.0, Ubuntu 14.04, Python 2.7.

Some code to reproduce this bug: the main code is borrowed from the recognize_digits example in the PaddlePaddle book, except for line 133 (https://github.com/PaddlePaddle/book/blob/2b81d844673c1ba09fd596d70492375f2998ad36/02.recognize_digits/train.py#L133), which should be replaced so that the Inferencer is constructed with `place=place, parallel=True)`, as shown below.
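Concretely, the modified construction looks like this (a sketch using the book's variable names, which are assumptions here):

```python
# Line 133 of the book's train.py, with parallel=True added:
inferencer = fluid.Inferencer(
    infer_func=inference_program,
    param_path=params_dirname,
    place=place,
    parallel=True)

# A subsequent inferencer.infer(...) call then raises the TypeError above.
```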

The root cause is the difference between the Executor and ParallelExecutor APIs. The signature of Executor.run is:

```
run(program=None, feed=None, fetch_list=None, feed_var_name='feed', fetch_var_name='fetch', scope=None, return_numpy=True, use_program_cache=False)
```

The signature of ParallelExecutor.run is:

```
run(fetch_list, feed=None, feed_dict=None, return_numpy=True)
```

Here is the code that causes the bug: https://github.com/PaddlePaddle/Paddle/blob/c709a04ae22cd848f50a11737bdb3af24eabbe1e/python/paddle/fluid/inferencer.py#L102-L105. When parallel=True is set, the Inferencer uses a ParallelExecutor instead of an Executor for inference. According to the ParallelExecutor API, the first positional argument of run is fetch_list, so the program passed positionally at #L102-L105 binds to fetch_list, and the explicit fetch_list= keyword then assigns it a second value.
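The collision is easy to see in isolation. A minimal sketch with a stub that mirrors the ParallelExecutor.run signature above (everything else is a stand-in):

```python
# Stub with the same parameter order as ParallelExecutor.run:
def run(fetch_list, feed=None, feed_dict=None, return_numpy=True):
    pass

program = object()  # stand-in for self.inference_program

# The buggy call site passes the program positionally; it binds to
# fetch_list, so the explicit keyword supplies a second value:
run(program, feed={}, fetch_list=['y_predict'], return_numpy=True)
# TypeError: run() got multiple values for keyword argument 'fetch_list'
```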

Some code to fix this bug:

```python
# Patch against python/paddle/fluid/inferencer.py (v0.14.0). Besides
# contextlib, the names used below (core, executor, framework, io,
# parallel_executor, unique_name, check_and_get_place) come from that
# module's existing imports.
import contextlib


class Inferencer(object):
    """
    Inferencer High Level API.

    Args:
        infer_func (Python func): Infer function that will return predict Variable
        param_path (str): The path where the inference model is saved by fluid.io.save_params
        place (Place): place to do the inference
        parallel (bool): use parallel_executor to run the inference, it will use multi CPU/GPU.

    Examples:
        .. code-block:: python

            def inference_program():
                x = fluid.layers.data(name='x', shape=[13], dtype='float32')
                y_predict = fluid.layers.fc(input=x, size=1, act=None)
                return y_predict

            place = fluid.CPUPlace()
            inferencer = fluid.Inferencer(
                infer_func=inference_program, param_path="/tmp/model", place=place)
    """

    def __init__(self, infer_func, param_path, place=None, parallel=False):
        self.param_path = param_path
        self.scope = core.Scope()
        self.parallel = parallel
        self.place = check_and_get_place(place)

        self.inference_program = framework.Program()
        with framework.program_guard(self.inference_program):
            with unique_name.guard():
                self.predict_var = infer_func()

        with self._prog_and_scope_guard():
            # load params from param_path into scope
            io.load_params(executor.Executor(self.place), param_path)

        self.inference_program = self.inference_program.clone(for_test=True)

        if parallel:
            with self._prog_and_scope_guard():
                self.exe = parallel_executor.ParallelExecutor(
                    use_cuda=isinstance(self.place, core.CUDAPlace),
                    main_program=self.inference_program)
        else:
            self.exe = executor.Executor(self.place)

    def infer(self, inputs, return_numpy=True):
        """
        Do Inference for Inputs

        Args:
            inputs (map): a map of {"input_name": input_var} that will be fed into the inference program
            return_numpy (bool): transform return value into numpy or not

        Returns:
            Tensor or Numpy: the predict value of the inference model for the inputs

        Examples:
            .. code-block:: python

                tensor_x = numpy.random.uniform(0, 10, [batch_size, 13]).astype("float32")
                results = inferencer.infer({'x': tensor_x})
        """
        if not isinstance(inputs, dict):
            raise ValueError(
                "inputs should be a map of {'input_name': input_var}")

        with executor.scope_guard(self.scope):
            if self.parallel:
                # ParallelExecutor.run has no program parameter and takes
                # fetch_list first, so pass only keyword arguments here and
                # fetch the variable by name.
                results = self.exe.run(
                    feed=inputs,
                    fetch_list=[self.predict_var.name],
                    return_numpy=return_numpy)
            else:
                results = self.exe.run(self.inference_program,
                                       feed=inputs,
                                       fetch_list=[self.predict_var],
                                       return_numpy=return_numpy)
        return results

    @contextlib.contextmanager
    def _prog_and_scope_guard(self):
        with framework.program_guard(main_program=self.inference_program):
            with executor.scope_guard(self.scope):
                yield
```
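With this fix, inference works the same way in both modes. A hedged usage sketch, continuing the reproduction above (the 'img' key and input shape follow the book's recognize_digits example and are assumptions here):

```python
import numpy

# With the patched class, the parallel=True Inferencer from the
# reproduction above no longer hits the TypeError:
tensor_img = numpy.random.uniform(-1.0, 1.0, [1, 1, 28, 28]).astype("float32")
results = inferencer.infer({'img': tensor_img})

# infer() dispatches on self.parallel, so the same call also works
# when the Inferencer wraps a plain Executor (parallel=False).
```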

Reference: paddlepaddle/Paddle#13117