Opened April 1, 2020 by saxon_zh (Guest)

gaussian_random_batch_size_like: adding variable-length data fails

Created by: OleNet

import sys
import argparse

import math
import numpy as np

import paddle
import paddle.fluid as fluid


mp, sp = fluid.Program(), fluid.Program()
with fluid.program_guard(mp, sp):
    # x1 is declared with a fixed maximum sequence length of 10.
    x1 = fluid.layers.data(name='x1', shape=[-1, 10, 20], dtype='int')
    # g copies only the batch size from x1 at run time; the other dims come
    # from the static shape x1.shape = (-1, 10, 20).
    g = fluid.layers.gaussian_random_batch_size_like(x1, shape=x1.shape, mean=1.0, std=2.0)
    x2 = g + x1

place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(sp)

# The fed batch has an actual sequence length of 8, shorter than the declared 10.
x = np.random.randint(0, 10, size=[3, 8, 20])

print(exe.run(mp,
              feed={x1.name: x},
              fetch_list=[g])[0].shape)


exe.run(mp,
        feed={x1.name: x},
        fetch_list=[g])

Error message:

---------------------------------------------------------------------------
EnforceNotMet                             Traceback (most recent call last)
<ipython-input-3-0251770a8bbf> in <module>
     23 print(exe.run(mp, 
     24             feed={x1.name:x},
---> 25             fetch_list=[g])[0].shape)
     26 
     27 

~/anaconda3/lib/python3.7/site-packages/paddle/fluid/executor.py in run(self, program, feed, fetch_list, feed_var_name, fetch_var_name, scope, return_numpy, use_program_cache)
    778                 warnings.warn(
    779                     "The following exception is not an EOF exception.")
--> 780             six.reraise(*sys.exc_info())
    781 
    782     def _run_impl(self, program, feed, fetch_list, feed_var_name,

~/anaconda3/lib/python3.7/site-packages/six.py in reraise(tp, value, tb)
    691             if value.__traceback__ is not tb:
    692                 raise value.with_traceback(tb)
--> 693             raise value
    694         finally:
    695             value = None

~/anaconda3/lib/python3.7/site-packages/paddle/fluid/executor.py in run(self, program, feed, fetch_list, feed_var_name, fetch_var_name, scope, return_numpy, use_program_cache)
    773                 scope=scope,
    774                 return_numpy=return_numpy,
--> 775                 use_program_cache=use_program_cache)
    776         except Exception as e:
    777             if not isinstance(e, core.EOFException):

~/anaconda3/lib/python3.7/site-packages/paddle/fluid/executor.py in _run_impl(self, program, feed, fetch_list, feed_var_name, fetch_var_name, scope, return_numpy, use_program_cache)
    820                 scope=scope,
    821                 return_numpy=return_numpy,
--> 822                 use_program_cache=use_program_cache)
    823 
    824         program._compile(scope, self.place)

~/anaconda3/lib/python3.7/site-packages/paddle/fluid/executor.py in _run_program(self, program, feed, fetch_list, feed_var_name, fetch_var_name, scope, return_numpy, use_program_cache)
    897         if not use_program_cache:
    898             self._default_executor.run(program.desc, scope, 0, True, True,
--> 899                                        fetch_var_name)
    900         else:
    901             self._default_executor.run_cached_prepared_ctx(ctx, scope, False,

EnforceNotMet: 

--------------------------------------------
C++ Call Stacks (More useful to developers):
--------------------------------------------
0   std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > paddle::platform::GetTraceBackString<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, char const*, int)
1   paddle::operators::get_mid_dims(paddle::framework::DDim const&, paddle::framework::DDim const&, int, int*, int*, int*, int*)
2   void paddle::operators::ElementwiseComputeEx<paddle::operators::AddFunctor<float, void>, paddle::platform::CPUDeviceContext, float, float>(paddle::framework::ExecutionContext const&, paddle::framework::Tensor const*, paddle::framework::Tensor const*, int, paddle::operators::AddFunctor<float, void>, paddle::framework::Tensor*)
3   paddle::operators::ElementwiseAddKernel<paddle::platform::CPUDeviceContext, float>::Compute(paddle::framework::ExecutionContext const&) const
4   std::__1::__function::__func<paddle::framework::OpKernelRegistrarFunctor<paddle::platform::CPUPlace, false, 0ul, paddle::operators::ElementwiseAddKernel<paddle::platform::CPUDeviceContext, float>, paddle::operators::ElementwiseAddKernel<paddle::platform::CPUDeviceContext, double>, paddle::operators::ElementwiseAddKernel<paddle::platform::CPUDeviceContext, int>, paddle::operators::ElementwiseAddKernel<paddle::platform::CPUDeviceContext, long long> >::operator()(char const*, char const*, int) const::'lambda'(paddle::framework::ExecutionContext const&), std::__1::allocator<paddle::framework::OpKernelRegistrarFunctor<paddle::platform::CPUPlace, false, 0ul, paddle::operators::ElementwiseAddKernel<paddle::platform::CPUDeviceContext, float>, paddle::operators::ElementwiseAddKernel<paddle::platform::CPUDeviceContext, double>, paddle::operators::ElementwiseAddKernel<paddle::platform::CPUDeviceContext, int>, paddle::operators::ElementwiseAddKernel<paddle::platform::CPUDeviceContext, long long> >::operator()(char const*, char const*, int) const::'lambda'(paddle::framework::ExecutionContext const&)>, void (paddle::framework::ExecutionContext const&)>::operator()(paddle::framework::ExecutionContext const&)
5   paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&, paddle::framework::RuntimeContext*) const
6   paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&) const
7   paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, paddle::platform::Place const&)
8   paddle::framework::Executor::RunPreparedContext(paddle::framework::ExecutorPrepareContext*, paddle::framework::Scope*, bool, bool, bool)
9   paddle::framework::Executor::Run(paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, bool)
10  void pybind11::cpp_function::initialize<paddle::pybind::pybind11_init_core_avx(pybind11::module&)::$_103, void, paddle::framework::Executor&, paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, pybind11::name, pybind11::is_method, pybind11::sibling>(paddle::pybind::pybind11_init_core_avx(pybind11::module&)::$_103&&, void (*)(paddle::framework::Executor&, paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&)::'lambda'(pybind11::detail::function_call&)::__invoke(pybind11::detail::function_call&)
11  pybind11::cpp_function::dispatcher(_object*, _object*, _object*)

------------------------------------------
Python Call Stacks (More useful to users):
------------------------------------------
  File "/Users/liujiaxiang/anaconda3/lib/python3.7/site-packages/paddle/fluid/framework.py", line 2488, in append_op
    attrs=kwargs.get("attrs", None))
  File "/Users/liujiaxiang/anaconda3/lib/python3.7/site-packages/paddle/fluid/layers/math_op_patch.py", line 239, in __impl__
    attrs={'axis': axis})
  File "<ipython-input-3-0251770a8bbf>", line 15, in <module>
    x2 = g + x1
  File "/Users/liujiaxiang/anaconda3/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3325, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "/Users/liujiaxiang/anaconda3/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3248, in run_ast_nodes
    if (await self.run_code(code, result,  async_=asy)):
  File "/Users/liujiaxiang/anaconda3/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3057, in run_cell_async
    interactivity=interactivity, compiler=compiler, result=result)
  File "/Users/liujiaxiang/anaconda3/lib/python3.7/site-packages/IPython/core/async_helpers.py", line 68, in _pseudo_sync_runner
    coro.send(None)
  File "/Users/liujiaxiang/anaconda3/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 2880, in _run_cell
    return runner(coro)
  File "/Users/liujiaxiang/anaconda3/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 2854, in run_cell
    raw_cell, store_history, silent, shell_futures)
  File "/Users/liujiaxiang/anaconda3/lib/python3.7/site-packages/ipykernel/zmqshell.py", line 536, in run_cell
    return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
  File "/Users/liujiaxiang/anaconda3/lib/python3.7/site-packages/ipykernel/ipkernel.py", line 294, in do_execute
    res = shell.run_cell(code, store_history=store_history, silent=silent)
  File "/Users/liujiaxiang/anaconda3/lib/python3.7/site-packages/tornado/gen.py", line 209, in wrapper
    yielded = next(result)
  File "/Users/liujiaxiang/anaconda3/lib/python3.7/site-packages/ipykernel/kernelbase.py", line 542, in execute_request
    user_expressions, allow_stdin,
  File "/Users/liujiaxiang/anaconda3/lib/python3.7/site-packages/tornado/gen.py", line 209, in wrapper
    yielded = next(result)
  File "/Users/liujiaxiang/anaconda3/lib/python3.7/site-packages/ipykernel/kernelbase.py", line 272, in dispatch_shell
    yield gen.maybe_future(handler(stream, idents, msg))
  File "/Users/liujiaxiang/anaconda3/lib/python3.7/site-packages/tornado/gen.py", line 209, in wrapper
    yielded = next(result)
  File "/Users/liujiaxiang/anaconda3/lib/python3.7/site-packages/ipykernel/kernelbase.py", line 365, in process_one
    yield gen.maybe_future(dispatch(*args))
  File "/Users/liujiaxiang/anaconda3/lib/python3.7/site-packages/tornado/gen.py", line 748, in run
    yielded = self.gen.send(value)
  File "/Users/liujiaxiang/anaconda3/lib/python3.7/site-packages/tornado/gen.py", line 787, in inner
    self.run()
  File "/Users/liujiaxiang/anaconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 743, in _run_callback
    ret = callback()
  File "/Users/liujiaxiang/anaconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 690, in <lambda>
    lambda f: self._run_callback(functools.partial(callback, future))
  File "/Users/liujiaxiang/anaconda3/lib/python3.7/asyncio/events.py", line 88, in _run
    self._context.run(self._callback, *self._args)
  File "/Users/liujiaxiang/anaconda3/lib/python3.7/asyncio/base_events.py", line 1775, in _run_once
    handle._run()
  File "/Users/liujiaxiang/anaconda3/lib/python3.7/asyncio/base_events.py", line 539, in run_forever
    self._run_once()
  File "/Users/liujiaxiang/anaconda3/lib/python3.7/site-packages/tornado/platform/asyncio.py", line 148, in start
    self.asyncio_loop.run_forever()
  File "/Users/liujiaxiang/anaconda3/lib/python3.7/site-packages/ipykernel/kernelapp.py", line 505, in start
    self.io_loop.start()
  File "/Users/liujiaxiang/anaconda3/lib/python3.7/site-packages/traitlets/config/application.py", line 658, in launch_instance
    app.start()
  File "/Users/liujiaxiang/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py", line 16, in <module>
    app.launch_new_instance()
  File "/Users/liujiaxiang/anaconda3/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/Users/liujiaxiang/anaconda3/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)

----------------------
Error Message Summary:
----------------------
Error: ShapeError: broadcast dimension mismatch. Operands could not be broadcast together with the shape of X = [3, 10, 20] and the shape of Y = [3, 8, 20]. Received [10] in X is not equal to [8] in Y
  [Hint: Expected y_dims[i] == 1, but received y_dims[i]:8 != 1:1.] at (/home/teamcity/work/ef54dc8a5b211854/paddle/fluid/operators/elementwise/elementwise_op_function.h:79)
  [operator < elementwise_add > error]
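As the ShapeError above indicates, the mismatch arises because shape=x1.shape hands the static shape (-1, 10, 20) to gaussian_random_batch_size_like: the op only takes the batch dimension from the fed tensor at run time, so g keeps the declared length 10 while the fed x1 arrives with length 8. A quick check of the static shapes (a minimal sketch appended to the reproduction above) makes this visible:

print(x1.shape)  # (-1, 10, 20): middle dim fixed at the declared max length 10
print(g.shape)   # (-1, 10, 20): only the batch dim is replaced from x1 at run time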

Here I am simulating an NLP embedding: x1 = fluid.layers.data(name='x1', shape=[-1, 10, 20], dtype='int'), i.e. x1 is defined with a maximum sentence length of 10. In practice, however, the input sentence may be shorter than 10, say 8, and then x2 = g + x1 fails. How should this case be handled?
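One possible workaround, sketched below under the assumption that the input can be cast to float32 and that the noise does not have to be generated inside the graph: draw the Gaussian noise in numpy at the tensor's actual runtime shape and feed it through a second data layer, so both operands of the add always match. This is not an official fix; the layer name 'noise' and the variables n and out are only for illustration.

import numpy as np
import paddle.fluid as fluid

mp, sp = fluid.Program(), fluid.Program()
with fluid.program_guard(mp, sp):
    x1 = fluid.layers.data(name='x1', shape=[-1, 10, 20], dtype='float32')
    # Feed the noise instead of generating it in-graph, so its shape always
    # matches the actual (variable-length) batch.
    noise = fluid.layers.data(name='noise', shape=[-1, 10, 20], dtype='float32')
    x2 = x1 + noise

place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(sp)

# Actual sequence length is 8, shorter than the declared maximum of 10.
x = np.random.randint(0, 10, size=[3, 8, 20]).astype('float32')
# Noise drawn at the real runtime shape, with mean=1.0 and std=2.0 as in the report.
n = np.random.normal(loc=1.0, scale=2.0, size=x.shape).astype('float32')

out = exe.run(mp, feed={'x1': x, 'noise': n}, fetch_list=[x2])[0]
print(out.shape)  # (3, 8, 20)

Alternatively, padding every batch up to the declared maximum length of 10 before feeding keeps the in-graph gaussian_random_batch_size_like call usable, at the cost of also adding noise to the padded positions.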

Issue: paddlepaddle/Paddle#23373