PaddlePaddle / Paddle
Issue #19462
Opened August 27, 2019 by saxon_zh (Guest)

layer.concat reports an error

Created by: dingsiyu

Traceback (most recent call last):
  File "./train_gpu_paddle.py", line 511, in <module>
    train(n_token, cutoffs)
  File "./train_gpu_paddle.py", line 440, in train
    ret = train_exe.run(feed=feed_dict, fetch_list=fetch_list_train)
  File "/home/dingsiyu/bin/anaconda3/lib/python3.6/site-packages/paddle/fluid/parallel_executor.py", line 280, in run
    return_numpy=return_numpy)
  File "/home/dingsiyu/bin/anaconda3/lib/python3.6/site-packages/paddle/fluid/executor.py", line 666, in run
    return_numpy=return_numpy)
  File "/home/dingsiyu/bin/anaconda3/lib/python3.6/site-packages/paddle/fluid/executor.py", line 528, in _run_parallel
    exe.run(fetch_var_names, fetch_var_name)
paddle.fluid.core_avx.EnforceNotMet: Invoke operator concat error.
Python Callstacks:
  File "/home/dingsiyu/bin/anaconda3/lib/python3.6/site-packages/paddle/fluid/framework.py", line 1771, in append_op
    attrs=kwargs.get("attrs", None))
  File "/home/dingsiyu/bin/anaconda3/lib/python3.6/site-packages/paddle/fluid/layer_helper.py", line 43, in append_op
    return self.main_program.current_block().append_op(*args, **kwargs)
  File "/home/dingsiyu/bin/anaconda3/lib/python3.6/site-packages/paddle/fluid/layers/tensor.py", line 210, in concat
    attrs={'axis': axis})
  File "/home/dingsiyu/project/python/transformer-xl-paddlepaddle-lyq2/model.py", line 72, in _cache_mem
    new_mem = layers.concat([prev_mem, curr_out], axis=0)[-mem_len:]
  File "/home/dingsiyu/project/python/transformer-xl-paddlepaddle-lyq2/model.py", line 526, in transformer
    new_mems.append(_cache_mem(enc_input, mems[i], mem_len))
  File "/home/dingsiyu/project/python/transformer-xl-paddlepaddle-lyq2/transformer_xl.py", line 216, in _build_model
    name='encoder'
  File "./train_gpu_paddle.py", line 85, in model_fn
    loss, new_mems = transofrmer_xl._build_model()
  File "./train_gpu_paddle.py", line 105, in single_core_graph
    same_length=same_length)
  File "./train_gpu_paddle.py", line 135, in create_model
    same_length=same_length)
  File "./train_gpu_paddle.py", line 251, in train
    mem_len=FLAGS.mem_len, clamp_len=FLAGS.clamp_len, same_length=FLAGS.same_length)
  File "./train_gpu_paddle.py", line 511, in <module>
    train(n_token, cutoffs)
C++ Callstacks:
Tensor holds the wrong type, it holds ::paddle::platform::float16, but desires to be float at [/paddle/paddle/fluid/framework/tensor_impl.h:30]
PaddlePaddle Call Stacks:
0       0x7f873ca4e890p void paddle::platform::EnforceNotMet::Init<char const*>(char const*, char const*, int) + 352
1       0x7f873ca4ec09p paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int) + 137
2       0x7f873ca55c89p float const* paddle::framework::Tensor::data<float>() const + 233
3       0x7f873cf26052p paddle::operators::ConcatKernel<paddle::platform::CUDADeviceContext, float>::Compute(paddle::framework::ExecutionContext const&) const + 530
4       0x7f873cf26353p std::_Function_handler<void (paddle::framework::ExecutionContext const&), paddle::framework::OpKernelRegistrarFunctor<paddle::platform::CUDAPlace, false, 1ul, paddle::operators::ConcatKernel<paddle::platform::CUDADeviceContext, double>, paddle::operators::ConcatKernel<paddle::platform::CUDADeviceContext, float>, paddle::operators::ConcatKernel<paddle::platform::CUDADeviceContext, long>, paddle::operators::ConcatKernel<paddle::platform::CUDADeviceContext, int> >::operator()(char const*, char const*, int) const::{lambda(paddle::framework::ExecutionContext const&)#1}>::_M_invoke(std::_Any_data const&, paddle::framework::ExecutionContext const&) + 35
5       0x7f873eadcf67p paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, paddle::platform::CUDAPinnedPlace, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> const&, paddle::framework::RuntimeContext*) const + 375
6       0x7f873eadd341p paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, paddle::platform::CUDAPinnedPlace, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> const&) const + 529
7       0x7f873eada93cp paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, paddle::platform::CUDAPinnedPlace, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> const&) + 332
8       0x7f873e8d6129p
9       0x7f873e8c750dp
10      0x7f873e8c8244p paddle::framework::details::OpHandleBase::RunAndRecordEvent(std::function<void ()> const&) + 116
11      0x7f873e8d5dbcp paddle::framework::details::ComputationOpHandle::RunImpl() + 124
12      0x7f873e8c87e0p paddle::framework::details::OpHandleBase::Run(bool) + 160
13      0x7f873e8a9b56p paddle::framework::details::FastThreadedSSAGraphExecutor::RunOpSync(paddle::framework::details::OpHandleBase*) + 310
14      0x7f873e8a87bfp paddle::framework::details::FastThreadedSSAGraphExecutor::RunOp(paddle::framework::details::OpHandleBase*, std::shared_ptr<paddle::framework::BlockingQueue<unsigned long> > const&, unsigned long*) + 47
15      0x7f873e8a8b7fp
16      0x7f873cc818a3p std::_Function_handler<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> (), std::__future_base::_Task_setter<std::unique_ptr<std::__future_base::_Result<void>, std::__future_base::_Result_base::_Deleter>, void> >::_M_invoke(std::_Any_data const&) + 35
17      0x7f873cb185a7p std::__future_base::_State_base::_M_do_set(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>&, bool&) + 39
18      0x7f8777eb1620p pthread_once + 80
19      0x7f873e8a4202p
20      0x7f873cb19b24p ThreadPool::ThreadPool(unsigned long)::{lambda()#1}::operator()() const + 404
21      0x7f8765442421p
22      0x7f8777eac893p
23      0x7f8777bddbfdp clone + 109
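The C++ stack shows `ConcatKernel<..., float>::Compute` calling `Tensor::data<float>()` on a tensor that actually holds float16: the two inputs to `layers.concat` in `_cache_mem` have mismatched dtypes (one of `prev_mem` / `curr_out` is float16 while the kernel expects float32). A common workaround, offered here as a sketch rather than the confirmed fix for this issue, is to cast both inputs to one dtype before concatenating (in Paddle, via `fluid.layers.cast(x, 'float32')`). The NumPy sketch below only illustrates the dtype alignment; the variable names and shapes are assumptions modeled on `_cache_mem` from the traceback.

```python
import numpy as np

# Hypothetical stand-ins for the tensors in _cache_mem (shapes assumed).
prev_mem = np.zeros((4, 2), dtype=np.float16)   # cached memory, stored as float16
curr_out = np.ones((3, 2), dtype=np.float32)    # current layer output, float32
mem_len = 4

# Align dtypes before concatenation, then keep the last mem_len rows,
# mirroring: layers.concat([prev_mem, curr_out], axis=0)[-mem_len:]
new_mem = np.concatenate([prev_mem.astype(np.float32), curr_out], axis=0)[-mem_len:]

print(new_mem.dtype, new_mem.shape)  # → float32 (4, 2)
```

If the model is intentionally trained in float16 (mixed precision), the cast should go the other way: keep both tensors float16 and make sure the concat op is registered for that dtype in the Paddle build being used.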