fluid.layers.add_position_encoding raises an error
Created by: XiaoYangWu
- Title: Calling the fluid.layers.add_position_encoding API raises an error
- Version / environment info: 1) PaddlePaddle version: PaddlePaddle 1.7.2 2) CPU/GPU: CPU 3) Python version: 2.7.18
- Reproduction info: I am building a Transformer network. Constructing the model works fine, but when training data is fed in for training, the add_position_encoding op raises an error and the program exits.
Training is launched with: exe.train_from_dataset(program=fleet.main_program, dataset=dataset, debug=False)
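The report does not include the surrounding model code, so below is a minimal, hypothetical sketch of the call path reconstructed from the Python stack trace further down (model.py, `prepare_encoder`); the data layer and the vocabulary/embedding sizes are my own assumptions, not the original code.

```python
# Minimal sketch of the call path seen in the stack trace below.
# The data layer and the sizes are assumptions, not the reporter's original code.
import paddle.fluid as fluid

src = fluid.layers.data(name="src", shape=[1], dtype="int64", lod_level=1)
emb = fluid.layers.embedding(input=src, size=[30000, 128])  # hypothetical vocab/emb sizes

# The call reported at model.py, line 56 in the trace:
position_emb = fluid.layers.add_position_encoding(input=emb, alpha=1.0, beta=1.0)
```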
Error message:
```
C++ Call Stacks (More useful to developers):
--------------------------------------------
0   std::string paddle::platform::GetTraceBackString<char const*>(char const*&&, char const*, int)
1   paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int)
2   paddle::operators::AddPositionEncodingKernel<paddle::platform::CPUDeviceContext, float>::Compute(paddle::framework::ExecutionContext const&) const
3   std::_Function_handler<void (paddle::framework::ExecutionContext const&), paddle::framework::OpKernelRegistrarFunctor<paddle::platform::CPUPlace, false, 0ul, paddle::operators::AddPositionEncodingKernel<paddle::platform::CPUDeviceContext, float>, paddle::operators::AddPositionEncodingKernel<paddle::platform::CPUDeviceContext, double> >::operator()(char const*, char const*, int) const::{lambda(paddle::framework::ExecutionContext const&)#1}>::_M_invoke(std::_Any_data const&, paddle::framework::ExecutionContext const&)
4   paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&, paddle::framework::RuntimeContext*) const
5   paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&) const
6   paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, paddle::platform::Place const&)
7   paddle::framework::HogwildWorker::TrainFiles()

------------------------------------------
Python Call Stacks (More useful to users):
------------------------------------------
  File "/home/xiaobang/anaconda2/envs/paddle1.7/lib/python2.7/site-packages/paddle/fluid/framework.py", line 2525, in append_op
    attrs=kwargs.get("attrs", None))
  File "/home/xiaobang/anaconda2/envs/paddle1.7/lib/python2.7/site-packages/paddle/fluid/layer_helper.py", line 43, in append_op
    return self.main_program.current_block().append_op(*args, **kwargs)
  File "/home/xiaobang/anaconda2/envs/paddle1.7/lib/python2.7/site-packages/paddle/fluid/layers/nn.py", line 12111, in add_position_encoding
    "beta": beta})
  File "model.py", line 56, in prepare_encoder
    position_emb = fluid.layers.add_position_encoding(input=emb, alpha=1.0, beta=1.0)
  File "model.py", line 252, in transformer
    enc_input = self.prepare_encoder(input_data, params)
  File "model.py", line 297, in net
    q_transformer = self.transformer(q, params, "left")
  File "/home/xiaobang/wuxiaoyang/simnet_transformer/distribute_base.py", line 114, in runtime_main
    self.avg_cost = self.net(self.inputs, params)
  File "model.py", line 371, in <module>
    model.runtime_main(params)

----------------------
Error Message Summary:
----------------------
Error: The input X of Add Position Encoding should be 2-D LoDTensor!
  at (/paddle/paddle/fluid/operators/add_position_encoding_op.h:52)
[operator < add_position_encoding > error]

W0701 16:09:36.175391 16556 init.cc:209] Warning: PaddlePaddle catches a failure signal, it may not work properly
W0701 16:09:36.175431 16556 init.cc:211] You could check whether you killed PaddlePaddle thread/process accidentally or report the case to PaddlePaddle
W0701 16:09:36.175443 16556 init.cc:214] The detail failure signal is:

W0701 16:09:36.175458 16556 init.cc:217] *** Aborted at 1593590976 (unix time) try "date -d @1593590976" if you are using GNU date ***
W0701 16:09:36.178263 16556 init.cc:217] PC: @ 0x0 (unknown)
W0701 16:09:36.178382 16556 init.cc:217] *** SIGABRT (@0x1f400003b7a) received by PID 15226 (TID 0x7f3e92b9d700) from PID 15226; stack trace: ***
W0701 16:09:36.180732 16556 init.cc:217] @ 0x7f3fb10f3500 (unknown)
W0701 16:09:36.182962 16556 init.cc:217] @ 0x7f3fb06f98a5 __GI_raise
W0701 16:09:36.185030 16556 init.cc:217] @ 0x7f3fb06fb085 __GI_abort
W0701 16:09:36.186360 16556 init.cc:217] @ 0x7f3f8ce1f84a __gnu_cxx::__verbose_terminate_handler()
W0701 16:09:36.187531 16556 init.cc:217] @ 0x7f3f8ce1df47 __cxxabiv1::__terminate()
W0701 16:09:36.188748 16556 init.cc:217] @ 0x7f3f8ce1df7d std::terminate()
W0701 16:09:36.189930 16556 init.cc:217] @ 0x7f3f8ce1e15a __cxa_throw
W0701 16:09:36.191725 16556 init.cc:217] @ 0x7f3f7277a6f1 paddle::framework::OperatorBase::Run()
W0701 16:09:36.193459 16556 init.cc:217] @ 0x7f3f70dd383b paddle::framework::HogwildWorker::TrainFiles()
W0701 16:09:36.194747 16556 init.cc:217] @ 0x7f3f8ce3a421 execute_native_thread_routine_compat
W0701 16:09:36.196784 16556 init.cc:217] @ 0x7f3fb10eb851 start_thread
W0701 16:09:36.198724 16556 init.cc:217] @ 0x7f3fb07ae67d clone
W0701 16:09:36.200704 16556 init.cc:217] @ 0x0 (unknown)
```
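For context on the error summary: in the Paddle 1.x documentation, add_position_encoding is described as accepting either a 3-D dense Tensor of shape [batch, seq_len, dim] or a 2-D LoDTensor of shape [sum(seq_len), dim] whose sequences are described by LoD; the check at add_position_encoding_op.h:52 fires when the input carries LoD but is not 2-D. The sketch below is my own illustration of the two accepted layouts (all names, sizes, and random data are illustrative, not taken from the report).

```python
# Sketch of the two input layouts add_position_encoding accepts; sizes are illustrative.
import numpy as np
import paddle.fluid as fluid

place = fluid.CPUPlace()
exe = fluid.Executor(place)

# Layout 1: 2-D LoDTensor, shape [sum(seq_len), emb_dim], sequences described by LoD.
x_lod = fluid.layers.data(name="x_lod", shape=[128], dtype="float32", lod_level=1)
out_lod = fluid.layers.add_position_encoding(input=x_lod, alpha=1.0, beta=1.0)

# Layout 2: 3-D dense Tensor, shape [batch, seq_len, emb_dim], no LoD.
x_dense = fluid.layers.data(name="x_dense", shape=[16, 128], dtype="float32")
out_dense = fluid.layers.add_position_encoding(input=x_dense, alpha=1.0, beta=1.0)

exe.run(fluid.default_startup_program())

# Two sequences of length 4 and 2 packed into a 6 x 128 LoDTensor.
lod_input = fluid.create_lod_tensor(
    np.random.rand(6, 128).astype("float32"), [[4, 2]], place)
dense_input = np.random.rand(2, 16, 128).astype("float32")

exe.run(fluid.default_main_program(),
        feed={"x_lod": lod_input, "x_dense": dense_input},
        fetch_list=[out_lod, out_dense],
        return_numpy=False)  # outputs keep LoD, so fetch them as LoDTensors
```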