Error when adding a similarity-computation layer to an existing network
Created by: youan1
In the Paddle config file I added a cosine-similarity computation between two slots:
emba_b_cos_sim = paddle.layer.cos_sim(name="emba_b_cos_sim",
                                      a=emba_pooling,
                                      b=embb_pooling,
                                      scale=1)
concat_layer = paddle.layer.concat(name="concat_layer", input=[dense_data_input, emba_b_cos_sim])
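As a sanity check on the widths involved, here is a minimal NumPy sketch (not the Paddle API; all values are toy data) of what cos_sim followed by concat computes per sample:

```python
import numpy as np

def cos_sim(a, b, scale=1.0):
    # Cosine similarity of two vectors, scaled; yields a single scalar.
    return scale * np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

emba_pooling = np.array([1.0, 2.0, 3.0])   # pooled embedding a (toy values)
embb_pooling = np.array([2.0, 4.0, 6.0])   # pooled embedding b (parallel to a)
dense_data_input = np.array([0.5, 0.25])   # toy dense features, width 2

sim = cos_sim(emba_pooling, embb_pooling)  # a single scalar, so it adds width 1
concat = np.concatenate([dense_data_input, [sim]])
print(sim)           # ~1.0 for parallel vectors
print(concat.shape)  # dense width + 1
```

This is consistent with treating emba_b_cos_sim as a size-1 slot: concatenating it widens the input by exactly one column.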
Before emba_b_cos_sim was added, the config was:
concat_layer = paddle.layer.concat(name="concat_layer", input=[dense_data_input])
dropout_layer0 = paddle.layer.dropout(input=concat_layer, dropout_rate=0.2)
concat_layer2 = paddle.layer.fc(name="concat_layer2", input=[dropout_layer0], size=(dense_data_input_size + 1))
The hidden layers are attached after concat_layer2. The only changes were adding emba_b_cos_sim to the input of concat_layer, and adding 1 to the size of concat_layer2;
I assume the output size of emba_b_cos_sim is 1.
The original config runs fine.
But after feeding emba_b_cos_sim into the concat layer as an input to the hidden layers above, local debugging fails with the error below. Could someone from the Paddle team take a look at what the problem is?
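The fatal check below compares a dense parameter vector's size against its source (1179396 vs. 1172889). One possible source of such a mismatch: the fc weight matrix of concat_layer2 grows in both dimensions when the concat width and the fc size each grow by 1, so parameters saved or initialized with the old layout could no longer line up. A rough sketch of the bookkeeping, using a hypothetical input width (the real dense_data_input_size is not stated in this post):

```python
# Parameter count of a fully connected layer: weight matrix plus bias.
def fc_param_count(input_width, size):
    return input_width * size + size

dense_w = 1000  # hypothetical width of dense_data_input, not the real value

# Before the change: concat width = dense_w, fc size = dense_w (assumed).
old = fc_param_count(dense_w, dense_w)
# After: concat width = dense_w + 1 (cos_sim adds one column), fc size = dense_w + 1.
new = fc_param_count(dense_w + 1, dense_w + 1)

print(old, new, new - old)  # the fc parameter block grows by 2 * dense_w + 2
```

If the old and new parameter layouts are mixed anywhere (e.g. resuming from a checkpoint of the old network), a size check like the one in the trace would be expected to fire.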
I1016 17:01:59.181259 5025 Util.cpp:166] commandline: --use_gpu=False
I1016 17:07:13.598457 5025 GradientMachine.cpp:85] Initing parameters..
I1016 17:07:59.232594 5025 GradientMachine.cpp:92] Init parameters done.
F1016 17:08:19.681207 18412 Vector.h:132] Check failed: start + size <= src.size_ (1179396 vs. 1172889)
*** Check failure stack trace: ***
@ 0x7f80d010ae6d google::LogMessage::Fail()
@ 0x7f80d010e91c google::LogMessage::SendToLog()
@ 0x7f80d010a993 google::LogMessage::Flush()
@ 0x7f80d010fe2e google::LogMessageFatal::~LogMessageFatal()
@ 0x7f80d094f858 paddle::SgdThreadUpdater::threadUpdateDense()
@ 0x7f80d095053f _ZNSt17_Function_handlerIFvimEZN6paddle16SgdThreadUpdater11finishBatchEfEUlimE_E9_M_invokeERKSt9_Any_dataim
@ 0x7f80d0081a0c _ZNSt6thread5_ImplISt12_Bind_simpleIFZN6paddle14SyncThreadPool5startEvEUliE_mEEE6_M_runEv
@ 0x7f80cf8df8a0 execute_native_thread_routine
@ 0x7f80d9b801c3 start_thread
@ 0x7f80d91a812d __clone
@ (nil) (unknown)
Aborted (core dumped)