Model-design question: how to handle an NLP input with lod_level=2.
Created by: yuanquan123
If my input has shape [batch_size, sequence_len1, sequence_len2], then after embedding it becomes [batch_size, sequence_len1, sequence_len2, emb_dim]. I want to apply a pooling operation over the sequence_len2 dimension. How should I implement this? Is the approach below correct? In practice it raises an error:

```python
test_input = fluid.layers.data(name='test1', shape=[1, 1], dtype='int64', lod_level=2)
test_input_emb = fluid.layers.embedding(input=test_input, size=[vocab_size, 100])
test_sentence_emb = fluid.layers.sequence_pool(test_input_emb, type='sum')
```