The last dimension of the LSTM output rnn_out is wrong: the documentation says it is hidden_size, but the actual output is input_size
Created by: WeiyueSu
```python
import paddle.fluid as fluid
from paddle.fluid import layers  # needed: the snippet below uses the bare `layers` alias

emb_dim = 256
vocab_size = 10000
data = fluid.layers.data(name='x', shape=[-1, 100, 1], dtype='int32')
emb = fluid.layers.embedding(input=data, size=[vocab_size, emb_dim], is_sparse=True)

batch_size = 20
max_len = 100
dropout_prob = 0.2
input_size = 100
hidden_size = 150
num_layers = 1

# initial hidden and cell states, shaped [num_layers, batch_size, hidden_size]
init_h = layers.fill_constant([num_layers, batch_size, hidden_size], 'float32', 0.0)
init_c = layers.fill_constant([num_layers, batch_size, hidden_size], 'float32', 0.0)

rnn_out, last_h, last_c = layers.lstm(emb, init_h, init_c, max_len, hidden_size,
                                      num_layers, dropout_prob=dropout_prob)
```
The snippet above is the lstm example from paddlepaddle.org. According to the documentation, the last dimension of rnn_out should be hidden_size (150), but running it actually yields emb_dim (256). This is clearly wrong; the reasonable output dimension would be 150.
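For reference, the discrepancy can be seen without running an executor, since fluid Variables expose their static shape. A minimal sketch, assuming the reproduction script above has been executed in the same session:

```python
# Static shapes can be read off the returned Variables before any executor run.
# Per the documentation, rnn_out's last dimension should equal hidden_size (150).
print(rnn_out.shape)  # reported behaviour: last dim comes out as emb_dim (256) instead
print(last_h.shape)   # expected: [num_layers, batch_size, hidden_size]
print(last_c.shape)   # expected: [num_layers, batch_size, hidden_size]
```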