Unverified · Commit 53dd140d authored by Xing Wu, committed by GitHub

fix lstm doc error (#1943)

* fix lstm doc error

* fix lstm doc error

* remove unused line in lstm doc
Parent 54db6cf0
@@ -38,7 +38,7 @@ LSTMCell

     .. code-block:: python

         import paddle.fluid.layers as layers
-        cell = layers.rnn.LSTMCell(hidden_size=256)
+        cell = layers.LSTMCell(hidden_size=256)

 .. py:method:: call(inputs, states)

@@ -61,4 +61,4 @@ The :code:`state_shape` of LSTMCell is a list of two shapes: :math:`[[...

 Returns: the :code:`state_shape` of LSTMCell

 Return type: list
\ No newline at end of file
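To illustrate the `call(inputs, states)` contract that this doc describes, below is a minimal NumPy sketch of one LSTM cell step. This is not Paddle's implementation; the function name, weight layout, and gate ordering are assumptions chosen for illustration. The cell consumes an input batch and an `(h, c)` state pair and returns the step output together with the new states.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell_step(inputs, states, W, U, b):
    """One LSTM cell step (illustrative, not Paddle's code).

    inputs: [batch, input_size]; states: (h, c), each [batch, hidden_size].
    W: [input_size, 4*hidden_size]; U: [hidden_size, 4*hidden_size].
    Returns (output, new_states), with output == new hidden state.
    """
    h, c = states
    gates = inputs @ W + h @ U + b              # [batch, 4*hidden_size]
    i, f, g, o = np.split(gates, 4, axis=1)     # input/forget/cell/output gates
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, (h_new, c_new)

batch, input_size, hidden_size = 4, 8, 256
rng = np.random.default_rng(0)
W = rng.standard_normal((input_size, 4 * hidden_size)) * 0.1
U = rng.standard_normal((hidden_size, 4 * hidden_size)) * 0.1
b = np.zeros(4 * hidden_size)
x = rng.standard_normal((batch, input_size))
h0 = np.zeros((batch, hidden_size))
c0 = np.zeros((batch, hidden_size))

out, (h1, c1) = lstm_cell_step(x, (h0, c0), W, U, b)
print(out.shape, h1.shape, c1.shape)  # (4, 256) (4, 256) (4, 256)
```

Note how the state is a pair of same-shaped arrays, which is why `state_shape` is a two-element list.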
@@ -57,7 +57,7 @@ lstm

 Returns: a tuple of three Tensors produced by the lstm op, including:

-    - rnn_out: Tensor holding the LSTM hidden-layer output, with the same data type as input and shape :math:`[seq\_len, batch\_size, hidden\_size]`. If ``is_bidirec`` is set to True, the shape is :math:`[seq\_len, batch\_size, hidden\_size*2]`
+    - rnn_out: Tensor holding the LSTM hidden-layer output, with the same data type as input and shape :math:`[batch\_size, seq\_len, hidden\_size]`. If ``is_bidirec`` is set to True, the shape is :math:`[batch\_size, seq\_len, hidden\_size*2]`
     - last_h: Tensor holding the hidden state of the last LSTM step, with the same data type as input and shape :math:`[num\_layers, batch\_size, hidden\_size]`. If ``is_bidirec`` is set to True, the shape is :math:`[num\_layers*2, batch\_size, hidden\_size]`
     - last_c: Tensor holding the cell state of the last LSTM step, with the same data type as input and shape :math:`[num\_layers, batch\_size, hidden\_size]`. If ``is_bidirec`` is set to True, the shape is :math:`[num\_layers*2, batch\_size, hidden\_size]`

@@ -73,12 +73,11 @@ lstm

     emb_dim = 256
     vocab_size = 10000
     data = fluid.layers.data(name='x', shape=[-1, 100, 1],
-                             dtype='int32')
+                             dtype='int64')
     emb = fluid.layers.embedding(input=data, size=[vocab_size, emb_dim], is_sparse=True)
     batch_size = 20
     max_len = 100
     dropout_prob = 0.2
-    seq_len = 100
     hidden_size = 150
     num_layers = 1
     init_h = layers.fill_constant( [num_layers, batch_size, hidden_size], 'float32', 0.0 )

@@ -87,7 +86,7 @@ lstm

     rnn_out, last_h, last_c = layers.lstm(emb, init_h, init_c, max_len, hidden_size, num_layers, dropout_prob=dropout_prob)
     rnn_out.shape  # (-1, 100, 150)
     last_h.shape  # (1, 20, 150)
-    layt_c.shape  # (1, 20, 150)
+    last_c.shape  # (1, 20, 150)
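The substance of this fix is the shape convention: `rnn_out` is batch-major, :math:`[batch\_size, seq\_len, hidden\_size]`, while `last_h`/`last_c` are :math:`[num\_layers, batch\_size, hidden\_size]`. The following NumPy sketch reproduces those shapes with the same sizes as the example (batch 20, sequence 100, hidden 150, one layer, unidirectional). It is an assumed illustration of the shape contract, not Paddle's cuDNN-backed implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(x, h0, c0, W, U, b):
    """Single-layer, unidirectional LSTM over a batch-major sequence
    (illustrative sketch, not Paddle's implementation).

    x: [batch, seq_len, input_size]; h0, c0: [num_layers=1, batch, hidden_size].
    Returns (rnn_out, last_h, last_c) with the shapes documented above.
    """
    batch, seq_len, _ = x.shape
    hidden = h0.shape[-1]
    h, c = h0[0], c0[0]
    rnn_out = np.empty((batch, seq_len, hidden))
    for t in range(seq_len):
        gates = x[:, t, :] @ W + h @ U + b
        i, f, g, o = np.split(gates, 4, axis=1)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        rnn_out[:, t, :] = h                 # batch-major output
    return rnn_out, h[None], c[None]         # last_h/last_c: [1, batch, hidden]

batch_size, seq_len, emb_dim, hidden_size = 20, 100, 256, 150
rng = np.random.default_rng(0)
emb = rng.standard_normal((batch_size, seq_len, emb_dim)) * 0.1
W = rng.standard_normal((emb_dim, 4 * hidden_size)) * 0.05
U = rng.standard_normal((hidden_size, 4 * hidden_size)) * 0.05
b = np.zeros(4 * hidden_size)
init_h = np.zeros((1, batch_size, hidden_size))
init_c = np.zeros((1, batch_size, hidden_size))

rnn_out, last_h, last_c = lstm_forward(emb, init_h, init_c, W, U, b)
print(rnn_out.shape)  # (20, 100, 150)
print(last_h.shape)   # (1, 20, 150)
print(last_c.shape)   # (1, 20, 150)
```

As a sanity check on the convention, `last_h[0]` equals the final time slice `rnn_out[:, -1, :]`, since the last hidden state is also the last output step.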