机器未来 / Paddle (forked from PaddlePaddle / Paddle)
Commit 97155d2d (unverified)
Authored on Oct 11, 2019 by Yiqun Liu; committed by GitHub on Oct 11, 2019
Parent: 42909238

Polish the English documentation DynamicRNN (#20453) (#20503)

test=develop test=release/1.6 test=document_fix

Showing 2 changed files with 364 additions and 118 deletions (+364, -118):
paddle/fluid/API.spec (+7, -7)
python/paddle/fluid/layers/control_flow.py (+357, -111)
paddle/fluid/API.spec
@@ -363,14 +363,14 @@ paddle.fluid.layers.IfElse.false_block (ArgSpec(args=['self'], varargs=None, key
paddle.fluid.layers.IfElse.input (ArgSpec(args=['self', 'x'], varargs=None, keywords=None, defaults=None), ('document', '6adf97f83acf6453d4a6a4b1070f3754'))
paddle.fluid.layers.IfElse.output (ArgSpec(args=['self'], varargs='outs', keywords=None, defaults=None), ('document', '6adf97f83acf6453d4a6a4b1070f3754'))
paddle.fluid.layers.IfElse.true_block (ArgSpec(args=['self'], varargs=None, keywords=None, defaults=None), ('document', '6adf97f83acf6453d4a6a4b1070f3754'))
-paddle.fluid.layers.DynamicRNN ('paddle.fluid.layers.control_flow.DynamicRNN', ('document', 'b71e87285dbd4a43a6cc4f8a473245e6'))
+paddle.fluid.layers.DynamicRNN ('paddle.fluid.layers.control_flow.DynamicRNN', ('document', 'cc7209c675ec5841efd10c9d2f1028d9'))
paddle.fluid.layers.DynamicRNN.__init__ (ArgSpec(args=['self', 'name'], varargs=None, keywords=None, defaults=(None,)), ('document', '6adf97f83acf6453d4a6a4b1070f3754'))
-paddle.fluid.layers.DynamicRNN.block (ArgSpec(args=['self'], varargs=None, keywords=None, defaults=None), ('document', '6d3e0a5d9aa519a9773a36e1620ea9b7'))
-paddle.fluid.layers.DynamicRNN.memory (ArgSpec(args=['self', 'init', 'shape', 'value', 'need_reorder', 'dtype'], varargs=None, keywords=None, defaults=(None, None, 0.0, False, 'float32')), ('document', '57cdd0a63747f4c670cdb9d250ceb7e1'))
-paddle.fluid.layers.DynamicRNN.output (ArgSpec(args=['self'], varargs='outputs', keywords=None, defaults=None), ('document', 'b439a176a3328de8a75bdc5c08eece4a'))
-paddle.fluid.layers.DynamicRNN.static_input (ArgSpec(args=['self', 'x'], varargs=None, keywords=None, defaults=None), ('document', '55ab9c562edd7dabec0bd6fd6c1a28cc'))
-paddle.fluid.layers.DynamicRNN.step_input (ArgSpec(args=['self', 'x', 'level'], varargs=None, keywords=None, defaults=(0,)), ('document', '4b300851b5201891d0e11c406e4c7d07'))
-paddle.fluid.layers.DynamicRNN.update_memory (ArgSpec(args=['self', 'ex_mem', 'new_mem'], varargs=None, keywords=None, defaults=None), ('document', '5d83987da13b98363d6a807a52d8024f'))
+paddle.fluid.layers.DynamicRNN.block (ArgSpec(args=['self'], varargs=None, keywords=None, defaults=None), ('document', '7935170393923803885fa461c3f3cfbc'))
+paddle.fluid.layers.DynamicRNN.memory (ArgSpec(args=['self', 'init', 'shape', 'value', 'need_reorder', 'dtype'], varargs=None, keywords=None, defaults=(None, None, 0.0, False, 'float32')), ('document', '904250e0e51498e5c52711e667c20941'))
+paddle.fluid.layers.DynamicRNN.output (ArgSpec(args=['self'], varargs='outputs', keywords=None, defaults=None), ('document', 'eccd774fb93665e51fb38bfb84d78b52'))
+paddle.fluid.layers.DynamicRNN.static_input (ArgSpec(args=['self', 'x'], varargs=None, keywords=None, defaults=None), ('document', 'a766b9ea54014d1af913b0d39d7fb010'))
+paddle.fluid.layers.DynamicRNN.step_input (ArgSpec(args=['self', 'x', 'level'], varargs=None, keywords=None, defaults=(0,)), ('document', '70081f137db4d215b302e49b5db33353'))
+paddle.fluid.layers.DynamicRNN.update_memory (ArgSpec(args=['self', 'ex_mem', 'new_mem'], varargs=None, keywords=None, defaults=None), ('document', 'b416c1c4a742a86379887b092cf7f2f0'))
paddle.fluid.layers.StaticRNN ('paddle.fluid.layers.control_flow.StaticRNN', ('document', 'f73671cb98696a1962bf5deaf49dc2e9'))
paddle.fluid.layers.StaticRNN.__init__ (ArgSpec(args=['self', 'name'], varargs=None, keywords=None, defaults=(None,)), ('document', '6adf97f83acf6453d4a6a4b1070f3754'))
paddle.fluid.layers.StaticRNN.memory (ArgSpec(args=['self', 'init', 'shape', 'batch_ref', 'init_value', 'init_batch_dim_idx', 'ref_batch_dim_idx'], varargs=None, keywords=None, defaults=(None, None, None, 0.0, 0, 1)), ('document', 'f1b60dc4194d0bb714d6c6f5921b227f'))
...
...
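Every API.spec entry above pairs a signature with a 32-character digest, and each documentation-only change flips exactly that digest. A plausible reading (hedged: the exact spec-generation tool and any docstring preprocessing Paddle applies are assumptions here, and `doc_hash` is a hypothetical helper) is that the digest is an md5 of the docstring, so polishing wording alone changes the recorded hash:

```python
import hashlib

def doc_hash(docstring):
    # Digest a docstring the way an API-spec style check might:
    # any wording change flips the hash, surfacing doc-only diffs in review.
    return hashlib.md5(docstring.encode("utf-8")).hexdigest()

old_doc = "Mark a sequence as a dynamic RNN input."
new_doc = "This function is used to set sequence x as DynamicRNN's input."

# Polishing only the documentation still changes the recorded digest.
print(doc_hash(old_doc) != doc_hash(new_doc))  # → True
```

This is why the step_input entry's hash moves from 4b300851... to 70081f13... even though the ArgSpec itself is unchanged.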
python/paddle/fluid/layers/control_flow.py
@@ -1812,44 +1812,68 @@ class IfElse(object):
class DynamicRNN(object):
    """
-    The dynamic RNN can process a batch of sequence data. The length of each
-    sample sequence can be different. This API automatically process them in
-    batch.
-
-    The input lod must be set. Please reference to `lod_tensor`.
-
-    The dynamic RNN will unfold sequence into timesteps. Users need to define
-    how to process each time step during the :code:`with` block.
-
-    The `memory` is used staging data cross time step. The initial value of
-    memory can be zero or another variable.
-
-    The dynamic RNN can mark multiple variables as its output. Use `drnn()` to
-    get the output sequence.
-
-    NOTES:
-        Currently it is not supported that setting is_sparse to True of any
-        layers within DynamicRNN.
+    **Note: the input of this class should be LoDTensor which holds the
+    information of variable-length sequences. If the input is fixed-length Tensor,
+    please use StaticRNN (fluid.layers.** :ref:`api_fluid_layers_StaticRNN` **) for
+    better performance.**
+
+    DynamicRNN can process a minibatch of variable-length sequences.
+    The length of each sample can be different and is recorded in LoD.
+    In DynamicRNN, an input sequence will be unfolded into time steps and users
+    can define how to process each time step in :code:`block()` .
+    The total number of time steps is determined by the longest sequence.
+    DynamicRNN will not pad all sequences to the same length, instead it will
+    sort the sequences internally by the sequence length in descending order.
+    The input sequences will be shrinked because only sequences of which the
+    length is larger than the time step will participate the remaining calculation.
+
+    If defined :code:`drnn = DynamicRNN()`, then users can call :code:`drnn()`
+    to obtain the result sequences. It is a LoDTensor gained by merging all
+    time steps's output. When RNN's input sequence x meets :code:`x.lod_level == 1`,
+    the output LoDTensor will have the same LoD with x. The result of :code:`drnn()`
+    includes RNN's outputs of all time steps, users can call
+    :ref:`api_fluid_layers_sequence_last_step` to extract the data of the last time step.
+
+    Warning:
+        Currently it is not supported to set :code:`is_sparse = True` of any
+        layers defined within DynamicRNN's :code:`block` function.
+
+    Args:
+        name (str, optional): The default value is None. Normally there is no
+            need for user to set this property. For more information,
+            please refer to :ref:`api_guide_Name` .

    Examples:
        .. code-block:: python

-          import paddle.fluid as fluid
-
-          sentence = fluid.layers.data(name='sentence', shape=[1], dtype='int64', lod_level=1)
-          embedding = fluid.layers.embedding(input=sentence, size=[65536, 32], is_sparse=True)
-
-          drnn = fluid.layers.DynamicRNN()
-          with drnn.block():
-              word = drnn.step_input(embedding)
-              prev = drnn.memory(shape=[200])
-              hidden = fluid.layers.fc(input=[word, prev], size=200, act='relu')
-              drnn.update_memory(prev, hidden)  # set prev to hidden
-              drnn.output(hidden)
-
-          # Get the last time step of rnn. It is the encoding result.
-          rnn_output = drnn()
-          last = fluid.layers.sequence_last_step(rnn_output)
+            import paddle.fluid as fluid
+
+            sentence = fluid.data(name='sentence', shape=[None, 32], dtype='float32', lod_level=1)
+            encoder_proj = fluid.data(name='encoder_proj', shape=[None, 32], dtype='float32', lod_level=1)
+            decoder_boot = fluid.data(name='boot', shape=[None, 10], dtype='float32')
+
+            drnn = fluid.layers.DynamicRNN()
+            with drnn.block():
+                # Set sentence as RNN's input, each time step processes a word from the sentence
+                current_word = drnn.step_input(sentence)
+                # Set encode_proj as RNN's static input
+                encoder_word = drnn.static_input(encoder_proj)
+                # Initialize memory with boot_memory, which need reorder according to RNN's input sequences
+                memory = drnn.memory(init=decoder_boot, need_reorder=True)
+                fc_1 = fluid.layers.fc(input=encoder_word, size=30)
+                fc_2 = fluid.layers.fc(input=current_word, size=30)
+                decoder_inputs = fc_1 + fc_2
+                hidden, _, _ = fluid.layers.gru_unit(input=decoder_inputs, hidden=memory, size=30)
+                # Update memory with hidden
+                drnn.update_memory(ex_mem=memory, new_mem=hidden)
+                out = fluid.layers.fc(input=hidden, size=10, bias_attr=True, act='softmax')
+                # Set hidden and out as RNN's outputs
+                drnn.output(hidden, out)
+
+            # Get RNN's result
+            hidden, out = drnn()
+            # Get RNN's result of the last time step
+            last = fluid.layers.sequence_last_step(out)
    """

    BEFORE_RNN = 0
    IN_RNN = 1
...
...
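The polished class docstring describes the scheduling DynamicRNN performs: sort sequences by length in descending order, unfold to the longest length, and shrink the batch at every step instead of padding. A pure-Python sketch of that bookkeeping (illustrative only, not Paddle code; `schedule` is a hypothetical helper):

```python
def schedule(seq_lens):
    """Return the descending-length sort order and, for each time step,
    the indices of the sequences that still participate (no padding)."""
    order = sorted(range(len(seq_lens)), key=lambda i: seq_lens[i], reverse=True)
    steps = []
    for t in range(max(seq_lens)):
        # only sequences longer than t survive into step t
        steps.append([i for i in order if seq_lens[i] > t])
    return order, steps

# Lengths [2, 1, 3] match the Case 1 LoD used in step_input below.
order, steps = schedule([2, 1, 3])
print(order)  # → [2, 0, 1]  (longest sequence first)
print(steps)  # → [[2, 0, 1], [2, 0], [2]]  (batch shrinks each step)
```

The shrinking list per step is why each time step's output batch in the docstring examples gets smaller rather than zero-padded.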
@@ -1873,14 +1897,94 @@ class DynamicRNN(object):
    def step_input(self, x, level=0):
        """
-        Mark a sequence as a dynamic RNN input.
+        This function is used to set sequence x as DynamicRNN's input.
+        The maximum sequence length in x determines the number of time steps
+        the RNN unit will be executed. DynamicRNN can take multiple inputs.
+        When all inputs' :code:`lod_level` are 1, all inputs should hold the
+        same LoD. When :code:`x.lod_level >= 2` , the input sequence will be
+        unfold along specified level, and the slice of each time step is a
+        LoDTensor whose lod_level is :code:`x.lod_level - level - 1` .
+        In this case, the specified LoD level of multiple inputs should be the same.
+
+        - Case 1:
+
+        .. code-block:: text
+
+            # input, where Si is slice data of shape [1, N]
+            level = 0
+            x.lod   = [[2, 1, 3]]
+            x.shape = [6, N]
+            x.data  = [[S0], [S0], [S1], [S2], [S2], [S2]]
+
+            # output
+            # step 0, time step data of 3 sequences
+            out.lod   = [[]]
+            out.shape = [3, N]
+            out.data  = [[S2], [S0], [S1]]
+
+            # step 1, time step data of 2 sequences
+            out.lod   = [[]]
+            out.shape = [2, N]
+            out.data  = [[S2], [S0]]
+
+            # step 2, time step data of 1 sequences
+            out.lod   = [[]]
+            out.shape = [1, N]
+            out.data  = [[S2]]

        Args:
-            x (Variable): The input sequence which should have lod information.
-            level (int): The level of lod used to split steps. Default: 0.
+            x (Variable): The input LoDTensor which holds information of a
+                minibatch of variable-length sequences and should meet :code:`x.lod_level >= 1` .
+                When RNN has multiple inputs, the first dimension should match
+                across all inputs, but other shape components may differ.
+                Optional data types are: bool, float16, float32, float64, int8, int16, int32, int64, uint8.
+            level (int, optional): The level of lod used to split steps.
+                It should be in range :math:`[0, x.lod\_level)` . The default value is 0.

        Returns:
-            The current timestep in the input sequence.
+            Variable: The current time step in the input sequence. If there are :code:`num_sequences` \
+                sequences in x whose length is larger than :code:`step_idx` , the returned Variable \
+                will only hold the :code:`step_idx` -th time step of those `num_sequences` sequences. \
+                The data type is the same as input. If :code:`x.lod_level == 1` , the return value is \
+                a Tensor of shape :math:`\{num\_sequences, x.shape[1], ...\}` , or it will \
+                be a variable-length LoDTensor.
+
+        Raises:
+            ValueError: When :code:`step_input()` is called outside :code:`block()` .
+            TypeError: When x is not a Variable.
+
+        Examples:
+            .. code-block:: python
+
+                import paddle.fluid as fluid
+
+                sentence = fluid.data(name='sentence', shape=[None, 1], dtype='int64', lod_level=1)
+                embedding = fluid.layers.embedding(input=sentence, size=[65536, 32], is_sparse=True)
+
+                drnn = fluid.layers.DynamicRNN()
+                with drnn.block():
+                    # Set embedding as RNN's input, each time step processes a word from the sentence
+                    word = drnn.step_input(embedding)
+                    # Initialize memory to a Tensor whose value is 0, shape=[batch_size, 200],
+                    # where batch_size is the number of sequences in embedding.
+                    memory = drnn.memory(shape=[200])
+                    hidden = fluid.layers.fc(input=[word, memory], size=200, act='relu')
+                    # Update memory to hidden
+                    drnn.update_memory(ex_mem=memory, new_mem=hidden)
+                    # Set hidden as RNN's output
+                    drnn.output(hidden)
+
+                # Get RNN's result
+                rnn_output = drnn()
        """
        self._assert_in_rnn_block_("step_input")
        if not isinstance(x, Variable):
...
...
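Case 1 in the step_input docstring can be reproduced with a short pure-Python sketch (illustrative only, not Paddle code; `step_batches` is a hypothetical helper): split the flattened LoDTensor into sequences using the level-0 LoD, sort descending by length, then emit the t-th row of every surviving sequence as the step-t batch.

```python
def step_batches(data, lod):
    """Reproduce step_input's per-step batches for a lod_level-1 input."""
    # split the flattened rows into per-sequence chunks
    seqs, start = [], 0
    for length in lod:
        seqs.append(data[start:start + length])
        start += length
    # DynamicRNN sorts sequences by length, descending
    seqs.sort(key=len, reverse=True)
    # step t holds the t-th row of every sequence longer than t
    return [[s[t] for s in seqs if len(s) > t] for t in range(len(seqs[0]))]

x = ["S0", "S0", "S1", "S2", "S2", "S2"]   # rows of shape [1, N]
print(step_batches(x, [2, 1, 3]))
# → [['S2', 'S0', 'S1'], ['S2', 'S0'], ['S2']]
```

The three batches match out.data at steps 0, 1 and 2 in Case 1 above.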
@@ -1927,38 +2031,129 @@ class DynamicRNN(object):
    def static_input(self, x):
        """
-        Mark a variable as a RNN input. The input will not be scattered into
-        time steps. It is optional.
+        This function is used to set x as DynamicRNN's static input. It is optional.
+
+        - Case 1, set static input with LoD
+
+        .. code-block:: text
+
+            # RNN's input is the same as the case listed in step_input
+            # static input, where Si is slice data of shape [1, M]
+            x.lod   = [[3, 1, 2]]
+            x.shape = [6, M]
+            x.data  = [[S0], [S0], [S0], [S1], [S2], [S2]]
+
+            # step 0, batch data corresponding to the 3 input sequences
+            out.lod   = [[2, 3, 1]]
+            out.shape = [6, M]
+            out.data  = [[S2], [S2], [S0], [S0], [S0], [S1]]
+
+            # step 1, batch data corresponding to the 2 input sequences
+            out.lod   = [[2, 3]]
+            out.shape = [5, M]
+            out.data  = [[S2], [S2], [S0], [S0], [S0]]
+
+            # step 2, batch data corresponding to the 1 input sequences
+            out.lod   = [[2]]
+            out.shape = [2, M]
+            out.data  = [[S2], [S2]]
+
+        - Case 2, set static input without LoD
+
+        .. code-block:: text
+
+            # RNN's input is the same as the case listed in step_input
+            # static input, where Si is slice data of shape [1, M]
+            x.lod   = [[]]
+            x.shape = [3, M]
+            x.data  = [[S0], [S1], [S2]]
+
+            # step 0, batch data corresponding to the 3 input sequences
+            out.lod   = [[]]
+            out.shape = [3, M]
+            out.data  = [[S2], [S0], [S1]]
+
+            # step 1, batch data corresponding to the 2 input sequences
+            out.lod   = [[]]
+            out.shape = [2, M]
+            out.data  = [[S2], [S0]]
+
+            # step 2, batch data corresponding to the 1 input sequences
+            out.lod   = [[]]
+            out.shape = [1, M]
+            out.data  = [[S2]]

        Args:
-            x (Variable): The input variable.
+            x (Variable): The static input LoDTensor which should hold the same number of sequences
+                as RNN's input (the input LoDTensor set by :code:`step_input()` ). If the LoD is None,
+                the input x will be treated as a minibatch with :code:`x.shape[0]` sequences of length 1.
+                Optional data types are: bool, float16, float32, float64, int8, int16, int32, int64, uint8.

        Returns:
-            The input variable that can access in RNN.
+            Variable: The input LoDTensor after sorted and shrinked. If there are :code:`num_sequences` \
+                sequences in RNN's input LoDTensor whose length is larger than :code:`step_idx` , \
+                the static input Tensor will be sorted to the same order as RNN's input and \
+                will only retain data corresponding to those :code:`num_sequences` sequences. \
+                The data type is the same as input. If :code:`x.lod == None` , the return value is \
+                a Tensor of shape :math:`\{num\_sequences, x.shape[1], ...\}` , or it will \
+                be a variable-length LoDTensor.
+
+        Raises:
+            ValueError: When :code:`static_input()` is called outside :code:`block()` .
+            TypeError: When x is not a Variable.
+            RuntimeError: When :code:`static_input()` is called before :code:`step_input()` .

        Examples:
            .. code-block:: python

-               import paddle.fluid as fluid
-
-               sentence = fluid.layers.data(name='sentence', dtype='float32', shape=[32], lod_level=1)
-               encoder_proj = fluid.layers.data(name='encoder_proj', dtype='float32', shape=[32], lod_level=1)
-               decoder_boot = fluid.layers.data(name='boot', dtype='float32', shape=[10], lod_level=1)
-
-               drnn = fluid.layers.DynamicRNN()
-               with drnn.block():
-                   current_word = drnn.step_input(sentence)
-                   encoder_word = drnn.static_input(encoder_proj)
-                   hidden_mem = drnn.memory(init=decoder_boot, need_reorder=True)
-                   fc_1 = fluid.layers.fc(input=encoder_word, size=30, bias_attr=False)
-                   fc_2 = fluid.layers.fc(input=current_word, size=30, bias_attr=False)
-                   decoder_inputs = fc_1 + fc_2
-                   h, _, _ = fluid.layers.gru_unit(input=decoder_inputs, hidden=hidden_mem, size=30)
-                   drnn.update_memory(hidden_mem, h)
-                   out = fluid.layers.fc(input=h, size=10, bias_attr=True, act='softmax')
-                   drnn.output(out)
-
-               rnn_output = drnn()
+                import paddle.fluid as fluid
+
+                sentence = fluid.data(name='sentence', shape=[None, 32], dtype='float32', lod_level=1)
+                encoder_proj = fluid.data(name='encoder_proj', shape=[None, 32], dtype='float32', lod_level=1)
+                decoder_boot = fluid.data(name='boot', shape=[None, 10], dtype='float32')
+
+                drnn = fluid.layers.DynamicRNN()
+                with drnn.block():
+                    # Set sentence as RNN's input, each time step processes a word from the sentence
+                    current_word = drnn.step_input(sentence)
+                    # Set encode_proj as RNN's static input
+                    encoder_word = drnn.static_input(encoder_proj)
+                    # Initialize memory with boot_memory, which need reorder according to RNN's input sequences
+                    memory = drnn.memory(init=decoder_boot, need_reorder=True)
+                    fc_1 = fluid.layers.fc(input=encoder_word, size=30)
+                    fc_2 = fluid.layers.fc(input=current_word, size=30)
+                    decoder_inputs = fc_1 + fc_2
+                    hidden, _, _ = fluid.layers.gru_unit(input=decoder_inputs, hidden=memory, size=30)
+                    # Update memory with hidden
+                    drnn.update_memory(ex_mem=memory, new_mem=hidden)
+                    out = fluid.layers.fc(input=hidden, size=10, bias_attr=True, act='softmax')
+                    # Set out as RNN's output
+                    drnn.output(out)
+
+                # Get RNN's result
+                rnn_output = drnn()
        """
        self._assert_in_rnn_block_("static_input")
        if not isinstance(x, Variable):
...
...
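Case 1 of static_input can also be checked with a pure-Python sketch (illustrative only, not Paddle code; `static_per_step` is a hypothetical helper): the static input is re-ordered to the RNN input's descending-length order, and at step t only the entries belonging to sequences still alive are retained.

```python
def static_per_step(static_data, static_lod, rnn_lens):
    """Return (lod, data) of the static input visible at each time step."""
    # split the static input into per-sequence chunks
    chunks, start = [], 0
    for length in static_lod:
        chunks.append(static_data[start:start + length])
        start += length
    # sort sequence indices by the RNN input's lengths, descending
    order = sorted(range(len(rnn_lens)), key=lambda i: rnn_lens[i], reverse=True)
    steps = []
    for t in range(max(rnn_lens)):
        alive = [i for i in order if rnn_lens[i] > t]       # surviving sequences
        lod = [len(chunks[i]) for i in alive]               # out.lod at step t
        data = [row for i in alive for row in chunks[i]]    # out.data at step t
        steps.append((lod, data))
    return steps

x = ["S0", "S0", "S0", "S1", "S2", "S2"]       # static rows, lod [[3, 1, 2]]
steps = static_per_step(x, [3, 1, 2], [2, 1, 3])  # RNN input lengths [2, 1, 3]
print(steps[0])  # → ([2, 3, 1], ['S2', 'S2', 'S0', 'S0', 'S0', 'S1'])
print(steps[2])  # → ([2], ['S2', 'S2'])
```

The outputs match out.lod and out.data at steps 0 and 2 of Case 1 above.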
@@ -1982,7 +2177,12 @@ class DynamicRNN(object):
    @signature_safe_contextmanager
    def block(self):
        """
-        The block for user to define operators in RNN.
+        The function is used to list the operations executed during
+        each time step in RNN. The operation list will be executed :code:`max_sequence_len`
+        times (where :code:`max_sequence_len` is the maximum length of RNN's input sequences).
+
+        Raises:
+            ValueError: When :code:`block()` is called multi-times.
        """
        if self.status != DynamicRNN.BEFORE_RNN:
            raise ValueError("rnn.block() can only be invoke once")
...
...
@@ -2011,7 +2211,16 @@ class DynamicRNN(object):
    def __call__(self, *args, **kwargs):
        """
-        Get the output of RNN. This API should only be invoked after RNN.block()
+        This function is used to get the output sequences of DynamicRNN.
+
+        Args:
+            None
+
+        Returns:
+            Variable or Variable list: RNN's output sequences.
+
+        Raises:
+            ValueError: When :code:`__call__()` is called before :code:`block()` .
        """
        if self.status != DynamicRNN.AFTER_RNN:
            raise ValueError(("Output of the dynamic RNN can only be visited "
...
...
@@ -2028,62 +2237,89 @@ class DynamicRNN(object):
                need_reorder=False, dtype='float32'):
        """
-        Create a memory variable for dynamic rnn.
-
-        If the :code:`init` is not None, :code:`memory` will be initialized by
-        this variable. The :code:`need_reorder` is used to reorder the memory as
-        the input variable. It should be set to true when the initialized memory
-        depends on the input sample.
-
-        Examples:
-            .. code-block:: python
-
-              import paddle.fluid as fluid
-
-              sentence = fluid.layers.data(name='sentence', shape=[32], dtype='float32', lod_level=1)
-              boot_memory = fluid.layers.data(name='boot', shape=[10], dtype='float32', lod_level=1)
-
-              drnn = fluid.layers.DynamicRNN()
-              with drnn.block():
-                  word = drnn.step_input(sentence)
-                  memory = drnn.memory(init=boot_memory, need_reorder=True)
-                  hidden = fluid.layers.fc(input=[word, memory], size=10, act='tanh')
-                  drnn.update_memory(ex_mem=memory, new_mem=hidden)
-                  drnn.output(hidden)
-
-              rnn_output = drnn()
-
-        Otherwise, if :code:`shape`, :code:`value`, :code:`dtype` are set, the
-        :code:`memory` will be initialized by this :code:`value`.
-
-        Examples:
-            .. code-block:: python
-
-              import paddle.fluid as fluid
-
-              sentence = fluid.layers.data(name='sentence', dtype='float32', shape=[32], lod_level=1)
-
-              drnn = fluid.layers.DynamicRNN()
-              with drnn.block():
-                  word = drnn.step_input(sentence)
-                  memory = drnn.memory(shape=[10], dtype='float32', value=0)
-                  hidden = fluid.layers.fc(input=[word, memory], size=10, act='tanh')
-                  drnn.update_memory(ex_mem=memory, new_mem=hidden)
-                  drnn.output(hidden)
-
-              rnn_output = drnn()
-
-        Args:
-            init(Variable|None): The initialized variable.
-            shape(list|tuple): The memory shape. The shape does not contain batch_size.
-            value(float): the initialized value.
-            need_reorder(bool): True if the initialized memory depends on the input sample.
-            dtype(str|numpy.dtype): The data type of the initialized memory.
-
-        Returns:
-            The memory variable.
+        Create a memory Variable for DynamicRNN to deliver data cross time steps.
+        It can be initialized by an existing Tensor or a constant Tensor of given
+        dtype and shape.
+
+        Args:
+            init (Variable, optional): LoDTensor used to initialize the memory.
+                If init is not None, it should hold the same number of sequences
+                as RNN's input (the input LoDTensor set by :code:`step_input()` )
+                and the memory will be initialized to it. If init's LoD is None,
+                it will be treated as a minibatch with :code:`init.shape[0]` sequences
+                of length 1. The default value is None.
+            shape (list|tuple, optional): When init is None, it is used to specify
+                the memory's shape. Note that the shape does not include the batch_size.
+                If setting shape to :math:`\{D_1, D_2, ...\}` , the shape of memory Tensor
+                will be :math:`\{batch\_size, D_1, D_2, ...\}` , where batch_size is
+                determined by RNN's input sequences. The default value is None.
+            value (float, optional): When init is None, it is used as initialized value
+                of memory. The default value is 0.0.
+            need_reorder (bool, optional): When init is not None, it determines whether
+                the memory needs to reorder like the RNN's input sequences. It should be
+                set to True when the initialized memory depends on the order of input samples.
+                The default value is False.
+            dtype (str|numpy.dtype, optional): When init is None, it is used to set the
+                data type of memory. The default value is "float32". Optional data types
+                are: "float32", "float64", "int32", "int64".
+
+        Returns:
+            Variable: The memory LoDTensor after shrinked. If there are :code:`num_sequences` \
+                sequences in RNN's input LoDTensor whose length is larger than :code:`step_idx` , \
+                the memory Tensor also need to be shrinked and will only retain data \
+                corresponding to those :code:`num_sequences` sequences.
+
+        Raises:
+            ValueError: When :code:`memory()` is called outside :code:`block()` .
+            TypeError: When init is set and is not a Variable.
+            ValueError: When :code:`memory()` is called before :code:`step_input()` .
+
+        Examples:
+            .. code-block:: python
+
+                import paddle.fluid as fluid
+
+                sentence = fluid.data(name='sentence', shape=[None, 32], dtype='float32', lod_level=1)
+                boot_memory = fluid.data(name='boot', shape=[None, 10], dtype='float32')
+
+                drnn = fluid.layers.DynamicRNN()
+                with drnn.block():
+                    # Set sentence as RNN's input, each time step processes a word from the sentence
+                    word = drnn.step_input(sentence)
+                    # Initialize memory with boot_memory, which need reorder according to RNN's input sequences
+                    memory = drnn.memory(init=boot_memory, need_reorder=True)
+                    hidden = fluid.layers.fc(input=[word, memory], size=10, act='tanh')
+                    # Update memory with hidden
+                    drnn.update_memory(ex_mem=memory, new_mem=hidden)
+                    # Set hidden as RNN's output
+                    drnn.output(hidden)
+
+                # Get RNN's result
+                rnn_output = drnn()
+
+        Examples:
+            .. code-block:: python
+
+                import paddle.fluid as fluid
+
+                sentence = fluid.data(name='sentence', shape=[None, 32], dtype='float32', lod_level=1)
+
+                drnn = fluid.layers.DynamicRNN()
+                with drnn.block():
+                    # Set sentence as RNN's input, each time step processes a word from the sentence
+                    word = drnn.step_input(sentence)
+                    # Initialize memory to a Tensor whose value is 0, shape=[batch_size, 10],
+                    # where batch_size is the number of sequences in sentence.
+                    memory = drnn.memory(shape=[10], dtype='float32', value=0)
+                    hidden = fluid.layers.fc(input=[word, memory], size=10, act='tanh')
+                    # Update memory with hidden
+                    drnn.update_memory(ex_mem=memory, new_mem=hidden)
+                    # Set hidden as RNN's output
+                    drnn.output(hidden)
+
+                # Get RNN's result
+                rnn_output = drnn()
        """
        self._assert_in_rnn_block_('memory')
        self._init_zero_idx_()
...
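The memory docstring's Returns section says the memory Tensor is shrinked along with the input batch. Because sequences are sorted by length in descending order, the sequences still alive at step t always form a prefix of the batch, so shrinking is just row truncation. A pure-Python sketch (illustrative only, not Paddle code; `run_with_memory` and the `+ 1.0` cell update are hypothetical stand-ins):

```python
def run_with_memory(seq_lens, init_value=0.0):
    """Carry one scalar memory row per sequence, shrinking each step."""
    lens = sorted(seq_lens, reverse=True)      # descending-length order
    memory = [init_value] * len(lens)          # one row per sequence
    for t in range(lens[0]):
        alive = sum(1 for length in lens if length > t)
        memory = memory[:alive]                # shrink to surviving sequences
        memory = [m + 1.0 for m in memory]     # stand-in for the RNN cell update
    return memory

# With lengths [2, 1, 3] only the longest sequence reaches step 2,
# so the final memory holds a single row, updated three times.
print(run_with_memory([2, 1, 3]))   # → [3.0]
```

Each surviving row is updated once per step it participates in, which is exactly the contract update_memory relies on.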
@@ -2154,15 +2390,21 @@ class DynamicRNN(object):
    def update_memory(self, ex_mem, new_mem):
        """
-        Update the memory from ex_mem to new_mem. NOTE that the shape and data
-        type of :code:`ex_mem` and :code:`new_mem` must be same.
+        Update the memory which need to be delivered across time steps.

        Args:
-            ex_mem(Variable): the memory variable.
-            new_mem(Variable): the plain variable generated in RNN block.
+            ex_mem (Variable): The memory data of previous time step.
+            new_mem (Variable): The new memory data produced in current time step.
+                The shape and data type of ex_mem and new_mem should be the same.

        Returns:
            None

+        Raises:
+            ValueError: When :code:`update_memory()` is called outside :code:`block()` .
+            TypeError: When :code:`ex_mem` or :code:`new_mem` is not a Variable.
+            ValueError: When :code:`ex_mem` is defined by :code:`memory()` .
+            ValueError: When :code:`update_memory()` is called before :code:`step_input()` .
        """
        self._assert_in_rnn_block_('update_memory')
        if not isinstance(ex_mem, Variable):
...
@@ -2182,13 +2424,17 @@ class DynamicRNN(object):
    def output(self, *outputs):
        """
-        Mark the RNN output variables.
+        This function is used to set :code:`outputs` as RNN's output.

        Args:
-            outputs: The output variables.
+            *outputs (Variable ...): The output Tensor. DynamicRNN can mark multiple
+                Variables as its output.

        Returns:
            None

+        Raises:
+            ValueError: When :code:`output()` is called outside :code:`block()` .
        """
        self._assert_in_rnn_block_('output')
        parent_block = self._parent_block_()
...