The cumulative sum of the elements along a given axis. By default, the first element of the result is the same as the first element of the input. If exclusive is true, the first element of the result is 0.
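The inclusive/exclusive distinction can be sketched in plain Python (a toy 1-D sketch of the documented semantics, not the fluid operator itself; the helper name `cumsum` is only illustrative):

```python
def cumsum(xs, exclusive=False):
    """Running sum over a 1-D sequence.

    Inclusive mode: result[0] == xs[0].
    Exclusive mode: result[0] == 0, and each position holds the sum of
    all *preceding* elements.
    """
    out, total = [], 0
    for x in xs:
        if exclusive:
            out.append(total)  # emit the sum before adding x
            total += x
        else:
            total += x         # add x first, then emit
            out.append(total)
    return out

print(cumsum([1, 2, 3]))                  # [1, 3, 6]
print(cumsum([1, 2, 3], exclusive=True))  # [0, 1, 3]
```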
RNNCell is the base class for abstractions representing the calculations
that map the input and state to the output and new state. It is suitable
for and mostly used in RNN.
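The contract a cell must satisfy is a single-step mapping `(input, state) -> (output, new_state)`. A minimal toy sketch of that contract (scalar weights for brevity; the class name and weights are hypothetical, not part of the fluid API):

```python
import math

class MinimalRNNCell:
    """Toy cell illustrating the (input, state) -> (output, new_state)
    contract; real cells operate on tensors, this one on scalars."""
    def __init__(self, w_i=0.5, w_h=0.3):
        self.w_i, self.w_h = w_i, w_h

    def call(self, inputs, state):
        # Vanilla RNN update: new hidden state is a squashed affine map.
        new_state = math.tanh(self.w_i * inputs + self.w_h * state)
        return new_state, new_state  # output is the new hidden state
```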
...
...
@@ -221,6 +223,8 @@ class RNNCell(object):
class GRUCell(RNNCell):
"""
:api_attr: Static Graph
Gated Recurrent Unit cell. It is a wrapper for
`fluid.contrib.layers.rnn_impl.BasicGRUUnit` to make it adapt to RNNCell.
...
...
@@ -317,6 +321,8 @@ class GRUCell(RNNCell):
class LSTMCell(RNNCell):
"""
:api_attr: Static Graph
Long Short-Term Memory cell. It is a wrapper for
`fluid.contrib.layers.rnn_impl.BasicLSTMUnit` to make it adapt to RNNCell.
...
...
@@ -431,6 +437,8 @@ def rnn(cell,
is_reverse=False,
**kwargs):
"""
:api_attr: Static Graph
rnn creates a recurrent neural network specified by RNNCell `cell`,
which performs :code:`cell.call()` repeatedly until it reaches the maximum
length of `inputs`.
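The unrolling described above amounts to a loop over the time dimension (a plain-Python sketch under the assumption that `cell.call` follows the `(input, state) -> (output, new_state)` contract; the function name `run_rnn` is illustrative, not the fluid API):

```python
def run_rnn(cell, inputs, initial_state):
    """Unroll `cell` over `inputs`, collecting the per-step outputs
    and returning them with the final state."""
    outputs, state = [], initial_state
    for x in inputs:
        out, state = cell.call(x, state)
        outputs.append(out)
    return outputs, state
```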
...
...
@@ -575,6 +583,8 @@ def rnn(cell,
class Decoder(object):
"""
:api_attr: Static Graph
Decoder is the base class for any decoder instance used in `dynamic_decode`.
It provides interface for output generation for one time step, which can be
used to generate sequences.
...
...
@@ -686,6 +696,8 @@ class Decoder(object):
class BeamSearchDecoder(Decoder):
"""
:api_attr: Static Graph
Decoder with beam search decoding strategy. It wraps a cell to get probabilities,
and follows a beam search step to calculate scores and select candidate
token ids for each decoding step.
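One beam search step can be sketched as: extend every live hypothesis by every vocabulary token, score each extension by accumulated log-probability, and keep the `beam_size` best (a simplified sketch; the function name `beam_step` and the `(score, token_ids)` tuple layout are illustrative, and real decoders also handle end-of-sequence and length penalties):

```python
import heapq

def beam_step(beams, log_probs, beam_size):
    """One decoding step.

    `beams` is a list of (score, token_ids) hypotheses, and
    `log_probs[i][v]` is the log-probability of token v given
    hypothesis i. Returns the beam_size best extended hypotheses.
    """
    candidates = []
    for (score, tokens), lp in zip(beams, log_probs):
        for v, p in enumerate(lp):
            candidates.append((score + p, tokens + [v]))
    # Keying on the score avoids comparing token lists on ties.
    return heapq.nlargest(beam_size, candidates, key=lambda c: c[0])
```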
...
...
@@ -1153,6 +1165,8 @@ def dynamic_decode(decoder,
return_length=False,
**kwargs):
"""
:api_attr: Static Graph
Dynamic decoding performs :code:`decoder.step()` repeatedly until the returned
Tensor indicating finished status contains all True values or the number of
decoding steps reaches :attr:`max_step_num`.
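The loop structure can be sketched in plain Python (a sketch under the assumption that the decoder exposes `initialize()` and `step(time, state)`, with `finished` as a list of booleans rather than a Tensor; the helper name `decode_loop` is illustrative):

```python
def decode_loop(decoder, max_step_num):
    """Step the decoder until every sequence reports finished or the
    step budget is exhausted; return the collected per-step outputs."""
    outputs = []
    state, finished = decoder.initialize()
    step = 0
    while not all(finished) and step < max_step_num:
        out, state, finished = decoder.step(step, state)
        outputs.append(out)
        step += 1
    return outputs
```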
...
...
@@ -1975,6 +1989,8 @@ def dynamic_lstm(input,
dtype='float32',
name=None):
"""
:api_attr: Static Graph
**Note**:
1. This OP only supports LoDTensor as input. If you need to deal with Tensor, please use :ref:`api_fluid_layers_lstm` .
2. In order to improve efficiency, users must first map the input of dimension [T, hidden_size] to input of [T, 4 * hidden_size], and then pass it to this OP.
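The `4 * hidden_size` width exists because each LSTM step computes four gate pre-activations (input, forget, cell, and output) from the projected input. A plain-Python sketch of slicing one pre-projected row back into its four gate segments (the helper name and the gate ordering here are illustrative assumptions, not the operator's documented layout):

```python
def split_gates(fused_row, hidden_size):
    """Split one time step of the pre-projected input, of width
    4 * hidden_size, into four equal gate slices."""
    assert len(fused_row) == 4 * hidden_size
    return [fused_row[k * hidden_size:(k + 1) * hidden_size]
            for k in range(4)]
```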
...
...
@@ -2145,6 +2161,8 @@ def lstm(input,
default_initializer=None,
seed=-1):
"""
:api_attr: Static Graph
**Note**:
This OP only supports running on GPU devices.
...
...
@@ -2330,6 +2348,8 @@ def dynamic_lstmp(input,
cell_clip=None,
proj_clip=None):
"""
:api_attr: Static Graph
**Note**:
1. In order to improve efficiency, users must first map the input of dimension [T, hidden_size] to input of [T, 4 * hidden_size], and then pass it to this OP.
...
...
@@ -2539,6 +2559,8 @@ def dynamic_gru(input,
h_0=None,
origin_mode=False):
"""
:api_attr: Static Graph
**Note: The input of this OP must be LoDTensor. If the input to be
processed is Tensor, use** :ref:`api_fluid_layers_StaticRNN` .
...
...
@@ -2691,6 +2713,8 @@ def gru_unit(input,
gate_activation='sigmoid',
origin_mode=False):
"""
:api_attr: Static Graph
Gated Recurrent Unit (GRU) RNN cell. This operator performs GRU calculations for