The cumulative sum of the elements along a given axis. By default, the first element of the result is the same as the first element of the input. If exclusive is true, the first element of the result is 0.
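The inclusive/exclusive distinction described above can be sketched in plain Python (a hypothetical `cumsum` helper illustrating the semantics, not the fluid OP itself):

```python
def cumsum(xs, exclusive=False):
    # Inclusive (default): result[i] = xs[0] + ... + xs[i], so result[0] == xs[0].
    # Exclusive: result is shifted right by one and starts at 0.
    out, total = [], 0
    for x in xs:
        if exclusive:
            out.append(total)
            total += x
        else:
            total += x
            out.append(total)
    return out

print(cumsum([1, 2, 3, 4]))                  # [1, 3, 6, 10]
print(cumsum([1, 2, 3, 4], exclusive=True))  # [0, 1, 3, 6]
```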
RNNCell is the base class for abstractions representing the calculation
that maps an input and state to an output and new state. It is mostly
used in RNN.
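The input/state-to-output/new-state contract can be sketched with a toy cell (a hypothetical accumulator cell, not part of the fluid API, shown only to illustrate the `call` signature):

```python
class ToyCell:
    # Toy RNN cell: new state = state + input, output = new state.
    # Real cells (GRU, LSTM) follow the same (inputs, states) -> (outputs, new_states) shape.
    def call(self, inputs, states):
        new_states = states + inputs
        outputs = new_states
        return outputs, new_states

cell = ToyCell()
out, state = cell.call(2.0, 1.0)  # out == 3.0, state == 3.0
```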
...
@@ -221,6 +223,8 @@ class RNNCell(object):
class GRUCell(RNNCell):
"""
:api_attr: Static Graph
Gated Recurrent Unit cell. It is a wrapper for
`fluid.contrib.layers.rnn_impl.BasicGRUUnit` to make it adapt to RNNCell.
...
@@ -317,6 +321,8 @@ class GRUCell(RNNCell):
class LSTMCell(RNNCell):
"""
:api_attr: Static Graph
Long-Short Term Memory cell. It is a wrapper for
`fluid.contrib.layers.rnn_impl.BasicLSTMUnit` to make it adapt to RNNCell.
...
@@ -431,6 +437,8 @@ def rnn(cell,
is_reverse=False,
**kwargs):
"""
:api_attr: Static Graph
rnn creates a recurrent neural network specified by RNNCell `cell`,
which performs :code:`cell.call()` repeatedly until it reaches the maximum
length of `inputs`.
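Conceptually, this unrolls the cell over the time dimension of `inputs`. A minimal sketch of that control flow (a toy `run_rnn` with a toy cell, assumed names, not the fluid implementation):

```python
class AddCell:
    # Toy cell: new state = state + input, output = new state.
    def call(self, x, state):
        new_state = state + x
        return new_state, new_state

def run_rnn(cell, inputs, initial_state):
    # Call cell.call() once per time step, threading the state through.
    state, outputs = initial_state, []
    for step_input in inputs:
        out, state = cell.call(step_input, state)
        outputs.append(out)
    return outputs, state

outputs, final_state = run_rnn(AddCell(), [1, 2, 3], 0)
# outputs == [1, 3, 6], final_state == 6
```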
...
@@ -575,6 +583,8 @@ def rnn(cell,
class Decoder(object):
"""
:api_attr: Static Graph
Decoder is the base class for any decoder instance used in `dynamic_decode`.
It provides an interface for output generation for one time step, which can be
used to generate sequences.
...
@@ -686,6 +696,8 @@ class Decoder(object):
class BeamSearchDecoder(Decoder):
"""
:api_attr: Static Graph
Decoder with beam search decoding strategy. It wraps a cell to get probabilities,
and follows a beam search step to calculate scores and select candidate
token ids for each decoding step.
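The score-and-select part of a beam search step can be sketched in plain Python (a toy `beam_step` with assumed names, shown only to illustrate the idea of combining beam scores with token log-probabilities and keeping the top candidates):

```python
def beam_step(beam_scores, log_probs, beam_size):
    # beam_scores: current score of each beam.
    # log_probs[b][t]: log-probability of token t continuing beam b.
    # Returns the top beam_size (score, beam, token) candidates.
    candidates = []
    for b, score in enumerate(beam_scores):
        for tok, lp in enumerate(log_probs[b]):
            candidates.append((score + lp, b, tok))
    candidates.sort(reverse=True)
    return candidates[:beam_size]

top = beam_step([0.0, -1.0], [[-0.1, -2.0], [-0.5, -0.6]], beam_size=2)
# top == [(-0.1, 0, 0), (-1.5, 1, 0)]
```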
...
@@ -1153,6 +1165,8 @@ def dynamic_decode(decoder,
return_length=False,
**kwargs):
"""
:api_attr: Static Graph
Dynamic decoding performs :code:`decoder.step()` repeatedly until the returned
Tensor indicating finished status contains all True values or the number of
decoding steps reaches :attr:`max_step_num`.
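That stopping rule can be sketched as a simple loop (hypothetical names, scalar state instead of Tensors, just to show the control flow):

```python
def dynamic_decode_sketch(step_fn, init_state, max_step_num):
    # Call step_fn until it reports finished, or max_step_num steps elapse.
    state, steps, finished, outputs = init_state, 0, False, []
    while not finished and steps < max_step_num:
        out, state, finished = step_fn(state)
        outputs.append(out)
        steps += 1
    return outputs

# Demo step: state counts up; finished once it reaches 3.
def count_step(state):
    state += 1
    return state, state, state >= 3

print(dynamic_decode_sketch(count_step, 0, 10))  # [1, 2, 3]  (stopped by finished)
print(dynamic_decode_sketch(count_step, 0, 2))   # [1, 2]     (stopped by max_step_num)
```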
...
@@ -1975,6 +1989,8 @@ def dynamic_lstm(input,
dtype='float32',
name=None):
"""
:api_attr: Static Graph
**Note**:
1. This OP only supports LoDTensor as input. If you need to deal with Tensor, please use :ref:`api_fluid_layers_lstm` .
2. In order to improve efficiency, users must first map the input of dimension [T, hidden_size] to an input of [T, 4 * hidden_size], and then pass it to this OP.
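The shape change in note 2 is a per-step linear projection (one slice of the wider output per LSTM gate). A pure-Python illustration of the shapes involved, assuming T = 5 and hidden_size = 8:

```python
T, hidden_size = 5, 8
x = [[0.0] * hidden_size for _ in range(T)]              # input: [T, hidden_size]
W = [[0.0] * (4 * hidden_size) for _ in range(hidden_size)]  # weights: [hidden_size, 4 * hidden_size]

def matmul(a, b):
    # Naive matrix multiply, just to make the shape change concrete.
    return [[sum(row[k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for row in a]

proj = matmul(x, W)  # projected input: [T, 4 * hidden_size] -- what the OP expects
```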
...
@@ -2145,6 +2161,8 @@ def lstm(input,
default_initializer=None,
seed=-1):
"""
:api_attr: Static Graph
**Note**:
This OP only supports running on GPU devices.
...
@@ -2330,6 +2348,8 @@ def dynamic_lstmp(input,
cell_clip=None,
proj_clip=None):
"""
:api_attr: Static Graph
**Note**:
1. In order to improve efficiency, users must first map the input of dimension [T, hidden_size] to an input of [T, 4 * hidden_size], and then pass it to this OP.
...
@@ -2539,6 +2559,8 @@ def dynamic_gru(input,
h_0=None,
origin_mode=False):
"""
:api_attr: Static Graph
**Note: The input type of this OP must be LoDTensor. If the input type to be
processed is Tensor, use** :ref:`api_fluid_layers_StaticRNN` .
...
@@ -2691,6 +2713,8 @@ def gru_unit(input,
gate_activation='sigmoid',
origin_mode=False):
"""
:api_attr: Static Graph
Gated Recurrent Unit (GRU) RNN cell. This operator performs GRU calculations for