Commit aaedc573 authored by Eugene Brevdo, committed by Eugene Brevdo

Make RNN api public.

Change: 124305799
Parent e13cd053
......@@ -12,6 +12,8 @@
* Higher level functionality in contrib.{layers,losses,metrics,learn}
* More features to TensorBoard
* Improved support for string embedding and sparse features
* The RNN api is finally "official" (see, e.g., `tf.nn.dynamic_rnn`,
`tf.nn.rnn`, and the classes in `tf.nn.rnn_cell`).
* TensorBoard now has an Audio Dashboard, with associated audio summaries.
## Bug Fixes and Other Changes
......
### `tf.nn.bidirectional_rnn(cell_fw, cell_bw, inputs, initial_state_fw=None, initial_state_bw=None, dtype=None, sequence_length=None, scope=None)` {#bidirectional_rnn}
Creates a bidirectional recurrent neural network.
Similar to the unidirectional case above (rnn) but takes input and builds
independent forward and backward RNNs with the final forward and backward
outputs depth-concatenated, such that the output will have the format
[time][batch][cell_fw.output_size + cell_bw.output_size]. The input_size of
forward and backward cell must match. The initial state for both directions
is zero by default (but can be set optionally) and no intermediate states are
ever returned -- the network is fully unrolled for the given (passed in)
length(s) of the sequence(s) or completely unrolled if length(s) is not given.
##### Args:
* <b>`cell_fw`</b>: An instance of RNNCell, to be used for forward direction.
* <b>`cell_bw`</b>: An instance of RNNCell, to be used for backward direction.
* <b>`inputs`</b>: A length T list of inputs, each a tensor of shape
[batch_size, input_size].
* <b>`initial_state_fw`</b>: (optional) An initial state for the forward RNN.
This must be a tensor of appropriate type and shape
`[batch_size x cell_fw.state_size]`.
If `cell_fw.state_size` is a tuple, this should be a tuple of
tensors having shapes `[batch_size, s] for s in cell_fw.state_size`.
* <b>`initial_state_bw`</b>: (optional) Same as for `initial_state_fw`, but using
the corresponding properties of `cell_bw`.
* <b>`dtype`</b>: (optional) The data type for the initial state. Required if
either of the initial states are not provided.
* <b>`sequence_length`</b>: (optional) An int32/int64 vector, size `[batch_size]`,
containing the actual lengths for each of the sequences.
* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "BiRNN".
##### Returns:
A tuple (outputs, output_state_fw, output_state_bw) where:
outputs is a length `T` list of outputs (one for each input), which
are depth-concatenated forward and backward outputs.
output_state_fw is the final state of the forward rnn.
output_state_bw is the final state of the backward rnn.
##### Raises:
* <b>`TypeError`</b>: If `cell_fw` or `cell_bw` is not an instance of `RNNCell`.
* <b>`ValueError`</b>: If inputs is None or an empty list.
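As a rough usage sketch (the shapes, sizes, and the `GRUCell` choice below are illustrative assumptions, not requirements of this API):

```python
import tensorflow as tf

T, batch_size, input_size, num_units = 10, 32, 8, 16

# A length-T list of [batch_size, input_size] tensors, as required above.
inputs = [tf.placeholder(tf.float32, [batch_size, input_size])
          for _ in range(T)]

cell_fw = tf.nn.rnn_cell.GRUCell(num_units)
cell_bw = tf.nn.rnn_cell.GRUCell(num_units)

# Each output is depth-concatenated: cell_fw.output_size +
# cell_bw.output_size = 32 columns per time step.
outputs, state_fw, state_bw = tf.nn.bidirectional_rnn(
    cell_fw, cell_bw, inputs, dtype=tf.float32)
```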
### `tf.nn.rnn(cell, inputs, initial_state=None, dtype=None, sequence_length=None, scope=None)` {#rnn}
Creates a recurrent neural network specified by RNNCell `cell`.
##### The simplest form of RNN network generated is:

    state = cell.zero_state(...)
    outputs = []
    for input_ in inputs:
      output, state = cell(input_, state)
      outputs.append(output)
    return (outputs, state)
However, a few other options are available:
An initial state can be provided.
If the sequence_length vector is provided, dynamic calculation is performed.
This method of calculation does not compute the RNN steps past the maximum
sequence length of the minibatch (thus saving computational time),
and properly propagates the state at an example's sequence length
to the final state output.
The dynamic calculation performed is, at time `t` for batch row `b`,

    (output, state)(b, t) =
      (t >= sequence_length(b))
        ? (zeros(cell.output_size), states(b, sequence_length(b) - 1))
        : cell(input(b, t), state(b, t - 1))
##### Args:
* <b>`cell`</b>: An instance of RNNCell.
* <b>`inputs`</b>: A length T list of inputs, each a tensor of shape
[batch_size, input_size].
* <b>`initial_state`</b>: (optional) An initial state for the RNN.
If `cell.state_size` is an integer, this must be
a tensor of appropriate type and shape `[batch_size x cell.state_size]`.
If `cell.state_size` is a tuple, this should be a tuple of
tensors having shapes `[batch_size, s] for s in cell.state_size`.
* <b>`dtype`</b>: (optional) The data type for the initial state. Required if
initial_state is not provided.
* <b>`sequence_length`</b>: Specifies the length of each sequence in inputs.
An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.
* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "RNN".
##### Returns:
A pair (outputs, state) where:
- outputs is a length T list of outputs (one for each input)
- state is the final state
##### Raises:
* <b>`TypeError`</b>: If `cell` is not an instance of RNNCell.
* <b>`ValueError`</b>: If `inputs` is `None` or an empty list, or if the input depth
(column size) cannot be inferred from inputs via shape inference.
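A minimal usage sketch, assuming illustrative shapes (not mandated by the API): build the length-`T` input list, pick a cell, and optionally pass `sequence_length` to enable the dynamic early-stop calculation described above.

```python
import tensorflow as tf

T, batch_size, input_size = 5, 4, 3
inputs = [tf.placeholder(tf.float32, [batch_size, input_size])
          for _ in range(T)]

cell = tf.nn.rnn_cell.BasicRNNCell(num_units=7)

# sequence_length is optional; without it the network is fully unrolled.
seq_len = tf.placeholder(tf.int32, [batch_size])
outputs, final_state = tf.nn.rnn(
    cell, inputs, sequence_length=seq_len, dtype=tf.float32)
```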
Operator adding input embedding to the given cell.
Note: in many cases it may be more efficient to not use this wrapper,
but instead concatenate the whole sequence of your inputs in time,
do the embedding on this batch-concatenated sequence, then split it and
feed into your RNN.
- - -
#### `tf.nn.rnn_cell.EmbeddingWrapper.__init__(cell, embedding_classes, embedding_size, initializer=None)` {#EmbeddingWrapper.__init__}
Create a cell with an added input embedding.
##### Args:
* <b>`cell`</b>: an RNNCell, an embedding will be put before its inputs.
* <b>`embedding_classes`</b>: integer, how many symbols will be embedded.
* <b>`embedding_size`</b>: integer, the size of the vectors we embed into.
* <b>`initializer`</b>: an initializer to use when creating the embedding;
if None, the initializer from variable scope or a default one is used.
##### Raises:
* <b>`TypeError`</b>: if cell is not an RNNCell.
* <b>`ValueError`</b>: if embedding_classes is not positive.
- - -
#### `tf.nn.rnn_cell.EmbeddingWrapper.output_size` {#EmbeddingWrapper.output_size}
Integer: size of outputs produced by this cell.
- - -
#### `tf.nn.rnn_cell.EmbeddingWrapper.state_size` {#EmbeddingWrapper.state_size}
- - -
#### `tf.nn.rnn_cell.EmbeddingWrapper.zero_state(batch_size, dtype)` {#EmbeddingWrapper.zero_state}
Return zero-filled state tensor(s).
##### Args:
* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
* <b>`dtype`</b>: the data type to use for the state.
##### Returns:
If `state_size` is an int, then the return value is a `2-D` tensor of
shape `[batch_size x state_size]` filled with zeros.
If `state_size` is a nested list or tuple, then the return value is
a nested list or tuple (of the same structure) of `2-D` tensors with
the shapes `[batch_size x s]` for each s in `state_size`.
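The note at the top of this entry recommends embedding the whole sequence at once rather than using the wrapper; a hedged sketch of that pattern (the shapes and the variable name `embedding` are illustrative) might look like:

```python
import tensorflow as tf

T, batch_size = 6, 8
embedding_classes, embedding_size = 1000, 32

# Integer symbol ids for the whole sequence, shape [batch_size, T].
ids = tf.placeholder(tf.int32, [batch_size, T])
embedding = tf.get_variable(
    "embedding", [embedding_classes, embedding_size])

# One lookup over the whole batch-concatenated sequence, then split into
# the length-T list of [batch_size, embedding_size] tensors rnn() expects.
embedded = tf.nn.embedding_lookup(embedding, ids)    # [batch, T, emb]
inputs = [tf.squeeze(t, [1]) for t in tf.split(1, T, embedded)]
```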
Operator adding an output projection to the given cell.
Note: in many cases it may be more efficient to not use this wrapper,
but instead concatenate the whole sequence of your outputs in time,
do the projection on this batch-concatenated sequence, then split it
if needed or directly feed into a softmax.
- - -
#### `tf.nn.rnn_cell.OutputProjectionWrapper.__init__(cell, output_size)` {#OutputProjectionWrapper.__init__}
Create a cell with output projection.
##### Args:
* <b>`cell`</b>: an RNNCell, a projection to output_size is added to it.
* <b>`output_size`</b>: integer, the size of the output after projection.
##### Raises:
* <b>`TypeError`</b>: if cell is not an RNNCell.
* <b>`ValueError`</b>: if output_size is not positive.
- - -
#### `tf.nn.rnn_cell.OutputProjectionWrapper.output_size` {#OutputProjectionWrapper.output_size}
- - -
#### `tf.nn.rnn_cell.OutputProjectionWrapper.state_size` {#OutputProjectionWrapper.state_size}
- - -
#### `tf.nn.rnn_cell.OutputProjectionWrapper.zero_state(batch_size, dtype)` {#OutputProjectionWrapper.zero_state}
Return zero-filled state tensor(s).
##### Args:
* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
* <b>`dtype`</b>: the data type to use for the state.
##### Returns:
If `state_size` is an int, then the return value is a `2-D` tensor of
shape `[batch_size x state_size]` filled with zeros.
If `state_size` is a nested list or tuple, then the return value is
a nested list or tuple (of the same structure) of `2-D` tensors with
the shapes `[batch_size x s]` for each s in `state_size`.
The most basic RNN cell.
- - -
#### `tf.nn.rnn_cell.BasicRNNCell.__init__(num_units, input_size=None, activation=tanh)` {#BasicRNNCell.__init__}
- - -
#### `tf.nn.rnn_cell.BasicRNNCell.output_size` {#BasicRNNCell.output_size}
- - -
#### `tf.nn.rnn_cell.BasicRNNCell.state_size` {#BasicRNNCell.state_size}
- - -
#### `tf.nn.rnn_cell.BasicRNNCell.zero_state(batch_size, dtype)` {#BasicRNNCell.zero_state}
Return zero-filled state tensor(s).
##### Args:
* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
* <b>`dtype`</b>: the data type to use for the state.
##### Returns:
If `state_size` is an int, then the return value is a `2-D` tensor of
shape `[batch_size x state_size]` filled with zeros.
If `state_size` is a nested list or tuple, then the return value is
a nested list or tuple (of the same structure) of `2-D` tensors with
the shapes `[batch_size x s]` for each s in `state_size`.
Operator adding dropout to inputs and outputs of the given cell.
- - -
#### `tf.nn.rnn_cell.DropoutWrapper.__init__(cell, input_keep_prob=1.0, output_keep_prob=1.0, seed=None)` {#DropoutWrapper.__init__}
Create a cell with added input and/or output dropout.
Dropout is never used on the state.
##### Args:
* <b>`cell`</b>: an RNNCell; dropout is added to its inputs and/or outputs.
* <b>`input_keep_prob`</b>: unit Tensor or float between 0 and 1, input keep
probability; if it is float and 1, no input dropout will be added.
* <b>`output_keep_prob`</b>: unit Tensor or float between 0 and 1, output keep
probability; if it is float and 1, no output dropout will be added.
* <b>`seed`</b>: (optional) integer, the randomness seed.
##### Raises:
* <b>`TypeError`</b>: if cell is not an RNNCell.
* <b>`ValueError`</b>: if keep_prob is not between 0 and 1.
- - -
#### `tf.nn.rnn_cell.DropoutWrapper.output_size` {#DropoutWrapper.output_size}
- - -
#### `tf.nn.rnn_cell.DropoutWrapper.state_size` {#DropoutWrapper.state_size}
- - -
#### `tf.nn.rnn_cell.DropoutWrapper.zero_state(batch_size, dtype)` {#DropoutWrapper.zero_state}
Return zero-filled state tensor(s).
##### Args:
* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
* <b>`dtype`</b>: the data type to use for the state.
##### Returns:
If `state_size` is an int, then the return value is a `2-D` tensor of
shape `[batch_size x state_size]` filled with zeros.
If `state_size` is a nested list or tuple, then the return value is
a nested list or tuple (of the same structure) of `2-D` tensors with
the shapes `[batch_size x s]` for each s in `state_size`.
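A minimal sketch of wrapping a cell with dropout; feeding the keep probability through a placeholder (a common convention, not something this API requires) lets you disable dropout at evaluation time by feeding 1.0:

```python
import tensorflow as tf

base_cell = tf.nn.rnn_cell.GRUCell(128)

# keep_prob may be a float or a scalar Tensor; dropout never touches the state.
keep_prob = tf.placeholder(tf.float32, [])
cell = tf.nn.rnn_cell.DropoutWrapper(
    base_cell, input_keep_prob=keep_prob, output_keep_prob=keep_prob)
```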
Gated Recurrent Unit cell (cf. http://arxiv.org/abs/1406.1078).
- - -
#### `tf.nn.rnn_cell.GRUCell.__init__(num_units, input_size=None, activation=tanh)` {#GRUCell.__init__}
- - -
#### `tf.nn.rnn_cell.GRUCell.output_size` {#GRUCell.output_size}
- - -
#### `tf.nn.rnn_cell.GRUCell.state_size` {#GRUCell.state_size}
- - -
#### `tf.nn.rnn_cell.GRUCell.zero_state(batch_size, dtype)` {#GRUCell.zero_state}
Return zero-filled state tensor(s).
##### Args:
* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
* <b>`dtype`</b>: the data type to use for the state.
##### Returns:
If `state_size` is an int, then the return value is a `2-D` tensor of
shape `[batch_size x state_size]` filled with zeros.
If `state_size` is a nested list or tuple, then the return value is
a nested list or tuple (of the same structure) of `2-D` tensors with
the shapes `[batch_size x s]` for each s in `state_size`.
Operator adding an input projection to the given cell.
Note: in many cases it may be more efficient to not use this wrapper,
but instead concatenate the whole sequence of your inputs in time,
do the projection on this batch-concatenated sequence, then split it.
- - -
#### `tf.nn.rnn_cell.InputProjectionWrapper.__init__(cell, num_proj, input_size=None)` {#InputProjectionWrapper.__init__}
Create a cell with input projection.
##### Args:
* <b>`cell`</b>: an RNNCell, a projection of inputs is added before it.
* <b>`num_proj`</b>: Python integer. The dimension to project to.
* <b>`input_size`</b>: Deprecated and unused.
##### Raises:
* <b>`TypeError`</b>: if cell is not an RNNCell.
- - -
#### `tf.nn.rnn_cell.InputProjectionWrapper.output_size` {#InputProjectionWrapper.output_size}
- - -
#### `tf.nn.rnn_cell.InputProjectionWrapper.state_size` {#InputProjectionWrapper.state_size}
- - -
#### `tf.nn.rnn_cell.InputProjectionWrapper.zero_state(batch_size, dtype)` {#InputProjectionWrapper.zero_state}
Return zero-filled state tensor(s).
##### Args:
* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
* <b>`dtype`</b>: the data type to use for the state.
##### Returns:
If `state_size` is an int, then the return value is a `2-D` tensor of
shape `[batch_size x state_size]` filled with zeros.
If `state_size` is a nested list or tuple, then the return value is
a nested list or tuple (of the same structure) of `2-D` tensors with
the shapes `[batch_size x s]` for each s in `state_size`.
Long short-term memory unit (LSTM) recurrent network cell.
The default non-peephole implementation is based on:
http://deeplearning.cs.cmu.edu/pdfs/Hochreiter97_lstm.pdf
S. Hochreiter and J. Schmidhuber.
"Long Short-Term Memory". Neural Computation, 9(8):1735-1780, 1997.
The peephole implementation is based on:
https://research.google.com/pubs/archive/43905.pdf
Hasim Sak, Andrew Senior, and Francoise Beaufays.
"Long short-term memory recurrent neural network architectures for
large scale acoustic modeling." INTERSPEECH, 2014.
The class uses optional peep-hole connections, optional cell clipping, and
an optional projection layer.
- - -
#### `tf.nn.rnn_cell.LSTMCell.__init__(num_units, input_size=None, use_peepholes=False, cell_clip=None, initializer=None, num_proj=None, num_unit_shards=1, num_proj_shards=1, forget_bias=1.0, state_is_tuple=False, activation=tanh)` {#LSTMCell.__init__}
Initialize the parameters for an LSTM cell.
##### Args:
* <b>`num_units`</b>: int, The number of units in the LSTM cell
* <b>`input_size`</b>: Deprecated and unused.
* <b>`use_peepholes`</b>: bool, set True to enable diagonal/peephole connections.
* <b>`cell_clip`</b>: (optional) A float value, if provided the cell state is clipped
by this value prior to the cell output activation.
* <b>`initializer`</b>: (optional) The initializer to use for the weight and
projection matrices.
* <b>`num_proj`</b>: (optional) int, The output dimensionality for the projection
matrices. If None, no projection is performed.
* <b>`num_unit_shards`</b>: How to split the weight matrix. If >1, the weight
matrix is stored across num_unit_shards.
* <b>`num_proj_shards`</b>: How to split the projection matrix. If >1, the
projection matrix is stored across num_proj_shards.
* <b>`forget_bias`</b>: Biases of the forget gate are initialized by default to 1
in order to reduce the scale of forgetting at the beginning of
the training.
* <b>`state_is_tuple`</b>: If True, accepted and returned states are 2-tuples of
the `c_state` and `m_state`. By default (False), they are concatenated
along the column axis. This default behavior will soon be deprecated.
* <b>`activation`</b>: Activation function of the inner states.
- - -
#### `tf.nn.rnn_cell.LSTMCell.output_size` {#LSTMCell.output_size}
- - -
#### `tf.nn.rnn_cell.LSTMCell.state_size` {#LSTMCell.state_size}
- - -
#### `tf.nn.rnn_cell.LSTMCell.zero_state(batch_size, dtype)` {#LSTMCell.zero_state}
Return zero-filled state tensor(s).
##### Args:
* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
* <b>`dtype`</b>: the data type to use for the state.
##### Returns:
If `state_size` is an int, then the return value is a `2-D` tensor of
shape `[batch_size x state_size]` filled with zeros.
If `state_size` is a nested list or tuple, then the return value is
a nested list or tuple (of the same structure) of `2-D` tensors with
the shapes `[batch_size x s]` for each s in `state_size`.
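As a hedged construction sketch (the sizes are illustrative): with `num_proj` set, the cell's `output_size` becomes `num_proj`, and with `state_is_tuple=True` the state is a `(c, h)` pair of sizes `(num_units, num_proj)`.

```python
import tensorflow as tf

# Peepholes, cell clipping, and an output projection from 256 to 128 units.
cell = tf.nn.rnn_cell.LSTMCell(
    num_units=256, use_peepholes=True, cell_clip=10.0,
    num_proj=128, state_is_tuple=True)
```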
### `tf.nn.state_saving_rnn(cell, inputs, state_saver, state_name, sequence_length=None, scope=None)` {#state_saving_rnn}
RNN that accepts a state saver for time-truncated RNN calculation.
##### Args:
* <b>`cell`</b>: An instance of `RNNCell`.
* <b>`inputs`</b>: A length T list of inputs, each a tensor of shape
`[batch_size, input_size]`.
* <b>`state_saver`</b>: A state saver object with methods `state` and `save_state`.
* <b>`state_name`</b>: Python string or tuple of strings. The name to use with the
state_saver. If the cell returns tuples of states (i.e.,
`cell.state_size` is a tuple) then `state_name` should be a tuple of
strings having the same length as `cell.state_size`. Otherwise it should
be a single string.
* <b>`sequence_length`</b>: (optional) An int32/int64 vector size [batch_size].
See the documentation for rnn() for more details about sequence_length.
* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "RNN".
##### Returns:
A pair (outputs, state) where:
outputs is a length T list of outputs (one for each input)
state is the final state
##### Raises:
* <b>`TypeError`</b>: If `cell` is not an instance of RNNCell.
* <b>`ValueError`</b>: If `inputs` is `None` or an empty list, or if the arity and
type of `state_name` does not match that of `cell.state_size`.
Basic LSTM recurrent network cell.
The implementation is based on: http://arxiv.org/abs/1409.2329.
We add forget_bias (default: 1) to the biases of the forget gate in order to
reduce the scale of forgetting in the beginning of the training.
It does not allow cell clipping or a projection layer, and it does not
use peep-hole connections: it is the basic baseline.
For advanced models, please use the full LSTMCell that follows.
- - -
#### `tf.nn.rnn_cell.BasicLSTMCell.__init__(num_units, forget_bias=1.0, input_size=None, state_is_tuple=False, activation=tanh)` {#BasicLSTMCell.__init__}
Initialize the basic LSTM cell.
##### Args:
* <b>`num_units`</b>: int, The number of units in the LSTM cell.
* <b>`forget_bias`</b>: float, The bias added to forget gates (see above).
* <b>`input_size`</b>: Deprecated and unused.
* <b>`state_is_tuple`</b>: If True, accepted and returned states are 2-tuples of
the `c_state` and `m_state`. By default (False), they are concatenated
along the column axis. This default behavior will soon be deprecated.
* <b>`activation`</b>: Activation function of the inner states.
- - -
#### `tf.nn.rnn_cell.BasicLSTMCell.output_size` {#BasicLSTMCell.output_size}
- - -
#### `tf.nn.rnn_cell.BasicLSTMCell.state_size` {#BasicLSTMCell.state_size}
- - -
#### `tf.nn.rnn_cell.BasicLSTMCell.zero_state(batch_size, dtype)` {#BasicLSTMCell.zero_state}
Return zero-filled state tensor(s).
##### Args:
* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
* <b>`dtype`</b>: the data type to use for the state.
##### Returns:
If `state_size` is an int, then the return value is a `2-D` tensor of
shape `[batch_size x state_size]` filled with zeros.
If `state_size` is a nested list or tuple, then the return value is
a nested list or tuple (of the same structure) of `2-D` tensors with
the shapes `[batch_size x s]` for each s in `state_size`.
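A brief sketch of one manual step with `state_is_tuple=True` (shapes are illustrative); the state is then a `(c_state, m_state)` pair rather than a single concatenated tensor:

```python
import tensorflow as tf

batch_size = 16
cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=64, state_is_tuple=True)

x = tf.placeholder(tf.float32, [batch_size, 10])
state = cell.zero_state(batch_size, tf.float32)   # (c, m) tuple of zeros
output, state = cell(x, state)                    # one RNN step
```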
### `tf.nn.dynamic_rnn(cell, inputs, sequence_length=None, initial_state=None, dtype=None, parallel_iterations=None, swap_memory=False, time_major=False, scope=None)` {#dynamic_rnn}
Creates a recurrent neural network specified by RNNCell `cell`.
This function is functionally identical to the function `rnn` above, but
performs fully dynamic unrolling of `inputs`.
Unlike `rnn`, the input `inputs` is not a Python list of `Tensors`. Instead,
it is a single `Tensor` where the maximum time is either the first or second
dimension (see the parameter `time_major`). The corresponding output is
a single `Tensor` having the same number of time steps and batch size.
The parameter `sequence_length` is required and dynamic calculation is
automatically performed.
##### Args:
* <b>`cell`</b>: An instance of RNNCell.
* <b>`inputs`</b>: The RNN inputs.
If time_major == False (default), this must be a tensor of shape:
`[batch_size, max_time, input_size]`.
If time_major == True, this must be a tensor of shape:
`[max_time, batch_size, input_size]`.
* <b>`sequence_length`</b>: (optional) An int32/int64 vector sized `[batch_size]`.
* <b>`initial_state`</b>: (optional) An initial state for the RNN.
If `cell.state_size` is an integer, this must be
a tensor of appropriate type and shape `[batch_size x cell.state_size]`.
If `cell.state_size` is a tuple, this should be a tuple of
tensors having shapes `[batch_size, s] for s in cell.state_size`.
* <b>`dtype`</b>: (optional) The data type for the initial state. Required if
initial_state is not provided.
* <b>`parallel_iterations`</b>: (Default: 32). The number of iterations to run in
parallel. Those operations that do not have any temporal dependency and
can be run in parallel will be. This parameter trades off
time for space. Values >> 1 use more memory but take less time,
while smaller values use less memory but computations take longer.
* <b>`swap_memory`</b>: Transparently swap the tensors produced in forward inference
but needed for back prop from GPU to CPU. This allows training RNNs
which would typically not fit on a single GPU, with very minimal (or no)
performance penalty.
* <b>`time_major`</b>: The shape format of the `inputs` and `outputs` Tensors.
If true, these `Tensors` must be shaped `[max_time, batch_size, depth]`.
If false, these `Tensors` must be shaped `[batch_size, max_time, depth]`.
Using `time_major = True` is a bit more efficient because it avoids
transposes at the beginning and end of the RNN calculation. However,
most TensorFlow data is batch-major, so by default this function
accepts input and emits output in batch-major form.
* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "RNN".
##### Returns:
A pair (outputs, state) where:
* <b>`outputs`</b>: The RNN output `Tensor`.
If time_major == False (default), this will be a `Tensor` shaped:
`[batch_size, max_time, cell.output_size]`.
If time_major == True, this will be a `Tensor` shaped:
`[max_time, batch_size, cell.output_size]`.
* <b>`state`</b>: The final state. If `cell.state_size` is a `Tensor`, this
will be shaped `[batch_size, cell.state_size]`. If it is a tuple,
this will be a tuple with shapes `[batch_size, s] for s in cell.state_size`.
##### Raises:
* <b>`TypeError`</b>: If `cell` is not an instance of RNNCell.
* <b>`ValueError`</b>: If inputs is None or an empty list.
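A hedged end-to-end sketch in the default batch-major form (shapes and sizes are illustrative assumptions):

```python
import tensorflow as tf

batch_size, max_time, input_size = 32, 20, 50

# Batch-major input, the default (time_major=False).
x = tf.placeholder(tf.float32, [batch_size, max_time, input_size])
seq_len = tf.placeholder(tf.int32, [batch_size])

cell = tf.nn.rnn_cell.LSTMCell(100, state_is_tuple=True)
outputs, state = tf.nn.dynamic_rnn(
    cell, x, sequence_length=seq_len, dtype=tf.float32)
# outputs: [batch_size, max_time, 100]; steps past each example's length
# are zero-filled, and `state` reflects the last valid step.
```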
Tuple used by LSTM Cells for `state_size`, `zero_state`, and output state.
Stores two elements: `(c, h)`, in that order.
Only used when `state_is_tuple=True`.
- - -
#### `tf.nn.rnn_cell.LSTMStateTuple.c` {#LSTMStateTuple.c}
Alias for field number 0
- - -
#### `tf.nn.rnn_cell.LSTMStateTuple.h` {#LSTMStateTuple.h}
Alias for field number 1
RNN cell composed sequentially of multiple simple cells.
- - -
#### `tf.nn.rnn_cell.MultiRNNCell.__init__(cells, state_is_tuple=False)` {#MultiRNNCell.__init__}
Create an RNN cell composed sequentially of a number of RNNCells.
##### Args:
* <b>`cells`</b>: list of RNNCells that will be composed in this order.
* <b>`state_is_tuple`</b>: If True, accepted and returned states are n-tuples, where
`n = len(cells)`. By default (False), the states are all
concatenated along the column axis.
##### Raises:
* <b>`ValueError`</b>: if cells is empty (not allowed), or at least one of the cells
returns a state tuple but the flag `state_is_tuple` is `False`.
- - -
#### `tf.nn.rnn_cell.MultiRNNCell.output_size` {#MultiRNNCell.output_size}
- - -
#### `tf.nn.rnn_cell.MultiRNNCell.state_size` {#MultiRNNCell.state_size}
- - -
#### `tf.nn.rnn_cell.MultiRNNCell.zero_state(batch_size, dtype)` {#MultiRNNCell.zero_state}
Return zero-filled state tensor(s).
##### Args:
* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
* <b>`dtype`</b>: the data type to use for the state.
##### Returns:
If `state_size` is an int, then the return value is a `2-D` tensor of
shape `[batch_size x state_size]` filled with zeros.
If `state_size` is a nested list or tuple, then the return value is
a nested list or tuple (of the same structure) of `2-D` tensors with
the shapes `[batch_size x s]` for each s in `state_size`.
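A short stacking sketch (three layers and 128 units are arbitrary choices); with `state_is_tuple=True` the combined state is an n-tuple of per-layer states:

```python
import tensorflow as tf

def make_layer():
  return tf.nn.rnn_cell.BasicLSTMCell(128, state_is_tuple=True)

# Three LSTM layers applied in sequence at every time step.
stacked_cell = tf.nn.rnn_cell.MultiRNNCell(
    [make_layer() for _ in range(3)], state_is_tuple=True)
```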
Abstract object representing an RNN cell.
An RNN cell, in the most abstract setting, is anything that has
a state and performs some operation that takes a matrix of inputs.
This operation results in an output matrix with `self.output_size` columns.
If `self.state_size` is an integer, this operation also results in a new
state matrix with `self.state_size` columns. If `self.state_size` is a
tuple of integers, then it results in a tuple of `len(state_size)` state
matrices, each with a column size corresponding to the values in `state_size`.
This module provides a number of basic, commonly used RNN cells, such as
LSTM (Long Short Term Memory) or GRU (Gated Recurrent Unit), and a number
of operators that allow adding dropout, projections, or embeddings to inputs.
Constructing multi-layer cells is supported by the class `MultiRNNCell`,
or by calling the `rnn` ops several times. Every `RNNCell` must have the
properties below and implement `__call__` with the following signature.
- - -
#### `tf.nn.rnn_cell.RNNCell.output_size` {#RNNCell.output_size}
Integer: size of outputs produced by this cell.
- - -
#### `tf.nn.rnn_cell.RNNCell.state_size` {#RNNCell.state_size}
Integer or tuple of integers: size(s) of state(s) used by this cell.
- - -
#### `tf.nn.rnn_cell.RNNCell.zero_state(batch_size, dtype)` {#RNNCell.zero_state}
Return zero-filled state tensor(s).
##### Args:
* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
* <b>`dtype`</b>: the data type to use for the state.
##### Returns:
If `state_size` is an int, then the return value is a `2-D` tensor of
shape `[batch_size x state_size]` filled with zeros.
If `state_size` is a nested list or tuple, then the return value is
a nested list or tuple (of the same structure) of `2-D` tensors with
the shapes `[batch_size x s]` for each s in `state_size`.
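To make the `__call__` contract concrete, here is a hedged sketch of a minimal custom cell (the class name and the tanh update rule are illustrative, not part of this module):

```python
import tensorflow as tf

class PlusStateCell(tf.nn.rnn_cell.RNNCell):
  """Toy cell: new state and output are tanh(inputs * W + state)."""

  def __init__(self, num_units):
    self._num_units = num_units

  @property
  def state_size(self):
    return self._num_units

  @property
  def output_size(self):
    return self._num_units

  def __call__(self, inputs, state, scope=None):
    with tf.variable_scope(scope or type(self).__name__):
      w = tf.get_variable(
          "w", [inputs.get_shape()[1].value, self._num_units])
      output = tf.tanh(tf.matmul(inputs, w) + state)
    return output, output   # the output doubles as the new state
```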
......@@ -416,6 +416,7 @@
* [`avg_pool3d`](../../api_docs/python/nn.md#avg_pool3d)
* [`batch_normalization`](../../api_docs/python/nn.md#batch_normalization)
* [`bias_add`](../../api_docs/python/nn.md#bias_add)
* [`bidirectional_rnn`](../../api_docs/python/nn.md#bidirectional_rnn)
* [`compute_accidental_hits`](../../api_docs/python/nn.md#compute_accidental_hits)
* [`conv2d`](../../api_docs/python/nn.md#conv2d)
* [`conv2d_transpose`](../../api_docs/python/nn.md#conv2d_transpose)
......@@ -424,6 +425,7 @@
* [`depthwise_conv2d_native`](../../api_docs/python/nn.md#depthwise_conv2d_native)
* [`dilation2d`](../../api_docs/python/nn.md#dilation2d)
* [`dropout`](../../api_docs/python/nn.md#dropout)
* [`dynamic_rnn`](../../api_docs/python/nn.md#dynamic_rnn)
* [`elu`](../../api_docs/python/nn.md#elu)
* [`embedding_lookup`](../../api_docs/python/nn.md#embedding_lookup)
* [`embedding_lookup_sparse`](../../api_docs/python/nn.md#embedding_lookup_sparse)
......@@ -444,6 +446,7 @@
* [`normalize_moments`](../../api_docs/python/nn.md#normalize_moments)
* [`relu`](../../api_docs/python/nn.md#relu)
* [`relu6`](../../api_docs/python/nn.md#relu6)
* [`rnn`](../../api_docs/python/nn.md#rnn)
* [`sampled_softmax_loss`](../../api_docs/python/nn.md#sampled_softmax_loss)
* [`separable_conv2d`](../../api_docs/python/nn.md#separable_conv2d)
* [`sigmoid`](../../api_docs/python/nn.md#sigmoid)
......@@ -453,12 +456,26 @@
* [`softplus`](../../api_docs/python/nn.md#softplus)
* [`softsign`](../../api_docs/python/nn.md#softsign)
* [`sparse_softmax_cross_entropy_with_logits`](../../api_docs/python/nn.md#sparse_softmax_cross_entropy_with_logits)
* [`state_saving_rnn`](../../api_docs/python/nn.md#state_saving_rnn)
* [`sufficient_statistics`](../../api_docs/python/nn.md#sufficient_statistics)
* [`tanh`](../../api_docs/python/nn.md#tanh)
* [`top_k`](../../api_docs/python/nn.md#top_k)
* [`uniform_candidate_sampler`](../../api_docs/python/nn.md#uniform_candidate_sampler)
* [`weighted_cross_entropy_with_logits`](../../api_docs/python/nn.md#weighted_cross_entropy_with_logits)
* **[Neural Network RNN Cells](../../api_docs/python/rnn_cell.md)**:
* [`BasicLSTMCell`](../../api_docs/python/rnn_cell.md#BasicLSTMCell)
* [`BasicRNNCell`](../../api_docs/python/rnn_cell.md#BasicRNNCell)
* [`DropoutWrapper`](../../api_docs/python/rnn_cell.md#DropoutWrapper)
* [`EmbeddingWrapper`](../../api_docs/python/rnn_cell.md#EmbeddingWrapper)
* [`GRUCell`](../../api_docs/python/rnn_cell.md#GRUCell)
* [`InputProjectionWrapper`](../../api_docs/python/rnn_cell.md#InputProjectionWrapper)
* [`LSTMCell`](../../api_docs/python/rnn_cell.md#LSTMCell)
* [`LSTMStateTuple`](../../api_docs/python/rnn_cell.md#LSTMStateTuple)
* [`MultiRNNCell`](../../api_docs/python/rnn_cell.md#MultiRNNCell)
* [`OutputProjectionWrapper`](../../api_docs/python/rnn_cell.md#OutputProjectionWrapper)
* [`RNNCell`](../../api_docs/python/rnn_cell.md#RNNCell)
* **[Running Graphs](../../api_docs/python/client.md)**:
* [`AbortedError`](../../api_docs/python/client.md#AbortedError)
* [`AlreadyExistsError`](../../api_docs/python/client.md#AlreadyExistsError)
......
......@@ -1442,6 +1442,232 @@ is the sum of the size of params along dimension 0.
## Recurrent Neural Networks
TensorFlow provides a number of methods for constructing Recurrent
Neural Networks. Most accept an `RNNCell`-subclassed object
(see the documentation for `tf.nn.rnn_cell`).
- - -
### `tf.nn.dynamic_rnn(cell, inputs, sequence_length=None, initial_state=None, dtype=None, parallel_iterations=None, swap_memory=False, time_major=False, scope=None)` {#dynamic_rnn}
Creates a recurrent neural network specified by RNNCell `cell`.
This function is functionally identical to the function `rnn` above, but
performs fully dynamic unrolling of `inputs`.
Unlike `rnn`, the input `inputs` is not a Python list of `Tensors`. Instead,
it is a single `Tensor` where the maximum time is either the first or second
dimension (see the parameter `time_major`). The corresponding output is
a single `Tensor` having the same number of time steps and batch size.
The parameter `sequence_length` is required and dynamic calculation is
automatically performed.
##### Args:
* <b>`cell`</b>: An instance of RNNCell.
* <b>`inputs`</b>: The RNN inputs.
If time_major == False (default), this must be a tensor of shape:
`[batch_size, max_time, input_size]`.
If time_major == True, this must be a tensor of shape:
`[max_time, batch_size, input_size]`.
* <b>`sequence_length`</b>: (optional) An int32/int64 vector sized `[batch_size]`.
* <b>`initial_state`</b>: (optional) An initial state for the RNN.
If `cell.state_size` is an integer, this must be
a tensor of appropriate type and shape `[batch_size x cell.state_size]`.
If `cell.state_size` is a tuple, this should be a tuple of
tensors having shapes `[batch_size, s] for s in cell.state_size`.
* <b>`dtype`</b>: (optional) The data type for the initial state. Required if
initial_state is not provided.
* <b>`parallel_iterations`</b>: (Default: 32). The number of iterations to run in
parallel. Those operations that do not have any temporal dependency and
can be run in parallel will be. This parameter trades off
time for space. Values >> 1 use more memory but take less time,
while smaller values use less memory but computations take longer.
* <b>`swap_memory`</b>: Transparently swap the tensors produced in forward inference
but needed for back prop from GPU to CPU. This allows training RNNs
which would typically not fit on a single GPU, with very minimal (or no)
performance penalty.
* <b>`time_major`</b>: The shape format of the `inputs` and `outputs` Tensors.
If true, these `Tensors` must be shaped `[max_time, batch_size, depth]`.
If false, these `Tensors` must be shaped `[batch_size, max_time, depth]`.
Using `time_major = True` is a bit more efficient because it avoids
transposes at the beginning and end of the RNN calculation. However,
most TensorFlow data is batch-major, so by default this function
accepts input and emits output in batch-major form.
* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "RNN".
##### Returns:
A pair (outputs, state) where:
* <b>`outputs`</b>: The RNN output `Tensor`.
If time_major == False (default), this will be a `Tensor` shaped:
`[batch_size, max_time, cell.output_size]`.
If time_major == True, this will be a `Tensor` shaped:
`[max_time, batch_size, cell.output_size]`.
* <b>`state`</b>: The final state. If `cell.state_size` is a `Tensor`, this
will be shaped `[batch_size, cell.state_size]`. If it is a tuple,
this will be a tuple with shapes `[batch_size, s] for s in cell.state_size`.
##### Raises:
* <b>`TypeError`</b>: If `cell` is not an instance of RNNCell.
* <b>`ValueError`</b>: If inputs is None or an empty list.
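Since `time_major=True` skips the internal transposes, a hedged sketch of the time-major form (shapes are illustrative) is:

```python
import tensorflow as tf

max_time, batch_size, depth = 20, 32, 50
x = tf.placeholder(tf.float32, [max_time, batch_size, depth])

cell = tf.nn.rnn_cell.GRUCell(64)
outputs, state = tf.nn.dynamic_rnn(
    cell, x, dtype=tf.float32, time_major=True)
# outputs: [max_time, batch_size, 64]; state: [batch_size, 64].
```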
- - -
### `tf.nn.rnn(cell, inputs, initial_state=None, dtype=None, sequence_length=None, scope=None)` {#rnn}
Creates a recurrent neural network specified by RNNCell `cell`.
##### The simplest form of RNN network generated is:

    state = cell.zero_state(...)
    outputs = []
    for input_ in inputs:
      output, state = cell(input_, state)
      outputs.append(output)
    return (outputs, state)
However, a few other options are available:
An initial state can be provided.
If the sequence_length vector is provided, dynamic calculation is performed.
This method of calculation does not compute the RNN steps past the maximum
sequence length of the minibatch (thus saving computational time),
and properly propagates the state at an example's sequence length
to the final state output.
The dynamic calculation performed is, at time `t` for batch row `b`,

    (output, state)(b, t) =
      (t >= sequence_length(b))
        ? (zeros(cell.output_size), states(b, sequence_length(b) - 1))
        : cell(input(b, t), state(b, t - 1))
##### Args:
* <b>`cell`</b>: An instance of RNNCell.
* <b>`inputs`</b>: A length T list of inputs, each a tensor of shape
[batch_size, input_size].
* <b>`initial_state`</b>: (optional) An initial state for the RNN.
If `cell.state_size` is an integer, this must be
a tensor of appropriate type and shape `[batch_size x cell.state_size]`.
If `cell.state_size` is a tuple, this should be a tuple of
tensors having shapes `[batch_size, s] for s in cell.state_size`.
* <b>`dtype`</b>: (optional) The data type for the initial state. Required if
initial_state is not provided.
* <b>`sequence_length`</b>: Specifies the length of each sequence in inputs.
An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.
* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "RNN".
##### Returns:
A pair (outputs, state) where:
- outputs is a length T list of outputs (one for each input)
- state is the final state
##### Raises:
* <b>`TypeError`</b>: If `cell` is not an instance of RNNCell.
* <b>`ValueError`</b>: If `inputs` is `None` or an empty list, or if the input depth
(column size) cannot be inferred from inputs via shape inference.
- - -
### `tf.nn.state_saving_rnn(cell, inputs, state_saver, state_name, sequence_length=None, scope=None)` {#state_saving_rnn}
RNN that accepts a state saver for time-truncated RNN calculation.
##### Args:
* <b>`cell`</b>: An instance of `RNNCell`.
* <b>`inputs`</b>: A length T list of inputs, each a tensor of shape
`[batch_size, input_size]`.
* <b>`state_saver`</b>: A state saver object with methods `state` and `save_state`.
* <b>`state_name`</b>: Python string or tuple of strings. The name to use with the
state_saver. If the cell returns tuples of states (i.e.,
`cell.state_size` is a tuple) then `state_name` should be a tuple of
strings having the same length as `cell.state_size`. Otherwise it should
be a single string.
* <b>`sequence_length`</b>: (optional) An int32/int64 vector size [batch_size].
See the documentation for rnn() for more details about sequence_length.
* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "RNN".
##### Returns:
A pair (outputs, state) where:
outputs is a length T list of outputs (one for each input)
state is the final state
##### Raises:
* <b>`TypeError`</b>: If `cell` is not an instance of RNNCell.
* <b>`ValueError`</b>: If `inputs` is `None` or an empty list, or if the arity and
type of `state_name` does not match that of `cell.state_size`.
- - -
### `tf.nn.bidirectional_rnn(cell_fw, cell_bw, inputs, initial_state_fw=None, initial_state_bw=None, dtype=None, sequence_length=None, scope=None)` {#bidirectional_rnn}
Creates a bidirectional recurrent neural network.
Similar to the unidirectional case above (rnn) but takes input and builds
independent forward and backward RNNs with the final forward and backward
outputs depth-concatenated, such that the output will have the format
[time][batch][cell_fw.output_size + cell_bw.output_size]. The input_size of
forward and backward cell must match. The initial state for both directions
is zero by default (but can be set optionally) and no intermediate states are
ever returned -- the network is fully unrolled for the given (passed in)
length(s) of the sequence(s) or completely unrolled if length(s) is not given.
##### Args:
* <b>`cell_fw`</b>: An instance of RNNCell, to be used for forward direction.
* <b>`cell_bw`</b>: An instance of RNNCell, to be used for backward direction.
* <b>`inputs`</b>: A length T list of inputs, each a tensor of shape
[batch_size, input_size].
* <b>`initial_state_fw`</b>: (optional) An initial state for the forward RNN.
This must be a tensor of appropriate type and shape
`[batch_size x cell_fw.state_size]`.
If `cell_fw.state_size` is a tuple, this should be a tuple of
tensors having shapes `[batch_size, s] for s in cell_fw.state_size`.
* <b>`initial_state_bw`</b>: (optional) Same as for `initial_state_fw`, but using
the corresponding properties of `cell_bw`.
* <b>`dtype`</b>: (optional) The data type for the initial state. Required if
either of the initial states are not provided.
* <b>`sequence_length`</b>: (optional) An int32/int64 vector, size `[batch_size]`,
containing the actual lengths for each of the sequences.
* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "BiRNN".
##### Returns:
A tuple (outputs, output_state_fw, output_state_bw) where:
outputs is a length `T` list of outputs (one for each input), which
are depth-concatenated forward and backward outputs.
output_state_fw is the final state of the forward rnn.
output_state_bw is the final state of the backward rnn.
##### Raises:
* <b>`TypeError`</b>: If `cell_fw` or `cell_bw` is not an instance of `RNNCell`.
* <b>`ValueError`</b>: If inputs is None or an empty list.
## Evaluation
The evaluation ops are useful for measuring the performance of a network.
......
<!-- This file is machine generated: DO NOT EDIT! -->
# Neural Network RNN Cells
[TOC]
Module for constructing RNN Cells.
## Base interface for all RNN Cells
- - -
### `class tf.nn.rnn_cell.RNNCell` {#RNNCell}
Abstract object representing an RNN cell.
An RNN cell, in the most abstract setting, is anything that has
a state and performs some operation that takes a matrix of inputs.
This operation results in an output matrix with `self.output_size` columns.
If `self.state_size` is an integer, this operation also results in a new
state matrix with `self.state_size` columns. If `self.state_size` is a
tuple of integers, then it results in a tuple of `len(state_size)` state
matrices, each with a column size corresponding to the values in `state_size`.
This module provides a number of basic, commonly used RNN cells, such as
LSTM (Long Short Term Memory) or GRU (Gated Recurrent Unit), and a number
of operators that allow adding dropout, projections, or embeddings to inputs.
Constructing multi-layer cells is supported by the class `MultiRNNCell`,
or by calling the `rnn` ops several times. Every `RNNCell` must have the
properties below and implement `__call__` with the following signature.
- - -
#### `tf.nn.rnn_cell.RNNCell.output_size` {#RNNCell.output_size}
Integer: size of outputs produced by this cell.
- - -
#### `tf.nn.rnn_cell.RNNCell.state_size` {#RNNCell.state_size}
Integer or tuple of integers: size(s) of state(s) used by this cell.
- - -
#### `tf.nn.rnn_cell.RNNCell.zero_state(batch_size, dtype)` {#RNNCell.zero_state}
Return zero-filled state tensor(s).
##### Args:
* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
* <b>`dtype`</b>: the data type to use for the state.
##### Returns:
If `state_size` is an int, then the return value is a `2-D` tensor of
shape `[batch_size x state_size]` filled with zeros.
If `state_size` is a nested list or tuple, then the return value is
a nested list or tuple (of the same structure) of `2-D` tensors with
the shapes `[batch_size x s]` for each s in `state_size`.
## RNN Cells for use with TensorFlow's core RNN methods
- - -
### `class tf.nn.rnn_cell.BasicRNNCell` {#BasicRNNCell}
The most basic RNN cell.
- - -
#### `tf.nn.rnn_cell.BasicRNNCell.__init__(num_units, input_size=None, activation=tanh)` {#BasicRNNCell.__init__}
- - -
#### `tf.nn.rnn_cell.BasicRNNCell.output_size` {#BasicRNNCell.output_size}
- - -
#### `tf.nn.rnn_cell.BasicRNNCell.state_size` {#BasicRNNCell.state_size}
- - -
#### `tf.nn.rnn_cell.BasicRNNCell.zero_state(batch_size, dtype)` {#BasicRNNCell.zero_state}
Return zero-filled state tensor(s).
##### Args:
* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
* <b>`dtype`</b>: the data type to use for the state.
##### Returns:
If `state_size` is an int, then the return value is a `2-D` tensor of
shape `[batch_size x state_size]` filled with zeros.
If `state_size` is a nested list or tuple, then the return value is
a nested list or tuple (of the same structure) of `2-D` tensors with
the shapes `[batch_size x s]` for each s in `state_size`.
- - -
### `class tf.nn.rnn_cell.BasicLSTMCell` {#BasicLSTMCell}
Basic LSTM recurrent network cell.
The implementation is based on: http://arxiv.org/abs/1409.2329.
We add forget_bias (default: 1) to the biases of the forget gate in order to
reduce the scale of forgetting in the beginning of the training.
It does not allow cell clipping or a projection layer, and it does not
use peep-hole connections: it is the basic baseline.
For advanced models, please use the full LSTMCell that follows.
- - -
#### `tf.nn.rnn_cell.BasicLSTMCell.__init__(num_units, forget_bias=1.0, input_size=None, state_is_tuple=False, activation=tanh)` {#BasicLSTMCell.__init__}
Initialize the basic LSTM cell.
##### Args:
* <b>`num_units`</b>: int, The number of units in the LSTM cell.
* <b>`forget_bias`</b>: float, The bias added to forget gates (see above).
* <b>`input_size`</b>: Deprecated and unused.
* <b>`state_is_tuple`</b>: If True, accepted and returned states are 2-tuples of
the `c_state` and `m_state`. By default (False), they are concatenated
along the column axis. This default behavior will soon be deprecated.
* <b>`activation`</b>: Activation function of the inner states.
- - -
#### `tf.nn.rnn_cell.BasicLSTMCell.output_size` {#BasicLSTMCell.output_size}
- - -
#### `tf.nn.rnn_cell.BasicLSTMCell.state_size` {#BasicLSTMCell.state_size}
- - -
#### `tf.nn.rnn_cell.BasicLSTMCell.zero_state(batch_size, dtype)` {#BasicLSTMCell.zero_state}
Return zero-filled state tensor(s).
##### Args:
* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
* <b>`dtype`</b>: the data type to use for the state.
##### Returns:
If `state_size` is an int, then the return value is a `2-D` tensor of
shape `[batch_size x state_size]` filled with zeros.
If `state_size` is a nested list or tuple, then the return value is
a nested list or tuple (of the same structure) of `2-D` tensors with
the shapes `[batch_size x s]` for each s in `state_size`.
- - -
### `class tf.nn.rnn_cell.GRUCell` {#GRUCell}
Gated Recurrent Unit cell (cf. http://arxiv.org/abs/1406.1078).
- - -
#### `tf.nn.rnn_cell.GRUCell.__init__(num_units, input_size=None, activation=tanh)` {#GRUCell.__init__}
- - -
#### `tf.nn.rnn_cell.GRUCell.output_size` {#GRUCell.output_size}
- - -
#### `tf.nn.rnn_cell.GRUCell.state_size` {#GRUCell.state_size}
- - -
#### `tf.nn.rnn_cell.GRUCell.zero_state(batch_size, dtype)` {#GRUCell.zero_state}
Return zero-filled state tensor(s).
##### Args:
* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
* <b>`dtype`</b>: the data type to use for the state.
##### Returns:
If `state_size` is an int, then the return value is a `2-D` tensor of
shape `[batch_size x state_size]` filled with zeros.
If `state_size` is a nested list or tuple, then the return value is
a nested list or tuple (of the same structure) of `2-D` tensors with
the shapes `[batch_size x s]` for each s in `state_size`.
- - -
### `class tf.nn.rnn_cell.LSTMCell` {#LSTMCell}
Long short-term memory unit (LSTM) recurrent network cell.
The default non-peephole implementation is based on:
http://deeplearning.cs.cmu.edu/pdfs/Hochreiter97_lstm.pdf
S. Hochreiter and J. Schmidhuber.
"Long Short-Term Memory". Neural Computation, 9(8):1735-1780, 1997.
The peephole implementation is based on:
https://research.google.com/pubs/archive/43905.pdf
Hasim Sak, Andrew Senior, and Francoise Beaufays.
"Long short-term memory recurrent neural network architectures for
large scale acoustic modeling." INTERSPEECH, 2014.
The class uses optional peep-hole connections, optional cell clipping, and
an optional projection layer.
- - -
#### `tf.nn.rnn_cell.LSTMCell.__init__(num_units, input_size=None, use_peepholes=False, cell_clip=None, initializer=None, num_proj=None, num_unit_shards=1, num_proj_shards=1, forget_bias=1.0, state_is_tuple=False, activation=tanh)` {#LSTMCell.__init__}
Initialize the parameters for an LSTM cell.
##### Args:
* <b>`num_units`</b>: int, The number of units in the LSTM cell
* <b>`input_size`</b>: Deprecated and unused.
* <b>`use_peepholes`</b>: bool, set True to enable diagonal/peephole connections.
* <b>`cell_clip`</b>: (optional) A float value, if provided the cell state is clipped
by this value prior to the cell output activation.
* <b>`initializer`</b>: (optional) The initializer to use for the weight and
projection matrices.
* <b>`num_proj`</b>: (optional) int, The output dimensionality for the projection
matrices. If None, no projection is performed.
* <b>`num_unit_shards`</b>: How to split the weight matrix. If >1, the weight
matrix is stored across num_unit_shards.
* <b>`num_proj_shards`</b>: How to split the projection matrix. If >1, the
projection matrix is stored across num_proj_shards.
* <b>`forget_bias`</b>: Biases of the forget gate are initialized by default to 1
in order to reduce the scale of forgetting at the beginning of
the training.
* <b>`state_is_tuple`</b>: If True, accepted and returned states are 2-tuples of
the `c_state` and `m_state`. By default (False), they are concatenated
along the column axis. This default behavior will soon be deprecated.
* <b>`activation`</b>: Activation function of the inner states.
- - -
#### `tf.nn.rnn_cell.LSTMCell.output_size` {#LSTMCell.output_size}
- - -
#### `tf.nn.rnn_cell.LSTMCell.state_size` {#LSTMCell.state_size}
- - -
#### `tf.nn.rnn_cell.LSTMCell.zero_state(batch_size, dtype)` {#LSTMCell.zero_state}
Return zero-filled state tensor(s).
##### Args:
* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
* <b>`dtype`</b>: the data type to use for the state.
##### Returns:
If `state_size` is an int, then the return value is a `2-D` tensor of
shape `[batch_size x state_size]` filled with zeros.
If `state_size` is a nested list or tuple, then the return value is
a nested list or tuple (of the same structure) of `2-D` tensors with
the shapes `[batch_size x s]` for each s in `state_size`.
## Classes storing split `RNNCell` state
- - -
### `class tf.nn.rnn_cell.LSTMStateTuple` {#LSTMStateTuple}
Tuple used by LSTM Cells for `state_size`, `zero_state`, and output state.
Stores two elements: `(c, h)`, in that order.
Only used when `state_is_tuple=True`.
- - -
#### `tf.nn.rnn_cell.LSTMStateTuple.c` {#LSTMStateTuple.c}
Alias for field number 0
- - -
#### `tf.nn.rnn_cell.LSTMStateTuple.h` {#LSTMStateTuple.h}
Alias for field number 1
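A small sketch of reading the fields (sizes are illustrative):

```python
import tensorflow as tf

cell = tf.nn.rnn_cell.BasicLSTMCell(32, state_is_tuple=True)
state = cell.zero_state(batch_size=4, dtype=tf.float32)

# state is an LSTMStateTuple; both fields are ordinary tensors.
c, h = state.c, state.h   # each shaped [4, 32]
```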
## RNN Cell wrappers (RNNCells that wrap other RNNCells)
- - -
### `class tf.nn.rnn_cell.MultiRNNCell` {#MultiRNNCell}
RNN cell composed sequentially of multiple simple cells.
- - -
#### `tf.nn.rnn_cell.MultiRNNCell.__init__(cells, state_is_tuple=False)` {#MultiRNNCell.__init__}
Create an RNN cell composed sequentially of a number of RNNCells.
##### Args:
* <b>`cells`</b>: list of RNNCells that will be composed in this order.
* <b>`state_is_tuple`</b>: If True, accepted and returned states are n-tuples, where
`n = len(cells)`. By default (False), the states are all
concatenated along the column axis.
##### Raises:
* <b>`ValueError`</b>: if cells is empty (not allowed), or at least one of the cells
returns a state tuple but the flag `state_is_tuple` is `False`.
- - -
#### `tf.nn.rnn_cell.MultiRNNCell.output_size` {#MultiRNNCell.output_size}
- - -
#### `tf.nn.rnn_cell.MultiRNNCell.state_size` {#MultiRNNCell.state_size}
- - -
#### `tf.nn.rnn_cell.MultiRNNCell.zero_state(batch_size, dtype)` {#MultiRNNCell.zero_state}
Return zero-filled state tensor(s).
##### Args:
* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
* <b>`dtype`</b>: the data type to use for the state.
##### Returns:
If `state_size` is an int, then the return value is a `2-D` tensor of
shape `[batch_size x state_size]` filled with zeros.
If `state_size` is a nested list or tuple, then the return value is
a nested list or tuple (of the same structure) of `2-D` tensors with
the shapes `[batch_size x s]` for each s in `state_size`.
- - -
### `class tf.nn.rnn_cell.DropoutWrapper` {#DropoutWrapper}
Operator adding dropout to inputs and outputs of the given cell.
- - -
#### `tf.nn.rnn_cell.DropoutWrapper.__init__(cell, input_keep_prob=1.0, output_keep_prob=1.0, seed=None)` {#DropoutWrapper.__init__}
Create a cell with added input and/or output dropout.
Dropout is never used on the state.
##### Args:
* <b>`cell`</b>: an RNNCell; dropout is added to its inputs and/or outputs.
* <b>`input_keep_prob`</b>: unit Tensor or float between 0 and 1, input keep
probability; if it is float and 1, no input dropout will be added.
* <b>`output_keep_prob`</b>: unit Tensor or float between 0 and 1, output keep
probability; if it is float and 1, no output dropout will be added.
* <b>`seed`</b>: (optional) integer, the randomness seed.
##### Raises:
* <b>`TypeError`</b>: if cell is not an RNNCell.
* <b>`ValueError`</b>: if keep_prob is not between 0 and 1.
- - -
#### `tf.nn.rnn_cell.DropoutWrapper.output_size` {#DropoutWrapper.output_size}
- - -
#### `tf.nn.rnn_cell.DropoutWrapper.state_size` {#DropoutWrapper.state_size}
- - -
#### `tf.nn.rnn_cell.DropoutWrapper.zero_state(batch_size, dtype)` {#DropoutWrapper.zero_state}
Return zero-filled state tensor(s).
##### Args:
* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
* <b>`dtype`</b>: the data type to use for the state.
##### Returns:
If `state_size` is an int, then the return value is a `2-D` tensor of
shape `[batch_size x state_size]` filled with zeros.
If `state_size` is a nested list or tuple, then the return value is
a nested list or tuple (of the same structure) of `2-D` tensors with
the shapes `[batch_size x s]` for each s in `state_size`.
- - -
### `class tf.nn.rnn_cell.EmbeddingWrapper` {#EmbeddingWrapper}
Operator adding input embedding to the given cell.
Note: in many cases it may be more efficient to not use this wrapper,
but instead concatenate the whole sequence of your inputs in time,
do the embedding on this batch-concatenated sequence, then split it and
feed into your RNN.
- - -
#### `tf.nn.rnn_cell.EmbeddingWrapper.__init__(cell, embedding_classes, embedding_size, initializer=None)` {#EmbeddingWrapper.__init__}
Create a cell with an added input embedding.
##### Args:
* <b>`cell`</b>: an RNNCell, an embedding will be put before its inputs.
* <b>`embedding_classes`</b>: integer, how many symbols will be embedded.
* <b>`embedding_size`</b>: integer, the size of the vectors we embed into.
* <b>`initializer`</b>: an initializer to use when creating the embedding;
if None, the initializer from variable scope or a default one is used.
##### Raises:
* <b>`TypeError`</b>: if cell is not an RNNCell.
* <b>`ValueError`</b>: if embedding_classes is not positive.
- - -
#### `tf.nn.rnn_cell.EmbeddingWrapper.output_size` {#EmbeddingWrapper.output_size}
Integer: size of outputs produced by this cell.
- - -
#### `tf.nn.rnn_cell.EmbeddingWrapper.state_size` {#EmbeddingWrapper.state_size}
- - -
#### `tf.nn.rnn_cell.EmbeddingWrapper.zero_state(batch_size, dtype)` {#EmbeddingWrapper.zero_state}
Return zero-filled state tensor(s).
##### Args:
* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
* <b>`dtype`</b>: the data type to use for the state.
##### Returns:
If `state_size` is an int, then the return value is a `2-D` tensor of
shape `[batch_size x state_size]` filled with zeros.
If `state_size` is a nested list or tuple, then the return value is
a nested list or tuple (of the same structure) of `2-D` tensors with
the shapes `[batch_size x s]` for each s in `state_size`.
- - -
### `class tf.nn.rnn_cell.InputProjectionWrapper` {#InputProjectionWrapper}
Operator adding an input projection to the given cell.
Note: in many cases it may be more efficient to not use this wrapper,
but instead concatenate the whole sequence of your inputs in time,
do the projection on this batch-concatenated sequence, then split it.
- - -
#### `tf.nn.rnn_cell.InputProjectionWrapper.__init__(cell, num_proj, input_size=None)` {#InputProjectionWrapper.__init__}
Create a cell with input projection.
##### Args:
* <b>`cell`</b>: an RNNCell, a projection of inputs is added before it.
* <b>`num_proj`</b>: Python integer. The dimension to project to.
* <b>`input_size`</b>: Deprecated and unused.
##### Raises:
* <b>`TypeError`</b>: if cell is not an RNNCell.
- - -
#### `tf.nn.rnn_cell.InputProjectionWrapper.output_size` {#InputProjectionWrapper.output_size}
- - -
#### `tf.nn.rnn_cell.InputProjectionWrapper.state_size` {#InputProjectionWrapper.state_size}
- - -
#### `tf.nn.rnn_cell.InputProjectionWrapper.zero_state(batch_size, dtype)` {#InputProjectionWrapper.zero_state}
Return zero-filled state tensor(s).
##### Args:
* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
* <b>`dtype`</b>: the data type to use for the state.
##### Returns:
If `state_size` is an int, then the return value is a `2-D` tensor of
shape `[batch_size x state_size]` filled with zeros.
If `state_size` is a nested list or tuple, then the return value is
a nested list or tuple (of the same structure) of `2-D` tensors with
the shapes `[batch_size x s]` for each s in `state_size`.
- - -
### `class tf.nn.rnn_cell.OutputProjectionWrapper` {#OutputProjectionWrapper}
Operator adding an output projection to the given cell.
Note: in many cases it may be more efficient to not use this wrapper,
but instead concatenate the whole sequence of your outputs in time,
do the projection on this batch-concatenated sequence, then split it
if needed or directly feed into a softmax.
- - -
#### `tf.nn.rnn_cell.OutputProjectionWrapper.__init__(cell, output_size)` {#OutputProjectionWrapper.__init__}
Create a cell with output projection.
##### Args:
* <b>`cell`</b>: an RNNCell, a projection to output_size is added to it.
* <b>`output_size`</b>: integer, the size of the output after projection.
##### Raises:
* <b>`TypeError`</b>: if cell is not an RNNCell.
* <b>`ValueError`</b>: if output_size is not positive.
- - -
#### `tf.nn.rnn_cell.OutputProjectionWrapper.output_size` {#OutputProjectionWrapper.output_size}
- - -
#### `tf.nn.rnn_cell.OutputProjectionWrapper.state_size` {#OutputProjectionWrapper.state_size}
- - -
#### `tf.nn.rnn_cell.OutputProjectionWrapper.zero_state(batch_size, dtype)` {#OutputProjectionWrapper.zero_state}
Return zero-filled state tensor(s).
##### Args:
* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
* <b>`dtype`</b>: the data type to use for the state.
##### Returns:
If `state_size` is an int, then the return value is a `2-D` tensor of
shape `[batch_size x state_size]` filled with zeros.
If `state_size` is a nested list or tuple, then the return value is
a nested list or tuple (of the same structure) of `2-D` tensors with
the shapes `[batch_size x s]` for each s in `state_size`.
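The note in this entry suggests projecting the batch-concatenated outputs instead of using the wrapper; a hedged sketch of that alternative (the variable names and sizes are illustrative):

```python
import tensorflow as tf

T, batch, num_units, proj_size = 8, 16, 64, 10
outputs = [tf.placeholder(tf.float32, [batch, num_units])
           for _ in range(T)]

# Concatenate all time steps into one [T * batch, num_units] matrix,
# project once, then split back into a length-T list if needed.
w = tf.get_variable("proj_w", [num_units, proj_size])
b = tf.get_variable("proj_b", [proj_size])
stacked = tf.concat(0, outputs)               # [T * batch, num_units]
projected = tf.matmul(stacked, w) + b         # [T * batch, proj_size]
projected_list = tf.split(0, T, projected)    # length-T list again
```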
......@@ -48,6 +48,7 @@ def get_module_to_name():
tf.errors: "tf.errors",
tf.image: "tf.image",
tf.nn: "tf.nn",
tf.nn.rnn_cell: "tf.nn.rnn_cell",
tf.train: "tf.train",
tf.python_io: "tf.python_io",
tf.test: "tf.test",
......@@ -131,6 +132,7 @@ def all_libraries(module_to_name, members, documented):
"rnn", "state_saving_rnn", "bidirectional_rnn",
"dynamic_rnn", "seq2seq", "rnn_cell"],
prefix=PREFIX_TEXT),
library("rnn_cell", "Neural Network RNN Cells", tf.nn.rnn_cell),
library("client", "Running Graphs", client_lib),
library("train", "Training", tf.train,
exclude_symbols=["Feature", "Features", "BytesList", "FloatList",
......
......@@ -310,7 +310,9 @@ class SlimRNNCellTest(tf.test.TestCase):
x = tf.zeros([1, 2])
m = tf.zeros([1, 2])
my_cell = functools.partial(basic_rnn_cell, num_units=2)
g, _ = tf.nn.rnn_cell.SlimRNNCell(my_cell)(x, m)
# pylint: disable=protected-access
g, _ = tf.nn.rnn_cell._SlimRNNCell(my_cell)(x, m)
# pylint: enable=protected-access
sess.run([tf.initialize_all_variables()])
res = sess.run([g], {x.name: np.array([[1., 1.]]),
m.name: np.array([[0.1, 0.1]])})
......@@ -325,7 +327,9 @@ class SlimRNNCellTest(tf.test.TestCase):
inputs = tf.random_uniform((batch_size, input_size))
_, initial_state = basic_rnn_cell(inputs, None, num_units)
my_cell = functools.partial(basic_rnn_cell, num_units=num_units)
slim_cell = tf.nn.rnn_cell.SlimRNNCell(my_cell)
# pylint: disable=protected-access
slim_cell = tf.nn.rnn_cell._SlimRNNCell(my_cell)
# pylint: enable=protected-access
slim_outputs, slim_state = slim_cell(inputs, initial_state)
rnn_cell = tf.nn.rnn_cell.BasicRNNCell(num_units)
outputs, state = rnn_cell(inputs, initial_state)
......
......@@ -209,6 +209,17 @@ tensors.
@@embedding_lookup
@@embedding_lookup_sparse
## Recurrent Neural Networks
TensorFlow provides a number of methods for constructing Recurrent
Neural Networks. Most accept an `RNNCell`-subclassed object
(see the documentation for `tf.nn.rnn_cell`).
@@dynamic_rnn
@@rnn
@@state_saving_rnn
@@bidirectional_rnn
## Evaluation
The evaluation ops are useful for measuring the performance of a network.
......
......@@ -13,7 +13,31 @@
# limitations under the License.
# ==============================================================================
"""Module for constructing RNN Cells."""
"""Module for constructing RNN Cells.
## Base interface for all RNN Cells
@@RNNCell
## RNN Cells for use with TensorFlow's core RNN methods
@@BasicRNNCell
@@BasicLSTMCell
@@GRUCell
@@LSTMCell
## Classes storing split `RNNCell` state
@@LSTMStateTuple
## RNN Cell wrappers (RNNCells that wrap other RNNCells)
@@MultiRNNCell
@@DropoutWrapper
@@EmbeddingWrapper
@@InputProjectionWrapper
@@OutputProjectionWrapper
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
......@@ -827,7 +851,7 @@ class MultiRNNCell(RNNCell):
return cur_inp, new_states
class SlimRNNCell(RNNCell):
class _SlimRNNCell(RNNCell):
"""A simple wrapper for slim.rnn_cells."""
def __init__(self, cell_fn):
......