Layers

fc

paddle.v2.fluid.layers.fc(input, size, num_flatten_dims=1, param_attr=None, bias_attr=None, act=None, name=None)

Fully Connected Layer

The fully connected layer can take multiple tensors as its inputs. It creates a variable called weights for each input tensor, which represents a fully connected weight matrix from each input unit to each output unit. The fully connected layer multiplies each input tensor with its corresponding weight to produce an output tensor. If multiple input tensors are given, the results of the multiplications will be summed up. If bias_attr is not None, a bias variable will be created and added to the output. Finally, if activation is not None, it will be applied to the output as well.

This process can be formulated as follows:

\[Out = Act({\sum_{i=0}^{N-1}W_iX_i + b})\]

In the above equation:

  • \(N\): The number of input tensors.
  • \(X_i\): The i-th input tensor.
  • \(W_i\): The weight matrix created by this layer for the i-th input.
  • \(b\): The bias parameter created by this layer (if needed).
  • \(Act\): The activation function.
  • \(Out\): The output tensor.
Parameters:
  • input (Variable|list) – The input tensor(s) to the fully connected layer.
  • size (int) – The number of output units in the fully connected layer.
  • num_flatten_dims (int) – The fc layer can accept an input tensor with more than two dimensions. If this happens, the multidimensional tensor will first be flattened into a 2-dimensional matrix. The parameter num_flatten_dims determines how the input tensor is flattened: the first num_flatten_dims dimensions will be flattened to form the first dimension of the final matrix (height of the matrix), and the remaining rank(X) - num_flatten_dims dimensions are flattened to form the second dimension of the final matrix (width of the matrix). For example, suppose X is a 5-dimensional tensor with a shape [2, 3, 4, 5, 6], and num_flatten_dims = 3. Then, the flattened matrix will have a shape [2 x 3 x 4, 5 x 6] = [24, 30]. By default, num_flatten_dims is set to 1.
  • param_attr (ParamAttr|list) – The parameter attribute for learnable parameters/weights of the fully connected layer.
  • param_initializer (ParamAttr|list) – The initializer used for the weight/parameter. If set None, XavierInitializer() will be used.
  • bias_attr (ParamAttr|list) – The parameter attribute for the bias parameter for this layer. If set None, no bias will be added to the output units.
  • bias_initializer (ParamAttr|list) – The initializer used for the bias. If set None, then ConstantInitializer() will be used.
  • act (str) – Activation to be applied to the output of the fully connected layer.
  • name (str) – Name/alias of the fully connected layer.
Returns:

The output tensor variable.

Return type:

Variable

Raises:

ValueError – If rank of the input tensor is less than 2.

Examples

data = fluid.layers.data(name="data", shape=[32, 32], dtype="float32")
fc = fluid.layers.fc(input=data, size=1000, act="tanh")
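
The sketch below illustrates num_flatten_dims on a 3-D input; the shapes are illustrative assumptions, not from the original:

# x has shape [batch, 48, 96]; with num_flatten_dims=2 it is viewed as a
# [batch * 48, 96] matrix, so the created weight has shape [96, 1000].
x = fluid.layers.data(name='x', shape=[48, 96], dtype='float32')
hidden = fluid.layers.fc(input=x, size=1000, num_flatten_dims=2, act='tanh')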

embedding

paddle.v2.fluid.layers.embedding(input, size, is_sparse=False, param_attr=None, dtype='float32')

Embedding Layer

This layer is used to look up a vector of IDs, provided by input, in a lookup table. The result of this lookup is the embedding of each ID in the input.

All the input variables are passed in as local variables to the LayerHelper constructor.

Parameters:
  • input (Variable) – Input to the function
  • size (tuple|list|None) – Shape of the look up table parameter
  • is_sparse (bool) – Boolean flag specifying whether the input is sparse
  • param_attr (ParamAttr) – Parameters for this layer
  • dtype (np.dtype|core.DataType|str) – The data type: float32, float16, int, etc.
Returns:

The tensor variable storing the embeddings of the supplied inputs.

Return type:

Variable

Examples

dict_size = len(dataset.ids)
data = fluid.layers.data(name='ids', shape=[32, 32], dtype='int64')  # IDs must be an integer type
fc = fluid.layers.embedding(input=data, size=[dict_size, 16])

dynamic_lstm

paddle.v2.fluid.layers.dynamic_lstm(input, size, param_attr=None, bias_attr=None, use_peepholes=True, is_reverse=False, gate_activation='sigmoid', cell_activation='tanh', candidate_activation='tanh', dtype='float32')

Dynamic LSTM Layer

The default implementation uses diagonal/peephole connections (https://arxiv.org/pdf/1402.1128.pdf); the formula is as follows:

\[ \begin{align}\begin{aligned}i_t & = \sigma(W_{ix}x_{t} + W_{ih}h_{t-1} + W_{ic}c_{t-1} + b_i)\\f_t & = \sigma(W_{fx}x_{t} + W_{fh}h_{t-1} + W_{fc}c_{t-1} + b_f)\\\tilde{c_t} & = act_g(W_{cx}x_t + W_{ch}h_{t-1} + b_c)\\o_t & = \sigma(W_{ox}x_{t} + W_{oh}h_{t-1} + W_{oc}c_t + b_o)\\c_t & = f_t \odot c_{t-1} + i_t \odot \tilde{c_t}\\h_t & = o_t \odot act_h(c_t)\end{aligned}\end{align} \]

where the \(W\) terms denote weight matrices (e.g. \(W_{ix}\) is the matrix of weights from the input to the input gate), and \(W_{ic}, W_{fc}, W_{oc}\) are diagonal weight matrices for peephole connections. In our implementation, we use vectors to represent these diagonal weight matrices. The \(b\) terms denote bias vectors (\(b_i\) is the input gate bias vector), \(\sigma\) is the non-linear activation, such as the logistic sigmoid function, and \(i, f, o\) and \(c\) are the input gate, forget gate, output gate, and cell activation vectors, respectively, all of which have the same size as the cell output activation vector \(h\).

\(\odot\) denotes the element-wise product of vectors. \(act_g\) and \(act_h\) are the cell input and cell output activation functions, for which tanh is usually used. \(\tilde{c_t}\) is also called the candidate hidden state, which is computed based on the current input and the previous hidden state.

Set use_peepholes to False to disable peephole connection. The formula is omitted here, please refer to the paper http://www.bioinf.jku.at/publications/older/2604.pdf for details.

Note that these \(W_{ix}x_{t}, W_{fx}x_{t}, W_{cx}x_{t}, W_{ox}x_{t}\) operations on the input \(x_{t}\) are NOT included in this operator. Users can choose to use a fully connected layer before the LSTM layer.

Parameters:
  • input (Variable) – The input of the dynamic_lstm layer, which supports variable-length input sequences. The underlying tensor in this Variable is a matrix with shape (T x 4D), where T is the total number of time steps in this mini-batch and D is the hidden size.
  • size (int) – 4 * hidden size.
  • param_attr (ParamAttr) –

    The parameter attribute for the learnable hidden-hidden weights.

    • The shape is (D x 4D), where D is the hidden size.
    • Weights = {\(W_{ch}, W_{ih}, W_{fh}, W_{oh}\)}
  • bias_attr (ParamAttr) –

    The bias attribute for the learnable bias weights, which contains two parts, input-hidden bias weights and peephole connections weights if setting use_peepholes to True.

    1. use_peepholes = False
    • The shape is (1 x 4D).
    • Biases = {\(b_c, b_i, b_f, b_o\)}.
    2. use_peepholes = True
    • The shape is (1 x 7D).
    • Biases = { \(b_c, b_i, b_f, b_o, W_{ic}, W_{fc}, W_{oc}\)}.
  • use_peepholes (bool) – Whether to enable diagonal/peephole connections, default True.
  • is_reverse (bool) – Whether to compute reversed LSTM, default False.
  • gate_activation (str) – The activation for input gate, forget gate and output gate. Choices = [“sigmoid”, “tanh”, “relu”, “identity”], default “sigmoid”.
  • cell_activation (str) – The activation for cell output. Choices = [“sigmoid”, “tanh”, “relu”, “identity”], default “tanh”.
  • candidate_activation (str) – The activation for candidate hidden state. Choices = [“sigmoid”, “tanh”, “relu”, “identity”], default “tanh”.
  • dtype (str) – Data type. Choices = [“float32”, “float64”], default “float32”.
Returns:

The hidden state, and cell state of LSTM. The shape of both is (T x D), and lod is the same with the input.

Return type:

tuple

Examples

hidden_dim = 512
forward_proj = fluid.layers.fc(input=input_seq, size=hidden_dim * 4,
                               act=None, bias_attr=None)
forward, _ = fluid.layers.dynamic_lstm(
    input=forward_proj, size=hidden_dim * 4, use_peepholes=False)

data

paddle.v2.fluid.layers.data(name, shape, append_batch_size=True, dtype='float32', lod_level=0, type=VarType.LOD_TENSOR, stop_gradient=True)

Data Layer

This function takes in the input and, based on whether the data has to be returned as a minibatch, creates a global variable by using the helper functions. The global variable can be accessed by all the following operators in the graph.

All the input variables of this function are passed in as local variables to the LayerHelper constructor.

Parameters:
  • name (str) – The name/alias of the layer
  • shape (list) – List declaring the shape.
  • append_batch_size (bool) – Whether or not to prepend a batch-size dimension to the shape.
  • dtype (np.dtype|core.DataType|str) – The data type: float32, float16, int, etc.
  • type (VarType) – The output type. By default it is LOD_TENSOR.
  • lod_level (int) – The LoD level. 0 means the input data is not a sequence.
  • main_program (Program) – Name of the main program that calls this
  • startup_program (Program) – Name of the startup program
  • stop_gradient (bool) – A boolean that indicates whether gradients should flow.
Returns:

The global variable that gives access to the data.

Return type:

Variable

Examples

data = fluid.layers.data(name='x', shape=[784], dtype='float32')

mean

paddle.v2.fluid.layers.mean(**kwargs)

Mean Operator.

Out is a scalar which is the mean of all elements in X.

Parameters:x – The input of mean op Duplicable: False Optional: False
Returns:The output of mean op
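
Examples

A minimal sketch (predict and label are assumed upstream variables):

cost = fluid.layers.cross_entropy(input=predict, label=label)
avg_cost = fluid.layers.mean(x=cost)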

mul

paddle.v2.fluid.layers.mul(**kwargs)

Mul Operator.

This operator is used to perform matrix multiplication for input $X$ and $Y$.

The equation is:

$$Out = X * Y$$

Both the input $X$ and $Y$ can carry the LoD (Level of Details) information, or not. But the output only shares the LoD information with input $X$.

Parameters:
  • x – (Tensor), The first input tensor of mul op. Duplicable: False Optional: False
  • y – (Tensor), The second input tensor of mul op. Duplicable: False Optional: False
  • x_num_col_dims (INT) – (int, default 1), The mul_op can take tensors with more than two dimensions as its inputs. If the input $X$ is a tensor with more than two dimensions, $X$ will be flattened into a two-dimensional matrix first. The flattening rule is: the first x_num_col_dims dimensions will be flattened to form the first dimension of the final matrix (the height of the matrix), and the rest rank(X) - x_num_col_dims dimensions are flattened to form the second dimension of the final matrix (the width of the matrix). As a result, the height of the flattened matrix is equal to the product of $X$’s first x_num_col_dims dimensions’ sizes, and the width of the flattened matrix is equal to the product of $X$’s last rank(X) - x_num_col_dims dimensions’ sizes. For example, suppose $X$ is a 5-dimensional tensor with the shape [2, 3, 4, 5, 6], and x_num_col_dims = 3. Thus, the flattened matrix will have a shape [2 x 3 x 4, 5 x 6] = [24, 30].
  • y_num_col_dims (INT) – (int, default 1), The mul_op can take tensors with more than two dimensions as its inputs. If the input $Y$ is a tensor with more than two dimensions, $Y$ will be flattened into a two-dimensional matrix first. The attribute y_num_col_dims determines how $Y$ is flattened. See the comments of x_num_col_dims for more details.
Returns:

(Tensor), The output tensor of mul op.
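
Examples

A sketch of the flattening described above (x and y are assumed variables of the stated shapes):

# x: [2, 3, 4, 5, 6], y: [30, 7]. With x_num_col_dims=3, x is viewed as a
# [24, 30] matrix (24 = 2*3*4, 30 = 5*6), which can be multiplied with y.
out = fluid.layers.mul(x=x, y=y, x_num_col_dims=3)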

elementwise_add

paddle.v2.fluid.layers.elementwise_add(**kwargs)

Limited Elementwise Add Operator.

The equation is:

$$Out = X + Y$$

$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be smaller than or equal to the dimensions of $X$.

There are two cases for this operator: 1. The shape of $Y$ is the same as $X$; 2. The shape of $Y$ is a subset of $X$.

For case 2: $Y$ will be broadcast to match the shape of $X$, and axis should be set to the index of the start dimension for broadcasting $Y$ onto $X$.

For example
shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0

Either of the inputs $X$ and $Y$, or neither, can carry the LoD (Level of Details) information. However, the output only shares the LoD information with input $X$.

Parameters:
  • x – (Tensor), The first input tensor of elementwise op. Duplicable: False Optional: False
  • y – (Tensor), The second input tensor of elementwise op. Duplicable: False Optional: False
  • axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X.
Returns:

The output of elementwise op.
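
Examples

A broadcasting sketch matching the shape table above (x and y are assumed variables):

# x: [2, 3, 4, 5], y: [3, 4]. With axis=1, y is broadcast over
# dimensions 1 and 2 of x before the element-wise addition.
out = fluid.layers.elementwise_add(x=x, y=y, axis=1)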

elementwise_sub

paddle.v2.fluid.layers.elementwise_sub(**kwargs)

Limited Elementwise Sub Operator.

The equation is:

$$Out = X - Y$$

$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be smaller than or equal to the dimensions of $X$.

There are two cases for this operator: 1. The shape of $Y$ is the same as $X$; 2. The shape of $Y$ is a subset of $X$.

For case 2: $Y$ will be broadcast to match the shape of $X$, and axis should be set to the index of the start dimension for broadcasting $Y$ onto $X$.

For example
shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0

Either of the inputs $X$ and $Y$, or neither, can carry the LoD (Level of Details) information. However, the output only shares the LoD information with input $X$.

Parameters:
  • x – (Tensor), The first input tensor of elementwise op. Duplicable: False Optional: False
  • y – (Tensor), The second input tensor of elementwise op. Duplicable: False Optional: False
  • axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X.
Returns:

The output of elementwise op.

elementwise_mul

paddle.v2.fluid.layers.elementwise_mul(**kwargs)

Limited Elementwise Mul Operator.

The equation is:

$$Out = X \odot Y$$

$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be smaller than or equal to the dimensions of $X$.

There are two cases for this operator: 1. The shape of $Y$ is the same as $X$; 2. The shape of $Y$ is a subset of $X$.

For case 2: $Y$ will be broadcast to match the shape of $X$, and axis should be set to the index of the start dimension for broadcasting $Y$ onto $X$.

For example
shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0

Either of the inputs $X$ and $Y$, or neither, can carry the LoD (Level of Details) information. However, the output only shares the LoD information with input $X$.

Parameters:
  • x – (Tensor), The first input tensor of elementwise op. Duplicable: False Optional: False
  • y – (Tensor), The second input tensor of elementwise op. Duplicable: False Optional: False
  • axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X.
Returns:

The output of elementwise op.

elementwise_div

paddle.v2.fluid.layers.elementwise_div(**kwargs)

Limited Elementwise Div Operator.

The equation is:

$$Out = X / Y$$

$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be smaller than or equal to the dimensions of $X$.

There are two cases for this operator: 1. The shape of $Y$ is the same as $X$; 2. The shape of $Y$ is a subset of $X$.

For case 2: $Y$ will be broadcast to match the shape of $X$, and axis should be set to the index of the start dimension for broadcasting $Y$ onto $X$.

For example
shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0

Either of the inputs $X$ and $Y$, or neither, can carry the LoD (Level of Details) information. However, the output only shares the LoD information with input $X$.

Parameters:
  • x – (Tensor), The first input tensor of elementwise op. Duplicable: False Optional: False
  • y – (Tensor), The second input tensor of elementwise op. Duplicable: False Optional: False
  • axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X.
Returns:

The output of elementwise op.

dropout

paddle.v2.fluid.layers.dropout(x, dropout_prob, is_test=False, seed=0, **kwargs)
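
A minimal usage sketch, assuming the usual dropout semantics (each element is zeroed with probability dropout_prob during training; is_test=True disables dropping at inference):

x = fluid.layers.data(name='x', shape=[32, 32], dtype='float32')
dropped = fluid.layers.dropout(x=x, dropout_prob=0.5)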

reshape

paddle.v2.fluid.layers.reshape(**kwargs)

Reshape Operator.

Reshape Input(X) into the shape specified by Attr(shape).

An example: Given a 2-D tensor X with 2 rows and 2 columns

[[1, 2], [3, 4]]

and target shape = [1, 4], the reshape operator will transform the tensor X into a 2-D tensor:

[[1, 2, 3, 4]]

One dimension in the target shape can be set to -1, representing that its size is unknown. In this case, the real dimension will be inferred from the original shape of Input(X) and the other dimensions in the target shape.

Parameters:
  • x – The input tensor of reshape operator. Duplicable: False Optional: False
  • shape (INTS) – (vector<int>) Target shape of reshape operator.
Returns:

The output tensor of reshape operator.
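
Examples

A sketch of the -1 inference described above (the implicit batch dimension is carried through as the inferred -1 dimension):

# data: [batch, 2, 2] is reshaped to [batch, 4].
data = fluid.layers.data(name='data', shape=[2, 2], dtype='float32')
reshaped = fluid.layers.reshape(x=data, shape=[-1, 4])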

sigmoid

paddle.v2.fluid.layers.sigmoid(**kwargs)

Sigmoid Activation Operator

$$out = \frac{1}{1 + e^{-x}}$$

Parameters:x – Input of Sigmoid operator Duplicable: False Optional: False
Returns:Output of Sigmoid operator
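
Examples

A minimal sketch (fc_out is an assumed upstream variable):

probs = fluid.layers.sigmoid(x=fc_out)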

scale

paddle.v2.fluid.layers.scale(**kwargs)

Scale operator

$$Out = scale*X$$

Parameters:
  • x – (Tensor) Input tensor of scale operator. Duplicable: False Optional: False
  • scale (FLOAT) – (float, default 1.0)The scaling factor of the scale operator.
Returns:

(Tensor) Output tensor of scale operator.
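
Examples

A minimal sketch (x is an assumed upstream variable):

doubled = fluid.layers.scale(x=x, scale=2.0)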

transpose

paddle.v2.fluid.layers.transpose(**kwargs)

Transpose Operator.

The input tensor will be permuted according to the axis values given. The op functions similarly to how numpy.transpose works in Python.

For example:

input = numpy.arange(6).reshape((2,3))

the input is:

array([[0, 1, 2],
       [3, 4, 5]])

given axis is:

[1, 0]

output = input.transpose(axis)

then the output is:

array([[0, 3],
       [1, 4],
       [2, 5]])

So, given an input tensor of shape (N, C, H, W) and axis {0, 2, 3, 1}, the output tensor shape will be (N, H, W, C).

Parameters:
  • x – (Tensor)The input tensor, tensors with rank at most 6 are supported Duplicable: False Optional: False
  • axis (INTS) – (vector<int>) A list of values whose size should be the same as the input tensor rank; the tensor will permute its axes according to the values given
Returns:

(Tensor)The output tensor
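
Examples

A sketch of the NCHW-to-NHWC permutation mentioned above (the leading dimension of the data variable is the implicit batch dimension):

x = fluid.layers.data(name='x', shape=[3, 32, 32], dtype='float32')
# x is [batch, 3, 32, 32]; axis [0, 2, 3, 1] yields [batch, 32, 32, 3].
nhwc = fluid.layers.transpose(x=x, axis=[0, 2, 3, 1])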

sigmoid_cross_entropy_with_logits

cast

paddle.v2.fluid.layers.cast(x, dtype)

This function casts the input x to the specified dtype and returns the result as the output.
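
Examples

A minimal sketch:

x = fluid.layers.data(name='x', shape=[1], dtype='float32')
y = fluid.layers.cast(x=x, dtype='int64')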

concat

paddle.v2.fluid.layers.concat(input, axis=0)

Concat

This function concatenates the input tensors along the given axis and returns the result as the output.

Parameters:
  • input (list) – List of tensors to be concatenated
  • axis (int) – Integer axis along which the tensors will be concatenated
Returns:

Output variable of the concatenation

Return type:

Variable

Examples
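
A minimal sketch (shapes are illustrative; with the implicit batch dimension, axis=1 concatenates along the feature dimension):

a = fluid.layers.data(name='a', shape=[4], dtype='float32')
b = fluid.layers.data(name='b', shape=[8], dtype='float32')
out = fluid.layers.concat(input=[a, b], axis=1)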

sums

paddle.v2.fluid.layers.sums(input, out=None)

This function performs the sum operation on the input and returns the result as the output.

Parameters:input (Variable|list) – The input tensor that has the elements that need to be summed up.
Returns:The tensor type variable that has the sum of the input written to it.
Return type:Variable

Examples
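
A minimal sketch (shapes are illustrative):

a = fluid.layers.data(name='a', shape=[10], dtype='float32')
b = fluid.layers.data(name='b', shape=[10], dtype='float32')
total = fluid.layers.sums(input=[a, b])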

linear_chain_crf

paddle.v2.fluid.layers.linear_chain_crf(input, label, param_attr=None)

assign

split_lod_tensor

paddle.v2.fluid.layers.split_lod_tensor(input, mask, level=0)

split_lod_tensor

This function takes in an input that contains the complete lod information, and takes in a mask which is used to mask certain parts of the input. The output is the true branch and the false branch, with the mask applied to the input at the given lod level.

Parameters:
  • input (tuple|list|None) – The input tensor that contains complete lod information needed to construct the output.
  • mask (list) – A bool column vector which masks the input.
  • level (int) – The specific lod level at which to split.
Returns:

The true branch of the tensor as per the mask applied to the input. Variable: The false branch of the tensor as per the mask applied to the input.

Return type:

Variable

Examples

x = layers.data(name='x', shape=[1])
x.persistable = True

y = layers.data(name='y', shape=[1])
y.persistable = True

level = 0

out_true, out_false = layers.split_lod_tensor(
      input=x, mask=y, level=level)

merge_lod_tensor

paddle.v2.fluid.layers.merge_lod_tensor(in_true, in_false, x, mask, level=0)

merge_lod_tensor

This function takes in an input \(x\), the True branch, the False branch and a binary \(mask\). Using this information, this function merges the True and False branches of the tensor into a single output at a certain lod level indicated by \(level\).

Parameters:
  • in_true (tuple|list|None) – The True branch to be merged.
  • in_false (tuple|list|None) – The False branch to be merged.
  • x (tuple|list|None) – The input tensor that contains complete lod information needed to construct the output.
  • mask (list) – A bool column vector which masks the input.
  • level (int) – The specific lod level at which to merge.
Returns:

The merged output tensor.

Return type:

Variable

Examples

x = layers.data(
            name='x', shape=[1], dtype='float32', stop_gradient=False)
y = layers.data(
      name='y', shape=[1], dtype='bool', stop_gradient=False)

level = 0

out_true, out_false = layers.split_lod_tensor(
      input=x, mask=y, level=level)
out = layers.merge_lod_tensor(
      in_true=out_true, in_false=out_false, mask=y, x=x, level=level)

cos_sim

paddle.v2.fluid.layers.cos_sim(X, Y, **kwargs)

This function computes the cosine similarity between two tensors X and Y and returns the result as the output.
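
Examples

A minimal sketch (shapes are illustrative):

x = fluid.layers.data(name='x', shape=[32], dtype='float32')
y = fluid.layers.data(name='y', shape=[32], dtype='float32')
sim = fluid.layers.cos_sim(X=x, Y=y)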

cross_entropy

paddle.v2.fluid.layers.cross_entropy(input, label, **kwargs)

Cross Entropy Layer

This layer computes the cross entropy between input and label. It supports both standard cross-entropy and soft-label cross-entropy loss computation.

  1. One-hot cross-entropy:

    soft_label = False, Label[i, 0] indicates the class index for sample i:

    \[Y[i] = -\log(X[i, Label[i]])\]
  2. Soft-label cross-entropy:

    soft_label = True, Label[i, j] indicates the soft label of class j for sample i:

    \[Y[i] = \sum_j{-Label[i, j] * log(X[i, j])}\]

    Please make sure that in this case the summation of each row of label equals one.

  3. One-hot cross-entropy with vectorized label:

    As a special case of 2), when each row of ‘label’ has only one non-zero element which is equal to 1, soft-label cross-entropy degenerates to a one-hot cross-entropy with one-hot label representation.

Parameters:
  • input (Variable|list) – a 2-D tensor with shape [N x D], where N is the batch size and D is the number of classes. This input is a probability computed by the previous operator, which is almost always the result of a softmax operator.
  • label (Variable|list) – the ground truth which is a 2-D tensor. When soft_label is set to False, label is a tensor<int64> with shape [N x 1]. When soft_label is set to True, label is a tensor<float/double> with shape [N x D].
  • soft_label (bool, via **kwargs) – a flag indicating whether to interpret the given labels as soft labels, default False.
Returns:

A 2-D tensor with shape [N x 1], the cross entropy loss.

Raises:

ValueError – 1) the 1st dimension of input and label are not equal; 2) when soft_label == True, and the 2nd dimension of input and label are not equal; 3) when soft_label == False, and the 2nd dimension of label is not 1.

Examples

predict = fluid.layers.fc(input=net, size=classdim, act='softmax')
cost = fluid.layers.cross_entropy(input=predict, label=label)

square_error_cost

paddle.v2.fluid.layers.square_error_cost(input, label, **kwargs)

Square error cost layer

This layer accepts input predictions and target label and returns the squared error cost. For predictions, \(X\), and target labels, \(Y\), the equation is:

\[Out = (X - Y)^2\]

In the above equation:

  • \(X\): Input predictions, a tensor.
  • \(Y\): Input labels, a tensor.
  • \(Out\): Output value, same shape with \(X\).
Parameters:
  • input (Variable) – Input tensor, has predictions.
  • label (Variable) – Label tensor, has target labels.
Returns:

The tensor variable storing the element-wise squared error difference of input and label.

Return type:

Variable

Examples

y = layers.data(name='y', shape=[1], dtype='float32')
y_predict = layers.data(name='y_predict', shape=[1], dtype='float32')
cost = layers.square_error_cost(input=y_predict, label=y)

accuracy

paddle.v2.fluid.layers.accuracy(input, label, k=1, correct=None, total=None, **kwargs)

This function computes the accuracy using the input and label. Internally, it uses the top_k values of the input and their indices to compare against the label.
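
Examples

A minimal sketch (net, classdim and label are assumed upstream variables):

predict = fluid.layers.fc(input=net, size=classdim, act='softmax')
acc = fluid.layers.accuracy(input=predict, label=label, k=1)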

sequence_conv

paddle.v2.fluid.layers.sequence_conv(input, num_filters, filter_size=3, filter_stride=1, padding=None, bias_attr=None, param_attr=None, act=None)

This function creates the op for sequence_conv, using the inputs and other convolutional configurations for the filters and stride as given in the input parameters to the function.
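
Examples

A minimal sketch (the input is assumed to be a 1-level LoDTensor of 128-dimensional steps):

x = fluid.layers.data(name='x', shape=[128], dtype='float32', lod_level=1)
conv = fluid.layers.sequence_conv(input=x, num_filters=64, filter_size=3)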

conv2d

paddle.v2.fluid.layers.conv2d(input, num_filters, filter_size, stride=None, padding=None, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, act=None)

Convolution2D Layer

The convolution2D layer calculates the output based on the input, filter and strides, paddings, dilations, groups parameters. Input(Input) and Output(Output) are in NCHW format, where N is the batch size, C is the number of channels, H is the height of the feature, and W is the width of the feature. For the details of the convolution layer, please refer to UFLDL’s convolution. If the bias attribute and activation type are provided, bias is added to the output of the convolution, and the corresponding activation function is applied to the final result. For each input \(X\), the equation is:

\[Out = \sigma (W \ast X + b)\]

In the above equation:

  • \(X\): Input value, a tensor with NCHW format.
  • \(W\): Filter value, a tensor with MCHW format.
  • \(\ast\): Convolution operation.
  • \(b\): Bias value, a 2-D tensor with shape [M, 1].
  • \(\sigma\): Activation function.
  • \(Out\): Output value, the shape of \(Out\) and \(X\) may be different.

Example

Input:

Input shape: $(N, C_{in}, H_{in}, W_{in})$

Filter shape: $(C_{out}, C_{in}, H_f, W_f)$

Output:
Output shape: $(N, C_{out}, H_{out}, W_{out})$

Where

\[\begin{split}H_{out}&= \frac{(H_{in} + 2 * paddings[0] - (dilations[0] * (H_f - 1) + 1))}{strides[0]} + 1 \\ W_{out}&= \frac{(W_{in} + 2 * paddings[1] - (dilations[1] * (W_f - 1) + 1))}{strides[1]} + 1\end{split}\]
Parameters:
  • input (Variable) – The input image with [N, C, H, W] format.
  • num_filters (int) – The number of filters. It is the same as the number of output image channels.
  • filter_size (int|tuple|None) – The filter size. If filter_size is a tuple, it must contain two integers, (filter_size_H, filter_size_W). Otherwise, the filter will be a square.
  • stride (int|tuple) – The stride size. If stride is a tuple, it must contain two integers, (stride_H, stride_W). Otherwise, the stride_H = stride_W = stride. Default: stride = 1.
  • padding (int|tuple) – The padding size. If padding is a tuple, it must contain two integers, (padding_H, padding_W). Otherwise, the padding_H = padding_W = padding. Default: padding = 0.
  • groups (int) – The groups number of the Conv2d Layer. According to grouped convolution in Alex Krizhevsky’s Deep CNN paper: when group=2, the first half of the filters is only connected to the first half of the input channels, while the second half of the filters is only connected to the second half of the input channels. Default: groups=1
  • param_attr (ParamAttr) – The parameters to the Conv2d Layer. Default: None
  • bias_attr (ParamAttr) – Bias parameter for the Conv2d layer. Default: None
  • use_cudnn (bool) – Use cudnn kernel or not, it is valid only when the cudnn library is installed. Default: True
  • act (str) – Activation type. Default: None
Returns:

The tensor variable storing the convolution and non-linearity activation result.

Return type:

Variable

Raises:

ValueError – If the shapes of input, filter_size, stride, padding and groups mismatch.

Examples

data = fluid.layers.data(name='data', shape=[3, 32, 32], dtype='float32')
conv2d = fluid.layers.conv2d(input=data, num_filters=2, filter_size=3, act="relu")

sequence_pool

paddle.v2.fluid.layers.sequence_pool(input, pool_type, **kwargs)

This function adds the operator for sequence pooling. It pools features of all time-steps of each instance, and is applied on top of the input using the pool_type mentioned in the parameters.

It supports four pool_type:

  • average: \(Out[i] = \frac{\sum_i X_i}{N}\)
  • sum: \(Out[i] = \sum_jX_{ij}\)
  • sqrt: \(Out[i] = \frac{\sum_jX_{ij}}{\sqrt{len(X_i)}}\)
  • max: \(Out[i] = max(X_i)\)
x is a 1-level LoDTensor:
  x.lod = [[0, 2, 5, 7]]
  x.data = [1, 3, 2, 4, 6, 5, 1]
  x.dims = [7, 1]

then output is a Tensor:
  out.dim = [3, 1]
  with condition len(x.lod[-1]) - 1 == out.dims[0]

for different pool_type:
  average: out.data = [2, 4, 3], where 2=(1+3)/2, 4=(2+4+6)/3, 3=(5+1)/2
  sum    : out.data = [4, 12, 6], where 4=1+3, 12=2+4+6, 6=5+1
  sqrt   : out.data = [2.82, 6.93, 4.24], where 2.82=(1+3)/sqrt(2),
             6.93=(2+4+6)/sqrt(3), 4.24=(5+1)/sqrt(2)
  max    : out.data = [3, 6, 5], where 3=max(1,3), 6=max(2,4,6), 5=max(5,1)
Parameters:
  • input (variable) – The input variable which is a LoDTensor.
  • pool_type (string) – The pooling type of sequence_pool. It supports average, sum, sqrt and max.
Returns:

The sequence pooling variable which is a Tensor.

Examples

x = fluid.layers.data(name='x', shape=[7, 1],
                 dtype='float32', lod_level=1)
avg_x = fluid.layers.sequence_pool(input=x, pool_type='average')
sum_x = fluid.layers.sequence_pool(input=x, pool_type='sum')
sqrt_x = fluid.layers.sequence_pool(input=x, pool_type='sqrt')
max_x = fluid.layers.sequence_pool(input=x, pool_type='max')

sequence_first_step

paddle.v2.fluid.layers.sequence_first_step(input, **kwargs)

This function gets the first step of the sequence.

x is a 1-level LoDTensor:
  x.lod = [[0, 2, 5, 7]]
  x.data = [1, 3, 2, 4, 6, 5, 1]
  x.dims = [7, 1]

then output is a Tensor:
  out.dim = [3, 1]
  with condition len(x.lod[-1]) - 1 == out.dims[0]
  out.data = [1, 2, 5], where 1=first(1,3), 2=first(2,4,6), 5=first(5,1)
Parameters:input (variable) – The input variable which is a LoDTensor.
Returns:The sequence’s first step variable which is a Tensor.

Examples

x = fluid.layers.data(name='x', shape=[7, 1],
                 dtype='float32', lod_level=1)
x_first_step = fluid.layers.sequence_first_step(input=x)

sequence_last_step

paddle.v2.fluid.layers.sequence_last_step(input, **kwargs)

This function gets the last step of the sequence.

x is a 1-level LoDTensor:
  x.lod = [[0, 2, 5, 7]]
  x.data = [1, 3, 2, 4, 6, 5, 1]
  x.dims = [7, 1]

then output is a Tensor:
  out.dim = [3, 1]
  with condition len(x.lod[-1]) - 1 == out.dims[0]
  out.data = [3, 6, 1], where 3=last(1,3), 6=last(2,4,6), 1=last(5,1)
Parameters:input (variable) – The input variable which is a LoDTensor.
Returns:The sequence’s last step variable which is a Tensor.

Examples

x = fluid.layers.data(name='x', shape=[7, 1],
                 dtype='float32', lod_level=1)
x_last_step = fluid.layers.sequence_last_step(input=x)

pool2d

paddle.v2.fluid.layers.pool2d(input, pool_size, pool_type, pool_stride=None, pool_padding=None, global_pooling=False, use_cudnn=True, name=None)

This function adds the operator for pooling in 2 dimensions, using the pooling configurations mentioned in input parameters.
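
Examples

A minimal sketch (shapes are illustrative):

data = fluid.layers.data(name='data', shape=[3, 32, 32], dtype='float32')
pooled = fluid.layers.pool2d(input=data, pool_size=2, pool_type='max', pool_stride=2)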

batch_norm

paddle.v2.fluid.layers.batch_norm(input, act=None, is_test=False, momentum=0.9, epsilon=1e-05, param_attr=None, bias_attr=None, data_layout='NCHW', name=None)

This function helps create an operator to implement the BatchNorm layer using the configurations from the input parameters.
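
Examples

A minimal sketch (batch_norm is typically applied to the output of a convolution):

data = fluid.layers.data(name='data', shape=[3, 32, 32], dtype='float32')
conv = fluid.layers.conv2d(input=data, num_filters=16, filter_size=3)
bn = fluid.layers.batch_norm(input=conv, act='relu')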

beam_search_decode

paddle.v2.fluid.layers.beam_search_decode(ids, scores, name=None)

lod_rank_table

paddle.v2.fluid.layers.lod_rank_table(x, level=0)

LoD Rank Table Operator. Given an input variable x and a LoD level, this layer creates a LoDRankTable object. A LoDRankTable object contains a list of bi-element tuples. Each tuple consists of an index and a length, both of which are int type. Referring to the specified level of LoD, the index is the sequence index number and the length represents the sequence length. Please note that the list is ranked in descending order by the length. The following is an example:

x is a LoDTensor:
    x.lod = [[0,                2, 3],
             [0,             5, 6, 7]]
    x.data = [a, b, c, d, e, f, g]

1. set level to 0:
    Create lod rank table:
        lod_rank_table_obj = lod_rank_table(x, level=0)

    Get:
        lod_rank_table_obj.items() = [(0, 2), (1, 1)]

2. set level to 1:
    Create lod rank table:
        lod_rank_table_obj = lod_rank_table(x, level=1)

    Get:
        lod_rank_table_obj.items() = [(0, 5), (1, 1), (2, 1)]
Parameters:
  • x (Variable) – Input variable, a LoDTensor based on which to create the lod rank table.
  • level (int) – Specify the LoD level, on which to create the lod rank table.
Returns:

The created LoDRankTable object.

Return type:

Variable

Examples

x = fluid.layers.data(name='x', shape=[10],
                dtype='float32', lod_level=1)
out = fluid.layers.lod_rank_table(x=x, level=0)

max_sequence_len

paddle.v2.fluid.layers.max_sequence_len(rank_table)

Max Sequence Len Operator. Given a LoDRankTable object, this layer returns the max length of a batch of sequences. In fact, a LoDRankTable object contains a list of tuples (<sequence index, sequence length>), and the list is already sorted by sequence length in descending order, so the operator just returns the sequence length of the first tuple element.

Parameters:rank_table (Variable) – Input variable which is a LoDRankTable object.
Returns:The max length of sequence.
Return type:Variable

Examples

x = fluid.layers.data(name='x', shape=[10],
                dtype='float32', lod_level=1)
rank_table = fluid.layers.lod_rank_table(x=x, level=0)
max_seq_len = fluid.layers.max_sequence_len(rank_table)

topk

paddle.v2.fluid.layers.topk(input, k)

topk

This function performs the operation of selecting the k largest entries in the input vector and outputs their values and indices as vectors. Thus topk_out[j] is the j-th largest entry in input, and its index is topk_indices[j].

Parameters:
  • input (Variable|list) – The input tensor that has all the data.
  • k (int) – The number of top elements that the function will pick.
Returns:

The variable of type array that contains the k largest entries from input. Variable: The variable of type array that contains the indices of the k largest entries from input.

Return type:

Variable

Examples

x = fluid.layers.data(name='x', shape=[10])
k = 5
array = fluid.layers.topk(x, k)

lod_tensor_to_array

paddle.v2.fluid.layers.lod_tensor_to_array(x, table)

Convert a LOD_TENSOR to an LOD_TENSOR_ARRAY.

Parameters:
  • x (Variable|list) – The LOD tensor to be converted to a LOD tensor array.
  • table (ParamAttr|list) – The variable that stores the level of lod which is ordered by sequence length in descending order.
Returns:

The variable of type array that has been converted from a tensor.

Return type:

Variable

Examples

x = fluid.layers.data(name='x', shape=[10])
table = fluid.layers.lod_rank_table(x, level=0)
array = fluid.layers.lod_tensor_to_array(x, table)

array_to_lod_tensor

paddle.v2.fluid.layers.array_to_lod_tensor(x, table)

Convert a LoDTensorArray to a LoDTensor.

Parameters:
  • x (Variable|list) – The lod tensor array to be converted to a tensor.
  • table (ParamAttr|list) – The variable that stores the level of lod which is ordered by sequence length in descending order.
Returns:

The variable of type tensor that has been converted from an array.

Return type:

Variable

Examples

x = fluid.layers.data(name='x', shape=[10])
table = fluid.layers.lod_rank_table(x, level=0)
array = fluid.layers.lod_tensor_to_array(x, table)
lod_tensor = fluid.layers.array_to_lod_tensor(array, table)

fill_constant

paddle.v2.fluid.layers.fill_constant(shape, dtype, value, force_cpu=False, out=None)

fill_constant

This function creates a tensor with the specified shape and dtype, and initializes it with a constant specified by value.

The attribute stop_gradient of the created tensor is set to True.

Parameters:
  • shape (tuple|list|None) – Shape of the output tensor.
  • dtype (np.dtype|core.DataType|str) – Data type of the output tensor.
  • value (float) – The constant value used to initialize the output tensor.
  • out (Variable) – The output tensor.
Returns:

The tensor variable storing the output.

Return type:

Variable

Examples

data = fluid.layers.fill_constant(shape=[1], value=0, dtype='int64')

fill_constant_batch_size_like

paddle.v2.fluid.layers.fill_constant_batch_size_like(input, shape, dtype, value, input_dim_idx=0, output_dim_idx=0)

fill_constant_batch_size_like

This function creates a tensor of specified shape, dtype and batch size, and initializes this with a constant supplied in value. The batch size is obtained from the input tensor.

It also sets stop_gradient to True.

Parameters:
  • input (Variable) – Tensor whose dimensions will be used to get batch size
  • shape (tuple|list|None) – Shape of output tensor
  • dtype (np.dtype|core.DataType|str) – Data type of output tensor
  • value (float) – Constant value to initialize the output tensor
  • input_dim_idx (int) – Index of input’s batch size dimension
  • output_dim_idx (int) – Index of output’s batch size dimension
Returns:

The tensor variable storing the output

Return type:

Variable

Examples

data = fluid.layers.fill_constant_batch_size_like(
    input=like, shape=[1], value=0, dtype='int64')

ones

paddle.v2.fluid.layers.ones(shape, dtype)

ones

This function creates a tensor of specified shape and dtype, and initializes this with 1.

It also sets stop_gradient to True.

Parameters:
  • shape (tuple|list|None) – Shape of output tensor
  • dtype (np.dtype|core.DataType|str) – Data type of output tensor
Returns:

The tensor variable storing the output

Return type:

Variable

Examples

data = fluid.layers.ones(shape=[1], dtype='int64')

zeros

paddle.v2.fluid.layers.zeros(shape, dtype)

zeros

This function creates a tensor of specified shape and dtype, and initializes this with 0.

It also sets stop_gradient to True.

Parameters:
  • shape (tuple|list|None) – Shape of output tensor
  • dtype (np.dtype|core.DataType|str) – Data type of output tensor
Returns:

The tensor variable storing the output

Return type:

Variable

Examples

data = fluid.layers.zeros(shape=[1], dtype='int64')

increment

paddle.v2.fluid.layers.increment(x, value=1.0, in_place=True)

This function performs an operation that increments each value in the input \(x\) by the given \(value\). This operation is performed in-place by default.

Parameters:
  • x (Variable|list) – The tensor that has the input values.
  • value (float) – The amount by which the values should be incremented.
  • in_place (bool) – If the increment should be performed in-place.
Returns:

The tensor variable storing the element-wise increment of each value in the input.

Return type:

Variable

Examples

data = fluid.layers.data(name='data', shape=[32, 32], dtype='float32')
data = fluid.layers.increment(x=data, value=3.0, in_place=True)

array_write

paddle.v2.fluid.layers.array_write(x, i, array=None)

This function writes the given input variable to the position indicated by the array index in an output LOD_TENSOR_ARRAY. If the output LOD_TENSOR_ARRAY is not given (None), a new one will be created and returned.

Parameters:
  • x (Variable|list) – The input tensor from which the data will be read.
  • i (Variable|list) – The index of the output LOD_TENSOR_ARRAY, pointing to the position to which the input tensor will be written.
  • array (Variable|list) – The output LOD_TENSOR_ARRAY to which the input tensor will be written. If this parameter is None, a new LOD_TENSOR_ARRAY will be created and returned.
Returns:

The output LOD_TENSOR_ARRAY where the input tensor is written.

Return type:

Variable

Examples
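
A minimal sketch (a constant tensor is written at index 10 of a newly created array):

tmp = fluid.layers.zeros(shape=[10], dtype='int32')
i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)
arr = fluid.layers.array_write(tmp, i=i)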

create_array

paddle.v2.fluid.layers.create_array(dtype)

This function creates an array of type \(LOD_TENSOR_ARRAY\) using the LayerHelper.

Parameters:dtype (int|float) – The data type of the elements in the array.
Returns:The tensor variable storing the elements of data type.
Return type:Variable

Examples

data = fluid.layers.create_array(dtype='float32')

less_than

paddle.v2.fluid.layers.less_than(x, y, cond=None, **ignored)

Less than

This layer returns the truth value of \(x < y\) elementwise.

Parameters:
  • x (Variable) – First operand of less_than
  • y (Variable) – Second operand of less_than
  • cond (Variable|None) – Optional output variable to store the result of less_than
Returns:

The tensor variable storing the output of less_than.

Return type:

Variable

Examples

less = fluid.layers.less_than(x=label, y=limit)

array_read

paddle.v2.fluid.layers.array_read(array, i)

This function performs the operation to read data at the given index from an LOD_TENSOR_ARRAY.

Parameters:
  • array (Variable|list) – The input tensor array from which the data will be read.
  • i (Variable|list) – The subscript index in the tensor array that points to the place the data will be read from.
Returns:The tensor type variable that has the read data written to it.
Return type:Variable

Examples
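
A minimal sketch (a tensor is first written at index 0 and then read back):

tmp = fluid.layers.fill_constant(shape=[10], dtype='float32', value=1.0)
i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=0)
arr = fluid.layers.array_write(tmp, i=i)
item = fluid.layers.array_read(array=arr, i=i)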

shrink_memory

paddle.v2.fluid.layers.shrink_memory(x, i, table)

This function creates a shrink_rnn_memory operator using the RankTable mentioned in the input parameter.

array_length

paddle.v2.fluid.layers.array_length(array)

This function performs the operation to find the length of the input LOD_TENSOR_ARRAY.

Parameters:array (LOD_TENSOR_ARRAY) – The input array that will be used to compute the length.
Returns:The length of the input LoDTensorArray.
Return type:Variable

Examples
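
A minimal sketch (writing at index 10 yields an array of length 11):

tmp = fluid.layers.zeros(shape=[10], dtype='int32')
i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)
arr = fluid.layers.array_write(tmp, i=i)
length = fluid.layers.array_length(arr)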

conv2d_transpose

paddle.v2.fluid.layers.conv2d_transpose(input, num_filters, output_size=None, filter_size=None, padding=None, stride=None, dilation=None, param_attr=None, use_cudnn=True, name=None)

The transpose of the conv2d layer.

This layer is also known as the deconvolution layer.

Parameters:
  • input (Variable) – The input image with [N, C, H, W] format.
  • num_filters (int) – The number of filters. It is the same as the number of output image channels.
  • output_size (int|tuple|None) – The output image size. If output size is a tuple, it must contain two integers, (image_H, image_W). This parameter only works when filter_size is None.
  • filter_size (int|tuple|None) – The filter size. If filter_size is a tuple, it must contain two integers, (filter_size_H, filter_size_W). Otherwise, the filter will be a square. None if output_size is used to calculate filter_size.
  • padding (int|tuple) – The padding size. If padding is a tuple, it must contain two integers, (padding_H, padding_W). Otherwise, the padding_H = padding_W = padding.
  • stride (int|tuple) – The stride size. If stride is a tuple, it must contain two integers, (stride_H, stride_W). Otherwise, the stride_H = stride_W = stride.
  • dilation (int|tuple) – The dilation size. If dilation is a tuple, it must contain two integers, (dilation_H, dilation_W). Otherwise, the dilation_H = dilation_W = dilation.
  • param_attr – Parameter Attribute.
  • use_cudnn (bool) – Use cudnn kernel or not, it is valid only when the cudnn library is installed. Default: True
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

Output image.

Return type:

Variable
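
Examples

A minimal sketch (shapes are illustrative):

data = fluid.layers.data(name='data', shape=[3, 32, 32], dtype='float32')
deconv = fluid.layers.conv2d_transpose(input=data, num_filters=2, filter_size=3)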

sequence_expand

paddle.v2.fluid.layers.sequence_expand(x, y, name=None)

Sequence Expand Layer. This layer will expand the input variable x according to the LoD information of y. The following examples explain how sequence_expand works:

* Case 1
    x is a LoDTensor:
        x.lod = [[0,       2, 3],
                 [0, 1,    3, 4]]
        x.data = [a, b, c, d]
        x.dims = [4, 1]

    y is a LoDTensor:
        y.lod = [[0,    2,    4],
                 [0, 3, 6, 7, 8]]

    with condition len(y.lod[-1]) - 1 == x.dims[0]

    then output is a 2-level LoDTensor:
        out.lod = [[0,                2,    4],
                   [0,       3,       6, 7, 8]]
        out.data = [a, a, a, b, b, b, c, d]
        out.dims = [8, 1]

* Case 2
    x is a Tensor:
        x.data = [a, b, c]
        x.dims = [3, 1]

    y is a LoDTensor:
        y.lod = [[0, 2, 3, 6]]

    with condition len(y.lod[-1]) - 1 == x.dims[0]

    then output is a 1-level LoDTensor:
        out.lod = [[0,    2, 3,      6]]
        out.data = [a, a, b, c, c, c]
        out.dims = [6, 1]
Parameters:
  • x (Variable) – The input variable which is a Tensor or LoDTensor.
  • y (Variable) – The input variable which is a LoDTensor.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The expanded variable which is a LoDTensor.

Return type:

Variable

Examples

x = fluid.layers.data(name='x', shape=[10], dtype='float32')
y = fluid.layers.data(name='y', shape=[10, 20],
                 dtype='float32', lod_level=1)
out = fluid.layers.sequence_expand(x=x, y=y)

gru_unit

paddle.v2.fluid.layers.gru_unit(input, hidden, size, weight=None, bias=None, activation='tanh', gate_activation='sigmoid')

GRU unit layer. The equation of a gru step is:

\[ \begin{align}\begin{aligned}u_t & = actGate(xu_{t} + W_u h_{t-1} + b_u)\\r_t & = actGate(xr_{t} + W_r h_{t-1} + b_r)\\m_t & = actNode(xm_t + W_c dot(r_t, h_{t-1}) + b_m)\\h_t & = dot((1-u_t), m_t) + dot(u_t, h_{t-1})\end{aligned}\end{align} \]

The inputs of the gru unit include \(z_t\) and \(h_{t-1}\). In terms of the equation above, \(z_t\) is split into 3 parts: \(xu_t\), \(xr_t\) and \(xm_t\). This means that in order to implement a full GRU unit operator for an input, a fully connected layer has to be applied first, such that \(z_t = W_{fc}x_t\).

The terms \(u_t\) and \(r_t\) represent the update and reset gates of the GRU cell. Unlike LSTM, GRU has one fewer gate. However, there is an intermediate candidate hidden output, which is denoted by \(m_t\). This layer has three outputs: \(h_t\), \(dot(r_t, h_{t-1})\), and the concatenation of \(u_t\), \(r_t\) and \(m_t\).

Parameters:
  • input (Variable) – The fc transformed input value of current step.
  • hidden (Variable) – The hidden value of lstm unit from previous step.
  • size (integer) – The input dimension value.
  • weight (ParamAttr) – The weight parameters for gru unit. Default: None
  • bias (ParamAttr) – The bias parameters for gru unit. Default: None
  • activation (string) – The activation type for cell (actNode). Default: ‘tanh’
  • gate_activation (string) – The activation type for gates (actGate). Default: ‘sigmoid’
Returns:

The hidden value, reset-hidden value and gate values.

Return type:

tuple

Examples

# assuming we have x_t_data and prev_hidden of size=10
x_t = fluid.layers.fc(input=x_t_data, size=30)
hidden_val, r_h_val, gate_val = fluid.layers.gru_unit(input=x_t,
                                       hidden=prev_hidden, size=30)

lstm_unit

paddle.v2.fluid.layers.lstm_unit(x_t, hidden_t_prev, cell_t_prev, forget_bias=0.0, param_attr=None, bias_attr=None, name=None)

Lstm unit layer. The equation of a lstm step is:

\[ \begin{align}\begin{aligned}i_t & = \sigma(W_{x_i}x_{t} + W_{h_i}h_{t-1} + b_i)\\f_t & = \sigma(W_{x_f}x_{t} + W_{h_f}h_{t-1} + b_f)\\c_t & = f_t c_{t-1} + i_t \tanh(W_{x_c}x_t + W_{h_c}h_{t-1} + b_c)\\o_t & = \sigma(W_{x_o}x_{t} + W_{h_o}h_{t-1} + b_o)\\h_t & = o_t \tanh(c_t)\end{aligned}\end{align} \]

The inputs of the lstm unit include \(x_t\), \(h_{t-1}\) and \(c_{t-1}\). The 2nd dimensions of \(h_{t-1}\) and \(c_{t-1}\) should be the same. The implementation separates the linear transformation from the non-linear transformation. Here, we take \(i_t\) as an example. The linear transformation is applied by calling an fc layer and the equation is:

\[L_{i_t} = W_{x_i}x_{t} + W_{h_i}h_{t-1} + b_i\]

The non-linear transformation is applied by calling lstm_unit_op and the equation is:

\[i_t = \sigma(L_{i_t})\]

This layer has two outputs, \(h_t\) and \(c_t\).

Parameters:
  • x_t (Variable) – The input value of current step, a 2-D tensor with shape M x N, M for batch size and N for input size.
  • hidden_t_prev (Variable) – The hidden value of lstm unit, a 2-D tensor with shape M x S, M for batch size and S for size of lstm unit.
  • cell_t_prev (Variable) – The cell value of lstm unit, a 2-D tensor with shape M x S, M for batch size and S for size of lstm unit.
  • forget_bias (float) – The forget bias of lstm unit.
  • param_attr (ParamAttr) – The attributes of parameter weights, used to set initializer, name etc.
  • bias_attr (ParamAttr) – The attributes of bias weights; if not False, bias weights will be created and set to the default value.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The hidden value and cell value of lstm unit.

Return type:

tuple

Raises:

ValueError – If the rank of x_t, hidden_t_prev or cell_t_prev is not 2, or if the 1st dimensions of x_t, hidden_t_prev and cell_t_prev are not the same, or if the 2nd dimensions of hidden_t_prev and cell_t_prev are not the same.

Examples

x_t = fluid.layers.fc(input=x_t_data, size=10)
prev_hidden = fluid.layers.fc(input=prev_hidden_data, size=30)
prev_cell = fluid.layers.fc(input=prev_cell_data, size=30)
hidden_value, cell_value = fluid.layers.lstm_unit(x_t=x_t,
                                       hidden_t_prev=prev_hidden,
                                       cell_t_prev=prev_cell)

sequence_softmax

paddle.v2.fluid.layers.sequence_softmax(**kwargs)

Sequence Softmax Operator.

SequenceSoftmaxOp computes the softmax activation among all time-steps for each sequence. The dimension of each time-step should be 1. Thus, the shape of input Tensor can be either [N, 1] or [N], where N is the sum of the length of all sequences.

The algorithm works as follows:

for i-th sequence in a mini-batch:

$$ Out(X[lod[i]:lod[i+1]], :) = \frac{\exp(X[lod[i]:lod[i+1], :])}{\sum(\exp(X[lod[i]:lod[i+1], :]))} $$

For example, for a mini-batch of 3 sequences of variable length, containing 2, 3 and 2 time-steps respectively, the lod of which is [0, 2, 5, 7], softmax will be computed among X[0:2, :], X[2:5, :] and X[5:7, :], and N turns out to be 7.

Parameters:x – (LoDTensor) 1-D or 2-D input LoDTensor with the 2-nd dimension of length 1. Duplicable: False Optional: False
Returns:(LoDTensor) 1-D or 2-D output LoDTensor with the 2-nd dimension of length 1.
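
Examples

A minimal sketch (the input is assumed to be a 1-level LoDTensor with one value per time-step):

x = fluid.layers.data(name='x', shape=[1], dtype='float32', lod_level=1)
probs = fluid.layers.sequence_softmax(x=x)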

reduce_sum

paddle.v2.fluid.layers.reduce_sum(input, dim=None, keep_dim=False, name=None)

Computes the sum of tensor elements over the given dimension.

Parameters:
  • input (Variable) – The input variable which is a Tensor or LoDTensor.
  • dim (int|None) – The dimension along which the sum is performed. If None, sum all elements of input and return a Tensor variable with a single element, otherwise must be in the range \([-rank(input), rank(input))\). If \(dim < 0\), the dimension to reduce is \(rank + dim\).
  • keep_dim (bool) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than the input unless keep_dim is true.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The reduced Tensor variable.

Return type:

Variable

Examples

# x is a Tensor variable with following elements:
#    [[0.2, 0.3, 0.5, 0.9]
#     [0.1, 0.2, 0.6, 0.7]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_sum(x)  # [3.5]
fluid.layers.reduce_sum(x, dim=0)  # [0.3, 0.5, 1.1, 1.6]
fluid.layers.reduce_sum(x, dim=-1)  # [1.9, 1.6]
fluid.layers.reduce_sum(x, dim=1, keep_dim=True)  # [[1.9], [1.6]]

reduce_mean

paddle.v2.fluid.layers.reduce_mean(input, dim=None, keep_dim=False, name=None)

Computes the mean of tensor elements over the given dimension.

Parameters:
  • input (Variable) – The input variable which is a Tensor or LoDTensor.
  • dim (int|None) – The dimension along which the mean is computed. If None, compute the mean over all elements of input and return a Tensor variable with a single element, otherwise must be in the range \([-rank(input), rank(input))\). If \(dim < 0\), the dimension to reduce is \(rank + dim\).
  • keep_dim (bool) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than the input unless keep_dim is true.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The reduced Tensor variable.

Return type:

Variable

Examples

# x is a Tensor variable with following elements:
#    [[0.2, 0.3, 0.5, 0.9]
#     [0.1, 0.2, 0.6, 0.7]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_mean(x)  # [0.4375]
fluid.layers.reduce_mean(x, dim=0)  # [0.15, 0.25, 0.55, 0.8]
fluid.layers.reduce_mean(x, dim=-1)  # [0.475, 0.4]
fluid.layers.reduce_mean(x, dim=1, keep_dim=True)  # [[0.475], [0.4]]

reduce_max

paddle.v2.fluid.layers.reduce_max(input, dim=None, keep_dim=False, name=None)

Computes the maximum of tensor elements over the given dimension.

Parameters:
  • input (Variable) – The input variable which is a Tensor or LoDTensor.
  • dim (int|None) – The dimension along which the maximum is computed. If None, compute the maximum over all elements of input and return a Tensor variable with a single element, otherwise must be in the range \([-rank(input), rank(input))\). If \(dim < 0\), the dimension to reduce is \(rank + dim\).
  • keep_dim (bool) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than the input unless keep_dim is true.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The reduced Tensor variable.

Return type:

Variable

Examples

# x is a Tensor variable with following elements:
#    [[0.2, 0.3, 0.5, 0.9]
#     [0.1, 0.2, 0.6, 0.7]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_max(x)  # [0.9]
fluid.layers.reduce_max(x, dim=0)  # [0.2, 0.3, 0.6, 0.9]
fluid.layers.reduce_max(x, dim=-1)  # [0.9, 0.7]
fluid.layers.reduce_max(x, dim=1, keep_dim=True)  # [[0.9], [0.7]]

reduce_min

paddle.v2.fluid.layers.reduce_min(input, dim=None, keep_dim=False, name=None)

Computes the minimum of tensor elements over the given dimension.

Parameters:
  • input (Variable) – The input variable which is a Tensor or LoDTensor.
  • dim (int|None) – The dimension along which the minimum is computed. If None, compute the minimum over all elements of input and return a Tensor variable with a single element, otherwise must be in the range \([-rank(input), rank(input))\). If \(dim < 0\), the dimension to reduce is \(rank + dim\).
  • keep_dim (bool) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than the input unless keep_dim is true.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The reduced Tensor variable.

Return type:

Variable

Examples

# x is a Tensor variable with the following elements:
#    [[0.2, 0.3, 0.5, 0.9],
#     [0.1, 0.2, 0.6, 0.7]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_min(x)  # [0.1]
fluid.layers.reduce_min(x, dim=0)  # [0.1, 0.2, 0.5, 0.7]
fluid.layers.reduce_min(x, dim=-1)  # [0.2, 0.1]
fluid.layers.reduce_min(x, dim=1, keep_dim=True)  # [[0.2], [0.1]]

split

paddle.v2.fluid.layers.split(input, num_or_sections, dim=-1, name=None)

Split the input tensor into multiple sub-tensors.

Parameters:
  • input (Variable) – The input variable which is a Tensor or LoDTensor.
  • num_or_sections (int|list) – If num_or_sections is an integer, it indicates the number of equal-sized sub-tensors that the tensor will be divided into. If num_or_sections is a list of integers, the length of the list indicates the number of sub-tensors, and the integers indicate, in order, the sizes of the sub-tensors along the dim dimension.
  • dim (int) – The dimension along which to split. If \(dim < 0\), the dimension to split along is \(rank(input) + dim\).
  • name (str|None) – A name for this layer (optional). If set None, the layer will be named automatically.
Returns:

The list of segmented tensor variables.

Return type:

List

Examples

# x is a Tensor variable with shape [3, 9, 5]:
x0, x1, x2 = fluid.layers.split(x, num_or_sections=3, dim=1)
x0.shape  # [3, 3, 5]
x1.shape  # [3, 3, 5]
x2.shape  # [3, 3, 5]
x0, x1, x2 = fluid.layers.split(x, num_or_sections=[2, 3, 4], dim=1)
x0.shape  # [3, 2, 5]
x1.shape  # [3, 3, 5]
x2.shape  # [3, 4, 5]

matmul

paddle.v2.fluid.layers.matmul(x, y, transpose_x=False, transpose_y=False, name=None)

Applies matrix multiplication to two tensors. Currently, the input tensors may have any rank, but when the rank of either input is larger than 3, the two inputs must have equal rank.

The actual behavior depends on the shapes of \(x\), \(y\) and the flag values of transpose_x, transpose_y. Specifically:

  • If a transpose flag is specified, the last two dimensions of the tensor are transposed. If the tensor is rank-1 of shape \([D]\), then for \(x\) it is treated as \([1, D]\) in nontransposed form and as \([D, 1]\) in transposed form, whereas for \(y\) it is the opposite: It is treated as \([D, 1]\) in nontransposed form and as \([1, D]\) in transposed form.
  • After the transpose, the two tensors are 2-D or n-D, and matrix multiplication proceeds as follows.
    • If both are 2-D, they are multiplied like conventional matrices.
    • If either is n-D, it is treated as a stack of matrices residing in the last two dimensions and a batched matrix multiply supporting broadcast applies on the two tensors.

Also note that if the raw tensor \(x\) or \(y\) is rank-1 and nontransposed, the prepended or appended dimension \(1\) will be removed after matrix multiplication.

Parameters:
  • x (Variable) – The input variable which is a Tensor or LoDTensor.
  • y (Variable) – The input variable which is a Tensor or LoDTensor.
  • transpose_x (bool) – Whether to transpose \(x\) before multiplication.
  • transpose_y (bool) – Whether to transpose \(y\) before multiplication.
  • name (str|None) – A name for this layer (optional). If set None, the layer will be named automatically.
Returns:

The product Tensor variable.

Return type:

Variable

Examples

# Examples to clarify shapes of the inputs and output
# x: [B, ..., M, K], y: [B, ..., K, N]
fluid.layers.matmul(x, y)  # out: [B, ..., M, N]
# x: [B, M, K], y: [B, K, N]
fluid.layers.matmul(x, y)  # out: [B, M, N]
# x: [B, M, K], y: [K, N]
fluid.layers.matmul(x, y)  # out: [B, M, N]
# x: [B, M, K], y: [K]
fluid.layers.matmul(x, y)  # out: [B, M]
# x: [M, K], y: [K, N]
fluid.layers.matmul(x, y)  # out: [M, N]
# x: [K], y: [K]
fluid.layers.matmul(x, y)  # out: [1]
# x: [M], y: [N]
fluid.layers.matmul(x, y, True, True)  # out: [M, N]
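
A runnable sketch (variable names and shapes here are illustrative assumptions, not from the original reference; note that fluid.layers.data prepends a batch dimension):

x = fluid.layers.data(name='mat_x', shape=[4, 5], dtype='float32')
y = fluid.layers.data(name='mat_y', shape=[5, 6], dtype='float32')
out = fluid.layers.matmul(x, y)  # out: [batch, 4, 6]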

logsigmoid

paddle.v2.fluid.layers.logsigmoid(**kwargs)

Logsigmoid Activation Operator

$$out = \log \frac{1}{1 + e^{-x}}$$

Parameters: x – Input of LogSigmoid operator (Duplicable: False, Optional: False)
Returns: Output of LogSigmoid operator
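
Examples

A minimal usage sketch (not part of the original reference): these auto-generated activation ops take their input through the keyword x, and the same pattern applies to exp, relu, tanh, and the other single-input operators below.

x = fluid.layers.data(name='x', shape=[10], dtype='float32')
y = fluid.layers.logsigmoid(x=x)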

exp

paddle.v2.fluid.layers.exp(**kwargs)

Exp Activation Operator.

$out = e^x$

Parameters: x – Input of Exp operator (Duplicable: False, Optional: False)
Returns: Output of Exp operator

relu

paddle.v2.fluid.layers.relu(**kwargs)

Relu Activation Operator.

$out = max(x, 0)$

Parameters: x – Input of Relu operator (Duplicable: False, Optional: False)
Returns: Output of Relu operator

tanh

paddle.v2.fluid.layers.tanh(**kwargs)

Tanh Activation Operator.

$$out = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$

Parameters: x – Input of Tanh operator (Duplicable: False, Optional: False)
Returns: Output of Tanh operator

tanh_shrink

paddle.v2.fluid.layers.tanh_shrink(**kwargs)

TanhShrink Activation Operator.

$$out = x - \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$

Parameters: x – Input of TanhShrink operator (Duplicable: False, Optional: False)
Returns: Output of TanhShrink operator

softshrink

paddle.v2.fluid.layers.softshrink(**kwargs)

Softshrink Activation Operator.

$$out = \begin{cases} x - \lambda, & \text{if } x > \lambda \\ x + \lambda, & \text{if } x < -\lambda \\ 0, & \text{otherwise} \end{cases}$$

Parameters:
  • x – Input of Softshrink operator (Duplicable: False, Optional: False)
  • lambda (FLOAT) – non-negative offset
Returns:

Output of Softshrink operator
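
Examples

Because lambda is a reserved word in Python, a hedged sketch (assuming this generated op accepts its attribute as a keyword argument) passes it through a dict:

x = fluid.layers.data(name='x', shape=[10], dtype='float32')
y = fluid.layers.softshrink(x=x, **{'lambda': 0.5})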

sqrt

paddle.v2.fluid.layers.sqrt(**kwargs)

Sqrt Activation Operator.

$out = \sqrt{x}$

Parameters: x – Input of Sqrt operator (Duplicable: False, Optional: False)
Returns: Output of Sqrt operator

abs

paddle.v2.fluid.layers.abs(**kwargs)

Abs Activation Operator.

$out = |x|$

Parameters: x – Input of Abs operator (Duplicable: False, Optional: False)
Returns: Output of Abs operator

ceil

paddle.v2.fluid.layers.ceil(**kwargs)

Ceil Activation Operator.

$out = ceil(x)$

Parameters: x – Input of Ceil operator (Duplicable: False, Optional: False)
Returns: Output of Ceil operator

floor

paddle.v2.fluid.layers.floor(**kwargs)

Floor Activation Operator.

$out = floor(x)$

Parameters: x – Input of Floor operator (Duplicable: False, Optional: False)
Returns: Output of Floor operator

round

paddle.v2.fluid.layers.round(**kwargs)

Round Activation Operator.

$out = [x]$

Parameters: x – Input of Round operator (Duplicable: False, Optional: False)
Returns: Output of Round operator

reciprocal

paddle.v2.fluid.layers.reciprocal(**kwargs)

Reciprocal Activation Operator.

$$out = \frac{1}{x}$$

Parameters: x – Input of Reciprocal operator (Duplicable: False, Optional: False)
Returns: Output of Reciprocal operator

log

paddle.v2.fluid.layers.log(**kwargs)

Log Activation Operator.

$out = ln(x)$

Natural logarithm of x.

Parameters: x – Input of Log operator (Duplicable: False, Optional: False)
Returns: Output of Log operator

square

paddle.v2.fluid.layers.square(**kwargs)

Square Activation Operator.

$out = x^2$

Parameters: x – Input of Square operator (Duplicable: False, Optional: False)
Returns: Output of Square operator

softplus

paddle.v2.fluid.layers.softplus(**kwargs)

Softplus Activation Operator.

$out = ln(1 + e^{x})$

Parameters: x – Input of Softplus operator (Duplicable: False, Optional: False)
Returns: Output of Softplus operator

softsign

paddle.v2.fluid.layers.softsign(**kwargs)

Softsign Activation Operator.

$$out = \frac{x}{1 + |x|}$$

Parameters: x – Input of Softsign operator (Duplicable: False, Optional: False)
Returns: Output of Softsign operator

brelu

paddle.v2.fluid.layers.brelu(**kwargs)

BRelu Activation Operator.

$out = \min(\max(x, t_{min}), t_{max})$

Parameters:
  • x – Input of BRelu operator (Duplicable: False, Optional: False)
  • t_min (FLOAT) – The min marginal value of BRelu
  • t_max (FLOAT) – The max marginal value of BRelu
Returns:

Output of BRelu operator
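
Examples

A minimal sketch (the attribute values and the keyword-argument call pattern are illustrative assumptions):

x = fluid.layers.data(name='x', shape=[10], dtype='float32')
y = fluid.layers.brelu(x=x, t_min=1.0, t_max=24.0)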

leaky_relu

paddle.v2.fluid.layers.leaky_relu(**kwargs)

LeakyRelu Activation Operator.

$out = \max(x, \alpha \cdot x)$

Parameters:
  • x – Input of LeakyRelu operator (Duplicable: False, Optional: False)
  • alpha (FLOAT) – The small negative slope
Returns:

Output of LeakyRelu operator
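
Examples

A minimal sketch (the alpha value is an illustrative assumption):

x = fluid.layers.data(name='x', shape=[10], dtype='float32')
y = fluid.layers.leaky_relu(x=x, alpha=0.02)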

soft_relu

paddle.v2.fluid.layers.soft_relu(**kwargs)

SoftRelu Activation Operator.

$out = \ln(1 + \exp(\max(\min(x, threshold), -threshold)))$

Parameters:
  • x – Input of SoftRelu operator (Duplicable: False, Optional: False)
  • threshold (FLOAT) – The threshold value of SoftRelu
Returns:

Output of SoftRelu operator
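
Examples

A minimal sketch (the threshold value is an illustrative assumption):

x = fluid.layers.data(name='x', shape=[10], dtype='float32')
y = fluid.layers.soft_relu(x=x, threshold=40.0)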

elu

paddle.v2.fluid.layers.elu(**kwargs)

ELU Activation Operator.

Applies the following element-wise computation on the input according to https://arxiv.org/abs/1511.07289.

$out = \max(0, x) + \min(0, \alpha \cdot (e^{x} - 1))$

Parameters:
  • x – Input of ELU operator (Duplicable: False, Optional: False)
  • alpha (FLOAT) – The alpha value of ELU
Returns:

Output of ELU operator
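
Examples

A minimal sketch (alpha=1.0 recovers the standard ELU from the cited paper; the call pattern is an assumption):

x = fluid.layers.data(name='x', shape=[10], dtype='float32')
y = fluid.layers.elu(x=x, alpha=1.0)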

relu6

paddle.v2.fluid.layers.relu6(**kwargs)

Relu6 Activation Operator.

$out = \min(\max(0, x), 6)$

Parameters:
  • x – Input of Relu6 operator (Duplicable: False, Optional: False)
  • threshold (FLOAT) – The threshold value of Relu6
Returns:

Output of Relu6 operator
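
Examples

A minimal sketch (threshold=6.0 gives the conventional ReLU6 cap; the call pattern is an assumption):

x = fluid.layers.data(name='x', shape=[10], dtype='float32')
y = fluid.layers.relu6(x=x, threshold=6.0)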

pow

paddle.v2.fluid.layers.pow(**kwargs)

Pow Activation Operator.

$out = x^{factor}$

Parameters:
  • x – Input of Pow operator (Duplicable: False, Optional: False)
  • factor (FLOAT) – The exponential factor of Pow
Returns:

Output of Pow operator
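
Examples

A minimal sketch (factor=2.0 squares the input element-wise; the call pattern is an assumption):

x = fluid.layers.data(name='x', shape=[10], dtype='float32')
y = fluid.layers.pow(x=x, factor=2.0)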

hard_shrink

paddle.v2.fluid.layers.hard_shrink(**kwargs)

HardShrink Activation Operator.

$$out = \begin{cases} x, & \text{if } x > \lambda \\ x, & \text{if } x < -\lambda \\ 0, & \text{otherwise} \end{cases}$$

Parameters:
  • x – Input of HardShrink operator (Duplicable: False, Optional: False)
  • threshold (FLOAT) – The value of threshold for HardShrink
Returns:

Output of HardShrink operator
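
Examples

A minimal sketch (the threshold value is an illustrative assumption):

x = fluid.layers.data(name='x', shape=[10], dtype='float32')
y = fluid.layers.hard_shrink(x=x, threshold=0.5)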

thresholded_relu

paddle.v2.fluid.layers.thresholded_relu(**kwargs)

ThresholdedRelu Activation Operator.

$$out = \begin{cases} x, & \text{if } x > threshold \\ 0, & \text{otherwise} \end{cases}$$

Parameters:
  • x – Input of ThresholdedRelu operator (Duplicable: False, Optional: False)
  • threshold (FLOAT) – The threshold location of activation
Returns:

Output of ThresholdedRelu operator
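
Examples

A minimal sketch (the threshold value is an illustrative assumption):

x = fluid.layers.data(name='x', shape=[10], dtype='float32')
y = fluid.layers.thresholded_relu(x=x, threshold=1.0)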

hard_sigmoid

paddle.v2.fluid.layers.hard_sigmoid(**kwargs)

HardSigmoid Activation Operator.

Segment-wise linear approximation of sigmoid (https://arxiv.org/abs/1603.00391), which is much faster than sigmoid.

$out = \max(0, \min(1, slope \cdot x + offset))$

The slope should be positive, and the offset can be either positive or negative. The default slope and offset are set according to the above reference, and it is recommended to use the defaults for this activation.

Parameters:
  • x – Input of HardSigmoid operator (Duplicable: False, Optional: False)
  • slope (FLOAT) – Slope for linear approximation of sigmoid
  • offset (FLOAT) – Offset for linear approximation of sigmoid
Returns:

Output of HardSigmoid operator
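
Examples

A minimal sketch; as the reference above recommends, slope and offset are left at their defaults:

x = fluid.layers.data(name='x', shape=[10], dtype='float32')
y = fluid.layers.hard_sigmoid(x=x)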

swish

paddle.v2.fluid.layers.swish(**kwargs)

Swish Activation Operator.

$$out = \frac{x}{1 + e^{-\beta x}}$$

Parameters:
  • x – Input of Swish operator (Duplicable: False, Optional: False)
  • beta (FLOAT) – Constant beta of swish operator
Returns:

Output of Swish operator
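
Examples

A minimal sketch (beta=1.0 is an illustrative assumption; with beta = 1 swish reduces to x * sigmoid(x)):

x = fluid.layers.data(name='x', shape=[10], dtype='float32')
y = fluid.layers.swish(x=x, beta=1.0)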

ctc_greedy_decoder

paddle.v2.fluid.layers.ctc_greedy_decoder(input, blank, name=None)

This op decodes sequences with a greedy policy in two steps:
  1. Get the index of the maximum value for each row of input, i.e. numpy.argmax(input, axis=1).
  2. For each sequence from step 1, merge repeated tokens between two blanks and delete all blanks.

A simple example as below:

Given:

input.data = [[0.6, 0.1, 0.3, 0.1],
              [0.3, 0.2, 0.4, 0.1],
              [0.1, 0.5, 0.1, 0.3],
              [0.5, 0.1, 0.3, 0.1],

              [0.5, 0.1, 0.3, 0.1],
              [0.2, 0.2, 0.2, 0.4],
              [0.2, 0.2, 0.1, 0.5],
              [0.5, 0.1, 0.3, 0.1]]

input.lod = [[0, 4, 8]]

Then:

output.data = [[2],
               [1],
               [3]]

output.lod = [[0, 2, 3]]
Parameters:
  • input (Variable) – (LoDTensor<float>), the probabilities of variable-length sequences, which is a 2-D Tensor with LoD information. Its shape is [Lp, num_classes + 1], where Lp is the sum of all input sequences' lengths and num_classes is the true number of classes (not including the blank label).
  • blank (int) – the blank label index of Connectionist Temporal Classification (CTC) loss, which is in the half-open interval [0, num_classes + 1).
Returns:

CTC greedy decode result.

Return type:

Variable

Examples

x = fluid.layers.data(name='x', shape=[8], dtype='float32')
cost = fluid.layers.ctc_greedy_decoder(input=x, blank=0)

l2_normalize

paddle.v2.fluid.layers.l2_normalize(x, axis, epsilon=1e-12, name=None)

L2 normalize Layer

The l2 normalize layer normalizes x along dimension axis using an L2 norm. For a 1-D tensor (axis is fixed to 0), this layer computes

output = x / sqrt(max(sum(x**2), epsilon))

For x with more dimensions, this layer independently normalizes each 1-D slice along dimension axis.

Parameters:
  • x (Variable|list) – The input tensor to l2_normalize layer.
  • axis (int) – Dimension along which to normalize the input.
  • epsilon (float) – A lower bound value for x's l2 norm. sqrt(epsilon) will be used as the divisor if the l2 norm of x is less than sqrt(epsilon).
  • name (str|None) – A name for this layer (optional). If set None, the layer will be named automatically.
Returns:

The output tensor variable.

Return type:

Variable

Examples

data = fluid.layers.data(name="data",
                         shape=(3, 17, 13),
                         dtype="float32")
normed = fluid.layers.l2_normalize(x=data, axis=1)

sequence_reshape

paddle.v2.fluid.layers.sequence_reshape(input, new_dim)

Sequence Reshape Layer

This layer rearranges the input sequences. The new dimension is set by the user, and the length of each sequence is computed from the original length, the original dimension, and the new dimension. The following example illustrates how this layer works:

x is a LoDTensor:
    x.lod  = [[0, 2, 6]]
    x.data = [[1, 2], [3, 4],
              [5, 6], [7, 8], [9, 10], [11, 12]]
    x.dims = [6, 2]

set new_dim = 4

then out is a LoDTensor:
    out.lod  = [[0, 1, 3]]
    out.data = [[1, 2, 3, 4],
                [5, 6, 7, 8], [9, 10, 11, 12]]
    out.dims = [3, 4]

Currently, only 1-level LoDTensor is supported. Please make sure that (original length * original dimension) is divisible by the new dimension with no remainder for each sequence.

Parameters:
  • input (Variable) – A 2-D LoDTensor (default: LoDTensor<float>) with shape [N, M], where M is the dimension.
  • new_dim (int) – New dimension which the input LoDTensor is reshaped to.
Returns:

Reshaped LoDTensor according to new dimension.

Return type:

Variable

Examples

x = fluid.layers.data(name='x', shape=[5, 20],
                      dtype='float32', lod_level=1)
x_reshaped = fluid.layers.sequence_reshape(input=x, new_dim=10)