Layers

Data layer

data

paddle.v2.layer.data

alias of name
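
A minimal usage sketch (the layer name and the 784-dimensional dense_vector type are illustrative, not mandated by the API):

pixel = data(name='pixel',  # illustrative name and dimensionality
             type=paddle.v2.data_type.dense_vector(784))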

Fully Connected Layers

fc

class paddle.v2.layer.fc

The fully connected layer.

The example usage is:

fc = fc(input=layer,
        size=1024,
        act=paddle.v2.activation.Linear(),
        bias_attr=False)

which is equivalent to:

with mixed(size=1024) as fc:
    fc += full_matrix_projection(input=layer)
Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • input (paddle.v2.config_base.Layer | list | tuple) – The input of this layer.
  • size (int) – The dimension of this layer.
  • act (paddle.v2.activation.Base) – Activation Type. paddle.v2.activation.Tanh is the default activation.
  • param_attr (paddle.v2.attr.ParameterAttribute) – The parameter attribute. See paddle.v2.attr.ParameterAttribute for details.
  • bias_attr (paddle.v2.attr.ParameterAttribute | None | bool | Any) – The bias attribute. If the parameter is set to False or an object whose type is not paddle.v2.attr.ParameterAttribute, no bias is defined. If the parameter is set to True, the bias is initialized to zero.
  • layer_attr (paddle.v2.attr.ExtraAttribute | None) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

selective_fc

class paddle.v2.layer.selective_fc

Selective fully connected layer. Different from fc, the output of this layer can be sparse. It requires an additional input to indicate the selected columns of the output. If the selected columns are not specified, selective_fc acts exactly like fc.

The simple usage is:

sel_fc = selective_fc(input=input, size=128, act=paddle.v2.activation.Tanh())
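
When the selected columns are given, a hedged sketch looks like the following (mask is an illustrative sparse binary layer marking the selected output columns):

sel_fc = selective_fc(input=input, select=mask, size=128,  # mask is illustrative
                      act=paddle.v2.activation.Tanh())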
Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • input (paddle.v2.config_base.Layer | list | tuple) – The input of this layer.
  • select (paddle.v2.config_base.Layer) – The layer to select columns to output. It should be a sparse binary matrix, and is treated as the mask of selective fc. If it is not set or set to None, selective_fc acts exactly like fc.
  • size (int) – The dimension of this layer, which should be equal to that of the layer ‘select’.
  • act (paddle.v2.activation.Base) – Activation type. paddle.v2.activation.Tanh is the default activation.
  • pass_generation (bool) – The flag which indicates whether it is during generation.
  • has_selected_colums (bool) – The flag which indicates whether the parameter ‘select’ has been set. True is the default.
  • mul_ratio (float) – A ratio that helps to judge how sparse the output is and determines the computation method for speed consideration.
  • param_attr (paddle.v2.attr.ParameterAttribute) – The parameter attribute. See paddle.v2.attr.ParameterAttribute for details.
  • bias_attr (paddle.v2.attr.ParameterAttribute | None | bool | Any) – The parameter attribute for bias. If this parameter is set to False or an object whose type is not paddle.v2.attr.ParameterAttribute, no bias is defined. If this parameter is set to True, the bias is initialized to zero.
  • layer_attr (paddle.v2.attr.ExtraAttribute | None) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

Conv Layers

conv_operator

class paddle.v2.layer.conv_operator

Different from img_conv, conv_operator is an Operator, which can be used in mixed. It takes two inputs to perform convolution: the first input is the image and the second is the filter kernel. conv_operator only supports GPU mode.

The example usage is:

op = conv_operator(img=input1,
                   filter=input2,
                   filter_size=3,
                   num_filters=64,
                   num_channels=64)
Parameters:
  • img (paddle.v2.config_base.Layer) – The input image.
  • filter (paddle.v2.config_base.Layer) – The input filter.
  • filter_size (int) – The dimension of the filter kernel on the x axis.
  • filter_size_y (int) – The dimension of the filter kernel on the y axis. If the parameter is not set or set to None, it will be set to ‘filter_size’ automatically.
  • num_filters (int) – The number of the output channels.
  • num_channels (int) – The number of the input channels. If the parameter is not set or set to None, it will be automatically set to the channel number of the ‘img’.
  • stride (int) – The stride on the x axis.
  • stride_y (int) – The stride on the y axis. If the parameter is not set or set to None, it will be set to ‘stride’ automatically.
  • padding (int) – The padding size on the x axis.
  • padding_y (int) – The padding size on the y axis. If the parameter is not set or set to None, it will be set to ‘padding’ automatically.
Returns:

A ConvOperator Object.

Return type:

ConvOperator

conv_projection

class paddle.v2.layer.conv_projection

Different from img_conv and conv_op, conv_projection is a Projection, which can be used in mixed and concat. It uses cudnn to implement convolution and only supports GPU mode.

The example usage is:

proj = conv_projection(input=input1,
                       filter_size=3,
                       num_filters=64,
                       num_channels=64)
Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • filter_size (int | tuple | list) – The dimensions of the filter kernel. If the parameter is set to one integer, the two dimensions on the x and y axes will be the same when filter_size_y is not set. If it is set to a list, the first element indicates the dimension on the x axis, and the second is used to specify the dimension on the y axis when filter_size_y is not provided.
  • filter_size_y (int) – The dimension of the filter kernel on the y axis. If the parameter is not set, it will be set automatically according to filter_size.
  • num_filters (int) – The number of filters.
  • num_channels (int) – The number of the input channels.
  • stride (int | tuple | list) – The strides. If the parameter is set to one integer, the strides on the x and y axes will be the same when stride_y is not set. If it is set to a list, the first element indicates the stride on the x axis, and the second is used to specify the stride on the y axis when stride_y is not provided.
  • stride_y (int) – The stride on the y axis.
  • padding (int | tuple | list) – The padding sizes. If the parameter is set to one integer, the padding sizes on the x and y axes will be the same when padding_y is not set. If it is set to a list, the first element indicates the padding size on the x axis, and the second is used to specify the padding size on the y axis when padding_y is not provided.
  • padding_y (int) – The padding size on the y axis.
  • groups (int) – The group number.
  • param_attr (paddle.v2.attr.ParameterAttribute) – The parameter attribute of the convolution. See paddle.v2.attr.ParameterAttribute for details.
  • trans (bool) – Whether it is a ConvTransProjection or a ConvProjection.
Returns:

A Projection Object.

Return type:

ConvTransProjection | ConvProjection

conv_shift

class paddle.v2.layer.conv_shift
This layer performs cyclic convolution on two inputs. For example:
  • a[in]: contains M elements.
  • b[in]: contains N elements (N should be odd).
  • c[out]: contains M elements.
\[c[i] = \sum_{j=-(N-1)/2}^{(N-1)/2}a_{i+j} * b_{j}\]
In this formula:
  • a’s index is computed modulo M. When it is negative, elements are taken from the right side (the end of the array) toward the left.
  • b’s index is computed modulo N. When it is negative, elements are taken from the right side (the end of the array) toward the left.
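
As a reading aid, the formula can be sketched in plain NumPy (an illustrative reference only, not the PaddlePaddle implementation):

import numpy as np

def conv_shift_ref(a, b):
    # cyclic convolution per the formula above: a has M elements, b has N (odd) elements
    M, N = len(a), len(b)
    half = (N - 1) // 2
    return np.array([sum(a[(i + j) % M] * b[j % N] for j in range(-half, half + 1))
                     for i in range(M)])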

The example usage is:

conv_shift = conv_shift(a=layer1, b=layer2)
Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • a (paddle.v2.config_base.Layer) – The first input of this layer.
  • b (paddle.v2.config_base.Layer) – The second input of this layer.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

img_conv

class paddle.v2.layer.img_conv

Convolution layer for images. Currently, PaddlePaddle supports both square and non-square input.

For details of the convolution layer, please refer to UFLDL's convolution.

Convolution transpose (deconv) layer for images. Currently, PaddlePaddle supports both square and non-square input.

For details of the convolution transpose layer, please refer to the following explanation and the references therein: http://datascience.stackexchange.com/questions/6107/what-are-deconvolutional-layers/ . The num_channels parameter means the channel number of the input image. It may be 1 or 3 when the input is raw image pixels (mono or RGB), or it may be the num_filters of the previous layer.

There are several groups of filters in the PaddlePaddle implementation. If the groups attribute is greater than 1, for example groups=2, the input will be split into 2 parts along the channel axis, and the filters will also be split into 2 parts. The first half of the filters is only connected to the first half of the input channels, while the second half of the filters is only connected to the second half of the input. After the convolution of each part of the input is computed, the output is obtained by concatenating the two results.

For details of grouped convolution, please refer to: ImageNet Classification with Deep Convolutional Neural Networks

The example usage is:

conv = img_conv(input=data, filter_size=1, filter_size_y=1,
                num_channels=8,
                num_filters=16, stride=1,
                bias_attr=False,
                act=paddle.v2.activation.Relu())
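
A hedged sketch of grouped convolution (the values are illustrative; with groups=2 the 8 input channels and 16 filters are split into two halves as described above):

grouped_conv = img_conv(input=data, filter_size=3, num_channels=8,
                        num_filters=16, groups=2, stride=1, padding=1,
                        act=paddle.v2.activation.Relu())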
Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • filter_size (int | tuple | list) – The dimensions of the filter kernel. If the parameter is set to one integer, the two dimensions on the x and y axes will be the same when filter_size_y is not set. If it is set to a list, the first element indicates the dimension on the x axis, and the second is used to specify the dimension on the y axis when filter_size_y is not provided.
  • filter_size_y (int) – The dimension of the filter kernel on the y axis. If the parameter is not set, it will be set automatically according to filter_size.
  • num_filters (int) – The number of filters. It is as same as the output image channel.
  • act (paddle.v2.activation.Base) – Activation type. paddle.v2.activation.Relu is the default activation.
  • groups (int) – The group number. 1 is the default group number.
  • stride (int | tuple | list) – The strides. If the parameter is set to one integer, the strides on the x and y axes will be the same when stride_y is not set. If it is set to a list, the first element indicates the stride on the x axis, and the second is used to specify the stride on the y axis when stride_y is not provided. 1 is the default value.
  • stride_y (int) – The stride on the y axis.
  • padding (int | tuple | list) – The padding sizes. If the parameter is set to one integer, the padding sizes on the x and y axes will be the same when padding_y is not set. If it is set to a list, the first element indicates the padding size on the x axis, and the second is used to specify the padding size on the y axis when padding_y is not provided. 0 is the default padding size.
  • padding_y (int) – The padding size on the y axis.
  • dilation (int | tuple | list) – The dimensions of the dilation. If the parameter is set to one integer, the two dimensions on the x and y axes will be the same when dilation_y is not set. If it is set to a list, the first element indicates the dimension on the x axis, and the second is used to specify the dimension on the y axis when dilation_y is not provided. 1 is the default dimension.
  • dilation_y (int) – The dimension of the dilation on the y axis.
  • bias_attr (paddle.v2.attr.ParameterAttribute | None | bool | Any) – The bias attribute. If the parameter is set to False or an object whose type is not paddle.v2.attr.ParameterAttribute, no bias is defined. If the parameter is set to True, the bias is initialized to zero.
  • num_channels (int) – The number of input channels. If the parameter is not set or set to None, its actual value will be automatically set to the channel number of the input.
  • param_attr (paddle.v2.attr.ParameterAttribute) – The parameter attribute. See paddle.v2.attr.ParameterAttribute for details.
  • shared_biases (bool) – Whether biases will be shared between filters or not.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attributes. See paddle.v2.attr.ExtraAttribute for details.
  • trans (bool) – True if it is a convTransLayer, False if it is a convLayer
  • layer_type (basestring) – Specify the layer type. If the dilation’s dimension on one axis is larger than 1, layer_type has to be “cudnn_conv” or “cudnn_convt”. If trans=True, layer_type has to be “exconvt” or “cudnn_convt”, otherwise layer_type has to be either “exconv” or “cudnn_conv”.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

context_projection

class paddle.v2.layer.context_projection

Context Projection.

It just reorganizes the input sequence: starting from context_start, it combines "context_len" elements of the sequence into one context. "context_start" is set to -(context_len - 1) / 2 by default. When a context position is out of the sequence length, the padding is filled with zeros if padding_attr = False; otherwise the padding is trainable.

For example, if the original sequence is [A B C D E F G], context_len is 3 and padding_attr is not set, then after context projection the sequence becomes [ 0AB ABC BCD CDE DEF EFG FG0 ].
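
Since context_projection is a Projection, it is used inside mixed. A hedged sketch (emb and emb_dim are illustrative; the output width is context_len times the input width):

context = mixed(size=emb_dim * 3,  # emb and emb_dim are illustrative
                input=[context_projection(input=emb, context_len=3)])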

Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer, which should be a sequence.
  • context_len (int) – The length of the context.
  • context_start (int) – The start position of the context. The default value is -(context_len - 1)/2
  • padding_attr (bool | paddle.v2.attr.ParameterAttribute) – Parameter attribute of the padding. If the parameter is set to False, padding will be zero. In other cases, the padding is trainable, and its parameter attribute is set by this parameter.
Returns:

Projection object.

Return type:

Projection

row_conv

class paddle.v2.layer.row_conv

The row convolution is called lookahead convolution. It was first introduced in the paper Deep Speech 2: End-to-End Speech Recognition in English and Mandarin.

A bidirectional RNN learns a representation for a sequence by performing a forward and a backward pass through the entire sequence. However, unlike unidirectional RNNs, bidirectional RNNs are challenging to deploy in an online, low-latency setting. The lookahead convolution incorporates information from future subsequences in a computationally efficient manner to improve unidirectional RNNs.

The connection of row convolution is different from the 1D sequence convolution. Assume that the future context length is k, that is to say, the output at timestep t is computed from the input features from the t-th timestep to the (t+k)-th timestep. Assume that the hidden dimension of the input activations is d; the activations r_t of the new layer at timestep t are:

\[r_{t,i} = \sum_{j=1}^{k + 1} {w_{i,j}h_{t+j-1, i}} \quad \text{for} \quad (1 \leq i \leq d)\]

Note

The context_len is k + 1. That is to say, the lookahead step number plus one equals context_len.

row_conv = row_conv(input=input, context_len=3)
Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • context_len (int) – The context length equals the lookahead step number plus one.
  • act (paddle.v2.activation.Base) – Activation Type. paddle.v2.activation.Linear is the default activation.
  • param_attr (paddle.v2.attr.ParameterAttribute) – The parameter attribute. See paddle.v2.attr.ParameterAttribute for details.
  • layer_attr (paddle.v2.attr.ExtraAttribute | None) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

Image Pooling Layer

img_pool

class paddle.v2.layer.img_pool

Image pooling Layer.

The details of pooling layer, please refer to ufldl’s pooling .

  • ceil_mode=True:
\[ \begin{align}\begin{aligned}w & = 1 + \left\lceil \frac{input\_width + 2 * padding - pool\_size}{stride} \right\rceil\\h & = 1 + \left\lceil \frac{input\_height + 2 * padding\_y - pool\_size\_y}{stride\_y} \right\rceil\end{aligned}\end{align} \]
  • ceil_mode=False:
\[ \begin{align}\begin{aligned}w & = 1 + \left\lfloor \frac{input\_width + 2 * padding - pool\_size}{stride} \right\rfloor\\h & = 1 + \left\lfloor \frac{input\_height + 2 * padding\_y - pool\_size\_y}{stride\_y} \right\rfloor\end{aligned}\end{align} \]
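
For instance, with input_width = 8, padding = 0, pool_size = 3 and stride = 2, ceil_mode=True gives \(w = 1 + \lceil 5 / 2 \rceil = 4\), while ceil_mode=False gives \(w = 1 + \lfloor 5 / 2 \rfloor = 3\).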

The example usage is:

maxpool = img_pool(input=conv,
                   pool_size=3,
                   pool_size_y=5,
                   num_channels=8,
                   stride=1,
                   stride_y=2,
                   padding=1,
                   padding_y=2,
                   pool_type=MaxPooling())
Parameters:
  • padding (int) – The padding size on the x axis. 0 is the default padding size.
  • padding_y – The padding size on the y axis. If the parameter is not set or set to None, it will be set to ‘padding’ automatically.
  • name (basestring) – The name of this layer. It is optional.
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • pool_size (int) – The pooling window length on the x axis.
  • pool_size_y (int) – The pooling window length on the y axis. If the parameter is not set or set to None, its actual value will be automatically set to pool_size.
  • num_channels (int) – The number of input channels. If the parameter is not set or set to None, its actual value will be automatically set to the channels number of the input.
  • pool_type (BasePoolingType) – Pooling type. MaxPooling is the default pooling.
  • stride (int) – The stride on the x axis. 1 is the default value.
  • stride_y (int) – The stride on the y axis. If the parameter is not set or set to None, its actual value will be automatically set to ‘stride’.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
  • ceil_mode (bool) – Whether to use the ceil function to calculate output height and width. True is the default. If it is set to False, the floor function will be used.
  • exclude_mode (bool) – Whether to exclude the padding cells when calculating the average; it only works when pool_type is AvgPooling. If it is not set or set to None, the padding cells are also excluded. When using cuDNN, use CudnnAvgPooling or CudnnAvgInclPadPooling as pool_type to identify the mode.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

spp

class paddle.v2.layer.spp

A layer that performs spatial pyramid pooling.

Reference:
Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition

The example usage is:

spp = spp(input=data,
          pyramid_height=2,
          num_channels=16,
          pool_type=MaxPooling())
Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • num_channels (int) – The number of input channels. If the parameter is not set or set to None, its actual value will be automatically set to the channels number of the input.
  • pool_type – Pooling type. MaxPooling is the default pooling.
  • pyramid_height (int) – The pyramid height of this pooling.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

maxout

class paddle.v2.layer.maxout
A layer that performs maxout on the output of a convolutional layer.
  • Input: the output of a convolutional layer.
  • Output: feature maps of the same size as the input's, with a channel number of (input channels) / groups.

So groups should be larger than 1, and the number of input channels should be divisible by groups.

Reference:
Maxout Networks
Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks
\[ \begin{align}\begin{aligned}& out = \max_k (in[n, k, o_c , s])\\& out_{i * s + j} = \max_k in_{ k * o_{c} * s + i * s + j}\\& s = \frac{input.size}{ num\_channels}\\& o_{c} = \frac{num\_channels}{groups}\\& 0 \le i < o_{c}\\& 0 \le j < s\\& 0 \le k < groups\end{aligned}\end{align} \]

The simple usage is:

maxout = maxout(input,
                num_channels=128,
                groups=4)
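
In this example, the 128 input channels are divided into 4 groups, so the output has \(128 / 4 = 32\) channels, while the size of each feature map stays the same as the input's.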
Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • num_channels (int) – The number of input channels. If the parameter is not set or set to None, its actual value will be automatically set to the channels number of the input.
  • groups (int) – The group number of input layer.
  • name (basestring) – The name of this layer. It is optional.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

roi_pool

class paddle.v2.layer.roi_pool

A layer used by Fast R-CNN to extract feature maps of ROIs from the last feature map.

Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • input (paddle.v2.config_base.Layer.) – The input layer.
  • rois (paddle.v2.config_base.Layer.) – The input ROIs’ data.
  • pooled_width (int) – The width after pooling.
  • pooled_height (int) – The height after pooling.
  • spatial_scale (float) – The spatial scale between the image and feature map.
  • num_channels (int) – The number of the input channels.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

pad

class paddle.v2.layer.pad

This operation pads zeros to the input data according to pad_c, pad_h and pad_w, which specify the padding sizes in the channel, height and width dimensions respectively. The input data shape is NCHW.

For example, pad_c=[2,3] means padding 2 zeros before the input data and 3 zeros after the input data in the channel dimension. pad_h means padding zeros in the height dimension. pad_w means padding zeros in the width dimension.

For example,

input(2,2,2,3)  = [
                    [ [[1,2,3], [3,4,5]],
                      [[2,3,5], [1,6,7]] ],
                    [ [[4,3,1], [1,8,7]],
                      [[3,8,9], [2,3,5]] ]
                  ]

pad_c=[1,1], pad_h=[0,0], pad_w=[0,0]

output(2,4,2,3) = [
                    [ [[0,0,0], [0,0,0]],
                      [[1,2,3], [3,4,5]],
                      [[2,3,5], [1,6,7]],
                      [[0,0,0], [0,0,0]] ],
                    [ [[0,0,0], [0,0,0]],
                      [[4,3,1], [1,8,7]],
                      [[3,8,9], [2,3,5]],
                      [[0,0,0], [0,0,0]] ]
                  ]

The simple usage is:

pad = pad(input=ipt,
          pad_c=[4,4],
          pad_h=[0,0],
          pad_w=[2,2])
Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • pad_c (list | None) – The padding size in the channel dimension.
  • pad_h (list | None) – The padding size in the height dimension.
  • pad_w (list | None) – The padding size in the width dimension.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
  • name (basestring) – The name of this layer. It is optional.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

Norm Layer

img_cmrnorm

class paddle.v2.layer.img_cmrnorm

Response normalization across feature maps.

Reference:
ImageNet Classification with Deep Convolutional Neural Networks

The example usage is:

norm = img_cmrnorm(input=net, size=5)
Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • size (int) – Normalization is performed across \(size\) neighboring feature maps.
  • scale (float) – The hyper-parameter.
  • power (float) – The hyper-parameter.
  • num_channels – The number of input channels. If the parameter is not set or set to None, its actual value will be automatically set to the channels number of the input.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attributes. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

batch_norm

class paddle.v2.layer.batch_norm

Batch Normalization Layer. The notation of this layer is as follows.

\(x\) is the input features over a mini-batch.

\[\begin{split}\mu_{\beta} &\gets \frac{1}{m} \sum_{i=1}^{m} x_i \qquad &//\ \ mini-batch\ mean \\ \sigma_{\beta}^{2} &\gets \frac{1}{m} \sum_{i=1}^{m}(x_i - \ \mu_{\beta})^2 \qquad &//\ mini-batch\ variance \\ \hat{x_i} &\gets \frac{x_i - \mu_\beta} {\sqrt{\ \sigma_{\beta}^{2} + \epsilon}} \qquad &//\ normalize \\ y_i &\gets \gamma \hat{x_i} + \beta \qquad &//\ scale\ and\ shift\end{split}\]
Reference:
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

The example usage is:

norm = batch_norm(input=net, act=paddle.v2.activation.Relu())
Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • input (paddle.v2.config_base.Layer) – This layer’s input which is to be performed batch normalization on.
  • batch_norm_type (None | string, None or "batch_norm" or "cudnn_batch_norm" or "mkldnn_batch_norm") – We have batch_norm, mkldnn_batch_norm and cudnn_batch_norm. batch_norm supports CPU, MKLDNN and GPU. cudnn_batch_norm requires cuDNN version v4 or later (>=v4), but it is faster and needs less memory than batch_norm. mkldnn_batch_norm requires use_mkldnn to be enabled. By default (None), cudnn_batch_norm is automatically selected for GPU, mkldnn_batch_norm for MKLDNN and batch_norm for CPU. Users can also specify the batch norm type explicitly. If you use cudnn_batch_norm, we suggest you use the latest version, such as v5.1.
  • act (paddle.v2.activation.Base) – Activation type. paddle.v2.activation.Relu is the default activation.
  • num_channels (int) – The number of input channels. If the parameter is not set or set to None, its actual value will be automatically set to the channels number of the input.
  • bias_attr (paddle.v2.attr.ParameterAttribute | None | bool | Any) – \(\beta\). The bias attribute. If the parameter is set to False or an object whose type is not paddle.v2.attr.ParameterAttribute, no bias is defined. If the parameter is set to True, the bias is initialized to zero.
  • param_attr (paddle.v2.attr.ParameterAttribute) – \(\gamma\). The parameter attribute. See paddle.v2.attr.ParameterAttribute for details.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
  • use_global_stats (bool | None.) – Whether to use moving mean/variance statistics during the testing period. If the parameter is set to None or True, it will use moving mean/variance statistics during testing. If the parameter is set to False, it will use the mean and variance of the current batch of test data.
  • epsilon (float.) – The small constant added to the variance to improve numeric stability.
  • moving_average_fraction (float.) – Factor used in the moving average computation. \(runningMean = newMean*(1-factor) + runningMean*factor\)
  • mean_var_names (string list) – [mean name, variance name]
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

sum_to_one_norm

class paddle.v2.layer.sum_to_one_norm

A layer for sum-to-one normalization, which is used in the Neural Turing Machine.

\[out[i] = \frac {in[i]} {\sum_{k=1}^N in[k]}\]

where \(in\) is a (batchSize x dataDim) input vector, and \(out\) is a (batchSize x dataDim) output vector.

The example usage is:

sum_to_one_norm = sum_to_one_norm(input=layer)
Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • name (basestring) – The name of this layer. It is optional.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

cross_channel_norm

class paddle.v2.layer.cross_channel_norm

Normalize a layer’s output. This layer is necessary for SSD. It applies normalization across the channels of each sample of a convolutional layer’s output and scales the output by a group of trainable factors whose dimension equals the number of channels.

Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • param_attr (paddle.v2.attr.ParameterAttribute) – The parameter attribute. See paddle.v2.attr.ParameterAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

row_l2_norm

class paddle.v2.layer.row_l2_norm

A layer for L2-normalization in each row.

\[out[i] = \frac{in[i]} {\sqrt{\sum_{k=1}^N in[k]^{2}}}\]

where the size of \(in\) is (batchSize x dataDim), and the size of \(out\) is (batchSize x dataDim).

The example usage is:

row_l2_norm = row_l2_norm(input=layer)
Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • name (basestring) – The name of this layer. It is optional.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

Recurrent Layers

recurrent

class paddle.v2.layer.recurrent

Simple recurrent unit layer. It is essentially a fully connected layer applied recurrently through time.

For each sequence [start, end] it performs the following computation:

\[\begin{split}out_{i} = act(in_{i}) \ \ \text{for} \ i = start \\ out_{i} = act(in_{i} + out_{i-1} * W) \ \ \text{for} \ start < i <= end\end{split}\]

If the reverse parameter is set to True, the order is reversed:

\[\begin{split}out_{i} = act(in_{i}) \ \ \text{for} \ i = end \\ out_{i} = act(in_{i} + out_{i+1} * W) \ \ \text{for} \ start <= i < end\end{split}\]
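
A minimal usage sketch (the input layer name is illustrative; its dimension is assumed to equal this layer's size):

rec = recurrent(input=projected_input,  # projected_input is illustrative
                act=paddle.v2.activation.Tanh())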
Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • act (paddle.v2.activation.Base) – Activation type. paddle.v2.activation.Tanh is the default activation.
  • bias_attr (paddle.v2.attr.ParameterAttribute | None | bool | Any) – The parameter attribute for bias. If this parameter is set to False or an object whose type is not paddle.v2.attr.ParameterAttribute, no bias is defined. If the parameter is set to True, the bias is initialized to zero.
  • param_attr (paddle.v2.attr.ParameterAttribute) – The parameter attribute. See paddle.v2.attr.ParameterAttribute for details.
  • name (basestring) – The name of this layer. It is optional.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

lstmemory

class paddle.v2.layer.lstmemory

Long Short-term Memory Cell.

The memory cell is implemented by the following equations.

\[ \begin{align}\begin{aligned}i_t & = \sigma(W_{xi}x_{t} + W_{hi}h_{t-1} + W_{ci}c_{t-1} + b_i)\\f_t & = \sigma(W_{xf}x_{t} + W_{hf}h_{t-1} + W_{cf}c_{t-1} + b_f)\\c_t & = f_tc_{t-1} + i_t tanh (W_{xc}x_t+W_{hc}h_{t-1} + b_c)\\o_t & = \sigma(W_{xo}x_{t} + W_{ho}h_{t-1} + W_{co}c_t + b_o)\\h_t & = o_t tanh(c_t)\end{aligned}\end{align} \]

NOTE: In PaddlePaddle’s implementation, the multiplications \(W_{xi}x_{t}\) , \(W_{xf}x_{t}\), \(W_{xc}x_t\), \(W_{xo}x_{t}\) are not done in the lstmemory layer, so an additional mixed with full_matrix_projection or a fc must be included in the configuration file to complete the input-to-hidden mappings before lstmemory is called.

NOTE: This is a low level user interface. You can use network.simple_lstm to configure a simple plain LSTM layer.

Reference:
Generating Sequences With Recurrent Neural Networks
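
As noted above, the input-to-hidden mapping must be configured separately. A hedged sketch (emb and hidden_dim are illustrative):

lstm_input = fc(input=emb, size=hidden_dim * 4,  # emb and hidden_dim are illustrative
                act=paddle.v2.activation.Linear(), bias_attr=False)
lstm = lstmemory(input=lstm_input)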
Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • size (int) – DEPRECATED. The dimension of the lstm cell.
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • reverse (bool) – Whether the input sequence is processed in a reverse order.
  • act (paddle.v2.activation.Base) – Activation type. paddle.v2.activation.Tanh is the default activation.
  • gate_act (paddle.v2.activation.Base) – Activation type of this layer’s gates. paddle.v2.activation.Sigmoid is the default activation.
  • state_act (paddle.v2.activation.Base) – Activation type of the state. paddle.v2.activation.Tanh is the default activation.
  • bias_attr (paddle.v2.attr.ParameterAttribute | None | bool | Any) – The bias attribute. If the parameter is set to False or an object whose type is not paddle.v2.attr.ParameterAttribute, no bias is defined. If the parameter is set to True, the bias is initialized to zero.
  • param_attr (paddle.v2.attr.ParameterAttribute) – The parameter attribute. See paddle.v2.attr.ParameterAttribute for details.
  • layer_attr (paddle.v2.attr.ExtraAttribute | None) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

grumemory

class paddle.v2.layer.grumemory

Gated Recurrent Unit Layer.

The memory cell is implemented by the following equations.

1. update gate \(z\): defines how much of the previous memory to keep around, i.e., how much the unit updates its activations. The update gate is computed by:

\[z_t = \sigma(W_{z}x_{t} + U_{z}h_{t-1} + b_z)\]

2. reset gate \(r\): determines how to combine the new input with the previous memory. The reset gate is computed similarly to the update gate:

\[r_t = \sigma(W_{r}x_{t} + U_{r}h_{t-1} + b_r)\]

3. The candidate activation \(\tilde{h_t}\) is computed similarly to that of the traditional recurrent unit:

\[{\tilde{h_t}} = tanh(W x_{t} + U (r_{t} \odot h_{t-1}) + b)\]

4. The hidden activation \(h_t\) of the GRU at time t is a linear interpolation between the previous activation \(h_{t-1}\) and the candidate activation \(\tilde{h_t}\):

\[h_t = (1 - z_t) h_{t-1} + z_t {\tilde{h_t}}\]

NOTE: In PaddlePaddle’s implementation, the multiplication operations \(W_{r}x_{t}\), \(W_{z}x_{t}\) and \(W x_t\) are not performed in gate_recurrent layer. Consequently, an additional mixed with full_matrix_projection or a fc must be included before grumemory is called.

Reference:
Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling

The simple usage is:

gru = grumemory(input)
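
As noted above, the input projection must also be configured separately. A hedged sketch (emb and hidden_dim are illustrative):

gru_input = fc(input=emb, size=hidden_dim * 3,  # emb and hidden_dim are illustrative
               act=paddle.v2.activation.Linear(), bias_attr=False)
gru = grumemory(input=gru_input)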
Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • input (paddle.v2.config_base.Layer.) – The input of this layer.
  • size (int) – DEPRECATED. The dimension of the gru cell.
  • reverse (bool) – Whether the input sequence is processed in a reverse order.
  • act (paddle.v2.activation.Base) – Activation type, paddle.v2.activation.Tanh is the default. This activation affects the \({\tilde{h_t}}\).
  • gate_act (paddle.v2.activation.Base) – Activation type of this layer’s two gates. paddle.v2.activation.Sigmoid is the default activation. This activation affects the \(z_t\) and \(r_t\). It is the \(\sigma\) in the above formula.
  • bias_attr (paddle.v2.attr.ParameterAttribute | None | bool | Any) – The bias attribute. If the parameter is set to False or an object whose type is not paddle.v2.attr.ParameterAttribute, no bias is defined. If the parameter is set to True, the bias is initialized to zero.
  • param_attr (paddle.v2.attr.ParameterAttribute) – The parameter attribute. See paddle.v2.attr.ParameterAttribute for details.
  • layer_attr (paddle.v2.attr.ExtraAttribute | None) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

gated_unit

class paddle.v2.layer.gated_unit

The gated unit layer implements a simple gating mechanism over the input. The input \(X\) is first projected into a new space \(X'\), and it is also used to produce a gate weight \(\sigma\). Element-wise product between \(X'\) and \(\sigma\) is finally returned.

Reference:
Language Modeling with Gated Convolutional Networks
\[y=\text{act}(X \cdot W + b)\otimes \sigma(X \cdot V + c)\]

The example usage is (a sketch; the input layer, size and activation below are illustrative):
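
gated = gated_unit(input=layer,  # 'layer', the size and the activation are illustrative
                   size=512,
                   act=paddle.v2.activation.Tanh())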

Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • size (int) – The dimension of this layer’s output.
  • act (paddle.v2.activation.Base) – Activation type of the projection. paddle.v2.activation.Linear is the default activation.
  • name (basestring) – The name of this layer. It is optional.
  • gate_attr (paddle.v2.attr.ExtraAttribute | None) – The extra layer attribute of the gate. See paddle.v2.attr.ExtraAttribute for details.
  • gate_param_attr (paddle.v2.attr.ParameterAttribute) – The parameter attribute of the gate. See paddle.v2.attr.ParameterAttribute for details.
  • gate_bias_attr (paddle.v2.attr.ParameterAttribute | bool | None | Any) – The bias attribute of the gate. If this parameter is set to False or an object whose type is not paddle.v2.attr.ParameterAttribute, no bias is defined. If this parameter is set to True, the bias is initialized to zero.
  • inproj_attr (paddle.v2.attr.ExtraAttribute | None) – Extra layer attributes of the projection. See paddle.v2.attr.ExtraAttribute for details.
  • inproj_param_attr (paddle.v2.attr.ParameterAttribute) – The parameter attribute of the projection. See paddle.v2.attr.ParameterAttribute for details.
  • inproj_bias_attr (paddle.v2.attr.ParameterAttribute | bool | None | Any) – The bias attribute of the projection. If this parameter is set to False or an object whose type is not paddle.v2.attr.ParameterAttribute, no bias is defined. If this parameter is set to True, the bias is initialized to zero.
  • layer_attr (paddle.v2.attr.ExtraAttribute | None) – Extra layer attribute of the product. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

Recurrent Layer Group

memory

class paddle.v2.layer.memory

The memory takes a layer’s output at the previous time step as its own output.

If boot_bias is set, the activation of the bias is the initial value of the memory.

If boot_with_const_id is set, then the memory’s output at the first time step is an IndexSlot, and Arguments.ids()[0] is this const_id.

If boot is specified, the memory’s output at the first time step will be the boot’s output.

Otherwise, the memory’s output at the first time step is zero by default.

mem = memory(size=256, name='state')
state = fc(input=mem, size=256, name='state')

If you do not want to specify the name, you can also use set_input() to specify the layer to be remembered as the following:

mem = memory(size=256)
state = fc(input=mem, size=256)
mem.set_input(state)
Parameters:
  • name (basestring) – The name of the layer which this memory remembers. If name is None, user should call set_input() to specify the name of the layer which this memory remembers.
  • size (int) – The dimensionality of memory.
  • memory_name (basestring) – The name of the memory. It is ignored when name is provided.
  • is_seq (bool) – DEPRECATED. is sequence for boot
  • boot (paddle.v2.config_base.Layer | None) – This parameter specifies memory’s output at the first time step and the output is boot’s output.
  • boot_bias (paddle.v2.attr.ParameterAttribute | None) – The bias attribute of memory’s output at the first time step. If the parameter is set to False or an object whose type is not paddle.v2.attr.ParameterAttribute, no bias is defined. If the parameter is set to True, the bias is initialized to zero.
  • boot_bias_active_type (paddle.v2.activation.Base) – Activation type for memory’s bias at the first time step. paddle.v2.activation.Linear is the default activation.
  • boot_with_const_id (int) – This parameter specifies memory’s output at the first time step and the output is an index.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

recurrent_group

class paddle.v2.layer.recurrent_group

Recurrent layer group is an extremely flexible recurrent unit in PaddlePaddle. As long as the user defines the calculation done within a time step, PaddlePaddle will iterate such a recurrent calculation over sequence input. This is useful for attention-based models, or Neural Turing Machine like models.

The basic usage (time steps) is:

def step(input):
    output = fc(input=input,
                size=1024,
                act=paddle.v2.activation.Linear(),
                bias_attr=False)
    return output

group = recurrent_group(input=layer,
                        step=step)

See the following configs for further usage:

  • time steps: lstmemory_group, paddle/gserver/tests/sequence_group.conf, demo/seqToseq/seqToseq_net.py
  • sequence steps: paddle/gserver/tests/sequence_nest_group.conf
Parameters:
  • step (callable) –

    A step function which takes the input of recurrent_group as its own input and returns values as recurrent_group’s output every time step.

    The recurrent group scatters a sequence into time steps. For each time step, it invokes the step function and gets that time step's result. The outputs of all time steps are then gathered to form the layer group's output.

  • name (basestring) – The recurrent_group’s name. It is optional.
  • input (paddle.v2.config_base.Layer | StaticInput | SubsequenceInput | list | tuple) –

    Input links array.

    paddle.v2.config_base.Layer will be scattered into time steps. SubsequenceInput will be scattered into sequence steps. StaticInput will be imported to each time step, and doesn’t change over time. It’s a mechanism to access layer outside step function.

  • reverse (bool) – If reverse is set to True, the recurrent unit will process the input sequence in a reverse order.
  • targetInlink (paddle.v2.config_base.Layer | SubsequenceInput) –

    DEPRECATED. The input layer which shares information with the layer group's output.

    The param input specifies multiple input layers. For SubsequenceInput inputs, the config should assign one input layer that shares information (the number of sentences and the number of words in each sentence) with all of the layer group's outputs. targetInlink should be one of the layer group's inputs.

Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

lstm_step

class paddle.v2.layer.lstm_step

LSTM Step Layer. This function is used only in recurrent_group. The lstm equations are shown as follows.

\[ \begin{align}\begin{aligned}i_t & = \sigma(W_{x_i}x_{t} + W_{h_i}h_{t-1} + W_{c_i}c_{t-1} + b_i)\\f_t & = \sigma(W_{x_f}x_{t} + W_{h_f}h_{t-1} + W_{c_f}c_{t-1} + b_f)\\c_t & = f_tc_{t-1} + i_t tanh (W_{x_c}x_t+W_{h_c}h_{t-1} + b_c)\\o_t & = \sigma(W_{x_o}x_{t} + W_{h_o}h_{t-1} + W_{c_o}c_t + b_o)\\h_t & = o_t tanh(c_t)\end{aligned}\end{align} \]

The input of lstm step is \(Wx_t + Wh_{t-1}\), and user should use mixed and full_matrix_projection to calculate these input vectors.

The state of lstm step is \(c_{t-1}\). And lstm step layer will do

\[ \begin{align}\begin{aligned}i_t = \sigma(input + W_{ci}c_{t-1} + b_i)\\...\end{aligned}\end{align} \]

This layer has two outputs. The default output is \(h_t\). The other output is \(o_t\), whose name is ‘state’ and users can use get_output to extract this output.

Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • size (int) – The dimension of this layer’s output, which must be equal to the dimension of the state.
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • state (paddle.v2.config_base.Layer) – The state of the LSTM unit.
  • act (paddle.v2.activation.Base) – Activation type. paddle.v2.activation.Tanh is the default activation.
  • gate_act (paddle.v2.activation.Base) – Activation type of the gate. paddle.v2.activation.Sigmoid is the default activation.
  • state_act (paddle.v2.activation.Base) – Activation type of the state. paddle.v2.activation.Tanh is the default activation.
  • bias_attr (paddle.v2.attr.ParameterAttribute | None | bool | Any) – The bias attribute. If the parameter is set to False or an object whose type is not paddle.v2.attr.ParameterAttribute, no bias is defined. If the parameter is set to True, the bias is initialized to zero.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

gru_step

class paddle.v2.layer.gru_step

The gru step layer, which is used only in recurrent_group (analogous to lstm_step).

Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer, whose dimension must be divisible by 3.
  • output_mem (paddle.v2.config_base.Layer) – A memory which memorizes the output of this layer at previous time step.
  • size (int) – The dimension of this layer’s output. If it is not set or set to None, it will be set to one-third of the dimension of the input automatically.
  • act (paddle.v2.activation.Base) – Activation type of this layer’s output. paddle.v2.activation.Tanh is the default activation.
  • name (basestring) – The name of this layer. It is optional.
  • gate_act (paddle.v2.activation.Base) – Activation type of this layer’s two gates. paddle.v2.activation.Sigmoid is the default activation.
  • bias_attr (paddle.v2.attr.ParameterAttribute | None | bool | Any) – The parameter attribute for bias. If this parameter is set to False or an object whose type is not paddle.v2.attr.ParameterAttribute, no bias is defined. If this parameter is set to True, the bias is initialized to zero.
  • param_attr (paddle.v2.attr.ParameterAttribute) – The parameter attribute. See paddle.v2.attr.ParameterAttribute for details.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

get_output

class paddle.v2.layer.get_output

Get a layer’s output by name. In PaddlePaddle, a layer might have multiple outputs, but only one of them is returned by default. If the user wants to use another output besides the default one, please use get_output first to extract that output from the input layer.
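
A hedged sketch (lstm_out is assumed to be a layer such as the lstm_step output above, whose secondary output is named 'state'):

cell_state = get_output(input=lstm_out, arg_name='state')  # lstm_out is illustrative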

Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • input (paddle.v2.config_base.Layer) – The input layer. And this layer should contain multiple outputs.
  • arg_name (basestring) – The name of the output to be extracted from the input layer.
  • layer_attr – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

Mixed Layer

mixed

class paddle.v2.layer.mixed

Mixed Layer. A mixed layer will add all inputs together, then activate the sum. Each input is a projection or operator.

There are two styles of usages.

  1. When the parameter input is not set, use mixed like this:
with mixed(size=256) as m:
    m += full_matrix_projection(input=layer1)
    m += identity_projection(input=layer2)
  2. You can also set all inputs when invoking mixed as follows:
m = mixed(size=256,
          input=[full_matrix_projection(input=layer1),
                 full_matrix_projection(input=layer2)])
Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • size (int) – The dimension of this layer.
  • input – The input of this layer. It is an optional parameter.
  • act (paddle.v2.activation.Base) – Activation Type. paddle.v2.activation.Linear is the default activation.
  • bias_attr (paddle.v2.attr.ParameterAttribute | None | bool | Any) – The bias attribute. If the parameter is set to False or an object whose type is not paddle.v2.attr.ParameterAttribute, no bias is defined. If the parameter is set to True, the bias is initialized to zero.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

MixedLayerType object.

Return type:

MixedLayerType

embedding

class paddle.v2.layer.embedding

Define an embedding layer.

Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • input (paddle.v2.config_base.Layer) – The input of this layer, whose type must be Index Data.
  • size (int) – The dimension of the embedding vector.
  • param_attr (paddle.v2.attr.ParameterAttribute) – The embedding parameter attribute. See paddle.v2.attr.ParameterAttribute for details.
  • layer_attr (paddle.v2.attr.ExtraAttribute | None) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

scaling_projection

class paddle.v2.layer.scaling_projection

scaling_projection multiplies the input with a scalar parameter.

\[out += w * in\]

The example usage is:

proj = scaling_projection(input=layer)
Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • param_attr (paddle.v2.attr.ParameterAttribute) – The parameter attribute. See paddle.v2.attr.ParameterAttribute for details.
Returns:

ScalingProjection object.

Return type:

ScalingProjection

dotmul_projection

class paddle.v2.layer.dotmul_projection

DotMulProjection takes a layer as input and performs element-wise multiplication with weight.

\[out.row[i] += in.row[i] .* weight\]

where \(.*\) means element-wise multiplication.

The example usage is:

proj = dotmul_projection(input=layer)
Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • param_attr (paddle.v2.attr.ParameterAttribute) – The parameter attribute. See paddle.v2.attr.ParameterAttribute for details.
Returns:

DotMulProjection object.

Return type:

DotMulProjection

dotmul_operator

class paddle.v2.layer.dotmul_operator

DotMulOperator takes two inputs and performs element-wise multiplication:

\[out.row[i] += scale * (a.row[i] .* b.row[i])\]

where \(.*\) means element-wise multiplication, and scale is a config scalar, its default value is 1.

The example usage is:

op = dotmul_operator(a=layer1, b=layer2, scale=0.5)
Parameters:
  • a (paddle.v2.config_base.Layer) – The first input of this layer.
  • b (paddle.v2.config_base.Layer) – The second input of this layer.
  • scale (float) – A scalar to scale the product. Its default value is 1.
Returns:

DotMulOperator object.

Return type:

DotMulOperator

full_matrix_projection

class paddle.v2.layer.full_matrix_projection

Full Matrix Projection. It performs full matrix multiplication.

\[out.row[i] += in.row[i] * weight\]

There are two styles of usage.

  1. When used in mixed like this, you can only set the input:
with mixed(size=100) as m:
    m += full_matrix_projection(input=layer)
  2. When used as an independent object like this, you must set the size:
proj = full_matrix_projection(input=layer,
                              size=100,
                              param_attr=ParamAttr(name='_proj'))
Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • size (int) – The dimension of this layer.
  • param_attr (paddle.v2.attr.ParameterAttribute) – The parameter attribute. See paddle.v2.attr.ParameterAttribute for details.
Returns:

FullMatrixProjection Object.

Return type:

FullMatrixProjection

identity_projection

class paddle.v2.layer.identity_projection
  1. If offset=None, it performs IdentityProjection as follows:
\[out.row[i] += in.row[i]\]

The example usage is:

proj = identity_projection(input=layer)
  2. If offset!=None, it performs IdentityOffsetProjection and takes the elements of the input in the range [offset, offset+size) as output.
\[out.row[i] += in.row[i + \textrm{offset}]\]

The example usage is:

proj = identity_projection(input=layer,
                           offset=10)

Note that neither of the projections has trainable parameters.

Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • offset (int) – The offset from the start of the input. The input’s elements in the range [offset, offset+size) will be taken as output. If this parameter is not set or set to None, the output will be the same as the input.
  • size (int) – The dimension of this layer. It will be neglected when offset is None or not set.
Returns:

IdentityProjection or IdentityOffsetProjection object

Return type:

IdentityProjection | IdentityOffsetProjection

slice_projection

class paddle.v2.layer.slice_projection

slice_projection slices the input value into multiple parts, then selects and merges some of them into a new output.

\[output = [input.slices()]\]

The example usage is:

proj = slice_projection(input=layer, slices=[(0, 10), (20, 30)])

Note that slice_projection has no trainable parameter.

Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • slices (list of tuple) – A list of start and end offsets of each slice.
Returns:

SliceProjection object.

Return type:

SliceProjection

table_projection

class paddle.v2.layer.table_projection

Table Projection. It selects rows from the parameter table whose row_id appears in input_ids.

\[out.row[i] += table.row[ids[i]]\]

where \(out\) is output, \(table\) is parameter, \(ids\) is input_ids, and \(i\) is row_id.

There are two styles of usage.

  1. When used in mixed like this, you can only set the input:
with mixed(size=100) as m:
    m += table_projection(input=layer)
  2. When used as an independent object like this, you must set the size:
proj = table_projection(input=layer,
                        size=100,
                        param_attr=ParamAttr(name='_proj'))
Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer, which must contain id fields.
  • size (int) – The dimension of the output.
  • param_attr (paddle.v2.attr.ParameterAttribute) – The parameter attribute. See paddle.v2.attr.ParameterAttribute for details.
Returns:

TableProjection Object.

Return type:

TableProjection

trans_full_matrix_projection

class paddle.v2.layer.trans_full_matrix_projection

Different from full_matrix_projection, this projection performs matrix multiplication, using the transpose of weight.

\[out.row[i] += in.row[i] * w^\mathrm{T}\]

\(w^\mathrm{T}\) means the transpose of the weight. The simple usage is:

proj = trans_full_matrix_projection(input=layer,
                                    size=100,
                                    param_attr=ParamAttr(
                                         name='_proj',
                                         initial_mean=0.0,
                                         initial_std=0.01))
Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • size (int) – The parameter size, which means the width of the parameter.
  • param_attr (paddle.v2.attr.ParameterAttribute) – The parameter attribute. See paddle.v2.attr.ParameterAttribute for details.
Returns:

TransposedFullMatrixProjection Object.

Return type:

TransposedFullMatrixProjection

Aggregate Layers

AggregateLevel

class paddle.v2.layer.AggregateLevel

PaddlePaddle supports three sequence types:

  • SequenceType.NO_SEQUENCE means the sample is not a sequence.
  • SequenceType.SEQUENCE means the sample is a sequence.
  • SequenceType.SUB_SEQUENCE means the sample is a nested sequence, each timestep of which is also a sequence.

Accordingly, AggregateLevel supports two modes:

  • AggregateLevel.TO_NO_SEQUENCE means the aggregation acts on each timestep of a sequence, both SUB_SEQUENCE and SEQUENCE will be aggregated to NO_SEQUENCE.
  • AggregateLevel.TO_SEQUENCE means the aggregation acts on each sequence of a nested sequence, SUB_SEQUENCE will be aggregated to SEQUENCE.
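
A hedged sketch of the two modes, using the pooling layer described below (nested_seq is an illustrative nested-sequence input):

inner_avg = pooling(input=nested_seq,  # SUB_SEQUENCE -> SEQUENCE
                    pooling_type=AvgPooling(),
                    agg_level=AggregateLevel.TO_SEQUENCE)
overall_avg = pooling(input=inner_avg,  # SEQUENCE -> NO_SEQUENCE
                      pooling_type=AvgPooling(),
                      agg_level=AggregateLevel.TO_NO_SEQUENCE)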

pooling

class paddle.v2.layer.pooling

Pooling layer for sequence inputs; it is not used for images.

If stride > 0, this layer slides a window whose size is determined by stride, and returns the pooling value of the sequence in the window as the output. Thus, a long sequence will be shortened. Note that for sequence with sub-sequence, the default value of stride is -1.

The example usage is:

seq_pool = pooling(input=layer,
                         pooling_type=AvgPooling(),
                         agg_level=AggregateLevel.TO_NO_SEQUENCE)
Parameters:
  • agg_level (AggregateLevel) – AggregateLevel.TO_NO_SEQUENCE or AggregateLevel.TO_SEQUENCE
  • name (basestring) – The name of this layer. It is optional.
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • pooling_type (BasePoolingType | None) – Type of pooling. MaxPooling is the default pooling.
  • stride (int) – The step size between successive pooling regions.
  • bias_attr (paddle.v2.attr.ParameterAttribute | None | bool | Any) – The bias attribute. If the parameter is set to False or an object whose type is not paddle.v2.attr.ParameterAttribute, no bias is defined. If the parameter is set to True, the bias is initialized to zero.
  • layer_attr (paddle.v2.attr.ExtraAttribute | None) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

last_seq

class paddle.v2.layer.last_seq

Get the last timestep's activation of a sequence.

If stride > 0, this layer will slide a window whose size is determined by stride, and return the last value of the sequence in the window as the output. Thus, a long sequence will be shortened. Note that for sequence with sub-sequence, the default value of stride is -1.

The simple usage is:

seq = last_seq(input=layer)
Parameters:
  • agg_level (AggregateLevel) – Aggregated level
  • name (basestring) – The name of this layer. It is optional.
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • stride (int) – The step size between successive pooling regions.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

first_seq

class paddle.v2.layer.first_seq

Get the first timestep's activation of a sequence.

If stride > 0, this layer will slide a window whose size is determined by stride, and return the first value of the sequence in the window as the output. Thus, a long sequence will be shortened. Note that for sequence with sub-sequence, the default value of stride is -1.

The simple usage is:

seq = first_seq(input=layer)
Parameters:
  • agg_level (AggregateLevel) – aggregation level
  • name (basestring) – The name of this layer. It is optional.
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • stride (int) – The step size between successive pooling regions.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

sub_seq

class paddle.v2.layer.sub_seq

sub_seq returns sub-sequences from the input sequences. For each sequence in the input sequence layer, sub_seq slices it by the given offset and size. Note that the number of offset values and the number of size values must both equal the number of sequences in the input layer.

sub_seq = sub_seq(input=input_seq, offsets=offsets, sizes=sizes)
Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • input (paddle.v2.config_base.Layer) – The input of this layer, which should be sequence.
  • offsets (paddle.v2.config_base.Layer) – The offset indices to slice the input sequence, which should be sequence type.
  • sizes (paddle.v2.config_base.Layer) – The sizes of the sub-sequences, which should be sequence type.
  • act (paddle.v2.activation.Base) – Activation type. paddle.v2.activation.Linear is the default activation.
  • bias_attr (paddle.v2.attr.ParameterAttribute | None | bool | Any) – The bias attribute. If the parameter is set to False or an object whose type is not paddle.v2.attr.ParameterAttribute, no bias is defined. If the parameter is set to True, the bias is initialized to zero.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

concat

class paddle.v2.layer.concat

Concatenate all input vectors to one vector. Inputs can be a list of paddle.v2.config_base.Layer or a list of projection.

The example usage is:

concat = concat(input=[layer1, layer2])
Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • input (list | tuple | collections.Sequence) – The input layers or projections
  • act (paddle.v2.activation.Base) – Activation type. paddle.v2.activation.Identity is the default activation.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

seq_concat

class paddle.v2.layer.seq_concat

Concatenate sequence a and sequence b.

Inputs:
  • a = [a1, a2, ..., am]
  • b = [b1, b2, ..., bn]

Output: [a1, ..., am, b1, ..., bn]

Note that the above computation is for one sample. Multiple samples are processed in one batch.

The example usage is:

concat = seq_concat(a=layer1, b=layer2)
Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • a (paddle.v2.config_base.Layer) – The first input sequence layer
  • b (paddle.v2.config_base.Layer) – The second input sequence layer
  • act (paddle.v2.activation.Base) – Activation type. paddle.v2.activation.Identity is the default activation.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
  • bias_attr (paddle.v2.attr.ParameterAttribute | None | bool | Any) – The bias attribute. If the parameter is set to False or an object whose type is not paddle.v2.attr.ParameterAttribute, no bias is defined. If the parameter is set to True, the bias is initialized to zero.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

seq_slice

class paddle.v2.layer.seq_slice

seq_slice will return one or several sub-sequences from the input sequence layer given start and end indices.

  • If only start indices are given, and end indices are set to None, this layer slices the input sequence from the given start indices to its end.
  • If only end indices are given, and start indices are set to None, this layer slices the input sequence from its beginning to the given end indices.
  • If start and end indices are both given, they should have the same number of elements.

If the start or end indices contain more than one element, the input sequence will be sliced multiple times.

seq_slice = seq_slice(input=input_seq,
                      starts=start_pos, ends=end_pos)
Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • input (paddle.v2.config_base.Layer) – The input of this layer, which should be a sequence.
  • starts (paddle.v2.config_base.Layer | None) – The start indices to slice the input sequence.
  • ends (paddle.v2.config_base.Layer | None) – The end indices to slice the input sequence.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

kmax_sequence_score

sub_nested_seq

class paddle.v2.layer.sub_nested_seq

The sub_nested_seq layer accepts two inputs: the first one is a nested sequence; the second one is a set of selected indices in the nested sequence.

Then sub_nested_seq trims the first nested sequence input according to the selected indices to form a new output. This layer is useful in beam training.

The example usage is:

sub_nest_seq = sub_nested_seq(input=data, selected_indices=selected_ids)
Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer. It is a nested sequence.
  • selected_indices – A set of sequence indices in the nested sequence.
  • name (basestring) – The name of this layer. It is optional.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

Reshaping Layers

block_expand

class paddle.v2.layer.block_expand
Expand a feature map to a minibatch matrix.
  • matrix width is: block_y * block_x * num_channels
  • matrix height is: outputH * outputW
\[ \begin{align}\begin{aligned}outputH = 1 + (2 * padding_y + imgSizeH - block_y + stride_y - 1) / stride_y\\outputW = 1 + (2 * padding_x + imgSizeW - block_x + stride_x - 1) / stride_x\end{aligned}\end{align} \]

The expanding method is the same as that of ExpandConvLayer, but the transposed value is saved. After expanding, output.sequenceStartPositions will store the timeline. The number of time steps is outputH * outputW and the dimension of each time step is block_y * block_x * num_channels. This layer can be used after a convolutional neural network and before a recurrent neural network.

The simple usage is:

block_expand = block_expand(input=layer,
                                  num_channels=128,
                                  stride_x=1,
                                  stride_y=1,
                                  block_x=1,
                                  block_y=3)
Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • num_channels (int) – The number of input channels. If the parameter is not set or set to None, its actual value will be automatically set to the channels number of the input.
  • block_x (int) – The width of the sub block.
  • block_y (int) – The height of the sub block.
  • stride_x (int) – The stride size in horizontal direction.
  • stride_y (int) – The stride size in vertical direction.
  • padding_x (int) – The padding size in horizontal direction.
  • padding_y (int) – The padding size in vertical direction.
  • name (basestring.) – The name of this layer. It is optional.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

ExpandLevel

class paddle.v2.layer.ExpandLevel

Please refer to AggregateLevel first.

ExpandLevel supports two modes:

  • ExpandLevel.FROM_NO_SEQUENCE means the expansion acts on NO_SEQUENCE, which will be expanded to SEQUENCE or SUB_SEQUENCE.
  • ExpandLevel.FROM_SEQUENCE means the expansion acts on SEQUENCE, which will be expanded to SUB_SEQUENCE.
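
As a minimal sketch (the layer names seq_level_layer and nested_seq_layer are illustrative), the mode is passed to the expand layer documented below:

# expand one vector per sequence into one vector per sub-sequence timestep
expanded = expand(input=seq_level_layer,
                  expand_as=nested_seq_layer,
                  expand_level=ExpandLevel.FROM_SEQUENCE)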

expand

class paddle.v2.layer.expand

A layer for expanding dense data, or sequence data in which each sequence has length one, to sequence data.

The example usage is:

expand = expand(input=layer1,
                      expand_as=layer2,
                      expand_level=ExpandLevel.FROM_NO_SEQUENCE)
Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • expand_as (paddle.v2.config_base.Layer) – Expand the input according to this layer’s sequence information. After the operation, the expanded input will have the same number of elements as this layer.
  • name (basestring) – The name of this layer. It is optional.
  • bias_attr (paddle.v2.attr.ParameterAttribute | None | bool | Any) – The bias attribute. If the parameter is set to False or an object whose type is not paddle.v2.attr.ParameterAttribute, no bias is defined. If the parameter is set to True, the bias is initialized to zero.
  • expand_level (ExpandLevel) – Whether the input layer is a sequence or the element of a sequence.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

repeat

class paddle.v2.layer.repeat

A layer for repeating the input num_repeats times.

If as_row_vector:

\[y = [x_1,\cdots, x_n, \cdots, x_1, \cdots, x_n]\]

If not as_row_vector:

\[y = [x_1,\cdots, x_1, \cdots, x_n, \cdots, x_n]\]

The example usage is:

expand = repeat(input=layer, num_repeats=4)
Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • num_repeats (int) – The times of repeating the input.
  • name (basestring) – The name of this layer. It is optional.
  • as_row_vector (bool) – Whether to treat the input as row vectors or not. If the parameter is set to True, the repeating operation will be performed in the column direction. Otherwise, it will be performed in the row direction.
  • act (paddle.v2.activation.Base) – Activation type. paddle.v2.activation.Identity is the default activation.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

rotate

class paddle.v2.layer.rotate

A layer that rotates each feature channel by 90 degrees (clockwise). It is usually used when the input sample is an image or a feature map.

\[y(j,i,:) = x(M-i-1,j,:)\]

where \(x\) is (M x N x C) input, and \(y\) is (N x M x C) output.

The example usage is:

rot = rotate(input=layer,
                   height=100,
                   width=100)
Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • height (int) – The height of the sample matrix.
  • width (int) – The width of the sample matrix.
  • name (basestring) – The name of this layer. It is optional.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

seq_reshape

class paddle.v2.layer.seq_reshape

A layer for reshaping the sequence. Assume the input sequence has T instances, the dimension of each instance is M, and the input reshape_size is N, then the output sequence has T*M/N instances, the dimension of each instance is N.

Note that T*M/N must be an integer. For example, a sequence of 8 instances of dimension 6 reshaped with reshape_size = 4 yields 12 instances of dimension 4.

The example usage is:

reshape = seq_reshape(input=layer, reshape_size=4)
Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • reshape_size (int) – The dimension of the reshaped sequence.
  • name (basestring) – The name of this layer. It is optional.
  • act (paddle.v2.activation.Base) – Activation type. paddle.v2.activation.Identity is the default activation.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
  • bias_attr (paddle.v2.attr.ParameterAttribute | None | bool | Any) – The bias attribute. If the parameter is set to False or an object whose type is not paddle.v2.attr.ParameterAttribute, no bias is defined. If the parameter is set to True, the bias is initialized to zero.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

Math Layers

addto

class paddle.v2.layer.addto

AddtoLayer.

\[y = f(\sum_{i} x_i + b)\]

where \(y\) is output, \(x\) is input, \(b\) is bias, and \(f\) is activation function.

The example usage is:

addto = addto(input=[layer1, layer2],
                    act=paddle.v2.activation.Relu(),
                    bias_attr=False)

This layer just simply adds all input layers together, then activates the sum. All inputs should share the same dimension, which is also the dimension of this layer’s output.

There is no weight matrix for each input, because this is just a simple add operation. If you want a more complicated operation before the add, please use mixed.

Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • input (paddle.v2.config_base.Layer | list | tuple) – The input layers. It could be a paddle.v2.config_base.Layer or list/tuple of paddle.v2.config_base.Layer.
  • act (paddle.v2.activation.Base) – Activation Type. paddle.v2.activation.Linear is the default activation.
  • bias_attr (paddle.v2.attr.ParameterAttribute | None | bool | Any) – The bias attribute. If the parameter is set to False or an object whose type is not paddle.v2.attr.ParameterAttribute, no bias is defined. If the parameter is set to True, the bias is initialized to zero.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

linear_comb

class paddle.v2.layer.linear_comb
A layer for the weighted sum of vectors. It takes two inputs.
  • Input: the size of the weights is M; the size of the vectors is M*N.
  • Output: a vector of size N.
\[z(i) = \sum_{j=0}^{M-1} x(j) y(i+Nj)\]

where \(0 \le i \le N-1\)

Or in the matrix notation:

\[z = x^\mathrm{T} Y\]
In this formula:
  • \(x\): weights
  • \(y\): vectors.
  • \(z\): the output.

Note that the above computation is for one sample. Multiple samples are processed in one batch.

The simple usage is:

linear_comb = linear_comb(weights=weight, vectors=vectors,
                                size=elem_dim)
Parameters:
  • weights (paddle.v2.config_base.Layer) – The weight layer.
  • vectors (paddle.v2.config_base.Layer) – The vector layer.
  • size (int) – The dimension of this layer.
  • name (basestring) – The name of this layer. It is optional.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

interpolation

class paddle.v2.layer.interpolation

This layer performs linear interpolation on two inputs, which is used in NEURAL TURING MACHINE.

\[y.row[i] = w[i] * x_1.row[i] + (1 - w[i]) * x_2.row[i]\]

where \(x_1\) and \(x_2\) are two (batchSize x dataDim) inputs, \(w\) is (batchSize x 1) weight vector, and \(y\) is (batchSize x dataDim) output.

The example usage is:

interpolation = interpolation(input=[layer1, layer2], weight=layer3)
Parameters:
  • input (list | tuple) – The input of this layer.
  • weight (paddle.v2.config_base.Layer) – Weight layer.
  • name (basestring) – The name of this layer. It is optional.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

bilinear_interp

class paddle.v2.layer.bilinear_interp

This layer implements bilinear interpolation on convolutional layer’s output.

Please refer to Wikipedia: https://en.wikipedia.org/wiki/Bilinear_interpolation

The simple usage is:

bilinear = bilinear_interp(input=layer1, out_size_x=64, out_size_y=64)
Parameters:
  • input (paddle.v2.config_base.Layer.) – The input of this layer.
  • out_size_x (int) – The width of the output.
  • out_size_y (int) – The height of the output.
  • name (basestring) – The name of this layer. It is optional.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

dropout

class paddle.v2.layer.dropout

The example usage is:

dropout = dropout(input=input, dropout_rate=0.5)
Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • dropout_rate (float) – The probability of dropout.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

dot_prod

class paddle.v2.layer.dot_prod

A layer for computing the dot product of two vectors.

The example usage is:

dot_prod = dot_prod(input1=vec1, input2=vec2)
Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • input1 (paddle.v2.config_base.Layer) – The first input layer.
  • input2 (paddle.v2.config_base.Layer) – The second input layer.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

out_prod

class paddle.v2.layer.out_prod

A layer for computing the outer product of two vectors. The result is a matrix of size(input1) x size(input2).

The example usage is:

out_prod = out_prod(input1=vec1, input2=vec2)
Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • input1 – The first input layer.
  • input2 (paddle.v2.config_base.Layer) – The second input layer.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

power

class paddle.v2.layer.power

This layer applies a power function to a vector element-wise, which is used in NEURAL TURING MACHINE.

\[y = x^w\]

where \(x\) is an input vector, \(w\) is a scalar exponent, and \(y\) is an output vector.

The example usage is:

power = power(input=layer1, weight=layer2)
Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • weight (paddle.v2.config_base.Layer) – The exponent of the power.
  • name (basestring) – The name of this layer. It is optional.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

scaling

class paddle.v2.layer.scaling

A layer for multiplying input vector by weight scalar.

\[y = w x\]

where \(x\) is size=dataDim input, \(w\) is size=1 weight, and \(y\) is size=dataDim output.

Note that the above computation is for one sample. Multiple samples are processed in one batch.

The example usage is:

scale = scaling(input=layer1, weight=layer2)
Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • weight (paddle.v2.config_base.Layer) – The weight of each sample.
  • name (basestring) – The name of this layer. It is optional.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

clip

class paddle.v2.layer.clip

A layer for clipping the input value by the threshold.

\[out[i] = \min (\max (in[i],p_{1} ),p_{2} )\]
clip = clip(input=input, min=-10, max=10)
Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • input (paddle.v2.config_base.Layer.) – The input of this layer.
  • min (float) – The lower threshold for clipping.
  • max (float) – The upper threshold for clipping.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

resize

class paddle.v2.layer.resize

The resize layer resizes the input matrix with a shape of [Height, Width] into the output matrix with a shape of [Height x Width / size, size], where size is the parameter of this layer indicating the output dimension.
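
A minimal usage sketch (the input layer and the output dimension below are illustrative):

resized = resize(input=layer, size=256)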

Parameters:
  • input (paddle.v2.config_base.Layer.) – The input of this layer.
  • name (basestring) – The name of this layer. It is optional.
  • size (int) – The resized output dimension of this layer.
Returns:

A paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

slope_intercept

class paddle.v2.layer.slope_intercept

This layer applies a slope and an intercept to the input.

\[y = slope * x + intercept\]

The simple usage is:

scale = slope_intercept(input=input, slope=-1.0, intercept=1.0)
Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • name (basestring) – The name of this layer. It is optional.
  • slope (float) – The scale factor.
  • intercept (float) – The offset.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

tensor

class paddle.v2.layer.tensor

This layer performs tensor operation on two inputs. For example:

\[y_{i} = a * W_{i} * {b^\mathrm{T}}, i=0,1,...,K-1\]
In this formula:
  • \(a\): the first input contains M elements.
  • \(b\): the second input contains N elements.
  • \(y_{i}\): the i-th element of y.
  • \(W_{i}\): the i-th learned weight, whose shape is [M, N].
  • \(b^\mathrm{T}\): the transpose of \(b\).

The simple usage is:

tensor = tensor(a=layer1, b=layer2, size=1000)
Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • a (paddle.v2.config_base.Layer) – The first input of this layer.
  • b (paddle.v2.config_base.Layer) – The second input of this layer.
  • size (int) – The dimension of this layer.
  • act (paddle.v2.activation.Base) – Activation type. paddle.v2.activation.Linear is the default activation.
  • param_attr (paddle.v2.attr.ParameterAttribute) – The parameter attribute. See paddle.v2.attr.ParameterAttribute for details.
  • bias_attr (paddle.v2.attr.ParameterAttribute | None | bool | Any) – The parameter attribute for bias. If this parameter is set to False or an object whose type is not paddle.v2.attr.ParameterAttribute, no bias is defined. If this parameter is set to True, the bias is initialized to zero.
  • layer_attr (paddle.v2.attr.ExtraAttribute | None) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

cos_sim

class paddle.v2.layer.cos_sim

Cosine Similarity Layer. The cosine similarity equation is as follows:

\[similarity = cos(\theta) = {\mathbf{a} \cdot \mathbf{b} \over \|\mathbf{a}\| \|\mathbf{b}\|}\]

The size of a is M, and the size of b is M*N. The similarity is calculated N times with a step of M, so the output size is N. The similarity is multiplied by scale.

Note that the above computation is for one sample. Multiple samples are processed in one batch.

The example usage is:

cos = cos_sim(a=layer1, b=layer2, size=3)
Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • a (paddle.v2.config_base.Layer) – The first input of this layer.
  • b (paddle.v2.config_base.Layer) – The second input of this layer.
  • scale (float) – The scale of the cosine similarity. 1 is the default value.
  • size (int) – The dimension of this layer. NOTE size_a * size should equal size_b.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

l2_distance

class paddle.v2.layer.l2_distance

This layer calculates and returns the Euclidean distance between two input vectors x and y. The equation is as follows:

\[\mathrm{l2\_distance}(\mathbf{x}, \mathbf{y}) = \sqrt{\sum_{i=1}^D(x_i - y_i)^2}\]

The output size of this layer is fixed to be 1. Note that the above computation is for one sample. Multiple samples are processed in one batch.

The example usage is:

l2_sim = l2_distance(x=layer1, y=layer2)
Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • x (paddle.v2.config_base.Layer) – The first input x for this layer, whose output is a matrix with dimensionality N x D. N is the sample number in a mini-batch. D is the dimensionality of x’s output.
  • y (paddle.v2.config_base.Layer) – The second input y for this layer, whose output is a matrix with dimensionality N x D. N is the sample number in a mini-batch. D is the dimensionality of y’s output.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attributes, for example, drop rate. See paddle.v2.attr.ExtraAttribute for more details.
Returns:

The returned paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

trans

class paddle.v2.layer.trans

A layer for transposing a minibatch matrix.

\[y = x^\mathrm{T}\]

where \(x\) is (M x N) input, and \(y\) is (N x M) output.

The example usage is:

trans = trans(input=layer)
Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • name (basestring) – The name of this layer. It is optional.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

scale_shift

class paddle.v2.layer.scale_shift

A layer that applies a linear transformation to each element in each row of the input matrix. For each element, the layer first re-scales it and then adds a bias.

This layer is very similar to the SlopeInterceptLayer, except that the scale and bias are trainable.

\[y = w * x + b\]
scale_shift = scale_shift(input=input, bias_attr=False)
Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • param_attr (paddle.v2.attr.ParameterAttribute) – The parameter attribute of scaling. See paddle.v2.attr.ParameterAttribute for details.
  • bias_attr (paddle.v2.attr.ParameterAttribute | None | bool | Any) – The bias attribute. If the parameter is set to False or an object whose type is not paddle.v2.attr.ParameterAttribute, no bias is defined. If the parameter is set to True, the bias is initialized to zero.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

factorization_machine

class paddle.v2.layer.factorization_machine

The Factorization Machine models pairwise feature interactions as the inner product of the learned latent vectors corresponding to each input feature. The Factorization Machine can effectively capture feature interactions, especially when the input is sparse.

This implementation only considers the second-order feature interactions, using the formula:

\[y = \sum_{i=1}^{n-1}\sum_{j=i+1}^n\langle v_i, v_j \rangle x_i x_j\]

Note

X is the input vector with size n. V is the factor matrix. Each row of V is the latent vector corresponding to one input dimension. The size of each latent vector is k.

For details of Factorization Machine, please refer to the paper: Factorization machines.
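
A minimal usage sketch (the input layer name and the factor_size value below are illustrative):

fm = factorization_machine(input=sparse_feature,
                           factor_size=10)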

Parameters:
  • input (paddle.v2.config_base.Layer) – The input layer. Supported input types: all input data types on CPU, and only dense input types on GPU.
  • factor_size (int) – The hyperparameter that defines the dimensionality of the latent vectors.
  • act (paddle.v2.activation.Base) – Activation Type. Default is linear activation.
  • param_attr (paddle.v2.attr.ParameterAttribute) – The parameter attribute. See paddle.v2.attr.ParameterAttribute for details.
  • layer_attr (paddle.v2.attr.ExtraAttribute | None) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

Sampling Layers

maxid

class paddle.v2.layer.max_id

A layer for finding the id which has the maximal value for each sample. The result is stored in output.ids.

The example usage is:

maxid = max_id(input=layer)
Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • name (basestring) – The name of this layer. It is optional.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

sampling_id

class paddle.v2.layer.sampling_id

A layer for sampling an id from the multinomial distribution given by the input layer. One id is sampled for each sample.

The simple usage is:

sampled_id = sampling_id(input=input)
Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • name (basestring) – The name of this layer. It is optional.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

multiplex

class paddle.v2.layer.multiplex

This layer multiplexes multiple layers according to the indices provided by the first input layer. inputs[0]: the indices of the layers that form the output, of size batchSize. inputs[1:N]: the candidate output data. For each index i from 0 to batchSize - 1, the i-th row of the output is the same as the i-th row of the (index[i] + 1)-th layer.

For the i-th row of the output:

\[y[i][j] = x_{x_{0}[i] + 1}[i][j], \quad j = 0, 1, \ldots, (x_{1}.width - 1)\]

where \(y\) is the output, \(x_{k}\) is the k-th input layer, and \(k = x_{0}[i] + 1\).

The example usage is:

out = multiplex(input=layers)
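
A slightly fuller sketch (all layer and variable names below are illustrative): the first input carries the selecting indices and the remaining inputs are the candidate rows:

# index selects, per sample, which candidate layer provides the output row
index = data(name='index', type=paddle.v2.data_type.integer_value(2))
cand1 = fc(input=feature, size=64)
cand2 = fc(input=feature, size=64)
out = multiplex(input=[index, cand1, cand2])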
Parameters:
  • input (list of paddle.v2.config_base.Layer) – Input layers.
  • name (basestring) – The name of this layer. It is optional.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

Cost Layers

cross_entropy_cost

class paddle.v2.layer.cross_entropy_cost

A loss layer for multi-class cross entropy.

The example usage is:

cost = cross_entropy_cost(input=input,
                          label=label)
Parameters:
  • input (paddle.v2.config_base.Layer) – The first input layer.
  • label – The input label.
  • name (basestring) – The name of this layer. It is optional.
  • coeff (float) – The weight of the gradient in the back propagation. 1.0 is the default value.
  • weight (paddle.v2.config_base.Layer) – The weight layer defines a weight for each sample in the mini-batch. It is optional.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

cross_entropy_with_selfnorm_cost

class paddle.v2.layer.cross_entropy_with_selfnorm_cost

A loss layer for multi-class cross entropy with self-normalization. The input should be a vector of positive numbers, without normalization.

The example usage is:

cost = cross_entropy_with_selfnorm_cost(input=input,
                                        label=label)
Parameters:
  • input (paddle.v2.config_base.Layer) – The first input layer.
  • label – The input label.
  • name (basestring) – The name of this layer. It is optional.
  • coeff (float) – The weight of the gradient in the back propagation. 1.0 is the default value.
  • softmax_selfnorm_alpha (float) – The scale factor that affects the cost.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

multi_binary_label_cross_entropy_cost

class paddle.v2.layer.multi_binary_label_cross_entropy_cost

A loss layer for multi-binary-label cross entropy.

The example usage is:

cost = multi_binary_label_cross_entropy_cost(input=input,
                                             label=label)
Parameters:
  • input (paddle.v2.config_base.Layer) – The first input layer.
  • label – The input label.
  • name (basestring) – The name of this layer. It is optional.
  • coeff (float) – The weight of the gradient in the back propagation. 1.0 is the default value.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

huber_regression_cost

class paddle.v2.layer.huber_regression_cost

In statistics, the Huber loss is a loss function used in robust regression that is less sensitive to outliers in data than the squared error loss. Given a prediction f(x), a label y and a threshold \(\delta\), the loss function is defined as:

\[\begin{split}loss = \begin{cases} 0.5 (y - f(x))^{2}, & |y - f(x)| < \delta \\ \delta |y - f(x)| - 0.5 \delta^{2}, & \text{otherwise} \end{cases}\end{split}\]

The example usage is:

cost = huber_regression_cost(input=input, label=label)
Parameters:
  • input (paddle.v2.config_base.Layer) – The first input layer.
  • label – The input label.
  • name (basestring) – The name of this layer. It is optional.
  • delta (float) – The threshold \(\delta\) at which the loss changes from quadratic to linear.
  • coeff (float) – The weight of the gradient in the back propagation. 1.0 is the default value.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer.

huber_classification_cost

class paddle.v2.layer.huber_classification_cost

For classification purposes, a variant of the Huber loss called modified Huber is sometimes used. Given a prediction f(x) (a real-valued classifier score) and a true binary class label \(y\in \{-1, 1 \}\), the modified Huber loss is defined as:

\[\begin{split}loss = \begin{cases} \max(0, 1 - y f(x))^{2}, & y f(x) \ge -1 \\ -4 y f(x), & \text{otherwise} \end{cases}\end{split}\]

The example usage is:

cost = huber_classification_cost(input=input, label=label)
Parameters:
  • input (paddle.v2.config_base.Layer) – The first input layer.
  • label – The input label.
  • name (basestring) – The name of this layer. It is optional.
  • coeff (float) – The weight of the gradient in the back propagation. 1.0 is the default value.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

lambda_cost

class paddle.v2.layer.lambda_cost

lambdaCost for the LambdaRank learning-to-rank (LTR) approach.

The example usage is:

cost = lambda_cost(input=input,
                   score=score,
                   NDCG_num=8,
                   max_sort_size=-1)
Parameters:
  • input (paddle.v2.config_base.Layer) – The first input of this layer, which is often a document samples list of the same query and whose type must be sequence.
  • score – The scores of the samples.
  • NDCG_num (int) – The size of NDCG (Normalized Discounted Cumulative Gain), e.g., 5 for NDCG@5. It must be less than or equal to the minimum size of the list.
  • max_sort_size (int) – The size of partial sorting in calculating gradient. If max_sort_size is equal to -1 or greater than the number of the samples in the list, then the algorithm will sort the entire list to compute the gradient. In other cases, max_sort_size must be greater than or equal to NDCG_num.
  • name (basestring) – The name of this layer. It is optional.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

square_error_cost

class paddle.v2.layer.square_error_cost

Sum of squared error cost:

\[cost = \sum_{i=1}^N(t_i-y_i)^2\]
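
The example usage below is a minimal sketch (the prediction and label layers are illustrative):

cost = square_error_cost(input=prediction, label=label)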
Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • input (paddle.v2.config_base.Layer) – The first input layer.
  • label (paddle.v2.config_base.Layer) – The input label.
  • weight (paddle.v2.config_base.Layer) – The weight layer defines a weight for each sample in the mini-batch. It is optional.
  • coeff (float) – The weight of the gradient in the back propagation. 1.0 is the default value.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

rank_cost

class paddle.v2.layer.rank_cost

A cost Layer for learning to rank using gradient descent.

Reference:
Learning to Rank using Gradient Descent
\[ \begin{align}\begin{aligned}C_{i,j} & = -\tilde{P_{ij}} * o_{i,j} + log(1 + e^{o_{i,j}})\\o_{i,j} & = o_i - o_j\\\tilde{P_{i,j}} & = \{0, 0.5, 1\} \ or \ \{0, 1\}\end{aligned}\end{align} \]
In this formula:
  • \(C_{i,j}\) is the cross entropy cost.
  • \(\tilde{P_{i,j}}\) is the label. 1 means positive order and 0 means reverse order.
  • \(o_i\) and \(o_j\): the left output and right output. Their dimension is one.

The example usage is:

cost = rank_cost(left=out_left,
                 right=out_right,
                 label=label)
Parameters:
  • left (paddle.v2.config_base.Layer) – The first input; the size of this layer is 1.
  • right (paddle.v2.config_base.Layer) – The second input; the size of this layer is 1.
  • label (paddle.v2.config_base.Layer) – The label, 1 or 0, indicating positive or reverse order respectively.
  • weight (paddle.v2.config_base.Layer) – The weight layer defines a weight for each sample in the mini-batch. It is optional.
  • name (basestring) – The name of this layer. It is optional.
  • coeff (float) – The weight of the gradient in the back propagation. 1.0 is the default value.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

sum_cost

class paddle.v2.layer.sum_cost

A loss layer which calculates the sum of the input as loss.

The example usage is:

cost = sum_cost(input=input)
Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • name (basestring) – The name of this layer. It is optional.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer.

crf

class paddle.v2.layer.crf

A layer for calculating the cost of sequential conditional random field model.

The example usage is:

crf = crf(input=input,
                label=label,
                size=label_dim)
Parameters:
  • input (paddle.v2.config_base.Layer) – The first input layer.
  • label (paddle.v2.config_base.Layer) – The input label.
  • size (int) – The category number.
  • weight (paddle.v2.config_base.Layer) – The weight layer defines a weight for each sample in the mini-batch. It is optional.
  • param_attr (paddle.v2.attr.ParameterAttribute) – The parameter attribute. See paddle.v2.attr.ParameterAttribute for details.
  • name (basestring) – The name of this layer. It is optional.
  • coeff (float) – The weight of the gradient in the back propagation. 1.0 is the default value.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

crf_decoding

class paddle.v2.layer.crf_decoding

A layer for calculating the decoding sequence of a sequential conditional random field model. The decoding sequence is stored in output.ids. If the input ‘label’ is provided, it is treated as the ground-truth label, and this layer will also calculate the error. output.value[i] is 1 for an incorrect decoding and 0 for a correct one.

The example usage is:

crf_decoding = crf_decoding(input=input,
                                  size=label_dim)
Parameters:
  • input (paddle.v2.config_base.Layer) – The first input layer.
  • size (int) – The dimension of this layer.
  • label (paddle.v2.config_base.Layer | None) – The input label.
  • param_attr (paddle.v2.attr.ParameterAttribute) – The parameter attribute. See paddle.v2.attr.ParameterAttribute for details.
  • name (basestring) – The name of this layer. It is optional.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

ctc

class paddle.v2.layer.ctc

Connectionist Temporal Classification (CTC) is designed for temporal classification tasks, e.g., sequence labeling problems where the alignment between the inputs and the target labels is unknown.

Reference:
Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks

Note

Considering the ‘blank’ label needed by CTC, you need to use (num_classes + 1) as the size of the input, where num_classes is the category number. And the ‘blank’ is the last category index. So the size of ‘input’ layer (e.g. fc with softmax activation) should be (num_classes + 1). The size of ctc should also be (num_classes + 1).

The example usage is:

ctc = ctc(input=input,
                label=label,
                size=9055,
                norm_by_times=True)
Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • label (paddle.v2.config_base.Layer) – The input label.
  • size (int) – The dimension of this layer, which must be equal to (category number + 1).
  • name (basestring) – The name of this layer. It is optional.
  • norm_by_times (bool) – Whether to do normalization by times. False is the default.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

warp_ctc

class paddle.v2.layer.warp_ctc

A layer integrating the open-source warp-ctc library, which is used in Deep Speech 2: End-to-End Speech Recognition in English and Mandarin, to compute the Connectionist Temporal Classification (CTC) loss. Besides, another warp-ctc repository, forked from the official one, is maintained to enable more compilation options. During the build process, PaddlePaddle will clone the source code, then build and install it into the third_party/install/warpctc directory.

Reference:
Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks

Note

  • Let num_classes represents the category number. Considering the ‘blank’ label needed by CTC, you need to use (num_classes + 1) as the size of warp_ctc layer.
  • You can set ‘blank’ to any value in [0, num_classes], which should be consistent with the value used in your labels.
  • As a native ‘softmax’ activation is integrated into the warp-ctc library, a ‘linear’ activation is expected in the ‘input’ layer instead.

The example usage is:

ctc = warp_ctc(input=input,
                     label=label,
                     size=1001,
                     blank=1000,
                     norm_by_times=False)
Parameters:
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • label (paddle.v2.config_base.Layer) – The input label.
  • size (int) – The dimension of this layer, which must be equal to (category number + 1).
  • name (basestring) – The name of this layer. It is optional.
  • blank (int) – The ‘blank’ label used in ctc.
  • norm_by_times (bool) – Whether to do normalization by times. False is the default.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

nce

class paddle.v2.layer.nce

Noise-contrastive estimation.

Reference:
A fast and simple algorithm for training neural probabilistic language models.

The example usage is:

cost = nce(input=[layer1, layer2], label=layer2,
                 param_attr=[attr1, attr2], weight=layer3,
                 num_classes=3, neg_distribution=[0.1,0.3,0.6])
Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • input (paddle.v2.config_base.Layer | list | tuple | collections.Sequence) – The first input of this layer.
  • label (paddle.v2.config_base.Layer) – The input label.
  • weight (paddle.v2.config_base.Layer) – The weight layer defines a weight for each sample in the mini-batch. It is optional.
  • num_classes (int) – The number of classes.
  • act (paddle.v2.activation.Base) – Activation type. paddle.v2.activation.Sigmoid is the default activation.
  • param_attr (paddle.v2.attr.ParameterAttribute) – The parameter attribute. See paddle.v2.attr.ParameterAttribute for details.
  • num_neg_samples (int) – The number of sampled negative labels. 10 is the default value.
  • neg_distribution (list | tuple | collections.Sequence | None) – The discrete noisy distribution over the output space from which num_neg_samples negative labels are sampled. If this parameter is not set, a uniform distribution will be used. A user-defined distribution is a list whose length must be equal to the num_classes. Each member of the list defines the probability of a class given input x.
  • bias_attr (paddle.v2.attr.ParameterAttribute | None | bool | Any) – The parameter attribute for bias. If this parameter is set to False or an object whose type is not paddle.v2.attr.ParameterAttribute, no bias is defined. If this parameter is set to True, the bias is initialized to zero.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

hsigmoid

class paddle.v2.layer.hsigmoid

Organize the classes into a binary tree. At each node, a sigmoid function is used to calculate the probability of belonging to the right branch.

Reference:
Hierarchical Probabilistic Neural Network Language Model

The example usage is:

cost = hsigmoid(input=[layer1, layer2],
                label=data)
Parameters:
  • input (paddle.v2.config_base.Layer | list | tuple) – The input of this layer.
  • label (paddle.v2.config_base.Layer) – The input label.
  • num_classes (int) – The number of classes. And it should be larger than 2. If the parameter is not set or set to None, its actual value will be automatically set to the number of labels.
  • name (basestring) – The name of this layer. It is optional.
  • bias_attr (paddle.v2.attr.ParameterAttribute | None | bool | Any) – The bias attribute. If the parameter is set to False or an object whose type is not paddle.v2.attr.ParameterAttribute, no bias is defined. If the parameter is set to True, the bias is initialized to zero.
  • param_attr (paddle.v2.attr.ParameterAttribute) – The parameter attribute. See paddle.v2.attr.ParameterAttribute for details.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

smooth_l1_cost

class paddle.v2.layer.smooth_l1_cost

This is an L1 loss, but smoother. It requires that the sizes of the input and label are equal. The formula is as follows:

\[L = \sum_{i} smooth_{L1}(input_i - label_i)\]

in which

\[\begin{split}smooth_{L1}(x) = \begin{cases} 0.5x^2& \text{if} \ |x| < 1 \\ |x|-0.5& \text{otherwise} \end{cases}\end{split}\]
Reference:
Fast R-CNN

The example usage is:

cost = smooth_l1_cost(input=input,
                      label=label)
Parameters:
  • input (paddle.v2.config_base.Layer) – The input layer.
  • label – The input label.
  • name (basestring) – The name of this layer. It is optional.
  • coeff (float) – The weight of the gradient in the back propagation. 1.0 is the default value.
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

multibox_loss

class paddle.v2.layer.multibox_loss

Compute the location loss and the confidence loss for SSD (Single Shot MultiBox Detector).
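
A minimal usage sketch (all layer names and the num_classes value below are illustrative):

loss = multibox_loss(input_loc=loc_layers,
                     input_conf=conf_layers,
                     priorbox=priorbox,
                     label=label,
                     num_classes=21)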

Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • input_loc (paddle.v2.config_base.Layer | List of paddle.v2.config_base.Layer) – The input predicted locations.
  • input_conf (paddle.v2.config_base.Layer | List of paddle.v2.config_base.Layer) – The input priorbox confidence.
  • priorbox (paddle.v2.config_base.Layer) – The input priorbox location and the variance.
  • label (paddle.v2.config_base.Layer) – The input label.
  • num_classes (int) – The number of classes.
  • overlap_threshold (float) – The threshold of the overlap.
  • neg_pos_ratio (float) – The ratio of the negative bounding box to the positive bounding box.
  • neg_overlap (float) – The negative bounding box overlap threshold.
  • background_id (int) – The background class index.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

detection_output

class paddle.v2.layer.detection_output

Apply NMS (non-maximum suppression) to the output of the network and compute the predicted bounding box locations. The output shape of this layer could be zero if there is no valid bounding box.
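
A minimal usage sketch (all layer names and parameter values below are illustrative):

detections = detection_output(input_loc=loc_layers,
                              input_conf=conf_layers,
                              priorbox=priorbox,
                              num_classes=21,
                              nms_threshold=0.45,
                              keep_top_k=200)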

Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • input_loc (paddle.v2.config_base.Layer | List of paddle.v2.config_base.Layer) – The input predicted locations.
  • input_conf (paddle.v2.config_base.Layer | List of paddle.v2.config_base.Layer) – The input priorbox confidence.
  • priorbox (paddle.v2.config_base.Layer) – The input priorbox location and the variance.
  • num_classes (int) – The number of the classes.
  • nms_threshold (float) – The Non-maximum suppression threshold.
  • nms_top_k (int) – The number of bounding boxes kept in the NMS output.
  • keep_top_k (int) – The number of bounding boxes kept in the layer’s output.
  • confidence_threshold (float) – The classification confidence threshold.
  • background_id (int) – The background class index.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

Check Layer

eos

class paddle.v2.layer.eos

A layer for checking the EOS (end of sequence) for each sample: output_id = (input_id == conf.eos_id).

The result is stored in output.ids. It is used by the recurrent layer group.

The example usage is:

eos = eos(input=layer, eos_id=id)
Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • eos_id (int) – End id of sequence
  • layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer

Activation

prelu

class paddle.v2.layer.prelu

The Parametric ReLU activation, which activates the output with a learnable weight.

Reference:
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
\[\begin{split}\mathrm{prelu}(z_i) = \begin{cases} z_i, & \text{if } z_i > 0 \\ a_i z_i, & \text{otherwise} \end{cases}\end{split}\]

The example usage is:

prelu = prelu(input=layers, partial_sum=1)
Parameters:
  • name (basestring) – The name of this layer. It is optional.
  • input (paddle.v2.config_base.Layer) – The input of this layer.
  • partial_sum (int) –

    this parameter makes a group of inputs share the same weight.

    • partial_sum = 1, indicates the element-wise activation: each element has a weight.
    • partial_sum = number of elements in one channel, indicates the channel-wise activation, elements in a channel share the same weight.
    • partial_sum = number of outputs, indicates all elements share the same weight.
  • channel_shared (bool) –

    whether or not the parameters are shared across channels.

    • channel_shared = True, we set the partial_sum to the number of outputs.
    • channel_shared = False, we set the partial_sum to the number of elements in one channel.
  • num_channels (int) – The number of input channels.
  • param_attr (paddle.v2.attr.ParameterAttribute) – The parameter attribute. See paddle.v2.attr.ParameterAttribute for details.
  • layer_attr (paddle.v2.attr.ExtraAttribute | None) – The extra layer attribute. See paddle.v2.attr.ExtraAttribute for details.
Returns:

paddle.v2.config_base.Layer object.

Return type:

paddle.v2.config_base.Layer