Nets
simple_img_conv_pool
paddle.v2.fluid.nets.simple_img_conv_pool(input, num_filters, filter_size, pool_size, pool_stride, act, param_attr=None, pool_type='max', use_cudnn=True)
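As the name suggests, this helper chains one image convolution layer and one pooling layer. A minimal usage sketch, assuming a 28x28 single-channel image input (the layer name and hyperparameters are illustrative, not from this reference):

import paddle.v2.fluid as fluid

# Hypothetical input: a batch of 28x28 single-channel images.
img = fluid.layers.data(name='img', shape=[1, 28, 28], dtype='float32')
conv_pool = fluid.nets.simple_img_conv_pool(
    input=img,
    num_filters=20,   # illustrative filter count
    filter_size=5,    # 5x5 convolution kernel
    pool_size=2,      # 2x2 pooling window
    pool_stride=2,
    act='relu')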
img_conv_group
paddle.v2.fluid.nets.img_conv_group(input, conv_num_filter, pool_size, conv_padding=1, conv_filter_size=3, conv_act=None, param_attr=None, conv_with_batchnorm=False, conv_batchnorm_drop_rate=0.0, pool_stride=1, pool_type=None, use_cudnn=True)
Image Convolution Group, used for VGG-style networks.
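As an illustrative sketch (the input shape and filter counts are assumptions, not from this reference), one VGG-style block of two 3x3 convolutions followed by 2x2 max pooling could be built as:

import paddle.v2.fluid as fluid

# Hypothetical input: a batch of 3-channel 224x224 images.
img = fluid.layers.data(name='img', shape=[3, 224, 224], dtype='float32')
block = fluid.nets.img_conv_group(
    input=img,
    conv_num_filter=[64, 64],  # one entry per stacked convolution
    conv_act='relu',
    pool_size=2,
    pool_stride=2,
    pool_type='max')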
sequence_conv_pool
paddle.v2.fluid.nets.sequence_conv_pool(input, num_filters, filter_size, param_attr=None, act='sigmoid', pool_type='max')
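A minimal usage sketch for text, assuming an embedded word sequence as input (the embedding layer, vocabulary size and dimensions are illustrative assumptions):

import paddle.v2.fluid as fluid

# Hypothetical input: a variable-length sequence of word ids
# (lod_level=1 marks it as a sequence / LoDTensor).
words = fluid.layers.data(name='words', shape=[1], dtype='int64', lod_level=1)
emb = fluid.layers.embedding(input=words, size=[10000, 128])
conv_pool = fluid.nets.sequence_conv_pool(
    input=emb,
    num_filters=128,
    filter_size=3,
    act='tanh')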
glu
paddle.v2.fluid.nets.glu(input, dim=-1)
The Gated Linear Unit (GLU), composed of a split, a sigmoid activation and an elementwise multiplication. Specifically, the input is split into two equal-sized parts, \(a\) and \(b\), along the given dimension, and the output is computed as follows:
\[\mathrm{GLU}(a, b) = a \otimes \sigma(b)\]
Refer to Language Modeling with Gated Convolutional Networks.
Parameters: - input (Variable) – The input variable which is a Tensor or LoDTensor.
- dim (int) – The dimension along which to split. If \(dim < 0\), the dimension to split along is \(rank(input) + dim\).
Returns: The Tensor variable with half the size of the input along the dimension dim.
Return type: Variable
Examples
# x is a Tensor variable with shape [3, 6, 9]
fluid.nets.glu(input=x, dim=1)  # shape of output: [3, 3, 9]
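Per the composition described above, glu should behave like the following sketch built from the underlying layers (illustrative only; the actual internals may differ):

# split x ([3, 6, 9]) into two [3, 3, 9] halves along dim 1
a, b = fluid.layers.split(x, num_or_sections=2, dim=1)
out = fluid.layers.elementwise_mul(x=a, y=fluid.layers.sigmoid(b))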
scaled_dot_product_attention
paddle.v2.fluid.nets.scaled_dot_product_attention(queries, keys, values, num_heads=1, dropout_rate=0.0)
The scaled dot-product attention.
An attention mechanism can be seen as mapping a query and a set of key-value pairs to an output. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function (the dot product here) of the query with the corresponding key.
The scaled dot-product attention can be implemented through (batch) matrix multiplication as follows:
\[Attention(Q, K, V) = softmax\left(\frac{QK^\mathrm{T}}{\sqrt{d_k}}\right)V\]
where \(d_k\) is the dimension of the keys.
Refer to Attention Is All You Need.
Parameters: - queries (Variable) – The input variable which should be a 3-D Tensor.
- keys (Variable) – The input variable which should be a 3-D Tensor.
- values (Variable) – The input variable which should be a 3-D Tensor.
- num_heads (int) – The number of attention heads used to compute the scaled dot-product attention. Default value is 1.
- dropout_rate (float) – The dropout rate applied to the attention weights. Default value is 0.0.
Returns: A 3-D Tensor computed by multi-head scaled dot-product attention.
Return type: Variable
Raises: ValueError – If the input queries, keys or values are not 3-D Tensors.
Note
1. When num_heads > 1, three linear projections are learned to map the input queries, keys and values to queries', keys' and values' respectively; these have the same shapes as the original inputs.
2. When num_heads == 1, scaled_dot_product_attention has no learnable parameters.
Examples
# Suppose q, k, v are Tensors with the following shapes:
# q: [3, 5, 9], k: [3, 6, 9], v: [3, 6, 10]
contexts = fluid.nets.scaled_dot_product_attention(q, k, v)
contexts.shape  # [3, 5, 10]
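To make the formula and the shape algebra concrete, here is a NumPy reference sketch of single-head scaled dot-product attention (not the fluid implementation; dropout and the multi-head projections are omitted):

import numpy as np

def ref_attention(q, k, v):
    # q: [batch, len_q, d_k], k: [batch, len_k, d_k], v: [batch, len_k, d_v]
    scores = np.matmul(q, k.transpose(0, 2, 1)) / np.sqrt(q.shape[-1])
    # softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return np.matmul(weights, v)  # [batch, len_q, d_v]

q, k, v = np.random.rand(3, 5, 9), np.random.rand(3, 6, 9), np.random.rand(3, 6, 10)
print(ref_attention(q, k, v).shape)  # (3, 5, 10)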