is_test (bool, Default False): A flag indicating whether the layer is in
test phase or not.
...
@@ -1032,8 +1039,6 @@ class BatchNorm(layers.Layer):
is not set, the bias is initialized to zero. Default: None.
data_layout(string, default NCHW): The data layout of the input, either NCHW or NHWC.
in_place(bool, Default False): Make the input and output of batch norm reuse memory.
name(string, Default None): A name for this layer (optional). If set to None, the layer
will be named automatically.
moving_mean_name(string, Default None): The name of moving_mean which stores the global mean.
moving_variance_name(string, Default None): The name of moving_variance which stores the global variance.
do_model_average_for_mean_and_var(bool, Default False): Whether to do model averaging for mean and variance.
...
@@ -1050,12 +1055,12 @@ class BatchNorm(layers.Layer):
Variable: A tensor variable which is the result after applying batch normalization on the input.
Examples:
.. code-block:: python
import paddle.fluid as fluid
import paddle.fluid.dygraph.base as base
import numpy as np

with fluid.dygraph.guard():
    x = base.to_variable(np.random.random((3, 32)).astype('float32'))
    fc = fluid.dygraph.FC('fc', size=200, param_attr='fc1.w')
    hidden1 = fc(x)
    # num_channels must match the FC output width (200)
    batch_norm = fluid.dygraph.BatchNorm("batch_norm", 200)
    hidden2 = batch_norm(hidden1)
"""
"""
def __init__(self,
...
@@ -1193,11 +1198,13 @@ class Embedding(layers.Layer):
Args:
name_scope: See base class.
size(tuple|list): The shape of the lookup table parameter. It should have two elements which indicate the size
of the dictionary of embeddings and the size of each embedding vector respectively.
is_sparse(bool): The flag indicating whether to use sparse update.
is_distributed(bool): Whether to run the lookup table on a remote parameter server.
padding_idx(int|long|None): If :attr:`None`, it has no effect on the lookup.
Otherwise the given :attr:`padding_idx` indicates padding the output with zeros whenever lookup encounters
it in :attr:`input`. If :math:`padding_idx < 0`, the :attr:`padding_idx` to use in lookup is :math:`size[0] + padding_idx`
(see the sketch after the example below).
param_attr(ParamAttr): Parameters for this layer
dtype(np.dtype|core.VarDesc.VarType|str): The type of data: float32, float16, int, etc.
...
@@ -1209,15 +1216,19 @@ class Embedding(layers.Layer):
.. code-block:: python
import paddle.fluid as fluid
import paddle.fluid.dygraph.base as base
import numpy as np
inp_word = np.array([[[1]]]).astype('int64')
dict_size = 20
with fluid.dygraph.guard():
    emb = fluid.dygraph.Embedding(
        name_scope='embedding',
        size=[dict_size, 32],
        param_attr='emb.w',
        is_sparse=False)
    static_rlt3 = emb(base.to_variable(inp_word))
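
A sketch of padding_idx, continuing the snippet above (hedged: the padding row
coming back as all zeros and a negative index counting back from size[0] follow
the documented semantics; the values here are illustrative):

.. code-block:: python

with fluid.dygraph.guard():
    # index 0 is treated as padding and embeds to an all-zero vector
    emb_pad = fluid.dygraph.Embedding(
        name_scope='embedding_pad',
        size=[dict_size, 32],
        padding_idx=0)
    zeros = emb_pad(base.to_variable(np.array([[[0]]]).astype('int64')))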
"""
"""
def __init__(self,
...
@@ -1228,7 +1239,6 @@ class Embedding(layers.Layer):
padding_idx=None,
param_attr=None,
dtype='float32'):
super(Embedding, self).__init__(name_scope, dtype)
self._size = size
self._is_sparse = is_sparse
...
@@ -1481,6 +1491,26 @@ class GRUUnit(layers.Layer):
Returns:
tuple: The hidden value, reset-hidden value and gate values.
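
A minimal dygraph usage sketch; the name_scope-first constructor and the
(input, hidden) call signature are assumptions here, and size is 3x the
hidden dimension as in the static gru_unit op:

.. code-block:: python

import paddle.fluid as fluid
import paddle.fluid.dygraph.base as base
import numpy as np

hidden_dim = 8
with fluid.dygraph.guard():
    # the input packs the update, reset and candidate projections,
    # hence 3 * hidden_dim features per example
    x = base.to_variable(np.random.random((4, hidden_dim * 3)).astype('float32'))
    pre_hidden = base.to_variable(np.random.random((4, hidden_dim)).astype('float32'))
    gru = fluid.dygraph.GRUUnit('gru', size=hidden_dim * 3)
    hidden, reset_hidden_pre, gate = gru(x, pre_hidden)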
...
@@ -2329,7 +2356,6 @@ class GroupNorm(layers.Layer):
If it is set to None, the bias is initialized to zero. Default: None.
act(str): Activation to be applied to the output of group normalization.
data_layout(string, default NCHW): Only NCHW is supported.
dtype(np.dtype|core.VarDesc.VarType|str): The type of data: float32, float16, int, etc.
Returns:
Variable: A tensor variable which is the result after applying group normalization on the input.
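
A minimal dygraph usage sketch; the name_scope-plus-groups constructor is an
assumption, and the channel count must be divisible by groups:

.. code-block:: python

import paddle.fluid as fluid
import paddle.fluid.dygraph.base as base
import numpy as np

with fluid.dygraph.guard():
    # NCHW input: 8 channels normalized in 4 groups of 2 channels each
    x = base.to_variable(np.random.random((2, 8, 32, 32)).astype('float32'))
    group_norm = fluid.dygraph.GroupNorm('group_norm', groups=4)
    out = group_norm(x)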
...
@@ -2536,7 +2562,9 @@ class TreeConv(layers.Layer):
out(Variable): (Tensor) The feature vector of subtrees. The shape of the output tensor is [max_tree_node_size, output_size, num_filters]. The output tensor could be a new feature vector for the next tree convolution layer.
Examples:
.. code-block:: python
import paddle.fluid as fluid
import numpy
...
@@ -2587,6 +2615,7 @@ class TreeConv(layers.Layer):