PaddlePaddle / PaddleDetection
Commit 89bbc4f6 (unverified)
Authored Jan 03, 2018 by Yang yaming, committed via GitHub on Jan 03, 2018
Merge pull request #7157 from pkuyym/fix-7156
Doc fix and enhancement for lstm_unit python wrapper.
Parents: 19541468, 60fecce4
Showing 2 changed files with 74 additions and 65 deletions (+74, −65)
python/paddle/v2/fluid/layers/nn.py (+72, −63)
python/paddle/v2/fluid/tests/test_layers.py (+2, −2)
python/paddle/v2/fluid/layers/nn.py
@@ -151,7 +151,7 @@ def embedding(input, size, is_sparse=False, param_attr=None, dtype='float32'):
     Args:
         input(Variable): Input to the function
         size(tuple|list|None): Shape of the look up table parameter
         is_sparse(bool): Boolean flag that specifying whether the input is sparse
         param_attr(ParamAttr): Parameters for this layer
         dtype(np.dtype|core.DataType|str): The type of data : float32, float_16, int etc
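For context, a minimal sketch of calling the embedding layer documented in this hunk; the data-layer name, vocabulary size, and embedding width below are illustrative assumptions, not taken from the diff:

    import paddle.v2.fluid as fluid

    # Word ids arrive as an int64 tensor; each id indexes a row of the table.
    words = fluid.layers.data(name='words', shape=[1], dtype='int64')
    # size is the shape of the look-up table: [vocabulary size, embedding width].
    emb = fluid.layers.embedding(input=words, size=[10000, 32], dtype='float32')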
@@ -366,9 +366,9 @@ def cross_entropy(input, label, **kwargs):
     1) One-hot cross-entropy:
        `soft_label = False`, `Label[i, 0]` indicates the class index for sample i:

        .. math::

            Y[i] = -\log(X[i, Label[i]])

     2) Soft-label cross-entropy:
@@ -386,15 +386,15 @@ def cross_entropy(input, label, **kwargs):
     As a special case of 2), when each row of 'label' has only one
     non-zero element which is equal to 1, soft-label cross-entropy degenerates
     to a one-hot cross-entropy with one-hot label representation.

     Args:
         input (Variable|list): a 2-D tensor with shape [N x D], where N is the
             batch size and D is the number of classes. This input is a probability
             computed by the previous operator, which is almost always the result
             of a softmax operator.
         label (Variable|list): the ground truth which is a 2-D tensor. When
             `soft_label` is set to `False`, `label` is a tensor<int64> with shape
             [N x 1]. When `soft_label` is set to `True`, `label` is a
             tensor<float/double> with shape [N x D].
         soft_label (bool, via `**kwargs`): a flag indicating whether to interpretate
             the given labels as soft labels, default `False`.
@@ -403,7 +403,7 @@ def cross_entropy(input, label, **kwargs):
     A 2-D tensor with shape [N x 1], the cross entropy loss.

     Raises:
         `ValueError`: 1) the 1st dimension of `input` and `label` are not equal; 2) when \
             `soft_label == True`, and the 2nd dimension of `input` and `label` are not \
             equal; 3) when `soft_label == False`, and the 2nd dimension of `label` is not 1.
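To make the two labeling modes above concrete, a hedged usage sketch; the network and sizes are illustrative assumptions:

    import paddle.v2.fluid as fluid

    image = fluid.layers.data(name='image', shape=[784], dtype='float32')
    prob = fluid.layers.fc(input=image, size=10, act='softmax')  # the [N x D] input

    # 1) One-hot mode: label is [N x 1] int64 class indices (soft_label=False).
    label = fluid.layers.data(name='label', shape=[1], dtype='int64')
    loss = fluid.layers.cross_entropy(input=prob, label=label)

    # 2) Soft-label mode: label is an [N x D] float distribution per sample.
    soft = fluid.layers.data(name='soft_label', shape=[10], dtype='float32')
    soft_loss = fluid.layers.cross_entropy(input=prob, label=soft, soft_label=True)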
@@ -727,9 +727,9 @@ def conv2d(input,
 def sequence_pool(input, pool_type, **kwargs):
     """
     This function add the operator for sequence pooling.
     It pools features of all time-steps of each instance, and is applied
     on top of the input using pool_type mentioned in the parameters.

     It supports four pool_type:
@@ -758,7 +758,7 @@ def sequence_pool(input, pool_type, **kwargs):
     Args:
         input(variable): The input variable which is a LoDTensor.
         pool_type (string): The pooling type of sequence_pool.
             It supports average, sum, sqrt and max.

     Returns:
@@ -768,7 +768,7 @@ def sequence_pool(input, pool_type, **kwargs):
         .. code-block:: python

             x = fluid.layers.data(name='x', shape=[7, 1],
                              dtype='float32', lod_level=1)
             avg_x = fluid.layers.sequence_pool(input=x, pool_type='average')
             sum_x = fluid.layers.sequence_pool(input=x, pool_type='sum')
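As a sanity check on the four pool types, a NumPy sketch of what each one computes for a single instance's time-steps (the sqrt scaling, i.e. the sum divided by the square root of the sequence length, is my reading of the op and should be treated as an assumption):

    import numpy as np

    seq = np.array([1.0, 3.0, 2.0])           # one instance, three time-steps
    avg_out = seq.mean()                       # pool_type='average' -> 2.0
    sum_out = seq.sum()                        # pool_type='sum'     -> 6.0
    sqrt_out = seq.sum() / np.sqrt(len(seq))   # pool_type='sqrt'    -> 6/sqrt(3)
    max_out = seq.max()                        # pool_type='max'     -> 3.0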
@@ -816,7 +816,7 @@ def sequence_first_step(input, **kwargs):
         .. code-block:: python

             x = fluid.layers.data(name='x', shape=[7, 1],
                              dtype='float32', lod_level=1)
             x_first_step = fluid.layers.sequence_first_step(input=x)
     """
@@ -849,7 +849,7 @@ def sequence_last_step(input, **kwargs):
         .. code-block:: python

             x = fluid.layers.data(name='x', shape=[7, 1],
                              dtype='float32', lod_level=1)
             x_last_step = fluid.layers.sequence_last_step(input=x)
     """
@@ -1168,25 +1168,26 @@ def lstm_unit(x_t,
     .. math::

-        i_t & = \sigma(W_{x_i}x_{t} + W_{h_i}h_{t-1} + W_{c_i}c_{t-1} + b_i)
+        i_t & = \sigma(W_{x_i}x_{t} + W_{h_i}h_{t-1} + b_i)

-        f_t & = \sigma(W_{x_f}x_{t} + W_{h_f}h_{t-1} + W_{c_f}c_{t-1} + b_f)
+        f_t & = \sigma(W_{x_f}x_{t} + W_{h_f}h_{t-1} + b_f)

         c_t & = f_tc_{t-1} + i_t tanh (W_{x_c}x_t + W_{h_c}h_{t-1} + b_c)

-        o_t & = \sigma(W_{x_o}x_{t} + W_{h_o}h_{t-1} + W_{c_o}c_t + b_o)
+        o_t & = \sigma(W_{x_o}x_{t} + W_{h_o}h_{t-1} + b_o)

         h_t & = o_t tanh(c_t)

-    The inputs of lstm unit includes :math:`x_t`, :math:`h_{t-1}` and
-    :math:`c_{t-1}`. The implementation separates the linear transformation
-    and non-linear transformation apart. Here, we take :math:`i_t` as an
-    example. The linear transformation is applied by calling a `fc` layer and
-    the equation is:
+    The inputs of lstm unit include :math:`x_t`, :math:`h_{t-1}` and
+    :math:`c_{t-1}`. The 2nd dimensions of :math:`h_{t-1}` and :math:`c_{t-1}`
+    should be same. The implementation separates the linear transformation and
+    non-linear transformation apart. Here, we take :math:`i_t` as an example.
+    The linear transformation is applied by calling a `fc` layer and the
+    equation is:

     .. math::

-        L_{i_t} = W_{x_i}x_{t} + W_{h_i}h_{t-1} + W_{c_i}c_{t-1} + b_i
+        L_{i_t} = W_{x_i}x_{t} + W_{h_i}h_{t-1} + b_i

     The non-linear transformation is applied by calling `lstm_unit_op` and the
     equation is:
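A self-contained NumPy sketch of one lstm_unit step under the corrected (peephole-free) equations above, separating the linear `fc`-style transformation from the non-linear part; all names and sizes here are illustrative assumptions:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    M, N, S = 4, 8, 16                  # batch size, input size, lstm unit size
    x_t = np.random.randn(M, N)
    h_prev = np.random.randn(M, S)      # hidden_t_prev, shape M x S
    c_prev = np.random.randn(M, S)      # cell_t_prev, shape M x S

    # Linear transformation: one fc over [x_t, h_{t-1}] producing all four gates.
    W = np.random.randn(N + S, 4 * S)   # stacked [W_x; W_h] for i, f, c, o
    b = np.zeros(4 * S)
    L = np.concatenate([x_t, h_prev], axis=1).dot(W) + b
    L_i, L_f, L_c, L_o = np.split(L, 4, axis=1)

    # Non-linear transformation (the lstm_unit_op part).
    i_t, f_t, o_t = sigmoid(L_i), sigmoid(L_f), sigmoid(L_o)
    c_t = f_t * c_prev + i_t * np.tanh(L_c)
    h_t = o_t * np.tanh(c_t)            # the layer's two outputs: h_t and c_t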
@@ -1198,9 +1199,12 @@ def lstm_unit(x_t,
     This layer has two outputs including :math:`h_t` and :math:`o_t`.

     Args:
-        x_t (Variable): The input value of current step.
-        hidden_t_prev (Variable): The hidden value of lstm unit.
-        cell_t_prev (Variable): The cell value of lstm unit.
+        x_t (Variable): The input value of current step, a 2-D tensor with shape
+            M x N, M for batch size and N for input size.
+        hidden_t_prev (Variable): The hidden value of lstm unit, a 2-D tensor
+            with shape M x S, M for batch size and S for size of lstm unit.
+        cell_t_prev (Variable): The cell value of lstm unit, a 2-D tensor with
+            shape M x S, M for batch size and S for size of lstm unit.
         forget_bias (float): The forget bias of lstm unit.
         param_attr (ParamAttr): The attributes of parameter weights, used to set
             initializer, name etc.
@@ -1213,14 +1217,15 @@ def lstm_unit(x_t,
     Raises:
-        ValueError: The ranks of **x_t**, **hidden_t_prev** and **cell_t_prev** \
-            not be 2 or the 1st dimensions of **x_t**, **hidden_t_prev** \
-            and **cell_t_prev** not be the same.
+        ValueError: The ranks of **x_t**, **hidden_t_prev** and **cell_t_prev** \
+            not be 2 or the 1st dimensions of **x_t**, **hidden_t_prev** \
+            and **cell_t_prev** not be the same or the 2nd dimensions of \
+            **hidden_t_prev** and **cell_t_prev** not be the same.

     Examples:

         .. code-block:: python

             x_t = fluid.layers.fc(input=x_t_data, size=10)
-            prev_hidden = fluid.layers.fc(input=prev_hidden_data, size=20)
+            prev_hidden = fluid.layers.fc(input=prev_hidden_data, size=30)
             prev_cell = fluid.layers.fc(input=prev_cell_data, size=30)
             hidden_value, cell_value = fluid.layers.lstm_unit(x_t=x_t,
                                                    hidden_t_prev=prev_hidden,
@@ -1239,7 +1244,11 @@ def lstm_unit(x_t,
     if x_t.shape[0] != hidden_t_prev.shape[0] or x_t.shape[
             0] != cell_t_prev.shape[0]:
-        raise ValueError("The 1s dimension of x_t, hidden_t_prev and "
+        raise ValueError("The 1st dimensions of x_t, hidden_t_prev and "
                          "cell_t_prev must be the same.")

+    if hidden_t_prev.shape[1] != cell_t_prev.shape[1]:
+        raise ValueError("The 2nd dimensions of hidden_t_prev and "
+                         "cell_t_prev must be the same.")

     if bias_attr is None:
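The added check is easy to exercise in isolation. A sketch of the same logic over plain shape tuples (stand-ins for the fluid Variables), which also shows why the docstring example above had to change `size=20` to `size=30`:

    def check_lstm_unit_shapes(x_t_shape, hidden_shape, cell_shape):
        if x_t_shape[0] != hidden_shape[0] or x_t_shape[0] != cell_shape[0]:
            raise ValueError("The 1st dimensions of x_t, hidden_t_prev and "
                             "cell_t_prev must be the same.")
        if hidden_shape[1] != cell_shape[1]:
            raise ValueError("The 2nd dimensions of hidden_t_prev and "
                             "cell_t_prev must be the same.")

    check_lstm_unit_shapes((10, 10), (10, 30), (10, 30))    # passes
    # check_lstm_unit_shapes((10, 10), (10, 20), (10, 30))  # raises: the old
    # example's mismatched hidden/cell sizes now trip the second check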
@@ -1268,17 +1277,17 @@ def lstm_unit(x_t,
 def reduce_sum(input, dim=None, keep_dim=False):
     """
     Computes the sum of tensor elements over the given dimension.

     Args:
         input (Variable): The input variable which is a Tensor or LoDTensor.
         dim (int|None): The dimension along which the sum is performed. If
             :attr:`None`, sum all elements of :attr:`input` and return a
             Tensor variable with a single element, otherwise must be in the
             range :math:`[-rank(input), rank(input))`. If :math:`dim < 0`,
             the dimension to reduce is :math:`rank + dim`.
         keep_dim (bool): Whether to reserve the reduced dimension in the
             output Tensor. The result tensor will have one fewer dimension
             than the :attr:`input` unless :attr:`keep_dim` is true.

     Returns:
@@ -1312,17 +1321,17 @@ def reduce_sum(input, dim=None, keep_dim=False):
 def reduce_mean(input, dim=None, keep_dim=False):
     """
     Computes the mean of tensor elements over the given dimension.

     Args:
         input (Variable): The input variable which is a Tensor or LoDTensor.
         dim (int|None): The dimension along which the mean is computed. If
             :attr:`None`, compute the mean over all elements of :attr:`input`
             and return a Tensor variable with a single element, otherwise
             must be in the range :math:`[-rank(input), rank(input))`. If
             :math:`dim < 0`, the dimension to reduce is :math:`rank + dim`.
         keep_dim (bool): Whether to reserve the reduced dimension in the
             output Tensor. The result tensor will have one fewer dimension
             than the :attr:`input` unless :attr:`keep_dim` is true.

     Returns:
@@ -1356,22 +1365,22 @@ def reduce_mean(input, dim=None, keep_dim=False):
 def reduce_max(input, dim=None, keep_dim=False):
     """
     Computes the maximum of tensor elements over the given dimension.

     Args:
         input (Variable): The input variable which is a Tensor or LoDTensor.
         dim (int|None): The dimension along which the maximum is computed.
             If :attr:`None`, compute the maximum over all elements of
             :attr:`input` and return a Tensor variable with a single element,
             otherwise must be in the range :math:`[-rank(input), rank(input))`.
             If :math:`dim < 0`, the dimension to reduce is :math:`rank + dim`.
         keep_dim (bool): Whether to reserve the reduced dimension in the
             output Tensor. The result tensor will have one fewer dimension
             than the :attr:`input` unless :attr:`keep_dim` is true.

     Returns:
         Variable: The reduced Tensor variable.

     Examples:
         .. code-block:: python
@@ -1400,22 +1409,22 @@ def reduce_max(input, dim=None, keep_dim=False):
...
@@ -1400,22 +1409,22 @@ def reduce_max(input, dim=None, keep_dim=False):
def
reduce_min
(
input
,
dim
=
None
,
keep_dim
=
False
):
def
reduce_min
(
input
,
dim
=
None
,
keep_dim
=
False
):
"""
"""
Computes the minimum of tensor elements over the given dimension.
Computes the minimum of tensor elements over the given dimension.
Args:
Args:
input (Variable): The input variable which is a Tensor or LoDTensor.
input (Variable): The input variable which is a Tensor or LoDTensor.
dim (int|None): The dimension along which the minimum is computed.
dim (int|None): The dimension along which the minimum is computed.
If :attr:`None`, compute the minimum over all elements of
If :attr:`None`, compute the minimum over all elements of
:attr:`input` and return a Tensor variable with a single element,
:attr:`input` and return a Tensor variable with a single element,
otherwise must be in the range :math:`[-rank(input), rank(input))`.
otherwise must be in the range :math:`[-rank(input), rank(input))`.
If :math:`dim < 0`, the dimension to reduce is :math:`rank + dim`.
If :math:`dim < 0`, the dimension to reduce is :math:`rank + dim`.
keep_dim (bool): Whether to reserve the reduced dimension in the
keep_dim (bool): Whether to reserve the reduced dimension in the
output Tensor. The result tensor will have one fewer dimension
output Tensor. The result tensor will have one fewer dimension
than the :attr:`input` unless :attr:`keep_dim` is true.
than the :attr:`input` unless :attr:`keep_dim` is true.
Returns:
Returns:
Variable: The reduced Tensor variable.
Variable: The reduced Tensor variable.
Examples:
Examples:
.. code-block:: python
.. code-block:: python
...
...
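The four reduce ops share the same `dim`/`keep_dim` semantics. A NumPy analogue for reduce_sum (reduce_mean, reduce_max, and reduce_min behave identically apart from the reduction function); the array is illustrative:

    import numpy as np

    x = np.array([[0.2, 0.3, 0.5, 0.9],
                  [0.1, 0.2, 0.6, 0.7]])

    np.sum(x)                         # dim=None: reduce all elements -> 3.5
    np.sum(x, axis=0)                 # dim=0 -> shape (4,)
    np.sum(x, axis=-1)                # dim=-1: reduces dimension rank + dim = 1
    np.sum(x, axis=1, keepdims=True)  # keep_dim=True -> shape (2, 1)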
python/paddle/v2/fluid/tests/test_layers.py
@@ -177,8 +177,8 @@ class TestBook(unittest.TestCase):
             name='x_t_data', shape=[10, 10], dtype='float32')
         x_t = layers.fc(input=x_t_data, size=10)
         prev_hidden_data = layers.data(
-            name='prev_hidden_data', shape=[10, 20], dtype='float32')
-        prev_hidden = layers.fc(input=prev_hidden_data, size=20)
+            name='prev_hidden_data', shape=[10, 30], dtype='float32')
+        prev_hidden = layers.fc(input=prev_hidden_data, size=30)
         prev_cell_data = layers.data(
             name='prev_cell', shape=[10, 30], dtype='float32')
         prev_cell = layers.fc(input=prev_cell_data, size=30)