PaddlePaddle / PaddleDetection

Commit b23a23c9
Authored on April 28, 2017 by zhanghaichao
fixed error in beam_search example and documents
Parent: 29026f9f

Showing 1 changed file with 27 additions and 20 deletions (+27, -20):

python/paddle/trainer_config_helpers/layers.py
@@ -1349,9 +1349,9 @@ def last_seq(input,
    """
    Get Last Timestamp Activation of a sequence.

    If stride > 0, this layer slides a window whose size is determined by stride,
    and returns the last value of the window as the output. Thus, a long sequence
    will be shortened. Note that for a sequence with sub-sequences, the default
    value of stride is -1.

    The simple usage is:
@@ -1365,7 +1365,7 @@ def last_seq(input,
    :type name: basestring
    :param input: Input layer name.
    :type input: LayerOutput
    :param stride: window size.
    :type stride: Int
    :param layer_attr: extra layer attributes.
    :type layer_attr: ExtraLayerAttribute.
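The stride semantics described in the docstring above can be illustrated in plain Python (a sketch of the documented behaviour only, not the layer's implementation): non-overlapping windows of size `stride` slide over the sequence, and the last value of each window is emitted.

```python
def last_in_windows(seq, stride):
    """Illustrative only: mirror the documented last_seq behaviour.

    With stride > 0, slide non-overlapping windows of size `stride` over
    `seq` and keep the last value of each window; with stride <= 0, keep
    only the final timestamp.
    """
    if stride <= 0:
        return [seq[-1]]
    return [seq[min(i + stride, len(seq)) - 1]
            for i in range(0, len(seq), stride)]
```

For example, a length-7 sequence with stride 3 is shortened to 3 values: the last element of each of the windows [1..3], [4..6], [7].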
@@ -1405,9 +1405,9 @@ def first_seq(input,
    """
    Get First Timestamp Activation of a sequence.

    If stride > 0, this layer slides a window whose size is determined by stride,
    and returns the first value of the window as the output. Thus, a long sequence
    will be shortened. Note that for a sequence with sub-sequences, the default
    value of stride is -1.

    The simple usage is:
@@ -1421,7 +1421,7 @@ def first_seq(input,
    :type name: basestring
    :param input: Input layer name.
    :type input: LayerOutput
    :param stride: window size.
    :type stride: Int
    :param layer_attr: extra layer attributes.
    :type layer_attr: ExtraLayerAttribute.
@@ -1561,7 +1561,7 @@ def seq_reshape_layer(input,
                      bias_attr=None):
    """
    A layer for reshaping the sequence. Assume the input sequence has T instances,
    the dimension of each instance is M, and the input reshape_size is N, then the
    output sequence has T*M/N instances, the dimension of each instance is N.

    Note that T*M/N must be an integer.
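The shape arithmetic in that docstring is worth spelling out: the total number of scalars T*M is preserved, only regrouped into rows of width N. A minimal sketch (the function name is hypothetical, for illustration only):

```python
def seq_reshape_shape(T, M, N):
    """Output shape of the reshape described above: T instances of
    dimension M become (T*M)//N instances of dimension N.
    T*M must be divisible by N."""
    assert (T * M) % N == 0, "T*M/N must be an integer"
    return ((T * M) // N, N)
```

So a sequence of 4 instances of dimension 6 reshaped with reshape_size=8 yields 3 instances of dimension 8.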
@@ -2118,8 +2118,8 @@ def img_conv_layer(input,
    :param trans: true if it is a convTransLayer, false if it is a convLayer
    :type trans: bool
    :param layer_type: specify the layer_type, default is None. If trans=True,
                       layer_type has to be "exconvt" or "cudnn_convt",
                       otherwise layer_type has to be either "exconv" or
                       "cudnn_conv"
    :type layer_type: String
    :return: LayerOutput object.
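The constraint between `trans` and `layer_type` stated above can be captured as a small validity check (a hypothetical helper, not part of the library's API):

```python
def layer_type_is_valid(trans, layer_type):
    """Check layer_type against trans per the img_conv_layer docstring:
    None is always allowed (the layer chooses a default); with trans=True
    only the transposed-convolution types are valid, otherwise only the
    plain convolution types."""
    if layer_type is None:
        return True
    if trans:
        return layer_type in ("exconvt", "cudnn_convt")
    return layer_type in ("exconv", "cudnn_conv")
```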
@@ -2337,9 +2337,9 @@ def spp_layer(input,
    .. code-block:: python

        spp = spp_layer(input=data,
                        pyramid_height=2,
                        num_channels=16,
                        pool_type=MaxPooling())

    :param name: layer name.
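Assuming this layer follows the standard spatial pyramid pooling scheme (level l pools into a 2^l by 2^l grid, i.e. 4**l bins), the output size of the example above can be sketched as follows; this is an assumption about the pooling scheme, not taken from the diff:

```python
def spp_output_size(pyramid_height, num_channels):
    """Sketch of the usual SPP output size: level l contributes 4**l
    bins, each producing one pooled value per channel."""
    bins = sum(4 ** l for l in range(pyramid_height))
    return num_channels * bins
```

Under that assumption, pyramid_height=2 with 16 channels gives 16 * (1 + 4) = 80 output values.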
@@ -2433,7 +2433,7 @@ def img_cmrnorm_layer(input,
    The example usage is:

    .. code-block:: python

        norm = img_cmrnorm_layer(input=net, size=5)

    :param name: layer name.
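Cross-map response normalization normalizes each activation by the energy of its neighbouring channels. A generic sketch of that computation at one spatial position is below; the constants `alpha`, `beta`, and `k` are illustrative defaults from the usual LRN formulation, not necessarily this layer's defaults:

```python
def cmrnorm(x, size=5, alpha=1e-4, beta=0.75, k=1.0):
    """Generic cross-channel (local response) normalization sketch.
    x: list of per-channel activations at one spatial position; each
    value is divided by (k + alpha * sum of squares over a window of
    `size` channels) ** beta."""
    half = size // 2
    out = []
    for c in range(len(x)):
        lo, hi = max(0, c - half), min(len(x), c + half + 1)
        scale = (k + alpha * sum(v * v for v in x[lo:hi])) ** beta
        out.append(x[c] / scale)
    return out
```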
@@ -2494,7 +2494,7 @@ def batch_norm_layer(input,
    The example usage is:

    .. code-block:: python

        norm = batch_norm_layer(input=net, act=ReluActivation())

    :param name: layer name.
@@ -2795,11 +2795,11 @@ def seq_concat_layer(a, b, act=None, name=None, layer_attr=None,
    """
    Concat sequence a with sequence b.

    Inputs:
      - a = [a1, a2, ..., an]
      - b = [b1, b2, ..., bn]
      - Note that the length of a and b should be the same.

    Output: [a1, b1, a2, b2, ..., an, bn]

    The example usage is:
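The interleaving described in that docstring can be sketched in plain Python (illustrating the documented output pattern only, not the layer itself):

```python
def seq_concat(a, b):
    """Interleave two equal-length sequences as documented above:
    [a1, b1, a2, b2, ..., an, bn]."""
    assert len(a) == len(b), "the length of a and b should be the same"
    out = []
    for ai, bi in zip(a, b):
        out.extend([ai, bi])
    return out
```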
@@ -3563,9 +3563,15 @@ def beam_search(step,
             simple_rnn += last_time_step_output
             return simple_rnn

+        generated_word_embedding = GeneratedInput(
+                               size=target_dictionary_dim,
+                               embedding_name="target_language_embedding",
+                               embedding_size=word_vector_dim)
+
         beam_gen = beam_search(name="decoder",
                                step=rnn_step,
-                               input=[StaticInput(encoder_last)],
+                               input=[StaticInput(encoder_last),
+                                      generated_word_embedding],
                                bos_id=0,
                                eos_id=1,
                                beam_size=5)
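For context on what the corrected example configures, here is a toy beam-search decoder in plain Python. It is a generic sketch of the algorithm, not PaddlePaddle's implementation; `step_probs` is a hypothetical stand-in for the `rnn_step` network, returning a token distribution given the prefix generated so far:

```python
import math

def beam_search_decode(step_probs, bos_id, eos_id, beam_size, max_len=10):
    """Toy beam search: keep the beam_size highest-scoring prefixes,
    expanding each unfinished prefix with every candidate token, until
    all beams end in eos_id or max_len steps have been taken."""
    beams = [([bos_id], 0.0)]  # (token sequence, log-probability)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq[-1] == eos_id:          # finished beams are kept as-is
                candidates.append((seq, score))
                continue
            for tok, p in step_probs(seq).items():
                candidates.append((seq + [tok], score + math.log(p)))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
        if all(seq[-1] == eos_id for seq, _ in beams):
            break
    return beams[0][0]
```

With bos_id=0, eos_id=1, and beam_size matching the example's settings, the decoder returns the highest-probability token sequence from start symbol to end symbol.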
@@ -3584,7 +3590,8 @@ def beam_search(step,
                  You can refer to the first parameter of recurrent_group, or
                  demo/seqToseq/seqToseq_net.py for more details.
    :type step: callable
-    :param input: Input data for the recurrent unit
+    :param input: Input data for the recurrent unit, which should include the
+                  previously generated words as a GeneratedInput object.
    :type input: list
    :param bos_id: Index of the start symbol in the dictionary. The start symbol
                   is a special token for NLP task, which indicates the