PaddlePaddle / Paddle
Commit b23a23c9
Authored on Apr 28, 2017 by zhanghaichao
fixed error in beam_search example and documents
Parent: 29026f9f
Showing 1 changed file with 27 additions and 20 deletions (+27 -20)
python/paddle/trainer_config_helpers/layers.py
@@ -1349,9 +1349,9 @@ def last_seq(input,
"""
Get Last Timestamp Activation of a sequence.
- If stride > 0, this layer slides a window whose size is determined by stride,
- and return the last value of the window as the output. Thus, a long sequence
- will be shorten. Note that for sequence with sub-sequence, the default value
+ If stride > 0, this layer slides a window whose size is determined by stride,
+ and return the last value of the window as the output. Thus, a long sequence
+ will be shorten. Note that for sequence with sub-sequence, the default value
of stride is -1.
The simple usage is:
@@ -1365,7 +1365,7 @@ def last_seq(input,
:type name: basestring
:param input: Input layer name.
:type input: LayerOutput
- :param stride: window size.
+ :param stride: window size.
:type stride: Int
:param layer_attr: extra layer attributes.
:type layer_attr: ExtraLayerAttribute.
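The stride behaviour documented in the two hunks above can be exercised with a short config snippet. The following is a sketch, not part of the commit; it assumes the v1 trainer_config_helpers API, with data_layer standing in for a hypothetical input sequence.

    from paddle.trainer_config_helpers import *

    # hypothetical input: a sequence of 128-dimensional feature vectors
    seq_in = data_layer(name="seq_in", size=128)

    # with stride=4, a window of 4 timesteps slides over the sequence and only
    # the last timestep of each window is emitted, shortening the sequence
    last = last_seq(input=seq_in, stride=4)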
@@ -1405,9 +1405,9 @@ def first_seq(input,
"""
Get First Timestamp Activation of a sequence.
- If stride > 0, this layer slides a window whose size is determined by stride,
- and return the first value of the window as the output. Thus, a long sequence
- will be shorten. Note that for sequence with sub-sequence, the default value
+ If stride > 0, this layer slides a window whose size is determined by stride,
+ and return the first value of the window as the output. Thus, a long sequence
+ will be shorten. Note that for sequence with sub-sequence, the default value
of stride is -1.
The simple usage is:
@@ -1421,7 +1421,7 @@ def first_seq(input,
:type name: basestring
:param input: Input layer name.
:type input: LayerOutput
- :param stride: window size.
+ :param stride: window size.
:type stride: Int
:param layer_attr: extra layer attributes.
:type layer_attr: ExtraLayerAttribute.
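The same stride idea applies to first_seq. Again a sketch under the same assumptions as above, not part of the commit.

    from paddle.trainer_config_helpers import *

    # hypothetical input sequence
    seq_in = data_layer(name="seq_in", size=128)

    # symmetric to last_seq: keep the first timestep of every stride-sized window
    first = first_seq(input=seq_in, stride=4)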
@@ -1561,7 +1561,7 @@ def seq_reshape_layer(input,
bias_attr=None):
"""
A layer for reshaping the sequence. Assume the input sequence has T instances,
- the dimension of each instance is M, and the input reshape_size is N, then the
+ the dimension of each instance is M, and the input reshape_size is N, then the
output sequence has T*M/N instances, the dimension of each instance is N.
Note that T*M/N must be an integer.
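The T*M/N rule in the hunk above is easiest to see with concrete numbers. A sketch for illustration only (not in the commit), with hypothetical sizes M=6 and N=8:

    from paddle.trainer_config_helpers import *

    # hypothetical input: each instance in the sequence is 6-dimensional (M = 6)
    seq_in = data_layer(name="seq_in", size=6)

    # with reshape_size N = 8, a sequence of T = 4 instances (4 * 6 = 24 values)
    # becomes T*M/N = 3 instances of dimension 8; T*M must be divisible by N
    reshaped = seq_reshape_layer(input=seq_in, reshape_size=8)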
@@ -2118,8 +2118,8 @@ def img_conv_layer(input,
:param trans: true if it is a convTransLayer, false if it is a convLayer
:type trans: bool
:param layer_type: specify the layer_type, default is None. If trans=True,
- layer_type has to be "exconvt" or "cudnn_convt",
- otherwise layer_type has to be either "exconv" or
+ layer_type has to be "exconvt" or "cudnn_convt",
+ otherwise layer_type has to be either "exconv" or
"cudnn_conv"
:type layer_type: String
:return: LayerOutput object.
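The trans/layer_type combinations described in the hunk above might look like the sketch below. It is not part of the commit, and the filter_size, num_filters, num_channels and act arguments are assumed from the usual img_conv_layer signature rather than shown in this diff.

    from paddle.trainer_config_helpers import *

    # hypothetical 3-channel 32x32 input image
    img = data_layer(name="image", size=3 * 32 * 32)

    # ordinary convolution: trans defaults to False, layer_type may stay None
    conv = img_conv_layer(input=img, num_channels=3, filter_size=3,
                          num_filters=64, act=ReluActivation())

    # transposed convolution: with trans=True, layer_type has to be
    # "exconvt" or "cudnn_convt", as the docstring above states
    deconv = img_conv_layer(input=conv, filter_size=3, num_filters=3,
                            trans=True, layer_type="exconvt")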
@@ -2337,9 +2337,9 @@ def spp_layer(input,
.. code-block:: python
- spp = spp_layer(input=data,
- pyramid_height=2,
- num_channels=16,
+ spp = spp_layer(input=data,
+ pyramid_height=2,
+ num_channels=16,
pool_type=MaxPooling())
:param name: layer name.
@@ -2433,7 +2433,7 @@ def img_cmrnorm_layer(input,
The example usage is:
.. code-block:: python
norm = img_cmrnorm_layer(input=net, size=5)
:param name: layer name.
@@ -2494,7 +2494,7 @@ def batch_norm_layer(input,
The example usage is:
.. code-block:: python
norm = batch_norm_layer(input=net, act=ReluActivation())
:param name: layer name.
@@ -2795,11 +2795,11 @@ def seq_concat_layer(a, b, act=None, name=None, layer_attr=None,
"""
Concat sequence a with sequence b.
- Inputs:
+ Inputs:
- a = [a1, a2, ..., an]
- b = [b1, b2, ..., bn]
- Note that the length of a and b should be the same.
Output: [a1, b1, a2, b2, ..., an, bn]
The example usage is:
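The docstring's own usage example is elided from this hunk; as a separate illustration (not part of the commit), a minimal sketch with two hypothetical equal-length sequences:

    from paddle.trainer_config_helpers import *

    # two hypothetical sequences of the same length and instance dimension
    seq_a = data_layer(name="seq_a", size=128)
    seq_b = data_layer(name="seq_b", size=128)

    # per the docstring above: a = [a1, ..., an] and b = [b1, ..., bn]
    # yield [a1, b1, a2, b2, ..., an, bn]
    ab = seq_concat_layer(a=seq_a, b=seq_b)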
@@ -3563,9 +3563,15 @@ def beam_search(step,
simple_rnn += last_time_step_output
return simple_rnn
+ generated_word_embedding = GeneratedInput(
+     size=target_dictionary_dim,
+     embedding_name="target_language_embedding",
+     embedding_size=word_vector_dim)
beam_gen = beam_search(name="decoder",
step=rnn_step,
- input=[StaticInput(encoder_last)],
+ input=[StaticInput(encoder_last),
+        generated_word_embedding],
bos_id=0,
eos_id=1,
beam_size=5)
@@ -3584,7 +3590,8 @@ def beam_search(step,
You can refer to the first parameter of recurrent_group, or
demo/seqToseq/seqToseq_net.py for more details.
:type step: callable
- :param input: Input data for the recurrent unit
+ :param input: Input data for the recurrent unit, which should include the
+               previously generated words as a GeneratedInput object.
:type input: list
:param bos_id: Index of the start symbol in the dictionary. The start symbol
is a special token for NLP task, which indicates the