Commit 6aece506 authored by Yu Yang

Stash

Parent 2d1c405b
@@ -139,20 +139,21 @@
 The configurations in this example use a single-layer \ :ref:`glossary_RNN`\  and a \ :ref:`glossary_双层RNN`\ , each using one \ :code:`recurrent_group`\  to pass the two sequences through a fully connected \ :ref:`glossary_RNN`\  simultaneously. The code for the single-layer \ :ref:`glossary_RNN`\  is as follows.
-.. literalinclude:: ../../../paddle/gserver/tests/sequence_rnn_multi_unequalength_inputs.conf
+.. literalinclude:: ../../../paddle/gserver/tests/sequence_rnn_multi_unequalength_inputs.py
    :language: python
-   :lines: 41-58
+   :lines: 42-59
    :linenos:
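For orientation, here is a minimal sketch of the pattern the included snippet implements, assuming the PaddlePaddle v1 trainer-config API used throughout this page (and that \ :code:`recurrent_group`\  returns one output per value returned by its step function); the function and layer names are illustrative, not those of the test file.

.. code-block:: python

    def single_layer_step(x1, x2):
        # Each feature keeps its own hidden state across time steps: the
        # memory layer reads the previous output of the fc_layer that
        # shares its name.
        def rnn_unit(y):
            mem = memory(name="rnn_state_" + y.name, size=hidden_dim)
            return fc_layer(input=[y, mem],
                            size=hidden_dim,
                            act=TanhActivation(),
                            bias_attr=True,
                            name="rnn_state_" + y.name)

        return rnn_unit(x1), rnn_unit(x2)

    # One recurrent_group feeds both embedded word sequences through the
    # fully connected recurrent step above.
    encoder1, encoder2 = recurrent_group(step=single_layer_step,
                                         input=[emb1, emb2])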
 - Two-level sequence:
 - In the two-level RNN, a fully connected layer is applied along the time dimension to each of the two input features separately (`inner_step1` and `inner_step2` handle fea1 and fea2, respectively); this is functionally identical to the `outer_step` function of `sequence_nest_rnn.conf` in Example 2. The difference is that here the inputs `[SubsequenceInput(emb1), SubsequenceInput(emb2)]` are not of equal length at each time step.
-- The function `outer_step` can process the two features separately, but we must use <font color=red>targetInlink</font> to specify that the output format of the recurrent_group (the length of each subsequence) can only stay consistent with one of them; here it is chosen to match the length of emb2.
+- The function `outer_step` can process the two features separately, but we must use \ :red:`targetInlink`\  to specify that the output format of the recurrent_group (the length of each subsequence) can only stay consistent with one of them; here it is chosen to match the length of emb2 (see the sketch after the included code below).
 - Finally, as before, the last time step of encoder1_rep and all the time steps of encoder2_rep are summed to obtain the context.
-.. literalinclude:: ../../../paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.conf
+.. literalinclude:: ../../../paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py
    :language: python
-   :lines: 41-89
+   :lines: 42-75, 82-89
    :linenos:
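To make the role of targetInlink concrete, here is a minimal sketch of the enclosing call, assuming the PaddlePaddle v1 API; the group name "outer" is illustrative.

.. code-block:: python

    # emb1 and emb2 contain subsequences of different lengths, so the
    # group must be told which input's layout its output follows:
    # targetInlink pins the output's subsequence lengths to emb2's.
    rep = recurrent_group(
        name="outer",
        step=outer_step,
        input=[SubsequenceInput(emb1), SubsequenceInput(emb2)],
        targetInlink=emb2)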
 Example 4: Generation with beam_search
 ======================================
......
-#edit-mode: -*- python -*-
+# edit-mode: -*- python -*-
 # Copyright (c) 2016 Baidu, Inc. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
@@ -35,32 +35,22 @@ speaker2 = data_layer(name="word2", size=dict_dim)
 emb1 = embedding_layer(input=speaker1, size=word_dim)
 emb2 = embedding_layer(input=speaker2, size=word_dim)
-# This hierachical RNN is designed to be equivalent to the simple RNN in
-# sequence_rnn_multi_unequalength_inputs.conf
+# This hierarchical RNN is designed to be equivalent to the simple RNN in
+# sequence_rnn_multi_unequalength_inputs.conf
 def outer_step(x1, x2):
-    outer_mem1 = memory(name="outer_rnn_state1", size=hidden_dim)
-    outer_mem2 = memory(name="outer_rnn_state2", size=hidden_dim)
+    index = [0]
-    def inner_step1(y):
-        inner_mem = memory(
-            name='inner_rnn_state_' + y.name,
-            size=hidden_dim,
-            boot_layer=outer_mem1)
-        out = fc_layer(
-            input=[y, inner_mem],
-            size=hidden_dim,
-            act=TanhActivation(),
-            bias_attr=True,
-            name='inner_rnn_state_' + y.name)
-        return out
+    def inner_step(ipt):
+        index[0] += 1
+        i = index[0]
+        outer_mem = memory(name="outer_rnn_state_%d" % i, size=hidden_dim)
-    def inner_step2(y):
+        def inner_step_impl(y):
             inner_mem = memory(
-                name='inner_rnn_state_' + y.name,
+                name="inner_rnn_state_" + y.name,
                 size=hidden_dim,
-                boot_layer=outer_mem2)
+                boot_layer=outer_mem)
             out = fc_layer(
                 input=[y, inner_mem],
                 size=hidden_dim,
@@ -69,12 +59,13 @@ def outer_step(x1, x2):
                 name='inner_rnn_state_' + y.name)
             return out
-    encoder1 = recurrent_group(step=inner_step1, name='inner1', input=x1)
-    encoder2 = recurrent_group(step=inner_step2, name='inner2', input=x2)
+        encoder = recurrent_group(
+            step=inner_step_impl, name='inner_%d' % i, input=ipt)
+        last = last_seq(name="outer_rnn_state_%d" % i, input=encoder)
+        return encoder, last
-    sentence_last_state1 = last_seq(input=encoder1, name='outer_rnn_state1')
-    sentence_last_state2_ = last_seq(input=encoder2, name='outer_rnn_state2')
+    _, sentence_last_state1 = inner_step(ipt=x1)
+    encoder2, _ = inner_step(ipt=x2)
     encoder1_expand = expand_layer(
         input=sentence_last_state1, expand_as=encoder2)
......
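The elided tail of this file combines the two encoders as described in the notes above: the final state of encoder1, broadcast by expand_layer to encoder2's layout, is summed with encoder2 at every time step. A hedged sketch of that step (the addto_layer call is an assumption, not the file's elided contents):

.. code-block:: python

    # Element-wise sum of encoder1's expanded final state with every time
    # step of encoder2 yields the context sequence.
    context = addto_layer(input=[encoder1_expand, encoder2])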