Unverified · Commit 01d6ccb4 authored by Cao Ying, committed by GitHub

Merge pull request #5763 from ranqiu92/dot_product_attention

Update dot_product_attention.
@@ -1476,10 +1476,8 @@ def dot_product_attention(encoded_sequence,
         expand_as=encoded_sequence,
         name='%s_expand' % name)
-    m = linear_comb_layer(
-        weights=expanded,
-        vectors=encoded_sequence,
-        name='%s_dot-product' % name)
+    m = dot_prod_layer(
+        input1=expanded, input2=encoded_sequence, name='%s_dot-product' % name)
     attention_weight = fc_layer(
         input=m,
...
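The patch swaps `linear_comb_layer` for `dot_prod_layer` so that the expanded decoder state is scored against each encoder state by a plain dot product. A minimal NumPy sketch of that scoring step (function and variable names here are illustrative, not the PaddlePaddle API; the softmax stands in for the `fc_layer` normalization that follows in the full function):

```python
import numpy as np

def dot_product_attention(encoded_sequence, decoder_state):
    """Sketch of dot-product attention over a sequence of encoder states.

    encoded_sequence: (seq_len, hidden) array of encoder outputs.
    decoder_state:    (hidden,) query vector (the 'expanded' state in the diff).
    """
    # Dot product of the query with every encoder state -> (seq_len,) scores,
    # mirroring what dot_prod_layer computes in the patch above.
    scores = encoded_sequence @ decoder_state
    # Normalize scores into attention weights (softmax, numerically stabilized).
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Context vector: attention-weighted sum of the encoder states.
    context = weights @ encoded_sequence
    return weights, context
```

For example, a query aligned with the first of two encoder states yields a larger weight on that state, and the weights always sum to one.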