Commit 8ca89039 authored by: G guosheng

Fix merge conflicts in Transformer inference

Parent 2e7494d9
...
@@ -203,7 +203,7 @@ def translate_batch(exe,
             predict_all = np.log(
                 predict_all.reshape([len(beam_inst_map) * beam_size, i + 1, -1])
                 [:, -1, :])
-            predict_all = (predict_all + scores[beam_inst_map].reshape(
+            predict_all = (predict_all + scores[active_beams].reshape(
                 [len(beam_inst_map) * beam_size, -1])).reshape(
                 [len(beam_inst_map), beam_size, -1])
             if not output_unk:  # To exclude the <unk> token.
...
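The changed line adds the accumulated per-beam scores to the last-step log-probabilities via NumPy broadcasting, so the score rows must be selected with the indices of instances that are still decoding. The sketch below illustrates that shape arithmetic; the concrete shapes, `active_beams`, and `beam_inst_map` values here are assumptions for illustration, not the original `translate_batch` state.

```python
import numpy as np

# Assumed toy dimensions (not from the original code).
beam_size, vocab_size, step = 4, 10, 3
active_beams = np.array([0, 2])      # instance ids still decoding (assumed)
beam_inst_map = {0: 0, 2: 1}         # instance id -> batch slot (assumed)

rng = np.random.default_rng(0)

# Last-step probabilities for every live (instance, beam) row.
probs = rng.random((len(beam_inst_map) * beam_size, step, vocab_size))
predict_all = np.log(
    probs.reshape([len(beam_inst_map) * beam_size, step, -1])[:, -1, :])

# Accumulated log-prob per beam, stored per instance; indexing with the
# active-instance ids keeps the score rows aligned with predict_all.
scores = rng.random((3, beam_size))
predict_all = (predict_all + scores[active_beams].reshape(
    [len(beam_inst_map) * beam_size, -1])).reshape(
        [len(beam_inst_map), beam_size, -1])

print(predict_all.shape)  # (2, 4, 10)
```

Flattening `scores[active_beams]` to a `(rows, 1)` column lets it broadcast against the `(rows, vocab)` log-probabilities before the final per-instance reshape, which is why indexing with the stale `beam_inst_map` dict (rather than the active-instance index array) broke the merged code.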