Commit f82f47ca authored by guosheng

Increase the batch size in inference config of Transformer

Parent 86fe83f2
...@@ -25,7 +25,7 @@ class TrainTaskConfig(object):
 class InferTaskConfig(object):
     use_gpu = False
     # the number of examples in one run for sequence generation.
-    batch_size = 1
+    batch_size = 10
     # the parameters for beam search.
     beam_size = 5
......
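
For context, the inference config after this change reads roughly as follows. This is a minimal sketch reconstructed from the diff hunk only; the rest of the class's fields (and the `TrainTaskConfig` class above it) are omitted:

```python
# Sketch of the inference-task settings touched by this commit;
# field names and values are taken from the diff above.
class InferTaskConfig(object):
    use_gpu = False
    # the number of examples in one run for sequence generation;
    # raised from 1 to 10 by this commit to batch decoding requests.
    batch_size = 10
    # the parameters for beam search.
    beam_size = 5

print(InferTaskConfig.batch_size)
```

Decoding `batch_size` sequences per run amortizes per-step overhead across examples, which is why a value above 1 is generally preferred for throughput during inference.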