Committed by zhupengyang
* [X86] add attention_padding_mask op, x86 kernel and unit test test=develop
* [CUDA] add attention_padding_mask cuda kernel and unit test test=develop
ef6f7b84
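For context, attention padding-mask ops of this kind are commonly used to overwrite attention scores at padded source positions with a mask value (typically a large negative number) so that a following softmax effectively ignores them. The CUDA sketch below only illustrates that general idea under assumed names, shapes, and parameters (attn, src_ids, pad_id, mask_value); it is not the actual interface or implementation of the Paddle Lite attention_padding_mask kernel added in this commit.

// Illustrative sketch only: a generic attention padding-mask kernel.
// All names, shapes, and the pad/mask convention here are assumptions.
#include <cuda_runtime.h>
#include <cstdio>

// For each attention score attn[i][j], write mask_value when the j-th
// source token is padding; otherwise copy the score unchanged.
__global__ void attention_padding_mask(const float* attn,   // [rows, src_len] attention scores
                                       const int* src_ids,  // [src_len] source token ids
                                       float* out,          // [rows, src_len] masked scores
                                       int rows,
                                       int src_len,
                                       int pad_id,          // id that marks padding (assumption)
                                       float mask_value) {  // e.g. -1e9 (assumption)
  int idx = blockIdx.x * blockDim.x + threadIdx.x;
  int total = rows * src_len;
  if (idx >= total) return;
  int j = idx % src_len;
  out[idx] = (src_ids[j] == pad_id) ? mask_value : attn[idx];
}

int main() {
  const int rows = 2, src_len = 4, pad_id = 0;
  float h_attn[rows * src_len] = {0.1f, 0.2f, 0.3f, 0.4f,
                                  0.5f, 0.6f, 0.7f, 0.8f};
  int h_src[src_len] = {5, 7, 0, 0};  // last two source tokens are padding
  float h_out[rows * src_len];

  float *d_attn, *d_out;
  int* d_src;
  cudaMalloc(&d_attn, sizeof(h_attn));
  cudaMalloc(&d_out, sizeof(h_out));
  cudaMalloc(&d_src, sizeof(h_src));
  cudaMemcpy(d_attn, h_attn, sizeof(h_attn), cudaMemcpyHostToDevice);
  cudaMemcpy(d_src, h_src, sizeof(h_src), cudaMemcpyHostToDevice);

  attention_padding_mask<<<1, 256>>>(d_attn, d_src, d_out,
                                     rows, src_len, pad_id, -1e9f);
  cudaMemcpy(h_out, d_out, sizeof(h_out), cudaMemcpyDeviceToHost);

  // Scores in the padded columns should now be -1e9 for every row.
  for (int i = 0; i < rows * src_len; ++i) printf("%g ", h_out[i]);
  printf("\n");

  cudaFree(d_attn);
  cudaFree(d_out);
  cudaFree(d_src);
  return 0;
}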