Fix potential random layout inconsistency issues in sparse attention modules (#534)
* 1) Register the layout as a buffer of the module so that it is saved/loaded with checkpoints; 2) add a broadcast of the layout at startup to ensure different processes have a consistent layout during distributed training.
* Add a docstring for the max_seq_length argument in SparseSelfAttention.

Co-authored-by: Zhun Liu <zhunliu@microsoft.com>
Co-authored-by: Jeff Rasley <jerasley@microsoft.com>
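A minimal sketch of the two fixes described above, assuming a PyTorch module holding a sparsity `layout` tensor (the class name `SparseAttentionLayout` is hypothetical, not the actual DeepSpeed module): registering the tensor as a buffer makes it part of `state_dict()` so it round-trips through checkpoints, and broadcasting from rank 0 prevents a randomly generated layout from diverging across processes.

```python
import torch
import torch.nn as nn
import torch.distributed as dist

class SparseAttentionLayout(nn.Module):
    """Hypothetical sketch: keep the sparsity layout consistent across
    checkpoint save/load and across distributed ranks."""

    def __init__(self, layout: torch.Tensor):
        super().__init__()
        # (1) Buffers (unlike plain attributes) are included in
        # state_dict(), so the layout is saved and restored with the
        # rest of the checkpoint.
        self.register_buffer("layout", layout)
        # (2) If distributed training is running, overwrite every
        # rank's layout with rank 0's copy so all processes attend
        # over the same sparse blocks.
        if dist.is_available() and dist.is_initialized():
            dist.broadcast(self.layout, src=0)

module = SparseAttentionLayout(torch.randint(0, 2, (4, 4)))
print("layout" in module.state_dict())
```

Without the buffer registration, a layout stored as a plain attribute would silently be absent from saved checkpoints; without the broadcast, each rank that seeded its layout randomly could compute attention over different sparsity patterns.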