1. 25 Oct, 2021 (1 commit)
    • Add fused_attention_op: add impl wrappers. (#35903) (#36673) · 8c0bacd4
      Committed by Li Min
      Purpose: the goal of this PR is to improve the computational performance of the attention module.
      To reduce the framework-level op scheduling overhead, this PR implements the attention module by hand at the C++ level and exposes it as a single large fused attention op.
      To reduce memory-access overhead, this PR applies two optimizations (a brief illustrative sketch follows below):
      (1) when computing q, k, and v, the input X is shared so that the gemm, transpose, and bias add at that step are reduced from three calls to one;
      (2) kernel fusion is used to pass data between different CUDA kernels through registers.
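      What follows is a minimal host-side C++ sketch of optimization (1), not Paddle's actual fused_attention_op code: because q, k, and v all read the same input X, the projection weights can be packed as W_qkv = [Wq | Wk | Wv] so that one GEMM replaces three, and the bias add and transpose can then be handled by a single follow-up kernel. All names here (MatMul, FusedQKVProjection, W_qkv) are hypothetical.

```cpp
#include <cstddef>
#include <vector>

// Naive reference GEMM: C[m x n] = A[m x k] * B[k x n], row-major.
void MatMul(const std::vector<float>& A, const std::vector<float>& B,
            std::vector<float>* C, size_t m, size_t k, size_t n) {
  for (size_t i = 0; i < m; ++i) {
    for (size_t j = 0; j < n; ++j) {
      float acc = 0.f;
      for (size_t p = 0; p < k; ++p) acc += A[i * k + p] * B[p * n + j];
      (*C)[i * n + j] = acc;
    }
  }
}

// Instead of three GEMMs (X*Wq, X*Wk, X*Wv) that each re-read X, the weights
// are packed column-wise into W_qkv = [Wq | Wk | Wv] ([h x 3h]), so a single
// GEMM writes q, k, and v side by side into one [m x 3h] buffer.
void FusedQKVProjection(const std::vector<float>& X,      // [m x h]
                        const std::vector<float>& W_qkv,  // [h x 3h]
                        std::vector<float>* qkv,          // [m x 3h]
                        size_t m, size_t h) {
  MatMul(X, W_qkv, qkv, m, h, 3 * h);
}

int main() {
  const size_t m = 4, h = 8;
  std::vector<float> X(m * h, 1.f), W_qkv(h * 3 * h, 0.5f), qkv(m * 3 * h);
  FusedQKVProjection(X, W_qkv, &qkv, m, h);  // one call instead of three
  return 0;
}
```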
  2. 23 Aug, 2021 (1 commit)
    • Refactor the organization of layer_norm cuda impl. (#34883) · 7f5eb533
      Committed by Li Min
      Refactor the organization of the layer_norm cuda impl so that it can be reused in the fused attention op.

          Extract the layer_norm cuda impl from layer_norm_op.cu to layer_norm_kernel.cu.h.
          Define fused/attention_layer_norm.h, which can be used by the fused attention op in the next PR (a hedged sketch of the header-reuse pattern follows below).
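      Below is a hedged sketch of the reuse pattern this commit describes, assuming only what the message states: the CUDA layer_norm kernel lives in a header so that both layer_norm_op.cu and the later fused attention op can include and launch it. The kernel shown is a deliberately simple one-thread-per-row version with an illustrative name, not Paddle's optimized implementation.

```cpp
// layer_norm_kernel.cu.h (illustrative contents): a header-only CUDA kernel
// that both layer_norm_op.cu and a fused attention op can #include and launch.
#pragma once
#include <cuda_runtime.h>

// Simple layer norm: one thread normalizes one row of x ([rows x cols]).
template <typename T>
__global__ void LayerNormRowKernel(const T* x, const T* gamma, const T* beta,
                                   T* y, int rows, int cols, float epsilon) {
  int row = blockIdx.x * blockDim.x + threadIdx.x;
  if (row >= rows) return;
  const T* xr = x + row * cols;
  T* yr = y + row * cols;

  float mean = 0.f;
  for (int i = 0; i < cols; ++i) mean += static_cast<float>(xr[i]);
  mean /= cols;

  float var = 0.f;
  for (int i = 0; i < cols; ++i) {
    float d = static_cast<float>(xr[i]) - mean;
    var += d * d;
  }
  float inv_std = rsqrtf(var / cols + epsilon);

  for (int i = 0; i < cols; ++i) {
    float norm = (static_cast<float>(xr[i]) - mean) * inv_std;
    yr[i] = static_cast<T>(norm * static_cast<float>(gamma[i]) +
                           static_cast<float>(beta[i]));
  }
}

// Both layer_norm_op.cu and fused_attention_op.cu could then share one launch:
//   LayerNormRowKernel<float><<<(rows + 255) / 256, 256>>>(
//       x, gamma, beta, y, rows, cols, 1e-5f);
```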