1. 25 Oct 2021, 1 commit
    • Add fused_attention_op: add impl wrappers. (#35903) (#36673) · 8c0bacd4
      Committed by Li Min
      Purpose: the goal of this PR is to improve the computational performance of the attention module.
      To reduce the framework's op scheduling overhead, this PR implements the attention module by hand at the C++ level and exposes it as a single large attention op.
      To reduce memory-access overhead, this PR applies two optimizations (see the sketch after this entry):
      (1) when computing q, k, and v, the input X is shared, so the gemm, transpose, and bias add there drop from three calls to one;
      (2) kernel-fusion optimization is used so that data moves between the fused CUDA kernel stages through registers.
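      To make optimization (1) concrete, here is a minimal NumPy sketch, not Paddle's actual C++/CUDA code and with illustrative names throughout: because q, k, and v all consume the same input X, concatenating the three weight matrices turns three GEMM + bias-add calls into one.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      N, H = 8, 64                        # (batch * seq_len) rows, hidden size
      X = rng.standard_normal((N, H))     # shared input for q, k, v

      Wq, Wk, Wv = (rng.standard_normal((H, H)) for _ in range(3))
      bq, bk, bv = (rng.standard_normal(H) for _ in range(3))

      # Unfused: three GEMM + bias-add calls on the same X.
      q = X @ Wq + bq
      k = X @ Wk + bk
      v = X @ Wv + bv

      # Fused: concatenate weights and biases once; a single GEMM + bias add
      # produces q, k, v together, and a split recovers the three parts.
      W_qkv = np.concatenate([Wq, Wk, Wv], axis=1)    # (H, 3H)
      b_qkv = np.concatenate([bq, bk, bv])            # (3H,)
      q2, k2, v2 = np.split(X @ W_qkv + b_qkv, 3, axis=1)

      assert np.allclose(q, q2) and np.allclose(k, k2) and np.allclose(v, v2)
      ```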
2. 08 Sep 2021, 1 commit
    • fix the bug of layer_norm when batch_size=1 (#35480) · ad5f7494
      Committed by zhangkaihuo
      The bug was an out-of-bounds access to mean and var: their shape is [batch_size], but the thread idx ranges over 0~feature_size, so reading mean[idx] and var[idx] can run past the end of both arrays.

      When batch_size=1, the correct access is mean[0] and var[0]; a unit test with batch_size=1 is added. A sketch of the pattern follows below.
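      A minimal NumPy sketch of the indexing problem described above (illustrative only; the real code is a CUDA kernel where idx is the thread index):

      ```python
      import numpy as np

      batch_size, feature_size = 1, 8
      x = np.random.randn(batch_size, feature_size).astype(np.float32)
      mean = x.mean(axis=1)   # shape: (batch_size,)
      var = x.var(axis=1)     # shape: (batch_size,)

      y = np.empty(feature_size, dtype=np.float32)
      for idx in range(feature_size):      # thread idx runs 0..feature_size-1
          # Buggy kernel: mean[idx] / var[idx] reads past the end of the
          # (batch_size,)-shaped arrays as soon as idx >= batch_size.
          # Correct access when batch_size == 1: mean[0] and var[0].
          y[idx] = (x[0, idx] - mean[0]) / np.sqrt(var[0] + 1e-5)
      ```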
3. 23 Aug 2021, 1 commit
    • Refactor the organization of layer_norm cuda impl. (#34883) · 7f5eb533
      Committed by Li Min
      Refactor the organization of the layer_norm CUDA impl so that it can be reused in the fused attention op:

          Extract the layer_norm CUDA impl from layer_norm_op.cu into layer_norm_kernel.cu.h.
          Define fused/attention_layer_norm.h, which can be used by the fused attention op in the next PR.
4. 24 Jun 2021, 1 commit
5. 22 Jun 2021, 1 commit
6. 15 Jun 2021, 1 commit
7. 12 Jun 2021, 1 commit
    • Fix LayerNorm Problem (#33420) · fe94db6c
      Committed by zhiboniu
      * Eliminate numerical differences in LayerNorm; fix a LayerNorm NaN bug on large inputs
      * fix a bug with large input shapes
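      The commit message does not spell out the mechanism, but one common way LayerNorm produces NaN on large inputs is catastrophic cancellation in the single-pass variance E[x²] − E[x]². A minimal NumPy sketch of that failure mode (an assumed illustration, not the actual fix in this commit) and the two-pass alternative:

      ```python
      import numpy as np

      x = np.float32(1e4) + np.linspace(0, 1, 1024, dtype=np.float32)

      # Single-pass variance: E[x^2] - E[x]^2. With large values the two
      # terms nearly cancel in fp32, the result can come out negative,
      # and sqrt() then produces NaN.
      naive_var = np.mean(x * x) - np.mean(x) ** 2
      print(naive_var, np.sqrt(naive_var))   # may print a negative value and nan

      # Two-pass variance subtracts the mean first, so it stays >= 0.
      mu = np.mean(x)
      stable_var = np.mean((x - mu) ** 2)
      print(np.sqrt(stable_var + np.float32(1e-5)))
      ```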
8. 08 Jun 2021, 1 commit
    • add dynamic layer_norm plugin (#33293) · 45d1ae21
      Committed by Shang Zhizhou
      * add dynamic layer_norm plugin
      * fix bug
      * fix numpy.allclose
      * fix format
      * fix code style
      * remove shape in dynamic shape
      * code format
      * remove layer norm fp16
      * fix format
9. 19 Mar 2021, 1 commit
10. 02 Mar 2021, 1 commit
11. 15 Jan 2021, 1 commit
12. 14 Dec 2020, 1 commit
13. 10 Dec 2020, 1 commit
14. 07 Dec 2020, 1 commit
15. 02 Dec 2020, 1 commit
    • Layer norm fp16 (#29169) · 7584bb50
      Committed by furnace
      * add fp16 for layer_norm op
      * revert layernorm api
      * fix forward
      * fix forward
      * fix backward for layernorm with fp16
      * fix unit test for layernorm with fp16
      * fix with_mkldnn compile error for layernorm with fp16
      * 1. revert to PADDLE_ENFORCE_NOT_NULL, 2. change static_cast<float> to static_cast<U>
      * fix with_mkldnn compile error for layernorm with fp16
      * fix with_mkldnn compile error for layernorm with fp16
      Co-authored-by: zhiqiu <chenqiuliang@baidu.com>
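      The static_cast&lt;U&gt; item above likely refers to the kernel's separate statistics type: input and output stay in fp16 while mean/variance are accumulated in a wider type U. A minimal NumPy sketch of that idea (illustrative; layer_norm_fp16 and its signature are invented here, and this is not Paddle's kernel):

      ```python
      import numpy as np

      def layer_norm_fp16(x_fp16, gamma, beta, eps=1e-5):
          # x arrives in fp16; statistics are accumulated in fp32 (the wider
          # "U" type) so that mean/variance keep enough precision.
          x = x_fp16.astype(np.float32)
          mu = x.mean(axis=-1, keepdims=True)
          var = x.var(axis=-1, keepdims=True)
          y = (x - mu) / np.sqrt(var + eps) * gamma + beta
          return y.astype(np.float16)      # result is cast back to fp16

      x = np.random.randn(2, 8).astype(np.float16)
      out = layer_norm_fp16(x, np.ones(8, np.float32), np.zeros(8, np.float32))
      print(out.dtype)    # float16
      ```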
16. 14 May 2020, 1 commit
17. 20 Apr 2020, 1 commit
18. 06 Jan 2020, 1 commit
19. 05 Sep 2018, 1 commit
20. 08 Aug 2018, 1 commit
21. 12 Feb 2018, 1 commit
22. 10 Feb 2018, 2 commits
23. 03 Feb 2018, 2 commits