1. 27 April 2023, 1 commit
    • Move fused feedforward (#53166) · 25b4ba7f
      Committed by Sonder
      * move fused_feedforward Compute function to phi
      
      * add register info
      
      * remove maxfunctor
      
      * move fused_feedforward to phi
      
      * remove sig file
      
      * remove fluid include
      
      * add include
      
      * add include
      
      * add sig file
      
      * add output register info
      
      * fix sig file
      
      * Update fused_feedforward_sig.cc
      
      * fix grad kernel
      
      * update output register info
      
      * fix
      
      * open fused_feedforward static build
      
      * add optional and fix code style
      
      * fix output info for fused attention
      
      * add optional param
      
      * merge
      25b4ba7f
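
      The "sig file" in these messages is phi's argument mapping, the glue that routes the legacy fluid op's named inputs, attributes, and outputs to the migrated phi kernel's positional arguments. A minimal sketch of the pattern, with placeholder argument names rather than fused_feedforward's real list:

      #include "paddle/phi/core/compat/op_utils.h"

      namespace phi {

      // Maps the fluid op's named inputs/attrs/outputs onto the phi kernel's
      // positional arguments. These argument names are placeholders.
      KernelSignature FusedFeedForwardOpArgumentMapping(
          const ArgumentMappingContext& ctx) {
        return KernelSignature("fused_feedforward",
                               {"X", "Linear1Weight", "Linear1Bias"},  // inputs
                               {"dropout1_rate"},                      // attrs
                               {"Out"});                               // outputs
      }

      }  // namespace phi

      PD_REGISTER_ARG_MAPPING_FN(fused_feedforward,
                                 phi::FusedFeedForwardOpArgumentMapping);

      The "register info" messages refer to the companion PD_REGISTER_KERNEL call that binds the phi kernel itself (see the sketch under the next entry).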
  2. 14 April 2023, 1 commit
    • Move fused_attention op to phi [migrate backward GPU OpKernel] (#51909) · 3bac6264
      Committed by Sonder
      * add kernel functions
      
      * update kernel functions
      
      * update function parameter names
      
      * create codes for gpu device
      
      * adjust file locations
      
      * fix include error
      
      * remove dependent files to phi/
      
      * restore fused_attention_op.cu
      
      * fix dependency errors
      
      * fix dependency errors
      
      * fix include error
      
      * fix all dependency errors [build success]
      
      * remove useless include
      
      * recover useless include
      
      * use phi::ToNCCLDataType
      
      * fix namespace
      
      * update new register code
      
      * fix error in fused_gemm_epilogue_utils
      
      * fix error in FusedAttentionKernel param
      
      * finish fused_attention register code [build success]
      
      * add paddle::optional
      
      * add sig file
      
      * fix build error
      
      * fix a include error
      
      * restore forward code
      
      * update CMakeLists
      
      * trans Compute function to phi [build success]
      
      * add register code and fix include error [build success]
      
      * fix parameter sequence
      
      * add include file
      
      * update #if before include
      
      * update #if before include
      
      * fix grammar error
      
      * update codes for DropoutParam
      
      * remove const cast
      
      * trans some fluid api to phi api
      
      * remove const cast
      
      * trans some fluid api to phi api
      
      * add #if
      
      * update test code
      
      * update test codes
      
      * recover test codes
      
      * fix namespace and remove fluid include
      
      * recover random seed
      
      * remove fluid quant_helper
      
      * fix include error
      
      * include utils in funcs
      
      * change include file
      
      * move grad codes back to fluid folder
      
      * move grad codes back to fluid folder
      
      * fix sig file error
      
      * update include
      
      * recover codes to develop
      
      * update register codes
      
      * fix build error
      
      * recover fluid include
      
      * remove some fluid include
      
      * remove some fluid include
      
      * Update fused_attention_op.cu
      
      * remove fluid include
      
      * add some fluid include
      
      * Update fused_attention_op.cu
      
      * Update fused_attention_op.cu
      
      * Update fused_attention_op.cu
      
      * Update fused_attention_op.cu
      
      * remove useless include
      3bac6264
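
      Several messages above (paddle::optional, output register info) describe the shape a migrated phi GPU kernel takes. A hedged sketch of the pattern only; the parameter list is invented for illustration and is not fused_attention_grad's actual signature:

      #include "paddle/phi/core/dense_tensor.h"
      #include "paddle/phi/core/kernel_registry.h"
      #include "paddle/utils/optional.h"

      namespace phi {

      // phi convention: context first, inputs by const reference, optional
      // inputs wrapped in paddle::optional, outputs as raw pointers.
      template <typename T, typename Context>
      void FusedAttentionGradKernel(const Context& dev_ctx,
                                    const DenseTensor& out_grad,
                                    const DenseTensor& x,
                                    const paddle::optional<DenseTensor>& ln_bias,
                                    DenseTensor* x_grad) {
        dev_ctx.template Alloc<T>(x_grad);
        // ... the actual backward computation would live here ...
      }

      }  // namespace phi

      // "output register info": the registration also records which outputs
      // the kernel produces, per supported dtype. The name here is a
      // stand-in, not the PR's real registration.
      PD_REGISTER_KERNEL(fused_attention_grad_sketch,
                         GPU,
                         ALL_LAYOUT,
                         phi::FusedAttentionGradKernel,
                         float,
                         double) {}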
  3. 06 April 2023, 1 commit
    • Move fused_attention op to phi [migrate forward GPU OpKernel] (#51743) · a7ec8958
      Committed by Sonder
      * add kernel functions
      
      * update kernel functions
      
      * update function parameter names
      
      * create codes for gpu device
      
      * adjust file locations
      
      * fix include error
      
      * remove dependent files to phi/
      
      * restore fused_attention_op.cu
      
      * fix dependency errors
      
      * fix dependency errors
      
      * fix include error
      
      * fix all dependency errors [build success]
      
      * remove useless include
      
      * recover useless include
      
      * use phi::ToNCCLDataType
      
      * fix namespace
      
      * update new register code
      
      * fix error in fused_gemm_epilogue_utils
      
      * fix error in FusedAttentionKernel param
      
      * finish fused_attention register code [build success]
      
      * add paddle::optional
      
      * add sig file
      
      * fix build error
      
      * fix a include error
      
      * update CMakeLists
      
      * fix parameter sequence
      
      * add include file
      
      * update #if before include
      
      * fix grammar error
      
      * update codes for DropoutParam
      
      * remove const cast
      
      * trans some fluid api to phi api
      
      * add #if
      
      * update test code
      
      * update test codes
      
      * recover test codes
      
      * move fused_attention to fluid
      
      * move #endif to end
      
      * move #endif
      
      * delete useless files
      
      * use fused attention utils and recover random seed
      
      * remove fluid include in phi
      a7ec8958
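
      The repeated fiddling with "#if before include" and "#endif" placement is the usual guard dance around platform-specific headers, such as the NCCL ones behind phi::ToNCCLDataType. A generic, self-contained illustration of the pattern; the macros and header are stand-ins for the ones the PR actually touches:

      // Guarded include: only compiled when the build has NCCL/RCCL.
      #if defined(PADDLE_WITH_NCCL) || defined(PADDLE_WITH_RCCL)
      #include <nccl.h>
      #endif

      int main() {
      #if defined(PADDLE_WITH_NCCL) || defined(PADDLE_WITH_RCCL)
        // NCCL-only logic stays entirely inside the guard; helpers such as
        // phi::ToNCCLDataType are only callable from here.
        ncclDataType_t dtype = ncclFloat32;
        (void)dtype;
      #endif
        // "move #endif to end": the guard must close after the last guarded
        // use, otherwise unguarded code references symbols that may not exist.
        return 0;
      }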
  4. 04 April 2023, 1 commit
  5. 08 February 2023, 1 commit
  6. 05 January 2023, 1 commit
  7. 29 December 2022, 1 commit
  8. 16 December 2022, 1 commit
  9. 13 December 2022, 1 commit
  10. 07 December 2022, 1 commit
  11. 22 November 2022, 1 commit
    • [PHI decoupling] remove "gpu_device_function.h" in fluid. (#48117) · 4da1a0fe
      Committed by huangjiyi
      * move "paddle/phi/backends/gpu/gpu_device_function.h" to phi
      
      * update copyright years
      
      * rm "fluid/platform/device/gpu/gpu_device_function.h" in phi
      
      * rm dependence to "gpu_device_function.h" in fluid
      
      * rm gpu_device_function.h etc in fluid
      
      * fix rocm-compile bugs
      
      * fix cuda_helper_test.cu bugs
      4da1a0fe
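
      gpu_device_function.h holds small device-side helpers (warp shuffles and similar) shared by the CUDA and ROCm builds, which is why moving it caused rocm-compile fallout. A standalone sketch in the spirit of such a wrapper; the names are illustrative, not the header's actual API:

      // One name, two backends: ROCm's shuffle intrinsic takes no mask,
      // hence the #ifdef inside the wrapper.
      template <typename T>
      __device__ T ShuffleDownSync(unsigned mask, T val, int delta) {
      #ifdef __HIPCC__
        return __shfl_down(val, delta);
      #else
        return __shfl_down_sync(mask, val, delta);
      #endif
      }

      // Sums 32 values with a warp butterfly reduction (launch with one warp).
      __global__ void WarpSum(const float* in, float* out) {
        float v = in[threadIdx.x];
        for (int offset = 16; offset > 0; offset >>= 1)
          v += ShuffleDownSync(0xffffffffu, v, offset);
        if (threadIdx.x == 0) *out = v;
      }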
  12. 07 November 2022, 1 commit
  13. 26 October 2022, 1 commit
  14. 09 October 2022, 1 commit
  15. 28 September 2022, 1 commit
    • Remove the declaration of using Tensor in framework/tensor.h (#46432) · e12a905e
      Committed by Chen Weihang
      * remove needless using tensor
      
      * remove needless using tensor
      
      * resolve conflict
      
      * replace tensor using
      
      * fix format error
      
      * revert needless changing
      
      * fix rocm and npu compile error
      
      * fix cinn compile error
      
      * fix format error
      
      * fix mkldnn format error
      
      * fix mkldnn format error
      
      * fix cinn compile error
      
      * fix cinn compile error
      
      * fix cinn compile error
      
      * resolve conflict
      e12a905e
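
      At the time, framework/tensor.h aliased Tensor to phi::DenseTensor, so removing the declaration forces every former Tensor use to spell the phi type out. A before/after sketch under that assumption:

      #include "paddle/phi/core/dense_tensor.h"

      // Before: fluid code relied on the framework-level alias
      //   using Tensor = phi::DenseTensor;   // in framework/tensor.h
      // and wrote `Tensor t;`. After the removal, the type is explicit:
      void Example() {
        phi::DenseTensor t;  // no framework alias required
        (void)t;
      }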
  16. 07 September 2022, 1 commit
  17. 01 August 2022, 1 commit
    • unify gpu context (#44740) · 86763023
      Committed by Leo Chen
      * remove cudaDeviceContext
      
      * remove more template
      
      * fix rocm compile
      
      * remove alias name CUDADeviceContext
      
      * fix compile
      
      * fix tests
      
      * revert changes
      86763023
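
      With the contexts unified, code that was templated over platform::CUDADeviceContext uses phi::GPUContext directly. A sketch of the call-site shape, simplified rather than taken from the PR:

      #include "paddle/phi/backends/gpu/gpu_context.h"

      // Before, kernels were often templated over a DeviceContext that
      // resolved to platform::CUDADeviceContext (the removed alias).
      // After unification there is one concrete GPU context type:
      void LaunchSomething(const phi::GPUContext& dev_ctx) {
        cudaStream_t stream = dev_ctx.stream();  // same API, one type
        (void)stream;
      }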
  18. 12 July 2022, 1 commit
  19. 26 June 2022, 1 commit
  20. 17 June 2022, 1 commit
  21. 05 June 2022, 1 commit
  22. 31 May 2022, 1 commit
  23. 24 May 2022, 1 commit
  24. 11 March 2022, 1 commit
  25. 20 February 2022, 1 commit
  26. 11 February 2022, 1 commit
  27. 18 January 2022, 1 commit
  28. 03 December 2021, 1 commit
  29. 23 November 2021, 1 commit
  30. 16 November 2021, 1 commit
    • Fix attn_bias_add bug. (#37147) · a9e7a854
      Committed by Li Min
      The implementation of fused_attention_op uses bias_add, which is built on the kernel primitive API. The WriteData API of the kernel primitives and its internal implementation later changed, moving the out-of-bounds check into a template parameter; as a result the call site took the wrong branch, producing out-of-bounds writes that corrupted other GPU memory. Concretely: a single run of test_fused_attention_op_api.py almost never fails, but looping over inputs of different shapes yields intermittently incorrect results, which made the bug hard to notice.
      a9e7a854
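
      The failure mode is easiest to see in a stripped-down version of the pattern; the following is a sketch of the idea only, not Paddle's actual kernel-primitive WriteData signature. Once the bounds check is a template parameter, instantiating the unchecked variant for a partial tail block silently writes out of bounds:

      // IsBoundary=true  -> partial tail block, writes are bounds-checked.
      // IsBoundary=false -> full block, the check is compiled away for speed.
      template <bool IsBoundary>
      __device__ void WriteData(float* dst, float val, int num) {
        if (IsBoundary) {
          if (threadIdx.x < (unsigned)num) dst[threadIdx.x] = val;
        } else {
          dst[threadIdx.x] = val;  // no check: the block is assumed full
        }
      }

      __global__ void BiasAdd(const float* x, const float* b, float* y, int n) {
        int base = blockIdx.x * blockDim.x;
        int remain = n - base;  // valid elements left for this block
        float v = 0.f;
        if (threadIdx.x < (unsigned)remain)
          v = x[base + threadIdx.x] + b[base + threadIdx.x];
        if (remain >= (int)blockDim.x)
          WriteData<false>(y + base, v, blockDim.x);  // full block: fast path
        else
          WriteData<true>(y + base, v, remain);       // tail block: must check
        // Taking the <false> branch for the tail block reproduces the bug:
        // threads beyond `remain` write into memory owned by other tensors,
        // and the corruption only surfaces intermittently.
      }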
  31. 10 November 2021, 1 commit
  32. 08 November 2021, 1 commit
  33. 28 October 2021, 1 commit
  34. 26 October 2021, 1 commit
    • Add fused attention op backward and python layer. (#36498) · 5119428e
      Committed by Li Min
      Purpose: this PR aims to improve the compute performance of the attention module.
      To reduce the framework's op-scheduling overhead, it implements the attention module by hand at the C++ level and exposes it as one large attention op.
      To reduce memory-access overhead, it applies two optimizations:
      (1) when computing q, k, and v, the input X is shared, cutting the gemm, transpose, and bias add there from three calls to one;
      (2) kernel-fusion optimization passes data through registers between what would otherwise be separate CUDA kernels.
      5119428e
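
      Optimization (1) works because q, k, and v all multiply the same input X, so the three weight matrices can be stacked into one [3H, H] matrix and pushed through a single gemm. A minimal host-side illustration of the idea; the shapes and names are invented for the sketch:

      #include <vector>

      // One gemm against a stacked weight W_qkv (3H x H) replaces three
      // separate H x H gemms that would each re-read the same input X
      // (a single H-vector here, for brevity).
      std::vector<float> QKVCombined(const std::vector<float>& W_qkv,  // 3H*H
                                     const std::vector<float>& x,      // H
                                     int H) {
        std::vector<float> qkv(3 * H, 0.f);  // rows [0,H): q, [H,2H): k, rest: v
        for (int r = 0; r < 3 * H; ++r)
          for (int c = 0; c < H; ++c)
            qkv[r] += W_qkv[r * H + c] * x[c];
        return qkv;  // X was read once, not three times
      }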
  35. 22 October 2021, 1 commit
    • Fused attention op forward (#35905) · d4906214
      Committed by Li Min
      Purpose: this PR aims to improve the compute performance of the attention module.
      To reduce the framework's op-scheduling overhead, it implements the attention module by hand at the C++ level and exposes it as one large attention op.
      To reduce memory-access overhead, it applies two optimizations:
      (1) when computing q, k, and v, the input X is shared, cutting the gemm, transpose, and bias add there from three calls to one;
      (2) kernel-fusion optimization passes data through registers between what would otherwise be separate CUDA kernels.
      d4906214
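
      Optimization (2) is classic kernel fusion: rather than one kernel writing an intermediate to global memory and the next kernel reading it back, both stages run in one kernel and the intermediate stays in a register. A generic sketch, using bias add + ReLU as a stand-in for the op's actual fused stages:

      // Unfused, this is two kernels with a round trip through global memory
      // for the intermediate. Fused, `v` never leaves a register.
      __global__ void FusedBiasAddRelu(const float* x, const float* bias,
                                       float* y, int rows, int cols) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < rows * cols) {
          float v = x[i] + bias[i % cols];  // stage 1: bias add, in a register
          y[i] = v > 0.f ? v : 0.f;         // stage 2: ReLU, one global write
        }
      }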