1. 07 Aug 2023, 1 commit
    • [WIP] Integration flash attention 2 (#55758) · 0473369f
      Committed by umiswing
      * Works for fa-2 padded fwd; code to be cleaned.
      
      * Works for fa-2 unpadded fwd.
      
      * Works for padded bwd; dk gets a small diff with np.random.seed(0).
      
      * Anyway, I pass Paddle's unit tests, except returning softmax without dropout.
      
      * Clean code.
      
      * Modify interface.
      
      * Clean code and add some checks.
      
      * Ease compilation for development.
      
      * Fix ci.
      
      * Fix ci-build.
      
      * Add std c++17 option again.
      
      * Limit max jobs when compiling fa2.
      
      * Remove const_cast
      
      * Add fwd params, to be cleaned.
      
      * Clean code.
      
      * Add bwd params.
      
      * Clean code.
      
      * Add enforce.
      
      * Use v2.0.4
      
      * Pass RNG state to the fa2 C API.
      
      * Address review comments.
      
      * Add assert
      
      * Skip compilation for SM versions below 80 (see the guard sketch after this entry).
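The SM-80 restriction in the last commit reflects that the flash-attention 2 kernels require Ampere (compute capability 8.0) or newer. A minimal runtime-guard sketch in CUDA C++; the function name is hypothetical, not Paddle's actual check:

```cpp
#include <cuda_runtime.h>

// Hypothetical guard: flash-attention 2 kernels need SM >= 80 (Ampere+),
// mirroring the "skip compile for sm less than 80" commit above.
bool CanUseFlashAttn2(int device_id) {
  cudaDeviceProp prop;
  if (cudaGetDeviceProperties(&prop, device_id) != cudaSuccess) {
    return false;  // be conservative if the device query fails
  }
  return prop.major >= 8;  // sm_80, sm_86, sm_89, sm_90, ...
}
```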
  2. 02 Aug 2023, 1 commit
    • [clang-tidy] NO.6 enable `modernize-avoid-c-arrays` check (#55774) · c000091e
      Committed by gouzil
      * [clang-tidy] modernize-avoid-c-arrays (a before/after sketch follows this entry)
      
      * rollback
      
      * [clang-tidy] fix
      
      * disable modernize-avoid-c-arrays
      
      * fix PHI_DEFINE_string; add PHI_DEFINE_bool NOLINT
      
      * fix PHI_DEFINE_string
      
      * fix next_h_state and parity error
      
      * fix win32
      
      * fix cuda_graph
      
      * fix accuracy_kernel
      
      * fix math_function
      
      * fix fused_softmax_mask_kernel.cu load_data and warp_reduce; rollback concat_and_split_functor ins_addr
      
      * fix fused_dropout_add_grad_kernel
      
      * fix
      
      * rollback cu
      
      * rollback concat_and_split_functor.cu
      
      * rollback
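For context, `modernize-avoid-c-arrays` flags raw C arrays and suggests `std::array` (or `std::vector`); a minimal before/after sketch, including the per-line NOLINT escape hatch that the PHI_DEFINE_* fixes above rely on:

```cpp
#include <array>

// Before: a raw C array, flagged by modernize-avoid-c-arrays.
// float buffer[4] = {0.f, 0.f, 0.f, 0.f};

// After: std::array keeps the fixed size but adds value semantics,
// .size(), and iterator support.
std::array<float, 4> buffer = {0.f, 0.f, 0.f, 0.f};

// Macro-generated globals (e.g. PHI_DEFINE_string) that must stay
// C-style can be exempted per line instead:
// char name[64];  // NOLINT(modernize-avoid-c-arrays)
```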
  3. 28 Jun 2023, 1 commit
  4. 26 Jun 2023, 1 commit
  5. 20 Jun 2023, 1 commit
    • [XPU] avoid compile issue in non-xpu env (#54711) · e2690526
      Committed by XiaociZhang
      * [kunlun] avoid compile issue in non-xpu env
      
      also rename macro WITH_XPU_XPTI to WITH_XPTI
      
      * move get_xpti_dependency.sh to tools/xpu
      
      * move get_xpti_dependency.sh to tools/xpu
      
      * call get_xpti_dependency.sh only when needed (a compile-guard sketch follows this entry)
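A typical way to keep XPTI-specific code out of non-XPU builds is a preprocessor guard; a sketch, where the macro spelling and function are illustrative rather than Paddle's actual code:

```cpp
// Illustrative compile guard: XPTI profiling hooks are only compiled
// when the build enables them (macro name assumed, not verified).
#if defined(PADDLE_WITH_XPTI)
void StartRuntimeTracing() {
  // register XPTI callbacks, open trace buffers, etc.
}
#else
void StartRuntimeTracing() {}  // no-op in non-XPU builds
#endif
```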
  6. 16 Jun 2023, 1 commit
    • [kunlun] support xpu runtime profiler (#54685) · 82eeda69
      Committed by jameszhang
      * [kunlun] support xpu runtime profiler
      
      * fix cmake error
      
      * add libxpti.so to paddle package
      
      * fix for style check
      
      * sync changes in setup.py and python/setup.py.in
      
      * remove libxpti.so from paddle output dir in this PR
  7. 26 May 2023, 1 commit
    • [PHI Decoupling] Create PHI shared lib (#53735) · da50a009
      Committed by YuanRisheng
      * create phi so
      
      * fix ci bugs
      
      * fix py3 bugs
      
      * add file
      
      * fix py3 bugs
      
      * fix windows bugs
      
      * polish the shared lib
      
      * fix py3 bugs
      
      * delete all static target in phi
      
      * fix windows bugs
      
      * fix py3 bugs
      
      * fix ci bugs
      
      * fix windows bugs
      
      * fix bug: gflags can't be linked by both dynamic and static libs
      
      * fix bug where 3rd-party libs could not be loaded
      
      * fix ci bugs
      
      * fix compile bugs
      
      * fix py3 bugs
      
      * fix conflict
      
      * fix xpu bugs
      
      * fix mac compile bugs
      
      * fix psgpu bugs
      
      * fix inference failure
      
      * deal with conflict
      
      * fix LIBRARY_PATH bug
      
      * fix windows bugs
      
      * fix onednn error
      
      * fix windows compile bugs
      
      * fix windows compile bugs
      
      * fix test_cuda_graph_static_mode_error abort
      
      * fix windows bugs
      
      * fix mac-python3 error
      
      * fix hip compile bugs
      
      * change mode to static
      
      * change to static mode
      
      * fix ci bugs
      
      * fix py3 bugs
      
      * fix windows bugs
      
      * fix bugs
      
      * add static flag
      
      * add PADDLE_API (a typical export-macro sketch follows this entry)
      
      * change position of PADDLE_API
      
      * fix windows bugs
      
      * change mode to dynamic lib
      
      * fix windows static bugs
      
      * deal with conflict
      
      * fix windows unit-test bug
      
      * fix coverage
      
      * deal with conflict
      
      * fix windows-inference
      
      * fix py3 bugs
      
      * fix bugs when compiling type_info
      
      * fix compile bugs
      
      * fix py3 bugs
      
      * fix windows bugs
      
      * fix windows openblas
      
      * fix xpu bugs
      
      * fix enforce_test in windows
      
      * update code according to review comments
      
      * fix windows cmake bug
      
      * fix windows bugs
      
      * fix windows bugs
      
      * delete cinn unittest
      
      * fix cinn bugs
      
      ---------
      Co-authored-by: lzydev <1528794076@qq.com>
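The `add PADDLE_API` commits mark symbols for export from the new shared library. A conventional definition for such a macro, as a sketch (the real PADDLE_API in Paddle may differ; PHI_DLL_EXPORT is a hypothetical build flag):

```cpp
// Sketch of a typical shared-library export macro.
#if defined(_WIN32)
  #if defined(PHI_DLL_EXPORT)  // defined while building the DLL itself
    #define PADDLE_API __declspec(dllexport)
  #else                        // consumers importing from the DLL
    #define PADDLE_API __declspec(dllimport)
  #endif
#else  // Linux/macOS: export with default symbol visibility
  #define PADDLE_API __attribute__((visibility("default")))
#endif

// Usage: annotate every symbol that must cross the .so/.dll boundary.
// PADDLE_API void RegisterKernel(const char* name);
```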
  8. 23 May 2023, 1 commit
  9. 19 May 2023, 1 commit
    • Add flash attention to speedup fused_gate_attention. (#52731) · d29c1f8e
      Committed by limingshu
      * Reorganize the forward codes of flash-attention.
      
      * Fix forward.
      
      * Remove some unused code.
      
      * Simplify codes and fix backward.
      
      * Change all LOG(INFO) to VLOG and fix the backward.
      
      * add scale for AF2 flash_attn; many thanks to xreki and shaojie for debugging this code
      
      * decrease the effect of debug print on performance
      
      * Unify the initialize of flashattn arguments.
      
      * Rewrite the reshape of temp_mask and temp_bias.
      
      * API support use_flash_attn.
      
      * Fix compiling error on CI.
      
      * Try to trim the flash-attention lib.
      
      * Correct the condition for whether flash-attn can be used.
      
      * Remove the softmax_out argument.
      
      * Remove is_causal.
      
      * Polish codes.
      
      * Fix qkv_transpose_out's shape and the scaling of Q * K (see the formula after this entry).
      
      * Update commit of flash-attention.
      
      ---------
      Co-authored-by: Liu Yiqun <liuyiqun01@baidu.com>
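For reference, the "scaling of Q * K" being fixed is the standard scaled dot-product attention; in fused_gate_attention a bias and a mask are also folded into the logits. A standard formulation (the exact term placement in Paddle's kernel is not spelled out by these commit messages):

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}} + B + M\right) V
```

where d_k is the head dimension, B the bias (temp_bias), and M the mask (temp_mask).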
  10. 25 Apr 2023, 1 commit
    • [PHI] Add flags macro for PHI (#52991) · 22e96bde
      Committed by YuanRisheng
      * add flags for phi (an illustrative PHI_DEFINE_bool use follows this entry)
      
      * fix compile bugs
      
      * fix ci bugs
      
      * fix inference bugs
      
      * fix cinn bugs
      
      * fix cinn bugs
      
      * polish code according to review comments
      
      * fix ci bugs
      
      * fix ci bugs
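These PHI flag macros follow the gflags DEFINE_* convention (the PHI_DEFINE_bool/PHI_DEFINE_string names also appear in the clang-tidy entry above). An illustrative use; the flag name and help text here are made up:

```cpp
// Illustrative only: declares a boolean flag in PHI's flag registry,
// mirroring gflags' DEFINE_bool style. The flag itself is hypothetical.
PHI_DEFINE_bool(enable_example_pass,  // NOLINT
                false,
                "Whether to enable the example optimization pass.");

// By gflags convention the flag is then read as a global:
// if (FLAGS_enable_example_pass) { /* ... */ }
```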
  11. 17 Apr 2023, 1 commit
  12. 14 Apr 2023, 1 commit
  13. 03 Apr 2023, 1 commit
  14. 01 Mar 2023, 1 commit
    • Integration flash attention (#49869) · 61611786
      Committed by Chitsing KUI
      * flash attn
      
      * seed
      
      * almost
      
      * softmax
      
      * fix workspace
      
      * add unit test; Linux only
      
      * fix setup
      
      * fix datatype include
      
      * fix setup typo
      
      * fix def scope
      
      * new error api
      
      * use paddle fork
      
      * fix attr bug; complete unit test
      
      * update flash hash
      
      * fix rng reset
      
      * fix offset (see the seed/offset sketch after this entry)
      
      * fix comments
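The seed / "fix rng reset" / "fix offset" commits concern dropout reproducibility: flash-attention's dropout consumes a counter-based (seed, offset) pair, and the offset must advance by the amount each launch uses so the backward pass can replay the exact mask. A conceptual sketch; the struct and bookkeeping are illustrative, not Paddle's generator code:

```cpp
#include <cstdint>
#include <utility>

// Conceptual Philox-style RNG state for dropout. Replaying the same
// (seed, offset) pair in backward reproduces the forward dropout mask.
struct RngState {
  uint64_t seed = 0;
  uint64_t offset = 0;

  // Reserve `increment` counter values for one kernel launch and return
  // the (seed, offset) that launch should consume.
  std::pair<uint64_t, uint64_t> Consume(uint64_t increment) {
    uint64_t start = offset;
    offset += increment;  // forgetting this advance makes masks repeat
                          // across calls -- the classic "fix offset" bug
    return {seed, start};
  }
};
```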
  15. 06 Jan 2023, 1 commit
  16. 23 Dec 2022, 1 commit
  17. 12 Dec 2022, 1 commit
    • Optimization of Eigh op with ssyevj_batched runtime api (#48560) · 16e364d3
      Committed by 傅剑寒
      * fix codestyle
      
      * add double, complex<float>, and complex<double> dtype support for syevj_batched
      
      * fix use_syevj flag to avoid precision loss when the input dtype of syevj_batched is complex128 in some cases
      
      * optimize eigh in different cases (see the syevjBatched sketch after this entry)
      
      * fix missing-semicolon bug
      
      * fix use_syevj bug
      
      * fix use_cusolver_syevj_batched flag
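cuSOLVER's syevjBatched entry points solve many small symmetric eigenproblems in a single call (the batched Jacobi solver is limited to small matrix sizes, which is why Eigh only routes small inputs through it). A minimal single-precision sketch with error handling trimmed:

```cpp
#include <cuda_runtime.h>
#include <cusolverDn.h>

// Batched symmetric eigendecomposition via the Jacobi (syevj) solver.
// d_A holds `batch` column-major n-by-n matrices back to back;
// eigenvalues land in d_W and eigenvectors overwrite d_A.
void SyevjBatchedSketch(float* d_A, float* d_W, int n, int lda, int batch) {
  cusolverDnHandle_t handle;
  cusolverDnCreate(&handle);
  syevjInfo_t params;
  cusolverDnCreateSyevjInfo(&params);

  int lwork = 0;
  cusolverDnSsyevjBatched_bufferSize(handle, CUSOLVER_EIG_MODE_VECTOR,
                                     CUBLAS_FILL_MODE_LOWER, n, d_A, lda,
                                     d_W, &lwork, params, batch);

  float* d_work = nullptr;
  int* d_info = nullptr;  // per-matrix convergence status
  cudaMalloc(&d_work, sizeof(float) * lwork);
  cudaMalloc(&d_info, sizeof(int) * batch);

  cusolverDnSsyevjBatched(handle, CUSOLVER_EIG_MODE_VECTOR,
                          CUBLAS_FILL_MODE_LOWER, n, d_A, lda, d_W,
                          d_work, lwork, d_info, params, batch);

  cudaFree(d_work);
  cudaFree(d_info);
  cusolverDnDestroySyevjInfo(params);
  cusolverDnDestroy(handle);
}
```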
  18. 24 Nov 2022, 1 commit
  19. 15 Nov 2022, 1 commit
  20. 10 Nov 2022, 1 commit
  21. 03 Nov 2022, 1 commit
  22. 02 Nov 2022, 1 commit
  23. 19 Oct 2022, 1 commit
  24. 17 Oct 2022, 1 commit
  25. 18 Sep 2022, 1 commit
  26. 14 Sep 2022, 1 commit
  27. 01 Aug 2022, 1 commit
  28. 22 Jul 2022, 1 commit
  29. 18 Jul 2022, 1 commit
  30. 12 Jul 2022, 1 commit
  31. 28 Jun 2022, 1 commit
  32. 24 Jun 2022, 2 commits
  33. 18 Jun 2022, 1 commit
  34. 15 Jun 2022, 2 commits
  35. 13 Jun 2022, 1 commit
  36. 09 Jun 2022, 1 commit
  37. 05 Jun 2022, 1 commit
  38. 04 Jun 2022, 1 commit