1. 29 Jun 2023, 1 commit
    • Add fused_rope forward op (#54351) · a215c46a
      niuliling123 committed
      * style
      
      * more
      
      * update ctest
      
      * Update legacy_backward.yaml
      
      * Update legacy_ops.yaml
      
      * Update legacy_ops.yaml
      
      * update
      
      * update
      
      * update for move
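As an editorial aside, a minimal usage sketch of the fused RoPE forward pass this commit adds. The import path and the (q, k, v) signature follow the fused_rotary_position_embedding API of later Paddle releases and are assumptions here, since this commit lands only the forward op:

```python
# Hedged sketch, assuming the later public API name and a
# [batch, seq_len, num_heads, head_dim] layout; requires a CUDA build,
# and head_dim must be even because RoPE rotates channel pairs.
import paddle
from paddle.incubate.nn.functional import fused_rotary_position_embedding

q = paddle.randn([2, 128, 8, 64], dtype='float16')
k = paddle.randn([2, 128, 8, 64], dtype='float16')
v = paddle.randn([2, 128, 8, 64], dtype='float16')

out_q, out_k, out_v = fused_rotary_position_embedding(q, k, v)
print(out_q.shape)  # [2, 128, 8, 64]
```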
  2. 22 May 2023, 1 commit
    • [dygraph] unify _non_static_mode(), in_dygraph_mode() and in_dynamic_mode() (#53856) · 3794d171
      Meteor Liu committed
      * [dygraph] unify _non_static_mode(), in_dygraph_mode() and in_dynamic_mode()
      
      * fixed cyclic reference that caused partial import
      
      * fixed bad change
      
      * fix bad import
      
      * fix UT failures caused by the in_dynamic_mode change
      
      * fixed usage of in_dynamic_mode() or in_dygraph_mode()
      
      * revert python3 to python in .pre-commit-config.yaml
      
      * fix merge conflicts
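The unification lands on the single public check paddle.in_dynamic_mode(); a small sketch of the behavior it reports (all three calls below are stable public Paddle APIs):

```python
import paddle

# After this change, code branches on one public helper instead of the
# internal _non_static_mode() / in_dygraph_mode() variants it unifies.
print(paddle.in_dynamic_mode())  # True: eager (dygraph) is the default

paddle.enable_static()
print(paddle.in_dynamic_mode())  # False: static-graph mode is active
paddle.disable_static()          # back to dynamic mode
```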
  3. 19 May 2023, 1 commit
    • Add flash attention to speed up fused_gate_attention. (#52731) · d29c1f8e
      limingshu committed
      * Reorganize the forward code of flash-attention.
      
      * Fix forward.
      
      * Remove some unused code.
      
      * Simplify code and fix backward.
      
      * Change all LOG(INFO) to VLOG and fix the backward.
      
      * add scale for AF2 flash_attn; many thanks to xreki and shaojie for debugging this code
      
      * decrease the effect of debug print on performance
      
      * Unify the initialization of flashattn arguments.
      
      * Rewrite the reshape of temp_mask and temp_bias.
      
      * API support for use_flash_attn.
      
      * Fix compiling error on CI.
      
      * Try to crop the flash-attention lib.
      
      * Correct the condition for whether flash-attn can be used.
      
      * Remove the softmax_out argument.
      
      * Remove is_causal.
      
      * Polish code.
      
      * Fix qkv_transpose_out's shape and scaling of Q * K.
      
      * Update commit of flash-attention.
      
      ---------
      Co-authored-by: Liu Yiqun <liuyiqun01@baidu.com>
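On the "scaling of Q * K" bullet: both the fused kernel and the flash-attention path must apply the standard 1/sqrt(head_dim) factor to the logits. A plain-Paddle reference of that computation, for illustration only (not the fused_gate_attention API, which is wired in C++ behind the use_flash_attn switch):

```python
import paddle

# Reference attention: flash attention computes the same result
# tile-by-tile without materializing the full [seq, seq] score matrix.
def reference_attention(q, k, v):
    head_dim = q.shape[-1]
    scores = paddle.matmul(q, k, transpose_y=True) / head_dim**0.5
    return paddle.matmul(paddle.nn.functional.softmax(scores), v)

q = paddle.randn([2, 8, 128, 64])  # [batch, heads, seq_len, head_dim]
k = paddle.randn([2, 8, 128, 64])
v = paddle.randn([2, 8, 128, 64])
out = reference_attention(q, k, v)
print(out.shape)  # [2, 8, 128, 64]
```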
  4. 06 May 2023, 1 commit
  5. 17 Apr 2023, 1 commit
  6. 24 Mar 2023, 1 commit
    • Memory Efficient Attention (#51867) · e5ad3859
      ZhangDY-6483 committed
      * first version, notest
      
      * return final rst, notest
      
      * use infinity() instead of max
      
      * ut structure
      
      * start up of ut
      
      * generate lse
      
      * update
      
      * add dependency
      
      * reconstruct cmake
      
      * move file
      
      * add memory efficient attention and fix blasimpl
      
      * update
      
      * update cmake
      
      * add namespace
      
      * update cmake
      
      * use .cu
      
      * update for pad3d
      
      * bug fix
      
      * update
      
      * bug fix
      
      * update enforce
      
      * add test case
      
      * merge the lse pad
      
      * fix kernel_fn of backward
      
      * fix PADDLE_ENFORCE_EQ and phi_api
      
      * fix PADDLE_ENFORCE
      
      * rerun coverage
      
      * fix memory efficient attention test
      
      * rerun ci
      
      * add cuda version condition
      
      * delete WIP test
      
      * replace PADDLE_ENFORCE
      
      * edit the namespace of datatype in multiple.cc
      
      * rerun
      
      ---------
      Co-authored-by: liuyuang <liuyuang@baidu.com>
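The "generate lse" and "use infinity() instead of max" bullets refer to the per-row log-sum-exp that memory-efficient attention keeps so softmax over key chunks can be merged without materializing the full score matrix; a small plain-Paddle sketch of that identity (the fused kernel itself lives in CUDA):

```python
import paddle

def merge_logsumexp_chunks(chunks):
    # Start from -inf (cf. "use infinity() instead of max"): a correct
    # identity element even for an empty or fully masked chunk.
    lse = paddle.full([chunks[0].shape[0]], float('-inf'))
    for chunk in chunks:
        chunk_lse = paddle.logsumexp(chunk, axis=-1)
        lse = paddle.logsumexp(paddle.stack([lse, chunk_lse]), axis=0)
    return lse  # equals logsumexp over the concatenated logits

x = paddle.randn([4, 256])
full = paddle.logsumexp(x, axis=-1)
merged = merge_logsumexp_chunks([x[:, :128], x[:, 128:]])
print(bool(paddle.allclose(full, merged)))  # True
```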
  7. 23 Mar 2023, 1 commit
  8. 22 Mar 2023, 1 commit
  9. 15 Feb 2023, 1 commit
  10. 01 Feb 2023, 1 commit
  11. 05 Jan 2023, 2 commits
  12. 23 Dec 2022, 1 commit
  13. 22 Dec 2022, 1 commit
  14. 29 Nov 2022, 1 commit
  15. 22 Nov 2022, 1 commit
    • Fixdocs (#47986) · 91f4d1ce
      ustiniankw committed
      * list112-122, test=document_fix
      
      * precommitfix, test=document_fix
      
      * list112-127, test=document_fix
      
      * fix_ResNetBasicBlock, test=document_fix
      
      * pre-commit_resnet, test=document_fix
      
      * refix, test=document
      
      * refix, test=document_fix
  16. 02 Nov 2022, 1 commit
  17. 23 Oct 2022, 1 commit
  18. 12 Oct 2022, 1 commit
  19. 10 Oct 2022, 1 commit
  20. 14 Sep 2022, 1 commit
  21. 26 Aug 2022, 1 commit
  22. 30 Jun 2022, 1 commit
  23. 28 Jun 2022, 1 commit
  24. 21 Jun 2022, 1 commit
  25. 17 Jun 2022, 1 commit
  26. 14 Jun 2022, 1 commit
  27. 13 Jun 2022, 1 commit
  28. 05 Jun 2022, 1 commit
    • [code format check upgrade] step 2: yapf (#42944) · a072fca8
      Sing_chan committed
      * use yapf to format all python file
      
      * exclude two unittest files from yapf because they rely on writing and reading files, and formatting would break them
      
      * disable diff_py_file because too many diff files caused the subsequent command to fail
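What the new check does per file, sketched with yapf's documented Python API (the 'pep8' style here is illustrative; Paddle configures yapf through its own setup files):

```python
from yapf.yapflib.yapf_api import FormatCode

# FormatCode returns the reformatted source plus a changed flag; a CI
# format check fails the file whenever changed is True.
source = "def f( x ):\n    return x+1\n"
formatted, changed = FormatCode(source, style_config='pep8')
print(changed)    # True: this file would be rewritten
print(formatted)  # "def f(x):\n    return x + 1\n"
```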
  29. 01 Jun 2022, 1 commit
  30. 31 May 2022, 1 commit
  31. 30 May 2022, 1 commit
  32. 26 Apr 2022, 1 commit
  33. 25 Mar 2022, 1 commit
    • Refactor Dygraph Flags (#40786) · 3085d5e4
      Jiabin Yang committed
      * refactor eager flags
      
      * fix flags error when we switch from eager to dygraph
      
      * fix ci problem
      
      * fix ci
      
      * merge develop and fix code style
      
      * fix op test error
      
      * merge develop
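For context, a sketch of the FLAGS machinery this refactor touches: paddle.get_flags and paddle.set_flags are the public accessors for Paddle's global flags. The specific eager-vs-legacy-dygraph switch reorganized by this PR is internal, so a stable, unrelated flag is used below purely for illustration:

```python
import paddle

# Read and write a global flag from Python; the flag shown (a GC
# threshold) is just a public stand-in for the internal dygraph flags.
print(paddle.get_flags(['FLAGS_eager_delete_tensor_gb']))
paddle.set_flags({'FLAGS_eager_delete_tensor_gb': 1.0})
print(paddle.get_flags(['FLAGS_eager_delete_tensor_gb']))
```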
  34. 11 Mar 2022, 1 commit
  35. 24 Feb 2022, 1 commit
  36. 26 Nov 2021, 1 commit
  37. 23 Nov 2021, 1 commit
  38. 16 Nov 2021, 1 commit
    • Fix attn_bias_add bug. (#37147) · a9e7a854
      Li Min committed
      The implementation of fused_attention_op uses bias_add, which is built on the kernel primitive API. A later change to the kernel primitives moved the out-of-bounds check in the WriteData API into template parameters, so call sites took the wrong branch and performed out-of-bounds writes, corrupting other GPU memory. Symptom: a single run of test_fused_attention_op_api.py almost never fails, but looping over inputs of different shapes produces wrong results intermittently, making the bug easy to miss.
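The intermittency described above is typical of out-of-bounds writes: one run stays green, and corruption only surfaces across repeated runs with varying shapes. A generic sketch of the loop-over-shapes check the report describes; the shapes and the stand-in op are illustrative, not the actual test:

```python
import paddle

# Exercise several shapes and compare against an independent host-side
# reference; paddle.add stands in for the fused bias_add under test.
for seq_len in (15, 64, 100, 128):
    x = paddle.randn([4, seq_len, 256])
    bias = paddle.randn([256])
    out = paddle.add(x, bias)
    ref = paddle.to_tensor(x.numpy() + bias.numpy())
    assert bool(paddle.allclose(out, ref)), f"mismatch at seq_len={seq_len}"
```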
  39. 12 Nov 2021, 1 commit