1. 04 Mar, 2022 (2 commits)
  2. 20 Feb, 2022 (1 commit)
  3. 19 Feb, 2022 (1 commit)
    • [Pten]Unify paddle/pten::framework::ddim into pten::ddim (#39614) · 2fe04264
      Committed by Aurelius84
      * Unify paddle/pten::framework::ddim into pten::ddim
      
      * fix paddle namespace
      
      * compile successfully
      
      * fix npu src file
      
      * fix conflict
      
      * fix conflict
      
      * fix tensorrt compiler error
      
      * fix conflict
      
      * fix conflict
      
      * fix test file conflict
      
      * fix conflict
      
      * fix mlu file conflict
      
      * fix mlu file conflict
      
      * fix cinn header file conflict
      
      * fix conflict
      
      * fix conflict
      
      * fix conflict
      
      * fix conflict
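      The unification described above amounts to giving DDim a single home under pten and forwarding the legacy paddle::framework name to it. Below is a minimal C++ sketch of that aliasing pattern, assuming an alias-based migration; it is illustrative only and does not reproduce Paddle's actual headers or DDim internals.

      #include <array>
      #include <cstdint>

      // Illustrative sketch, not Paddle's actual code: one DDim definition
      // lives under pten, and the legacy paddle::framework::DDim becomes a
      // type alias so existing call sites keep compiling unchanged.
      namespace pten {
      namespace framework {
      struct DDim {
        std::array<int64_t, 9> dims;  // fixed-capacity dimension storage
        int rank = 0;
      };
      }  // namespace framework
      using DDim = framework::DDim;  // unified name: pten::DDim
      }  // namespace pten

      namespace paddle {
      namespace framework {
      using DDim = ::pten::DDim;  // legacy name now forwards to pten::DDim
      }  // namespace framework
      }  // namespace paddle

      With such an alias in place, the long tail of "fix conflict" commits above reduces to mechanical renames and merge resolution across the npu, mlu, tensorrt, and cinn sources.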
  4. 24 Dec, 2021 (1 commit)
    • [pten] combine reduce_cuda codes (#38328) · 08941eda
      Committed by chentianyu03
      * combine reduce_cuda codes
      
      * support float16 in pten reduce_mean
      
      * replace ReduceCudaKernel impl with pten reduce impl
      
      * mv reduce funcs into reduce_cuda_impl
      
      * rm unused code and headers
      
      * mv GetReduceDim into reduce_cuda_impl
      
      * recover GetReduceDim in reduce_op.h
      
      * add new dispatch macro
      
      * fix pool op output not initialized, causing a transform-to-pten::DenseTensor error
      
      * fix output tensor not initialized error
      
      * rename new dispatch macro and format code style
      
      * rm reduce_functor_op.h file
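      The "new dispatch macro" is the interesting piece of this commit: it maps a runtime dtype to a compile-time type parameter and expands one kernel instantiation per supported type, which is how float16 slots into the pten reduce_mean path. The sketch below is a hypothetical, self-contained stand-in; Paddle's actual macro name, dtype enum, and type list all differ.

      #include <cstdint>
      #include <stdexcept>

      // Hypothetical dtype enum and dispatch macro in the spirit of the
      // "new dispatch macro" above; Paddle's real versions differ and
      // additionally cover float16.
      enum class DataType { FLOAT32, FLOAT64, INT64 };

      #define DISPATCH_REDUCE_TYPES(DTYPE, ...)                                 \
        switch (DTYPE) {                                                        \
          case DataType::FLOAT32: { using T = float;   __VA_ARGS__(); break; }  \
          case DataType::FLOAT64: { using T = double;  __VA_ARGS__(); break; }  \
          case DataType::INT64:   { using T = int64_t; __VA_ARGS__(); break; }  \
          default: throw std::runtime_error("unsupported dtype");               \
        }

      template <typename T>
      void ReduceMean(const void* in, void* out, int n) {
        const T* x = static_cast<const T*>(in);
        T sum = T(0);
        for (int i = 0; i < n; ++i) sum += x[i];
        *static_cast<T*>(out) = sum / static_cast<T>(n);
      }

      // One dynamically typed entry point dispatches to every instantiation;
      // the lambda body is expanded inside each case, where T is in scope.
      void ReduceMeanDyn(DataType dt, const void* in, void* out, int n) {
        DISPATCH_REDUCE_TYPES(dt, [&] { ReduceMean<T>(in, out, n); });
      }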
  5. 17 Dec, 2021 (1 commit)
  6. 03 Dec, 2021 (1 commit)
  7. 12 Nov, 2021 (1 commit)
  8. 21 Oct, 2021 (1 commit)
  9. 23 Sep, 2021 (1 commit)
  10. 14 Sep, 2021 (1 commit)
  11. 13 Sep, 2021 (2 commits)
  12. 08 Sep, 2021 (1 commit)
  13. 03 Sep, 2021 (1 commit)
  14. 26 Aug, 2021 (1 commit)
    • Add feed_forward for fused attention op. (#34945) · d1a33bc7
      Committed by Li Min
      Description
      
      Add feed_forward for fused attention op.
      (1) Encapsulate matmul impl (forward and backward) used in attention op.
      (2) Implement bias_add (forward and backward) used in attention op.
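      In math terms, the two encapsulated pieces are the forward pass Y = X W + b and the backward rules dX = dY W^T, dW = X^T dY, and db = column-sum of dY. Below is a plain CPU sketch of the forward matmul-plus-bias and the bias_add gradient; the real op is a fused GPU kernel, and all names here are illustrative rather than Paddle's.

      #include <vector>

      // Forward: y = x * W + b, row-major. x is [m, k], W is [k, n], b is [n],
      // y is [m, n]. The bias add is folded into the GEMM inner loop.
      void FeedForwardFwd(const std::vector<float>& x, const std::vector<float>& W,
                          const std::vector<float>& b, std::vector<float>& y,
                          int m, int k, int n) {
        for (int i = 0; i < m; ++i)
          for (int j = 0; j < n; ++j) {
            float acc = b[j];  // start from the bias term
            for (int p = 0; p < k; ++p) acc += x[i * k + p] * W[p * n + j];
            y[i * n + j] = acc;
          }
      }

      // Backward for the bias term only: db[j] = sum_i dY[i][j].
      void BiasAddGrad(const std::vector<float>& dy, std::vector<float>& db,
                       int m, int n) {
        db.assign(n, 0.f);
        for (int i = 0; i < m; ++i)
          for (int j = 0; j < n; ++j) db[j] += dy[i * n + j];
      }

      Folding the bias add into the GEMM epilogue, as the forward loop hints at, is the usual motivation for this kind of encapsulation: it avoids a separate elementwise kernel launch and an extra round trip through memory.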