1. 24 December 2021, 1 commit
    • [pten] combine reduce_cuda codes (#38328) · 08941eda
      Authored by chentianyu03
      * combine reduce_cuda codes
      
      * support float16 in pten reduce_mean
      
      * replace ReduceCudaKernel impl with pten reduce impl
      
      * mv reduce funcs into reduce_cuda_impl
      
      * rm unused codes and headers
      
      * mv GetReduceDim into reduce_cuda_impl (a dims-normalization sketch follows this entry)
      
      * recover GetReduceDim in reduce_op.h
      
      * add new dispatch macro (a minimal dispatch sketch follows this entry)
      
      * fix pool op output not initialized, which caused an error when transforming to pten::denseTensor
      
      * fix output tensor not initialized error
      
      * rename new dispatch macro and format code style
      
      * rm reduce_functor_op.h file
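      The GetReduceDim helper is only named in this log. Below is a minimal sketch of what such a dims-normalization helper could look like, assuming its job is to wrap negative axes and expand to all axes when reduce_all is set or no axes are given; the name GetReduceDims, the signature, and this behavior are assumptions for illustration, not the actual reduce_op.h code.

```cpp
// Minimal sketch of a reduce-dims normalization helper; the name, signature,
// and behavior are assumptions for illustration, not the actual Paddle code.
#include <cstdint>
#include <vector>

std::vector<int64_t> GetReduceDims(const std::vector<int64_t>& dims,
                                   int64_t rank, bool reduce_all) {
  std::vector<int64_t> out;
  if (reduce_all || dims.empty()) {
    for (int64_t i = 0; i < rank; ++i) out.push_back(i);  // reduce every axis
    return out;
  }
  for (int64_t d : dims) out.push_back(d < 0 ? d + rank : d);  // wrap negatives
  return out;
}

int main() {
  auto dims = GetReduceDims({-1}, /*rank=*/4, /*reduce_all=*/false);  // {3}
  return static_cast<int>(dims.size()) - 1;  // 0 on the expected result
}
```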
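      Likewise, the "new dispatch macro" is only named here. The sketch below shows the general shape of such a type-dispatch macro: it expands a kernel launch once per supported element type, including float16, which this commit adds to pten reduce_mean. Every identifier in it (DataType, float16, DISPATCH_FLOATING_AND_HALF_TYPES, ReduceMeanKernel) is a hypothetical placeholder, not the real pten API.

```cpp
// Minimal, self-contained sketch of a type-dispatch macro; all names here
// are hypothetical placeholders, not the real pten API.
#include <cstdio>
#include <stdexcept>

enum class DataType { FLOAT32, FLOAT64, FLOAT16 };
struct float16 { unsigned short bits; };  // stand-in for a half-precision type

// Expands the trailing lambda once with `T` bound to the concrete element
// type, so a single call site covers float, double, and float16 kernels.
#define DISPATCH_FLOATING_AND_HALF_TYPES(dtype, name, ...)                 \
  switch (dtype) {                                                         \
    case DataType::FLOAT32: { using T = float;   __VA_ARGS__(); break; }   \
    case DataType::FLOAT64: { using T = double;  __VA_ARGS__(); break; }   \
    case DataType::FLOAT16: { using T = float16; __VA_ARGS__(); break; }   \
    default: throw std::runtime_error(name ": unsupported dtype");         \
  }

template <typename T>
void ReduceMeanKernel() {  // placeholder for the actual reduce kernel launch
  std::printf("launch reduce_mean for element of %zu bytes\n", sizeof(T));
}

int main() {
  DataType dtype = DataType::FLOAT16;  // e.g. taken from the input tensor
  DISPATCH_FLOATING_AND_HALF_TYPES(dtype, "reduce_mean",
                                   [&] { ReduceMeanKernel<T>(); });
}
```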
  2. 17 December 2021, 1 commit
  3. 03 December 2021, 1 commit
  4. 12 November 2021, 1 commit
  5. 21 October 2021, 1 commit
  6. 23 September 2021, 1 commit
  7. 14 September 2021, 1 commit
  8. 13 September 2021, 2 commits
  9. 08 September 2021, 1 commit
  10. 03 September 2021, 1 commit
  11. 26 August 2021, 1 commit
    • Add feed_forward for fused attention op. (#34945) · d1a33bc7
      Authored by Li Min
      Describe
      
      Add feed_forward for fused attention op.
      (1) Encapsulate matmul impl (forward and backward) used in attention op.
      (2) Implement bias_add (forward and backward) used in attention op (a CPU reference sketch follows this entry).
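      The commit describes the fused feed-forward as an encapsulated matmul plus a bias_add, each with forward and backward, but neither is shown in this log. The following is a small CPU reference sketch of that computation under assumed row-major shapes; the names FFNForward/FFNBackward and the signatures are illustrative, not the actual GPU op.

```cpp
// CPU reference sketch of the fused "matmul + bias_add" feed-forward and its
// backward pass; names, signatures, and row-major layout are illustrative.
#include <cstddef>
#include <vector>

// Forward: y[m, n] = sum_k x[m, k] * w[k, n] + bias[n]
void FFNForward(const std::vector<float>& x, const std::vector<float>& w,
                const std::vector<float>& bias, std::vector<float>& y,
                std::size_t M, std::size_t K, std::size_t N) {
  for (std::size_t m = 0; m < M; ++m)
    for (std::size_t n = 0; n < N; ++n) {
      float acc = bias[n];  // bias_add fused into the same loop as the matmul
      for (std::size_t k = 0; k < K; ++k) acc += x[m * K + k] * w[k * N + n];
      y[m * N + n] = acc;
    }
}

// Backward: dx = dy * w^T, dw = x^T * dy, dbias = column-wise sum of dy.
void FFNBackward(const std::vector<float>& x, const std::vector<float>& w,
                 const std::vector<float>& dy, std::vector<float>& dx,
                 std::vector<float>& dw, std::vector<float>& dbias,
                 std::size_t M, std::size_t K, std::size_t N) {
  for (std::size_t m = 0; m < M; ++m)
    for (std::size_t k = 0; k < K; ++k) {
      float acc = 0.f;
      for (std::size_t n = 0; n < N; ++n) acc += dy[m * N + n] * w[k * N + n];
      dx[m * K + k] = acc;
    }
  for (std::size_t k = 0; k < K; ++k)
    for (std::size_t n = 0; n < N; ++n) {
      float acc = 0.f;
      for (std::size_t m = 0; m < M; ++m) acc += x[m * K + k] * dy[m * N + n];
      dw[k * N + n] = acc;
    }
  for (std::size_t n = 0; n < N; ++n) {
    float acc = 0.f;
    for (std::size_t m = 0; m < M; ++m) acc += dy[m * N + n];
    dbias[n] = acc;
  }
}

int main() {
  const std::size_t M = 2, K = 3, N = 4;
  std::vector<float> x(M * K, 1.f), w(K * N, 0.5f), bias(N, 0.1f), y(M * N);
  FFNForward(x, w, bias, y, M, K, N);  // every y element == 3*0.5 + 0.1 == 1.6
}
```

      The backward pass mirrors the usual linear-layer gradients: dx = dy * w^T, dw = x^T * dy, and dbias is the column-wise sum of dy; fusing the bias add into the same pass as the matmul presumably avoids a separate elementwise kernel launch.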