1. 24 May 2023, 1 commit
  2. 13 March 2023, 1 commit
  3. 30 January 2023, 1 commit
  4. 28 November 2022, 1 commit
  5. 24 November 2022, 1 commit
  6. 21 November 2022, 1 commit
  7. 01 November 2022, 1 commit
    • Adapting device-specific Extra Attributes for the PHI kernel (#46342) · c923e6c9
      Committed by Chen Weihang
      * add extra attr property set
      
      * add type_info for all context
      
      * add onednn context to all context
      
      * fix context compile error
      
      * simplify conv kernel args
      
      * pass runtime attr into dev_ctx
      
      * fix macro error
      
      * clear conv_grad_kernel extra args
      
      * merge conv_grad_grad into conv_grad
      
      * clear conv2d_grad_grad extra attrs
      
      * clear yaml and eager extra attr
      
      * fix conv1d error
      
      * change to thread local
      
      * fix npu compile failed
      
      * try to fix windows compile failed
      
      * add conv2d onednn phi kernel
      
      * fix ci bugs (#36)
      
      * fix compile bugs (#38)
      
      * fix extra input transform bug (#39)
      
      * support dynamic created attr (#40)
      
      * reset extra info gen code
      
      * rm conv_grad_grad kernel
      
      * reimpl pass attr adapting
      
      * add int attr support
      
      * remove vector inputnames creating
      
      * fix map::at error
      
      * Update paddle/phi/kernels/onednn/conv_grad_kernel.cc
      Co-authored-by: Sławomir Siwek <slawomir.siwek@intel.com>
      
      * remove useless extra attrs
      
      * replace mkldnn_engine by onednn_engine
      Co-authored-by: YuanRisheng <yuanrisheng@baidu.com>
      Co-authored-by: Sławomir Siwek <slawomir.siwek@intel.com>
      c923e6c9
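      The commit above moves backend-specific "extra" attributes out of the
      formal kernel signature and passes them through the device context at
      runtime. Below is a minimal sketch of that idea; the class and method
      names are hypothetical illustrations, not PHI's actual API.

      ```cpp
      // Sketch only: a device context carrying runtime "extra" attributes so
      // backend kernels (e.g. an oneDNN conv kernel) can read device-specific
      // options without them appearing as formal kernel arguments.
      #include <map>
      #include <string>
      #include <utility>
      #include <variant>

      class DeviceContextWithExtraAttrs {
       public:
        using Attr = std::variant<bool, int, float, std::string>;

        // Framework side: stash a runtime attribute before launching a kernel.
        void SetExtraAttr(const std::string& name, Attr value) {
          extra_attrs_[name] = std::move(value);
        }

        // Kernel side: read the attribute back, with a default if unset.
        template <typename T>
        T GetExtraAttr(const std::string& name, T default_value) const {
          auto it = extra_attrs_.find(name);
          return it == extra_attrs_.end() ? default_value
                                          : std::get<T>(it->second);
        }

       private:
        std::map<std::string, Attr> extra_attrs_;
      };
      ```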
  8. 06 September 2022, 1 commit
  9. 01 August 2022, 1 commit
  10. 29 July 2022, 1 commit
    • move CUDAStream to phi (#44529) · da3743fd
      Committed by Leo Chen
      * init
      
      * move CUDAStream to phi
      
      * fix compilation
      
      * merge develop
      
      * add stream_owned_ member
      
      * split cuda_stream.h
      
      * fix cpu compile
      
      * fix constructor
      
      * fix bug
      
      * fix windows compile
      
      * fix inference test_levit
      
      * fix windows tests
      da3743fd
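      The stream_owned_ member mentioned above separates streams the wrapper
      created (and must destroy) from externally supplied streams it merely
      borrows. A hedged sketch of that ownership pattern follows; phi's real
      CUDAStream class is richer than this.

      ```cpp
      // Sketch of the owned-vs-borrowed stream idea behind stream_owned_.
      #include <cuda_runtime.h>

      class CudaStreamSketch {
       public:
        // Create and own a fresh stream.
        CudaStreamSketch() : owned_(true) { cudaStreamCreate(&stream_); }

        // Wrap an externally created stream without taking ownership.
        explicit CudaStreamSketch(cudaStream_t external)
            : stream_(external), owned_(false) {}

        ~CudaStreamSketch() {
          // Destroy only streams this object created itself.
          if (owned_) cudaStreamDestroy(stream_);
        }

        cudaStream_t raw() const { return stream_; }

       private:
        cudaStream_t stream_{nullptr};
        bool owned_{false};  // the role played by stream_owned_ in the commit
      };
      ```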
  11. 26 July 2022, 1 commit
  12. 15 June 2022, 1 commit
  13. 08 June 2022, 1 commit
  14. 07 June 2022, 1 commit
  15. 05 June 2022, 1 commit
  16. 19 May 2022, 1 commit
  17. 13 May 2022, 1 commit
  18. 09 April 2022, 1 commit
  19. 17 March 2022, 1 commit
    • Trt engine. (#40532) · 3082ed46
      Committed by Wilber
      * infrt add trt engine
      
      * fix register
      
      * file generate
      
      * fix ci error
      
      * fix conflict
      
      * add copyright
      
      * update
      
      * update
      
      * update
      
      * update engine name
      
      * refactor trt code
      
      * update
      
      * update
      
      * update
      
      * update
      
      * fix conflict
      
      * update
      
      * fix compile with cuda
      3082ed46
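      For context, an infrt TRT engine op ultimately drives TensorRT's
      standard engine-building flow. The sketch below shows only that generic
      flow with the TensorRT 7-era API; the infrt glue and network population
      are omitted, and BuildEngine is a hypothetical helper, not code from
      this PR.

      ```cpp
      #include <NvInfer.h>
      #include <cstdio>

      // TensorRT requires a logger to create a builder.
      class TrtLogger : public nvinfer1::ILogger {
        void log(Severity severity, const char* msg) noexcept override {
          if (severity <= Severity::kWARNING) std::printf("[TRT] %s\n", msg);
        }
      };

      nvinfer1::ICudaEngine* BuildEngine(TrtLogger* logger) {
        auto* builder = nvinfer1::createInferBuilder(*logger);
        auto* network = builder->createNetworkV2(
            1U << static_cast<uint32_t>(
                nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH));
        // ... populate `network` from the model definition here ...
        auto* config = builder->createBuilderConfig();
        nvinfer1::ICudaEngine* engine =
            builder->buildEngineWithConfig(*network, *config);
        // Builder-side objects may be released once the engine exists.
        config->destroy();
        network->destroy();
        builder->destroy();
        return engine;
      }
      ```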
  20. 14 March 2022, 1 commit
    • fix gpu callback (#40445) · 2c21d240
      Committed by Leo Chen
      * fix gpu context callback
      
      * fix gpu callback
      
      * fix callback early destruct problem
      2c21d240
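      The "callback early destruct problem" above is the classic pitfall where
      state captured for a host callback is destroyed before the GPU reaches
      the callback in the stream. A hedged sketch of the usual remedy,
      heap-allocating the callable and freeing it inside the callback itself
      (illustrative, not Paddle's exact fix):

      ```cpp
      #include <cuda_runtime.h>
      #include <functional>
      #include <utility>

      void AddCallback(cudaStream_t stream, std::function<void()> fn) {
        // Copy the callable to the heap: a stack-local would be destroyed as
        // soon as this function returns, long before the callback runs.
        auto* payload = new std::function<void()>(std::move(fn));
        cudaLaunchHostFunc(
            stream,
            [](void* p) {
              auto* f = static_cast<std::function<void()>*>(p);
              (*f)();
              delete f;  // release the state only after the callback has run
            },
            payload);
      }
      ```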
  21. 07 March 2022, 1 commit
    • cuBlasLt Epilogue To Fuse Linear + ReLU|GeLU (#39437) · 2a3d9eca
      Committed by Ming-Xu Huang
      * Added cuBlasLtHandle_t to device context.
      
      * Added fused_gemm_epilogue op.
      
      1. Added fused_gemm_epilogue op to leverage the cuBlasLt Epilogue.
      2. Supports the fusion Act(X*Y + bias), where X's dims >= 2 and Y's dims should be 2.
      3. Act currently only supports ReLU (GeLU will be added in the future).
      
      * Added UT to fused_gemm_epilogue op.
      
      * Added LinearAct Pattern
      
      1. Added LinearAct into graph_pattern_detector.* to define the pattern
      described in (2.).
      2. LinearAct is used to detect act(elementwise_add(matmul_v2(x, w), bias)).
      3. Act currently only supports ReLU (GeLU will be supported in the future).
      
      * Added FuseGemmEpiloguePass
      
      1. Added FuseGemmEpiloguePass to handle nn.Linear + Act{ReLU}
      fusion (GeLU will be supported in the future).
      2. Only support matmul_v2 from nn.Linear.
      
      * Added pybind to BuildStrategy.fuse_gemm_epilogue_.
      
      * Added UT for fuse_gemm_epilogue_pass.
      
      * GeLU support and EpilogueSingleton
      
      1. Added GeLU support to fused_gemm_epilogue op.
      2. Added EpilogueSingleton to cache auxiliary pointer.
      3. Added related UTs.
      
      * Rename cublaslt_epilogue_opto gemm_epilogue_op.*.
      
      * Added both train and infer patterns to LinearAct.
      
      1. Added support for fwd graphs with grad_ops linking to LinearAct.
      2. Added related changes to fuse_gemm_epilogue_pass for the above
      modification.
      
      * Changed CUDA requirement from 11.4 to 11.6 for fuse_gemm_epilogue_pass.
      
      * Added identity activation support to gemm_epilogue_op.
      
      * Added Linear Fusion (matmul_v2 + ele_add)
      
      1. Added matmul_v2 + ele_add pattern to LinearActPattern.
      2. Added matmul_v2 + ele_add support to fuse_gemm_epilogue_pass.
      
      * Rename gemm_epilogue_op.* to fused_gemm_epilogue_op.*
      
      * Add fused_gemm_epilogue_grad op.
      
      1. Added fused_gemm_epilogue_grad to support backward epilogue fusion.
      
      * Add UTs to fused_gemm_epilogue_grad_op.
      
      * Changed an attribute name in fused_gemm_epilogue_grad_op for clarity.
      
      * Allowed DX and DBias to be dispensable in the fused_gemm_epilogue_grad op.
      
      * Added ElementwiseAdd+Matmul+Act graph pattern detection.
      
      * Fuse backward of Linear(Act(x))
      
      1. Added a backward fusion pass for Linear(Act(x)).
      2. Added a backward fusion pass for Linear(x).
      
      * Added UTs to backward fusion of Linear(Act(x)).
      
      * Completed documentation of the arguments to fused_gemm_epilogue_op.
      
      * Made arguments of some functions pass by reference.
      
      * Modified code per review comments.
      
      1. Made arguments of some functions pass by reference.
      2. Removed redundant code.
      3. Adjusted code to follow the Google code style.
      
      * Made 'const' code style consistent.
      
      * Fixed random seed of python UTs.
      
      * Set compiling constraints for cuBlasLt
      
      1. Require CUDA 11.6+.
      2. Remove fuse_gemm_epilogue related tests when CUDA < 11.6.
      
      * Code review from Paddle
      
      1. Renamed the argument is_first_gemm to without_x_gradient for
      clarity.
      2. Applied PADDLE_THROW in fused_gemm_epilogue_op.
      
      * Remove EpilogueSingleton
      
      1. Applied ReserveSpace to replace Epilogue for passing auxiliary
      pointers between FWD and BWD.
      
      * Fix a logical error and enhance UTs.
      
      1. Added act op count checking in UTs.
      2. Fixed an issue when fusing the backward of ReLU(Linear(X)).
      3. TODO: solve GeLU fusion issues.
      
      * Fix Linear and GeLU fusion issues.
      
      1. Modified graph_pattern_detector to fit both Linear with GeLU and
      Linear with ReLU.
      2. Modified the data range in UTs to allow negative values.
      
      * Removed fused_gemm_epilogue_op.h.
      
      * Rename namespace pten to phi.
      
      * Renamed arguments in fused_gemm_epilogue_op
      
      1. bias -> Bias.
      2. out -> Out.
      3. reserve_space -> ReserveSpace.
      
      * Changed EpiloguePassActivationCache to a local variable.
      
      1. Removed the singleton in EpiloguePassActivationCache.
      2. Made EpiloguePassActivationCache an argument to each pass
      function.
      2a3d9eca
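      The fusion above builds on cuBlasLt's epilogue feature: a single
      cublasLtMatmul call can also perform the bias add and the activation.
      A hedged sketch of the relevant descriptor setup follows (error checking
      omitted; ConfigureReluBiasEpilogue is an illustrative helper, not the
      op's real code):

      ```cpp
      #include <cublasLt.h>

      // Ask cuBlasLt to apply bias-add followed by ReLU inside the GEMM
      // kernel, which is what fused_gemm_epilogue leverages for Linear + ReLU.
      void ConfigureReluBiasEpilogue(cublasLtMatmulDesc_t op_desc,
                                     const void* bias_ptr) {
        cublasLtEpilogue_t epilogue = CUBLASLT_EPILOGUE_RELU_BIAS;
        cublasLtMatmulDescSetAttribute(op_desc, CUBLASLT_MATMUL_DESC_EPILOGUE,
                                       &epilogue, sizeof(epilogue));
        cublasLtMatmulDescSetAttribute(op_desc,
                                       CUBLASLT_MATMUL_DESC_BIAS_POINTER,
                                       &bias_ptr, sizeof(bias_ptr));
      }
      ```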
  22. 28 February 2022, 1 commit
  23. 20 February 2022, 1 commit
  24. 08 February 2022, 1 commit
  25. 06 February 2022, 1 commit