1. 14 Apr 2023, 1 commit
    • Move fused_attention op to phi [migrate backward GPU OpKernel] (#51909) · 3bac6264
      Authored by Sonder (a sketch of the phi registration pattern this lands on follows the commit list below)
      * add kernel functions
      
      * update kernel functions
      
      * update func parameters' name
      
      * create codes for gpu device
      
      * adjust file locations
      
      * fix include error
      
      * remove dependent files to phi/
      
      * restore fused_attention_op.cu
      
      * fix dependence errors
      
      * fix dependence errors
      
      * fix include error
      
      * fix all dependence errors [build success]
      
      * remove useless include
      
      * recover useless include
      
      * use phi::ToNCCLDataType
      
      * fix namespace
      
      * update new register code
      
      * fix error in fused_gemm_epilogue_utils
      
      * fix error in FusedAttentionKernel param
      
      * finish fused_attention register code [build success]
      
      * add paddle::optional
      
      * add sig file
      
      * fix build error
      
      * fix a include error
      
      * restore forward code
      
      * update CMakeLists
      
      * trans Compute function to phi [build success]
      
      * add register code and fix include error [build success]
      
      * fix parameter sequence
      
      * add include file
      
      * update #if before include
      
      * update #if before include
      
      * fix grammar error
      
      * update codes for DropoutParam
      
      * remove const cast
      
      * trans some fluid api to phi api
      
      * remove const cast
      
      * trans some fluid api to phi api
      
      * add #if
      
      * update test code
      
      * update test codes
      
      * recover test codes
      
      * fix namespace and remove fluid include
      
      * recover random seed
      
      * remove fluid quant_helper
      
      * fix include error
      
      * include utils in funcs
      
      * change include file
      
      * move grad codes back to fluid folder
      
      * move grad codes back to fluid folder
      
      * fix sig file error
      
      * update include
      
      * recover codes to develop
      
      * update register codes
      
      * fix build error
      
      * recover fluid include
      
      * remove some fluid include
      
      * remove some fluid include
      
      * Update fused_attention_op.cu
      
      * remove fluid include
      
      * add some fluid include
      
      * Update fused_attention_op.cu
      
      * Update fused_attention_op.cu
      
      * Update fused_attention_op.cu
      
      * Update fused_attention_op.cu
      
      * remove useless include
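      The steps above amount to turning the fluid fused_attention backward Compute method into a free phi kernel: the device context comes first, inputs are const references, optional tensors use paddle::optional, and the CUDA registration moves to phi's macro. A minimal sketch of that pattern; the parameter list and body are illustrative only, not the op's real (much longer) signature.

      #include "paddle/phi/core/dense_tensor.h"
      #include "paddle/phi/core/kernel_registry.h"
      #include "paddle/utils/optional.h"

      namespace phi {
      namespace fusion {

      // Illustrative phi-style kernel: a free function taking the device
      // context first, inputs by const reference, optional inputs as
      // paddle::optional, and outputs as mutable pointers. The real
      // FusedAttentionGradKernel has far more parameters.
      template <typename T, typename Context>
      void FusedAttentionGradKernel(const Context& dev_ctx,
                                    const DenseTensor& out_grad,
                                    const paddle::optional<DenseTensor>& bias,
                                    DenseTensor* x_grad) {
        dev_ctx.template Alloc<T>(x_grad);  // allocate the output on the device
        // ... launch the backward CUDA kernels here ...
      }

      }  // namespace fusion
      }  // namespace phi

      // Registration replaces the old fluid REGISTER_OP_CUDA_KERNEL macro.
      PD_REGISTER_KERNEL(fused_attention_grad,
                         GPU,
                         ALL_LAYOUT,
                         phi::fusion::FusedAttentionGradKernel,
                         float,
                         double) {}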
  2. 10 Apr 2023, 1 commit
  3. 06 Apr 2023, 1 commit
    • Move fused_attention op to phi [migrate forward GPU OpKernel] (#51743) · a7ec8958
      Authored by Sonder (a sketch of the argument-mapping "sig" file mentioned below follows the commit list)
      * add kernel functions
      
      * update kernel functions
      
      * update func parameters' name
      
      * create codes for gpu device
      
      * adjust file locations
      
      * fix include error
      
      * remove dependent files to phi/
      
      * restore fused_attention_op.cu
      
      * fix dependence errors
      
      * fix dependence errors
      
      * fix include error
      
      * fix all dependence errors [build success]
      
      * remove useless include
      
      * recover useless include
      
      * use phi::ToNCCLDataType
      
      * fix namespace
      
      * update new register code
      
      * fix error in fused_gemm_epilogue_utils
      
      * fix error in FusedAttentionKernel param
      
      * finish fused_attention register code [build success]
      
      * add paddle::optional
      
      * add sig file
      
      * fix build error
      
      * fix a include error
      
      * update CMakeLists
      
      * fix parameter sequence
      
      * add include file
      
      * update #if before include
      
      * fix grammar error
      
      * update codes for DropoutParam
      
      * remove const cast
      
      * trans some fluid api to phi api
      
      * add #if
      
      * update test code
      
      * update test codes
      
      * recover test codes
      
      * trans fused_attention to fluid
      
      * move #endif to end
      
      * move #endif
      
      * delete useless files
      
      * use fused attention utils and recover random seed
      
      * remove fluid include in phi
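      Among the steps above is "add sig file": that file supplies the argument mapping that lets the legacy fluid op dispatch into the new phi kernel. A rough sketch of what such a mapping looks like, with placeholder input/attribute/output lists (the real fused_attention mapping enumerates every tensor and attribute of the op).

      #include "paddle/phi/core/compat/op_utils.h"

      namespace phi {

      // Illustrative argument-mapping function: it pairs the fluid op's
      // inputs, attributes and outputs with the phi kernel's parameter order.
      // The names below are a placeholder subset, not the full real lists.
      KernelSignature FusedAttentionOpArgumentMapping(
          const ArgumentMappingContext& ctx) {
        return KernelSignature("fused_attention",
                               {"X", "QKVW", "QKVBias"},     // inputs (subset)
                               {"epsilon", "dropout_rate"},  // attrs (subset)
                               {"Y"});                       // outputs (subset)
      }

      }  // namespace phi

      PD_REGISTER_ARG_MAPPING_FN(fused_attention,
                                 phi::FusedAttentionOpArgumentMapping);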
  4. 04 Apr 2023, 1 commit
  5. 30 Mar 2023, 1 commit
    • Speedup worker (#51760) · 8ca86d72
      Authored by pangengzheng
      * support run haokanctr model in heterps-models
      
      * polish setup.py
      
      * polish JVM_LIB in env_dict
      
      * align infer auc with DistPsArch pre-stable
      
      * async and multi thread data feed
      
      * rewrite dense tensor initialization
      
      * async infer shape and reuse memory
  6. 27 Mar 2023, 1 commit
    • Fused elementwise_(mul/div) (#50428) · 968f7f24
      Authored by Sławomir Siwek (a sketch of the "with_residual" attribute declaration follows the commit list)
      * extract Op and OPMaker to .h
      
      * extend pattern for fused_op
      
      * set "with_residual" default to false
      
      * adjust fuse passes
      
      * remove fc+eltwise flag
      
      * fused_output_scale
      
      * activation attrs
      
      * remove extra attrs
      
      * fix int8/bf16 unit tests
      
      * simplify RecomputeOutputDims
      
      * remove unused method
      
      * Add description for attributes
      
      * add extra check
      
      * adjust op compats
      
      * update quantize test
      
      * fix protobuf parsing error
      
      * fix int8 performance
      
      * fused elementwises
      
      * merge develop
      
      * remove activation
      
      * restore activation for existing add/sub ops
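      The "with_residual" attribute mentioned above is declared in the op's maker with a false default so that only the oneDNN fuse passes ever flip it on. A trimmed, hypothetical OpMaker showing just that declaration; the class name and tensor names here are illustrative.

      #include "paddle/fluid/framework/op_registry.h"

      namespace paddle {
      namespace operators {

      // Hypothetical, trimmed-down maker for a fused elementwise op,
      // illustrating how a fuse-pass-only attribute gets a false default.
      class FusedElementwiseOpMaker : public framework::OpProtoAndCheckerMaker {
       public:
        void Make() override {
          AddInput("X", "(Tensor) Left operand of the elementwise op.");
          AddInput("Y", "(Tensor) Right operand of the elementwise op.");
          AddOutput("Out", "(Tensor) Result of the fused elementwise op.");
          AddAttr<bool>("with_residual",
                        "(bool, default false) Whether residual data is "
                        "fused in by the oneDNN fuse passes.")
              .SetDefault(false);  // "set with_residual default to false"
          AddComment("Fused elementwise mul/div operator used by oneDNN passes.");
        }
      };

      }  // namespace operators
      }  // namespace paddle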
  7. 23 Mar 2023, 1 commit
  8. 22 Mar 2023, 3 commits
    • Extract fused_transpose op dedicated for oneDNN fuse passes (#50021) · 02296977
      Authored by Sławomir Siwek
      * extract common methods to reuse
      
      * add header for transpose ops
      
      * fused_transpose
      
      * Split big function
      
      * transpose2 tests
      
      * fused_transpose
      
      * Apply extra attributes
      
      * add pbtxt file
      
      * update pbtxt
      
      * Merge develop
      
      * add more strict op compats
      
      * code style
      
      * remove mkldnn_data_type
      
      * unify SetOutMemDescWithReshape2FuseSupport
      
      * adjust quantize-dequantize for transpose
      
      * remove appendact
      
      * transpose2 quantization
      
      * fix int8 tests
      
      * adjust transpose_op to current develop
      
      * delete fusion code from transpose_kernel
      
      * add fused transpose to NHWC unittest
      
      * change order
    • Add fused_linear_param_grad_add_kernel (#51805) · f59c5d8b
      Authored by sneaxiy
      * add fused_linear_param_grad_add_kernel
      
      * fix compile error
      
      * remove flag
      
      * fix ci compile error
      
      * fix ci compile error
      
      * revert pylayer revision
      
      * fix ci ut
      
      * improve performance
    • Fix conflict of CppTypeToDataType (#51919) · 535ddd3d
      Authored by Ruibiao Chen
  9. 21 Mar 2023, 1 commit
    • [PHI decoupling] Move DataType* from paddle::experimental to phi namespace (#51716) · 4638a62e
      Authored by iSerendipity (a sketch of the namespace change follows the commit list)
      * move DataType from paddle::experimental to phi
      
      * convert namespace
      
      * convert namespace
      
      * convert namespace
      
      * clarify namespace
      
      * convert more datatype
      
      * Revert "convert more datatype"
      
      This reverts commit 083b462959e6a22d4d8767707b628b95b396642e.
      
      * convert more in auto_code_generator
      
      * fix conflicts for XPU
      
      * fix namespace conflicts
      
      * fix errors
      
      * Revert "fix errors"
      
      This reverts commit f9d9958b54ee32141112274c8a5c3c381ab0f876.
      
      * fix errors
      
      * fix formatting
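      The net effect of the commits above is a one-line spelling change at every use site: the enum now lives in phi rather than behind the paddle::experimental alias. A minimal illustration; the helper function is made up, only the two qualified names matter.

      #include "paddle/phi/common/data_type.h"

      // Hypothetical helper showing the namespace migration performed above.
      void SetFloat32(phi::DataType* dtype) {
        // old spelling, removed by this PR:
        //   *dtype = paddle::experimental::DataType::FLOAT32;
        // new spelling:
        *dtype = phi::DataType::FLOAT32;
      }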
  10. 20 Mar 2023, 1 commit
    • Support Linear operation in cuBlaslt and plug into attn_gemm and fusedLinear forward op (#51124) · 2dfc3fa8
      Authored by limingshu (a sketch of the cuBLASLt epilogue setup follows the commit list)
      * optimization for fused linear op
      
      * fix code format
      
      * optimization for linear fused forward
      
      * merge with develop
      
      * fix bugs for gemm_epilogue
      
      * package of cublaslt epilogue type with enum
      
      * final fix before code reviewing
      
      * fix missed fusedType typo
      
      * fix code according to review suggestions
      
      * fix windows ci error
      
      * change location of MatmulPlanner
      
      * add some changes for compiler error fix
      
      ---------
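      The cuBLASLt feature these commits wrap is the matmul epilogue: the bias add (and optionally an activation) is attached to the GEMM descriptor so the fused Linear forward runs as a single call. A rough sketch under the assumption that the descriptor and device bias pointer are prepared elsewhere; the helper name is invented and error checking is omitted.

      #include <cublasLt.h>

      // Hypothetical helper: attach a bias epilogue to an existing cuBLASLt
      // matmul descriptor so GEMM + bias-add execute as one fused kernel.
      void AttachBiasEpilogue(cublasLtMatmulDesc_t op_desc, const void* bias) {
        cublasLtEpilogue_t epilogue = CUBLASLT_EPILOGUE_BIAS;
        cublasLtMatmulDescSetAttribute(
            op_desc, CUBLASLT_MATMUL_DESC_EPILOGUE, &epilogue, sizeof(epilogue));
        cublasLtMatmulDescSetAttribute(
            op_desc, CUBLASLT_MATMUL_DESC_BIAS_POINTER, &bias, sizeof(bias));
      }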
  11. 15 Mar 2023, 1 commit
    • [PHI] remove operator.h in blas.h (rebase to latest codebase) (#51472) · 427712df
      Authored by iSerendipity
      * Revert "Revert "【Hackathon No.67】remove operator.h in blas.h (#50989)" (#51467)"
      
      This reverts commit b9d91531.
      
      * remove cout
      
      * add header
      
      * fix missing header
      
      * fix refer fluid error
      
      * fix missing header
      
      * Update repeat_interleave_grad_kernel_impl.h
      
      Change to phi style datatype.
      
      * Update repeat_interleave_grad_kernel_impl.h
      
      Fix missing header
      
      * datatype fluid -> phi
      
      * paddle::experimental -> phi
      
      * fix reference error
      
      * fix reference error
      
      * fix reference error
      
      * fix errors
      
      * fix missing FLAGS
      
      * fix missing headers
      
      * fix missing headers
      
      * fix missing headers
      
      * fix missing headers
      
      * fix missing header
      
      * fix missing header
      
      * fix errors
  12. 14 Mar 2023, 1 commit
  13. 13 Mar 2023, 2 commits
  14. 10 Mar 2023, 2 commits
  15. 09 Mar 2023, 2 commits
  16. 07 Mar 2023, 1 commit
  17. 06 Mar 2023, 2 commits
    • convert todos to internal tasks (#51174) · 6b393e45
      Authored by Sławomir Siwek
    • [phi decoupling] decouple dependency to device_context in phi (Part 1) (#50865) · a1006b2b
      Authored by Huang Jiyi (a sketch of the relocated context pool usage follows the commit list)
      * move DeviceContextPool to phi
      
      * add EmplaceExternalContextFunc
      
      * update namespace
      
      * update cmake
      
      * fix bugs and create context_pool_impl.h
      
      * replace platform::is_xxx_place
      
      * fix bugs
      
      * update generator
      
      * fix bugs
      
      * fix bugs
      
      * fix bugs
      
      * fix bugs
      
      * fix bugs
      
      * fix bugs
      
      * fix bugs
      
      * fix enforce usage
      
      * Revert "fix enforce usage"
      
      This reverts commit 5f521f08a69713cee506e64a00ec6d9fba709e27.
      
      * fix bugs
      
      * rm XPUDeviceContext and CustomDeviceContext
      
      * fix bugs
      
      * fix context init bug
      
      * fix bugs after merge
      
      * fix bugs
      
      * fix name
      
      * fix mutable_data
      
      * update and fix bugs
      
      * fix bugs
      
      * update
      
      * fix bugs
      
      * fix name
      
      * fix bugs
      
      * merge
      
      * fix bugs
      
      * create context_pool in phi/backends
      
      * create context_pool in phi/backends
      
      * fix bugs
      
      * fix xpu bugs
      
      * fix rocm bugs
      
      * fix bugs
      
      * fix bugs
      
      * fix bugs
      
      * fix xpu bugs
      
      * update
      
      * update
      
      * fix bugs
      
      * fix bugs
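      After the relocation described above, phi code can reach a device context through the pool in phi/backends instead of including fluid's platform headers. A hedged sketch, assuming the pool keeps its fluid-era Instance()/Get() interface and the header path named in the commit list; treat it as illustrative rather than authoritative.

      #include "paddle/phi/backends/context_pool.h"
      #include "paddle/phi/common/place.h"

      // Assumed interface: Instance() returns the global pool and Get(place)
      // hands back the per-place phi::DeviceContext.
      phi::DeviceContext* GetDeviceContext(const phi::Place& place) {
        return phi::DeviceContextPool::Instance().Get(place);
      }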
  18. 03 Mar 2023, 1 commit
  19. 28 Feb 2023, 1 commit
  20. 26 Feb 2023, 1 commit
  21. 23 Feb 2023, 1 commit
  22. 22 Feb 2023, 1 commit
  23. 17 Feb 2023, 1 commit
    • Rename MultiTensorAdam To FusedAdam (#50449) · e6af9bd2
      Authored by yuehuayingxueluo
      * rename multi_tensor_adam to fused_adam
      
      * fix some bugs
      
      * fix CI coverage
      
      * rename test_fused_adam.py
      
      * fix some bug
      
      * add test_fused_adam_op.py
      
      * fix some bugs
      
      * fix fused_adam_op.cc
      
      * fix CI bugs
      
      * fix CI bug
      
      * fix CI bug
  24. 16 Feb 2023, 1 commit
  25. 15 Feb 2023, 1 commit
  26. 14 Feb 2023, 1 commit
  27. 08 Feb 2023, 3 commits
  28. 06 Feb 2023, 2 commits
  29. 03 Feb 2023, 2 commits
    • Replace matmul(v2) with fused_matmul during oneDNN fuse passes (#49515) · 5cfe1645
      Authored by Sławomir Siwek
      * replace matmul with matmul_v2 in fuse passes
      
      * Remove fusion logic from matmul
      
      * removing fusion methods
      
      * add proper name
      
      * adjust namespaces
      
      * clean attrs in python tests
      
      * delete checkpoint and restore matmul version
      
      * remove unused code
      
      * matmul and reshape/transpose fuses migrated
      
      * split MatmulOneDNN headers
      
      * fuse activation and eltwise_add
      
      * add fuse_activation
      
      * matmul_transpose_reshape/reshape_transpose_matmul
      
      * matmul + elementwise_add (fused)
      
      * activation temporary modification
      
      * merge newest develop
      
      * remove depedency from other PR
      
      * revert pbtxt
      
      * remove placeholders from matmul_v2
      
      * add description in OPMaker
      
      * remove matmul_v2_op.h and all depedencies
      
      * remove dims changing in base op
      
      * add possibility to fuse already fused_matmul
      
      * restart broken CI
      
      * Empty-Commit
      
      * revert matmul_utils.h
      
      * codestyle
      
      * adjust imports
      
      * add pbtxt file
      
      * 100% matmul unit tests coverage
      
      * trigger CI with minimal changes to develop
      
      * adjust changes to develop
      
      * add fused_matmul op
      
      * inherit base ops
      
      * add "v2"
      
      * move OPMaker
      
      * Gradually add fused_matmul files
      
      * second batch of fused_matmul changes
      
      * split infershapes of matmul_v2 and fused_matmul
      
      * inherit fused_matmul from matmul_v2
      
      * Update paddle/phi/backends/onednn/onednn_reuse.h
      Co-authored-by: Tomasz Socha <tomasz.socha@intel.com>
      
      * Update paddle/phi/kernels/fusion/onednn/fused_matmul_kernel.cc
      Co-authored-by: Tomasz Socha <tomasz.socha@intel.com>
      
      ---------
      Co-authored-by: Tomasz Socha <tomasz.socha@intel.com>
    • Fused attention pass backward op replace. (#50186) · 7e8ef328
      Authored by Yuang Liu
  30. 01 Feb 2023, 1 commit
    • Preln fix (#49802) · e03718f5
      Authored by Wang Bojun
      * preln_residual 2 fused_bias_residual
      
      * skip layernorm fix and ut
      
      * code refine
      
      * code style refine
      
      * fix ut
      
      * fix output
      
      * add trt layer fall back info
      
      * refine op teller and ut
      
      * DropoutMaskOut output fix