1. 07 April 2023 (5 commits)
  2. 06 April 2023 (9 commits)
    • Remove oneDNN-specific attributes from matmul (#49444) · 4d97b25d
      Committed by Sławomir Siwek
      * replace matmul with matmul_v2 in fuse passes
      
      * Remove fusion logic from matmul
      
      * removing fusion methods
      
      * add proper name
      
      * adjust namespaces
      
      * clean attrs in python tests
      
      * delete checkpoint and restore matmul version
      
      * remove unused code
      
      * matmul and reshape/transpose fuses migrated
      
      * split MatmulOneDNN headers
      
      * fuse activation and eltwise_add
      
      * add fuse_activation
      
      * matmul_transpose_reshape/reshape_transpose_matmul
      
      * matmul + elementwise_add (fused)
      
      * activation temporary modification
      
      * restore matmul(v1) version 0
      
      * merge newest develop
      
      * remove dependency from other PR
      
      * revert pbtxt
      
      * remove placeholders from matmul_v2
      
      * add description in OPMaker
      
      * remove matmul_v2_op.h and all dependencies
      
      * remove dims changing in base op
      
      * add possibility to fuse already fused_matmul
      
      * restart broken CI
      
      * Empty-Commit
      
      * revert matmul_utils.h
      
      * codestyle
      
      * adjust imports
      
      * add pbtxt file
      
      * 100% matmul unit test coverage
      
      * trigger CI with minimal changes to develop
      
      * adjust changes to develop
      
      * add fused_matmul op
      
      * inherit base ops
      
      * add "v2"
      
      * move OPMaker
      
      * Gradually add fused_matmul files
      
      * second batch of fused_matmul changes
      
      * split infershapes of matmul_v2 and fused_matmul
      
      * merge code from other PR
      
      * 2023
      
      * inherit fused_matmul from matmul_v2
      
      * Update paddle/phi/backends/onednn/onednn_reuse.h
      Co-authored-by: Tomasz Socha <tomasz.socha@intel.com>
      
      * Update paddle/phi/kernels/fusion/onednn/fused_matmul_kernel.cc
      Co-authored-by: Tomasz Socha <tomasz.socha@intel.com>
      
      * resolve conflicts
      
      * codestyle
      
      * simplify isgemmlinear
      
      * 2023
      
      * remove import
      
      * reuse methods
      
      * matmul_v2_mkldnn cleanup
      
      * simplify ExecuteMatMulV1Grad
      
      * matmul refactored
      
      * fc
      
      * SetOutMemDescWithLogicalLayoutFusesSupport
      
      * matmul_v2
      
      * alpha support
      
      * group repetitive funcs
      
      * matmul utils
      
      * execute matmul methods
      
      * restore registered kernel names
      
      * split header and impl files
      
      * remove double negatives
      
      * reduce number of modified files
      
      * adjust ExecuteMatmul
      
      * add scales for ut
      
      * dates
      
      * limit number of modified files
      
      * fluid imports
      
      * remove alpha
      
      * codestyle
      
      ---------
      Co-authored-by: Tomasz Socha <tomasz.socha@intel.com>
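      The pattern these passes target, sketched in NumPy for illustration (names like fused_matmul and fuse_activation follow the PR; the function bodies below are not Paddle code):

      ```python
      import numpy as np

      def unfused(x, y, bias):
          out = np.matmul(x, y)      # plain matmul / matmul_v2 node
          out = out + bias           # separate elementwise_add node
          return np.maximum(out, 0)  # separate activation (relu) node

      def fused_matmul(x, y, bias):
          # What the single fused_matmul op computes once the fuse passes
          # collapse the three nodes; fusion attributes (e.g. fuse_activation)
          # now live on this op instead of on plain matmul.
          return np.maximum(np.matmul(x, y) + bias, 0)

      x, y = np.random.rand(2, 4), np.random.rand(4, 3)
      bias = np.random.rand(3)
      assert np.allclose(unfused(x, y, bias), fused_matmul(x, y, bias))
      ```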
    • Move fused_attention op to phi [migrate forward GPU OpKernel] (#51743) · a7ec8958
      Committed by Sonder
      * add kernel functions
      
      * update kernel functions
      
      * update func parameter names
      
      * create codes for gpu device
      
      * adjust file locations
      
      * fix include error
      
      * remove dependent files to phi/
      
      * restore fused_attention_op.cu
      
      * fix dependency errors

      * fix dependency errors
      
      * fix include error
      
      * fix all dependency errors [build success]
      
      * remove useless include
      
      * recover useless include
      
      * use phi::ToNCCLDataType
      
      * fix namespace
      
      * update new register code
      
      * fix error in fused_gemm_epilogue_utils
      
      * fix error in FusedAttentionKernel param
      
      * finish fused_attention register code [build success]
      
      * add paddle::optional
      
      * add sig file
      
      * fix build error
      
      * fix a include error
      
      * update CMakeLists
      
      * fix parameter sequence
      
      * add include file
      
      * update #if before include
      
      * fix grammar error
      
      * update codes for DropoutParam
      
      * remove const cast
      
      * trans some fluid api to phi api
      
      * add #if
      
      * update test code
      
      * update test codes
      
      * recover test codes
      
      * trans fused_attention to fluid
      
      * move #endif to end
      
      * move #endif
      
      * delete useless files
      
      * use fused attention utils and recover random seed
      
      * remove fluid include in phi
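      For orientation, a NumPy sketch of the forward pattern the migrated kernel fuses (the dropout, residual, and layer-norm stages that the real FusedAttentionKernel also covers are omitted; shapes and names are assumptions):

      ```python
      import numpy as np

      def fused_attention_forward(x, w_qkv, w_out, num_heads):
          # x: [batch, seq, dim]; w_qkv: [dim, 3*dim]; w_out: [dim, dim]
          b, s, d = x.shape
          hd = d // num_heads
          qkv = x @ w_qkv                            # single fused QKV matmul
          q, k, v = np.split(qkv, 3, axis=-1)
          def heads(t):                              # [b, s, d] -> [b, h, s, hd]
              return t.reshape(b, s, num_heads, hd).transpose(0, 2, 1, 3)
          q, k, v = heads(q), heads(k), heads(v)
          logits = q @ k.transpose(0, 1, 3, 2) / np.sqrt(hd)
          w = np.exp(logits - logits.max(-1, keepdims=True))
          w /= w.sum(-1, keepdims=True)              # softmax
          ctx = (w @ v).transpose(0, 2, 1, 3).reshape(b, s, d)
          return ctx @ w_out                         # output projection

      x = np.random.rand(2, 5, 8)
      out = fused_attention_forward(x, np.random.rand(8, 24),
                                    np.random.rand(8, 8), num_heads=2)
      print(out.shape)  # (2, 5, 8)
      ```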
    • Modify the condition of _get_places in fp16 (#52508) · 160dfd01
      Committed by Zhang Zheng
    • feat: add composite rule of roll grad (#52532) · 348a36b5
      Committed by Kang Zhao
      * feat: add relu composite rule
      
      * feat: add relu composite rule, maximum op
      
      * feat: add relu composite rule, maximum op
      
      * feat: add relu composite rule, polish comments
      
      * feat: add relu composite rule, polish comments
      
      * feat: add relu composite rule, add python api of relu
      
      * feat: add relu composite rule, commit hook
      
      * fix: maximum type error & ban cinn test
      
      * fix: maximum input sequence bugs
      
      * resolve conflicts
      
      * fix: code style bugs
      
      * add: relu fp16 test
      
      * feat: add rsqrt composite rule
      
      * feat: add rsqrt composite rule
      
      * resolve conflicts of composite rule
      
      * fix: delete check eager
      
      * feat: add roll grad composite rule
      
      * fix minus shift
      
      * fix test roll op
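      The composite rule itself is compact: roll only permutes elements, so the backward of roll is roll with negated shifts (the "fix minus shift" item above). A NumPy check:

      ```python
      import numpy as np

      x = np.arange(6.0).reshape(2, 3)
      shifts, axis = 1, 1
      out = np.roll(x, shifts, axis)
      out_grad = np.ones_like(out) * np.arange(3.0)  # arbitrary upstream grad
      x_grad = np.roll(out_grad, -shifts, axis)      # composite rule
      # Rolling the result forward again must reproduce the upstream grad:
      assert np.allclose(np.roll(x_grad, shifts, axis), out_grad)
      ```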
    • [CINN] disable CINN test_mean_op unittest to pass CINN CI (#52510) · b36ac56d
      Committed by jiangcheng
      * [CINN] disable CINN test_mean_op unittest to pass CINN CI
      
      * disable test_mean_op for pass ci
    • rem is_compiled_with_npu (#52385) · 7976e2a3
      Committed by Kim Yann
      * rem is_compiled_with_npu
      
      * rem npu related code
      
      * make lint happy
      
      * rem test
      
      * remove some tests
      
      * Update grad_scaler.py
      
      * fix an error
    • [PaddlePaddle Hackathon 4] No.63 add fp16 and bf16 for eye and frame (#51819) · ae10133a
      Committed by LoneRanger
      * add fp16 and bf16 for eye and frame
      
      * fix bug
      
      * fix bug
      
      * fix bug
      
      * Update test_frame_op.py
      
      fix code style
      
      * fix bug
      
      * fix bug
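      A usage sketch of the new dtype coverage (assuming a GPU build; exact bf16 availability depends on hardware):

      ```python
      import paddle

      ident = paddle.eye(3, dtype='float16')          # fp16 identity matrix
      x = paddle.rand([8]).astype('float16')
      frames = paddle.signal.frame(x, frame_length=4, hop_length=2)
      print(ident.dtype, frames.dtype)                # both paddle.float16
      ```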
    • [AMP OP&Test] Add fp16/bf16 support for logical op (#52112) · b10e4577
      Committed by WJJ1995
      * fixed glog
      
      * add
      
      * add bfloat16 test for logical op
      
      * rm useless code
      
      * add uint16
      
      * deal with comments
      
      * fixed code style
      
      * fixed code style
      
      * fixed for ci
      
      * deal with comments
      
      * fixed for ci
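      A sketch of what the added coverage exercises: logical ops now accept fp16 (and bf16, tested as uint16) inputs, treating nonzero values as True:

      ```python
      import paddle

      x = paddle.to_tensor([0.0, 1.0, 2.0], dtype='float16')
      y = paddle.to_tensor([1.0, 0.0, 2.0], dtype='float16')
      print(paddle.logical_and(x, y))  # [False, False, True]
      print(paddle.logical_not(x))     # [True, False, False]
      ```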
  3. 05 April 2023 (1 commit)
  4. 04 April 2023 (10 commits)
  5. 03 April 2023 (8 commits)
  6. 01 April 2023 (1 commit)
  7. 31 March 2023 (6 commits)
    • gather with doc (#52105) · 77d24854
      Committed by zhenhailiu
      * gather with doc
      
      * resolve comment
      
      * polish
      
      * polish
      
      * code style
      
      * polish doc
      
      * add_test
      
      * polish
      
      * polish
      
      * add test check
      
      * add test check
      
      * polish
      
      * polish
      
      * polish
      
      * polish
      
      * fix_time_out
      
      * polish
      
      * fix timeout
      
      * fix_timeout
      
      * polish
      
      * polish
      
      * polish
      
      * polish
      
      * polish
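      Assuming this PR documents the collective gather (the timeout fixes above point at a distributed test), its semantics in a local, framework-free sketch:

      ```python
      import numpy as np

      def gather(contributions, dst, my_rank):
          # Every rank contributes one tensor; only the dst rank receives all.
          return list(contributions) if my_rank == dst else None

      world = [np.full((2,), float(r)) for r in range(4)]
      print(gather(world, dst=0, my_rank=0))  # rank 0: all four tensors
      print(gather(world, dst=0, my_rank=2))  # other ranks: None
      ```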
    • register fluid kernels to phi [part 2] (#52044) · d05b73e4
      Committed by huangjiyi
      * update bipartite_match
      
      * update
      
      * fix bug
      
      * fix test
      
      * fix bug
      
      * fix Kunlun-KP-Build
      
      * Revert "fix Kunlun-KP-Build"
      
      This reverts commit ceab63cc23079fd6839c826bb52db893fb056355.
      
      * update
    • [AMP_OP&Test] improve FP16 and BF16 OpTest for maximum, minimum and multiply op (#52256) · 4c6ad5c0
      Committed by FlyingQianMM
      * [AMP_OP&Test] add FP16 and BF16 OpTest for minimum; add FP16 OpTest for multiply
      
      * [AMP_OP&Test] reset atol and max_relative_error for multiply fp16 and bf16 optest
      
      * [AMP_OP&Test] delete manually set atol and max_relative_error
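      The OpTest pattern in question, sketched (import path and tolerance defaults are assumptions; the PR's point is dropping hand-set tolerances so the framework's dtype-aware defaults apply):

      ```python
      import numpy as np
      from eager_op_test import OpTest  # path as of early 2023; may differ

      class TestElementwiseMulFP16(OpTest):
          def setUp(self):
              self.op_type = "elementwise_mul"
              x = np.random.rand(4, 8).astype(np.float16)
              y = np.random.rand(4, 8).astype(np.float16)
              self.inputs = {'X': x, 'Y': y}
              self.outputs = {'Out': x * y}

          def test_check_output(self):
              self.check_output()  # no manual atol

          def test_check_grad(self):
              self.check_grad(['X', 'Y'], 'Out')  # no manual max_relative_error
      ```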
    • [XPU] fix ut: add rtol. (#52398) · 6efeb227
      Committed by houj04
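      "Add rtol" here means comparing against the reference with a relative tolerance instead of exact equality, e.g.:

      ```python
      import numpy as np

      ref = np.array([1.0000, 2.0000])
      xpu = np.array([1.0001, 1.9999])   # result with device rounding
      np.testing.assert_allclose(xpu, ref, rtol=1e-3)  # passes within 0.1%
      ```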
    • [Test Mv] move mkldnn unittests to test dir (#51911) · 63865c74
      Committed by iSerendipity
      * move mkldnn unittests to test dir
      
      * add py_test_modules funcs
      
      * move py_test_modules to test/mkldnn
      
      * fix import error
      
      * Revert "move py_test_modules to test/mkldnn"
      
      This reverts commit 0277e0c29c22e5c29b7a3b843103f3d0b711c442.
      
      * fix import error by conflicts
      
      * fix errors
      
      * fix mistake on solving conflicts
    • [Prim] Add prod backward composite rule (#51238) · a0069278
      Committed by chenjian
      * first commit
      
      * add registry
      
      * add unit test
      
      * fix format
      
      * add unit test
      
      * fix bug
      
      * replace unsqueeze with reshape
      
      * fix
      
      * fix unit test
      
      * update test
      
      * update test
      
      * fix unit test
      
      * fix
      
      * fix
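      The rule being registered, sketched for the zero-free case: for out = prod(x, axis), x_grad = out_grad * out / x, with the reduced axis restored by reshape (the "replace unsqueeze with reshape" item above):

      ```python
      import numpy as np

      x = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
      out = np.prod(x, axis=1)                      # [6., 120.]
      out_grad = np.ones_like(out)
      x_grad = (out_grad * out).reshape(2, 1) / x   # prod of the other elements
      assert np.isclose(x_grad[0, 1], 1.0 * 3.0)    # d(1*2*3)/d2 = 1*3
      ```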