1. 12 Apr 2023, 1 commit
    • [Move Test] xpu (#52661) · 9a7c83bd
      Committed by RedContritio
      * move python/paddle/fluid/tests/unittests/xpu to test/xpu
      
      * update CMakeLists.txt
      
      * remove xpu in fluid/tests/unittests/
      
      * add path to op_test_xpu
      
      * fix incorrect path
      
      * update test script
      
      * fix test_adadelta_op_xpu error
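      A minimal sketch of what the relocation above implies for a test that imports the shared op_test_xpu helper; the directory layout, the repo_root lookup, and the XPUOpTest name are assumptions inferred from the commit messages, not code taken from the PR:

        import os
        import sys

        # Assumed layout after the move: XPU unit tests live under test/xpu
        # rather than python/paddle/fluid/tests/unittests/xpu, so the helper
        # directory is appended to sys.path before importing from it.
        repo_root = os.getcwd()  # illustrative; a real test knows its own root
        sys.path.append(os.path.join(repo_root, "test", "xpu"))

        try:
            from op_test_xpu import XPUOpTest  # noqa: F401  (assumed helper class)
        except ImportError:
            pass  # only resolvable inside a Paddle checkout that has test/xpu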
  2. 11 Apr 2023, 6 commits
  3. 10 Apr 2023, 6 commits
  4. 09 Apr 2023, 1 commit
  5. 08 Apr 2023, 2 commits
  6. 07 Apr 2023, 5 commits
  7. 06 Apr 2023, 3 commits
    • Remove oneDNN-specific attributes from matmul (#49444) · 4d97b25d
      Committed by Sławomir Siwek
      * replace matmul with matmul_v2 in fuse passes
      
      * Remove fusion logic from matmul
      
      * removing fusion methods
      
      * add proper name
      
      * adjust namespaces
      
      * clean attrs in python tests
      
      * delete checkpoint and restore matmul version
      
      * remove unused code
      
      * matmul and reshape/transpose fuses migrated
      
      * split MatmulOneDNN headers
      
      * fuse activation and eltwise_add
      
      * add fuse_activation
      
      * matmul_transpose_reshape/reshape_transpose_matmul
      
      * matmul + elementwise_add (fused)
      
      * activation temporary modification
      
      * restore matmul(v1) version 0
      
      * merge newest develop
      
      * remove dependency from other PR
      
      * revert pbtxt
      
      * remove placeholders from matmul_v2
      
      * add description in OPMaker
      
      * remove matmul_v2_op.h and all dependencies
      
      * remove dims changing in base op
      
      * add possibility to fuse already fused_matmul
      
      * restart broken CI
      
      * Empty-Commit
      
      * revert matmul_utils.h
      
      * codestyle
      
      * adjust imports
      
      * add pbtxt file
      
      * 100% matmul unit tests coverage
      
      * trigger CI with minimal changes to develop
      
      * adjust changes to develop
      
      * add fused_matmul op
      
      * inherit base ops
      
      * add "v2"
      
      * move OPMaker
      
      * Gradually add fused_matmul files
      
      * second batch of fused_matmul changes
      
      * split infershapes of matmul_v2 and fused_matmul
      
      * merge code from other PR
      
      * 2023
      
      * inherit fused_matmul from matmul_v2
      
      * Update paddle/phi/backends/onednn/onednn_reuse.h
      Co-authored-by: Tomasz Socha <tomasz.socha@intel.com>
      
      * Update paddle/phi/kernels/fusion/onednn/fused_matmul_kernel.cc
      Co-authored-by: Tomasz Socha <tomasz.socha@intel.com>
      
      * resolve conflicts
      
      * codestyle
      
      * simplify isgemmlinear
      
      * 2023
      
      * remove import
      
      * reuse methods
      
      * matmul_v2_mkldnn cleanup
      
      * simplify ExecuteMatMulV1Grad
      
      * matmul refactored
      
      * fc
      
      * SetOutMemDescWithLogicalLayoutFusesSupport
      
      * matmul_v2
      
      * alpha support
      
      * group repetitive funcs
      
      * matmul utils
      
      * execute matmul methods
      
      * restore registered kernel names
      
      * split header and impl files
      
      * remove double negatives
      
      * reduce number of modified files
      
      * adjust ExecuteMatmul
      
      * add scales for ut
      
      * dates
      
      * limit number of modified files
      
      * fluid imports
      
      * remove alpha
      
      * codestyle
      
      ---------
      Co-authored-by: Tomasz Socha <tomasz.socha@intel.com>
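      For orientation, a brief sketch of the user-facing contract this series preserves: paddle.matmul keeps only its canonical arguments, while the oneDNN-specific fusion described above (activation, elementwise_add, reshape/transpose patterns) is meant to live in a separate fused_matmul op inserted by graph passes, not in this API. The example below only exercises the public API and assumes nothing about the fused op's internal attributes:

        import paddle

        # Plain matmul with its canonical transpose flags; no oneDNN fusion
        # attributes are part of this call after the cleanup described above.
        x = paddle.randn([2, 3, 4])
        y = paddle.randn([2, 4, 5])
        out = paddle.matmul(x, y, transpose_x=False, transpose_y=False)
        print(out.shape)  # [2, 3, 5]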
    • rem is_compiled_with_npu (#52385) · 7976e2a3
      Committed by Kim Yann
      * rem is_compiled_with_npu
      
      * rem npu related code
      
      * make lint happy
      
      * rem test
      
      * remove some tests
      
      * Update grad_scaler.py
      
      * fix an error
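      Since this change removes paddle.is_compiled_with_npu, downstream code that still probes for NPU support should not assume the attribute exists. A small defensive sketch; the npu_support_compiled helper name is made up for illustration:

        import paddle

        def npu_support_compiled() -> bool:
            # is_compiled_with_npu was removed in #52385; look it up defensively
            # so the same script runs on builds from either side of the change.
            probe = getattr(paddle, "is_compiled_with_npu", None)
            return bool(probe()) if callable(probe) else False

        print(npu_support_compiled())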
  8. 05 Apr 2023, 1 commit
  9. 04 Apr 2023, 6 commits
    • Rewrite quant2_int8_nlp_comparison test (#51995) · 7ee31e72
      Committed by Zuza Gawrysiak
      * Correct ernie int8 test to use new QAT process
      
      * Add comment
      
      * Fix code style
      
      * Fix string formatting
      
      * Fix cmake files
      
      ---------
      Co-authored-by: wozna <joanna.wozna@intel.com>
    • [Dy2St] support train step in to_static (#51693) · 7728efb4
      Committed by Nyakku Shigure
      Co-authored-by: xiongkun <xiongkun03@baidu.com>
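      A minimal sketch of the feature named in the title above: wrapping a whole train step (forward, loss, backward, optimizer update) in paddle.jit.to_static rather than only the forward pass. The network, optimizer, and data below are placeholders, not code from the PR:

        import paddle

        net = paddle.nn.Linear(10, 1)
        opt = paddle.optimizer.SGD(learning_rate=0.01, parameters=net.parameters())

        @paddle.jit.to_static
        def train_step(x, y):
            # forward, loss, backward and the optimizer update all run inside
            # the converted static program.
            loss = paddle.nn.functional.mse_loss(net(x), y)
            loss.backward()
            opt.step()
            opt.clear_grad()
            return loss

        loss = train_step(paddle.randn([4, 10]), paddle.randn([4, 1]))
        print(float(loss))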
    • Improve new executor static build (#51149) · 5bac67d4
      Committed by Ruibiao Chen
      * Improve new executor static build
      
      * Skip GC for static build
      
      * Skip infershape for static build
      
      * Handle read_op
      
      * Add fused_attention to OpsWithFluidKernelNeedMoveToPhi
      
      * Fix argsort typos
      
      * Add sequence_pool to OpsWithFluidKernelNeedMoveToPhi
      
      * Fix skip share lod errors
      
      * Fix errors for adam
      
      * Fix errors for eigvals, memcpy and fake_quantize
      
      * Add static_build.cc
      
      * Add black list
      
      * Fix CI errors
      
      * Fix CI errors
      
      * Fix CI errors
      
      * Fix TensorArray
      
      * Fix TensorArray
      
      * Add update_loss_scaling to OpsNeedSetOutputDtypeWhenRegisterPhiKernel
      
      * Fix copy
      
      * Fix errors
      
      * Fix momentum
      
      * Skip mkldnn
      
      * Fix CI errors
      
      * Fix c_sync_calc_stream_op
      
      * Fix CINN
      
      * Fix while op
      
      * All CI pass, disable FLAGS to merge code, enable it after more tests in future
      
      * Add UTs
      
      * Fix typos
      
      * Fix typos
      
      * Add mkldnn UT
      
      * Remove mkldnn test
      
      * Fix typos
      
      * Fix dist test
      
      * Fix typos
      
      * Fix CI errors
      
      * Fix CI errors
      
      * Add UTs
      
      * Fix typos
      
      * Fix typos
      
      * Add sparse tests
      
      * ToComplexType -> ToComplex
      
      * Add test_matmul_op_static_build to disable_win_inference_test
    • 【Prim】Support fuse jit save (#52344) · fca0a4bf
      Committed by Jiabin Yang
      * fix_prim
      
      * fix bug
      
      * add note
      
      * fix logic
      
      * fix
      
      * add note
      
      * fix check
      
      * fix bug
      
      * fix bug
      
      * fix bug
      
      * add debug
      
      * fix check
      
      * fix bug
      
      * sync print log
      
      * fix test case
      
      * change default
      
      * support jit save with fuse
      
      * add more check
      
      * sync with pr 52120
      
      * add more ut
      
      ---------
      Co-authored-by: cyber-pioneer <chenzhuo@tju.edu.cn>
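      For orientation, a sketch of the user-level export flow this entry touches: saving a to_static network with paddle.jit.save. Whether and how prim/fusion logic runs during the save is internal to the framework and not shown here; the layer and output prefix are illustrative placeholders only:

        import paddle

        net = paddle.nn.Sequential(paddle.nn.Linear(8, 8), paddle.nn.ReLU())
        static_net = paddle.jit.to_static(
            net, input_spec=[paddle.static.InputSpec(shape=[None, 8], dtype="float32")]
        )
        # Export an inference program plus parameters; "example_net" is a
        # placeholder prefix, not a path used by the PR.
        paddle.jit.save(static_net, "./example_net")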
    • Remove prim skip ops (#52470) · cf361716
      Committed by WangZhen
    • remove op.py in fluid (#52248) · 273783b3
      Committed by LoneRanger
      * remove op.py
      
      * [Zero-Dim] change Tensor.numpy() usage to other equivalent usage, avoid hack (#52197)
      
      * [BugFix] fix compute error in fused_dropout_add (#52261)
      
      * fix bug
      
      * add utest
      
      * add utest
      
      * [CodeStyle][UP034] remove (()) cases (#52060)
      
      * add up34
      
      * modify var name in loop
      
      * revert changes in test_slice
      
      * Revert "modify var name in loop"
      
      This reverts commit 6d748e371afb417054ed0c6b36fd11e87959a90d.
      
      * temporarily ignore test_slice.py
      
      * add comment
      
      * empty commit, re-trigger all ci
      
      * fix inc
      
      ---------
      Co-authored-by: SigureMo <sigure.qaq@gmail.com>
      
      * [AMP OP&Test] add unittest for log_softmax (#52264)
      
      * Fix_Linux_[-Wterminate]warning (#52186)
      
      * [CustomOP Inplace] Automap inplace dtype and shape, prepare for vector<Tensor> output (#52214)
      
      * [CustomOP Inplace] Automap inplace dtype and shape, prepare for vector<Tensor> output
      
      * delete dtype,shape func of multi_inplace op
      
      * [CustomOP Inplace] Automap inplace dtype and shape, support vector<Tensor> output
      
      * [CustomOP Inplace] Auto-generate python API for inplace vector<Tensor> output
      
      * [AMP OP&Test] add float16 optest for reshape_op (#51678)
      
      * [AMP OP&Test] add float16 optest for reshape_op
      
      * add public_python_api
      
      * [AMP OP&Test] Add fp16/bf16 to clip op (#52158)
      
      * add fp16/bf16 to clip op
      
      * fix as reviewed
      
      * update test_clip_op.py
      
      * update test_clip_op.py
      
      * fix bug
      
      * fix code style
      
      * fix bug
      
      * fix bug
      
      ---------
      Co-authored-by: Zhou Wei <1183042833@qq.com>
      Co-authored-by: ShenLiang <1422485404@qq.com>
      Co-authored-by: 张春乔 <83450930+Liyulingyue@users.noreply.github.com>
      Co-authored-by: SigureMo <sigure.qaq@gmail.com>
      Co-authored-by: Ccc <52520497+juncaipeng@users.noreply.github.com>
      Co-authored-by: Galaxy1458 <55453380+Galaxy1458@users.noreply.github.com>
      Co-authored-by: HongyuJia <jiahongyu@baidu.com>
      Co-authored-by: zhaoyingli <86812880+zhaoyinglia@users.noreply.github.com>
      Co-authored-by: wuyefeilin <30919197+wuyefeilin@users.noreply.github.com>
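      As a small illustration of the [CodeStyle][UP034] item folded into this entry: pyupgrade's UP034 rule removes extraneous parentheses, so the two calls below behave identically and the second is the form the cleanup rewrites to:

        print(("value"))  # redundant inner parentheses flagged by UP034
        print("value")    # equivalent call after the cleanup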
  10. 03 Apr 2023, 3 commits
  11. 01 Apr 2023, 1 commit
  12. 31 Mar 2023, 5 commits