1. 08 Nov 2022, 2 commits
  2. 03 Nov 2022, 2 commits
  3. 29 Oct 2022, 1 commit
  4. 28 Oct 2022, 1 commit
  5. 20 Oct 2022, 2 commits
    • [cherry pick] Add FusedMultiTransformer fuse pass for GPT3 (#47150) · 396427a7
      Authored by Kaipeng Deng
      * add fused_attention_pass. test=develop
      * support fp16. test=develop
      * fix format. test=develop
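As background for the fuse pass above, one standard trick in fused-transformer passes is collapsing the three Q/K/V projections of an attention layer into a single GEMM over a concatenated weight. The following is a minimal, generic NumPy sketch of that equivalence, not Paddle's actual pass or kernel; all names and shapes are illustrative:

```python
import numpy as np

# Hypothetical sizes: 4 tokens, hidden size H.
H = 8
x = np.random.rand(4, H).astype(np.float32)
wq, wk, wv = (np.random.rand(H, H).astype(np.float32) for _ in range(3))

# Unfused: three separate projections (three kernel launches).
q, k, v = x @ wq, x @ wk, x @ wv

# Fused: one GEMM over the concatenated weight, then split.
w_qkv = np.concatenate([wq, wk, wv], axis=1)   # shape (H, 3H)
q2, k2, v2 = np.split(x @ w_qkv, 3, axis=1)

assert np.allclose(q, q2) and np.allclose(k, k2) and np.allclose(v, v2)
```

The fused form produces bitwise-comparable results while replacing three matmul launches with one, which is the kind of saving such passes target.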
    • [cherry-pick] Fix quantize model deploy bug in MKLDNN (#47119) · c2d344dd
      Authored by yeliang2258
      * Fix quantize model deploy bugs when using MKLDNN (#45920)
      * fix immutable op quantize bugs
      * fix
      * fix build bug
      * fix test
      * notest,test=inference
      * fix ppyoloe acc drop bugs
      * fix test
      * fix test
      * add test
      * fix
      * fix
      * fix test
      * fix refined name bug
      * fix test
      * bias fix
      * fix matmul weight dequant bug
      * re-ci
      * fix tester
      * fix test
      * fix tester
      * update weight dequantize func
      * update code
      * update test for coverage
      * update test
      * update cmake
      * update cmakelist
      * update code
      * rerun ci
      * remove useless code
      * re-ci
      * update code
      * update code
      * fix header
      * update code for log
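Several of the items above concern weight dequantization. As a generic illustration of what such a step does (this is not the actual MKLDNN code; the function and shapes are hypothetical), an int8 weight is mapped back to float by a per-tensor or per-channel scale:

```python
import numpy as np

def dequantize_weight(w_int8: np.ndarray, scale) -> np.ndarray:
    """Map int8 weights back to float32: w_fp32 = w_int8 * scale.

    `scale` may be a scalar (per-tensor quantization) or a vector
    that broadcasts over the weight's output-channel axis.
    """
    return w_int8.astype(np.float32) * scale

w_int8 = np.array([[-128, 0], [64, 127]], dtype=np.int8)
w_fp32 = dequantize_weight(w_int8, scale=np.float32(0.5))
# -128 * 0.5 = -64.0, 127 * 0.5 = 63.5
```

Bugs in this step (wrong scale axis, missing cast) typically show up exactly as the accuracy drops mentioned in the commit messages.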
  6. 18 Oct 2022, 1 commit
  7. 17 Oct 2022, 1 commit
  8. 14 Oct 2022, 4 commits
  9. 11 Oct 2022, 1 commit
  10. 28 Sep 2022, 1 commit
  11. 20 Sep 2022, 2 commits
  12. 15 Sep 2022, 1 commit
  13. 07 Sep 2022, 1 commit
    • Layernorm shift partition (#45736) · 960109af
      Authored by wenbin
      * first commit
      * convert done
      * correct format
      * layernorm_shift_partition
      * correct convert
      * redefine plugin
      * runnable
      * bug fix
      * modify ShiftPartitionPattern
      * correct
      * add UT
      * modify ut
      * compile
      * modify enforce
      * modify UT
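The layernorm_shift_partition pattern fuses LayerNorm with the cyclic window shift-and-partition step used by Swin-style transformers. A minimal NumPy sketch of the shift-and-partition part alone (illustrative shapes only, not the plugin's real interface):

```python
import numpy as np

def shift_partition(x: np.ndarray, window: int, shift: int) -> np.ndarray:
    """Cyclically shift an (H, W, C) feature map, then split it into
    non-overlapping window*window patches of shape (window*window, C)."""
    h, w, c = x.shape
    shifted = np.roll(x, shift=(-shift, -shift), axis=(0, 1))
    # (H, W, C) -> (H//win, win, W//win, win, C) -> (num_windows, win*win, C)
    return (shifted
            .reshape(h // window, window, w // window, window, c)
            .transpose(0, 2, 1, 3, 4)
            .reshape(-1, window * window, c))

x = np.arange(4 * 4 * 1, dtype=np.float32).reshape(4, 4, 1)
wins = shift_partition(x, window=2, shift=1)
assert wins.shape == (4, 4, 1)  # 4 windows of 2*2 tokens, 1 channel
```

Folding the preceding LayerNorm into this reshape-heavy step avoids materializing the normalized tensor separately, which is what makes it attractive as a single TensorRT plugin.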
  14. 06 Sep 2022, 2 commits
  15. 05 Sep 2022, 2 commits
    • New format quant model support for MKLDNN (#45416) · 4e4f4586
      Authored by yeliang2258
      * support onnx format quantized model
      * update code
      * add test
      * add test
      * fix
      * fix test
      * fix cmake
      * update code
      * change scale file path to calibration file path
      * update code
      * update code
      * fix build bug
      * fix build bugs
      * fix
      * fix
    • Update DlNNE engine (#45027) · 638965c5
      Authored by denglin-github
      * add config param for enable_dlnne and support calibration mode
      * remove useless file
      * refine code and add annotation
      * refine code of Warning tips
  16. 02 Sep 2022, 1 commit
  17. 30 Aug 2022, 1 commit
  18. 29 Aug 2022, 1 commit
  19. 22 Aug 2022, 3 commits
  20. 18 Aug 2022, 2 commits
  21. 16 Aug 2022, 2 commits
    • convert multihead to oss (#45019) · f706d95d
      Authored by feng_shuai
      * convert multihead to oss
      * fix:bug
      * fix:delete const cast
      * fix:don't support bias_qk
      * add vit pass
      * fix:convert bug and add preln_residual_bias
      * support length=-1
      * add UT for convert
      * add no_bias_qk support for gpu_multihead_op
      * delete infer_shape depends on bias_qk
      * oss can only be used on T4 and A*
      * fix:change api for ROCM CI
    • memoptim and fp16 mixed precision (#45132) · fa890092
      Authored by Wilber
  22. 15 Aug 2022, 1 commit
  23. 14 Aug 2022, 1 commit
  24. 10 Aug 2022, 1 commit
  25. 05 Aug 2022, 2 commits
  26. 04 Aug 2022, 1 commit
    • Matmuls with activation and elementwise_add fuses (#44655) · 0420d514
      Authored by Sławomir Siwek
      * Add unit tests
      * matmul_v2 + activation
      * matmuls + elementwise_add
      * matmul_v2 postops
      * transform matmul to v2
      * opcompat
      * fix fusing matmul with multiple outs
      * add shape constraints
      * remove unused vars
      * change pass order
      * - Unit tests to be debugged
      - fix
      - refactor
      - diagnostic
      - more diagnostic
      - fix
      - Fix number two
      - fix
      - fix
      - fix
      - alpha added
      - more fixes
      - compilation fix
      - removed diagnostic code
      - cosmetic fixes
      * lint
      * add alpha constraint
      * merge matmul refactor
      * trigger CI
      * - fix
      * - another fix
      * code style
      * add support for matmul+elementwise_add+activation
      * code style
      * fix bfloat16 bugs
      * change append_binary to append_sum
      Co-authored-by: Jacek Czaja <jacek.czaja@intel.com>
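The pattern this PR fuses, matmul followed by elementwise_add and an activation, folds the add and the activation into the matmul as post-ops so the graph runs one kernel instead of three. A generic NumPy sketch of the numerical equivalence (illustrative only, not the oneDNN post-ops API):

```python
import numpy as np

def relu(x: np.ndarray) -> np.ndarray:
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
a = rng.random((3, 4), dtype=np.float32)
b = rng.random((4, 5), dtype=np.float32)
residual = rng.random((3, 5), dtype=np.float32)

# Unfused graph: matmul -> elementwise_add -> relu, three separate ops
# each materializing an intermediate tensor.
t1 = a @ b
t2 = t1 + residual
unfused = relu(t2)

def matmul_fused(a, b, residual):
    # Fused form: the add and relu are applied as "post-ops" inside the
    # matmul kernel, so no intermediate tensors are written to memory.
    return relu(a @ b + residual)

assert np.allclose(unfused, matmul_fused(a, b, residual))
```

The fuse pass must verify shape and alpha constraints (as the commit list notes) before rewriting the graph, since post-ops only apply when the add operand broadcasts compatibly with the matmul output.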