1. 19 May 2022 · 1 commit
  2. 19 Apr 2022 · 1 commit
  3. 14 Mar 2022 · 1 commit
    • Add an elementwise + activation fusion pass. (#36541) · 3f219160
      Committed by Tomasz Socha
      * Add elementwise add and activation fuse pass
      
      * Fix copy elision
      
      * More flexible pattern detector
      
      * More flexible fusion pass
      
      * Update lists for pass
      
      * Add support for Pow operator
      
      * Add support for more activation types
      
      * Style
      
      * Rename fusion pass
      
      * First version of tests
      
      * Dirty version of pass
      
      * Polished version
      
      * Update pbtxt
      
      * Style
      
      * Update names
      
      * Style
      
      * Use PADDLE_ENFORCE_EQ
      
      * Save error message to variable
      
      * WO for error checks
      
      * CR
      
      * Static style check
      
      * Add missing 'activation_scale' attribute
      
      * Add relu6 and sigmoid activations
      
      * Style
      
      * Fix fuse list formatting
      
      * Sync filenames for fuse pass files
      
      * Fix cmake after move
      
      * Fix registration
      
      * Fix pass name in tests
      
      * Add missing activations to checker
      
      * WIPS
      
      * Working mul op
      
      * Working sub
      
      * Working Add
      
      * Remove pten includes
      
      * Remove some forward declarations
      
      * Remove Includes
      
      * Fixes
      
      * Remove default kernels
      
      * Add check if post_ops attributes are available
      
      * Style
      
      * Code adjustment
      
      * Register default kernels
      
      * We have year 2022 not 2021...
      Co-authored-by: jakpiase <jakpia21@gmail.com>
      Co-authored-by: Sylwester Fraczek <sylwester.fraczek@intel.com>
      
      * Fast review fixes
      Co-authored-by: jakpiase <jakpia21@gmail.com>
      Co-authored-by: Sylwester Fraczek <sylwester.fraczek@intel.com>
      
      * Review Fix
      
      * Rename one_dnn -> onednn
      
      * Style after review
      
      * Fast and dirty fix for quantization
      
      * Update tests
      
      * Style
      
      * Fix mkldnn_quantizer config
      
      * Add Joanna's suggestion.
      
      * Check if operator is explicitly disabled on oneDNN
      
      * Try to use unregistered attributes
      
      * Style
      
      * Test new framework
      
      * FXI
      
      * FXII
      
      * Update test
      
      * Style
      Co-authored-by: jakpiase <jakpia21@gmail.com>
      Co-authored-by: Sylwester Fraczek <sylwester.fraczek@intel.com>
      3f219160
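      For reference, the graph pattern this pass fuses is an elementwise op followed by an activation. A minimal Python sketch of that pattern (shapes are hypothetical; the fusion itself is applied by the oneDNN graph passes during static-graph inference, not in eager mode):

        import paddle
        import paddle.nn.functional as F

        x = paddle.rand([4, 8])
        b = paddle.rand([8])

        # elementwise_add followed by relu: the subgraph the fusion pass rewrites
        # into a single oneDNN elementwise kernel with a fused activation post-op
        y = F.relu(paddle.add(x, b))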
  4. 28 Feb 2022 · 1 commit
  5. 20 Feb 2022 · 1 commit
  6. 19 Feb 2022 · 2 commits
    • [Pten]Unify paddle/pten::framework::ddim into pten::ddim (#39614) · 2fe04264
      Committed by Aurelius84
      * Unify paddle/pten::framework::ddim into pten::ddim
      
      * fix paddle namespace
      
      * compile successfully
      
      * fix npu src file
      
      * fix conflict
      
      * fix conflict
      
      * fix tensorrt compiler error
      
      * fix conflict
      
      * fix conflict
      
      * fix test file conflict
      
      * fix conflict
      
      * fix mlu file conflict
      
      * fix mlu file conflict
      
      * fix cinn header file conflict
      
      * fix conflict
      
      * fix conflict
      
      * fix conflict
      
      * fix conflict
      2fe04264
    • fix RecordEvent interface (#39675) · 019a552b
      Committed by chenjian
      * fix RecordEvent interface
      
      * modify default level to 4
      
      * update interface use
      
      * add const default trace level
      
      * update operator.cc
      019a552b
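      A hedged sketch of how RecordEvent is reached from the Python profiler wrapper (assuming paddle.profiler from Paddle 2.3+; the default trace level and the operator.cc changes above are internal to the C++ interface):

        import paddle
        import paddle.profiler as profiler

        # Mark a user-defined region on the profiler timeline; events above the
        # configured trace level are skipped by the C++ RecordEvent.
        with profiler.RecordEvent(name="my_region"):
            y = paddle.add(paddle.rand([4]), paddle.rand([4]))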
  7. 15 Feb 2022 · 1 commit
    • [PTen]Migrate proto::VarType outside of Pten (#39411) · 7e7e9404
      Committed by Aurelius84
      * #1 migrate dist-related type()-> dtype()
      
      * move datatype function from pten -> fluid/framework
      
      * change type() in imperative into convert(dtype())
      
      * modify xx_tensor->type into xx_tensor->dtype
      
      * change the set_type interface and the caller
      
      * modify xx_tensor.type into xx_tensor.dtype
      
      * fix mutable_data(place, dtype())
      
      * change caller of mutable_data in pten and distributed
      
      * change the caller of mutable_data in fluid/framework
      
      * change the caller of mutable_data in imperative directory
      
      * mutable_data: inference
      
      * update the call of mutable_data
      
      * transfer MakePenScalarArray MakePtenScalar ResetHolderWithType
      
      * pass the compile. The next step is to remove VarType from Pten
      
      * fix all and remove VarType from pten. Success on Linux; next task is the other platforms
      
      * fix conflict with develop
      
      * fix compiled error
      
      * Fix reset conversion
      
      * fix conflict
      
      * fix compiled problem
      
      * fix typo
      
      * Fix << in tensor_utils.cc
      
      * fix type->dtype
      
      * fix unittest
      
      * fix tensor init constructor
      
      * fix DataTypeSize for BFloat16
      
      * fix code style
      
      * fix npu compiled error
      
      * fix npu
      
      * compile npu successfully
      
      * fix conflict
      
      * fix conflict
      Co-authored-by: xiongkun <xiongkun03@baidu.com>
      7e7e9404
  8. 08 Feb 2022 · 1 commit
    • Fix to #38126 (#39097) · f884edb9
      Committed by Jacek Czaja
      * - 38126 potential fix
      
      * - fix
      
      * - build fix
      
      * - another candidate fix
      
      * - compilation fix
      
      * - another fix
      
      * - Fix to activation of NHWC being first oneDNN op in chain of oneDNN ops
      
      * - compilation fix
      
      * - added NHWC rotating for elementwise being first op
      
      * - compilation fix
      
      * - compilation fix
      
      * - Added UT
      
      * - cosmetic fixes
      f884edb9
  9. 17 Jan 2022 · 1 commit
  10. 13 Jan 2022 · 1 commit
    • Added mul BF16/FP32 FWD/BWD oneDNN kernel (#38552) · fc6eed5b
      Committed by jakpiase
      * base changes for mul reimplementation
      
      * empty commit
      
      * tmp save
      
      * full implementation of mul bf16/fp32 fwd bwd
      
      * CI fix
      
      * CI rerun
      
      * changed unity build cmake to avoid gpu issues
      
      * removed mul mkldnn from unity build
      
      * added skipping tests if not cpu_bf16
      
      * CI fix
      
      * CI fix
      
      * CI fix
      fc6eed5b
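      Illustration only: the legacy mul operator behaves like a matrix multiply with its inputs flattened to 2-D, so a rough Python stand-in for the forward/backward path this kernel covers (using paddle.matmul, not the mul op itself) is:

        import paddle

        paddle.set_device("cpu")   # the new kernel targets oneDNN on CPU
        x = paddle.rand([2, 3])
        w = paddle.rand([3, 4])
        x.stop_gradient = False
        w.stop_gradient = False

        y = paddle.matmul(x, w)    # forward (FWD)
        y.sum().backward()         # backward (BWD) fills x.grad and w.grad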
  11. 17 Nov 2021 · 1 commit
  12. 05 Nov 2021 · 1 commit
    • Disable pool&conv_transpose&quantize caching (#36695) · db6c00c4
      Committed by Jacek Czaja
      * - WIP
      
      - compilation fix
      
      - fix
      
      - fixes
      
      - fix
      
      - fix
      
      - fix again
      
      - fix
      
      - another fix
      
      - another compilation fix
      
      - fix
      
      - fix
      
      - fix
      
      - lint
      
      * - pool2d partially stripped from cache
      
      - pool2d partially stripped of caching
      
      * - compilation fix
      
      * - compilation fix
      
      * - Fix to UT of caching
      
      * - Enabling test_conv3d_mkldnn
      
      * - conv_transpose stripped of cache
      
      * - compilation fix
      
      * - fix
      
      * - fix
      
      * - compilation fix
      
      * - fix
      
      * Reverted disabling caching of conv2d
      
      * - compilation fix
      
      * - ut reverted
      db6c00c4
  13. 27 Oct 2021 · 1 commit
    • Added fp32 / bf16 forward and backward elementwise_div_mkldnn operator (#36158) · e92e6b06
      Committed by piotrekobiIntel
      * Add WIP version of elementwise_div_mkldnn without working dy grad
      
      * Add dy gradient calculation implementation, disable broadcast tests
      
      * Readd removed tests from static_mode_white_list
      
      * Add bfloat16 gradient tests, remove int8 and uint8 support
      
      * - Change the way dy grad is calculated to improve performance
      - Refactor BinaryMKLDNNHandler to use a default parameter
      
      * Change copyright year
      
      * Refactor as suggested
      
      * Attempt to bypass CI Approval
      not accepting max_relative_error
      
      * Fix formatting issue
      e92e6b06
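      As a reminder of what the forward and dy gradient compute, a small eager-mode sketch (shapes are hypothetical; bf16 is only exercised on CPUs with bf16 support):

        import paddle

        paddle.set_device("cpu")
        x = paddle.rand([3, 4]) + 1.0
        y = paddle.rand([3, 4]) + 1.0
        x.stop_gradient = False
        y.stop_gradient = False

        out = paddle.divide(x, y)   # forward: out = x / y
        out.sum().backward()        # for a sum loss: dx = 1 / y, dy = -x / y**2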
  14. 14 Oct 2021 · 1 commit
  15. 13 Oct 2021 · 1 commit
  16. 07 Oct 2021 · 1 commit
  17. 24 Sep 2021 · 2 commits
    • Added elementwise_sub_mkldnn operator (#35662) · 787273ed
      Committed by piotrekobiIntel
      * Add elementwise_sub_mkldnn_op without grad
      
      * Add test to static_mode_white_list
      
      * Refactor code, change license years
      
      * Remove invalid grad implementation
      
      * Fix element_wise_sub_op test
      
      * Fix CI Approval error
      
      * Remove unnecessary EltwiseSubMKLDNNGradKernel class
      
      * Fix CI Approval 2
      
      * Fix CI Approval 3
      
      * Fix CI Approval Attempt #4
      
      * Fix CI Approval Attempt #5
      
      * Fix CI Approval Attempt #6
      
      * Fix CI Approval Attempt #7
      
      * Change test names containing add to sub
      
      * Fix old tests testing add instead of sub
      
      * Copy grad implementation from elementwise_add_mkldnn
      
      * CI test fix attempt
      
      * Revert "CI test fix attempt"
      
      This reverts commit c647cacf41e6a87c715385a185de5cbf65fc8900.
      
      * Fix CI attempt 2
      
      * Fix elementwise_sub tests, temporary mkldnn broadcast test disable
      
      * Add working implementation of elementwise_sub grad
      
      * Fix build errors caused by pull
      
      * Fix format error
      
      * Fix format error 2
      
      * Disable elementwise_sub_mkldnn test on GPU
      
      * Apply fix for paddle.fluid import
      
      * Revert changes of test_elementwise_sub and Fix mkldnn test
      
      * Revert "Apply fix for paddle.fluid import"
      
      This reverts commit fc3b122fec8e12f2bcb32928a2685ba4d20fd742.
      
      * fix bug of module 'paddle' has no attribute 'fluid' for python3.6 (#35862)
      
      * Add changes suggested by reviewers
      
      * Change @unittest.skipIf... to @OpTestTool.skip_if_not_cpu_bf16() to satisfy Approval CI
      
      * Remove check_dygraph=False to satisfy CI Approval
      Co-authored-by: zhangbo9674 <82555433+zhangbo9674@users.noreply.github.com>
      787273ed
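      For context, the operator covers both plain and broadcast subtraction; a minimal Python example of each (the broadcast case is the one whose mkldnn test was temporarily disabled above):

        import paddle

        a = paddle.rand([2, 3, 4])
        b = paddle.rand([4])                       # broadcast along the trailing dimension

        plain = paddle.subtract(a, paddle.rand([2, 3, 4]))
        broadcast = paddle.subtract(a, b)          # b is broadcast to a's shape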
    • [oneDNN] candidate fix to #34554 (#35884) · 485b387d
      Committed by Jacek Czaja
      * - candidate fix
      
      * - More fixes to #34554
      
      * - another inconsistent fix to key
      
      * - Removed unneeded line
      
      * - matching the cache behaviour to other ops
      485b387d
  18. 18 Sep 2021 · 1 commit
    • [oneDNN] Disable caching of Reorder operation (#35664) · e4c2a854
      Committed by Jacek Czaja
      * - Reorder disabling caching
      
      * - compilation fix
      
      * - another compilation fix
      
      * - another compilation fix
      
      * - compilation fix
      
      * - Fix
      
      * - yet another compilation fix
      
      * - surprisingly another compilation fix
      
      * - lint
      
      * - fix after review
      
      * - fix
      e4c2a854
  19. 13 Sep 2021 · 1 commit
  20. 01 Sep 2021 · 1 commit
    • Added slice BF16/FP32 FWD/BWD kernels (#34332) · 070cab11
      Committed by jakpiase
      * added slice FWD FP32
      
      * added tests for slice FWD FP32
      
      * added slice bwd
      
      * added bf16 tests
      
      * CI fix
      
      * CI fix
      
      * added reason to skip_if
      
      * minor change
      
      * temporary fix for failing test
      
      * temporary fix
      
      * changes after review
      
      * CI rerun
      070cab11
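      The kernels back the slice op; a minimal forward/backward sketch in Python (shapes are hypothetical):

        import paddle

        paddle.set_device("cpu")
        x = paddle.rand([4, 8])
        x.stop_gradient = False

        y = paddle.slice(x, axes=[1], starts=[2], ends=[6])  # forward slice
        y.sum().backward()  # backward scatters gradients into the sliced region of x.grad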
  21. 26 Aug 2021 · 1 commit
    • [oneDNN] disable caching oneDNN primitives in matmul v2, Reduce grad and elementwise_add grad, expand_v2 (#35132) · 31f0221f
      Committed by Jacek Czaja
      
      * - grad caching disabled for matmul_v1
      
      - compilation fix
      
      - compilation fix
      
      * - reduction removed
      
      * - Matmul v2 disabled caching
      
      * Draft of further changes
      
      * - workaround for reducegrad
      
      * - fixes to UT
      
      * - fix to compilation
      
      * - another fix
      
      * - fix
      31f0221f
  22. 17 Aug 2021 · 2 commits
    • Copy boost optional to Paddle (#34780) · 9be41447
      Committed by chentianyu03
      * copy boost optional.hpp to paddle
      
      * copy boost optional.hpp to paddle
      
      * move directions
      
      * del fluid/utils
      
      * modify .hpp to .h
      
      * move directions
      
      * modify to paddle::optional
      
      * add modification description
      
      * format code style for the files in paddle/utils
      
      * format code style
      9be41447
    • [oneDNN] disabling more ops caching (#34830) · f1c1d9e0
      Committed by Jacek Czaja
      * - disabled caching of layer norm
      
      - fix in compilation
      
      - compilation fix
      
      - transpose caching disabled
      
      - compilation fix
      
      - more compilation fixes
      
      - sum caching disabled
      
      - compilation fix
      
      * - LRN with disabled cache
      
      * lint fixes
      f1c1d9e0
  23. 16 Aug 2021 · 1 commit
    • [oneDNN] Fix to 34554 (same as previous PR but should build with GPU) (#34859) · 9cb65653
      Committed by Jacek Czaja
      * - Added softmax without caching
      
      * - Binary is no longer manually cached
      
      * - Activation onednn caching removed
      
      * - Removed manual caching of activation
      
      * - modified UT
      
      * - fix
      
      * - fix
      
      * - fixes to building
      
      * - fix
      
      * - fix
      
      * - fix to UT
      
      * - Faulty UT workaround
      
      * - approval workaround
      
      * - Fixes after review
      
      * - compilation fixes
      
      * - more lint fixes
      
      * - more fixes after review
      
      * - fixes after another round of review
      
      * - hopefully compilation fix
      
      - compilation fix
      9cb65653
  24. 12 Aug 2021 · 1 commit
  25. 11 Aug 2021 · 1 commit
    • [oneDNN] Fix to issue #34554 (#34623) · 0a5c99e8
      Committed by Jacek Czaja
      * - Added softmax without caching
      
      * - Binary is no longer manually cached
      
      * - Activation onednn caching removed
      
      * - Removed manual caching of activation
      
      * - modified UT
      
      * - fix
      
      * - fix
      
      * - fixes to building
      
      * - fix
      
      * - fix
      
      * - fix to UT
      
      * - Faulty UT workaround
      
      * - approval workaround
      
      * - Fixes after review
      
      * - compilation fixes
      
      * - more lint fixes
      
      * - more fixes after review
      
      * - fixes after another round of review
      0a5c99e8
  26. 30 Jul 2021 · 1 commit
  27. 24 Jun 2021 · 1 commit
  28. 23 Jun 2021 · 1 commit
    • Added split op bf16/fp32 oneDNN kernel (#33584) · 68106509
      Committed by jakpiase
      * base changes for split op
      
      * 90% of split functionality added
      
      * full fp32 functionality
      
      * added bf16 test
      
      * added submemory caching
      
      * added bf test to static mode whitelist
      
      * minor change
      
      * enabled split op for inference
      
      * minor fix
      
      * minor fix
      68106509
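      A short Python example of the split op this kernel implements (forward only; shapes are hypothetical):

        import paddle

        paddle.set_device("cpu")
        x = paddle.rand([6, 4])

        # split into three equal parts along axis 0
        parts = paddle.split(x, num_or_sections=3, axis=0)
        print([p.shape for p in parts])   # [[2, 4], [2, 4], [2, 4]]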
  29. 16 Jun 2021 · 1 commit
  30. 27 May 2021 · 1 commit
  31. 26 May 2021 · 1 commit
  32. 19 May 2021 · 1 commit
  33. 06 May 2021 · 1 commit
  34. 30 Apr 2021 · 1 commit
  35. 21 Apr 2021 · 1 commit
  36. 14 Apr 2021 · 1 commit
  37. 19 Mar 2021 · 1 commit