1. 17 Aug 2021, 2 commits
    • Copy boost optional to Paddle (#34780) · 9be41447
      Committed by chentianyu03 (a usage sketch follows this entry)
      * copy boost optional.hpp to paddle
      
      * copy boost optional.hpp to paddle
      
      * move directories
      
      * del fluid/utils
      
      * modify .hpp to .h
      
      * move directories
      
      * modify to paddle::optional
      
      * add modification description
      
      * format code style for the files in paddle/utils
      
      * format code style
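      A minimal usage sketch for the entry above, assuming the copied class keeps boost::optional's interface (truthiness check, operator*, and a boost-style none tag). The include path and the FindAttribute helper are illustrative, not taken from the PR.

      ```cpp
      #include <string>

      // Assumed include path after the move described above; the PR places the
      // header under paddle/utils and renames it from .hpp to .h.
      #include "paddle/utils/optional.h"

      // Illustrative helper, not from the PR: returns an empty optional when the
      // attribute is absent, mirroring how boost::optional was used before.
      paddle::optional<std::string> FindAttribute(bool present) {
        if (present) return std::string("value");
        return paddle::none;  // assumes the boost-style `none` tag was copied too
      }

      int main() {
        auto attr = FindAttribute(true);
        if (attr) {               // boost-style validity check
          std::string v = *attr;  // dereference to read the contained value
          (void)v;
        }
        return 0;
      }
      ```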
    • [oneDNN] Disabling more ops caching (#34830) · f1c1d9e0
      Committed by Jacek Czaja (a sketch of the pattern follows this entry)
      * - disabled caching of layer norm
      
      - fix in compilation
      
      - compilation fix
      
      - transpose caching disabled
      
      - compilation fix
      
      - more compilation fixes
      
      - sum caching disabled
      
      - compilation fix
      
      * - LRN with disabled cache
      
      * lint fixes
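      This entry and the two caching PRs below all follow the same shape: the op's manual, key-based cache of oneDNN objects is dropped and the primitive is simply rebuilt on every call, the usual rationale being that oneDNN's own internal primitive cache already covers the expensive part. The sketch below is a generic before/after illustration with stand-in types, not Paddle's actual handler classes.

      ```cpp
      #include <memory>
      #include <string>
      #include <unordered_map>
      #include <utility>

      // Stand-in for a wrapped oneDNN primitive; illustrative only.
      struct Primitive {
        explicit Primitive(std::string desc) : desc(std::move(desc)) {}
        std::string desc;
      };

      // Before: the handler looked the primitive up in a cache keyed by shapes,
      // data type, etc., and created it only on a miss.
      std::shared_ptr<Primitive> AcquireCached(
          std::unordered_map<std::string, std::shared_ptr<Primitive>>* cache,
          const std::string& key) {
        auto it = cache->find(key);
        if (it != cache->end()) return it->second;
        auto prim = std::make_shared<Primitive>(key);
        (*cache)[key] = prim;
        return prim;
      }

      // After: the primitive is rebuilt on each call; no manual cache layer.
      std::shared_ptr<Primitive> AcquireFresh(const std::string& desc) {
        return std::make_shared<Primitive>(desc);
      }

      int main() {
        std::unordered_map<std::string, std::shared_ptr<Primitive>> cache;
        auto cached = AcquireCached(&cache, "layer_norm_fwd_f32_8x16");
        auto fresh = AcquireFresh("layer_norm_fwd_f32_8x16");
        (void)cached; (void)fresh;
        return 0;
      }
      ```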
  2. 16 Aug 2021, 1 commit
    • [oneDNN] Fix to 34554 (same as previous PR but should build with GPU) (#34859) · 9cb65653
      Committed by Jacek Czaja
      * - Added softmax without caching
      
      * - Binary is no longer manually cached
      
      * - Activation onednn caching removed
      
      * - Removed manual caching of activation
      
      * - modified UT
      
      * - fix
      
      * - fix
      
      * - fixes to building
      
      * - fix
      
      * - fix
      
      * - fix to UT
      
      * - Faulty UT workaround
      
      * - approval workaround
      
      * - Fixes after review
      
      * - compilation fixes
      
      * - more lint fixes
      
      * - more fixes after review
      
      * - fixes after another round of review
      
      * - hopefully compilation fix
      
      - compilation fix
  3. 12 Aug 2021, 1 commit
  4. 11 Aug 2021, 1 commit
    • [oneDNN] Fix to issue #34554 (#34623) · 0a5c99e8
      Committed by Jacek Czaja
      * - Added softmax without caching
      
      * - Binary is no longer manually cached
      
      * - Activation onednn caching removed
      
      * - Removed manual caching of activation
      
      * - modified UT
      
      * - fix
      
      * - fix
      
      * - fixes to building
      
      * - fix
      
      * - fix
      
      * - fix to UT
      
      * - Faulty UT workaround
      
      * - approval workaround
      
      * - Fixes after review
      
      * - compilation fixes
      
      * - more lint fixes
      
      * - more fixes after review
      
      * - fixes after another round of review
  5. 30 Jul 2021, 3 commits
    • Added matmul_v2 BF16/FP32 BWD kernel (#34192) · 0be71571
      Committed by jakpiase (the gradient math is sketched after this entry)
      * test version of matmul_v2
      
      * added matmul_v2 grad kernel
      
      * minor changes
      
      * minor changes
      
      * minor change for CI approval
      
      * CI fix
      
      * CI fix
      
      * trigger CI
      
      * changes after review, not working yet
      
      * moved ops to anonymous namespaces
      
      * changes after review
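      For reference, the math any matmul backward (grad) kernel has to implement: with Out = X * Y, the gradients are dX = dOut * Yᵀ and dY = Xᵀ * dOut. The plain-loop FP32 sketch below is purely illustrative; the actual kernel delegates the products to oneDNN and also handles BF16, transposes and broadcasting.

      ```cpp
      #include <vector>

      // Plain row-major matmul helper: C (m x n) = A (m x k) * B (k x n).
      std::vector<float> MatMul(const std::vector<float>& a,
                                const std::vector<float>& b,
                                int m, int k, int n) {
        std::vector<float> c(static_cast<size_t>(m) * n, 0.0f);
        for (int i = 0; i < m; ++i)
          for (int p = 0; p < k; ++p)
            for (int j = 0; j < n; ++j)
              c[i * n + j] += a[i * k + p] * b[p * n + j];
        return c;
      }

      // Row-major transpose of an r x c matrix.
      std::vector<float> Transpose(const std::vector<float>& x, int r, int c) {
        std::vector<float> t(static_cast<size_t>(r) * c);
        for (int i = 0; i < r; ++i)
          for (int j = 0; j < c; ++j) t[j * r + i] = x[i * c + j];
        return t;
      }

      int main() {
        const int m = 2, k = 3, n = 4;
        std::vector<float> X(m * k, 1.0f), Y(k * n, 1.0f), dOut(m * n, 1.0f);
        auto dX = MatMul(dOut, Transpose(Y, k, n), m, n, k);  // dX = dOut * Y^T, shape m x k
        auto dY = MatMul(Transpose(X, m, k), dOut, k, m, n);  // dY = X^T * dOut, shape k x n
        (void)dX; (void)dY;
        return 0;
      }
      ```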
    • Added reshape, reshape2, squeeze and squeeze2 BF16/FP32 FWD/BWD kernels (#34219) · 22c4c189
      Committed by jakpiase (a stride sketch follows this entry)
      * test version of matmul_v2
      
      * added matmul_v2 grad kernel
      
      * minor changes
      
      * minor changes
      
      * minor change for CI approval
      
      * CI fix
      
      * CI fix
      
      * added squeeze and squeeze2 kernels
      
      * CI fix
      
      * CI fix
      
      * CI fix
      
      * disabled tests when compiled with cuda
      
      * added setting format_tag by strides
      
      * added sigmoid BF16 FWD/BWD and gelu BF16 BWD
      
      * changes after review
      
      * Revert "added sigmoid BF16 FWD/BWD and gelu BF16 BWD"
      
      This reverts commit 6e3f76720b545abfcff9f6052b46b73a1e745cae.
      
      * Revert "Merge branch 'matmul_v2_grad' into squeeze2_op"
      
      This reverts commit 06fcf67843a4a7884eccdf67a02a03575e1d4cb8, reversing
      changes made to 6e3f76720b545abfcff9f6052b46b73a1e745cae.
      
      * minor change
      
      * added reshape1/2 kernels
      
      * moved some functions into private block
      
      * CI fix
      
      * CI fix
      
      * CI fix
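      One bullet above mentions setting the oneDNN format tag from strides; the underlying point is that reshape/squeeze on a dense tensor is a metadata-only change, because the row-major strides are fully determined by the new dims. A tiny illustration of that stride computation, not Paddle or oneDNN code:

      ```cpp
      #include <cstdint>
      #include <vector>

      // Row-major (contiguous) strides for a given shape: the buffer is untouched
      // by reshape/squeeze, only dims and therefore these strides change.
      std::vector<int64_t> RowMajorStrides(const std::vector<int64_t>& dims) {
        std::vector<int64_t> strides(dims.size(), 1);
        for (int i = static_cast<int>(dims.size()) - 2; i >= 0; --i)
          strides[i] = strides[i + 1] * dims[i + 1];
        return strides;
      }

      int main() {
        // A 2 x 3 x 4 tensor reshaped to 6 x 4: same 24-element buffer,
        // strides go from {12, 4, 1} to {4, 1}.
        auto before = RowMajorStrides({2, 3, 4});
        auto after = RowMajorStrides({6, 4});
        (void)before; (void)after;
        return 0;
      }
      ```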
    • Added expand_v2 BF16/FP32 FWD/BWD kernels (#34284) · 41c4f723
      Committed by jakpiase
      * added expand_v2 bf16/fp32 kernel
      
      * minor change
      
      * CI fix
      
      * added missing test file
      
      * added formatting
      
      * reduced binary size
      
      * CI fix
  6. 22 Jul 2021, 1 commit
  7. 19 Jul 2021, 1 commit
  8. 07 Jul 2021, 1 commit
  9. 30 Jun 2021, 1 commit
    • Added matmul_v2 BF16/FP32 FWD kernel (#33750) · 24783c84
      Committed by jakpiase (a note on the BF16 format follows this entry)
      * added matmul_v2 bf16/fp32 FWD kernel
      
      * added formatting
      
      * removed some tests due to timeout in CI
      
      * refactored tests
      
      * merged tests classes into one file
      
      * minor change
      
      * removed test guard for CUDA
      
      * remove skipIf
      
      * changes after review
      
      * formatted one file
      
      * minor change
      
      * added skipping UT in CUDA place
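      Since this and several surrounding entries add BF16 variants next to FP32 ones, it helps to recall what bfloat16 is: the top 16 bits of an IEEE-754 float (1 sign, 8 exponent, 7 mantissa bits), which is why FP32 kernels translate to BF16 so directly. A minimal round-trip sketch using truncation (production conversions usually round to nearest even):

      ```cpp
      #include <cstdint>
      #include <cstdio>
      #include <cstring>

      // bfloat16 <-> float by bit manipulation; truncating conversion for brevity.
      static uint16_t FloatToBf16(float f) {
        uint32_t bits;
        std::memcpy(&bits, &f, sizeof(bits));
        return static_cast<uint16_t>(bits >> 16);  // keep sign, exponent, top mantissa bits
      }

      static float Bf16ToFloat(uint16_t h) {
        uint32_t bits = static_cast<uint32_t>(h) << 16;  // lower mantissa bits become zero
        float f;
        std::memcpy(&f, &bits, sizeof(f));
        return f;
      }

      int main() {
        float x = 3.14159f;
        std::printf("%.6f -> %.6f after a bf16 round trip\n",
                    x, Bf16ToFloat(FloatToBf16(x)));
        return 0;
      }
      ```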
  10. 24 Jun 2021, 1 commit
  11. 23 Jun 2021, 1 commit
    • Added split op BF16/FP32 oneDNN kernel (#33584) · 68106509
      Committed by jakpiase (the offset bookkeeping is sketched after this entry)
      * base changes for split op
      
      * 90% of split functionality added
      
      * full fp32 functionality
      
      * added bf16 test
      
      * added submemory caching
      
      * added bf16 test to static mode whitelist
      
      * minor change
      
      * enabled split op for inference
      
      * minor fix
      
      * minor fix
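      The "submemory caching" bullet refers to how a oneDNN split is expressed: each output is a sub-view of the input at a fixed offset along the split axis. The offset bookkeeping itself is simple; the sketch below is a hypothetical plain-C++ illustration, not the oneDNN sub-memory descriptor API.

      ```cpp
      #include <cstdint>
      #include <vector>

      // Offsets of each output when splitting `dims` into `sections` equal parts
      // along `axis`. In the oneDNN kernel these offsets would parameterize
      // sub-memory descriptors; here they are just returned for inspection.
      std::vector<std::vector<int64_t>> SplitOffsets(const std::vector<int64_t>& dims,
                                                     int axis, int sections) {
        std::vector<std::vector<int64_t>> offsets;
        const int64_t step = dims[axis] / sections;
        for (int i = 0; i < sections; ++i) {
          std::vector<int64_t> off(dims.size(), 0);
          off[axis] = i * step;
          offsets.push_back(off);
        }
        return offsets;
      }

      int main() {
        // Splitting a 4 x 6 tensor into 3 parts along axis 1 gives offsets
        // {0,0}, {0,2}, {0,4}, each part having shape 4 x 2.
        auto offs = SplitOffsets({4, 6}, /*axis=*/1, /*sections=*/3);
        (void)offs;
        return 0;
      }
      ```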
  12. 21 Jun 2021, 1 commit
    • Add AXPY oneDNN handler (#33632) · 773aabc7
      Committed by lidanqing (a dispatch sketch follows this entry)
      * Add oneDNN AXPY handler.
      
      * Add fallback for small tensors.
      
      * Fix ifdefs
      
      * Remove unnecessary namespace prefixes and add missing headers.
      
      * Guard handler_axpy with proper ifdefs.
      
      * Compilation of this function is possible only when Paddle is built with neither CUDA nor HIP.
      
      * Move AXPY handler code to separate files.
      
      * Use oneDNN AXPY handler in SGD op.
      
      * Use axpy handler only when Paddle is built with oneDNN.
      
      * Add test for SUM BF16 with big rows.
      
      * Fix SFINAE rules for elementwise_add_to.
      
      * Add test case for SGD with big rows.
      
      * update
      
      * update
      Co-authored-by: Adam Osewski <adam.osewski@intel.com>
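      Two bullets above mention SFINAE rules for elementwise_add_to and a fallback for small tensors. The sketch below shows the general dispatch shape those bullets suggest: an enable_if-guarded overload for floating-point element types, with a plain-loop fallback for other types and for tiny inputs. Function names and the size threshold are illustrative, not Paddle's actual signatures.

      ```cpp
      #include <cstddef>
      #include <type_traits>
      #include <vector>

      // Optimized path: selected via SFINAE only for floating-point element types.
      // In Paddle this is where the oneDNN AXPY handler would be invoked; here a
      // loop stands in for it so the sketch stays self-contained.
      template <typename T>
      typename std::enable_if<std::is_floating_point<T>::value>::type
      elementwise_add_to(const T* src, T* dst, std::size_t n) {
        constexpr std::size_t kSmallTensorLimit = 16;  // illustrative threshold
        if (n < kSmallTensorLimit) {
          for (std::size_t i = 0; i < n; ++i) dst[i] += src[i];  // small-tensor fallback
          return;
        }
        for (std::size_t i = 0; i < n; ++i) dst[i] += src[i];    // "AXPY handler" path
      }

      // Generic fallback for all other element types (e.g. integers).
      template <typename T>
      typename std::enable_if<!std::is_floating_point<T>::value>::type
      elementwise_add_to(const T* src, T* dst, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) dst[i] += src[i];
      }

      int main() {
        std::vector<float> a(32, 1.0f), b(32, 2.0f);
        elementwise_add_to(a.data(), b.data(), a.size());  // floating-point overload
        std::vector<int> x(8, 1), y(8, 2);
        elementwise_add_to(x.data(), y.data(), x.size());  // generic overload
        return 0;
      }
      ```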
  13. 16 Jun 2021, 1 commit
  14. 27 May 2021, 1 commit
  15. 26 May 2021, 1 commit
  16. 25 May 2021, 1 commit
  17. 22 May 2021, 1 commit
    • Added oneDNN matmul grad BF16/FP32 kernel (#32968) · e2a3a6f7
      Committed by jakpiase
      * added support for most matmul cases
      
      * added more functionality
      
      * full functionality of matmul op, fp32 only
      
      * added bf16 tests and functionality
      
      * added formatting
      
      * changes after review
      
      * minor change
      
      * added reviewers suggestions
  18. 19 May 2021, 1 commit
  19. 14 May 2021, 1 commit
  20. 28 Apr 2021, 1 commit
  21. 21 Apr 2021, 1 commit
  22. 24 Mar 2021, 1 commit
  23. 09 Mar 2021, 1 commit
  24. 25 Feb 2021, 1 commit
  25. 23 Feb 2021, 2 commits
  26. 18 Feb 2021, 1 commit
    • Add Conv Transpose BF16 (#30877) · caf9d398
      Committed by joanna.wozna.intel
      * Add conv transpose BF16
      
      * Share function GetWeightsTz
      
      * Adjust to review and fix op compatibility
      
      * Add bias to unique handler name
      
      * Remove errors related to paddle enforce
      
      * Add conv2d_transpose to bf16 list and kernel refactor
  27. 04 Feb 2021, 1 commit
  28. 28 Jan 2021, 1 commit
  29. 25 Jan 2021, 1 commit
  30. 20 Jan 2021, 1 commit
  31. 12 Jan 2021, 1 commit
  32. 11 Jan 2021, 1 commit
  33. 09 Jan 2021, 1 commit
  34. 31 Dec 2020, 1 commit
  35. 24 Dec 2020, 1 commit
  36. 23 Dec 2020, 1 commit