1. 28 Nov 2022: 10 commits
  2. 25 Nov 2022: 2 commits
  3. 24 Nov 2022: 8 commits
  4. 23 Nov 2022: 5 commits
  5. 22 Nov 2022: 3 commits
  6. 21 Nov 2022: 7 commits
  7. 18 Nov 2022: 5 commits
    • [PHI] Migrate matmul_grad kernel (#48023) · 4ab18ada
      Sławomir Siwek committed
      * cleanup unused code
      
      * unify is_int8 is_bfloat16
      
      * Simplify matmul_v2 FWD kernel
      
      * remove RunKernel methods
      
      * remove import namespace
      
      * remove headers
      
      * clean fluid/phi cross imports
      
      * remove fluid axpy_handler
      
      * delete fluid methods
      
      * activations
      
      * OneDNNMemDesc
      
      * MKLDNNFormatForSize
      
      * MatchShapeToLayout
      
      * MKLDNNMemoryFormat
      
      * MKLDNNFormat
      
      * ReorderMKLDNNHandler
      
      * to_void_cast
      
      * review suggestions
      
      * interpolate
      
      * remove fluid dependency
      
      * init
      
      * ExecuteMatMulV2
      
      * rm fluid kernel
      
      * matmul_grad
      
      * remove mutable_data
    • [PHI] Migrate conv_transpose kernel (#48119) · 9aacb31b
      Zuza Gawrysiak committed
      * Migrate conv_transpose to phi
      
      * Move handler to kernel
      
      * kernel m
      
      * Fix formatting
      
      * handler
      
      * remove fluid
      
      * revert tcp_store
      
      * tcp_store
      
      * remove unused
      
      * Fix declaration
      
      * add dnn input
      
      * Fix typo
      Co-authored-by: Sławomir Siwek <slawomir.siwek@intel.com>
    • Optimize FusedBiasAddGelu Kernel (#47679) · b0e28540
      MarDino committed
      * Add quick gelu and fused bias add kernel
      
      * fix annotation
      
      * remove useless code
      
      * add fast gelu option and set it in multi transformer op
      
      * add flag to restrict whether the fast gelu approximation is used
      
      * fix flags conflict
      
      * fix: use the tanh function instead (see the sketch after this commit)
      
      * add cudart version limit
      
      * use phi fast tanh func
      
      * fix comment
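      The "fast gelu" referenced above is the standard tanh-based approximation of
      GELU. Below is a minimal standalone C++ sketch of the two formulas; the function
      names and constants are illustrative only and do not mirror Paddle's fused
      kernel code:

        #include <cmath>

        // Exact GELU: 0.5 * x * (1 + erf(x / sqrt(2)))
        float exact_gelu(float x) {
          return 0.5f * x * (1.0f + std::erf(x * 0.70710678f));  // 1/sqrt(2)
        }

        // Tanh-based "fast" approximation:
        // 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
        float fast_gelu(float x) {
          const float kSqrt2OverPi = 0.7978845608f;
          const float kCoeff = 0.044715f;
          float inner = kSqrt2OverPi * (x + kCoeff * x * x * x);
          return 0.5f * x * (1.0f + std::tanh(inner));
        }

      The fused kernel combines the bias add with this activation in a single pass;
      per the commit messages, the added flag controls whether the approximate form
      is used.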
    • [PHI decoupling] move "gpu_device_function.h" from fluid to phi (#48097) · 27ee6e71
      huangjiyi committed
      * move "paddle/phi/backends/gpu/gpu_device_function.h" to phi
      
      * update copyright years
      
      * rm "fluid/platform/device/gpu/gpu_device_function.h" in phi
      
      * fix rocm-compile bugs
    • correct sync behavior for XPU distributed training (#47882) · aafa9820
      james committed
      * correct sync behavior for XPU distributed training
      
      XPU supports an event mechanism similar to CUDA events, so it is advisable to
      use an event to synchronize the compute and communication streams for
      performance. However, this mechanism has never been fully tested, and
      inconsistent loss values and ending epochs have been reported. Therefore, this
      PR replaces event sync with stream waiting as a temporary solution (a sketch of
      both approaches follows this commit).
      
      * remove compile warning
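      A minimal CUDA-style sketch of the two synchronization strategies described in
      the commit message above. Paddle's XPU runtime has its own stream/event API, so
      the CUDA calls here only illustrate the idea; the enqueue steps are placeholders:

        #include <cuda_runtime.h>

        // Strategy 1: event-based sync. The communication stream waits on an event
        // recorded after the compute work, without blocking the host.
        void sync_with_event(cudaStream_t compute, cudaStream_t comm) {
          cudaEvent_t done;
          cudaEventCreateWithFlags(&done, cudaEventDisableTiming);
          // ... enqueue compute kernels on `compute` ...
          cudaEventRecord(done, compute);
          cudaStreamWaitEvent(comm, done, 0);  // comm work is ordered after `done`
          // ... enqueue allreduce on `comm` ...
          cudaEventDestroy(done);
        }

        // Strategy 2 (the temporary fix): the host blocks until the compute stream
        // drains before issuing communication. Simpler and easier to validate, at
        // the cost of host-side stalls.
        void sync_with_stream_wait(cudaStream_t compute, cudaStream_t comm) {
          // ... enqueue compute kernels on `compute` ...
          cudaStreamSynchronize(compute);  // host waits for all compute work
          // ... enqueue allreduce on `comm` ...
        }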