1. 08 Mar 2023 (1 commit)
  2. 07 Mar 2023 (2 commits)
  3. 06 Mar 2023 (5 commits)
    • [AMP OP&Test] add bf16 fp16 type support for interpolate (#51153) · 2f2bf4e8
      Committed by 傅剑寒 (see the sketch after this entry)
      * add bf16 fp16 type support for interpolate
      
      * add bf16 fp16 support for interpolate in phi on cpu
      2f2bf4e8
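      A minimal sketch of the user-level call this change enables: running interpolate on a float16 input (shapes, size, and mode are illustrative, not taken from the PR):

      ```python
      # Minimal sketch: paddle.nn.functional.interpolate on a float16 input,
      # the dtype this change registers (including the CPU phi kernel).
      import paddle

      x = paddle.rand([1, 3, 8, 8]).astype("float16")   # NCHW feature map in fp16
      y = paddle.nn.functional.interpolate(x, size=[16, 16], mode="nearest")
      print(y.shape, y.dtype)                            # [1, 3, 16, 16], float16
      ```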
    • 02f66747
    • Add multiprecision for adadelta op (#50131) · a8a2b7f4
      Committed by niuliling123 (see the sketch after this entry)
      a8a2b7f4
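      The PR changes the C++ adadelta op to carry fp32 master parameters. A hedged sketch of how this is typically driven from Python; the multi_precision argument on Adadelta is an assumption here, mirroring other Paddle optimizers, and is not quoted from the PR:

      ```python
      # Hedged sketch: multi-precision Adadelta. The multi_precision flag is
      # assumed (by analogy with other Paddle optimizers); the PR itself adds
      # the fp32 master-parameter path inside the adadelta op.
      import paddle

      model = paddle.nn.Linear(8, 8)
      opt = paddle.optimizer.Adadelta(
          learning_rate=1e-3,
          parameters=model.parameters(),
          multi_precision=True,  # assumed flag: keep fp32 master copies for fp16 params
      )
      loss = model(paddle.rand([4, 8])).mean()
      loss.backward()
      opt.step()
      ```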
    • [phi decoupling] decouple dependency to device_context in phi (Part 1) (#50865) · a1006b2b
      Committed by Huang Jiyi
      * move DeviceContextPool to phi
      
      * add EmplaceExternalContextFunc
      
      * update namespace
      
      * update cmake
      
      * fix bugs and create context_pool_impl.h
      
      * replace platform::is_xxx_place
      
      * fix bugs
      
      * update generator
      
      * fix bugs
      
      * fix bugs
      
      * fix bugs
      
      * fix bugs
      
      * fix bugs
      
      * fix bugs
      
      * fix bugs
      
      * fix enforce usage
      
      * Revert "fix enforce usage"
      
      This reverts commit 5f521f08a69713cee506e64a00ec6d9fba709e27.
      
      * fix bugs
      
      * rm XPUDeviceContext and CustomDeviceContext
      
      * fix bugs
      
      * fix context init bug
      
      * fix bugs after merge
      
      * fix bugs
      
      * fix name
      
      * fix mutable_data
      
      * update and fix bugs
      
      * fix bugs
      
      * update
      
      * fix bugs
      
      * fix name
      
      * fix bugs
      
      * merge
      
      * fix bugs
      
      * create context_pool in phi/backends
      
      * create context_pool in phi/backends
      
      * fix bugs
      
      * fix xpu bugs
      
      * fix rocm bugs
      
      * fix bugs
      
      * fix bugs
      
      * fix bugs
      
      * fix xpu bugs
      
      * update
      
      * update
      
      * fix bugs
      
      * fix bugs
      a1006b2b
    • oneDNN kernels code cleanup (#50743) · e2054925
      Committed by Sławomir Siwek (see the sketch after this entry)
      * matmul refactored
      
      * fc
      
      * SetOutMemDescWithLogicalLayoutFusesSupport
      
      * matmul_v2
      
      * alpha support
      
      * group repetitive funcs
      
      * matmul utils
      
      * execute matmul methods
      
      * restore registered kernel names
      
      * split header and impl files
      
      * remove double negatives
      
      * increase coverage
      
      * add onednn tests to ctest
      
      * remove fusion logic from base matmuls
      e2054925
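      The cleanup is internal to the oneDNN matmul/fc kernels, so call sites are unchanged. A minimal sketch of a CPU matmul that goes through those kernels when Paddle is built with oneDNN (otherwise the native CPU kernel is used); shapes are illustrative:

      ```python
      # Minimal sketch: batched matmul on CPU, exercising the refactored oneDNN
      # matmul kernel in oneDNN-enabled builds.
      import paddle

      paddle.set_device("cpu")
      a = paddle.rand([2, 3, 4])
      b = paddle.rand([2, 4, 5])
      c = paddle.matmul(a, b)   # output shape [2, 3, 5]
      print(c.shape)
      ```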
  4. 03 Mar 2023 (4 commits)
  5. 02 Mar 2023 (6 commits)
  6. 01 Mar 2023 (7 commits)
  7. 28 Feb 2023 (4 commits)
  8. 27 Feb 2023 (7 commits)
    • [XPU] add fp16 support for shape and lookup_table_v2 op. (#50773) · d2a0577a
      Committed by houj04 (see the sketch after this entry)
      * [XPU] add fp16 support for shape op.
      
      * [XPU] add fp16 support for lookup_table_v2 op.
      
      * update approval list: add qingshu's id.
      d2a0577a
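      A minimal sketch of the user-level ops behind this change: lookup_table_v2 is the op under paddle.nn.Embedding, and paddle.shape maps to the shape op. The new fp16 kernels apply to XPU builds (paddle.set_device("xpu")); the sketch below casts to fp16 only for illustration and runs on any device:

      ```python
      # Minimal sketch: embedding lookup (lookup_table_v2) followed by the shape
      # op on an fp16 tensor. On an XPU build these ops now have fp16 kernels.
      import paddle

      emb = paddle.nn.Embedding(100, 16)
      ids = paddle.to_tensor([[3, 7, 42]], dtype="int64")
      out = emb(ids).astype("float16")      # [1, 3, 16] activations in fp16
      print(paddle.shape(out), out.dtype)
      ```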
    • 【Hackathon No.68】Remove utils in phi (#50833) · 6c181d1d
      Committed by 张春乔
      * remove utils
      
      * remove utils
      
      * remove utils
      
      * remove utils
      
      * Update get_data_from_tensor.h
      
      * Update rnn_functor.h
      
      * Update rnn_grad_kernel.cu.cc
      
      * Update rnn_kernel.cu.cc
      
      * Update rnn_kernel.cc
      
      * Update rnn_grad_kernel.cu.cc
      
      * Update rnn_functor.h
      
      * Update rnn_kernel.cu.cc
      
      * Update rnn_kernel.cc
      
      * remove utils
      
      * Update rnn_functor.h
      
      * remove utils
      
      * remove utils
      
      * remove utils
      
      * remove utils
      
      * remove utils
      
      * Update rnn_functor.h
      
      * Update unsqueeze_op.h
      
      * Update utils.h
      
      * roll back
      
      * Update tensor_utils.h
      
      * Update tensor_utils.h
      
      * Update tensor_utils.h
      
      * Update tensor_utils.h
      
      * Update tensor_utils.h
      
      * use TensorToVector
      
      * use TensorToVector
      
      * use TensorToVector
      
      * use TensorToVector
      
      * use TensorToVector
      
      * Update rnn_kernel.cc
      
      * Update rnn_grad_kernel.cc
      
      * Update rnn_functor.h
      
      * Update rnn_grad_kernel.cu.cc
      
      * Update rnn_kernel.cu.cc
      
      * Update rnn_functor.h
      
      * Update rnn_grad_kernel.cu.cc
      
      * Update rnn_kernel.cu.cc
      
      * Update rnn_functor.h
      
      * Update rnn_grad_kernel.cu.cc
      
      * Update rnn_kernel.cu.cc
      
      * add TensorToVector
      
      * roll back
      
      * Update tensor_utils.h
      
      * Update rnn_functor.h
      
      * Update rnn_grad_kernel.cu.cc
      
      * Update tensor_utils.h
      
      * Update rnn_kernel.cu.cc
      
      * Update rnn_grad_kernel.cc
      
      * Update rnn_kernel.cc
      
      * Update rnn_grad_kernel.cu.cc
      
      * Update rnn_kernel.cu.cc
      
      * Update rnn_grad_kernel.cc
      
      * Update rnn_kernel.cc
      
      * TensorCopySync to phi::Copy
      
      * fix codestyle
      
      * rnn_kernel.cc: add ;
      
      * replace all GetDataFromTensor with phi::GetVectorFromTensor
      
      * delete include of util.h
      6c181d1d
    • Reduce redundant cpu computation in slice compute (#50348) · 8aec0580
      Committed by Bo Zhang (see the sketch after this entry)
      * conflict
      
      * add UpdateSliceAttrs
      8aec0580
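      The optimization is inside the slice kernel's attribute handling, so the Python call is unchanged. A minimal sketch of the op it affects (axes, starts, and ends are illustrative):

      ```python
      # Minimal sketch: paddle.slice, the op whose attribute update path this
      # change deduplicates. The call site itself does not change.
      import paddle

      x = paddle.arange(24, dtype="float32").reshape([2, 3, 4])
      y = paddle.slice(x, axes=[1, 2], starts=[0, 1], ends=[2, 3])
      print(y.shape)   # [2, 2, 2]
      ```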
    • 2eeaaa7d
    • zhouweiwei2014
    • xpu: bind op scatter_nd_add. add data type for transpose2, clip & assign_value (#50825) · 0d12afea
      Committed by wangshengxiang (see the sketch after this entry)
      * [XPU] bind op scatter_nd_add
      
      * [XPU] add more data type for op: clip, transpose2 & assign_value
      0d12afea
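      A minimal sketch of two of the ops named above, scatter_nd_add (newly bound on XPU) and clip (gaining extra dtypes). Running them on an XPU needs an XPU build and paddle.set_device("xpu"); the same calls run on CPU/GPU, and the values are illustrative:

      ```python
      # Minimal sketch: scatter_nd_add and clip at the Python level.
      import paddle

      x = paddle.zeros([4], dtype="float32")
      index = paddle.to_tensor([[1], [3]], dtype="int64")
      updates = paddle.to_tensor([10.0, 20.0])
      y = paddle.scatter_nd_add(x, index, updates)   # [0., 10., 0., 20.]
      z = paddle.clip(y, min=0.0, max=15.0)          # [0., 10., 0., 15.]
      print(y.numpy(), z.numpy())
      ```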
    • [Bfloat16] register bfloat16 datatype for squared l2 norm (#50908) · 3c121040
      Committed by shaojie_wang (see the sketch after this entry)
      * register bfloat16 datatype for squared l2 norm
      
      * register bfloat16 datatype for softmax with upper triangular mask
      
      * register bfloat16 for tril triu cuda kernel
      3c121040
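      The bf16 registrations here target the CUDA kernels (squared_l2_norm, softmax with upper-triangular mask, tril/triu), so bfloat16 execution assumes a GPU build. A minimal float32 sketch of what two of those ops compute, runnable on any device:

      ```python
      # Minimal sketch: the math behind squared_l2_norm, plus tril, whose CUDA
      # kernels gain bf16 registration in this change. float32 shown so the
      # sketch also runs on CPU.
      import paddle

      x = paddle.rand([4, 4])
      sq_l2 = paddle.sum(paddle.square(x))   # what the squared_l2_norm op computes
      lower = paddle.tril(x)                 # lower-triangular part of x
      print(float(sq_l2), lower.shape)
      ```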
  9. 26 Feb 2023 (1 commit)
  10. 25 Feb 2023 (1 commit)
  11. 24 Feb 2023 (2 commits)