1. 24 Mar 2023 — 9 commits
  2. 23 Mar 2023 — 25 commits
  3. 22 Mar 2023 — 6 commits
    • Support optimizer operators to be generated (#51767) · 0b008e0c
      Committed by HappyHeavyRain
      * test_get_kernel
      
      * add invoke signature
      
      * change reduce_max
      
      * change frobenius_norm
      
      * reset reduce_max according to composite and change reduce_all
      
      * fix the bug when Scalar(*)
      
      * fix 'scalar when support_tensor'
      
      * change code according to review
      
      * change 'keep_signature' to 'manual_signature' and add some error info
      
      * support optimizers autogen
      
      * change sgd yaml
      
      * change generate signature
      
      * fix test/cpp/new_executor/CM
      
      * reset signature generated function
      
      * change signature function
      
      * change signature function
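      As context for the autogen change, a minimal sketch (assuming the standard PaddlePaddle 2.x dygraph API) that exercises the sgd op whose definition this PR moves to the yaml-driven code-generation path:

      ```python
      import paddle

      # Minimal dygraph step that dispatches to the sgd kernel; after this
      # PR the op's signature/registration is generated from yaml rather
      # than hand-written.
      linear = paddle.nn.Linear(4, 4)
      opt = paddle.optimizer.SGD(learning_rate=0.1,
                                 parameters=linear.parameters())
      loss = linear(paddle.rand([2, 4])).mean()
      loss.backward()
      opt.step()        # runs the (now auto-generated) sgd update
      opt.clear_grad()
      ```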
    • [Zero-Dim] Support 0-D tensor for some oneDNN unary kernels (#51687) · 2a3d75bc
      Committed by YangQun
      * support 0-D tensor for element-wise unary ops
      
      * fix python code style check
      
      * fix approval check
      
      * support 0-d tensor for onednn softmax and logsoftmax kernels
      
      * fix comments
      
      * fix some unittests
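      A minimal sketch of what 0-D support means for these kernels (assuming standard PaddlePaddle 2.x APIs; the oneDNN path is selected on CPU builds with oneDNN enabled):

      ```python
      import paddle
      import paddle.nn.functional as F

      # A 0-D (scalar) tensor has shape [], not [1].
      x = paddle.to_tensor(2.0)
      assert x.shape == []

      # Element-wise unary kernels should preserve the 0-D shape...
      y = F.relu(x)
      assert y.shape == []

      # ...as should softmax/log_softmax, which this commit extends to
      # accept 0-D inputs as well.
      s = F.softmax(x)
      assert s.shape == []
      ```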
    • Correct LSTM QAT test (#51499) · 31f81685
      Committed by joanna.wozna.intel
    • Add fused dropout add (#51752) · 6ba0507d
      Committed by ShenLiang
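      A sketch of the unfused reference semantics that a fused dropout+add kernel replaces (the fused entry point is expected under paddle.incubate; that path is assumed, not confirmed by this log):

      ```python
      import paddle
      import paddle.nn.functional as F

      # Unfused reference: out = dropout(x) + residual. A fused
      # dropout+add kernel computes the same result in one pass,
      # avoiding the intermediate tensor write/read between the
      # dropout and the residual add.
      x = paddle.rand([4, 16])
      residual = paddle.rand([4, 16])
      out = F.dropout(x, p=0.5, training=True) + residual
      ```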
    • [XPU] fix distribute_fpn_proposals (#51873) · a10718e8
      Committed by duanyanhui
      * fix distribute_fpn_proposals
      
      * fix bug
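      For reference, a minimal usage sketch of the op being fixed (assuming the paddle.vision.ops.distribute_fpn_proposals signature; on XPU the same call routes to the XPU kernel):

      ```python
      import paddle

      # Scatter RoIs across FPN levels by scale; each RoI is assigned to
      # a level in [min_level, max_level] based on its area relative to
      # refer_scale at refer_level.
      fpn_rois = paddle.rand([10, 4])
      rois_num = paddle.to_tensor([3, 7], dtype='int32')  # RoIs per image
      multi_rois, restore_ind, rois_num_per_level = (
          paddle.vision.ops.distribute_fpn_proposals(
              fpn_rois, min_level=2, max_level=5,
              refer_level=4, refer_scale=224, rois_num=rois_num))
      ```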
    • Add fused_feed_forward pass (#50423) · 5dda0ef6
      Committed by Ghost Screaming
      * Add fused_feed_forward pass for semi-automatic static graph training.
      
      * Add fused_feedforward property in parallel_executor.cc
      
      * Polish code.
      
      * Polish fused feed_forward pass code. Support use_dropout1 and use_dropout2 options.
      
      * Support model parallel in fused_feedforward pass.
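      A sketch of the unfused feed-forward subgraph this pass targets (a standard transformer FFN; names and dropout placement are assumptions inferred from the use_dropout1/use_dropout2 options above):

      ```python
      import paddle
      import paddle.nn.functional as F

      def feed_forward(x, w1, b1, w2, b2, ln_w, ln_b, p1=0.1, p2=0.1):
          # linear -> relu -> dropout1 -> linear -> dropout2 -> residual
          # + layer_norm; the fused_feed_forward pass rewrites this
          # subgraph into a single fused op.
          h = F.dropout(F.relu(paddle.matmul(x, w1) + b1), p=p1)  # use_dropout1
          h = F.dropout(paddle.matmul(h, w2) + b2, p=p2)          # use_dropout2
          return F.layer_norm(x + h, x.shape[-1:], weight=ln_w, bias=ln_b)
      ```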