1. 17 Feb, 2023 — 9 commits
  2. 16 Feb, 2023 — 24 commits
  3. 15 Feb, 2023 — 7 commits
    • fix npu save_combine (#50496) · 3c14b38e
      duanyanhui committed
    • 3d5faa88
    • make cinn_launch_op run interpretercore in tracing mode to reduce number of threads (#50472) · bf38175e
      Leo Chen committed
      * make cinn_launch_op run interpretercore in tracing mode to reduce number of threads
      
      * skip getWorkqueue in tracing mode
    • Rewrite conv activation mkldnn fuse pass tester (#49278) · 84beef80
      Hulek committed
      * Done
      
      * Deleted old python test, fixed new python test, changed names in parallel_UT
      
      * Revert parallel UT changes
      
      * Revert parallel UT changes v2
      
      * Review fixes and simplification of conv output shape calculation, disabled sqrt from conv_act_fuse_pass
      
      * delete sqrt from possible activations from conv_concat_relu test
      
      * review refactor
      
      * merge main
      
      * delete sqrt from list of compatible activations
      
      * Test with no outdated inputs
    • align tool (#49865) · 4632ca13
      xu98bin committed
      * auto parallel align tool
      
      * modify function get_var's return
      
      * add save and load in align_tool
      
      * modify load function and save function
      
      * add finding different ops in align tool
      
      * full auto parallel align tool
      
      add test file for auto parallel align tool
      
      set timeout for test
      
      modify get_backward_tmp_var function
      
      add annotation for align tool
      
      modify test file
      
      modify code to restart CI
      
      remove timeout
      
      * set timeout
    • [gpups update] add gpups ci log print (#50522) · 41902dda
      wangzhen38 committed
    • fix composite op map (#50397) · ff86aeab
      cyber-pioneer committed
      * map output from composite rule to origin op
      
      add mean layer_norm dropout op map
      
      add input map check
      
      composite softmax support input shape []
      
      * composite softmax support shape []
      
      * polish log
      
      * solve conflict
      
      * polish code
      
      * polish op map output
      
      * add check dtype