1. 08 Dec 2022: 4 commits
  2. 07 Dec 2022: 5 commits
  3. 06 Dec 2022: 6 commits
    • make bilinear interpolate stable. (#48644) · e1e8bf72
      Committed by xiongkun
      * make bilinear interpolate stable.
      
      * fix code
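      As background for this entry, bilinear interpolation blends the four neighboring source pixels with weights given by the fractional offsets; the commit message does not spell out the stability fix, but the standard formulation it operates on (with x_0 = floor(x), y_0 = floor(y), d_x = x - x_0, d_y = y - y_0) is:

          f(x, y) \approx (1 - d_x)(1 - d_y)\, f(x_0, y_0) + d_x (1 - d_y)\, f(x_0 + 1, y_0)
                        + (1 - d_x)\, d_y\, f(x_0, y_0 + 1) + d_x\, d_y\, f(x_0 + 1, y_0 + 1)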
    • Clear extra input (Bias, ResidualData) in OpMaker of conv2d (#47579) · 0a2dfa38
      Committed by zyfncg
      * delete Bias and ResidualData in OpMaker of conv2d
      
      * delete extra input of conv3d
      
      * refactor pass of conv_bias_fusion
      
      * fix mkldnn dependency
      
      * fix mkldnn compile
      
      * fix test_conv_bias_mkldnn_fuse_pass
      
      * polish some code
      
      * remove useless log
      
      * fix analyzer_vit_ocr_tester
      
      * fix conv_activation_mkldnn_fuse_pass
      
      * fix test_analyzer_ocr
      
      * add fused_conv_sig
      
      * fix performance regression
      
      * fix performance regression
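      As background, the conv_bias fusion refactored above conceptually folds a conv2d followed by an elementwise bias add into one fused conv2d that consumes the bias directly (a sketch of the rewrite, not the pass code):

          y = \mathrm{conv2d}(x, W) + b \quad\Longrightarrow\quad y = \mathrm{conv2d}(x, W;\ \mathrm{Bias} = b)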
    • [PHI] Migrate elementwise_(add/mul) kernels (#48625) · 7575d37c
      Committed by Sławomir Siwek
      * remove fluid code
      
      * init
      
      * typo
      
      * fix merge conflicts
    • [XPU] add tile_grad op (#48720) · 8de336f9
      Committed by houj04
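      As a reference for what tile_grad computes: the gradient of tile accumulates the upstream gradient from every repeat back onto the original shape. A minimal dygraph sketch using the public Python API (the new XPU kernel itself is C++):

          import paddle

          x = paddle.to_tensor([[1.0, 2.0]], stop_gradient=False)
          y = paddle.tile(x, repeat_times=[2, 3])   # shape [2, 6]
          y.sum().backward()
          print(x.grad)   # each source element was repeated 2*3 = 6 times, so the gradient is [[6., 6.]]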
    • Remove fluid matmul (#47988) · 8fb829ba
      Committed by kangguangli
      * remove layers.matmul in nets.py
      
      * remove layers.matmul in rnn_impl/test_quantization_pass/auto_parallel_gpt_model/test_auto_parallel_completion_gpt
      
      * remove layers.matmul in other files
      
      * fix
      
      * fix
      
      * remove layers.matmul itself
      
      * remove ref in CMakeLists.txt and tools directory
      
      * remove matmul in fluid.layers.nn.py
      
      * remove matmul in fluid.dygraph.rnn.py && restore test_matmul_op.py
      
      * replace matmul in fluid.dygraph.rnn.py && clean api_test in test_matmul_op.py
      
      * fix error && restore empty test_auto_search_dist_matmul_op.py
      
      * fix check in test_auto_parallel_partitioner.py
      
      * fix test_dist_matmul && test_flags_mkldnn_ops_on_off
      
      * fix test_fused_attention_op_xpu.py && test_matmul_op_xpu.py
      
      * remove test_auto_search_dist_matmul_op.py
      
      * remove layers.matmul in auto_parallel_gpt_model.py && fix doc in fluid/io.py
      
      * fix for matmul_grad
      
      * fix codestyle
      
      * fix codestyle
      
      * resolve conflicts error
      
      * restore unit test file but do not compile it, for later removal
      
      * fix codestyle
      
      * fix wrong unittest skip
      
      * fix unittest delete
      
      * fix scale cost
      
      * fix scale cost
      
      * resolve conflicts error
      
      * resolve conflicts error
      Co-authored-by: jakpiase <jakpia21@gmail.com>
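      For context, the bullets above replace layers.matmul call sites with the public 2.x API; a minimal sketch of that kind of rewrite (illustrative, not code from the PR):

          import paddle

          x = paddle.rand([2, 3])
          y = paddle.rand([3, 4])

          # previously: out = fluid.layers.matmul(x, y, transpose_x=False, transpose_y=False)  (removed by this PR)
          out = paddle.matmul(x, y, transpose_x=False, transpose_y=False)   # shape [2, 4]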
    • add xpu centered rmsprop (#48658) · 54b756e2
      Committed by ykkk2333
      * add stat tool
      
      * add roll and roll_grad kernels and strided_slice and strided_slice_grad kernels, test=kunlun
      
      * add xpu rmsprop centered, test=kunlun
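      As a reference for what the new XPU kernel computes, the standard centered RMSProp update (momentum term omitted) tracks a running mean of the gradient and subtracts its square from the second-moment estimate:

          v_t = \rho\, v_{t-1} + (1 - \rho)\, g_t^2
          m_t = \rho\, m_{t-1} + (1 - \rho)\, g_t
          \theta_t = \theta_{t-1} - \frac{\eta\, g_t}{\sqrt{v_t - m_t^2 + \epsilon}}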
  4. 05 Dec 2022: 7 commits
  5. 03 Dec 2022: 1 commit
  6. 02 Dec 2022: 7 commits
    • [PHI] Migrate elementwise_sub kernel (#48611) · 493825a5
      Committed by Piotr Paturej
      * Add migrations
      
      * Fix build errors
      
      * Remove elementwise_mul from migration
    • Migrate mul_mkldnn_op to phi matmul_kernel (#48299) · e8edbb09
      Committed by Hulek
      * Migrate mul_mkldnn_op to matmul_kernel
      
      * Review fixes - changed mutable_data, changed ctx to dev_ctx, fixed namespaces
      
      * switched some funcs to phi
      
      * Deleted not needed phi:: and changed place checking according to standards
    • [XPU] Fix xpu compile error (#48621) · 2af82190
      Committed by Jiabin Yang
      * [Eager] Fix paddle.grad interface
      
      * [Eager] Support minimum SubGraph for GeneralGrad
      
      * Add needed_nodes to prune grad graph more thoroughly
      
      * [Eager] Add grad_node_trans_mapping_ to record which grad_node has been transformed to AccumulationNode
      
      * [Eager] Fix paddle.grad interface
      
      * Polish code
      
      * remove potential_stop_node
      
      * Add endding_nodes to enhance genSugraph logic
      
      * clear endding_nodes_
      
      * polish code
      
      * rename endding_nodes to endding_nades_
      
      * Refactor grad interface
      
      * Add register_hook case to fix coverage-ci
      
      * Fix code format
      
      * Refactor general_grad
      
      * Add more code comments
      
      * call clear directly to release GradSlotMeta
      
      * fix a mistake
      
      * fix matmul/multiply kernel logic and optional input in yaml, fill zeros logic and so on.
      
      * fix batch_norm_double_grad yaml optional config
      
      * fix tanh_triple_grad yaml and kernels
      
      * fix MultiplyTripleGradKernel optional logic
      
      * fix merge mistake
      
      * fix compile error
      
      * remove legacy attr for bn
      
      * polish code
      
      * fix some kernel
      
      * merge develop
      
      * fix error
      
      * remove log
      
      * fix kernel with full like
      
      * hide value log behind
      
      * hide value log behind
      
      * fix matmul_triple grad
      
      * fix xpu compile error
      
      * fix xpu compile error
      
      * fix xpu ut
      
      * fix xpu ut
      
      * fix_xpu_compile_error
      Co-authored-by: Weilong Wu <veyron_wu@163.com>
    • Split common funcs from reduction and structure modification (#46970) · ef575d6a
      Committed by Bo Zhang
      * profile reduce kernel for fp16 and reduceHigherdim
      
      * use reinterpret_cast
      
      * fix for CI on ROCm
      
      * add Macro for ROCm
      
      * ROCm CI config
      
      * ROCm CI config
      
      * unit test repair
      
      * pull
      
      * add common_funcs.h
      
      * reduceType
      
      * Update reduce_function.h
      
      * not higher
      
      * rename
    • [Eager] Optimize Grad by prune useless branch (#47827) · d1e93be1
      Committed by Jiabin Yang
      * [Eager] Fix paddle.grad interface
      
      * [Eager] Support minimum SubGraph for GeneralGrad
      
      * Add needed_nodes to prune grad graph more thoroughly
      
      * [Eager] Add grad_node_trans_mapping_ to record which grad_node has been transformed to AccumulationNode
      
      * [Eager] Fix paddle.grad interface
      
      * Polish code
      
      * remove potential_stop_node
      
      * Add endding_nodes to enhance genSugraph logic
      
      * clear endding_nodes_
      
      * polish code
      
      * rename endding_nodes to endding_nades_
      
      * Refactor grad interface
      
      * Add register_hook case to fix coverage-ci
      
      * Fix code format
      
      * Refactor general_grad
      
      * Add more code comments
      
      * call clear directly to release GradSlotMeta
      
      * fix a mistake
      
      * fix matmul/multiply kernel logic and optional input in yaml, fill zeros logic and so on.
      
      * fix batch_norm_double_grad yaml optional config
      
      * fix tanh_triple_grad yaml and kernels
      
      * fix MultiplyTripleGradKernel optional logic
      
      * fix merge mistake
      
      * fix compile error
      
      * remove legacy attr for bn
      
      * polish code
      
      * fix some kernel
      
      * merge develop
      
      * fix error
      
      * remove log
      
      * fix kernel with full like
      
      * hide value log behind
      
      * hide value log behind
      
      * fix matmul_triple grad
      Co-authored-by: Weilong Wu <veyron_wu@163.com>
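      The bullets above repeatedly touch the dygraph paddle.grad interface and its double/triple gradients; a minimal usage sketch of that public API (not code from the PR):

          import paddle

          x = paddle.to_tensor(3.0, stop_gradient=False)
          y = x * x * x                                        # y = x**3

          # create_graph=True keeps the backward graph so it can be differentiated again
          (dy_dx,) = paddle.grad([y], [x], create_graph=True)  # 3 * x**2 = 27
          (d2y_dx2,) = paddle.grad([dy_dx], [x])               # 6 * x = 18
          print(float(dy_dx), float(d2y_dx2))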
    • add silu, silu_grad, unfold and unfold_grad xpu kernels (#48325) · f71de378
      Committed by ykkk2333
      * add stat tool
      
      * add roll and roll_grad kernels and strided_slice and strided_slice_grad kernels, test=kunlun
      
      * add silu, unfold and their grads,test=kunlun
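      As a reference for the new kernels, the standard definition of silu and the derivative that silu_grad must produce are:

          \mathrm{silu}(x) = x\,\sigma(x) = \frac{x}{1 + e^{-x}}
          \frac{d}{dx}\,\mathrm{silu}(x) = \sigma(x)\,\bigl(1 + x\,(1 - \sigma(x))\bigr)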
    • polish fusion kernel naming (#48609) · 61486bf2
      Committed by Chen Weihang
  7. 01 Dec 2022: 2 commits
  8. 30 Nov 2022: 4 commits
  9. 29 Nov 2022: 4 commits
    • [PHI] transpose2 kernel migration (#47748) · d86aa4ca
      Committed by Paulina Gacek
      * transpose2 kernel migrated
      
      * Got rid of mutable_data
      
      * x modification added
      
      * ops added in extra info file
      
      * Formatting fix
      
      * 2 fuse passes with transpose2 commented
      
      * number of outs changed in 2 passes, passes uncommented
      
      * Changes in passes reverted
      
      * transpose changed in operator.cc
      
      * MKLDNN check in operator.cc
      
      * Transpose fixes
      
      * Fix deleted from operator
      
      * template corrected
      Co-authored-by: Paulina Gacek <paulinagacek@intel.com>
    • eltwise_div + scale [PHI] (#48484) · fa10524d
      Committed by Sławomir Siwek
    • Optimize the implementation of the argsort operator. (#47738) · 9e9b705a
      Committed by Vvsmile
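      As a usage reference, argsort returns the indices that would sort the input along an axis; a minimal sketch with the public paddle.argsort API (not code from the PR):

          import paddle

          x = paddle.to_tensor([[5.0, 2.0, 8.0],
                                [1.0, 9.0, 4.0]])
          idx = paddle.argsort(x, axis=-1)   # indices of the ascending order per row
          print(idx)                         # [[1, 0, 2], [0, 2, 1]]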
    • [PHI] Migrate matmul kernel (#48162) · f41ccbd5
      Committed by Sławomir Siwek
      * cleanup unused code
      
      * unify is_int8 is_bfloat16
      
      * Simplify matmul_v2 FWD kernel
      
      * remove RunKernel methods
      
      * remove import namespace
      
      * remove headers
      
      * clean fluid/phi cross imports
      
      * remove fluid axpy_handler
      
      * delete fluid methods
      
      * activations
      
      * OneDNNMemDesc
      
      * MKLDNNFormatForSize
      
      * MatchShapeToLayout
      
      * MKLDNNMemoryFormat
      
      * MKLDNNFormat
      
      * ReorderMKLDNNHandler
      
      * to_void_cast
      
      * review suggestions
      
      * interpolate
      
      * remove fluid dependency
      
      * init
      
      * ExecuteMatMulV2
      
      * rm fluid kernel
      
      * matmul_grad
      
      * remove mutable_data
      
      * mul_grad
      
      * matmul fwd
      
      * add extra attr
      
      * temp disable passes
      
      * re-enable passes
      
      * workaround for matmul+act
      
      * fix for matmul+eltwise_add
      
      * fix typo
      
      * merge bugfix #48364
      
      * remove merge conflict