1. Mar 15, 2022: 25 commits
  2. Mar 14, 2022: 15 commits
    • [Phi] Add diag_v2 grad kernel (#40447) · e157f2af
      Siming Dai committed
      * Add diag grad kernel
      
      * fix unittest case
      
      * add float16, remove const &
      
      * delete diag_grad in op_utils.h
    • [PHI] Move set_value_grad kernel from fluid to phi (#40478) · 3149e399
      zyfncg committed
      * move set_value_grad kernel from fluid to phi
      
      * add unittest for passing coverage ci
    • Add an elementwise + activation fusion pass. (#36541) · 3f219160
      Tomasz Socha committed
      * Add elementwise add and activation fuse pass
      
      * Fix copy elision
      
      * More flexible pattern detector
      
      * More flexible fusion pass
      
      * Update lists for pass
      
      * Add support for Pow operator
      
      * Add support for more activation types
      
      * Style
      
      * Rename fusion pass
      
      * First version of tests
      
      * Dirty version of pass
      
      * Polished version
      
      * Update pbtxt
      
      * Style
      
      * Update names
      
      * Style
      
      * Use PADDLE_ENFORCE_EQ
      
      * Save error message to variable
      
      * Workaround for error checks
      
      * CR
      
      * Static style check
      
      * Add missing 'activation_scale' attribute
      
      * Add relu6 and sigmoid activations
      
      * Style
      
      * Fix fuse list formatting
      
      * Sync filenames for fuse pass files
      
      * Fix cmake after move
      
      * Fix registration
      
      * Fix pass name in tests
      
      * Add missing activations to checker
      
      * WIPS
      
      * Working mul op
      
      * Working sub
      
      * Working Add
      
      * Remove pten includes
      
      * Remove some forward declarations
      
      * Remove Includes
      
      * Fixes
      
      * Remove default kernels
      
      * Add check if post_ops attributes are available
      
      * Style
      
      * Code adjustment
      
      * Register default kernels
      
      * We have year 2022 not 2021...
      Co-authored-by: jakpiase <jakpia21@gmail.com>
      Co-authored-by: Sylwester Fraczek <sylwester.fraczek@intel.com>
      
      * Fast review fixes
      Co-authored-by: jakpiase <jakpia21@gmail.com>
      Co-authored-by: Sylwester Fraczek <sylwester.fraczek@intel.com>
      
      * Review Fix
      
      * Rename one_dnn -> onednn
      
      * Style after review
      
      * Fast and dirty fix for quantization
      
      * Update tests
      
      * Style
      
      * Fix mkldnn_quantizer config
      
      * Add Joanna's suggestion.
      
      * Check if operator is explicitly disabled on OneDNN
      
      * Try to use unregistered attributes
      
      * Style
      
      * Test new framework
      
      * FXI
      
      * FXII
      
      * Update test
      
      * Style
      Co-authored-by: jakpiase <jakpia21@gmail.com>
      Co-authored-by: Sylwester Fraczek <sylwester.fraczek@intel.com>
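The fusion pass above replaces an elementwise op (add/sub/mul) followed by an activation (relu, relu6, sigmoid, …) with a single fused operator. As a minimal, hypothetical sketch (not Paddle's C++ pass, which rewrites the oneDNN graph), the numerical contract the fusion must preserve is simply `fused(a, b) == activation(elementwise(a, b))`:

```python
import math

# Illustrative only: table-driven lookup of the two ops being fused.
ELEMENTWISE = {
    "add": lambda x, y: x + y,
    "sub": lambda x, y: x - y,
    "mul": lambda x, y: x * y,
}

ACTIVATION = {
    "relu": lambda x: max(x, 0.0),
    "relu6": lambda x: min(max(x, 0.0), 6.0),
    "sigmoid": lambda x: 1.0 / (1.0 + math.exp(-x)),
}

def fused_elementwise_act(op, act, xs, ys):
    """Compute activation(elementwise(x, y)) in a single pass."""
    f, g = ELEMENTWISE[op], ACTIVATION[act]
    return [g(f(x, y)) for x, y in zip(xs, ys)]

xs, ys = [1.5, -2.0, 7.0], [0.5, 0.5, 0.5]
# Two-pass reference: elementwise first, then activation over the result.
reference = [ACTIVATION["relu6"](x + y) for x, y in zip(xs, ys)]
assert fused_elementwise_act("add", "relu6", xs, ys) == reference
```

In the real pass the benefit is avoiding a round trip to memory for the intermediate tensor; the fused kernel applies the activation while the elementwise result is still in registers.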
    • [MLU] add merged_momentum mlu kernel (#40406) · 1f7b2516
      fwenguang committed
    • optimize group_norm op backward (#39944) · 5720537e
      crystal committed
      * optimize backward
      
      * optimize group_norm backward
      
      * Add vectorized code
      
      * move assignment code
      
      * merge function
      
      * move code
      
      * optimize code
      
      * Modify function name
    • Optimize bilinear_interp backward (#39423) · 9e1f762c
      Lijunhui committed
      * bilinear_bw init
      
      * optimize code
      
      * optimize
      
      * optimize 2
      
      * optimize functions
      
      * modify func name
    • fix gpu callback (#40445) · 2c21d240
      Leo Chen committed
      * fix gpu context callback
      
      * fix gpu callback
      
      * fix callback early destruct problem
    • [phi] migrate fmax, fmin kernel to phi (#40140) · bb801960
      Xiaoxu Chen committed
    • Support custom op and paddle.autograd.backward in eager (#40423) · 227fa408
      Jiabin Yang committed
      * eager, test=develop
      
      * fix bug, test=develop
      
      * eager, test=develop
      
      * merge legacy to fluid
      
      * eager, test=develop
      
      * eager, test=develop
      
      * Refactor TensorAdd func by template and remove gradient_accumulation in eager
      
      * Remove needless target name
      
      * eager, test=develop
      
      * eager, test=develop
      
      * Use overload instead of template
      
      * Remove legacy code
      
      * Remove legacy code
      
      * selectedrows, test=develop
      
      * Remove DataType test
      
      * eager, test=develop
      
      * eager, test=develop
      
      * support gan, test=develop
      
      * Using Tensor directly instead of using EagerTensor
      
      * support gradient_accumulation
      
      * make test_imperative_lod_tensor_to_selected_rows longer
      
      * make test_imperative_lod_tensor_to_selected_rows longer
      
      * refine code
      
      * ptb, test=develop
      
      * Rename all EagerTensor to Tensor
      
      * Rename some EagerTensor to Tensor
      
      * rename EagerTensor to EagerVariable
      
      * eager, test=develop
      
      * eager, test=develop
      
      * eager, test=develop
      
      * eager, test=develop
      
      * add more test
      
      * eager, test=develop
      
      * Support copiable selected rows and merge develop
      
      * save load, eager, test=develop
      
      * save load, eager, test=develop
      
      * refine, test=develop
      
      * remove useless _set_value method
      
      * refine, test=develop
      
      * refine, test=develop
      
      * revert static_runner, test=develop
      
      * EagerTensor to Tensor, test=develop
      
      * refine, test=develop
      
      * refine, test=develop
      
      * clear grad, test=develop
      
      * merge, develop
      
      * merge, develop
      
      * merge, test=develop
      
      * merge, test=develop
      
      * Support quant and part of slice
      
      * support legacy static save
      
      * extend slim tests time
      
      * remove imperative on inference
      
      * remove imperative on inference
      
      * merge develop
      
      * fix typo
      
      * fix typo
      
      * split slice related code into 2 part for imperative and eager
      
      * split slice from inference
      
      * split slice from inference
      
      * fix test_tensor_register_hook
      
      * support custom op in eager mode
      
      * fix inference deps error
      
      * split eager utils from custom operator
      
      * fix type match
      
      * fix typo
      Co-authored-by: Wang Huan <wanghuan29@baidu.com>
      Co-authored-by: Weilong Wu <veyron_wu@163.com>
      Co-authored-by: wanghuancoder <wanghuancoder@163.com>
    • Optimize performance of log_softmax (#38992) · 250e254f
      Zhang Zheng committed
      * Optimize performance of log_softmax
      
      * delete unity build
      
      * modify to phi
      
      * fix
      
      * fixfixfixfix
      
      * fix
      
      * fix
      
      * fix
      
      * fix
      
      * simplify
      
      * fix
      
      * fix enforce
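Whatever layout and vectorization tricks the optimized kernel uses, it must reproduce the standard numerically stable formulation, `log_softmax(x) = (x - m) - log(sum(exp(x - m)))` with `m = max(x)`. A minimal pure-Python sketch of that reference behavior (illustrative names, not Paddle's API):

```python
import math

def log_softmax(xs):
    """Numerically stable log-softmax over a 1-D list of floats.

    Subtracting the max before exponentiating keeps exp() from
    overflowing; a fast kernel must match this reference exactly
    up to floating-point error.
    """
    m = max(xs)
    log_sum = math.log(sum(math.exp(x - m) for x in xs))
    return [(x - m) - log_sum for x in xs]

out = log_softmax([1.0, 2.0, 3.0])
# exp of the outputs sums to 1, as a valid softmax requires
assert abs(sum(math.exp(v) for v in out) - 1.0) < 1e-12
# stable even for inputs where a naive exp(x) would overflow
log_softmax([1000.0, 1000.5])
```

The max-subtraction step is why large logits (e.g. 1000.0) do not overflow: the largest exponent evaluated is always `exp(0) = 1`.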
    • set WITH_ONNXRUNTIME=off in windows (#40502) · 02e80f59
      Sing_chan committed
    • [Eager] [Bug Fix] fix eager trace op bug (#40402) · 65adfecf
      wanghuancoder committed
      * fix some slice bug, test=develop
      
      * refine, test=develop
    • adjust params order for eager.Tensor._copy_to (#40449) · c6ec8b9f
      0x45f committed
    • [KP] Add unittests for brelu, ceil, celu, elu, floor, hard_shrink, hard_sigmoid, log1p, logsigmoid, relu6, silu, soft_relu, softsign, swish (#40448) · f269ca3f
      Lijunhui committed
      
      * solve unexecuted UT
      
      * add 24 activation op UT
      
      * append swish&thresholded_relu to kpfirst_list
      
      * rm thresholded_relu
    • [AutoParallel] Converter (#40434) · 3881b6cb
      zhaoyingli committed
      * [AutoParallel] Converter
      Converter API