1. 23 Mar 2022, 2 commits
  2. 22 Mar 2022, 1 commit
  3. 21 Mar 2022, 3 commits
  4. 18 Mar 2022, 3 commits
  5. 17 Mar 2022, 6 commits
  6. 16 Mar 2022, 5 commits
  7. 15 Mar 2022, 4 commits
    • oneDNN NHWC fixes (#40049) · dde9cec0
      Committed by Jacek Czaja
      * - Prototype of third solution
      
      - fix
      
      - compilation fixes
      
      - fix
      
      - fix
      
      - fix
      
      - fix
      
      - compilation fix
      
      - comment fix
      
      - lint
      
      update mkldnn conv_elementwise_add_fuse_pass ut
      
      - NHWC changes to prelu
      
      - alpha dims
      
      - UT fix
      
      - fix to UT
      
      - lint
      
      - Some fixes
      
      - added NHWC support to BWD of prelu
      
      - reverted removal of resetting cu_layout when clearing the cache
      
      * - Small changes
      
      * - compilation fix
      
      * - fix
      
      * - fix
      
      * lint
      
      * - fixes after internal review
      
      * - compilation fix
      
      * - lint
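      Since several of the fixes above concern NHWC handling in prelu, here is a
      minimal numpy sketch of what channel-last PReLU computes (illustrative only,
      not the oneDNN kernel itself):
      
      import numpy as np
      
      # PReLU with a per-channel alpha on an NHWC tensor: the channel axis is
      # last, so alpha must broadcast over the trailing dimension. Getting this
      # axis right is the kind of layout handling the commit above fixes.
      x = np.random.randn(1, 4, 4, 3).astype("float32")   # N, H, W, C
      alpha = np.array([0.1, 0.2, 0.3], dtype="float32")  # one slope per channel
      
      out = np.where(x > 0, x, alpha * x)  # alpha broadcasts over C (last axis)
      print(out.shape)  # (1, 4, 4, 3)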
    • add shard_id (#40261) · 6b7d4845
      Committed by Thunderbrook
      * shard_id
      
      * format
    • Move one hot to phi (#39876) · 7701db37
      Committed by hong
      * move one hot to phi; test=develop
      
      * fix bugs; test=develop
      
      * fix bugs; test=develop
      
      * add infer meta; test=develop
      
      * fix bugs; test=develop
      
      * resolve conflict
      
      * resolve conflict
      
      * fix bug;
      
      * fix error; test=develop
      
      * update; test=develop
      
      * polish code; test=develop
      
      * add one api in eager mode; test=develop
      
      * add one hot test; test=develop
      
      * remove useless code; test=develop
      
      * fix bug; test=develop
      
      * polish code; test=develop
      
      * polish code; test=develop
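      For context, the operator moved here is exposed through the public Python
      API roughly as below (a minimal sketch; the commit migrates the underlying
      C++ kernel to phi, not this Python surface):
      
      import paddle
      import paddle.nn.functional as F
      
      # one_hot expands integer labels into a one-hot encoding.
      labels = paddle.to_tensor([0, 2, 1], dtype="int64")
      encoded = F.one_hot(labels, num_classes=3)
      print(encoded.numpy())
      # [[1. 0. 0.]
      #  [0. 0. 1.]
      #  [0. 1. 0.]]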
    • [Phi] Move Tanh/BRelu/LeakyRelu/ThresholdedRelu Kernels to Phi (#40385) · d7112180
      Committed by YuanRisheng
      * move activation op
      
      * adjust code format
      
      * fix compile bugs
      
      * fix ci bugs
      
      * code format adjust
      
      * code format adjust2
      
      * activate ci status
      
      * modify according to comment
      
      * move activation kernel
      
      * revert relu6
      
      * reduce add code
      
      * perfect use_phi_functor
      
      * completing func name
      
      * fix bugs when running ci
      
      * fix bugs when running infrt
      
      * modify infrt get kernel signature
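      The migrated activations are simple elementwise functions; a minimal sketch
      of their semantics via the public paddle API (illustrative only, not the
      moved C++ kernels):
      
      import paddle
      import paddle.nn.functional as F
      
      x = paddle.to_tensor([-2.0, -0.5, 0.5, 2.0])
      
      print(paddle.tanh(x))                            # tanh(x)
      print(F.leaky_relu(x, negative_slope=0.1))       # x if x > 0 else 0.1 * x
      print(F.thresholded_relu(x, threshold=1.0))      # x if x > 1.0 else 0
      # brelu clamps to [t_min, t_max]; paddle.clip shows the same effect:
      print(paddle.clip(x, min=0.0, max=1.0))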
  8. 14 Mar 2022, 3 commits
    • Add an elementwise + activation fusion pass. (#36541) · 3f219160
      Committed by Tomasz Socha
      * Add elementwise add and activation fuse pass
      
      * Fix copy elision
      
      * More flexible pattern detector
      
      * More flexible fusion pass
      
      * Update lists for pass
      
      * Add support for Pow operator
      
      * Add support for more activation types
      
      * Style
      
      * Rename fusion pass
      
      * First version of tests
      
      * Dirty version of pass
      
      * Polished version
      
      * Update pbtxt
      
      * Style
      
      * Update names
      
      * Style
      
      * Use PADDLE_ENFORCE_EQ
      
      * Save error message to variable
      
      * WO for error checks
      
      * CR
      
      * Static style check
      
      * Add missing 'activation_scale' attribute
      
      * Add relu6 and sigmoid activations
      
      * Style
      
      * Fix fuse list formatting
      
      * Sync filenames for fuse pass files
      
      * Fix cmake after move
      
      * Fix registration
      
      * Fix pass name in tests
      
      * Add missing activations to checker
      
      * WIPS
      
      * Working mul op
      
      * Working sub
      
      * Working Add
      
      * Remove pten includes
      
      * Remove some forward declarations
      
      * Remove Includes
      
      * Fixes
      
      * Remove default kernels
      
      * Add check if post_ops attributes are available
      
      * Style
      
      * Code adjustment
      
      * Register default kernels
      
      * We have year 2022 not 2021...
      Co-authored-by: jakpiase <jakpia21@gmail.com>
      Co-authored-by: Sylwester Fraczek <sylwester.fraczek@intel.com>
      
      * Fast review fixes
      Co-authored-by: jakpiase <jakpia21@gmail.com>
      Co-authored-by: Sylwester Fraczek <sylwester.fraczek@intel.com>
      
      * Review Fix
      
      * Rename one_dnn -> onednn
      
      * Style after review
      
      * Fast and dirty fix for quantization
      
      * Update tests
      
      * Style
      
      * Fix mkldnn_quantizer config
      
      * Add Joanna's suggestion.
      
      * Check if operator is explicitly disabled on oneDNN
      
      * Try to use unregistered attributes
      
      * Style
      
      * Test new framework
      
      * FXI
      
      * FXII
      
      * Update test
      
      * Style
      Co-authored-by: jakpiase <jakpia21@gmail.com>
      Co-authored-by: Sylwester Fraczek <sylwester.fraczek@intel.com>
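      Conceptually, the pass above rewrites subgraphs of the form
      act(elementwise_op(x, y)) into one fused oneDNN kernel; a minimal sketch of
      the unfused pattern it targets (plain paddle code, purely illustrative):
      
      import paddle
      import paddle.nn.functional as F
      
      x = paddle.rand([4, 8])
      y = paddle.rand([4, 8])
      
      # Unfused form: two operators, two passes over memory.
      out = F.relu(paddle.add(x, y))
      
      # After the pass (inside the inference graph, not user code), oneDNN runs
      # the addition and the activation as a single kernel via post-ops.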
    • Support custom op and paddle.autograd.backward in eager (#40423) · 227fa408
      Committed by Jiabin Yang
      * eager, test=develop
      
      * fix bug, test=develop
      
      * eager, test=develop
      
      * merge legacy to fluid
      
      * eager, test=develop
      
      * eager, test=develop
      
      * Refactor TensorAdd func by template and remove gradient_accumulation in eager
      
      * Remove needless target name
      
      * eager, test=develop
      
      * eager, test=develop
      
      * Use overload instead of template
      
      * Remove legacy code
      
      * Remove legacy code
      
      * selectedrows, test=develop
      
      * Remove DataType test
      
      * eager, test=develop
      
      * eager, test=develop
      
      * support gan, test=develop
      
      * Using Tensor directly instead of using EagerTensor
      
      * support gradient_accumulation
      
      * make test_imperative_lod_tensor_to_selected_rows longer
      
      * make test_imperative_lod_tensor_to_selected_rows longer
      
      * refine code
      
      * ptb, test=develop
      
      * Rename all EagerTensor to Tensor
      
      * Rename some EagerTensor to Tensor
      
      * rename EagerTensor to EagerVariable
      
      * eager, test=develop
      
      * eager, test=develop
      
      * eager, test=develop
      
      * eager, test=develop
      
      * add more test
      
      * eager, test=develop
      
      * Support copiable selected rows and merge develop
      
      * save load, eager, test=develop
      
      * save load, eager, test=develop
      
      * refine, test=develop
      
      * remove useless _set_value method
      
      * refine, test=develop
      
      * refine, test=develop
      
      * revert static_runner, test=develop
      
      * EagerTensor to Tensor, test=develop
      
      * refine, test=develop
      
      * refine, test=develop
      
      * clear grad, test=develop
      
      * merge, develop
      
      * merge, develop
      
      * merge, test=develop
      
      * merge, test=develop
      
      * Support quant and part of slice
      
      * support legacy static save
      
      * extend slim tests time
      
      * remove imperative on inference
      
      * remove imperative on inference
      
      * merge develop
      
      * fix typo
      
      * fix typo
      
      * split slice related code into 2 part for imperative and eager
      
      * split slice from inference
      
      * split slice from inference
      
      * fix test_tensor_register_hook
      
      * support custom op in eager mode
      
      * fix inference deps error
      
      * split eager utils from custom operator
      
      * fix type match
      
      * fix typo
      Co-authored-by: Wang Huan <wanghuan29@baidu.com>
      Co-authored-by: Weilong Wu <veyron_wu@163.com>
      Co-authored-by: wanghuancoder <wanghuancoder@163.com>
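      paddle.autograd.backward, which this commit brings to eager mode, drives the
      reverse pass from a set of output tensors; a minimal usage sketch:
      
      import paddle
      
      x = paddle.to_tensor([1.0, 2.0, 3.0], stop_gradient=False)
      y = (x * x).sum()            # y = sum(x^2), so dy/dx = 2x
      
      paddle.autograd.backward([y])
      print(x.grad)                # [2., 4., 6.]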
    • Move Pool OPs to phi (#40208) · 88ec08a7
      Committed by From00
      * Move Pool OPs to phi
      
      * Fix CI error
      
      * Fix conflicts
  9. 13 Mar 2022, 1 commit
  10. 12 Mar 2022, 1 commit
  11. 11 Mar 2022, 4 commits
  12. 10 Mar 2022, 1 commit
  13. 09 Mar 2022, 2 commits
  14. 08 Mar 2022, 3 commits
  15. 07 Mar 2022, 1 commit
    • cuBlasLt Epilogue To Fuse Linear + ReLU|GeLU (#39437) · 2a3d9eca
      Committed by Ming-Xu Huang
      * Added cuBlasLtHandle_t to device context.
      
      * Added fused_gemm_epilogue op.
      
      1. Added fused_gemm_epilogue op to leverage cuBlasLt Epilogue.
      2. Support fusing Act(X*Y + bias), where X's dims must be >= 2 and Y's
      dims must be 2.
      3. Currently only ReLU is supported as Act (GeLU will be added in the future).
      
      * Added UT to fused_gemm_epilogue op.
      
      * Added LinearAct Pattern
      
      1. Added LinearAct into graph_pattern_detector.* to define the
      pattern described in (2).
      2. LinearAct is used to detect act(element_add(matmul_v2(x, w), bias)).
      3. act currently only supports ReLU (GeLU will be supported in the future).
      
      * Added FuseGemmEpiloguePass
      
      1. Added FuseGemmEpiloguePass to handle nn.Linear + Act{ReLU}
      fusion (GeLU will be supported in the future).
      2. Only matmul_v2 from nn.Linear is supported.
      
      * Added pybind to BuildStrategy.fuse_gemm_epilogue_.
      
      * Added UT for fuse_gemm_epilogue_pass.
      
      * GeLU support and EpilogueSingleton
      
      1. Added GeLU support to fused_gemm_epilogue op.
      2. Added EpilogueSingleton to cache auxiliary pointer.
      3. Added related UTs.
      
      * Renamed cublaslt_epilogue_op.* to gemm_epilogue_op.*.
      
      * Added both train and infer pattern to LinearAct.
      
      1. Added support of fwd graph with grad_ops linking to LinearAct.
      2. Added related changes to fuse_gemm_epilogue_pass for the above
      modification.
      
      * Changed CUDA requirement from 11.4 to 11.6 for fuse_gemm_epilogue_pass.
      
      * Added identity activation support to gemm_epilogue_op.
      
      * Added Linear Fusion (matmul_v2 + ele_add)
      
      1. Added matmul_v2 + ele_add pattern to LinearActPattern.
      2. Added matmul_v2 + ele_add support to fuse_gemm_epilogue_pass.
      
      * Rename gemm_epilogue_op.* to fused_gemm_epilogue_op.*
      
      * Add fused_gemm_epilogue_grad op.
      
      1. Added fused_gemm_epilogue_grad to support backward epilogue fusion.
      
      * Add UTs to fused_gemm_epilogue_grad_op.
      
      * Change attribute name in fused_gemm_epilogue_grad_op for clarity.
      
      * Allow DX and DBias to be dispensable to fused_gemm_epilogue_grad op.
      
      * Added ElementwiseAdd+Matmul+Act graph pattern detection.
      
      * Fuse backward of Linear( Act(x))
      
      1. Added backward fusion pass to Linear( Act(x)).
      2. Added backward fusion pass to Linear(x).
      
      * Added UTs to backward fusion of Linear(Act(x)).
      
      * Complete document of arguments to fused_gemm_epilogue_op.
      
      * Made arguments of some functions pass by reference.
      
      * Modify code with review comments.
      
      1. Made arguments of some function pass by reference.
      2. Removed redundant code.
      3. Followed Google code style to change code.
      
      * Made 'const' code style consistent
      
      * Fixed random seed of python UTs.
      
      * Set compiling constraints for cuBlasLt
      
      1. Require CUDA 11.6+
      2. Remove fuse_gemm_epilogue related tests when CUDA < 11.6.
      
      * Code Review from Paddle
      
      1. Changed argument name is_first_gemm to without_x_gradient for
      clarity.
      2. Applied PADDLE_THROW in fused_gemm_epilogue_op.
      
      * Remove EpilogueSingleton
      
      1. Applied ReserveSpace to replace Epilogue for passing auxiliary
      pointers between FWD and BWD.
      
      * Fix a logical error and enhance UTs.
      
      1. Added act op count checking in UTs.
      2. Fix issue to fuse backward of ReLU(Linear(X)).
      3. TODO: solve GELU fusion issues.
      
      * Fix Linear and GeLU fusion issues.
      
      1. Modified graph_pattern_detector to fit linear with either gelu or
      relu.
      2. Modified data range in UTs to allow negative values.
      
      * Removed fused_gemm_epilogue_op.h.
      
      * Rename namespace pten to phi.
      
      * Rename name of arguments in fused_gemm_epilogue_op
      
      1. bias -> Bias.
      2. out -> Out.
      3. reserve_space -> ReserveSpace.
      
      * Changed EpiloguePassActivationCache to a local variable.
      
      1. Removed singleton in EpiloguePassActivationCache.
      2. Made EpiloguePassActivationCache an argument to each pass
      function.
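      The fusion target is the common Linear + activation block; a minimal sketch
      of the pattern in user code, with the BuildStrategy switch taken from the
      pybind change above (attribute name assumed from that entry):
      
      import paddle
      import paddle.nn as nn
      
      # The subgraph the pass fuses: Act(matmul_v2(x, W) + bias),
      # i.e. an nn.Linear followed by ReLU or GeLU.
      model = nn.Sequential(nn.Linear(64, 32), nn.ReLU())
      out = model(paddle.rand([8, 64]))  # eager run; fusion applies to compiled graphs
      
      # Exposed through BuildStrategy (requires CUDA 11.6+):
      build_strategy = paddle.static.BuildStrategy()
      build_strategy.fuse_gemm_epilogue = True  # assumed attribute per the pybind entry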