1. May 10, 2022 · 1 commit
  2. Apr 26, 2022 · 1 commit
    • [PaddlePaddle Hackathon 2] No. 29: Add the PixelUnshuffle layer API to Paddle (#40728) · 5be9b824
      Committed by BrilliantYuKaimin
      * Add shape inference for PixelUnshuffle

      * Add operator registration for PixelUnshuffle

      * Add forward and gradient kernels for PixelUnshuffle

      * Add a description for the PixelUnshuffle operator

      * Add a kernel signature for the PixelUnshuffle operator

      * Add PixelUnshuffle at the Python level

      * Add unit tests for PixelUnshuffle
      
      * Update test_pixel_unshuffle.py
      
      * test=document_fix
      
      * Update test_pixel_unshuffle.py
      
      Add a test for extra_repr

      * Fix code format

      * Update test_pixel_unshuffle.py

      Fix the test for extra_repr

      * Move the implementation of the pixel_unshuffle kernels

      * Fix code format

      * Improve input checks

      * Update test_pixel_unshuffle.py

      * Improve input checks for pixel_unshuffle
      
      * Update pixel_unshuffle_op.cc
      
      * Update unary.cc
      
      * add pixel_unshuffle
      
      * Update test_pixel_unshuffle.py
      
      * Update vision.py
      
      * Adjust code format
      
      * Update vision.py
      
      * Delete extra spaces
      
      * Update pixel_unshuffle_sig.cc
      
      * Update vision.py
      
      * Update vision.py
      
      * add PixelUnshuffleGradInferMeta
      
      * remove PixelUnshuffleOpArgumentMapping
      
      * Update pixel_unshuffle_op.cc
      
      * Move the pixel_unshuffle forward and gradient kernel implementations
      
      * Update pixel_unshuffle_op.cc
      5be9b824
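
      A minimal usage sketch of the new API (shapes and values here are
      illustrative, not from the PR): PixelUnshuffle rearranges a tensor of
      shape [N, C, H*r, W*r] into [N, C*r*r, H, W].

          import paddle

          x = paddle.randn([2, 3, 12, 12])
          # Layer form added by this PR
          unshuffle = paddle.nn.PixelUnshuffle(downscale_factor=3)
          y = unshuffle(x)   # shape: [2, 27, 4, 4]
          # Functional form added by this PR
          y2 = paddle.nn.functional.pixel_unshuffle(x, downscale_factor=3)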
  3. Apr 25, 2022 · 1 commit
    • [PaddlePaddle Hackathon 2] No. 24: Add the nn.ChannelShuffle layer API to Paddle (#40743) · bbaaf217
      Committed by BrilliantYuKaimin
      * Add infermeta for ChannelShuffle
      
      * Create channel_shuffle_grad_kernel.h
      
      * Create channel_shuffle_kernel.h
      
      * Create channel_shuffle_sig.cc
      
      * Create channel_shuffle_op.cc
      
      Describe the ChannelShuffle operator

      * Create channel_shuffle_kernel_impl.h

      Implement the ChannelShuffle kernel

      * Create channel_shuffle_grad_kernel_impl.h

      Implement the ChannelShuffle backward kernel

      * Add kernel registration for channel_shuffle and its grad

      Register the ChannelShuffle forward and backward kernels
      
      * add nn.functional.channel_shuffle
      
      * add nn.ChannelShuffle
      
      * Create test_channel_shuffle.py
      
      * Update example of ChannelShuffle in vision.py
      
      * Update test_channel_shuffle.py
      
      * Move the implementation of the channel_shuffle kernels

      * Fix code format

      * Remove extra spaces

      * Improve error checking for channel_shuffle
      
      * Update unary.cc
      
      * Update channel_shuffle_op.cc
      
      * Update test_channel_shuffle.py
      
      * Update unary.cc
      
      * add channel_shuffle
      
      * Update test_channel_shuffle.py
      
      * Update vision.py
      
      * Adjust code format
      
      * Update channel_shuffle_sig.cc
      
      * Update the ChannelShuffle docs

      * Update the channel_shuffle docs
      
      * remove ChannelShuffleOpArgumentMapping
      
      * add ChannelShuffleGradInferMeta
      
      * Update channel_shuffle_op.cc
      
      * Move the channel_shuffle forward and gradient kernels
      bbaaf217
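
      A minimal usage sketch of the new API (shapes are illustrative):
      ChannelShuffle permutes channels across groups while keeping the
      tensor shape unchanged.

          import paddle

          x = paddle.randn([2, 8, 4, 4])
          shuffle = paddle.nn.ChannelShuffle(groups=4)
          y = shuffle(x)     # shape unchanged: [2, 8, 4, 4]
          y2 = paddle.nn.functional.channel_shuffle(x, groups=4)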
  4. Apr 20, 2022 · 1 commit
    • [PaddlePaddle Hackathon 2] No. 9: Add the logspace API to Paddle (#41261) · a3c50c42
      Committed by BrilliantYuKaimin
      * Add an operator description for logspace

      * Add shape inference for logspace

      * Add kernel implementations for logspace

      * Add the logspace interface in Python

      * Add unit tests for logspace

      * Add logspace
      
      * Update logspace_kernel.cu
      
      * Update logspace_op.cc
      
      * Adjust code format
      
      * Update doc of logspace
      
      * Update tensor.py
      
      * Update logspace_op.cc
      
      * Update logspace_kernel.cc
      
      * Update logspace_kernel.cu
      
      * Update test_logspace.py
      
      * Move logspace

      * Adjust code format
      a3c50c42
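
      A minimal usage sketch of the new API (values are illustrative):
      logspace returns num values evenly spaced on a log scale, from
      base**start to base**stop.

          import paddle

          out = paddle.logspace(0, 3, 4, 10.0)
          # -> [1., 10., 100., 1000.]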
  5. Apr 05, 2022 · 1 commit
  6. Mar 07, 2022 · 1 commit
    • cuBlasLt Epilogue To Fuse Linear + ReLU|GeLU (#39437) · 2a3d9eca
      Committed by Ming-Xu Huang
      * Added cuBlasLtHandle_t to device context.
      
      * Added fused_gemm_epilogue op.
      
      1. Added fused_gemm_epilogue op to leverage the cuBlasLt Epilogue.
      2. Supports fusing Act(X*Y + bias), where X's dims >= 2 and Y's dims should be 2.
      3. Act currently supports only ReLU (GeLU will be added in the future).
      
      * Added UT to fused_gemm_epilogue op.
      
      * Added LinearAct Pattern
      
      1. Added LinearAct to graph_pattern_detector.* to define the pattern in (2).
      2. LinearAct is used to detect act(elementwise_add(matmul_v2(x, w), bias)).
      3. act currently supports only ReLU (GeLU will be supported in the future).
      
      * Added FuseGemmEpiloguePass
      
      1. Added FuseGemmEpiloguePass to handle nn.Linear + Act{ReLU}
      fusion (GeLU will be supported in the future).
      2. Only matmul_v2 from nn.Linear is supported.
      
      * Added pybind for BuildStrategy.fuse_gemm_epilogue_.
      
      * Added UT for fuse_gemm_epilogue_pass.
      
      * GeLU support and EpilogueSingleton
      
      1. Added GeLU support to fused_gemm_epilogue op.
      2. Added EpilogueSingleton to cache auxiliary pointer.
      3. Added related UTs.
      
      * Renamed cublaslt_epilogue_op to gemm_epilogue_op.*.
      
      * Added both train and infer patterns to LinearAct.

      1. Added support for fwd graphs with grad_ops linking to LinearAct.
      2. Added related changes to fuse_gemm_epilogue_pass for the above
      modification.
      
      * Changed CUDA requirement from 11.4 to 11.6 for fuse_gemm_epilogue_pass.
      
      * Added identity activation support to gemm_epilogue_op.
      
      * Added Linear Fusion (matmul_v2 + ele_add)
      
      1. Added matmul_v2 + ele_add pattern to LinearActPattern.
      2. Added matmul_v2 + ele_add support to fuse_gemm_epilogue_pass.
      
      * Rename gemm_epilogue_op.* to fused_gemm_epilogue_op.*
      
      * Add fused_gemm_epilogue_grad op.
      
      1. Added fused_gemm_epilogue_grad to support backward epilogue fusion.
      
      * Add UTs to fused_gemm_epilogue_grad_op.
      
      * Changed an attribute name in fused_gemm_epilogue_grad_op for clarity.
      
      * Allow DX and DBias to be dispensable in the fused_gemm_epilogue_grad op.
      
      * Added ElementwiseAdd+Matmul+Act graph pattern detection.
      
      * Fuse backward of Linear(Act(x))

      1. Added a backward fusion pass for Linear(Act(x)).
      2. Added a backward fusion pass for Linear(x).
      
      * Added UTs to backward fusion of Linear(Act(x)).
      
      * Completed documentation of the arguments to fused_gemm_epilogue_op.
      
      * Made arguments of some functions pass by reference.
      
      * Addressed review comments.

      1. Made arguments of some functions pass by reference.
      2. Removed redundant code.
      3. Adjusted code to follow the Google code style.
      
      * Made 'const' code style consistent
      
      * Fixed random seed of python UTs.
      
      * Set compiling constraints for cuBlasLt

      1. Require CUDA 11.6+.
      2. Remove fuse_gemm_epilogue related tests when CUDA < 11.6.
      
      * Code review from Paddle

      1. Renamed the argument is_first_gemm to without_x_gradient for
      clarity.
      2. Applied PADDLE_THROW in fused_gemm_epilogue_op.
      
      * Remove EpilogueSingleton
      
      1. Applied ReserveSpace to replace Epilogue for passing auxiliary
      pointers between FWD and BWD.
      
      * Fix a logical error and enhance UTs.

      1. Added act op count checking in UTs.
      2. Fixed an issue fusing the backward of ReLU(Linear(X)).
      3. TODO: solve GELU fusion issues.
      
      * Fix Linear and GeLU fusion issues.

      1. Modified graph_pattern_detector to fit linear followed by either
      gelu or relu.
      2. Modified the data range in UTs to allow negative values.
      
      * Removed fused_gemm_epilogue_op.h.
      
      * Rename namespace pten to phi.
      
      * Renamed arguments in fused_gemm_epilogue_op

      1. bias -> Bias.
      2. out -> Out.
      3. reserve_space -> ReserveSpace.
      
      * Changed EpiloguePassActivationCache to a local variable.

      1. Removed the singleton in EpiloguePassActivationCache.
      2. Passed EpiloguePassActivationCache as an argument to each pass
      function.
      2a3d9eca
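
      For reference, a sketch of the dygraph equivalent of the pattern this
      pass fuses, plus the build flag the PR exposes via pybind (the flag
      usage is an assumption based on the commit notes; the pass itself
      requires CUDA 11.6+):

          import paddle
          import paddle.nn.functional as F

          # Unfused pattern targeted by the pass: Act(matmul_v2(X, W) + bias)
          x = paddle.randn([16, 128])
          w = paddle.randn([128, 64])
          b = paddle.randn([64])
          y = F.relu(paddle.matmul(x, w) + b)

          # Assumed static-graph switch from this PR
          build_strategy = paddle.static.BuildStrategy()
          build_strategy.fuse_gemm_epilogue = True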
  7. Mar 01, 2022 · 1 commit
  8. Dec 30, 2021 · 1 commit
  9. Dec 24, 2021 · 1 commit
  10. Dec 16, 2021 · 1 commit
    • Add tests for PaddleInference Pass (#37676) · 96597a85
      Committed by yeliang2258
      * add test for conv_elementwise_add2_act_fuse_pass and conv_elementwise_add_act_fuse_pass
      
      * Add conv_eltwiseadd_bn_fuse_pass test and fix test_conv_elementwise_addX_act_fuse_pass
      
      * add tests for conv_act_mkldnn_fuse_pass
      
      * add test for conv_bias_mkldnn_fuse_pass
      
      * update code
      
      * add conv_act_mkldnn_fuse_pass for relu, relu6, swish, leaky_relu
      
      * update test
      
      * update
      
      * fix bug
      
      * update
      
      * update pattern_detector
      
      * fix test_conv_eltwiseadd_bn_fuse_pass
      
      * add diff display notest;test=windows_ci_inference
      
      * fix
      
      * remove test_conv_act_mkldnn_fuse_pass.py
      
      * fix
      96597a85
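
      For reference, a sketch of the dygraph equivalent of the subgraph
      these fuse passes match (conv2d -> elementwise_add -> activation;
      shapes are illustrative):

          import paddle
          import paddle.nn.functional as F

          x = paddle.randn([1, 3, 32, 32])
          conv = paddle.nn.Conv2D(3, 16, kernel_size=3, padding=1)
          residual = paddle.randn([1, 16, 32, 32])
          y = F.relu(conv(x) + residual)   # conv + elementwise_add + act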
  11. Dec 14, 2021 · 1 commit
    • add layer_norm_fuse_pass test case (#37830) · b95c9cf2
      Committed by heliqi
      * add layer_norm_fuse_pass test case
      
      * restore cmakelist code
      
      * Merge branch 'develop' into layer_norm_fuse_pass
      
      * Merge branch 'develop' into layer_norm_fuse_pass
      
      * add bad case test
      b95c9cf2
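
      For reference, a sketch of the decomposed subgraph that
      layer_norm_fuse_pass collapses into a single layer_norm op (the
      epsilon value is illustrative):

          import paddle

          x = paddle.randn([4, 32])
          gamma, beta = paddle.ones([32]), paddle.zeros([32])
          mean = x.mean(axis=-1, keepdim=True)
          var = ((x - mean) ** 2).mean(axis=-1, keepdim=True)
          y = (x - mean) / paddle.sqrt(var + 1e-5) * gamma + beta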
  12. Nov 23, 2021 · 1 commit
  13. Oct 27, 2021 · 1 commit
    • Added fp32 / bf16 forward and backward elementwise_div_mkldnn operator (#36158) · e92e6b06
      Committed by piotrekobiIntel
      * Add WIP version of elementwise_div_mkldnn without working dy grad
      
      * Add dy gradient calculation implementation, disable broadcast tests
      
      * Re-add tests removed from static_mode_white_list
      
      * Add bfloat16 gradient tests, remove int8 and uint8 support
      
      * - Change the way dy grad is calculated to improve performance
      - Refactor BinaryMKLDNNHandler to use a default parameter
      
      * Change copyright year
      
      * Refactor as suggested
      
      * Attempt to bypass CI Approval
      not accepting max_relative_error
      
      * Fix formatting issue
      e92e6b06
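
      The gradients this kernel implements follow from z = x / y:
      dx = dz / y and dy = -dz * x / y**2. A small dygraph check (runs on
      whatever kernel Paddle selects, not necessarily the oneDNN one):

          import paddle

          x = paddle.to_tensor([2.0, 4.0], stop_gradient=False)
          y = paddle.to_tensor([1.0, 2.0], stop_gradient=False)
          z = x / y
          z.backward(paddle.ones_like(z))
          print(x.grad)   # 1/y      -> [1.0, 0.5]
          print(y.grad)   # -x/y**2  -> [-2.0, -1.0]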
  14. Sep 24, 2021 · 1 commit
    • Added elementwise_sub_mkldnn operator (#35662) · 787273ed
      Committed by piotrekobiIntel
      * Add elementwise_sub_mkldnn_op without grad
      
      * Add test to static_mode_white_list
      
      * Refactor code, change license years
      
      * Remove invalid grad implementation
      
      * Fix element_wise_sub_op test
      
      * Fix CI Approval error
      
      * Remove unnecessary EltwiseSubMKLDNNGradKernel class
      
      * Fix CI Approval 2
      
      * Fix CI Approval 3
      
      * Fix CI Approval Attempt #4
      
      * Fix CI Approval Attempt #5

      * Fix CI Approval Attempt #6

      * Fix CI Approval Attempt #7
      
      * Change test names containing add to sub
      
      * Fix old tests testing add instead of sub
      
      * Copy grad implementation from elementwise_add_mkldnn
      
      * CI test fix attempt
      
      * Revert "CI test fix attempt"
      
      This reverts commit c647cacf41e6a87c715385a185de5cbf65fc8900.
      
      * Fix CI attempt 2
      
      * Fix elementwise_sub tests; temporarily disable the mkldnn broadcast test
      
      * Add working implementation of elementwise_sub grad
      
      * Fix build errors caused by pull
      
      * Fix format error
      
      * Fix format error 2
      
      * Disable elementwise_sub_mkldnn test on GPU
      
      * Apply fix for paddle.fluid import
      
      * Revert changes of test_elementwise_sub and Fix mkldnn test
      
      * Revert "Apply fix for paddle.fluid import"
      
      This reverts commit fc3b122fec8e12f2bcb32928a2685ba4d20fd742.
      
      * fix bug where module 'paddle' has no attribute 'fluid' on Python 3.6 (#35862)
      
      * Add changes suggested by reviewers
      
      * Change @unittest.skipIf... to @OpTestTool.skip_if_not_cpu_bf16() to satisfy Approval CI
      
      * Remove check_dygraph=False to satisify CI Approval
      Co-authored-by: zhangbo9674 <82555433+zhangbo9674@users.noreply.github.com>
      787273ed
  15. Sep 17, 2021 · 1 commit
  16. Sep 14, 2021 · 1 commit
  17. Sep 10, 2021 · 1 commit
  18. Aug 27, 2021 · 1 commit
  19. Aug 18, 2021 · 1 commit
  20. Aug 16, 2021 · 1 commit
  21. Jul 07, 2021 · 1 commit
  22. Jun 30, 2021 · 1 commit
    • Added matmul_v2 BF16/FP32 FWD kernel (#33750) · 24783c84
      Committed by jakpiase
      * added matmul_v2 bf16/fp32 FWD kernel
      
      * added formatting
      
      * removed some tests due to timeout in CI
      
      * refactored tests
      
      * merged tests classes into one file
      
      * minor change
      
      * removed test guard for CUDA
      
      * remove skipIf
      
      * changes after review
      
      * formatted one file
      
      * minor change
      
      * added skipping UT in CUDA place
      24783c84
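
      matmul_v2 is the operator behind paddle.matmul; a minimal FP32 sketch
      (BF16 execution would additionally require oneDNN bfloat16 support on
      the CPU):

          import paddle

          x = paddle.rand([2, 3])
          y = paddle.rand([3, 4])
          out = paddle.matmul(x, y)   # shape: [2, 4]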
  23. Jun 24, 2021 · 1 commit
  24. Jun 23, 2021 · 1 commit
    • Added split op bf16/fp32 oneDNN kernel (#33584) · 68106509
      Committed by jakpiase
      * base changes for split op
      
      * 90% of split functionality added
      
      * full fp32 functionality
      
      * added bf16 test
      
      * added submemory caching
      
      * added bf test to static mode whitelist
      
      * minor change
      
      * enabled split op for inference
      
      * minor fix
      
      * minor fix
      68106509
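
      paddle.split lowers to the split op this kernel covers; a minimal
      sketch (shapes are illustrative):

          import paddle

          x = paddle.rand([3, 9, 5])
          a, b, c = paddle.split(x, num_or_sections=3, axis=1)   # each [3, 3, 5]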
  25. Jun 17, 2021 · 2 commits
  26. Jun 07, 2021 · 1 commit
  27. May 26, 2021 · 2 commits
  28. May 25, 2021 · 1 commit
  29. Apr 29, 2021 · 1 commit
  30. Apr 21, 2021 · 1 commit
  31. Apr 14, 2021 · 2 commits
  32. Mar 30, 2021 · 1 commit
  33. Mar 22, 2021 · 1 commit
  34. Mar 19, 2021 · 1 commit
  35. Mar 04, 2021 · 1 commit
  36. Mar 02, 2021 · 1 commit
    • lamb_op_xpu;test=kunlun (#31012) · d79fdc3d
      Committed by Gradie
      * lamb_op_xpu;test=kunlun
      
      * modify lamb_op_xpu.cc;test=kunlun
      
      * delete atol lamb_op_xpu; test=kunlun
      
      * update xpu.cmake;test=kunlun
      
      * test_error 1e-5,lamb_op_xpu;test=kunlun
      
      * error1e-5,lamb_op_xpu,test=kunlun
      
      * delete atol lamb_xpu;test=kunlun
      
      * modify atol,lamb_op_xpu;test=kunlun
      
      * lamb_op_xpu;test=kunlun
      
      * lamb_op_xpu;test=kunlun
      
      * lamb_op_xpu, XPUOptest;test=kunlun
      
      * lamb_op_xpu;test=kunlun
      
      * lamb_op_xpu;test=kunlun
      
      * lamb_op_xpu;test=kunlun
      
      * lamb_op_xpu;test=kunlun
      
      * lamb_op_xpu;test=kunlun
      
      * lamb_op_xpu;test=kunlun
      
      * lamb_op_xpu;test=kunlun
      
      * lamb_op_xpu;test=kunlun
      
      * lamb_op_xpu;test=kunlun
      
      * lamb_op_xpu;test=kunlun
      
      * lamb_op_xpu;test=kunlun
      
      * lamb_op_xpu,modify xpu_cmake; test=kunlun
      
      * lamb_op_xpu;test=kunlun
      
      * lamb_op_xpu,modify xpucmake;test=kunlun
      d79fdc3d
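
      The lamb op backs paddle.optimizer.Lamb; a minimal sketch (running it
      on a Kunlun XPU is assumed to additionally require
      paddle.set_device('xpu')):

          import paddle

          linear = paddle.nn.Linear(10, 10)
          opt = paddle.optimizer.Lamb(learning_rate=0.002,
                                      lamb_weight_decay=0.01,
                                      parameters=linear.parameters())
          loss = linear(paddle.rand([4, 10])).mean()
          loss.backward()
          opt.step()
          opt.clear_grad()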
  37. Feb 18, 2021 · 1 commit
    • Add Conv Transpose BF16 (#30877) · caf9d398
      Committed by joanna.wozna.intel
      * Add conv transpose BF16
      
      * Share function GetWeightsTz
      
      * Adjust to review and fix op compatibility
      
      * Add bias to unique handler name
      
      * Remove errors related to paddle enforce
      
      * Add conv2d_transpose to bf16 list and kernel refactor
      caf9d398
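
      conv2d_transpose is the op this BF16 kernel covers; a minimal FP32
      sketch (shapes are illustrative):

          import paddle

          x = paddle.randn([1, 8, 16, 16])
          deconv = paddle.nn.Conv2DTranspose(in_channels=8, out_channels=4,
                                             kernel_size=3, stride=2, padding=1)
          y = deconv(x)   # shape: [1, 4, 31, 31]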