1. 09 March 2022, 4 commits
  2. 08 March 2022, 10 commits
  3. 07 March 2022, 8 commits
    • [OpTest] Support to test paddle API end-to-end for check_eager (#40169) · 79a32715
      Committed by xiongkun
      * add python api test in TestOp
      
      * test_python_api if self.python_api is set
      
      * fix code by CR
      79a32715
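The idea in the commit above — when a test sets `python_api`, run the user-facing Python API end-to-end and compare it against the raw operator — can be sketched in plain Python. The `scale_op`, `scale_python_api`, and `check_python_api` names below are hypothetical stand-ins, not Paddle's actual OpTest code:

```python
# Hypothetical sketch of the OpTest idea: a test that sets `python_api`
# gets its Python API executed and checked against the operator kernel.

def scale_op(inputs, attrs):
    # Stand-in for an operator kernel: out = x * scale.
    return [x * attrs["scale"] for x in inputs["X"]]

def scale_python_api(x, scale=1.0):
    # Stand-in for the user-facing Python API wrapping the same kernel.
    return [v * scale for v in x]

def check_python_api(op, api, inputs, attrs, tol=1e-6):
    """Run both paths and verify they agree, check_eager-style."""
    op_out = op(inputs, attrs)
    api_out = api(inputs["X"], **attrs)
    assert len(op_out) == len(api_out)
    assert all(abs(a - b) < tol for a, b in zip(op_out, api_out))
    return api_out

result = check_python_api(scale_op, scale_python_api,
                          {"X": [1.0, 2.0, 3.0]}, {"scale": 2.0})
```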
    • refactor unittest for nearest_interp_v2_op_xpu. test=kunlun (#39804) · c09adab8
      Committed by houj04
      * refactor unittest for nearest_interp_v2_op_xpu. test=kunlun
      
      * fix code style. test=kunlun
      
      * fix code style. test=kunlun
      c09adab8
    • cuBlasLt Epilogue To Fuse Linear + ReLU|GeLU (#39437) · 2a3d9eca
      Committed by Ming-Xu Huang
      * Added cuBlasLtHandle_t to device context.
      
      * Added fused_gemm_epilogue op.
      
      1. Added fused_gemm_epilogue op to leverage cuBlasLt Epilogue.
      2. Supports fusing Act(X*Y + bias), where X's dims >= 2 and Y's dims should be 2.
      3. Act currently only supports ReLU (GeLU will be added in the future).
      
      * Added UT to fused_gemm_epilogue op.
      
      * Added LinearAct Pattern
      
      1. Added LinearAct into graph_pattern_detector.* to define (2.)'s
      pattern.
      2. LinearAct is used to detect act(element_add(matmul_v2(x, w), bias)).
      3. act currently only supports ReLU (GeLU will be supported in the future).
      
      * Added FuseGemmEpiloguePass
      
      1. Added FuseGemmEpiloguePass to handle nn.Linear + Act{ReLU}
      fusion (GeLU will be supported in the future).
      2. Only matmul_v2 from nn.Linear is supported.
      
      * Added pybind to BuildStrategy.fuse_gemm_epilogue_.
      
      * Added UT for fuse_gemm_epilogue_pass.
      
      * GeLU support and EpilogueSingleton
      
      1. Added GeLU support to fused_gemm_epilogue op.
      2. Added EpilogueSingleton to cache auxiliary pointer.
      3. Added related UTs.
      
      * Renamed cublaslt_epilogue_op to gemm_epilogue_op.*.
      
      * Added both train and infer pattern to LinearAct.
      
      1. Added support for the fwd graph with grad_ops linking to LinearAct.
      2. Added related changes to fuse_gemm_epilogue_pass for above
      modification.
      
      * Changed CUDA requirement from 11.4 to 11.6 for fuse_gemm_epilogue_pass.
      
      * Added identity activation support to gemm_epilogue_op.
      
      * Added Linear Fusion (matmul_v2 + ele_add)
      
      1. Added matmul_v2 + ele_add pattern to LinearActPattern.
      2. Added matmul_v2 + ele_add support to fuse_gemm_epilogue_pass.
      
      * Rename gemm_epilogue_op.* to fused_gemm_epilogue_op.*
      
      * Add fused_gemm_epilogue_grad op.
      
      1. Added fused_gemm_epilogue_grad to support backward epilogue fusion.
      
      * Add UTs to fused_gemm_epilogue_grad_op.
      
      * Changed attribute name in fused_gemm_epilogue_grad_op for clarity.
      
      * Allow DX and DBias to be dispensable in fused_gemm_epilogue_grad op.
      
      * Added ElementwiseAdd+Matmul+Act graph pattern detection.
      
      * Fuse backward of Linear( Act(x))
      
      1. Added backward fusion pass to Linear( Act(x)).
      2. Added backward fusion pass to Linear(x).
      
      * Added UTs to backward fusion of Linear(Act(x)).
      
      * Completed documentation of arguments to fused_gemm_epilogue_op.
      
      * Made arguments of some functions pass by reference.
      
      * Modify code with review comments.
      
      1. Made arguments of some functions pass by reference.
      2. Removed redundant code.
      3. Followed Google code style to change code.
      
      * Made 'const' code style consistent.
      
      * Fixed random seed of python UTs.
      
      * Set compiling constraints for cuBlasLt
      
      1. Require CUDA 11.6+
      2. Remove fuse_gemm_epilogue related tests when CUDA < 11.6.
      
      * Code review from Paddle
      
      1. Renamed argument is_first_gemm to without_x_gradient for
      clarity.
      2. Applied PADDLE_THROW in fused_gemm_epilogue_op.
      
      * Remove EpilogueSingleton
      
      1. Applied ReserveSpace to replace Epilogue for passing auxiliary
      pointers between FWD and BWD.
      
      * Fix a logical error and enhance UTs.
      
      1. Added act op count checking in UTs.
      2. Fixed issue fusing the backward of ReLU(Linear(X)).
      3. TODO: solve GELU fusion issues.
      
      * Fix Linear and GeLU fusion issues.
      
      1. Modified graph_pattern_detector to fit linear with either gelu
      or relu.
      2. Modified data range in UTs to allow negative values.
      
      * Removed fused_gemm_epilogue_op.h.
      
      * Rename namespace pten to phi.
      
      * Rename name of arguments in fused_gemm_epilogue_op
      
      1. bias -> Bias.
      2. out -> Out.
      3. reserve_space -> ReserveSpace.
      
      * Changed EpiloguePassActivationCache to a local variable.
      
      1. Removed singleton in EpiloguePassActivationCache.
      2. Made EpiloguePassActivationCache an argument to each pass
      function.
      2a3d9eca
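The fusion target the commit above describes, Act(X*Y + bias), can be illustrated with a minimal pure-Python sketch. The function names are illustrative only; the real pass rewrites the graph so a single cuBlasLt kernel computes the whole chain on GPU:

```python
# Illustrative sketch of the computation the pass fuses:
# out = Act(matmul(X, Y) + bias), with Act in {ReLU, GeLU, identity}.

def matmul(x, y):
    # x: m x k, y: k x n, as nested lists.
    return [[sum(x[i][t] * y[t][j] for t in range(len(y)))
             for j in range(len(y[0]))] for i in range(len(x))]

def relu(v):
    return max(v, 0.0)

def fused_gemm_epilogue(x, y, bias, act=relu):
    # Unfused reference: three ops (matmul_v2 -> elementwise_add -> act).
    # The pass replaces this chain with one cuBlasLt epilogue call.
    z = matmul(x, y)
    return [[act(z[i][j] + bias[j]) for j in range(len(bias))]
            for i in range(len(z))]

out = fused_gemm_epilogue([[1.0, -2.0]], [[1.0, 0.0], [0.0, 1.0]], [0.5, 0.5])
```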
    • fix_conv2d_trt_convert_test_case (#39882) · d255bfe0
      Committed by JingZhuangzhuang
      * fix_conv2d_trt_convert_test_case
      
      * fix_conv2d_trt_convert_test_case
      
      * fix_conv2d_trt_convert_test_case
      
      * fix_conv2d_trt_convert_test_case
      d255bfe0
    • [bf16] add bf16 kernel: gaussian_random fill_constant fill_any_like (#40027) · 6a0d60d2
      Committed by zhangbo9674
      * add gaussian random
      
      * add full
      
      * refine reduce
      
      * refine code
      
      * refine gaussian_random unittest
      
      * add unittest for fill_any_like fill_constant
      6a0d60d2
    • [AutoParallel]engine support pp (#40084) · 71cb016c
      Committed by zhaoyingli
      * engine support pp
      
      * fix format
      
      * avoid multi print
      
      * fix convert
      
      * bug fix
      
      * add pp unittest
      71cb016c
    • [bf16] add bf16 kernel: sigmoid & sqrt & softplus & square (#40004) · 98c427e2
      Committed by zhangbo9674
      * add activ
      
      * refine unittest
      
      * refine unittest
      
      * refine unittest
      
      * refine unittest
      
      * refine code
      98c427e2
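For reference, the activations named in the commit above compute the following (a plain-Python fp64 sketch; the commit's contribution is registering bf16 variants of these kernels, not the math itself, and the `beta`/`threshold` parameters shown are the common softplus generalization, used here only for illustration):

```python
import math

def sigmoid(x):
    # sigmoid(x) = 1 / (1 + e^-x)
    return 1.0 / (1.0 + math.exp(-x))

def softplus(x, beta=1.0, threshold=20.0):
    # softplus(x) = log(1 + e^(beta*x)) / beta, with a numerically
    # stable shortcut: for large beta*x, softplus(x) ~= x.
    bx = beta * x
    if bx > threshold:
        return x
    return math.log1p(math.exp(bx)) / beta

# sigmoid, softplus, sqrt, square at sample points:
vals = [sigmoid(0.0), softplus(0.0), math.sqrt(4.0), 3.0 ** 2]
```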
    • initialize processgroupnccl with store (#40181) · 0ad25fb9
      Committed by lilong12
      0ad25fb9
  4. 05 March 2022, 1 commit
    • Ps optimizer multi programs (#39883) · bcaf88d2
      Committed by wangguanqun
      * fix benchmark and communicator config
      
      * fix bugs of the_one_ps
      
      * multi program and fix bug in optimizer
      
      * multi program in the_one_ps
      
      * public commcontext
      
      * ps optimizer multi programs
      
      * the one ps merge
      
      * fix bug in test
      bcaf88d2
  5. 04 March 2022, 7 commits
    • Move yolo box to phi (#40112) · faece382
      Committed by hong
      * add yolo box kernel; test=develop
      
      * fix compile error; test=develop
      faece382
    • Enable eager model test (#40154) · 880dec0f
      Committed by hong
      * enable eager model; test=develop
      
      * set bs = 5; test=develop
      880dec0f
    • Add digamma abs trunc yaml (#40024) · 0bfba16b
      Committed by hong
      * add digamma, abs, trunc; test=develop
      
      * fix bug and add diagonal; test=develop
      
      * add name coverter; test=develop
      
      * update tracer.py; test=develop
      
      * add test case; test=develop
      
      * fix bugs; test=develop
      0bfba16b
    • [paddle-inference] support setting fully connected in multi-head attention static shape branch to int8 (#39660) · 8dbfc2ae
      Committed by ceci3
      
      * fix inference int
      
      * update
      
      * add unittest
      8dbfc2ae
    • add communication api for ProcessGroupGloo (#40100) · 5435459a
      Committed by lilong12
      * add pg_gloo apis
      5435459a
    • add eager test in rnn and fc; test=develop (#40149) · c47ae621
      Committed by hong
      c47ae621
    • Move conv to pten (#39354) · d50fb43e
      Committed by hong
      * move conv to pten
      
      * move conv to pten; test=develop
      
      * fix bug;
      
      * add conv cudnn impl; test=develop
      
      * update
      
      * update operator; test=develop
      
      * fix bug; test=develop
      
      * move operator and prepared_operator to develop; test=develop
      
      * resolve conflict; test=develop
      
      * remove useless code;test=develop
      
      * add depency ; test=develop
      
      * fix bug;
      
      * add sig.cc ; test=develop
      
      * fix use_op error; test=develop
      
      * fix bug; test=develop
      
      * fix bug; test=develop
      
      * add conv3d register; test=develop
      
      * fix star gan and conv_nn_grad test failed; test=develop
      
      * add header; test=develop
      
      * manually recover to develop;
      
      * resolve conflict; test=develop
      
      * remove useless code
      
      * fix bug;
      
      * remove conv2d_cudnn; test=develop
      
      * fix bugs; test=develop
      
      * fix cpu rocm compile bugs; test=develop
      
      * fix blas error; test=develop
      
      * fix compile bug; test=develop
      
      * fix windows compile error; test=develop
      
      * fix windows error; test=develop
      
      * resolve conflict; test=develop
      d50fb43e
  6. 03 March 2022, 7 commits
    • EmbEltwiseLayernorm fix (#40015) · c3f3643b
      Committed by wenbin
      * emb fix
      
      * fix trt6 compile
      
      * fix half
      
      * absolute error fix
      c3f3643b
    • add communication api for ProcessGroupNCCL (#40097) · b565b349
      Committed by lilong12
      b565b349
    • change_ASP_sharding_option (#40028) · 815f7a67
      Committed by Baibaifan
      815f7a67
    • Support slim eager (#39874) · da47544c
      Committed by Jiabin Yang
      * eager, test=develop
      
      * fix bug, test=develop
      
      * eager, test=develop
      
      * merge legacy to fluid
      
      * eager, test=develop
      
      * eager, test=develop
      
      * Refactor TensorAdd func by template and remove gradient_accumulation in eager
      
      * Remove needless target name
      
      * eager, test=develop
      
      * eager, test=develop
      
      * Use overload instead of template
      
      * Remove legacy code
      
      * Remove legacy code
      
      * selectedrows, test=develop
      
      * Remove DataType test
      
      * eager, test=develop
      
      * eager, test=develop
      
      * support gan, test=develop
      
      * Using Tensor directly instead of using EagerTensor
      
      * support gradient_accumulation
      
      * make test_imperative_lod_tensor_to_selected_rows longer
      
      * make test_imperative_lod_tensor_to_selected_rows longer
      
      * refine code
      
      * ptb, test=develop
      
      * Rename all EagerTensor to Tensor
      
      * Rename some EagerTensor to Tensor
      
      * rename EagerTensor to EagerVariable
      
      * eager, test=develop
      
      * eager, test=develop
      
      * eager, test=develop
      
      * eager, test=develop
      
      * add more test
      
      * eager, test=develop
      
      * Support copiable selected rows and merge develop
      
      * save load, eager, test=develop
      
      * save load, eager, test=develop
      
      * refine, test=develop
      
      * remove useless _set_value method
      
      * refine, test=develop
      
      * refine, test=develop
      
      * revert static_runner, test=develop
      
      * EagerTensor to Tensor, test=develop
      
      * refine, test=develop
      
      * refine, test=develop
      
      * clear grad, test=develop
      
      * merge, develop
      
      * merge, develop
      
      * merge, test=develop
      
      * merge, test=develop
      
      * Support quant and part of slice
      
      * support legacy static save
      
      * extend slim tests time
      
      * remove imperative on inference
      
      * remove imperative on inference
      
      * merge develop
      
      * fix typo
      
      * fix typo
      
      * split slice related code into 2 part for imperative and eager
      
      * split slice from inference
      
      * split slice from inference
      
      * fix test_tensor_register_hook
      Co-authored-by: Wang Huan <wanghuan29@baidu.com>
      Co-authored-by: Weilong Wu <veyron_wu@163.com>
      Co-authored-by: wanghuancoder <wanghuancoder@163.com>
      da47544c
    • Move bn to pten (#39347) · ebd0f512
      Committed by hong
      * add bn cpu version; test=develop
      
      * move batch norm to pten
      
      * move batch norm to pten; test=develop
      
      * fix bug; test=develop
      
      * fix func::transpose depend bug; test=develop
      
      * fix compile bugs; test=develop
      
      * fix use_op batch_norm bug; test=develop
      
      * fix cudnn bn add relu test; test=develop
      
      * fix pten context build and double grad bug; test=develop
      
      * remove useless code; test=develop
      
      * add batch norm gpu fp16 support; test=develop
      
      * fix test bn op bug; test=develop
      
      * remove output dtype set; test=develop
      
      * fix bug; test=develop
      
      * fix bug; test=develop
      
      * fix apply pass to program bug; test=develop
      
      * revert to develop; test=develop
      
      * fix rocm bug; test=develop
      
      * revert operator to develop; test=develop
      
      * fix pre_commit; test=develop
      
      * fix static check error; test=develop
      
      * resolve conflict; test=develop
      
      * analyze batch norm bug;
      
      * revert batch norm op
      
      * resolve conflict
      
      * fix nan inf and speed bug; test=develop
      
      * fix bug; test=develop
      
      * fix error; test=develop
      
      * test expand op; test=develop
      
      * fix bug; test=develop
      
      * resolve conflict
      
      * resolve conflict; test=develop
      
      * polish code; test=develop
      
      * polish code; test=develop
      
      * change mutable data to ctx alloc; test=develop
      
      * make format same with ci; test=develop
      
      * fix format error with ci; test=develop
      ebd0f512
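The batch_norm kernel being migrated in the commit above normalizes each channel by its statistics. A minimal pure-Python sketch of inference-mode BN over one channel (names like `batch_norm_1d` are illustrative, not the migrated kernel's API):

```python
import math

def batch_norm_1d(x, gamma, beta, eps=1e-5):
    # Batch norm over a single channel's values:
    # y = gamma * (x - mean) / sqrt(var + eps) + beta
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    inv_std = 1.0 / math.sqrt(var + eps)
    return [gamma * (v - mean) * inv_std + beta for v in x]

y = batch_norm_1d([1.0, 2.0, 3.0, 4.0], gamma=1.0, beta=0.0)
```

With gamma=1 and beta=0 the output is zero-mean and (up to eps) unit-variance, which is what the fp16/double-grad fixes in this commit must preserve.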
    • Add the implementation of Gloo for ProcessGroup (#39892) · c16f85f9
      Committed by lilong12
      * add pg_gloo
      c16f85f9
  7. 02 March 2022, 3 commits
    • [MLU] add mlu ci script (#39805) · a8e02ef1
      Committed by fwenguang
      * [MLU] add mlu ci script
      
      * Update CMakeLists.txt
      a8e02ef1
    • add check for backward hook (#40041) · 1980e33a
      Committed by Leo Chen
      * add check for backward hook
      
      * refine ut
      1980e33a
    • Move transpose to pten (#39327) · 7a857924
      Committed by hong
      * migrate transpose to pten, cpu kernel only; test=develop
      
      * fix bug; test=develop
      
      * add transpose cuda api
      
      * bug fix;
      
      * fix bugs
      
      * fix bugs; test=develop
      
      * bug fix;
      
      * move transpose to pten; test=develop
      
      * fix bug; test=develop
      
      * fix bugs; test=develop
      
      * add transpose grad fp16 support; test=develop
      
      * fix bug; test=develop
      
      * fix npu bug; test=develop
      
      * fix numel = 0 bug; test=develop
      
      * add fp16 support; test=develop
      
      * fix data type register bug; test=develop
      
      * fix transpose bug; test=develop
      
      * update transpose
      
      * fix transpose bug; test=develop
      
      * remove useless code; test=develop
      
      * remove useless code; test=develop
      
      * fix transpose alias bug; test=develop
      
      * polish code; test=develop
      
      * resolve conflict; test=develop
      
      * resolve conflict; test=develop
      
      * recover prepared operator; test=develop
      
      * fix bug; test=develop
      
      * polish code; test=develop
      
      * fix bug; test=develop
      
      * fix bug; test=develop
      7a857924