- 07 Mar 2022, 1 commit

Submitted by Ming-Xu Huang
* Added cuBlasLtHandle_t to the device context.
* Added fused_gemm_epilogue op.
  1. Added fused_gemm_epilogue op to leverage the cuBlasLt epilogue.
  2. Supports fusing Act(X*Y + bias), where X's dims >= 2 and Y's dims must be 2.
  3. Act currently only supports ReLU (GeLU will be added in the future).
* Added UT for the fused_gemm_epilogue op.
* Added LinearAct pattern.
  1. Added LinearAct to graph_pattern_detector.* to define the pattern above.
  2. LinearAct is used to detect act(elementwise_add(matmul_v2(x, w), bias)).
  3. act currently only supports ReLU (GeLU will be supported in the future).
* Added FuseGemmEpiloguePass.
  1. Added FuseGemmEpiloguePass to handle nn.Linear + Act{ReLU} fusion (GeLU will be supported in the future).
  2. Only matmul_v2 coming from nn.Linear is supported.
* Added pybind to BuildStrategy.fuse_gemm_epilogue_.
* Added UT for fuse_gemm_epilogue_pass.
* GeLU support and EpilogueSingleton.
  1. Added GeLU support to the fused_gemm_epilogue op.
  2. Added EpilogueSingleton to cache the auxiliary pointer.
  3. Added related UTs.
* Renamed cublaslt_epilogue_op to gemm_epilogue_op.*.
* Added both train and infer patterns to LinearAct.
  1. Added support for the forward graph with grad_ops linked to LinearAct.
  2. Adapted fuse_gemm_epilogue_pass to the above modification.
* Changed the CUDA requirement from 11.4 to 11.6 for fuse_gemm_epilogue_pass.
* Added identity activation support to gemm_epilogue_op.
* Added Linear fusion (matmul_v2 + ele_add).
  1. Added the matmul_v2 + ele_add pattern to LinearActPattern.
  2. Added matmul_v2 + ele_add support to fuse_gemm_epilogue_pass.
* Renamed gemm_epilogue_op.* to fused_gemm_epilogue_op.*.
* Added the fused_gemm_epilogue_grad op to support backward epilogue fusion, with UTs.
* Changed an attribute name in fused_gemm_epilogue_grad_op for clarity.
* Allowed DX and DBias to be dispensable in the fused_gemm_epilogue_grad op.
* Added ElementwiseAdd+Matmul+Act graph pattern detection.
* Fused the backward of Linear(Act(x)).
  1. Added a backward fusion pass for Linear(Act(x)).
  2. Added a backward fusion pass for Linear(x).
* Added UTs for the backward fusion of Linear(Act(x)).
* Completed the argument documentation of fused_gemm_epilogue_op.
* Made arguments of some functions pass by reference.
* Modified code per review comments.
  1. Made arguments of some functions pass by reference.
  2. Removed redundant code.
  3. Followed the Google code style.
* Made the 'const' code style consistent.
* Fixed the random seed of Python UTs.
* Set compile-time constraints for cuBlasLt.
  1. Require CUDA 11.6+.
  2. Remove fuse_gemm_epilogue related tests when CUDA < 11.6.
* Code review from Paddle.
  1. Renamed the argument is_first_gemm to without_x_gradient for clarity.
  2. Applied PADDLE_THROW in fused_gemm_epilogue_op.
* Removed EpilogueSingleton; applied ReserveSpace instead to pass auxiliary pointers between FWD and BWD.
* Fixed a logical error and enhanced UTs.
  1. Added act-op count checking in UTs.
  2. Fixed an issue when fusing the backward of ReLU(Linear(X)).
  3. TODO: solve GeLU fusion issues.
* Fixed Linear and GeLU fusion issues.
  1. Modified the graph pattern detection to fit Linear with either GeLU or ReLU.
  2. Modified the data range in UTs to allow negative values.
* Removed fused_gemm_epilogue_op.h.
* Renamed namespace pten to phi.
* Renamed arguments in fused_gemm_epilogue_op: bias -> Bias, out -> Out, reserve_space -> ReserveSpace.
* Changed EpiloguePassActivationCache into a local variable.
  1. Removed the singleton in EpiloguePassActivationCache.
  2. Made EpiloguePassActivationCache an argument to each pass function.
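For reference, the pattern this fusion targets is the Act(X*Y + bias) epilogue produced by nn.Linear followed by an activation. Below is a minimal Python sketch of the unfused computation and of enabling the pass; the tensor shapes are illustrative, and the `fuse_gemm_epilogue` BuildStrategy attribute name is an assumption inferred from the pybind entry mentioned above (CUDA 11.6+ required per the log):

```python
import paddle
import paddle.nn.functional as F

# Unfused reference pattern that the pass rewrites into a single cuBlasLt call:
# Act(matmul_v2(x, w) + bias), with x rank >= 2, w rank == 2, Act in {ReLU, GeLU, identity}.
x = paddle.randn([16, 128, 768])   # illustrative activation shape
w = paddle.randn([768, 3072])      # nn.Linear weight
b = paddle.randn([3072])           # nn.Linear bias
out = F.relu(paddle.matmul(x, w) + b)

# Sketch of turning the pass on for static-graph training; the attribute name is an
# assumption based on "BuildStrategy.fuse_gemm_epilogue_" in the commit message.
build_strategy = paddle.static.BuildStrategy()
build_strategy.fuse_gemm_epilogue = True
```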

- 04 Mar 2022, 4 commits

Submitted by Feiyu Chan
move cpu_vec.h to phi/kernels/funcs.

Submitted by Leo Chen
* Clean distribution_helper, index_impl, and aligned_vector code in fluid.
* Fix conflicts.

Submitted by chentianyu03
* Move reduce GPU impl funcs into pten/kernels/funcs.
* Change the reduce header name and namespace.
* Fix spelling errors.
* Change mutable_data to dev_ctx.Alloc.
* Change place to device context.
* Format code style.
* Fix build errors.
* Fix conflict.

Submitted by hong
* Move conv to pten; test=develop.
* Add conv cuDNN impl; test=develop.
* Update operator; test=develop.
* Move operator and prepared_operator to develop; test=develop.
* Resolve conflicts; test=develop.
* Remove useless code; test=develop.
* Add dependency; test=develop.
* Add sig.cc; test=develop.
* Fix use_op error; test=develop.
* Add conv3d registration; test=develop.
* Fix star gan and conv_nn_grad test failures; test=develop.
* Add header; test=develop.
* Manually recover to develop.
* Remove conv2d_cudnn; test=develop.
* Fix CPU and ROCm compile bugs; test=develop.
* Fix BLAS error; test=develop.
* Fix Windows compile errors; test=develop.
* Fix miscellaneous bugs and compile issues; test=develop.

- 03 Mar 2022, 2 commits

Submitted by xiongkun
* Add pad forward.
* Fix errors.
* Transfer pad and pass test_pad_op.

Submitted by hong
* Add BN CPU version; test=develop.
* Move batch norm to pten; test=develop.
* Fix func::transpose dependency bug; test=develop.
* Fix compile bugs; test=develop.
* Fix use_op batch_norm bug; test=develop.
* Fix cudnn bn add relu test; test=develop.
* Fix pten context build and double-grad bugs; test=develop.
* Remove useless code; test=develop.
* Add batch norm GPU fp16 support; test=develop.
* Fix test bn op bug; test=develop.
* Remove output dtype setting; test=develop.
* Fix apply-pass-to-program bug; test=develop.
* Fix ROCm bug; test=develop.
* Revert operator to develop; test=develop.
* Fix pre-commit and static check errors; test=develop.
* Analyze batch norm bug; revert batch norm op.
* Fix NaN/Inf and speed bugs; test=develop.
* Test expand op; test=develop.
* Resolve conflicts; test=develop.
* Polish code; test=develop.
* Change mutable_data to ctx alloc; test=develop.
* Make format consistent with CI; test=develop.

- 02 Mar 2022, 1 commit

Submitted by Feiyu Chan
* Move sequence2batch.
* Move lstm and gru.
* Add the phi/kernels directory to the exclusion list to stop using hipcc to compile non-.cu files in it.

- 28 Feb 2022, 1 commit

Submitted by Chen Weihang
* Rename pten_utils to phi_utils.
* Rename the pten_utils target.
* Rename Pten to Phi; replace pten with phi.
* Resolve conflicts.

- 25 Feb 2022, 1 commit

Submitted by Chen Weihang
* Support moving cuDNN kernels.
* Polish cmake rules.
* Add unittest for coverage.
* Remove the original kernel.
* Remove the softmax cuDNN kernel; fix softmax test failure.
* Fix NPU function error.
* Resolve conflicts.
* Rename GPU DNN kernels; fix naming-rule error.
* Fix compile error.
* Update the fp16 namespace.

- 22 Feb 2022, 1 commit

Submitted by xiongkun
* Change Vector to std::vector and provide the MixVector class as a helper wrapper.
* Solve the multi-GPU hang problem.
* Remove duplicate template instantiations.
* Copy the vector to CPU; add CopyToCPU.
* Final version: fix the all-reduce problem.
* Remove the MixVector dependence.
* Fix code per CI.

- 20 Feb 2022, 1 commit

Submitted by Chen Weihang
* Rename the pten dir to phi.
* Rename the namespace to phi.
* Rename the infrt pten dir to phi.
* Resolve conflicts.
* Rename pten to phi in cmake.
* Revert all infrt changes.
* Change needed files.
* Fix infrt failures.
* Fix inference failures.

- 19 Feb 2022, 1 commit

Submitted by Aurelius84
* Unify paddle/pten::framework::ddim into pten::ddim.
* Fix paddle namespace; compile successfully.
* Fix NPU source file.
* Fix TensorRT compiler error.
* Fix conflicts in test, MLU, and CINN header files.
* Fix remaining conflicts.

- 18 Feb 2022, 1 commit

Submitted by Feiyu Chan
* Move BLAS-related files.
* Move LAPACK-related files.

- 15 Feb 2022, 2 commits

Submitted by Feiyu Chan
* Move paddle/operators/math/functors.h.
* Move paddle/operators/math/compound_functors.h.

Submitted by Aurelius84
* #1: Migrate dist-related type() -> dtype().
* Move datatype functions from pten -> fluid/framework.
* Change type() in imperative into convert(dtype()).
* Change xx_tensor->type into xx_tensor->dtype, and xx_tensor.type into xx_tensor.dtype.
* Change the set_type interface and its callers.
* Fix mutable_data(place, dtype()).
* Change callers of mutable_data in pten, distributed, fluid/framework, imperative, and inference.
* Transfer MakePenScalarArray, MakePtenScalar, ResetHolderWithType.
* Pass compilation; the next step is removing VarType from Pten.
* Remove VarType from pten; succeeds on Linux, other platforms are the next task.
* Fix reset conversion, << in tensor_utils.cc, type->dtype, unittest, tensor init constructor, and DataTypeSize for BFloat16.
* Fix NPU compile errors; compile NPU successfully.
* Fix conflicts and code style.

Co-authored-by: xiongkun <xiongkun03@baidu.com>

- 11 Feb 2022, 1 commit

Submitted by Feiyu Chan
* Move operators/math/math_function_* to pten/kernels/funcs.
* Change the namespace from `paddle::operators::math` to `pten::funcs`.

- 08 Feb 2022, 1 commit

Submitted by Yiqun Liu

- 06 Feb 2022, 1 commit

Submitted by Wilber

- 29 Jan 2022, 1 commit

Submitted by Li Min
* Add fp16 support for scale/bias in the fused_layernorm_residual_dropout_bias op.
* Optimize the layer_norm forward kernel when cols is 1024.
* Optimize the layer_norm backward kernel when cols is 1024; polish the layer_norm_bwd_1024 kernel.
* Limit ln_bwd_1024_kernel to PADDLE_WITH_CUDA.
* Fix a double-type compile error.
* Add the layer_norm backward optimization to the fused_dropout_add_ln op.
* Remove useless code, apply review modifications, and polish.
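The specialized kernels mentioned above target the case where the normalized width (cols) is exactly 1024. A minimal sketch of a layer_norm call that exercises that shape; the input dimensions are illustrative assumptions:

```python
import paddle
import paddle.nn.functional as F

# Illustrative shapes: the optimized forward/backward paths described above apply
# when the normalized (last) dimension is 1024.
x = paddle.randn([8, 512, 1024])
x.stop_gradient = False
weight = paddle.ones([1024])   # scale
bias = paddle.zeros([1024])

y = F.layer_norm(x, normalized_shape=[1024], weight=weight, bias=bias, epsilon=1e-5)
y.sum().backward()             # the 1024-column backward kernel is also specialized
```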

- 26 Jan 2022, 2 commits

Submitted by Leo Chen
* Update cmake files to remove fluid kernels.
* Add pten declaration.h where pybind.h is used.
* Fix sync_bn and tensorrt_engine; refine detection_library.
* Fix interpreter_core.
* Support eager legacy mode and fit it for pten.
* Fall back to CPU if no kernel is found; refine the fallback logic (including for XPU).
* Fit operator.run() and the new executor.
* Add REGISTER_OP_WITHOUT_GRADIENT.
* Un-cache pt_kernel_context.
* Fix compiling with ON_INFER, cuDNN, MKLDNN, isfinite_v2, op_device, XPU, and device-context issues.
* Add data_transfer; fix PreparePtenData and the CPU context.
* Merge develop; refine code format.

Submitted by Li Min
* Optimize layer_norm fwd when cols is 1024.

- 25 Jan 2022, 1 commit

Submitted by Weilong Wu
* Added selected_rows and rw_lock to pten.
* Renamed the unit test target to fix CI.
* Removed class SelectedRows in Fluid, changed the include/cmake relationships, and used pten::SelectedRows in Fluid.
* Removed rw_lock.h and rw_lock_test.cc in fluid; used pten::RWLock and pten::AutoRDLock.
* Fixed CI, including NPU CI.

- 24 Jan 2022, 1 commit

Submitted by Jacek Czaja
* More unlikely.
* Removed a redundant definition.
* Compilation fixes, including for Windows.
* Miscellaneous fixes.

- 21 Jan 2022, 1 commit

Submitted by Weilong Wu

- 18 Jan 2022, 1 commit

Submitted by Zhanlue Yang
* Merged LoDTensor with Tensor; test=allcases.
* Patched Python-level LoDTensor.
* Merged Tensor into DenseTensor.
* Fixed namespace issues; test=allcases.
* Fixed merge, inference, and NPU test issues.

- 17 Jan 2022, 1 commit

Submitted by Wilber
* Add the pten::Place data structure.
* Alias platform::Place = pten::Place.
* Remove BOOST_GET_CONST for CPUPlace and GPUPlace.
* Remove boost_get for XPU, NPU, MLU, and IPU.
* Compilation passes on CPU and GPU.
* Fix compile problems, including NPU compile.
* Fix CI problems, including the eager test.

- 12 Jan 2022, 1 commit

Submitted by limingshu
* Fix a wrong filename and a misspelled name.
* Fix the GPU config wrapper.
* Fix GpuLauchConfig1D API bugs.
* Change the config for dropout grad.
* Fix bugs; modifications according to PR review advice.

- 07 Jan 2022, 1 commit

Submitted by Li Min
* Add fp16 support for scale/bias in the fused_layernorm_residual_dropout_bias op.

- 28 Dec 2021, 1 commit

Submitted by Li Min

- 24 Dec 2021, 1 commit

Submitted by chentianyu03
* Combine the reduce_cuda code.
* Support float16 in pten reduce_mean.
* Replace the ReduceCudaKernel impl with the pten reduce impl.
* Move reduce funcs into reduce_cuda_impl; remove unused code and headers.
* Move GetReduceDim into reduce_cuda_impl; recover GetReduceDim in reduce_op.h.
* Add a new dispatch macro; later rename it and format code style.
* Fix pool op output not being initialized, which caused an error when transforming to pten::DenseTensor.
* Remove the reduce_functor_op.h file.

- 17 Dec 2021, 1 commit

Submitted by niuliling123

- 16 Dec 2021, 1 commit

Submitted by niuliling123
* Add the transformop parameter in TensorReduceFunctorImpl

- 03 Dec 2021, 1 commit

Submitted by ronnywang
* Refine the structure for CUDA and ROCm.
* Updates.

- 29 Nov 2021, 1 commit

Submitted by piotrekobiIntel

- 23 Nov 2021, 1 commit

Submitted by Li Min
Add support for bias being None in the fused_attention op.

- 19 Nov 2021, 1 commit

Submitted by wuhuanzhou
* GeneratePass supports attribute conditions and mapping; test=develop.
* Add the fuse_resnet_unit pass; test=develop.
* Fix coverage and CI errors; test=develop.
* Fix a unittest error when compiling without CUDA; test=develop.
* Fix a static CI error; test=develop.
* Require the kernel size to equal 1; test=develop.

- 17 Nov 2021, 1 commit

Submitted by piotrekobiIntel
* Change the first batch of mkldnn headers and namespace names to dnnl.
* Revert changes to tensor.h, which require approval.
* Format changes with pre-commit.
* Add int32 tests; fix them and call GetDataFromTensor for int32.
* Fix tests.

- 16 Nov 2021, 1 commit

Submitted by Li Min
The implementation of fused_attention_op uses bias_add, which is implemented with the kernel primitives. The WriteData API of the kernel primitives and its internal implementation later changed, moving the out-of-bounds check into a template parameter. This made the call take the wrong branch, causing out-of-bounds writes that corrupted the contents of other GPU memory. In practice, test_fused_attention_op_api.py rarely fails on a single run, but when run repeatedly in a loop with inputs of different shapes the computed results are wrong; the failure is intermittent and the bug is hard to notice.

- 12 Nov 2021, 1 commit

Submitted by zhangkaihuo
* Fix bugs:
  1. attention: set the default value of attn_dropout_rate to None.
  2. ffn: add an activation parameter.