- March 21, 2022: 1 commit

Submitted by Allen Guo
* sync changes
* copy sOpNamescope
* fix UTs
* add authors
* fix code-format
* fix compile error
* add comments for feed_op

Co-authored-by: Xiaobing Wang <xiaobingw@graphcore.ai>
Co-authored-by: Allen Guo <alleng@graphcore.ai>
Co-authored-by: Zhixin Yao <zhixiny@graphcore.ai>
Co-authored-by: Zhaorui Chen <zhaoruic@graphcore.ai>
Co-authored-by: Han Zhao <hanzhao@graphcore.ai>
- March 14, 2022: 1 commit

Submitted by Lijunhui
[KP] Add unittests for brelu, ceil, celu, elu, floor, hard_shrink, hard_sigmoid, log1p, logsigmoid, relu6, silu, soft_relu, softsign, swish (#40448)
* solve unexecuted UT
* add 24 activation op UTs
* append swish & thresholded_relu to kpfirst_list
* rm thresholded_relu
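For reference, the formulas behind a few of the activations covered by these unittests can be sketched in NumPy. This is only an illustration of the math; the default `alpha`, `slope`, `offset`, and `beta` values below are assumptions, not necessarily the kernels' defaults.

```python
import numpy as np

def celu(x, alpha=1.0):
    # celu(x) = max(0, x) + min(0, alpha * (exp(x / alpha) - 1))
    return np.maximum(0.0, x) + np.minimum(0.0, alpha * np.expm1(x / alpha))

def hard_sigmoid(x, slope=0.2, offset=0.5):
    # Piecewise-linear approximation of the sigmoid, clipped to [0, 1].
    return np.clip(slope * x + offset, 0.0, 1.0)

def softsign(x):
    # x / (1 + |x|): a bounded, smooth alternative to tanh.
    return x / (1.0 + np.abs(x))

def swish(x, beta=1.0):
    # x * sigmoid(beta * x)
    return x / (1.0 + np.exp(-beta * x))

x = np.linspace(-3.0, 3.0, 7)
print(celu(x), hard_sigmoid(x), softsign(x), swish(x), sep="\n")
```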
- March 11, 2022: 1 commit

Submitted by houj04
- March 10, 2022: 2 commits

Submitted by Lijunhui
Submitted by z8hanghuan
* add tril_triu for xpu, *test=kunlun
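As a reminder of the semantics these XPU kernels implement, tril/triu keep the lower/upper triangular part of a matrix relative to a diagonal offset. A NumPy sketch of the expected behaviour (not the kernel code itself):

```python
import numpy as np

x = np.arange(1, 13, dtype=np.float32).reshape(3, 4)

# Lower triangle: zero out everything above the k-th diagonal.
lower = np.tril(x, k=0)
# Upper triangle: zero out everything on and below the k-th diagonal when k=1.
upper = np.triu(x, k=1)

print(lower)
print(upper)
```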
- March 8, 2022: 1 commit

Submitted by lilong12
* add pg_hccl
- March 7, 2022: 1 commit

Submitted by Ming-Xu Huang
* Added cublasLtHandle_t to the device context.
* Added fused_gemm_epilogue op:
  1. Leverages the cuBLASLt epilogue.
  2. Supports fusing Act(X*Y + bias); X's dims must be >= 2 and Y's dims must be 2.
  3. Act currently only supports ReLU (GeLU will be added in the future).
* Added UT for the fused_gemm_epilogue op.
* Added LinearAct pattern:
  1. Added LinearAct to graph_pattern_detector.* to define the pattern above.
  2. LinearAct detects act(elementwise_add(matmul_v2(x, w), bias)).
  3. act currently only supports ReLU (GeLU will be supported in the future).
* Added FuseGemmEpiloguePass:
  1. Handles nn.Linear + Act{ReLU} fusion (GeLU will be supported in the future).
  2. Only supports matmul_v2 coming from nn.Linear.
* Added pybind for BuildStrategy.fuse_gemm_epilogue_.
* Added UT for fuse_gemm_epilogue_pass.
* GeLU support and EpilogueSingleton:
  1. Added GeLU support to the fused_gemm_epilogue op.
  2. Added EpilogueSingleton to cache the auxiliary pointer.
  3. Added related UTs.
* Renamed cublaslt_epilogue_op to gemm_epilogue_op.*.
* Added both train and infer patterns to LinearAct:
  1. Added support for forward graphs whose grad ops link to LinearAct.
  2. Adjusted fuse_gemm_epilogue_pass for the above.
* Changed the CUDA requirement from 11.4 to 11.6 for fuse_gemm_epilogue_pass.
* Added identity-activation support to gemm_epilogue_op.
* Added Linear fusion (matmul_v2 + elementwise_add):
  1. Added the matmul_v2 + elementwise_add pattern to LinearActPattern.
  2. Added matmul_v2 + elementwise_add support to fuse_gemm_epilogue_pass.
* Renamed gemm_epilogue_op.* to fused_gemm_epilogue_op.*.
* Added fused_gemm_epilogue_grad op to support backward epilogue fusion.
* Added UTs for fused_gemm_epilogue_grad_op.
* Changed an attribute name in fused_gemm_epilogue_grad_op for clarity.
* Allowed DX and DBias to be dispensable in the fused_gemm_epilogue_grad op.
* Added ElementwiseAdd+Matmul+Act graph pattern detection.
* Fused the backward of Linear(Act(x)):
  1. Added a backward fusion pass for Linear(Act(x)).
  2. Added a backward fusion pass for Linear(x).
* Added UTs for the backward fusion of Linear(Act(x)).
* Completed the documentation of fused_gemm_epilogue_op's arguments.
* Made some function arguments pass by reference.
* Addressed review comments:
  1. Made some function arguments pass by reference.
  2. Removed redundant code.
  3. Followed Google code style.
* Made 'const' usage consistent.
* Fixed the random seed of the Python UTs.
* Set compile constraints for cuBLASLt:
  1. Require CUDA 11.6+.
  2. Remove fuse_gemm_epilogue related tests when CUDA < 11.6.
* Code review from Paddle:
  1. Renamed the argument is_first_gemm to without_x_gradient for clarity.
  2. Applied PADDLE_THROW in fused_gemm_epilogue_op.
* Removed EpilogueSingleton; applied ReserveSpace instead to pass auxiliary pointers between FWD and BWD.
* Fixed a logical error and enhanced UTs:
  1. Added act-op count checking in UTs.
  2. Fixed an issue when fusing the backward of ReLU(Linear(X)).
  3. TODO: solve GELU fusion issues.
* Fixed Linear and GeLU fusion issues:
  1. Modified the graph pattern detection to fit Linear with either GeLU or ReLU.
  2. Modified the data range in UTs to allow negative values.
* Removed fused_gemm_epilogue_op.h.
* Renamed namespace pten to phi.
* Renamed arguments in fused_gemm_epilogue_op: bias -> Bias, out -> Out, reserve_space -> ReserveSpace.
* Changed EpiloguePassActivationCache to a local variable:
  1. Removed the singleton in EpiloguePassActivationCache.
  2. Made EpiloguePassActivationCache an argument to each pass function.
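The pattern this pass targets is the standard linear-plus-activation computation. A minimal NumPy sketch of the unfused math that fused_gemm_epilogue collapses into a single cuBLASLt call (an illustration only, not the Paddle API; the GeLU branch uses the common tanh approximation):

```python
import numpy as np

def linear_act_reference(x, w, bias, act="relu"):
    """Unfused reference for Act(x @ w + bias), the pattern the pass fuses."""
    out = x @ w + bias                      # matmul_v2 + elementwise_add
    if act == "relu":
        out = np.maximum(out, 0.0)          # ReLU epilogue
    elif act == "gelu":
        # tanh approximation of GeLU
        out = 0.5 * out * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (out + 0.044715 * out**3)))
    return out

x = np.random.randn(8, 16).astype(np.float32)   # X dims >= 2
w = np.random.randn(16, 32).astype(np.float32)  # Y dims == 2
b = np.random.randn(32).astype(np.float32)
print(linear_act_reference(x, w, b, act="relu").shape)
```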
- March 3, 2022: 2 commits

Submitted by ronnywang
Submitted by zhangxiaoci
- March 2, 2022: 2 commits

Submitted by zhangbo9674
* add softmax, log_softmax
* refine rocm
* refine unittest
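For context, log_softmax is usually evaluated in the numerically stable shifted form rather than as log(softmax(x)). A NumPy sketch of that reference math (general illustration, not the kernel code):

```python
import numpy as np

def log_softmax(x, axis=-1):
    # Subtract the per-row max so exp() never overflows.
    shifted = x - np.max(x, axis=axis, keepdims=True)
    return shifted - np.log(np.sum(np.exp(shifted), axis=axis, keepdims=True))

logits = np.array([[1.0, 2.0, 3.0], [1000.0, 1000.0, 1000.0]], dtype=np.float32)
print(log_softmax(logits))  # stays finite even for the large second row
```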
Submitted by Lijunhui
- March 1, 2022: 2 commits

Submitted by Allen Guo
Submitted by zhangbo9674
* add scale, gather, sum
* refine CUDA_ATOMIC_WRAPPER ADD for bf16
* add gather unittest
* solve conflict
* add scale unittest
* add sum unittest
* solve conflict
* refine gather unittest
* refine unittest
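As a quick reminder of what the three ops compute (bf16 only changes the storage type, not the math), here is a NumPy sketch of their reference semantics. The `scale * x + bias` formula is an assumption about the scale op's default form, and `gather_ref` assumes indexing along axis 0.

```python
import numpy as np

def scale_ref(x, scale=1.0, bias=0.0):
    # Assumed semantics: out = scale * x + bias
    return scale * x + bias

def gather_ref(x, index):
    # Pick rows of x along axis 0.
    return x[np.asarray(index)]

def sum_ref(inputs):
    # Elementwise sum of a list of same-shaped tensors.
    return np.sum(np.stack(inputs), axis=0)

x = np.arange(12, dtype=np.float32).reshape(4, 3)
print(scale_ref(x, scale=2.0, bias=1.0))
print(gather_ref(x, [0, 2]))
print(sum_ref([x, x, x]))
```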
- February 28, 2022: 2 commits

Submitted by Chen Weihang
* rename pten_utils to phi_utils
* rename pten_utils target
* rename Pten to Phi
* replace pten with phi
* resolve conflict
Submitted by Liu-xiandong
* [KP] Unify .cu and .xpu files with .kps files
* fix CI bug in GPU and modify the list
* fix conflict
* modify the date
- February 25, 2022: 1 commit

Submitted by Li Min
* Fix compile error on cuda_arch less than 700.
- February 24, 2022: 2 commits

Submitted by Chen Weihang
* rename pten to phi
* fix infrt compile failure
* resolve conflict
Submitted by Li Min
* optimize block config and fp16 atomicAdd perf for lookup_table_v2_grad.
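The grad kernel being tuned here performs a scatter-add of output gradients into the embedding table, which is why fp16 atomicAdd throughput matters when indices repeat. A NumPy sketch of that reference computation (illustration of the math only, with hypothetical helper names):

```python
import numpy as np

def lookup_table_grad_ref(ids, d_out, num_rows):
    """Reference gradient of an embedding lookup: rows with the same id accumulate."""
    emb_dim = d_out.shape[-1]
    d_table = np.zeros((num_rows, emb_dim), dtype=d_out.dtype)
    # np.add.at does the scatter-add that the CUDA kernel implements with atomicAdd.
    np.add.at(d_table, ids.reshape(-1), d_out.reshape(-1, emb_dim))
    return d_table

ids = np.array([1, 3, 1, 0])                  # repeated id 1 -> accumulation
d_out = np.ones((4, 8), dtype=np.float32)
print(lookup_table_grad_ref(ids, d_out, num_rows=5)[1])  # row 1 sums two gradients
```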
- February 23, 2022: 3 commits

Submitted by ShenLiang
* add processgroup_nccl
Submitted by mhhhh1
* [MLU] add cncl parallel context and mlu resource pool
* [MLU] fix the cncl_context_test
Submitted by zhangbo9674
* add elementwise_div
* refine rocm
* refine code
* refine op register
* solve conflict
* refine unittest
* refine unittest precision
* add rocm
- February 22, 2022: 2 commits

Submitted by zhangyikun02
Submitted by Allen Guo
- February 21, 2022: 1 commit

Submitted by z8hanghuan
* fix fill_constant bug, *test=kunlun
- February 20, 2022: 1 commit

Submitted by Chen Weihang
* rename pten dir to phi
* rename namespace to phi
* rename infrt pten dir to phi
* resolve conflict
* rename pten to phi in cmake
* revert all infrt change
* change needed files
* fix infrt failed
* fix inference failed
- February 19, 2022: 1 commit

Submitted by Aurelius84
* Unify paddle/pten::framework::ddim into pten::ddim
* fix paddle namespace
* compile successfully
* fix npu src file
* fix tensorrt compiler error
* fix test file conflict
* fix mlu file conflict
* fix cinn header file conflict
* fix conflicts
- February 18, 2022: 3 commits

- February 17, 2022: 1 commit

Submitted by houj04
* add softplus op for kunlun2. test=kunlun
* fix code style. test=kunlun
* add more test cases. test=kunlun
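Softplus is the smooth approximation of ReLU, log(1 + exp(x)), usually evaluated in a shifted form so large inputs do not overflow. A NumPy sketch of that reference math (Paddle's beta/threshold parameters are left out for brevity):

```python
import numpy as np

def softplus_ref(x):
    # Stable form of log(1 + exp(x)): max(x, 0) + log1p(exp(-|x|)).
    return np.maximum(x, 0.0) + np.log1p(np.exp(-np.abs(x)))

x = np.array([-50.0, -1.0, 0.0, 1.0, 50.0], dtype=np.float32)
print(softplus_ref(x))  # approaches 0 for very negative x, approaches x for large x
```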
- February 16, 2022: 1 commit

Submitted by Leo Chen
* pten matmul cuda kernel support bf16
* fix pten kernel name
* add matmul_grad bf16 kernel
* add emptylike bf16 kernel
* fix compile
* support rocm
* fix error
* fix rocm
* add bf16 header file
* fix compile
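For background, bfloat16 is simply the upper half of an IEEE float32 word (1 sign bit, 8 exponent bits, 7 mantissa bits), which is why adding bf16 support is largely conversion and type plumbing. A NumPy sketch of a truncation-based conversion (real kernels typically round to nearest even instead of truncating):

```python
import numpy as np

def float32_to_bf16_bits(x):
    # Keep the upper 16 bits of the float32 pattern: sign + 8-bit exponent + 7-bit mantissa.
    # (Truncation only; production code usually does round-to-nearest-even.)
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits >> 16).astype(np.uint16)

def bf16_bits_to_float32(b):
    # Put the stored 16 bits back into the top half of a float32 word.
    return (np.asarray(b, dtype=np.uint32) << 16).view(np.float32)

x = np.array([3.1415926, -0.001, 1024.5], dtype=np.float32)
roundtrip = bf16_bits_to_float32(float32_to_bf16_bits(x))
print(x, roundtrip)  # the roundtrip keeps roughly 3 significant decimal digits
```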
- February 15, 2022: 2 commits

Submitted by ronnywang
* [CustomRuntime] Add DeviceManager
* [CustomRuntime] Add DeviceInterface
* [CustomRuntime] Add Stream, Event, DeviceGuard, CallbackManager
* [CustomRuntime] Add plug-in device
* [CustomRuntime] Memory module support PluggableDevice
* [CustomRuntime] Add WITH_PLUGGABLE_DEVICE cmake option
* update
* [API] update API doc based on comments, test=develop

Co-authored-by: qili93 <qili93@qq.com>
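To make the shape of the plug-in model concrete, here is a purely hypothetical Python sketch of the kind of surface a pluggable backend registers. The actual commit implements this in C++ (DeviceManager/DeviceInterface); none of the method names below are taken from Paddle's API.

```python
from abc import ABC, abstractmethod

class PluggableDevice(ABC):
    """Hypothetical illustration of a custom-runtime backend interface."""

    @abstractmethod
    def device_count(self) -> int:
        """Number of visible devices for this backend."""

    @abstractmethod
    def create_stream(self, device_id: int):
        """Return an async execution queue bound to the device."""

    @abstractmethod
    def record_event(self, stream):
        """Mark a point in the stream for later synchronization."""

    @abstractmethod
    def allocate(self, device_id: int, nbytes: int):
        """Allocate device memory; the memory module dispatches here."""

    @abstractmethod
    def copy_h2d(self, device_id: int, dst, src, nbytes: int):
        """Host-to-device copy used by the runtime's memory module."""
```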
Submitted by Aurelius84
* #1 migrate dist-related type() -> dtype()
* move datatype function from pten -> fluid/framework
* change type() in imperative into convert(dtype())
* modify xx_tensor->type into xx_tensor->dtype
* change the set_type interface and the caller
* modify xx_tensor.type into xx_tensor.dtype
* fix mutable_data(place, dtype())
* change caller of mutable_data in pten and distributed
* change the caller of mutable_data in fluid/framework
* change the caller of mutable_data in the imperative directory
* mutable_data: inference
* update the call of mutable_data
* transfer MakePenScalarArray MakePtenScalar ResetHolderWithType
* pass the compile; the next step is to remove VarType in Pten
* fix all and remove VarType from pten; success on Linux, next task is the other platforms
* fix conflict with develop
* fix compile error
* Fix reset conversion
* fix conflict
* fix compile problem
* fix typo
* Fix << in tensor_utils.cc
* fix type -> dtype
* fix unittest
* fix tensor init constructor
* fix DataTypeSize for BFloat16
* fix code style
* fix npu compile error
* fix npu
* compile npu successfully
* fix conflict

Co-authored-by: xiongkun <xiongkun03@baidu.com>
- February 9, 2022: 1 commit

Submitted by qipengh
- February 8, 2022: 1 commit

Submitted by From00
* Rough implementation for experiment
* Support allocating CUDA managed memory
* Fix CI error
* Modify UT
* Check whether memory oversubscription is supported
* Fix ROCm compile error
* Fix UT cuda_managed_memory_test
* Set UT timeout to 40
* Add UT OOMExceptionTest
* Set UT timeout to 50
- February 7, 2022: 1 commit

Submitted by tanzhipeng
- February 6, 2022: 1 commit

Submitted by Wilber
- January 30, 2022: 1 commit

Submitted by mhhhh1
- January 29, 2022: 2 commits

Submitted by Liu-xiandong
* Add XPU compiler for paddle, test=develop
* clean up useless code
* add include path
* use clang compiler
* add xpu2.cmake
* XPU2 compiler passed
* update after pten
* combine the WITH_XPU and WITH_XPU2 options
* update the fuse operation in WITH_XPU and WITH_XPU2
* fix the merge error
* add run_kp_kernel flag
* fix prepared type_ bug
* reset the kernel_primitives
* delete useless comments
* fix the bug in WITH_XPU
* modify the ABI
* parameter automation in xpu compilation
* delete kps in cmake
Submitted by QingshuChen
* fix kunlun2 softmax unittest bug, *test=kunlun
* minor