- 06 Apr 2023, 1 commit

  Committed by Sonder:

  * add kernel functions
  * update kernel functions
  * update func parameters' name
  * create codes for gpu device
  * adjust file locations
  * fix include error
  * remove dependent files to phi/
  * restore fused_attention_op.cu
  * fix dependence errors
  * fix dependence errors
  * fix include error
  * fix all dependence errors [build success]
  * remove useless include
  * recover useless include
  * use phi::ToNCCLDataType
  * fix namespace
  * update new register code
  * fix error in fused_gemm_epilogue_utils
  * fix error in FusedAttentionKernel param
  * finish fused_attention register code [build success]
  * add paddle::optional
  * add sig file
  * fix build error
  * fix an include error
  * update CMakeList
  * fix parameter sequence
  * add include file
  * update #if before include
  * fix grammar error
  * update codes for DropoutParam
  * remove const cast
  * trans some fluid api to phi api
  * add #if
  * update test code
  * update test codes
  * recover test codes
  * trans fused_attention to fluid
  * move #endif to end
  * move #endif
  * delete useless files
  * use fused attention utils and recover random seed
  * remove fluid include in phi
- 23 Mar 2023, 1 commit

  Committed by sneaxiy:

  * remove fluid deps in fused_linear_param_grad_add_kernel
  * fix compile error
  * fix ut error
  * follow comments
- 22 Mar 2023, 1 commit

  Committed by Ruibiao Chen
- 20 Mar 2023, 1 commit

  Committed by limingshu:

  * optimization for fused linear op
  * fix code format
  * optimization for linear fused forward
  * merge with develop
  * fix bugs for gemm epilogue
  * package of cublaslt epilogue type with enum
  * final fix before code reviewing
  * fix missed fusedType typo
  * fix code according to review suggestions
  * fix windows ci error
  * change location of MatmulPlanner
  * add some changes for compiler error fix
- 26 Feb 2023, 1 commit

  Committed by Yiqun Liu:

  * Enable matmul + bias fusion in fused_gat_attention.
  * Add a variable to control whether to use fused matmul + bias.
- 07 Dec 2022, 1 commit

  Committed by 张春乔
- 28 Sep 2022, 1 commit

  Committed by Chen Weihang:

  * remove needless using tensor
  * remove needless using tensor
  * resolve conflict
  * replace tensor using
  * fix format error
  * revert needless changing
  * fix rocm and npu compile error
  * fix cinn compile error
  * fix format error
  * fix mkldnn format error
  * fix mkldnn format error
  * fix cinn compile error
  * fix cinn compile error
  * fix cinn compile error
  * resolve conflict
- 01 Aug 2022, 1 commit

  Committed by Leo Chen:

  * remove cudaDeviceContext
  * remove more template
  * fix rocm compile
  * remove alias name CUDADeviceContext
  * fix compile
  * fix tests
  * revert changes
- 26 Jun 2022, 1 commit

  Committed by Sing_chan
- 05 Jun 2022, 1 commit

  Committed by Sing_chan
- 30 May 2022, 1 commit

  Committed by crystal
- 24 May 2022, 1 commit

  Committed by YuanRisheng:

  * move grad_add
  * fix unittest bugs
  * fix compile bugs
- 20 Feb 2022, 1 commit

  Committed by Chen Weihang:

  * rename pten dir to phi
  * rename namespace to phi
  * rename infrt pten dir to phi
  * resolve conflict
  * rename pten to phi in cmake
  * revert all infrt change
  * change needed files
  * fix infrt failed
  * fix inference failed
- 18 Feb 2022, 1 commit

  Committed by Feiyu Chan:

  * move blas related files
  * move lapack related files
- 08 Feb 2022, 1 commit

  Committed by Yiqun Liu
- 06 Feb 2022, 1 commit

  Committed by Wilber
- 18 Jan 2022, 1 commit

  Committed by Zhanlue Yang:

  * Merged LoDTensor with Tensor, test=allcases
  * Patched python level LoDTensor
  * Patched python level LoDTensor
  * Merge Tensor into DenseTensor
  * Fixed namespace issues, test=allcases
  * Fixed merge issues
  * Fixed inference issues
  * Fixed NPU test issues
  * Fixed merge issues
- 16 Dec 2021, 1 commit

  Committed by niuliling123:

  * Add the transformop parameter in TensorReduceFunctorImpl
- 16 Nov 2021, 1 commit

  Committed by Li Min:

  The implementation of fused_attention_op uses bias_add, which is built on the kernel primitive API. A later change to the WriteData API of the kernel primitives (both its interface and its internals) moved the out-of-bounds check into a template parameter. This caused the wrong branch to be taken at call sites, producing out-of-bounds writes that corrupted other GPU memory. Symptom: a single run of test_fused_attention_op_api.py almost never fails, but looping over inputs of different shapes intermittently yields wrong results, making the bug hard to notice.
- 23 Sep 2021, 1 commit

  Committed by Li Min