- 15 September 2022, 5 commits

Committed by Jacek Czaja
* - mul & matmul changes - fix - bs16 correction of strides * - cosmetic fixes * - lint * - fix * - fix * - format -> mem_desc * - fix * - fix * - fix * - fix * - fix

Committed by 傅剑寒

Committed by WangZhen
Support 0 shapes input Tensor for MKL slice kernel
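
For context, this change concerns slice inputs that contain a zero-sized dimension. A minimal user-level sketch of such a call (shapes are invented for illustration, not taken from the patch):

```python
import paddle

# A slice over a tensor whose leading dimension is 0; the oneDNN (MKL) slice
# kernel should return an empty result for this case instead of failing.
x = paddle.zeros([0, 3, 4], dtype='float32')
y = paddle.slice(x, axes=[1, 2], starts=[0, 1], ends=[2, 3])
print(y.shape)  # [0, 2, 2]
```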

Committed by limingshu
* first commit * fix some bugs in code * fix bugs * to optimize merge one dimension feature

Committed by Li Min

- 14 September 2022, 8 commits

Committed by Jiabin Yang
* support bmm and bmm_grad in xpu * add error removal * test=kunlun * refactor code for better structure * test=kunlun * add fp16 kernel for bmm * test=kunlun
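
A rough sketch of what the new fp16 bmm path computes; shapes are illustrative, and running it on a Kunlun XPU would additionally require paddle.set_device('xpu') plus a device/build with float16 support:

```python
import paddle

# Batched matmul in float16: one [4, 5] x [5, 6] product per batch entry.
x = paddle.randn([8, 4, 5]).astype('float16')
y = paddle.randn([8, 5, 6]).astype('float16')
out = paddle.bmm(x, y)
print(out.shape)  # [8, 4, 6]
```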

Committed by Li Min

Committed by Leo Guo
Migrate scale and scatter to phi, and modify the code style for roi_align_kernel. test=kunlun (#45938)

Committed by ykkk2333

Committed by Leo Chen

Committed by zhangbo9674
* support bfloat16 for amp_decorate * add check_finite for bf16 * fix bug * add ut * add ut * refine code
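
A hedged sketch of what bf16 AMP decoration might look like after this change; the dtype='bfloat16' argument is an assumption inferred from the commit title, not confirmed from the patch:

```python
import paddle

# Assumed usage: select bfloat16 instead of float16 when decorating for AMP O2.
model = paddle.nn.Linear(16, 16)
opt = paddle.optimizer.SGD(parameters=model.parameters())
model, opt = paddle.amp.decorate(models=model, optimizers=opt,
                                 level='O2', dtype='bfloat16')  # dtype value assumed
```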

Committed by Yiqun Liu

Committed by zhangkaihuo

- 13 September 2022, 3 commits

Committed by JingZhuangzhuang
* add softmax infer kernel

Committed by ykkk2333

Committed by ykkk2333

- 09 September 2022, 7 commits

Committed by engineer1109
paddle::platform::CudaAtomicAdd https://github.com/PaddlePaddle/Paddle/issues/45881

Committed by sneaxiy
* fix softmax int64 * follow comments

Committed by 5u13

Committed by Chen Weihang
* migrate load kernel * remove load op * fix test failed

Committed by Chen Weihang
* add fusion dir and fuse_softmax_mask kernel * remove fusion kernel dir * migrate infershape * fix code error

Committed by xiaoguoguo626807
* modify slice infershape * code style * modify slice_unittest

Committed by Chen Weihang
* simplify size op * trans to cuda manually * fix copy error

- 08 September 2022, 2 commits

Committed by piotrekobi
* gaussian random
* mkldnn to onednn renaming
* fix merge conflicts
* remove fluid code
* onednn renaming
* Move classes from mkldnn_reuse.h to onednn_reuse.h
* Migrate pool+grad, clip+grad and cast oneDNN kernels to PHI
* Refactor grad kernels into separate files
* Fix CI failures
* Fix Codestyle
* Implement reviewer suggestions
* Add new lines after includes for readability
Co-authored-by: Silv3S <slawomir.siwek@intel.com>

Committed by Leo Guo

- 07 September 2022, 9 commits

Committed by Chen Weihang
* add save kernel * add save_sr_kernel * remove original save_op * add save gpu kernel * remove combine kernel * add port.h include * add save selected rows test * remove useless kernel.h

Committed by houj04
* [XPU] update xdnn to 0906. test=kunlun * [XPU] update xdnn to 0907. test=kunlun

Committed by piotrekobi
* gaussian random
* mkldnn to onednn renaming
* fix merge conflicts
* Migrate reduce_op oneDNN kernels to phi
* Remove unnecessary header
* remove fluid code
* onednn renaming
* Change std::vector<int64_t> to IntArray
* Fix code style
* Move classes from mkldnn_reuse.h to onednn_reuse.h
* Move more functions from mkldnn_helper.h to onednn_helpper.h
* Change MKLDNN to OneDNN in VLOG message
* Implement reviewer suggestions
Co-authored-by: Silv3S <slawomir.siwek@intel.com>

Committed by WangZhen
Adapt tensor output_size for conv2d_transpose and depthwise_conv2d_transpose
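
The point of this change is that output_size may now be given as a Tensor rather than a Python list. A small sketch under that assumption (shapes invented for illustration):

```python
import paddle
import paddle.nn.functional as F

x = paddle.randn([2, 4, 8, 8])                         # NCHW input
w = paddle.randn([4, 6, 3, 3])                         # [in_channels, out_channels, kH, kW]
out_size = paddle.to_tensor([17, 17], dtype='int32')   # Tensor output_size (the new case)
y = F.conv2d_transpose(x, w, stride=2, output_size=out_size)
print(y.shape)  # [2, 6, 17, 17]
```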

Committed by houj04

Committed by limingshu
* first commit * merged with develop * merged with develop * fix merge sequential one dims bugs

Committed by Sławomir Siwek
* scale kernel * endline * add inplace * fix merge conflicts * Merge conflicts

Committed by zhangkaihuo

Committed by sneaxiy
* fix amp kernel * update to remove PADDLE_WITH_XPU macro

- 06 September 2022, 6 commits

Committed by YuanRisheng
* add tensor array * fix ci bugs * fix ci bugs * fix ci bugs * fix ci bugs * update by comment * update code

Committed by ykkk2333

Committed by ykkk2333

Committed by Weilong Wu
[Eager, Performance optimization] reduce_all interface: move reduce_all flag from Python to C++ (#45744)
* move reduce_all flag from python to c++
* polish reduce_all
* fix ci error
* fix errors

Committed by Weilong Wu
* [Eager, Performance optimization] reduce_max / min polish * polish reduce_max / min * update min/max kernel reduce_all logic * fix a mistake * fix ci errors * fix errors
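
For context on the two commits above, the reduce_all flag marks the case where no axis is given and the reduction collapses every dimension. A short illustration using the public API (values are arbitrary):

```python
import paddle

x = paddle.to_tensor([[1.0, 5.0], [3.0, 2.0]])
print(paddle.max(x))          # no axis: the reduce_all path, returns 5.0
print(paddle.max(x, axis=0))  # axis given: per-column maxima [3.0, 5.0]
```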

Committed by xiaohemaikoo