- 13 September 2022 (2 commits)
- 09 September 2022 (11 commits)
  - Committed by engineer1109
    paddle::platform::CudaAtomicAdd https://github.com/PaddlePaddle/Paddle/issues/45881
  - Committed by sneaxiy
    * fix softmax int64
    * follow comments
  - Committed by Charles-hit
    * fix split bug in static mode
    * modify code style
    * modify code style
    * add unit test for split
  - Committed by Chen Weihang
  - Committed by Leo Chen
    * add operator<< for BuildStrategy
    * add fake_coalesce
    * fit allreduce mode for new_exe
    * remove debug code
    * follow comments
  - Committed by 5u13
  - Committed by Chen Weihang
    * migrate load kernel
    * remove load op
    * fix test failed
  - Committed by Charles-hit
  - Committed by Chen Weihang
    * add fusion dir and fuse_softmax_mask kernel
    * remove fusion kernel dir
    * migrate infershape
    * fix code error
  - Committed by xiaoguoguo626807
    * modify slice infershape
    * code style
    * modify slice_unittest
  - Committed by Chen Weihang
    * simplify size op
    * trans to cuda manually
    * fix copy error
- 08 September 2022 (4 commits)
  - Committed by piotrekobi
    * gaussian random
    * mkldnn to onednn renaming
    * fix merge conflicts
    * remove fluid code
    * onednn renaming
    * Move classes from mkldnn_reuse.h to onednn_reuse.h
    * Migrate pool+grad, clip+grad and cast oneDNN kernels to PHI
    * Refactor grad kernels into separate files
    * Fix CI failures
    * Fix Codestyle
    * Implement reviewer suggestions
    * Add new lines after includes for readability
    Co-authored-by: Silv3S <slawomir.siwek@intel.com>
  - Committed by Leo Guo
  - Committed by Chen Weihang
  - Committed by HongyuJia
- 07 September 2022 (13 commits)
  - Committed by Chen Weihang
    * add save kernel
    * add save_sr_kernel
    * remove original save_op
    * add save gpu kernel
    * remove combine kernel
    * add port.h include
    * add save selected rows test
    * remove useless kernel.h
  - Committed by houj04
    * [XPU] update xdnn to 0906. test=kunlun
    * [XPU] update xdnn to 0907. test=kunlun
  - Committed by Chen Weihang
    * fix infermeta bug for vector input and output
    * add unittest
  - Committed by BiynXu
  - Committed by piotrekobi
    * gaussian random
    * mkldnn to onednn renaming
    * fix merge conflicts
    * Migrate reduce_op oneDNN kernels to phi
    * Remove unnecessary header
    * remove fluid code
    * onednn renaming
    * Change std::vector<int64_t> to IntArray
    * Fix code style
    * Move classes from mkldnn_reuse.h to onednn_reuse.h
    * Move more functions from mkldnn_helper.h to onednn_helper.h
    * Change MKLDNN to OneDNN in VLOG message
    * Implement reviewer suggestions
    Co-authored-by: Silv3S <slawomir.siwek@intel.com>
  - Committed by WangZhen
    Adapt tensor output_size for conv2d_transpose and depthwise_conv2d_transpose
  - Committed by zyfncg
    * clear extra attrs of reduce op in opmaker
    * fix reduce_mean
  - Committed by houj04
  - Committed by limingshu
    * first commit
    * merged with develop
    * merged with develop
    * fix merge sequential one dims bugs
  - Committed by Sławomir Siwek
    * scale kernel
    * endline
    * add inplace
    * fix merge conflicts
    * Merge conflicts
  - Committed by xiongkun
    * add compile-time infermeta logic for stack infermeta.
    * add unittest for stack infermeta where -1 exists in shapes.
    * remove backward changes.
  - Committed by zhangkaihuo
  - Committed by sneaxiy
    * fix amp kernel
    * update to remove PADDLE_WITH_XPU macro
- 06 September 2022 (10 commits)
  - Committed by YuanRisheng
    * add tensor array
    * fix ci bugs
    * fix ci bugs
    * fix ci bugs
    * fix ci bugs
    * update by comment
    * update code
  - Committed by Wilber
  - Committed by ykkk2333
  - Committed by ykkk2333
  - Committed by OccupyMars2025
  - Committed by Weilong Wu
    [Eager, Performance optimization] reduce_all interface: move reduce_all flag from python to C++ (#45744)
    * [Eager, Performance optimization] move reduce_all flag from python to c++
    * polish reduce_all
    * fix ci error
    * fix errors
  - Committed by Weilong Wu
    * [Eager, Performance optimization] reduce_max / min polish
    * polish reduce_max / min
    * update min/max kernel reduce_all logic
    * fix a mistake
    * fix ci errors
    * fix errors
  - Committed by xiaohemaikoo
  - Committed by zyfncg
    * set use_cudnn=true for conv2d
    * clear opmaker of matmul_v2
    * fix bug of set_attr
    * add extra attr checker in infer_shape
  - Committed by zyfncg