- 20 Sep 2022, 10 commits
-
Committed by houj04
* [XPU] update xdnn activations. (#46246)
* [XPU] update xpu cmake. test=kunlun
-
Committed by HongyuJia
* polish code comments
* polish data_device_transform.cc
-
Committed by Jiabin Yang
* [Eager] Fix ocr (#46124)
* fix linspace error in amp
* fix log
* fix amp error
* fix ocr error caused by amp
* add more checks
* rename dtype ns
* [Eager Bug fix] Fix Detection (#46147)
* fix linspace error in amp
* fix log
* fix amp error
* Revert "Simplify size op impl (#45808)". This reverts commit c252b1de.
* fix_seg
* fix detection
Co-authored-by: Chen Weihang <sunny_cwh@163.com>
-
Committed by Ghost Screaming
* Fix bug of reduce_sum op: when input.numel() > INT32_MAX, its result is wrong.
* Cherry-pick of PR 46045
* Fix bug of reduce_sum kp op.
* Fix reduce_sum kp operator compilation: if the compilation device is XPU, the Eigen kernel should be ignored.
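The reduce_sum fix above concerns 32-bit overflow of the element count. A minimal standalone sketch (plain Python, not Paddle's actual kernel code) of how a `numel` above INT32_MAX wraps around when stored in a signed 32-bit integer:

```python
INT32_MAX = 2**31 - 1

def as_int32(x: int) -> int:
    """Emulate storing an arbitrary Python int in a signed 32-bit integer."""
    x &= 0xFFFFFFFF  # keep the low 32 bits, as int32 storage would
    return x - 0x100000000 if x >= 0x80000000 else x

# a tensor with more elements than int32 can represent
numel = INT32_MAX + 5
print(as_int32(numel))  # wraps to a negative element count: -2147483644
```

Any index arithmetic or loop bound derived from such a wrapped count silently goes wrong, which is consistent with the "result is wrong" symptom the commit describes.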
-
Committed by WangZhen
* Fix TransDataBackend error when calling unsqueeze with an MKL Tensor
* Add UT
* Refine UT
-
Committed by zhangkaihuo
Cherry-pick of #46016, #46021, #45974:
* [Sparse] Sparse add support gpu (#45974)
* [Sparse] Remove unused code (#46021)
* [Sparse] Add infer meta (#46016)
-
Committed by Jiabin Yang
* fix linspace error in amp
* fix log
* fix amp error
-
Committed by Charles-hit
* support cast op backward refuse forward and fix some bugs (#46173)
* support cast op backward refuse forward
* Fix the bug of the high order unit test framework
* support sign op backward refuse forward (#46002)
-
Committed by niuliling123
Cherry-pick from #45826: LayoutAutotune supports inplace ops. Adjusted UseAutotune per the review comments on "Add eager layout autotune" (#45409), and moved the LayoutAutotune check into the controller to keep it consistent with the AMP check.
-
Committed by zyfncg
* fix wrong eigen header include
* fix compile bug
* fix nan_inf_utils_detail
* fix resource_manager
* fix conv_miopen_helper
-
- 19 Sep 2022, 7 commits
-
Committed by RichardWooSJTU
[vision.ops.nms] Fix return order error and duplicate results with specific inputs (#46148) (#46193)
* fix return order error and duplicate results with specific inputs
-
Committed by weishengying
-
Committed by Charles-hit
* add unit test for sum higher level op (#45961)
* support slice op backward refuse forward and add high level unit test (#45960)
* support tile op backward refuse forward (#45942)
* support expand_v2 op backward refuse forward (#45941)
* support concat backward refuse forward (#45940)
-
Committed by Jiabin Yang
* [PHI] Support bmm and bmm_grad in xpu (#45887)
* support bmm and bmm_grad in xpu
* add error removal. test=kunlun
* refactor code for better structure. test=kunlun
* add fp16 kernel for bmm. test=kunlun
-
Committed by minghaoBD
Co-authored-by: RichardWooSJTU <37864677+RichardWooSJTU@users.noreply.github.com>
-
Committed by sneaxiy
-
Committed by Chen Weihang
This reverts commit c252b1de.
-
- 17 Sep 2022, 1 commit
-
Committed by Yuanle Liu
-
- 16 Sep 2022, 2 commits
-
Committed by Charles-hit
(cherry-pick) Fix split infershape in static mode and add convert rules for fill_any_like op (#46079)
* Fix split bug in static mode (#45906)
* fix split bug in static mode
* modify code style
* add unit test for split
* add convert rules for fill_any_like op in paddle science (#45985)
* add unit test for fill_any_like op in paddle science
* modify fill_any_like convert rule
* modify fill_any_like convert rule dtype
-
Committed by Chen Weihang
* normalize yaml file name (#45894)
* Clear extra attributes of activation op in OpMaker (#45772)
* clear extra attr of activation op in opmaker
* fix syntax bug
* fix mkldnn kernel
* fix merge conflict
* fix bug
* [PHI] Normalize yaml op label (#45976)
* normalize yaml op label
* revert op_compat yaml change
* fix prelu and rnn compat problem
* replace api by op
* support assign op backward refuse forward (#45879)
* normalize yaml backward op label (#46028)
Co-authored-by: zyfncg <zhangyunfei07@baidu.com>
Co-authored-by: Charles-hit <56987902+Charles-hit@users.noreply.github.com>
-
- 15 Sep 2022, 2 commits
-
Committed by WangZhen
Support input Tensors with a 0 in their shape for the MKL slice kernel
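As an illustration of the intended semantics (NumPy used here as a stand-in, not the actual MKL kernel code), slicing a tensor that has a 0 in its shape is well defined and should yield another zero-sized tensor rather than an error:

```python
import numpy as np

# a tensor whose first dimension is 0 holds no data but still has a valid shape
x = np.zeros((0, 3), dtype=np.float32)

# slicing it is legal and produces another zero-sized tensor
y = x[0:0, 1:3]
print(y.shape)  # (0, 2)
```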
-
Committed by Chen Weihang
* fix arm fp16 compile error
* polish macro impl
-
- 14 Sep 2022, 3 commits
-
Committed by JingZhuangzhuang
* cherry-pick: delay tensorrt log
* Update trt_plugin.h
-
Committed by engineer1109
Fix a compilation error with CUDA 11.7
-
Committed by ykkk2333
-
- 13 Sep 2022, 1 commit
-
Committed by JingZhuangzhuang
-
- 09 Sep 2022, 4 commits
-
Committed by Charles-hit
-
Committed by Chen Weihang
* add fusion dir and fuse_softmax_mask kernel
* remove fusion kernel dir
* migrate infershape
* fix code error
-
Committed by xiaoguoguo626807
* modify slice infershape
* code style
* modify slice_unittest
-
Committed by Chen Weihang
* simplify size op
* trans to cuda manually
* fix copy error
-
- 08 Sep 2022, 4 commits
-
Committed by piotrekobi
* gaussian random
* mkldnn to onednn renaming
* fix merge conflicts
* remove fluid code
* onednn renaming
* Move classes from mkldnn_reuse.h to onednn_reuse.h
* Migrate pool+grad, clip+grad and cast oneDNN kernels to PHI
* Refactor grad kernels into separate files
* Fix CI failures
* Fix codestyle
* Implement reviewer suggestions
* Add new lines after includes for readability
Co-authored-by: Silv3S <slawomir.siwek@intel.com>
-
Committed by Leo Guo
-
Committed by Chen Weihang
-
Committed by HongyuJia
-
- 07 Sep 2022, 6 commits
-
Committed by Chen Weihang
* add save kernel
* add save_sr_kernel
* remove original save_op
* add save gpu kernel
* remove combine kernel
* add port.h include
* add save selected rows test
* remove useless kernel.h
-
Committed by houj04
* [XPU] update xdnn to 0906. test=kunlun
* [XPU] update xdnn to 0907. test=kunlun
-
Committed by Chen Weihang
* fix infermeta bug for vector input and output
* add unittest
-
Committed by BiynXu
-
Committed by piotrekobi
* gaussian random
* mkldnn to onednn renaming
* fix merge conflicts
* Migrate reduce_op oneDNN kernels to phi
* Remove unnecessary header
* remove fluid code
* onednn renaming
* Change std::vector<int64_t> to IntArray
* Fix code style
* Move classes from mkldnn_reuse.h to onednn_reuse.h
* Move more functions from mkldnn_helper.h to onednn_helper.h
* Change MKLDNN to OneDNN in VLOG message
* Implement reviewer suggestions
Co-authored-by: Silv3S <slawomir.siwek@intel.com>
-
Committed by WangZhen
Adapt tensor output_size for conv2d_transpose and depthwise_conv2d_transpose
-