- 20 October 2022, 7 commits
Submitted by Kaipeng Deng
* add fused_attention_pass. test=develop
* support fp16. test=develop
* fix format. test=develop

Submitted by yeliang2258
* Fix quantize model deploy bugs when using MKLDNN (#45920)
* fix immutable op quantize bugs
* fix
* fix build bug
* fix test
* notest,test=inference
* fix ppyoloe acc drop bugs
* fix test
* fix test
* add test
* fix
* fix
* fix test
* fix refined name bug
* fix test
* bias fix
* fix matmul weight dequant bug
* re-ci
* fix tester
* fix test
* fix tester
* update weight dequantize func
* update code
* update test for coverage
* update test
* update cmake
* update cmakelist
* update code
* rerun ci
* remove useless code
* re-ci
* update code
* update code
* fix header
* update code for log

Submitted by zhoutianzi666
* stride_to_24
* fix CI failing

Submitted by Wang Bojun
* Enhance the layernorm shift partition fuse op when shift size > 0 (roll shifting)
* fix cherry-pick test

Submitted by JingZhuangzhuang

Submitted by sneaxiy
Fix some operators when the tensor.numel() > INT32_MAX

Submitted by sneaxiy
support pure bfloat16 for more ops
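As a rough illustration of what pure-bfloat16 execution looks like from the Python side, here is a minimal sketch, assuming a Paddle build whose AMP auto_cast accepts dtype='bfloat16' and whose kernels for the ops involved support it; the toy layer and shapes are made up:

```python
import paddle

# Hypothetical toy layer; any op covered by the new bfloat16 kernels would do.
linear = paddle.nn.Linear(4, 4)
x = paddle.randn([2, 4])

# Run the forward pass in pure low precision (AMP level O2) with bfloat16,
# assuming the installed Paddle version accepts dtype='bfloat16' here.
with paddle.amp.auto_cast(level='O2', dtype='bfloat16'):
    y = linear(x)
print(y.dtype)
```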

- 19 October 2022, 4 commits
Submitted by yeliang2258
* Add unsigned int8 propagation
* Add or modify unit tests
* Correct concat scale checking
* Apply review suggestions
* Corrections
Co-authored-by: joanna.wozna.intel <joanna.wozna@intel.com>

Submitted by Ghost Screaming
* Fix bug of reduce_sum op: when input.numel() > INT32_MAX, its result is wrong.
* Support allow_partial switch, which can be configured in pipeline_configs. If the tensors sent from different hosts are not the same, they shouldn't be sent partially and then concatenated into a whole tensor.
* Rename allow_partial to enable_partial_send_recv.
* Add global variable _enable_partial_send_recv

Submitted by WangZhen
[CherryPick][Dy2St]Fix recurrent op eager deletion pass error in dy2st

Submitted by Hui Zhang
Construct exec and ctx only once in cond op to speed up
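For context, a minimal sketch of the static-graph cond op this speed-up targets, assuming the usual paddle.static.nn.cond API; the values are arbitrary:

```python
import paddle

paddle.enable_static()

def true_fn():
    return paddle.full(shape=[1], fill_value=1.0)

def false_fn():
    return paddle.full(shape=[1], fill_value=2.0)

x = paddle.full(shape=[1], fill_value=0.3)
# cond lowers to the conditional op whose executor and context
# are now constructed only once per run.
out = paddle.static.nn.cond(x < 0.5, true_fn, false_fn)

exe = paddle.static.Executor(paddle.CPUPlace())
exe.run(paddle.static.default_startup_program())
print(exe.run(fetch_list=[out]))
```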

- 18 October 2022, 6 commits
Submitted by Wilber

Submitted by weishengying
Add symbolic shape deduction function for unfold, scatter_nd_add, p_norm, grid_sampler, pad3d, etc (#46291) (#47003)

Submitted by zhouweiwei2014
Add three new APIs: sparse.is_same_shape, sparse.reshape, sparse.transpose
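A minimal usage sketch of the three new sparse APIs, assuming the paddle.sparse COO constructor shown below; the indices and values are arbitrary:

```python
import paddle

# Build a small 3x4 sparse COO tensor with two non-zero entries.
indices = [[0, 2], [1, 3]]
values = [1.0, 2.0]
x = paddle.sparse.sparse_coo_tensor(indices, values, shape=[3, 4])

y = paddle.sparse.reshape(x, [4, 3])      # same non-zeros, new shape
z = paddle.sparse.transpose(x, [1, 0])    # permute the two dimensions
print(paddle.sparse.is_same_shape(y, z))  # compare shapes of two sparse tensors
```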

Submitted by zhoutianzi666

Submitted by Haohongxiang
* [Dygraph] Fix performance of pp+mp by using send/recv_calc_stream instead of send/recv (#46116)
* [Dygraph] Fix Perf of FusedFeedForward and FusedAttention with AllReduce (#46780)
* update

Submitted by Wang Bojun
* draft with debug print
* remove debug print
* bug fix for ci

- 17 October 2022, 3 commits
Submitted by Wen Sun
* Support both use_calc_stream and sync_op in send recv APIs (#46023)
* Support both use_calc_stream and sync_op in allgather API (#46295)
* Support both use_calc_stream and sync_op in collective communication API (#46761)
* Move group and all reduce from collective to communication (#45848)
* Completes bfloat16 dtype for collective api in eager mode (#45844)
* Fix collective APIs cannot be recognized when building docs (#46962)
Co-authored-by: LiYuRio <63526175+LiYuRio@users.noreply.github.com>
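A hedged sketch of the sync_op switch these commits add to the collective APIs, assuming a two-process launch via paddle.distributed.launch; the tensor shape and values are illustrative:

```python
import paddle
import paddle.distributed as dist

# Launch with e.g.: python -m paddle.distributed.launch --nproc_per_node=2 demo.py
dist.init_parallel_env()

t = paddle.ones([2, 2]) * dist.get_rank()

# sync_op=True (the default) blocks until the collective finishes;
# sync_op=False returns a task handle that can be waited on later.
task = dist.all_reduce(t, sync_op=False)
task.wait()
print(t)
```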

Submitted by zhangkaihuo
Cherry-pick of #46322 and #46245: Sparse API now supports static graph

Submitted by Allen Guo
* paddle-inference support custom-ops
* fix tolower
Co-authored-by: Zhixin Yao <zhixiny@graphcore.ai>

- 14 October 2022, 5 commits
Submitted by Wilber

Submitted by xiaoxiaohehe001

Submitted by Aurelius84
* [BUG] Fix expand_as_v2 bug when X and Y have different dtypes
* fix commit
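A tiny sketch of the case the fix covers, where the target tensor only supplies the shape and deliberately has a different dtype; the values are made up:

```python
import paddle

x = paddle.to_tensor([1.0, 2.0, 3.0], dtype='float32')  # shape [3]
y = paddle.zeros([2, 3], dtype='int64')                  # different dtype, used only for its shape

out = paddle.expand_as(x, y)  # broadcasts x to shape [2, 3]; dtype stays float32
print(out.shape, out.dtype)
```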

Submitted by Zhang Jun
* fix reshape2 opteller; add elementwise min/max register for tensorrt

Submitted by zhoutianzi666

- 13 October 2022, 3 commits
Submitted by zhangbo9674

Submitted by 傅剑寒
Fix set_value failure when the source tensor has fp16 dtype and the value to assign is a plain number (dev PR link: #46801)
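For illustration, a minimal sketch of the previously failing pattern, assuming eager mode and that slice assignment lowers to the set_value op:

```python
import paddle

x = paddle.ones([4], dtype='float16')
x[1:3] = 2.5  # assign a plain Python number into a slice of an fp16 tensor
print(x)
```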

Submitted by Sławomir Siwek
* Revert pool+grad oneDNN kernel conversion (#45989)
* [PHI] transpose2_grad op migration (#46139)
* op migrated, Copy(OneDNNContext, ...) added
* mutable_data & op registration in fluid removed
* refactoring
* OneDNNGetDataType to uppercase
* missing cpu check added, handler moved to .h file
* name changed to transpose_grad
* Copy changed back to TensorCopy
* Resizing corrected, Copy(OneDNNContext) removed
Co-authored-by: Piotr Paturej <48731682+piotrekobi@users.noreply.github.com>
Co-authored-by: Paulina Gacek <paulina.gacek@intel.com>

- 12 October 2022, 1 commit
Submitted by niuliling123
Cherry-pick #46541: enable automatic layout tuning for the ResNet50, TSM, and deeplabv3 models with zero model modification

- 11 October 2022, 4 commits
Submitted by Sławomir Siwek
* [PHI] Migrate gelu kernels (#45596)
* gaussian random
* mkldnn to onednn renaming
* fix merge conflicts
* remove fluid code
* onednn renaming
* gelu fwd
* sort activations
* gelu gradient
* remove unused macros
* merge conflicts
* fix merge conflicts
* remove extra constraint from gelu op
* [PHI] relu6_grad kernel (#46501)
* Relu6
* remove fluid handler
* add individual kernel signature
* coding style
* replace bounded_relu with clip
* whitespace
* code style

Submitted by Sławomir Siwek
Co-authored-by: Piotr Paturej <48731682+piotrekobi@users.noreply.github.com>

Submitted by ceci3

Submitted by Yuanle Liu

- 10 October 2022, 6 commits
Submitted by feng_shuai
* fix gather op convert to only support int32 index as input.
* add ut

Submitted by Sławomir Siwek
[cherry-pick] [PHI] Migrate concat+grad, expand+grad, fill_constant … oneDNN kernels (#45863) (#46727)
* [PHI] Migrate concat+grad, expand+grad, fill_constant, nearest_interp and bilinear_interp oneDNN kernels (#45863)
* Migrate concat+grad, expand+grad, fill_constant, nearest_interp_v2 and bilinear_interp_v2 oneDNN kernels to PHI
* Remove old namespace variable
* Fix invalid out dims error
* Add mutable_data method to concat output
* Add check for -1 dim before computing out_dims
* Capitalize oneDNNGetDataType function name
* Change fill_constant kernel to correct PHI kernel
* Attempt to fix dims error
* Fix fill_constant (full) kernel
* update dependencies
Co-authored-by: Piotr Paturej <48731682+piotrekobi@users.noreply.github.com>

Submitted by Sławomir Siwek
* [PHI] Migrate sgd and stack oneDNN kernels (#46374)
* Convert slice+grad oneDNN fluid kernels to PHI
* Change mutable_data to Alloc
* Refactor licences
* update dependencies
Co-authored-by: Piotr Paturej <48731682+piotrekobi@users.noreply.github.com>

Submitted by Sławomir Siwek
* Convert split, pad and pad3d kernels
* Convert slice+grad oneDNN fluid kernels to PHI
* change out->mutable_data to dev_ctx.Alloc
Co-authored-by: Piotr Paturej <48731682+piotrekobi@users.noreply.github.com>

Submitted by Sławomir Siwek
* init
* remove softmaxop
* merge dev
* correct dir
* style

Submitted by Sławomir Siwek
* First approach
* Shape kernel corrected
* Compilation error fixed
* Resize corrected
* Registered types added
* Mistake corrected & types added
* sum kernel deleted
Co-authored-by: Paulina Gacek <paulina.gacek.pl@gmail.com>

- 29 September 2022, 1 commit
Submitted by zyfncg
* set flag of clip_extra in save_inference_model to true (#46151)
* open the clip_extra flag in paddle.static.save_inference_model, test=allcase (#46456)
* Open the clip_extra flag in TracedLayer.save_inference_model (#46473)
* open the clip_extra flag in paddle.static.save_inference_model, test=allcase
* set the default value of clip_extra in TracedLayer from False to True, test=allcase
* update english doc of paddle.static.save_inference_model, test=document_fix (#46484)
* Fix clip_extra logic in remove_training_info (#46534)
* fix clip_extra code in remove_training_info
* revert rnn opmaker clear
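A short sketch of the export path this flag affects; the layer, sizes, and output path are placeholders. With clip_extra now defaulting to true, training-only attributes are stripped from the saved inference program:

```python
import paddle

paddle.enable_static()
exe = paddle.static.Executor(paddle.CPUPlace())

x = paddle.static.data(name='x', shape=[None, 8], dtype='float32')
y = paddle.static.nn.fc(x, size=4)
exe.run(paddle.static.default_startup_program())

# clip_extra now defaults to True here, so extra training-only op attributes
# are removed from the exported inference program.
paddle.static.save_inference_model('./infer_model', [x], [y], exe)
```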