- 05 Sep 2022 (3 commits)

Committed by piotrekobi
* gaussian random
* mkldnn to onednn renaming
* fix merge conflicts
* remove fluid code
* onednn renaming
* Move classes from mkldnn_reuse.h to onednn_reuse.h
* Move more functions from mkldnn_helper.h to onednn_helpper.h
* Change MKLDNN to OneDNN in VLOG message

Co-authored-by: Silv3S <slawomir.siwek@intel.com>

Committed by chalsliu

Committed by sneaxiy

- 02 Sep 2022 (1 commit)

Committed by kangguangli

- 01 Sep 2022 (3 commits)

Committed by houj04

Committed by taixiurong
test=kunlun

Committed by Leo Chen
* refine cmake of framework
* add deps for dense tensor
* fix deps
* remove alloc(ctx)
* add depends on mkldnn

- 29 Aug 2022 (3 commits)

Committed by Sławomir Siwek
* abs relu6 fwd
* abs bwd
* gaussian_random_kernel and mkldnn-onednn renaming
* scale kernel
* whitespace
* whitespace
* revert scale migration
* whitespaces
* revert changes to gaussian kernel
* whitespaces

Committed by Allen Guo
* support depthwise_conv2d ops
  Co-authored-by: Zhixin Yao <zhixiny@graphcore.ai>
  Co-authored-by: Zhaorui Chen <zhaoruic@graphcore.ai>
* fix duplicate name
  Co-authored-by: Zhixin Yao <zhixiny@graphcore.ai>
  Co-authored-by: Zhaorui Chen <zhaoruic@graphcore.ai>

Committed by Allen Guo

- 26 Aug 2022 (1 commit)

Committed by houj04

- 25 Aug 2022 (2 commits)

Committed by hong
* optimize conv algo speed
* code polish
* remove useless code
* fix compile error
* fix cpu compile error
* not use cudnn algo t
* add search cache max number
* polish code
* fix cache test bug
* add groups and data format to conv args
* fix cache test bug
* fix cudnn_deterministic bug
* fix test switch auto tune bug
* fix test switch autotune bug
* fix conv cache bug
* fix cache test error
* fix cache test bug
* fix windows/mac compile error
* fix workspace search error
* update cudnn cache
* fix cache test bug; test=develop
* fix autotune switch test error
* polish code
* polish code
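The commit above adds a size-capped cache for convolution algorithm search results, keyed by the convolution arguments (now including groups and data format). Purely as an illustration of that idea, and not Paddle's actual implementation, a minimal bounded cache could look like the sketch below; every type and function name in it is hypothetical.

```cpp
// Hypothetical sketch of a size-bounded conv-algorithm cache; not Paddle code.
#include <cstddef>
#include <cstdint>
#include <map>
#include <optional>
#include <sstream>
#include <string>
#include <vector>

// Key built from the arguments that determine which algorithm is fastest.
struct ConvArgsKey {
  std::vector<int64_t> input_dims, filter_dims;
  std::vector<int> strides, paddings, dilations;
  int groups = 1;
  std::string dtype;        // e.g. "float32"
  std::string data_format;  // e.g. "NCHW"

  std::string Serialize() const {
    std::ostringstream os;
    auto dump = [&os](const auto& v) { for (auto x : v) os << x << ','; os << '|'; };
    dump(input_dims); dump(filter_dims);
    dump(strides); dump(paddings); dump(dilations);
    os << groups << '|' << dtype << '|' << data_format;
    return os.str();
  }
};

class ConvAlgoCache {
 public:
  explicit ConvAlgoCache(std::size_t max_entries) : max_entries_(max_entries) {}

  // Returns the cached algorithm id if this configuration was searched before.
  std::optional<int> Get(const ConvArgsKey& key) const {
    auto it = cache_.find(key.Serialize());
    if (it == cache_.end()) return std::nullopt;
    return it->second;
  }

  // Stores a search result; once the cache is full, callers keep using the
  // default/heuristic algorithm instead of growing the cache further.
  void Set(const ConvArgsKey& key, int algo_id) {
    if (cache_.size() >= max_entries_) return;
    cache_[key.Serialize()] = algo_id;
  }

 private:
  std::size_t max_entries_;
  std::map<std::string, int> cache_;
};
```

Remembering the winner of an exhaustive algorithm search per configuration, and capping how many configurations are remembered, is the usual way to keep both search time and host memory bounded when input shapes vary a lot.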

Committed by haosicheng

- 24 Aug 2022 (1 commit)

Committed by mengqingchun02
* support beam_search operator on xpu. test=kunlun
* support beam_search operator on xpu. test=kunlun
* support beam_search operator on xpu. test=kunlun
* support beam_search operator on xpu. test=kunlun
* support beam_search operator on xpu. test=kunlun
* support fp16 of adam operator in xpu environment. test=kunlun
* support fp16 of adam operator in xpu environment. test=kunlun
* support fp16 of adam operator in xpu environment. test=kunlun

- 23 Aug 2022 (1 commit)

Committed by ronnywang
* [CustomDevice] add profiler apis
* migrate CalculateEstOccupancy into cuda_tracer
* update
* add ut

- 22 Aug 2022 (1 commit)

Committed by joanna.wozna.intel
* Add int8 support for matmul+elementwise_add fuse
* Corrections after review and ernie test fix

- 19 Aug 2022 (3 commits)

Committed by houj04

Committed by dongfangshenzhu
* add merged_momentum *test=kunlun
* add merged_momentum *test=kunlun
* add fp16 to merged_momentum, *test=kunlun
* change dist_model.cc
* add merged_momentum unittest and change momentum, test=kunlun
* add merged_momentum unittest and change momentum, test=kunlun
* add merged_momentum unittest and change momentum, test=kunlun
* add merged_momentum unittest and change momentum, test=kunlun

Committed by mengqingchun02
* support beam_search operator on xpu. test=kunlun
* support beam_search operator on xpu. test=kunlun
* support beam_search operator on xpu. test=kunlun
* support beam_search operator on xpu. test=kunlun
* support beam_search operator on xpu. test=kunlun
* support beam_search operator on xpu. test=kunlun
* support beam_search operator on xpu. test=kunlun
* fix beam_search operator bugs on xpu. test=kunlun
* fix beam_search operator bugs on xpu. test=kunlun
* fix beam_search operator bugs on xpu. test=kunlun
* fix beam_search operator bugs on xpu. test=kunlun
* support beam_search_decode operator on xpu. test=kunlun
* support beam_search_decode operator on xpu. test=kunlun
* support beam_search_decode operator on xpu. test=kunlun
* support beam_search_decode operator on xpu. test=kunlun
* support beam_search_decode operator on xpu. test=kunlun
* support beam_search_decode operator on xpu. test=kunlun
* support beam_search_decode operator on xpu. test=kunlun
* support beam_search_decode operator on xpu. test=kunlun
* support beam_search_decode operator on xpu. test=kunlun
* support beam_search_decode operator on xpu. test=kunlun
* support beam_search_decode operator on xpu. test=kunlun
* support beam_search_decode operator on xpu. test=kunlun
* support beam_search_decode operator on xpu. test=kunlun
* support beam_search_decode operator on xpu. test=kunlun
* support beam_search_decode operator on xpu. test=kunlun

- 18 Aug 2022 (1 commit)

Committed by zhangxiaoci
* change to async mode for xpu multi-card training in static graph mode
* minor bugfix
* irrelevant. move to another pr
* move change to other pr
* fix stream issue
* fix 'stream not meet with current context' error
* fix branch diverge, test=kunlun

- 17 Aug 2022 (1 commit)

Committed by ykkk2333
* xpu unittest grad compute supports more types, *test=kunlun
* add instance norm xpu, *test=kunlun

- 16 Aug 2022 (1 commit)

Committed by houj04

- 15 Aug 2022 (2 commits)

Committed by zhangyikun02

Committed by houj04
* [XPU] add some collective ops. test=kunlun
* use XPUOpTestWrapper. test=kunlun
* skip kl1 for collective ops. fix typo: deivce -> device. test=kunlun

- 12 Aug 2022 (2 commits)

Committed by Allen Guo

Committed by Siming Dai
* add init file
* add op definition and infermeta
* add kernel definition funcs
* add broadcast infer shape
* add gpu forward kernel
* delete SUB and DIV
* add x_grad
* add template
* add e_grad for min and max
* fix small bug
* temp commit
* temp commit
* add e_grad for sum and mean
* fix some compile bug
* fix compile bugs
* fix compile problem
* add sum forward unittest
* fix broadcast error, add kernel sig, register e_grad, change unit test
* fix grad
* add temp grad fix
* temp commit
* add min max unittest
* add max, min unittest, fix mul bug
* add cpu forward sum and mean
* add forward min max, fix mean unittest
* add cpu backward min max
* fix code-style
* add backward sum mean
* fix rocm ci
* set unittest timeout
* fix bug of x broadcast to e, gpu grad
* fix bug of x broadcast to e, cpu grad
* rename BOOST_GET_CONST macro
* fix rocm ci
* mv graph_send_e_recv to graph_send_ue_recv
* move out_size to IntArray
* add eager op test
* fix max pool type bug, add unittest for api
* revise api doc
* add fp16 for atomic min and max, add unittest
* add unittest
* add fp16 support for graph_send_recv
* fix unittest fp16 bug
* change OutSizeTensor to Out_size
* move E to Y
* add copyright, fix comment
* review code
* fix thread block size
* fix thread block size
* change api attribute name: pool_type to reduce_op, compute_type to message_op
* change api attribute name, move pool_type to reduce_op, move compute_type to message_op
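The commit above adds graph_send_ue_recv, a graph message-passing op that combines node features with edge features through a message_op (add or mul) and aggregates the per-edge messages at destination nodes with a reduce_op (sum, mean, min or max), broadcasting node features onto edges as needed. The sketch below is only a conceptual scalar-feature CPU illustration of that pattern, not Paddle's kernel; the function name and signature are made up.

```cpp
// Hypothetical scalar-feature sketch of the send_ue_recv pattern; not Paddle's kernel.
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <limits>
#include <string>
#include <vector>

std::vector<float> SendUERecv(const std::vector<float>& x,    // per-node feature
                              const std::vector<float>& e,    // per-edge feature
                              const std::vector<int>& src,    // edge source indices
                              const std::vector<int>& dst,    // edge destination indices
                              int num_nodes,
                              const std::string& message_op,  // "ADD" or "MUL"
                              const std::string& reduce_op) { // "SUM", "MEAN", "MIN", "MAX"
  assert(src.size() == dst.size() && src.size() == e.size());
  float init = 0.0f;
  if (reduce_op == "MIN") init = std::numeric_limits<float>::max();
  if (reduce_op == "MAX") init = std::numeric_limits<float>::lowest();
  std::vector<float> out(num_nodes, init);
  std::vector<int> degree(num_nodes, 0);  // messages received per node, for MEAN

  for (std::size_t i = 0; i < src.size(); ++i) {
    // Per-edge message built from the source node feature and the edge feature.
    float msg = (message_op == "MUL") ? x[src[i]] * e[i] : x[src[i]] + e[i];
    float& acc = out[dst[i]];
    if (reduce_op == "MIN")      acc = std::min(acc, msg);
    else if (reduce_op == "MAX") acc = std::max(acc, msg);
    else                         acc += msg;  // SUM and MEAN both accumulate first
    ++degree[dst[i]];
  }
  if (reduce_op == "MEAN") {
    for (int n = 0; n < num_nodes; ++n)
      if (degree[n] > 0) out[n] /= static_cast<float>(degree[n]);
  }
  // Note: nodes with no incoming edges keep the sentinel value for MIN/MAX;
  // a real kernel would typically reset them to zero.
  return out;
}
```

On GPU the per-edge loop turns into atomic updates per destination node, which lines up with the bullets above about adding fp16 atomic min/max support and tuning the thread block size.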

- 11 Aug 2022 (1 commit)

Committed by chenjian
* fix
* add control flag and input shapes for new dygraph
* fix file mode
* improve code coverage
* fix a bug in statistic
* fix according to review
* optimize performance
* fix

- 10 Aug 2022 (2 commits)

Committed by zhangxiaoci
* add macro control in enforce_xpu.h, test=kunlun
* minor bugfix
* minor bugfix

Committed by Leo Chen
* set cuda device before run
* add header file
* fix compile

- 09 Aug 2022 (1 commit)

Committed by z8hanghuan
* add phi empty, *test=kunlun
* support empty op in xpu, *test=kunlun
* support empty op in xpu, *test=kunlun

- 08 Aug 2022 (1 commit)

Committed by WangZhen
* Polish function code
* Rename function to engine
* Fix log msg and doc
* Rename Function to Engine and use the new Function class to wrap Engine
* Rename EngineInfo
* Adjust member variable order

- 05 Aug 2022 (3 commits)

Committed by YuanRisheng
* move mkldnn activation kernel
* fix compile bugs
* fix compile bugs
* deal with conflict
* fix compile bugs
* fix windows compile bugs
* mkldnn unittest fix
* change mutable to alloc
* fix unittest bugs
* modify code according to comments

Committed by joanna.wozna.intel

Committed by zhangxiaoci

- 04 Aug 2022 (3 commits)

Committed by Sławomir Siwek
* Add unit tests
* matmul_v2 + activation
* matmuls + elementwise_add
* matmul_v2 postops
* transform matmul to v2
* opcompat
* fix fusing matmul with multiple outs
* add shape constraints
* remove unused vars
* change pass order
* Unit tests to be debugged
  - fix
  - refactor
  - diagnostic
  - more diagnostic
  - fix
  - Fix number two
  - fix
  - fix
  - fix
  - alpha added
  - more fixes
  - compilation fix
  - removed diagnostic code
  - cosmetic fixes
* lint
* add alpha constraint
* merge matmul refactor
* trigger CI
* fix
* another fix
* code style
* add support for matmul+elementwise_add+activation
* code style
* fix bfloat16 bugs
* change append_binary to append_sum

Co-authored-by: Jacek Czaja <jacek.czaja@intel.com>
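The commit above fuses matmul_v2 with a following elementwise_add and activation, so the add and the activation run as post-ops of the matmul rather than as separate operators. As a rough illustration of what the fused pattern computes (plain loops with ReLU as the example activation, not oneDNN primitives; the function name and parameters are made up), see the sketch below.

```cpp
// Hypothetical reference of the fused matmul + elementwise_add + activation
// computation: out = relu(alpha * (A x B) + residual). Not the oneDNN kernel.
#include <algorithm>
#include <cstddef>
#include <vector>

// A is M x K, B is K x N, residual is M x N (already broadcast), out is M x N.
void FusedMatmulAddRelu(const std::vector<float>& A,
                        const std::vector<float>& B,
                        const std::vector<float>& residual,
                        std::vector<float>* out,
                        int M, int K, int N, float alpha = 1.0f) {
  out->assign(static_cast<std::size_t>(M) * N, 0.0f);
  for (int m = 0; m < M; ++m) {
    for (int n = 0; n < N; ++n) {
      float acc = 0.0f;
      for (int k = 0; k < K; ++k) acc += A[m * K + k] * B[k * N + n];
      // Post-ops applied in the same pass: scale, residual add, activation.
      float v = alpha * acc + residual[m * N + n];
      (*out)[m * N + n] = std::max(v, 0.0f);
    }
  }
}
```

The usual payoff of such a fusion is that the matmul result never has to be written out and re-read for the add and the activation; it is scaled, accumulated, and activated in one pass.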

Committed by dongfangshenzhu
* add merged_momentum *test=kunlun
* add merged_momentum *test=kunlun
* add fp16 to merged_momentum, *test=kunlun

Committed by 王明冬

- 03 Aug 2022 (2 commits)

Committed by z8hanghuan
* add sequence_unpad for xpu, *test=kunlun
* add sequence_unpad, *test=kunlun
* fix bug in testcase, should not be sequence_pad, *test=kunlun

Committed by Leo Chen

- 02 Aug 2022 (1 commit)

Committed by houj04
* [XPU] fp16 for layer_norm op. test=kunlun