- 25 Aug 2022 (2 commits)
  - Submitted by hong:
    * optimize conv algo speed
    * code polish
    * remove useless code
    * fix compile error
    * fix cpu compile error
    * do not use cudnn algo_t
    * add search cache max number
    * add groups and data format to conv args
    * fix cudnn_deterministic bug
    * fix autotune switch test bugs
    * fix conv cache bug
    * fix cache test bugs. test=develop
    * fix windows and mac compile errors
    * fix workspace search error
    * update cudnn cache
    * polish code
  - Submitted by haosicheng
- 24 Aug 2022 (1 commit)
  - Submitted by mengqingchun02:
    * support beam_search operator on xpu. test=kunlun
    * support fp16 of adam operator in xpu environment. test=kunlun
- 23 Aug 2022 (1 commit)
  - Submitted by ronnywang:
    * [CustomDevice] add profiler APIs
    * migrate CalculateEstOccupancy into cuda_tracer
    * update
    * add unit test
- 22 Aug 2022 (1 commit)
  - Submitted by joanna.wozna.intel:
    * Add int8 support for matmul+elementwise_add fuse
    * Corrections after review and ERNIE test fix
- 19 Aug 2022 (3 commits)
  - Submitted by houj04
  - Submitted by dongfangshenzhu:
    * add merged_momentum. test=kunlun
    * add fp16 to merged_momentum. test=kunlun
    * change dist_model.cc
    * add merged_momentum unittest and change momentum. test=kunlun
  - Submitted by mengqingchun02:
    * support beam_search operator on xpu. test=kunlun
    * fix beam_search operator bugs on xpu. test=kunlun
    * support beam_search_decode operator on xpu. test=kunlun
- 18 Aug 2022 (1 commit)
  - Submitted by zhangxiaoci:
    * change to async mode for xpu multi-card training in static graph mode
    * minor bugfix
    * move irrelevant change to another PR
    * fix stream issue
    * fix 'stream not meet with current context' error
    * fix branch divergence. test=kunlun
- 17 Aug 2022 (1 commit)
  - Submitted by ykkk2333:
    * xpu unittest grad compute supports more types. test=kunlun
    * add instance norm for xpu. test=kunlun
- 16 Aug 2022 (1 commit)
  - Submitted by houj04
- 15 Aug 2022 (2 commits)
  - Submitted by zhangyikun02
  - Submitted by houj04:
    * [XPU] add some collective ops. test=kunlun
    * use XPUOpTestWrapper. test=kunlun
    * skip kl1 for collective ops. fix typo: deivce -> device. test=kunlun
- 12 Aug 2022 (2 commits)
  - Submitted by Allen Guo
  - Submitted by Siming Dai:
    * add init file
    * add op definition and infermeta
    * add kernel definition funcs
    * add broadcast infer shape
    * add gpu forward kernel
    * delete SUB and DIV
    * add x_grad
    * add template
    * add e_grad for min and max
    * fix small bug
    * add e_grad for sum and mean
    * fix compile bugs
    * add sum forward unittest
    * fix broadcast error, add kernel sig, register e_grad, change unit test
    * fix grad
    * add temp grad fix
    * add min and max unittests, fix mul bug
    * add cpu forward sum and mean
    * add forward min and max, fix mean unittest
    * add cpu backward min and max
    * fix code style
    * add backward sum and mean
    * fix rocm ci
    * set unittest timeout
    * fix bug of x broadcast to e in gpu and cpu grad
    * rename BOOST_GET_CONST macro
    * move graph_send_e_recv to graph_send_ue_recv
    * move out_size to IntArray
    * add eager op test
    * fix max pool type bug, add unittest for api
    * revise api doc
    * add fp16 for atomic min and max, add unittest
    * add fp16 support for graph_send_recv, fix unittest fp16 bug
    * change OutSizeTensor to Out_size
    * move E to Y
    * add copyright, fix comment
    * review code
    * fix thread block size
    * change api attribute names: pool_type to reduce_op, compute_type to message_op
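
    A minimal usage sketch of the renamed attributes from the entry above (message_op, reduce_op); the paddle.geometric.send_ue_recv Python entry point and the sample tensors are illustrative assumptions, not taken from this commit list:

    ```python
    import paddle

    # Node features (3 nodes) and per-edge values (4 edges).
    x = paddle.to_tensor([[0.0, 2.0, 3.0], [1.0, 4.0, 5.0], [2.0, 6.0, 7.0]])
    y = paddle.to_tensor([1.0, 1.0, 1.0, 1.0])
    src_index = paddle.to_tensor([0, 1, 2, 0], dtype="int32")
    dst_index = paddle.to_tensor([1, 2, 1, 0], dtype="int32")

    # Gather x[src_index], combine it with y via message_op, then scatter-reduce
    # the per-edge messages into dst_index via reduce_op.
    out = paddle.geometric.send_ue_recv(
        x, y, src_index, dst_index, message_op="add", reduce_op="sum"
    )
    print(out.shape)  # [3, 3]
    ```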
- 11 Aug 2022 (1 commit)
  - Submitted by chenjian:
    * fix
    * add control flag and input shapes for new dygraph
    * fix file mode
    * improve code coverage
    * fix a bug in statistic
    * fix according to review
    * optimize performance
- 10 Aug 2022 (2 commits)
  - Submitted by zhangxiaoci:
    * add macro control in enforce_xpu.h. test=kunlun
    * minor bugfix
  - Submitted by Leo Chen:
    * set cuda device before run
    * add header file
    * fix compile
- 09 Aug 2022 (1 commit)
  - Submitted by z8hanghuan:
    * add phi empty. test=kunlun
    * support empty op in xpu. test=kunlun
- 08 Aug 2022 (1 commit)
  - Submitted by WangZhen:
    * Polish function code
    * Rename function to engine
    * Fix Log msg and doc
    * Rename Function to Engine and use the new Function class to wrap Engine
    * Rename EngineInfo
    * Adjust member variable order
- 05 Aug 2022 (3 commits)
  - Submitted by YuanRisheng:
    * move mkldnn activation kernel
    * fix compile bugs
    * deal with conflict
    * fix windows compile bugs
    * mkldnn unittest fix
    * change mutable to alloc
    * fix unittest bugs
    * modify code according to comments
  - Submitted by joanna.wozna.intel
  - Submitted by zhangxiaoci
- 04 Aug 2022 (3 commits)
  - Submitted by Sławomir Siwek:
    * Add unit tests
    * matmul_v2 + activation
    * matmuls + elementwise_add
    * matmul_v2 postops
    * transform matmul to v2
    * opcompat
    * fix fusing matmul with multiple outs
    * add shape constraints
    * remove unused vars
    * change pass order
    * unit tests to be debugged; fixes, refactoring, diagnostics added and later removed, alpha added, compilation fix, cosmetic fixes
    * lint
    * add alpha constraint
    * merge matmul refactor
    * trigger CI
    * fix, another fix
    * code style
    * add support for matmul+elementwise_add+activation
    * fix bfloat16 bugs
    * change append_binary to append_sum
    Co-authored-by: Jacek Czaja <jacek.czaja@intel.com>
  - Submitted by dongfangshenzhu:
    * add merged_momentum. test=kunlun
    * add fp16 to merged_momentum. test=kunlun
  - Submitted by 王明冬
- 03 Aug 2022 (2 commits)
  - Submitted by z8hanghuan:
    * add sequence_unpad for xpu. test=kunlun
    * fix bug in testcase, should not be sequence_pad. test=kunlun
  - Submitted by Leo Chen
- 02 Aug 2022 (2 commits)
  - Submitted by houj04:
    * [XPU] fp16 for layer_norm op. test=kunlun
  - Submitted by mengqingchun02:
    * support beam_search operator on xpu. test=kunlun
- 01 Aug 2022 (3 commits)
  - Submitted by Leo Chen:
    * remove cudaDeviceContext
    * remove more template
    * fix rocm compile
    * remove alias name CUDADeviceContext
    * fix compile
    * fix tests
    * revert changes
  - Submitted by danleifeng:
    Co-authored-by: seemingwang <zsasuke@qq.com>
    Co-authored-by: DesmonDay <908660116@qq.com>
    Co-authored-by: seemingwang <seemingwang@users.noreply.github.com>
    Co-authored-by: Thunderbrook <a754913769@163.com>
    Co-authored-by: xuewujiao <105861147+xuewujiao@users.noreply.github.com>
    Co-authored-by: root <root@yq01-sys-hic-k8s-v100-box-a225-0693.yq01.baidu.com>
    Co-authored-by: Thunderbrook <52529258+Thunderbrook@users.noreply.github.com>
    Co-authored-by: root <root@yq01-inf-hic-k8s-a100-ab2-0009.yq01.baidu.com>
    Co-authored-by: huwei02 <53012141+huwei02@users.noreply.github.com>
    Co-authored-by: yaoxuefeng <yaoxuefeng@baidu.com>
    Co-authored-by: lxsbupt <luoxsbupt@163.com>
    Co-authored-by: miaoli06 <106585574+miaoli06@users.noreply.github.com>
    Co-authored-by: root <root@yq01-inf-hic-k8s-a100-ab2-0008.yq01.baidu.com>
    Co-authored-by: chao9527 <33347532+chao9527@users.noreply.github.com>
    Co-authored-by: qingshui <qshuihu@gmail.com>
    Co-authored-by: yangjunchao <yangjunchao@baidu.com>
  - Submitted by zhouweiwei2014
- 29 Jul 2022 (7 commits)
  - Submitted by Leo Chen:
    * remove cudaDeviceContext
    * remove more template
    * fix rocm compile
  - Submitted by QingshuChen:
    * add some fp16 ops for the kunlun resnet50 model. test=kunlun
    * tmp. test=kunlun
  - Submitted by Aganlengzi:
    * add FLAGS_enable_api_kernel_fallback
    * deal with more cases
    * add unit test for coverage
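
    An illustrative sketch of toggling the flag added above; paddle.set_flags is an existing Paddle API, but the default value and the exact fallback behaviour noted in the comments are assumptions, not taken from this commit list:

    ```python
    import paddle

    # Assumption: when the flag is on, an API whose kernel is not registered for
    # the current backend may fall back to a kernel on another backend (e.g. CPU).
    paddle.set_flags({"FLAGS_enable_api_kernel_fallback": True})

    # Equivalent environment-variable form, set before the process starts:
    #   FLAGS_enable_api_kernel_fallback=true python train.py
    ```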
  - Submitted by Jacek Czaja:
    * unit tests to be debugged; fixes, refactoring, diagnostics added and later removed, alpha added, compilation fix, cosmetic fixes
    * lint
  - Submitted by Leo Chen:
    * init
    * move CUDAStream to phi
    * fix compilation
    * merge develop
    * add stream_owned_ member
    * split cuda_stream.h
    * fix cpu compile
    * fix constructor
    * fix bug
    * fix windows compile
    * fix inference test_levit
    * fix windows tests
  - Submitted by Allen Guo
  - Submitted by houj04