- 23 Sep 2021 (1 commit)

  Committed by Li Min

- 21 Sep 2021 (1 commit)

  Committed by Guoxia Wang

- 18 Sep 2021 (2 commits)

  Committed by Yiqun Liu

  Committed by Jacek Czaja
  * Reorder disabling caching
  * compilation fix
  * another compilation fix
  * another compilation fix
  * compilation fix
  * Fix
  * yet another compilation fix
  * surprisingly, another compilation fix
  * lint
  * fix after review
  * fix

- 15 Sep 2021 (1 commit)

  Committed by Yiqun Liu

- 14 Sep 2021 (2 commits)

- 13 Sep 2021 (2 commits)

- 08 Sep 2021 (1 commit)

  Committed by will-jl944
  multiply supports bool

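The one-liner above indicates that elementwise multiply now accepts bool inputs. A minimal sketch of what that usage might look like through the public `paddle.multiply` API (the assumption that this API is the affected entry point is mine; the log itself names no files):

```python
import paddle

# Hypothetical illustration: multiplying two bool tensors elementwise,
# which for booleans behaves like a logical AND.
x = paddle.to_tensor([True, False, True])
y = paddle.to_tensor([True, True, False])
out = paddle.multiply(x, y)
print(out.numpy())  # expected: [ True False False]
```
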
- 07 Sep 2021 (1 commit)

  Committed by niuliling123

- 06 Sep 2021 (1 commit)

  Committed by wawltor
  * Add the extra flag for some ops
  * fix the compile problem in matmul extra

- 03 Sep 2021 (2 commits)

- 02 Sep 2021 (1 commit)

  Committed by wangxinxin08
  add axis check for elementwise op when the dimension of x is equal to the dimension of the other tensor (#35340)

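For context, Paddle's elementwise ops broadcast a lower-rank operand against a higher-rank one, and the commit above reportedly adds a stricter check on the `axis` argument when the two inputs already have matching rank. A rough sketch of ordinary, valid broadcasting through the 2.x `paddle.add` API, which follows NumPy-style trailing-dimension rules; the shapes here are illustrative only:

```python
import paddle

# y matches the trailing dimensions of x, so it broadcasts
# across x's leading dimension.
x = paddle.ones([2, 3, 4])
y = paddle.full([3, 4], 2.0)
out = paddle.add(x, y)   # shape [2, 3, 4], every element equals 3.0
print(out.shape)
```
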
- 31 Aug 2021 (1 commit)

  Committed by Aganlengzi

- 27 Aug 2021 (1 commit)

  Committed by baoachun
  * add elementwise max grad op for npu
  * add elementwise max grad op for npu
  * add elementwise max grad op for npu
  * add elementwise max grad op for npu
  * add elementwise max grad op for npu

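The repeated bullets above all describe one change: a gradient kernel for elementwise max on NPU devices. A small sketch of the forward/backward pattern such a kernel serves, written against `paddle.maximum` on the default device (mapping the op to this API, and the device placement, are assumptions on my part):

```python
import paddle

# Forward elementwise max, then backprop; the gradient flows to the
# larger input at each position (tie-breaking is implementation-defined).
x = paddle.to_tensor([1.0, 5.0, 2.0], stop_gradient=False)
y = paddle.to_tensor([3.0, 4.0, 2.0], stop_gradient=False)
out = paddle.maximum(x, y)
out.sum().backward()
print(x.grad)
print(y.grad)
```
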
- 26 Aug 2021 (1 commit)

  Committed by Jacek Czaja
  [oneDNN] disable caching of oneDNN primitives in matmul v2, reduce grad, elementwise_add grad and expand_v2 (#35132)
  * grad caching of matmul_v1 disabled; compilation fixes
  * reduction removed
  * matmul_v2 caching disabled
  * draft of further changes
  * workaround for reduce grad
  * fixes to UT
  * fix to compilation
  * another fix
  * fix

- 25 Aug 2021 (2 commits)

  Committed by ronnywang

  Committed by taixiurong

- 22 Aug 2021 (1 commit)

  Committed by Zhang Zheng

- 16 Aug 2021 (1 commit)

  Committed by Jacek Czaja
  * Added softmax without caching
  * Binary is no longer manually cached
  * Activation oneDNN caching removed
  * Removed manual caching of activation
  * modified UT
  * fix
  * fix
  * fixes to building
  * fix
  * fix
  * fix to UT
  * Faulty UT workaround
  * approval workaround
  * Fixes after review
  * compilation fixes
  * more lint fixes
  * more fixes after review
  * fixes after another round of review
  * hopefully compilation fix; compilation fix

- 12 Aug 2021 (1 commit)

  Committed by Chen Weihang
  This reverts commit 0a5c99e8.

- 11 Aug 2021 (2 commits)

  Committed by Jacek Czaja
  * Added softmax without caching
  * Binary is no longer manually cached
  * Activation oneDNN caching removed
  * Removed manual caching of activation
  * modified UT
  * fix
  * fix
  * fixes to building
  * fix
  * fix
  * fix to UT
  * Faulty UT workaround
  * approval workaround
  * Fixes after review
  * compilation fixes
  * more lint fixes
  * more fixes after review
  * fixes after another round of review

  Committed by andyjpaddle

- 09 Aug 2021 (1 commit)

  Committed by ronnywang
  * add broadcast support for elementwise_add
  * add broadcast support for elementwise_add
  * add more tests
  * remove the redundant code
  * update
  * fix place error in unittest
  * remove skip.If

- 05 Aug 2021 (1 commit)

  Committed by limingshu

- 07 Jul 2021 (1 commit)

  Committed by taixiurong

- 05 Jul 2021 (2 commits)

- 24 Jun 2021 (1 commit)

  Committed by Jacek Czaja
  * fix to #33282
  * Increased threshold for elementwise_mul_bf16 grad
  * disabled faulty UT
  * fix to approval

- 23 Jun 2021 (1 commit)

  Committed by limingshu

- 12 Jun 2021 (1 commit)

  Committed by limingshu

- 04 Jun 2021 (1 commit)

  Committed by limingshu

- 02 Jun 2021 (2 commits)

- 26 May 2021 (1 commit)

  Committed by Leo Chen
  * refine ~npuOpRunner
  * implement destructor and forbid copy
  * use reference to avoid copy
  * use const reference
  * relax adam precision
  * fix top_k

- 25 May 2021 (1 commit)

  Committed by chentianyu03
  * modify complex template for elementwise ops
  * modify mul, div grad struct
  * add complex template for CudaShuffleDownSync and CudaShuffleXorSync funcs, and fix the bug when deleting CUDA < 9000 code
  * fix shuffle func args bug
  * fix shuffle func args bug
  * fix shuffle func args bug

- 24 May 2021 (1 commit)

  Committed by limingshu

- 20 May 2021 (1 commit)

  Committed by TTerror
  * fix gather op and add logsumexp op on kunlun
  * update xpu dependence
  * update tests and fix elementwise_add