- 27 Oct 2021, 1 commit

Submitted by baoachun
* fix matmul dim error
* fix wrong dim check in matmul
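The fix above concerns the dimension check in matmul. For context, a minimal sketch of what such a check has to verify (helper name and broadcasting rule are illustrative, not Paddle's actual code): the inner dimensions of the two operands must match, and any leading batch dimensions must be broadcast-compatible.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical check for X[..., M, K] x Y[..., K, N]: the inner dims must be
// equal and the remaining (batch) dims, aligned from the back, must be equal
// or 1 (broadcastable). Missing leading dims are treated as 1.
bool MatmulDimsCompatible(const std::vector<int64_t>& x_dims,
                          const std::vector<int64_t>& y_dims) {
  if (x_dims.size() < 2 || y_dims.size() < 2) return false;
  if (x_dims[x_dims.size() - 1] != y_dims[y_dims.size() - 2]) return false;
  auto xi = x_dims.rbegin() + 2;
  auto yi = y_dims.rbegin() + 2;
  while (xi != x_dims.rend() && yi != y_dims.rend()) {
    if (*xi != *yi && *xi != 1 && *yi != 1) return false;
    ++xi;
    ++yi;
  }
  return true;
}
```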

- 23 Oct 2021, 1 commit

Submitted by baoachun

- 21 Oct 2021, 1 commit

Submitted by jakpiase
* added base changes for matmul_v2+trans+resh fuse pass
* added full matmul_v2+transpose+reshape pass
* removed a file added by mistake
* added reviewers suggestions
* Changed ops type in checking compatibility version
* Deleted one statement
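The pass folds a transpose2 + reshape2 that follow matmul_v2 into the matmul op itself, so the fused op has to declare the shape that used to come out of the reshape. A rough sketch of that shape bookkeeping, with an invented helper name and the usual -1 reshape inference (not the pass's real code):

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <numeric>
#include <vector>

// Illustrative only: compute the final output shape after
// matmul_v2 -> transpose2(axis) -> reshape2(shape), so the fused op
// can declare it directly.
std::vector<int64_t> FusedOutputShape(const std::vector<int64_t>& matmul_out,
                                      const std::vector<int>& transpose_axis,
                                      const std::vector<int64_t>& reshape_shape) {
  // 1. Apply the transpose permutation.
  assert(transpose_axis.size() == matmul_out.size());
  std::vector<int64_t> transposed(matmul_out.size());
  for (size_t i = 0; i < transpose_axis.size(); ++i)
    transposed[i] = matmul_out[transpose_axis[i]];

  // 2. Apply the reshape; a single -1 entry is inferred from the
  //    total number of elements.
  int64_t total = std::accumulate(transposed.begin(), transposed.end(),
                                  int64_t{1}, std::multiplies<int64_t>());
  std::vector<int64_t> out = reshape_shape;
  int64_t known = 1;
  int infer_idx = -1;
  for (size_t i = 0; i < out.size(); ++i) {
    if (out[i] == -1) infer_idx = static_cast<int>(i);
    else known *= out[i];
  }
  if (infer_idx >= 0) out[infer_idx] = total / known;
  return out;
}
```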

- 18 Oct 2021, 1 commit

Submitted by jakpiase
* added softplus
* refactored softplus op
* deleted unnecessary file
* added missing file
* added formatting
* disabled tests if GPU is used
* added reviewer suggestion
* unified softplus kernel
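Softplus with its default parameters is ln(1 + e^x), which is what oneDNN's eltwise soft_relu algorithm computes, so a oneDNN softplus kernel can be expressed as an eltwise primitive. A minimal standalone sketch against the oneDNN 2.x C++ API (engine, shapes and data are arbitrary here; this is not Paddle's kernel code):

```cpp
#include <vector>
#include "dnnl.hpp"

int main() {
  using namespace dnnl;
  engine eng(engine::kind::cpu, 0);
  stream strm(eng);

  // A small 1-D tensor, just to exercise the primitive.
  std::vector<float> src_data{-2.f, -1.f, 0.f, 1.f, 2.f};
  std::vector<float> dst_data(src_data.size());

  memory::desc md({static_cast<memory::dim>(src_data.size())},
                  memory::data_type::f32, memory::format_tag::a);
  memory src_mem(md, eng, src_data.data());
  memory dst_mem(md, eng, dst_data.data());

  // soft_relu computes ln(1 + e^x), i.e. softplus with beta = 1.
  eltwise_forward::desc d(prop_kind::forward_inference,
                          algorithm::eltwise_soft_relu, md, 0.f, 0.f);
  eltwise_forward::primitive_desc pd(d, eng);
  eltwise_forward(pd).execute(strm, {{DNNL_ARG_SRC, src_mem},
                                     {DNNL_ARG_DST, dst_mem}});
  strm.wait();
  return 0;
}
```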

- 14 Oct 2021, 1 commit

- 13 Oct 2021, 1 commit

Submitted by Jacek Czaja
* Lint
* Merge with develop
* lint

- 11 Oct 2021, 1 commit

Submitted by jakpiase

- 08 Oct 2021, 2 commits

- 07 Oct 2021, 1 commit

Submitted by Adam Osewski
* Remove unused header.
* Use ConvMKLDNNHandlerT for conv2d INT8.
* Use absolute module path to import.

- 05 Oct 2021, 1 commit

Submitted by jakpiase
* tmp
* added concat BF16/FP32 BWD oneDNN kernel
* minor change
* minor change
* fix for CI
* added formatting
* Reverted deleting static keyword
* added reviewers suggestions
* reverted deleting concat bf16 test file
* fixed concat tests

- 27 Sep 2021, 1 commit

Submitted by jakpiase
* refactored reshape multiop kernel and added flatten1/2 kernels
* added formatting for flatten tests
* CI fix
* disabled reshape_kernel ops after successful CI run
* minor fix
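flatten/flatten2 only change shape metadata: the data stays contiguous while the dims collapse into a 2-D view around an axis. The shape rule, as a tiny illustrative helper (not Paddle's code):

```cpp
#include <cstdint>
#include <vector>

// Output shape of flatten with a given axis:
// [prod(dims[0..axis)), prod(dims[axis..n))].
std::vector<int64_t> FlattenShape(const std::vector<int64_t>& dims, int axis) {
  int64_t outer = 1, inner = 1;
  for (int i = 0; i < axis; ++i) outer *= dims[i];
  for (int i = axis; i < static_cast<int>(dims.size()); ++i) inner *= dims[i];
  return {outer, inner};
}
// e.g. FlattenShape({2, 3, 4, 5}, 2) == {6, 20}
```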

- 24 Sep 2021, 1 commit

Submitted by Jacek Czaja
* candidate fix
* More fixes to #34554
* another inconsistent fix to key
* Removed unneeded line
* matching the cache behaviour to other ops

- 21 Sep 2021, 1 commit

Submitted by Adam Osewski
* Create stateful OneDNNAXPYHandler object. This makes it possible to call it multiple times without recreating the oneDNN primitives every time.
* Prepare SGDOpKernel to reuse its implementation from OneDNN kernel.
* OneDNN SGD kernel.
* Update call to use new OneDNNAXPYHandler object api.
* Setup seed in proper place.
* Enable OneDNN kernel only for single case.
* For dense param and sparse grad.
* Small refactor.
* Enable oneDNN by op attr or by cmd line flag.
* Use int64_t type for number of elements.
* Support dense param and grad from OneDNN kernel.
* Enable SGD OneDNN kernel when use MP BF16 optimizer.
* Force non-copyable/movable OneDNNAXPYHandler.
* Reuse OneDNNAXPYHandler for sparse tensors in SUM op.
* Fix SFINAE rules.
* Remove recording event inside AXPY.
* Get rid of internal primitive caching.
* Stop using PP cache mechanism to store mem and primitive obj.
* Handler obj stores and reuses needed desc & prim.
* Do not derive from MKLDNNHandlerT.
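The central idea above is a handler that builds its oneDNN objects once and can then be invoked repeatedly with different data pointers. A rough sketch of that pattern (the class name echoes the commit but the internals are illustrative; y = a*x + y is expressed here through oneDNN's sum primitive with scales, which is one possible formulation, assuming the oneDNN 2.x API):

```cpp
#include "dnnl.hpp"

// Illustrative stateful handler: primitives and memory objects are built once
// in the constructor; per call only the data handles are swapped.
class AxpyHandlerSketch {
 public:
  AxpyHandlerSketch(const dnnl::engine& eng, int64_t n, float alpha)
      : eng_(eng), strm_(eng_) {
    dnnl::memory::desc md({n}, dnnl::memory::data_type::f32,
                          dnnl::memory::format_tag::a);
    // dst = 1.0 * y + alpha * x; with dst aliasing y this is y += alpha * x.
    dnnl::sum::primitive_desc pd(md, {1.f, alpha}, {md, md}, eng_);
    prim_ = dnnl::sum(pd);
    // No buffers attached yet; the handles are set on every call.
    y_mem_ = dnnl::memory(md, eng_, DNNL_MEMORY_NONE);
    x_mem_ = dnnl::memory(md, eng_, DNNL_MEMORY_NONE);
  }

  // Callable many times without recreating any oneDNN objects.
  void operator()(const float* x, float* y) {
    x_mem_.set_data_handle(const_cast<float*>(x));
    y_mem_.set_data_handle(y);
    prim_.execute(strm_, {{DNNL_ARG_MULTIPLE_SRC + 0, y_mem_},
                          {DNNL_ARG_MULTIPLE_SRC + 1, x_mem_},
                          {DNNL_ARG_DST, y_mem_}});
    strm_.wait();
  }

 private:
  dnnl::engine eng_;
  dnnl::stream strm_;
  dnnl::sum prim_;
  dnnl::memory x_mem_, y_mem_;
};
```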

- 18 Sep 2021, 1 commit

Submitted by Jacek Czaja
* Reorder disabling caching
* compilation fix
* another compilation fix
* another compilation fix
* compilation fix
* Fix
* yet another compilation fix
* surprisingly another compilation fix
* lint
* fix after review
* fix

- 15 Sep 2021, 1 commit

Submitted by jakpiase
* fixed slice error
* added handling of StartsTensor+List and EndsTensor+List
* fix for ppyolo model
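The slice fix is about resolving `starts`/`ends` that may come either as attribute lists or as tensors. Regardless of where they come from, they end up normalized against the input shape; a hypothetical helper showing that normalization (negative indices wrap, values are clamped):

```cpp
#include <algorithm>
#include <cstdint>

// Normalize one slice bound against a dimension of size `dim`:
// negative values count from the end, and the result is clamped to [0, dim].
int64_t NormalizeSliceIndex(int64_t idx, int64_t dim) {
  if (idx < 0) idx += dim;
  return std::max<int64_t>(0, std::min<int64_t>(idx, dim));
}

// e.g. for dim == 10: starts = -3 -> 7, ends = 100 -> 10.
```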

- 13 Sep 2021, 1 commit

Submitted by jakpiase
* implemented clip op bf16/fp32
* added skipping if not cpu or bf16
* CI rerun after bf16 package change
* added parentheses to ensure formatting
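For reference, the math behind the clip kernel is tiny; a plain C++ reference version of forward and backward (oneDNN also exposes clip as an eltwise algorithm, but this is just the definition, not the kernel's code):

```cpp
#include <algorithm>

// Forward: y = min(max(x, low), high).
float ClipForward(float x, float low, float high) {
  return std::min(std::max(x, low), high);
}

// Backward: the gradient passes through only where x was inside the range.
float ClipBackward(float x, float dy, float low, float high) {
  return (x >= low && x <= high) ? dy : 0.f;
}
```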

- 10 Sep 2021, 1 commit

Submitted by ceci3
* fix bn/in/squeeze/syncbn extra
* update bn
* update
* update

- 07 Sep 2021, 2 commits

Submitted by Jacek Czaja
* refactoring progressing - Fix - compilation fix - another compilation fix - refactoring
* fix
* compilation fix
* compilation fix
* missing set_format
* compilation fix
* reverted setting memory format
* Brought back format
* Fix
* fixes after review
* CI rerun
* CI rerun

Submitted by jakpiase
* fix for reshape2
* added reviewers suggestions

- 01 Sep 2021, 1 commit

Submitted by jakpiase
* added slice FWD FP32
* added tests for slice FWD FP32
* added slice bwd
* added bf16 tests
* CI fix
* CI fix
* added reason to skip_if
* minor change
* temporary fix for failing test
* temporary fix
* changes after review
* CI rerun

- 26 Aug 2021, 1 commit

Submitted by Jacek Czaja
[oneDNN] disable caching oneDNN primitives in matmul v2, Reduce grad and elementwise_add grad, expand_v2 (#35132)

* grad caching disabled for matmul_v1 - compilation fix - compilation fix
* reduction removed
* Matmul v2 disabled caching
* Draft of further changes
* workaround for reduce grad
* fixes to UT
* fix to compilation
* another fix
* fix
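With manual caching disabled, the primitive is simply rebuilt from descriptors on every kernel invocation, leaving reuse to oneDNN's own internal primitive cache. A stripped-down sketch of that per-call path for a plain 2-D matmul, assuming the oneDNN 2.x API (shapes and the function name are made up for the example):

```cpp
#include "dnnl.hpp"

// Create and run a matmul primitive on every call instead of storing it in a
// per-op cache; primitive creation cost is left to oneDNN's internal cache.
void RunMatmulOnce(dnnl::engine& eng, dnnl::stream& strm,
                   const float* a, const float* b, float* c,
                   int64_t M, int64_t K, int64_t N) {
  using namespace dnnl;
  memory::desc a_md({M, K}, memory::data_type::f32, memory::format_tag::ab);
  memory::desc b_md({K, N}, memory::data_type::f32, memory::format_tag::ab);
  memory::desc c_md({M, N}, memory::data_type::f32, memory::format_tag::ab);

  memory a_mem(a_md, eng, const_cast<float*>(a));
  memory b_mem(b_md, eng, const_cast<float*>(b));
  memory c_mem(c_md, eng, c);

  matmul::desc d(a_md, b_md, c_md);
  matmul::primitive_desc pd(d, eng);
  matmul(pd).execute(strm, {{DNNL_ARG_SRC, a_mem},
                            {DNNL_ARG_WEIGHTS, b_mem},
                            {DNNL_ARG_DST, c_mem}});
  strm.wait();
}
```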

- 25 Aug 2021, 1 commit

Submitted by jakpiase
* temporary change
* fix for expand_v2
* changes after review, activated ppyolov inference test

- 24 Aug 2021, 1 commit

Submitted by Jacek Czaja
* concat refactoring draft
* compilation fixes
* yet another compilation fix
* fix
* compilation fix
* fixes to compilation
* another compilation fix
* fix
* Added overloaded AcquirePrimitiveDesc for concat
* fix
* reserve introduced
* UT fixes
* test concat int8 improved
* fixes
* fix to crash
* lint fixes
* fixes after review
* some other fixes from review
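The overloaded AcquirePrimitiveDesc mentioned above wraps creation of oneDNN's concat primitive descriptor; at the raw API level that boils down to roughly the following (oneDNN 2.x, with invented shapes; not Paddle's handler code):

```cpp
#include <vector>
#include "dnnl.hpp"

int main() {
  using namespace dnnl;
  engine eng(engine::kind::cpu, 0);
  stream strm(eng);

  // Two 2x3 inputs concatenated along axis 0 -> 4x3 output.
  std::vector<float> in0(6, 1.f), in1(6, 2.f), out(12, 0.f);
  memory::desc src_md({2, 3}, memory::data_type::f32, memory::format_tag::ab);

  std::vector<memory::desc> src_mds{src_md, src_md};
  concat::primitive_desc pd(/*concat_dimension=*/0, src_mds, eng);

  memory src0(src_md, eng, in0.data());
  memory src1(src_md, eng, in1.data());
  memory dst(pd.dst_desc(), eng, out.data());

  concat(pd).execute(strm, {{DNNL_ARG_MULTIPLE_SRC + 0, src0},
                            {DNNL_ARG_MULTIPLE_SRC + 1, src1},
                            {DNNL_ARG_DST, dst}});
  strm.wait();
  return 0;
}
```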

- 23 Aug 2021, 1 commit

Submitted by Jacek Czaja
* disabled interpolate onednn
* compilation fix
* draft of batch_norm cache disabling
* fixes to UT

- 17 Aug 2021, 2 commits

Submitted by chentianyu03
* copy boost optional.hpp to paddle
* copy boost optional.hpp to paddle
* move directions
* del fluid/utils
* modify .hpp to .h
* move directions
* modify to paddle::optional
* add modification description
* format code style for the files in paddle/utils
* format code style

Submitted by Jacek Czaja
* disabled caching of layer norm - fix in compilation - compilation fix - transpose caching disabled - compilation fix - more compilation fixes - sum caching disabled - compilation fix
* LRN with disabled cache
* lint fixes

- 16 Aug 2021, 1 commit

Submitted by Jacek Czaja
* Added softmax without caching
* Binary is no longer manually cached
* Activation onednn caching removed
* Removed manual caching of activation
* modified UT
* fix
* fix
* fixes to building
* fix
* fix
* fix to UT
* Faulty UT workaround
* approval workaround
* Fixes after review
* compilation fixes
* more lint fixes
* more fixes after review
* fixes after another round of review
* hopefully compilation fix - compilation fix

- 12 Aug 2021, 1 commit

Submitted by Chen Weihang
This reverts commit 0a5c99e8.

- 11 Aug 2021, 1 commit

Submitted by Jacek Czaja
* Added softmax without caching
* Binary is no longer manually cached
* Activation onednn caching removed
* Removed manual caching of activation
* modified UT
* fix
* fix
* fixes to building
* fix
* fix
* fix to UT
* Faulty UT workaround
* approval workaround
* Fixes after review
* compilation fixes
* more lint fixes
* more fixes after review
* fixes after another round of review

- 30 Jul 2021, 3 commits

Submitted by jakpiase
* test version of matmul_v2
* added matmul_v2 grad kernel
* minor changes
* minor changes
* minor change for CI approval
* CI fix
* CI fix
* trigger CI
* changes after review, not working yet
* moved ops to anonymous namespaces
* changes after review
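The useful facts for the grad kernel are the standard matmul gradients: for Out = X·Y, dX = dOut·Yᵀ and dY = Xᵀ·dOut (with the transposes adjusted when trans_x/trans_y are set). A naive 2-D reference, just to pin down the shapes:

```cpp
#include <cstdint>
#include <vector>

// Naive reference for the 2-D case of matmul gradients:
// Out = X (MxK) * Y (KxN); given dOut (MxN), compute dX = dOut * Y^T
// and dY = X^T * dOut.
void MatmulGradRef(const std::vector<float>& X, const std::vector<float>& Y,
                   const std::vector<float>& dOut,
                   std::vector<float>* dX, std::vector<float>* dY,
                   int64_t M, int64_t K, int64_t N) {
  dX->assign(M * K, 0.f);
  dY->assign(K * N, 0.f);
  for (int64_t m = 0; m < M; ++m)
    for (int64_t k = 0; k < K; ++k)
      for (int64_t n = 0; n < N; ++n) {
        (*dX)[m * K + k] += dOut[m * N + n] * Y[k * N + n];  // dOut * Y^T
        (*dY)[k * N + n] += X[m * K + k] * dOut[m * N + n];  // X^T * dOut
      }
}
```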

Submitted by jakpiase
* test version of matmul_v2
* added matmul_v2 grad kernel
* minor changes
* minor changes
* minor change for CI approval
* CI fix
* CI fix
* added squeeze and squeeze2 kernels
* CI fix
* CI fix
* CI fix
* disabled tests when compiled with cuda
* added setting format_tag by strides
* added sigmoid BF16 FWD/BWD and gelu BF16 BWD
* changes after review
* Revert "added sigmoid BF16 FWD/BWD and gelu BF16 BWD" This reverts commit 6e3f76720b545abfcff9f6052b46b73a1e745cae.
* Revert "Merge branch 'matmul_v2_grad' into squeeze2_op" This reverts commit 06fcf67843a4a7884eccdf67a02a03575e1d4cb8, reversing changes made to 6e3f76720b545abfcff9f6052b46b73a1e745cae.
* minor change
* added reshape1/2 kernels
* moved some functions into private block
* CI fix
* CI fix
* CI fix

Submitted by jakpiase
* added expand_v2 bf16/fp32 kernel
* minor change
* CI fix
* added missing test file
* added formatting
* reduced binary size
* CI fix

- 22 Jul 2021, 1 commit

Submitted by jakpiase
* added sigmoid BF16 FWD/BWD and gelu BF16 BWD
* added newline at EOF
* switched from lambdas to local functions
* changed function names

- 19 Jul 2021, 1 commit

Submitted by joanna.wozna.intel

- 07 Jul 2021, 1 commit

Submitted by jakpiase
* added prelu bf16/fp32 fwd/bwd kernel

- 30 Jun 2021, 1 commit

Submitted by jakpiase
* added matmul_v2 bf16/fp32 FWD kernel
* added formatting
* removed some tests due to timeout in CI
* refactored tests
* merged tests classes into one file
* minor change
* removed test guard for CUDA
* remove skipIf
* changes after review
* formatted one file
* minor change
* added skipping UT in CUDA place

- 24 Jun 2021, 1 commit

Submitted by Jacek Czaja
* fix to #33282
* Increased threshold for elementwise_mul_bf16 grad
* disabled faulty UT
* fix to approval
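The threshold bump is a consequence of bfloat16 precision: bf16 keeps 8 significant bits versus fp32's 24, so a bf16 elementwise_mul (and its grad) can deviate from the fp32 reference by roughly one part in 2^8, and test tolerances have to allow for it. A small sketch that makes the error size concrete, using simple truncation for the fp32-to-bf16 conversion (real conversions usually round to nearest even, which roughly halves the error):

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

// Convert fp32 to bf16 by keeping only the upper 16 bits of the encoding.
float ToBf16(float x) {
  uint32_t bits;
  std::memcpy(&bits, &x, sizeof(bits));
  bits &= 0xFFFF0000u;  // drop the low 16 mantissa bits
  float y;
  std::memcpy(&y, &bits, sizeof(y));
  return y;
}

int main() {
  const float a = 1.2345678f, b = 3.1415927f;
  const float exact = a * b;
  const float approx = ToBf16(ToBf16(a) * ToBf16(b));
  // Relative error is on the order of 2^-8 (~0.4%), hence the looser threshold.
  std::printf("exact=%.7f bf16=%.7f rel_err=%.2e\n", exact, approx,
              (exact - approx) / exact);
  return 0;
}
```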

- 23 Jun 2021, 1 commit

Submitted by jakpiase
* base changes for split op
* 90% of split functionality added
* full fp32 functionality
* added bf16 test
* added submemory caching
* added bf test to static mode whitelist
* minor change
* enabled split op for inference
* minor fix
* minor fix
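A oneDNN split can be expressed as one reorder per output chunk, each reading through a sub-memory view of the source; the "submemory caching" above refers to reusing those views and reorder primitives between runs. A compact standalone sketch with the oneDNN 2.x API, splitting a 4x6 tensor into two 4x3 halves along axis 1 (not Paddle's kernel code):

```cpp
#include <vector>
#include "dnnl.hpp"

int main() {
  using namespace dnnl;
  engine eng(engine::kind::cpu, 0);
  stream strm(eng);

  std::vector<float> src(4 * 6);
  for (size_t i = 0; i < src.size(); ++i) src[i] = static_cast<float>(i);
  std::vector<float> out0(4 * 3), out1(4 * 3);

  memory::desc src_md({4, 6}, memory::data_type::f32, memory::format_tag::ab);
  memory::desc dst_md({4, 3}, memory::data_type::f32, memory::format_tag::ab);

  // One sub-memory view per output chunk: same sizes, different offsets.
  memory::desc view0_md = src_md.submemory_desc({4, 3}, {0, 0});
  memory::desc view1_md = src_md.submemory_desc({4, 3}, {0, 3});

  memory view0(view0_md, eng, src.data());
  memory view1(view1_md, eng, src.data());
  memory dst0(dst_md, eng, out0.data());
  memory dst1(dst_md, eng, out1.data());

  // Each reorder copies its slice of src into a dense output buffer.
  reorder(view0, dst0).execute(strm, view0, dst0);
  reorder(view1, dst1).execute(strm, view1, dst1);
  strm.wait();
  return 0;
}
```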

- 21 Jun 2021, 1 commit

Submitted by lidanqing
* Add oneDNN AXPY handler.
* Add fallback for small tensors.
* Fix ifdefs
* Remove unnecessary namespace prefixes and add missing headers.
* Guard handler_axpy with proper ifdefs.
* Compilation of this function is possible only when Paddle is not built with CUDA nor HIP.
* Move AXPY handler code to separate files.
* Use oneDNN AXPY handler in SGD op.
* Use axpy handler only when Paddle is built with oneDNN.
* Add test for SUM BF16 with big rows.
* Fix SFINAE rules for elementwise_add_to.
* Add test case for SGD with big rows.
* update
* update

Co-authored-by: Adam Osewski <adam.osewski@intel.com>
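The "fallback for small tensors" is the usual dispatch: below some element count, primitive setup overhead outweighs the gain, so a plain loop is used instead. A sketch of such a dispatcher (the cutoff value and function name are illustrative, not taken from the PR):

```cpp
#include <cstdint>

// Hypothetical dispatch: tiny tensors take the scalar loop; larger ones would
// go through a reusable oneDNN-based handler such as the sketch shown earlier.
void Axpy(int64_t n, float alpha, const float* x, float* y) {
  constexpr int64_t kOneDnnMinElems = 512;  // illustrative cutoff, not Paddle's
  if (n < kOneDnnMinElems) {
    for (int64_t i = 0; i < n; ++i) y[i] += alpha * x[i];
    return;
  }
  // ... otherwise build (or fetch) the oneDNN handler and execute it ...
}
```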