- 01 Mar 2022, 2 commits

Committed by joanna.wozna.intel
* Add mobilenetv3_large performance test
* Disable the BF16 test if the device does not support BF16 computations
* Change test timeout
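Skipping the BF16 test on unsupported hardware is usually done with a capability check on the test class; a minimal sketch, assuming a helper such as `paddle.fluid.core.supports_bfloat16()` is available (the helper name and test body are assumptions, not taken from this commit):

```python
import unittest

import paddle
from paddle.fluid import core


# Hypothetical guard: skip the whole case when the CPU lacks BF16 support.
# `core.supports_bfloat16()` is assumed here; the actual check may differ.
@unittest.skipIf(not core.supports_bfloat16(),
                 "place does not support BF16 evaluation")
class TestMobileNetV3LargeBF16(unittest.TestCase):
    def test_runs_in_bf16_capable_env(self):
        # The real test would run mobilenetv3_large inference in BF16;
        # this placeholder only shows where that code would go.
        x = paddle.ones([2, 3])
        self.assertEqual(x.shape, [2, 3])


if __name__ == "__main__":
    unittest.main()
```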
Committed by wenbin
* remove
* pass
* more pass
- 19 Feb 2022, 1 commit

Committed by sneaxiy
* add DistributedFusedLamb op
* polish code
* fix compile error
* compatible with pten changes
* fix rocm compile error
* improve coverage
* update upstream/develop
* fix cast_with_ptr.h
* add FLAGS_distributed_lamb_divide_nranks_when_allreduce=1
* fix clip before allreduce
* add use_master_param_norm
* code polish
* fix bug
* fix ROCM ci
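LAMB scales each layer's update by a trust ratio built from the parameter norm and the update norm; `use_master_param_norm` presumably controls whether that parameter norm is taken from the FP32 master weights rather than the FP16 copies. A minimal NumPy sketch of the trust-ratio step under that assumption (names are illustrative, not the DistributedFusedLamb op's API):

```python
import numpy as np


def lamb_trust_ratio(master_param_fp32, adam_update, weight_decay=0.01,
                     use_master_param_norm=True, param_fp16=None):
    """Illustrative LAMB trust ratio; not the fused kernel itself."""
    # Update direction = Adam step + decoupled weight decay.
    update = adam_update + weight_decay * master_param_fp32
    # Either the FP32 master weights or the FP16 copy supplies the norm.
    use_master = use_master_param_norm or param_fp16 is None
    p = master_param_fp32 if use_master else param_fp16.astype(np.float32)
    p_norm, u_norm = np.linalg.norm(p), np.linalg.norm(update)
    return p_norm / u_norm if p_norm > 0 and u_norm > 0 else 1.0


ratio = lamb_trust_ratio(np.random.randn(256).astype(np.float32),
                         1e-3 * np.random.randn(256).astype(np.float32))
```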
- 14 Feb 2022, 1 commit

Committed by Sławomir Siwek
* mish unit tests
* code format
* remove unused imports
* code format
* remove hard-coded shape values
* remove timeouts
* remove timeouts v2
* restore timeouts
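For reference, the mish activation exercised by these tests is x · tanh(softplus(x)); a one-line NumPy check (illustrative only, not the oneDNN kernel):

```python
import numpy as np


def mish(x):
    # mish(x) = x * tanh(softplus(x)) = x * tanh(ln(1 + e^x))
    return x * np.tanh(np.log1p(np.exp(x)))
```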
- 09 Feb 2022, 1 commit

Committed by Wangzheee
* rebuild matmul pass: trt and gpu_cpu
- 07 Feb 2022, 1 commit

Committed by arlesniak
* amp list updated
* tests updated
* gray list updated
* amp list updated
* test updated
- 27 Jan 2022, 1 commit

Committed by joanna.wozna.intel
* Update pass in quant2_int8_mkldnn_pass
* Back to the previous scale_matmul order
* Change place of cpu_quantize_placement_pass
- 21 Jan 2022, 1 commit

Committed by ceci3
- 13 Jan 2022, 1 commit

Committed by jakpiase
* base changes for mul reimplementation
* empty commit
* tmp save
* full implementation of mul bf16/fp32 fwd bwd
* CI fix
* CI rerun
* changed unity build cmake to avoid gpu issues
* removed mul mkldnn from unity build
* added skipping tests if not cpu_bf16
* CI fix
* CI fix
* CI fix
- 12 Jan 2022, 1 commit

Committed by Sylwester Fraczek
* fix conv act int8 scale
* add unit test for conv+hard_swish
- 06 Jan 2022, 1 commit

Committed by minghaoBD
- 05 Jan 2022, 2 commits

Committed by Jiaqi Liu
* make post training quant API support dataloader

Committed by joanna.wozna.intel
* Quantize nearest_interp and nearest_interp_v2
* Check if avx_core is supported
* Add depthwise_conv2d to supported quantization list
- 28 Dec 2021, 1 commit

Committed by Li Min
* Fix scatter_op fp16 perf problem.
* Add scatter into black list.
* Add scatter into black list for dygraph (see the sketch below).
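Putting scatter on the AMP black list keeps it in FP32 under auto mixed precision. A minimal sketch of forcing the same behavior per-scope with `paddle.amp.auto_cast`'s `custom_black_list` argument (the list changed by this commit lives in Paddle's built-in AMP op lists; this is only an illustration of the effect):

```python
import paddle

x = paddle.rand([32, 128])
index = paddle.randint(0, 32, [16])      # int64 row indices
updates = paddle.rand([16, 128])

# Ops on the black list stay in FP32 even inside an auto_cast region.
with paddle.amp.auto_cast(custom_black_list={'scatter'}):
    y = paddle.scatter(x, index, updates)
```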
- 22 Dec 2021, 1 commit

Committed by Guanghua Yu
- 20 Dec 2021, 1 commit

Committed by sneaxiy
* support FP16 for more ops
* add amp list tests
* refine reduce_mean_grad
* fix OP benchmark ci
* fix fp16 reduce_mean
* update ut, but still have some problems
* remove mean/reduce_mean fp16 kernel
- 17 Dec 2021, 1 commit

Committed by sneaxiy
* support multi precision update for LAMB
* hide some api
* fix ci uts
* fix lamb output of dygraph
* remove some changes belonging to another PR
* try to fix Py3 CI compile error
* fix test_imperative_optimizer, add lars ut, add layer_norm ut
* fix ut, fix format
* fix ut
* fix windows ci
- 14 Dec 2021, 3 commits

Committed by Sylwester Fraczek
* add map_matmul passes to quant2_int8_mkldnn_pass
* fix fc+act fuse (activation scale)
* ci fix, c++17 structured bindings not available
* fix ci static check

Committed by Guanghua Yu

Committed by Sylwester Fraczek
* reshape+transpose+matmul_v2
* in_name -> input_name
* fix pr-ci-static-check
- 13 Dec 2021, 1 commit

Committed by xiongkun
* fix single card 8 unittests in new executor
* fix
* fix
- 10 Dec 2021, 2 commits

Committed by Guanghua Yu
* Support sub graph quant-post

Committed by Guanghua Yu
- 07 Dec 2021, 1 commit

Committed by Zuza
* quantize slice op
* correct test
* fix code formatting
- 01 Dec 2021, 2 commits

Committed by Sylwester Fraczek
* dequantize matmul and matmul_v2 Y weights in qat2_int8
* review fix
* split conv and mul tests, add matmul test
* fixup
* fix ci build
* remove unused variables
* formatting fix
* remove extra newline at end of file

Committed by Guanghua Yu
- 30 Nov 2021, 1 commit

Committed by Sylwester Fraczek
- 26 Nov 2021, 1 commit

Committed by zhaocaibei123
* test
* test
* rm test
* update
* update
* update
* add unittest
* update
* update save
- 04 Nov 2021, 1 commit

Committed by XGZhang
* fix a quantization bug
- 29 Oct 2021, 1 commit

Committed by Ming-Xu Huang
- 28 Oct 2021, 1 commit

Committed by XGZhang
- 27 Oct 2021, 1 commit

Committed by zhangkaihuo
This PR adds the layer-level code for fused_transformer, including the FusedFeedForward layer and the FusedTransformerEncoderLayer (a usage sketch follows below).
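A minimal usage sketch of the two layers, assuming the constructor arguments mirror `paddle.nn.TransformerEncoderLayer`; the exact signatures in `paddle.incubate.nn` may differ, and these fused layers generally require a CUDA build of Paddle:

```python
import paddle
from paddle.incubate.nn import FusedFeedForward, FusedTransformerEncoderLayer

# Illustrative shapes: batch of 4 sequences, length 16, model width 128.
x = paddle.rand([4, 16, 128])

# Fused position-wise feed-forward block: (d_model, dim_feedforward).
ffn = FusedFeedForward(128, 512)
y = ffn(x)

# Fused encoder layer: d_model, nhead, dim_feedforward, as in
# paddle.nn.TransformerEncoderLayer; the argument order is an assumption.
encoder_layer = FusedTransformerEncoderLayer(128, 8, 512)
z = encoder_layer(x)
```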
- 20 Oct 2021, 1 commit

Committed by Zeng Jinle
- 19 Oct 2021, 1 commit

Committed by Zeng Jinle
* add pow2_warmup op (an illustrative schedule sketch follows below)
* remove contrib __all__
* add AttrT
* rename
* follow comments
* fix duplicate PADDLE_RESTRICT
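The op name suggests a power-of-two decay combined with linear warmup (Paddle later exposes a `pow2_decay_with_linear_warmup` op). A plain-Python sketch of that kind of schedule, with the exact formula being an assumption rather than the op's documented semantics:

```python
def pow2_warmup_lr(step, warmup_steps, total_steps, base_lr, end_lr):
    """Illustrative schedule: linear warmup, then quadratic (pow2) decay."""
    if step < warmup_steps:
        # Ramp linearly from 0 to base_lr during warmup.
        return base_lr * step / warmup_steps
    # Decay quadratically from base_lr down to end_lr afterwards.
    remaining = max(total_steps - step, 0) / max(total_steps - warmup_steps, 1)
    return (base_lr - end_lr) * remaining ** 2 + end_lr


# Example: peak LR at step 100, fully decayed to end_lr by step 1000.
lrs = [pow2_warmup_lr(s, 100, 1000, 0.1, 0.0) for s in range(0, 1001, 100)]
```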
- 18 Oct 2021, 1 commit

Committed by ceci3
* quant support matmul_v2
* fix format
- 14 Oct 2021, 2 commits

Committed by Yanxing Shi
* add sparse_embedding doc
* delete wrong space
* fix error in sample code
* fix error in doc compile
* delete __all__
* modify sample code

Committed by Zhang Zheng
- 11 Oct 2021, 1 commit

Committed by zlsh80826
Sparse tensor cores for convolution require the input channel dimension to be 2:4 structured sparse, so the input channel dimension has to be masked before the sparse tensor cores can be used (see the sketch below).
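A minimal NumPy illustration of 2:4 structured pruning along the input-channel axis of a conv weight: within every group of four input channels, the two smallest-magnitude values are zeroed. This only sketches the masking idea, not Paddle's actual sparsity tooling:

```python
import numpy as np


def mask_2_to_4(weight):
    """Zero the 2 smallest of every 4 consecutive entries along axis 1 (C_in)."""
    out_c, in_c, kh, kw = weight.shape
    assert in_c % 4 == 0, "input channels must be a multiple of 4"
    # Group input channels in blocks of 4: (out_c, in_c//4, 4, kh, kw).
    w = weight.reshape(out_c, in_c // 4, 4, kh, kw)
    # Rank entries by magnitude inside each block and drop the 2 smallest.
    order = np.argsort(np.abs(w), axis=2)
    mask = np.ones_like(w, dtype=bool)
    np.put_along_axis(mask, order[:, :, :2], False, axis=2)
    return (w * mask).reshape(weight.shape)


w = np.random.randn(8, 16, 3, 3).astype(np.float32)
w_sparse = mask_2_to_4(w)   # every group of 4 input channels now holds 2 zeros
```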
- 22 Sep 2021, 1 commit

Committed by joanna.wozna.intel
- 21 Sep 2021, 1 commit

Committed by Adam Osewski
* Create stateful OneDNNAXPYHandler object, so it can be called multiple times without recreating the oneDNN primitives every time (see the sketch after this list).
* Prepare SGDOpKernel to reuse its implementation from the OneDNN kernel.
* OneDNN SGD kernel.
* Update call to use the new OneDNNAXPYHandler object API.
* Set up seed in the proper place.
* Enable OneDNN kernel only for a single case: dense param and sparse grad.
* Small refactor.
* Enable oneDNN by op attr or by cmd line flag.
* Use int64_t type for number of elements.
* Support dense param and grad from OneDNN kernel.
* Enable SGD OneDNN kernel when using the MP BF16 optimizer.
* Force non-copyable/movable OneDNNAXPYHandler.
* Reuse OneDNNAXPYHandler for sparse tensors in SUM op.
* Fix SFINAE rules.
* Remove recording event inside AXPY.
* Get rid of internal primitive caching.
* Stop using PP cache mechanism to store mem and primitive obj.
* Handler obj stores and reuses the needed desc & prim.
* Do not derive from MKLDNNHandlerT.
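The handler wraps the AXPY primitive y ← a·x + y and keeps the created oneDNN objects alive between calls. A rough Python analogue of that build-once, execute-many pattern, with NumPy standing in for the oneDNN primitive (nothing here is the actual C++ API):

```python
import numpy as np


class AxpyHandler:
    """Toy stand-in for a stateful AXPY handler: configure once, call many times."""

    def __init__(self, n, alpha, dtype=np.float32):
        # In the real handler this is where the oneDNN memory descriptors and
        # the axpy primitive would be created and kept for later reuse.
        self.n, self.alpha, self.dtype = n, alpha, dtype

    def __call__(self, x, y):
        # Repeated calls reuse the prepared state instead of rebuilding it.
        assert x.shape == (self.n,) and y.shape == (self.n,)
        y += self.alpha * x.astype(self.dtype)
        return y


handler = AxpyHandler(n=4, alpha=2.0)
y = np.zeros(4, dtype=np.float32)
for _ in range(3):                      # many calls, one handler instance
    y = handler(np.ones(4, dtype=np.float32), y)
```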