- 30 June 2022, 1 commit

Committed by JingZhuangzhuang

* modify graph_pattern to thread_local

- 26 June 2022, 1 commit

Committed by Sing_chan

- 23 June 2022, 1 commit

Committed by Sylwester Fraczek

* sylwek prototype params to int8 pass
* trying to make warmup work
* wip
* wip
* change test to cpp test
* review fixes, refactoring
* more refactoring
* add erasevars
* change test to fixture
* rename pass and reorder erasevars and graphsaferemovenodes
* fix
* more refactoring and fixed bug
* formatting
* remove scale count
* enforce message too short
* remove erasevars; erasevars could be a cause of memory issues; some other fixes
* add count of successful fuses to name of new nodes
* FindVar -> GetVar and use ConvResidual pattern
* use tensor->clear() instead of new variable
* Update paddle/fluid/framework/ir/mkldnn/params_quantization_mkldnn_pass_tester.cc
* Update paddle/fluid/framework/ir/mkldnn/params_quantization_mkldnn_pass_tester.cc
* Update paddle/fluid/inference/tests/api/analyzer_lexical_analysis_gru_tester.cc
* add log (review fix)
* review fix (2 functions into one)
* code review: Conv -> QuantizeConv
* revert
* fix formatting
* remove unused functions
* add paddle enforce

Co-authored-by: Tomasz Socha <tomasz.socha@intel.com>

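As background for the params-to-int8 pass above, here is a minimal, hypothetical sketch of symmetric per-output-channel INT8 weight quantization; the helper name, shapes, and formula are illustrative assumptions, not the pass's actual implementation:

```python
import numpy as np

# Hypothetical illustration only, not Paddle's pass code:
# symmetric INT8 quantization of conv weights, one scale per output channel.
def quantize_conv_weights_int8(weights):
    # weights: float32 array shaped [out_channels, in_channels, kh, kw]
    flat = weights.reshape(weights.shape[0], -1)
    scales = 127.0 / np.maximum(np.abs(flat).max(axis=1), 1e-8)
    q = np.clip(np.round(flat * scales[:, None]), -127, 127).astype(np.int8)
    return q.reshape(weights.shape), scales  # int8 weights + per-channel scales

w_fp32 = np.random.randn(16, 3, 3, 3).astype(np.float32)
w_int8, w_scales = quantize_conv_weights_int8(w_fp32)
```
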
- 21 June 2022, 1 commit

Committed by joanna.wozna.intel

- 22 May 2022, 1 commit

Committed by Zuza Gawrysiak

* Add elementwise_sub quantization
* Remove unnecessary comments
* Specify names for tests
* Remove comments
* Remove comments leftovers

- 19 May 2022, 1 commit

Committed by shentanyue

* support yolov5s static/int8
* fix eltwise_sub and div weight compute
* fix delete_fill_constant_pass

- 12 May 2022, 1 commit

Committed by Shuangchi He

- 11 May 2022, 1 commit

Committed by Zuza Gawrysiak

* Add int8 scales gathering pass for convolution
* Fix typo
* Add unittest
* Add corrected unit test
* Change test name
* Remove enabling mkldnn in test
* Speed up test
* Change max examples
* Add functional test
* Change test name
* Add new test case
* Rename pass

- 10 May 2022, 1 commit

Committed by JingZhuangzhuang

* pdnode_compare
* panode compare
* pdnode_compare

- 28 April 2022, 1 commit

Committed by Tomasz Socha

* Refactor Quantization
* Refactor Dequantization
* Classy solution
* Style I
* Style II
* Style III
* Use VLOG(4) for debug info
* Style IV

- 04 April 2022, 1 commit

Committed by Sławomir Siwek

* DRY
* change nodes names
* add const prefix
* change asX to as_x in all files

- 02 April 2022, 1 commit

Committed by Wangzheee

* paddle inference support new quant_model

- 16 March 2022, 1 commit

Committed by Zuza

* Quantize elementwise mul op
* Parametrize elementwise functions
* Fix code formatting

- 14 March 2022, 1 commit

Committed by Tomasz Socha

* Add elementwise add and activation fuse pass
* Fix copy elision
* More flexible pattern detector
* More flexible fusion pass
* Update lists for pass
* Add support for Pow operator
* Add support for more activation types
* Style
* Rename fusion pass
* First version of tests
* Dirty version of pass
* Polished version
* Update pbtxt
* Style
* Update names
* Style
* Use PADDLE_ENFORCE_EQ
* Save error message to variable
* WO for error checks
* CR
* Static style check
* Add missing 'activation_scale' attribute
* Add relu6 and sigmoid activations
* Style
* Fix fuse list formatting
* Sync filenames for fuse pass files
* Fix cmake after move
* Fix registration
* Fix pass name in tests
* Add missing activations to checker
* WIPS
* Working mul op
* Working sub
* Working Add
* Remove pten includes
* Remove some forward declarations
* Remove Includes
* Fixes
* Remove default kernels
* Add check if post_ops attributes are available
* Style
* Code adjustment
* Register default kernels
* We have year 2022 not 2021...
* Fast review fixes
* Review Fix
* Rename one_dnn -> onednn
* Style after review
* Fast and dirty fix for quantization
* Update tests
* Style
* Fix mkldnn_quantizer config
* Add Joanna's suggestion.
* Check if operator is explicitly disabled on OneDNN
* Try to use unregistered attributes
* Style
* Test new framework
* FXI
* FXII
* Update test
* Style

Co-authored-by: jakpiase <jakpia21@gmail.com>
Co-authored-by: Sylwester Fraczek <sylwester.fraczek@intel.com>

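For context, a minimal sketch of the subgraph shape this elementwise + activation fuse pass targets, assuming a Paddle program run with oneDNN enabled; the tensor names and shapes are illustrative:

```python
import paddle
import paddle.nn.functional as F

# Illustrative only: an elementwise op followed directly by an activation is the
# subgraph the pass fuses; the activation becomes a oneDNN post-op attribute on
# the elementwise kernel, so no separate activation kernel runs at inference time.
x = paddle.randn([4, 16])
y = paddle.randn([4, 16])

out_add = F.relu(paddle.add(x, y))          # elementwise_add + relu
out_sub = F.sigmoid(paddle.subtract(x, y))  # elementwise_sub + sigmoid
out_mul = F.relu6(paddle.multiply(x, y))    # elementwise_mul + relu6
```
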
- 07 March 2022, 1 commit

Committed by Ming-Xu Huang

* Added cuBlasLtHandle_t to device context.
* Added fused_gemm_epilogue op. 1. Added fused_gemm_epilogue op to leverage the cuBlasLt Epilogue. 2. Support fusion Act(X*Y + bias), where X's dims >= 2 and Y's dims should be 2. 3. Act currently only supports ReLU (GeLU will be added in the future).
* Added UT to fused_gemm_epilogue op.
* Added LinearAct Pattern. 1. Added LinearAct into graph_pattern_detector.* to define (2.)'s pattern. 2. LinearAct is used to detect act(element_add(matmul_v2(x, w), bias)). 3. act currently only supports ReLU (GeLU will be supported in the future).
* Added FuseGemmEpiloguePass. 1. Added FuseGemmEpiloguePass to handle nn.Linear + Act{ReLU} fusion (GeLU will be supported in the future). 2. Only support matmul_v2 from nn.Linear.
* Added pybind to BuildStrategy.fuse_gemm_epilogue_.
* Added UT for fuse_gemm_epilogue_pass.
* GeLU support and EpilogueSingleton. 1. Added GeLU support to fused_gemm_epilogue op. 2. Added EpilogueSingleton to cache auxiliary pointer. 3. Added related UTs.
* Renamed cublaslt_epilogue_op to gemm_epilogue_op.*.
* Added both train and infer pattern to LinearAct. 1. Added support of fwd graph with grad_ops linking to LinearAct. 2. Added related changes to fuse_gemm_epilogue_pass for the above modification.
* Changed CUDA requirement from 11.4 to 11.6 for fuse_gemm_epilogue_pass.
* Added identity activation support to gemm_epilogue_op.
* Added Linear Fusion (matmul_v2 + ele_add). 1. Added matmul_v2 + ele_add pattern to LinearActPattern. 2. Added matmul_v2 + ele_add support to fuse_gemm_epilogue_pass.
* Renamed gemm_epilogue_op.* to fused_gemm_epilogue_op.*
* Add fused_gemm_epilogue_grad op. 1. Added fused_gemm_epilogue_grad to support backward epilogue fusion.
* Add UTs to fused_gemm_epilogue_grad_op.
* Change attribute name in fused_gemm_epilogue_grad_op for clarity.
* Allow DX and DBias to be dispensable in fused_gemm_epilogue_grad op.
* Added ElementwiseAdd+Matmul+Act graph pattern detection.
* Fuse backward of Linear(Act(x)). 1. Added backward fusion pass for Linear(Act(x)). 2. Added backward fusion pass for Linear(x).
* Added UTs for backward fusion of Linear(Act(x)).
* Completed documentation of arguments to fused_gemm_epilogue_op.
* Made arguments of some functions pass by reference.
* Modify code with review comments. 1. Made arguments of some functions pass by reference. 2. Removed redundant code. 3. Followed Google code style to change code.
* Made 'const' code style consistent.
* Fixed random seed of python UTs.
* Set compiling constraints for cuBlasLt. 1. Require CUDA 11.6+. 2. Remove fuse_gemm_epilogue related tests when CUDA < 11.6.
* Code review from Paddle. 1. Changed argument name is_first_gemm to without_x_gradient for clarity. 2. Applied PADDLE_THROW in fused_gemm_epilogue_op.
* Remove EpilogueSingleton. 1. Applied ReserveSpace to replace Epilogue for passing auxiliary pointers between FWD and BWD.
* Fix a logical error and enhance UTs. 1. Added act op count checking in UTs. 2. Fix issue to fuse backward of ReLU(Linear(X)). 3. TODO: solve GELU fusion issues.
* Fix Linear and GeLU fusion issues. 1. Modified graph_pattern_detector to fit with both linear with gelu or relu. 2. Modified data range in UTs to allow negative values.
* Removed fused_gemm_epilogue_op.h.
* Rename namespace pten to phi.
* Renamed arguments in fused_gemm_epilogue_op. 1. bias -> Bias. 2. out -> Out. 3. reserve_space -> ReserveSpace.
* Changed EpiloguePassActivationCache to a local variable. 1. Removed singleton in EpiloguePassActivationCache. 2. Made EpiloguePassActivationCache an argument to each pass function.

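A minimal sketch of the LinearAct pattern described above, act(elementwise_add(matmul_v2(x, w), bias)); the tensor shapes are illustrative:

```python
import paddle
import paddle.nn.functional as F

# Illustrative only: this is what an nn.Linear + ReLU/GELU pair lowers to.
# fuse_gemm_epilogue_pass replaces the subgraph with a single fused_gemm_epilogue
# op backed by the cuBlasLt epilogue.
x = paddle.randn([8, 128])     # X: rank >= 2
w = paddle.randn([128, 64])    # Y: must be rank 2
bias = paddle.randn([64])

out = F.relu(paddle.matmul(x, w) + bias)  # matmul_v2 + elementwise_add + relu
```
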
- 24 February 2022, 1 commit

Committed by jakpiase

* Fix for split bf16 inference
* added test for pass
* changes after review

- 08 February 2022, 1 commit

Committed by joanna.wozna.intel

* Fix quantization next op findings
* Corrections according to the review

- 05 January 2022, 2 commits

Committed by joanna.wozna.intel

Committed by joanna.wozna.intel

* Quantize nearest_interp and nearest_interp_v2
* Check if avx_core supported
* Add depthwise_conv2d to supported quantization list

- 20 December 2021, 1 commit

Committed by heliqi

* add matmul_scale matmul_v2_scale fuse pass
* add scaletensor judge
* modify var name
* add timeout notest;test=coverage
* fix error commit
* fix use_mkldnn attr
* fix use_mkldnn attr

- 15 December 2021, 1 commit

Committed by wenbin

* remove bf16
* remove comments
* remove wrong return
* fix UT

- 14 December 2021, 1 commit

Committed by Sylwester Fraczek

* reshape+transpose+matmul_v2
* in_name->input_name
* fix pr-ci-static-check

- 07 December 2021, 1 commit

Committed by Zuza

* quantize slice op
* correct test
* fix code formatting

- 11 November 2021, 1 commit

Committed by jakpiase

* added softplus + activation fuse pass
* minor change
* implemented reviewer suggestion
* minor fix
* minor fix
* added scale_out parameter
* minor fix
* fix for iScan CI
* conditionally disabled logs
* refactored pass builder

- 26 October 2021, 1 commit

Committed by Wangzheee

[Paddle-Inference] Add MatmulV2ToMatmul convert Pass, fix (matmul_v2, matmul, mul) convert pass, fix (matmul, mul) op_teller (#36652)

* new_Matmul2ToMatmulToMul
* new_Matmul2ToMatmulToMul
* fix paddle_pass_builder
* fix paddle_pass_builder
* fix paddle_pass_builder
* tem
* tem
* Add MatmulV2ToMatmul convert Pass; MatmulV2ToMul convert Pass
* Add MatmulV2ToMatmul convert Pass; MatmulV2ToMul convert Pass
* add matmul_broadcast_unitest
* fix op_teller

- 21 October 2021, 1 commit

Committed by jakpiase

* added base changes for matmul_v2+trans+resh fuse pass
* added full matmul_v2+transpose+reshape pass
* removed a file added by mistake
* added reviewers suggestions
* Changed ops type in checking compatibility version
* Deleted one statement

- 14 October 2021, 1 commit

Committed by Wilber

* support bert when matmul_v2 exists
* update

- 13 October 2021, 1 commit

Committed by Wangzheee

* add_int_pass
* add_int8_flag_pass
* add_int8_flag_pass
* fix CMakeLists.txt
* fix test_trt_fc_fuse_quant_dequant_pass.py
* fix python/paddle/fluid/tests/unittests/ir/inference/test_trt_fc_fuse_quant_dequant_pass.py
* fix test_trt_fc_fuse_quant_dequant_pass.py

- 22 September 2021, 1 commit

Committed by Wangzheee

- 06 September 2021, 1 commit

Committed by joanna.wozna.intel

* Add fusion_lstm INT8 PTQ
* Correct mkldnn_cache_capacity and enable fc_lstm_fuse_pass only for this test
* Change mkldnn_cache_capacity

- 28 April 2021, 1 commit

Committed by denglin-github

* Add dlnne engine runtime
* Fix log
* Remove <const_cast> and remove modifications unrelated to dlnne, +clang-format
* Fix CMakeList format error
* Add copyright message
* Fix dlnne CMakeList.txt
* Add some paddlepaddle_pass to support more networks
* Fix some format bug
* Add delete dropout_op pass
* Fix some format bug
* Fix format bug

- 30 March 2021, 1 commit

Committed by Pei Yang

* support multihead_matmul_fuse_pass_v3
* fix compile problems
* embedding_eltwise_ln pass support lookup_table_v2
* support matmul and matmul_v2 in qkv matmul

- 26 March 2021, 1 commit

Committed by tianshuo78520a

* delete include framework.pb.h
* fix error

- 23 February 2021, 1 commit

Committed by joanna.wozna.intel

* Unification of bfloat16 enablement process and refactor
* Remove unnecessary function
* Standardize the output name search

- 03 February 2021, 1 commit

Committed by Adam Osewski

- 13 January 2021, 1 commit

Committed by alncat

* added support for inference using quantization aware trained dygraph
* added support for inference using quantization aware trained dygraph; correct boost get usage
* Delete incorrect warning message (#30196)
* fix warning and no grad
* clean redundant API alias in 2.0 - part 2 (#30013)
* delete paddle.nn.functional.assign
* fix dynamic to static error
* just add the op error message for the matmul xpu (#30246): add the op error message for the matmul xpu
* Add Static Variable Clone (#30208): Add clone method for static Variable so that this interface will be the same as dygraph. It fixed some bugs in dy2stat
* use wget to replace curl to download the lcov file (#30229)
* use wget to replace curl to download the lcov file
* add cache for lcov
* fix test_pool3d_op timeout issue (#30248)
* Fix unittests bugs. (#30250)
* modify error message based on comments (#30189)
* modify error message based on comments
* edit code according to review.
* Correct spelling according to review.
* Fix bug for 'save multiple method' (#30218)
* Fix bug for 'save multiple method'
* To pass coverage.
* edit code to pass coverage.
* edit code to pass coverage.
* add unittest for coverage.
* change for coverage.
* edit for coverage.
* added support for inference using quantization aware trained dygraph
* Alias from paddle.fluid.layers.auc to paddle.static.auc (#30206)
* add alias from fluid.layers.auc to static.auc
* Update __init__.py
* added support for inference using quantization aware trained dygraph; correct boost get usage
* corrected boost get usage
* corrected naming issues and enforcing zero check
* correct paddle enforce message
* added more error checks
* corrected error report message and optimized code
* corrected findvar usage
* corrected paddle_enforce in scope
* correct error messages
* correct error reporting format

Co-authored-by: LielinJiang <50691816+LielinJiang@users.noreply.github.com>
Co-authored-by: XiaoguangHu <46782768+XiaoguangHu01@users.noreply.github.com>
Co-authored-by: wawltor <fangzeyang0904@hotmail.com>
Co-authored-by: Huihuang Zheng <zhhsplendid@gmail.com>
Co-authored-by: YUNSHEN XIE <1084314248@qq.com>
Co-authored-by: Bai Yifan <me@ethanbai.com>
Co-authored-by: gongweibao <weibao.gong@gmail.com>
Co-authored-by: WeiXin <weixin10@baidu.com>
Co-authored-by: Jiaqi Liu <liujiaqi06@baidu.com>

- 29 December 2020, 1 commit

Committed by cc

* map matmul/squeeze2+matmul/reshape2+matmul to mul

- 24 December 2020, 1 commit

Committed by jakpiase

- 30 November 2020, 1 commit

Committed by Wojciech Uss

- 26 November 2020, 1 commit

Committed by joanna.wozna.intel

* Fix cpu_bfloat16_pass
* Add output_format
* Fix incorrect SetOutput
* Change formatting