- 22 Aug 2022, 1 commit

  Submitted by Yuanle Liu

- 16 Aug 2022, 1 commit

  Submitted by feng_shuai
  * convert multihead to oss
  * fix: bug
  * fix: delete const cast
  * fix: don't support bias_qk
  * add vit pass
  * fix: convert bug and add preln_residual_bias
  * support length=-1
  * add UT for convert
  * add no_bias_qk support for gpu_multihead_op
  * remove infer_shape dependence on bias_qk
  * oss can only be used on T4 and A* GPUs
  * fix: change api for ROCM CI

- 15 Aug 2022, 1 commit

  Submitted by Yuanle Liu

- 05 Aug 2022, 1 commit

  Submitted by Sławomir Siwek
  * remove v2_transpose_reshape
  * matmul_transpose_reshape
  * reshape_transpose_matmul
  * restore ut
  * adjust old ut
  * restore parallel UT rules
  * feedback from review

- 04 Aug 2022, 1 commit

  Submitted by Sławomir Siwek
  * Add unit tests
  * matmul_v2 + activation
  * matmuls + elementwise_add
  * matmul_v2 postops
  * transform matmul to v2
  * opcompat
  * fix fusing matmul with multiple outs
  * add shape constraints
  * remove unused vars
  * change pass order
  * Unit tests to be debugged; fix; refactor; diagnostic; more diagnostic; fix; Fix number two; fix; fix; fix; alpha added; more fixes; compilation fix; removed diagnostic code; cosmetic fixes
  * lint
  * add alpha constraint
  * merge matmul refactor
  * trigger CI
  * fix
  * another fix
  * code style
  * add support for matmul + elementwise_add + activation
  * code style
  * fix bfloat16 bugs
  * change append_binary to append_sum
  Co-authored-by: Jacek Czaja <jacek.czaja@intel.com>

- 26 Jul 2022, 2 commits

  Submitted by Ruibiao Chen

  Submitted by Ruibiao Chen
  * Set more attrs in ReplaceScaleLossGradOp
  * Fix typos
  * Fix CI errors
  * Add UT

- 12 Jul 2022, 1 commit

  Submitted by Sławomir Siwek
  * add method for post ops
  * format code
  * gpd
  * format style
  * add matmul+act test
  * implement matmul+activation
  * whitespaces
  * code style
  * python code format
  * Increase UT timeout
  * code format
  * update style
  * generalize activation fuse passes
  * change order
  * Unify activation GPD
  * Revert changes with op_act
  * remove softmax mkldnn attrs
  * set common name for act attributes
  * whitespace
  * append postops by helper function
  * ut style
  * revert changes related to quantization
  * Reduce redundancy
  * reduce number of parameters
  * trigger CI
  * validate attribute
  * trim unit test

- 07 Jul 2022, 1 commit

  Submitted by Sing_chan
  * copy onnxruntime.dll to the C++ test folder on Windows
  * remove UTs that failed due to onnxruntime.dll
  * test_api_impl failed because of diff
  * use TARGET to check whether the test exists; use POST_BUILD to add the copy command

- 04 Jul 2022, 1 commit

  Submitted by yaozhixin

- 24 Jun 2022, 1 commit

  Submitted by Wilber
  * revert 40531
  * update

- 23 Jun 2022, 1 commit

  Submitted by Sylwester Fraczek
  * sylwek prototype params to int8 pass
  * trying to make warmup work
  * wip
  * wip
  * change test to cpp test
  * review fixes, refactoring
  * more refactoring
  * add erasevars
  * change test to fixture
  * rename pass and reorder erasevars and graphsaferemovenodes
  * fix
  * more refactoring and fixed bug
  * formatting
  * remove scale count
  * enforce message too short
  * remove erasevars (erasevars could be a cause of memory issues); some other fixes
  * add count of successful fuses to name of new nodes
  * FindVar -> GetVar and use ConvResidual pattern
  * use tensor->clear() instead of a new variable
  * Update paddle/fluid/framework/ir/mkldnn/params_quantization_mkldnn_pass_tester.cc (Co-authored-by: Tomasz Socha <tomasz.socha@intel.com>)
  * Update paddle/fluid/framework/ir/mkldnn/params_quantization_mkldnn_pass_tester.cc (Co-authored-by: Tomasz Socha <tomasz.socha@intel.com>)
  * Update paddle/fluid/inference/tests/api/analyzer_lexical_analysis_gru_tester.cc (Co-authored-by: Tomasz Socha <tomasz.socha@intel.com>)
  * add log (review fix)
  * review fix (2 functions to one)
  * code review: Conv -> QuantizeConv
  * revert
  * fix formatting
  * remove unused functions
  * add paddle enforce
  Co-authored-by: Tomasz Socha <tomasz.socha@intel.com>

- 20 Jun 2022, 1 commit

  Submitted by whs

- 13 Jun 2022, 1 commit

  Submitted by Ruibiao Chen

- 09 Jun 2022, 1 commit

  Submitted by minghaoBD

- 04 Jun 2022, 1 commit

  Submitted by Sing_chan

- 02 Jun 2022, 1 commit

  Submitted by Wangzheee
  * new general transformer inference support

- 19 May 2022, 1 commit

  Submitted by shentanyue
  * support yolov5s static/int8
  * fix eltwise_sub and div weight compute
  * fix delete_fill_constant_pass

- 17 May 2022, 1 commit

  Submitted by zhupengyang

- 13 May 2022, 1 commit

  Submitted by Qi Li
  * [IPU] fix ipu and add python infer api, test=develop
  * [IPU] add paddlepaddle-ipu package name, test=develop

- 12 May 2022, 1 commit

  Submitted by Wangzheee
  * [Paddle-Inference] support transformer generation: some passes

- 11 May 2022, 1 commit

  Submitted by Zuza Gawrysiak
  * Add int8 scales gathering pass for convolution
  * Fix typo
  * Add unittest
  * Add corrected unit test
  * Change test name
  * Remove enabling mkldnn in test
  * Speed up test
  * Change max examples
  * Add functional test
  * Change test name
  * Add new test case
  * Rename pass

- 10 May 2022, 1 commit

  Submitted by piotrekobi
  * Re-add conv_affine_channel fuse pass as an mkldnn pass
  * Fix formatting
  * Add new test to parallel_UT_rule.py
  * Fix Coverage and Windows CI issues
  * Revert "Fix Coverage and Windows CI issues" (reverts commit f33459846385c9fd51c07f9f44e7ff283a652637)
  * Fix CI errors
  * Remove unnecessary conv_eltwise_add_affine_channel fuse pass
  * Remove test from parallel_UT_rule.py

- 06 May 2022, 1 commit

  Submitted by Allen Guo
  * rm transfer_cast_op_pass
  * rm header

- 27 Apr 2022, 1 commit

  Submitted by jakpiase
  * added test for shuffle_channel_mkldnn_detect_pass
  * added UT using new framework
  * CI fix

- 14 Apr 2022, 3 commits

  Submitted by Sławomir Siwek
  * Change tensor name to match activation
  * declare fc_eltwise_add pass
  * merge conv_eltwise refactor PR
  * first compilable draft
  * unittest feedback tools
  * Fuse pass tester
  * Move IsReachable() to shared file
  * 100% coverage of fuse_pass_tester.cc
  * register pass
  * Add bias node
  * Improve unit tests / remove bias node from pattern
  * improve fc_eltwiseadd_unittest
  * cancel eltwise_add fuse if act is already fused
  * Add elementwise_input scale
  * Residual MVP
  * Add new FC attrs
  * Add more test cases
  * Add missing op attrs
  * Adapt code to new Elementwise pattern
  * reuse existing fcpattern
  * improve code style
  * remove unused arguments
  * fix typo
  * remove whitespace
  * remove int8 related code
  * Remove attributes from base ops
  * style
  * style check
  * Remove input from base op
  * Set attribute during fuse
  * ut timeout
  * download and test model
  * DRY
  * apply feedback from review
  * Style check
  * fix typo
  * cosmetic changes
  * explicitly set residual as output
  * VIT-OCR accuracy check
  * trigger CI
  * remove whitespaces
  * fix missing data file

  Submitted by baoachun
  * add mkldnn int8 pass [step3]
  * Add test for compute_propagate_scales_mkldnn_pass
  * update pass
  * update api comment and python api
  Co-authored-by: wozna <joanna.wozna@intel.com>

  Submitted by jakpiase
  * added shuffle_channel bf16/fp32 fwd kernel
  * added missing files
  * CI fix
  * changed from pten to phi
  * tmp save
  * added reviewers' suggestions
  * fix for test

- 10 Apr 2022, 2 commits

- 06 Apr 2022, 1 commit

  Submitted by Allen Guo
  * remove paddle_ipu shared library
  * fix unique_name

- 02 Apr 2022, 1 commit

  Submitted by Wangzheee
  * paddle inference supports new quant_model

- 17 Mar 2022, 1 commit

  Submitted by baoachun

- 14 Mar 2022, 1 commit

  Submitted by Tomasz Socha
  * Add elementwise add and activation fuse pass
  * Fix copy elision
  * More flexible pattern detector
  * More flexible fusion pass
  * Update lists for pass
  * Add support for Pow operator
  * Add support for more activation types
  * Style
  * Rename fusion pass
  * First version of tests
  * Dirty version of pass
  * Polished version
  * Update pbtxt
  * Style
  * Update names
  * Style
  * Use PADDLE_ENFORCE_EQ
  * Save error message to variable
  * WO for error checks
  * CR
  * Static style check
  * Add missing 'activation_scale' attribute
  * Add relu6 and sigmoid activations
  * Style
  * Fix fuse list formatting
  * Sync filenames for fuse pass files
  * Fix cmake after move
  * Fix registration
  * Fix pass name in tests
  * Add missing activations to checker
  * WIPS
  * Working mul op
  * Working sub
  * Working Add
  * Remove pten includes
  * Remove some forward declarations
  * Remove Includes
  * Fixes
  * Remove default kernels
  * Add check if post_ops attributes are available
  * Style
  * Code adjustment
  * Register default kernels
  * We have year 2022, not 2021 (Co-authored-by: jakpiase <jakpia21@gmail.com>, Sylwester Fraczek <sylwester.fraczek@intel.com>)
  * Fast review fixes (Co-authored-by: jakpiase <jakpia21@gmail.com>, Sylwester Fraczek <sylwester.fraczek@intel.com>)
  * Review fix
  * Rename one_dnn -> onednn
  * Style after review
  * Fast and dirty fix for quantization
  * Update tests
  * Style
  * Fix mkldnn_quantizer config
  * Add Joanna's suggestion
  * Check if operator is explicitly disabled on OneDNN
  * Try to use unregistered attributes
  * Style
  * Test new framework
  * FXI
  * FXII
  * Update test
  * Style
  Co-authored-by: jakpiase <jakpia21@gmail.com>
  Co-authored-by: Sylwester Fraczek <sylwester.fraczek@intel.com>

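As a hedged illustration (not part of the commit itself), the pattern this pass looks for is an elementwise binary op whose output feeds an activation, which oneDNN can then run as a single kernel with the activation attached as a post-op. Names and shapes below are illustrative assumptions, sketched with public Paddle static-graph APIs:

    import paddle

    paddle.enable_static()

    main_prog = paddle.static.Program()
    with paddle.static.program_guard(main_prog):
        x = paddle.static.data(name="x", shape=[-1, 32], dtype="float32")
        y = paddle.static.data(name="y", shape=[-1, 32], dtype="float32")
        z = paddle.add(x, y)                # elementwise_add node in the graph
        out = paddle.nn.functional.relu(z)  # activation the pass folds into a oneDNN post-op
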
- 07 Mar 2022, 1 commit

  Submitted by Ming-Xu Huang
  * Added cuBlasLtHandle_t to device context.
  * Added fused_gemm_epilogue op.
    1. Added fused_gemm_epilogue op to leverage the cuBLASLt epilogue.
    2. Supports fusing Act(X*Y + bias); X's dims must be >= 2 and Y's dims should be 2.
    3. Act currently only supports ReLU (GeLU will be added in the future).
  * Added UT for the fused_gemm_epilogue op.
  * Added LinearAct pattern.
    1. Added LinearAct to graph_pattern_detector.* to define the pattern above.
    2. LinearAct is used to detect act(elementwise_add(matmul_v2(x, w), bias)).
    3. act currently only supports ReLU (GeLU will be supported in the future).
  * Added FuseGemmEpiloguePass.
    1. Added FuseGemmEpiloguePass to handle nn.Linear + Act{ReLU} fusion (GeLU will be supported in the future).
    2. Only matmul_v2 from nn.Linear is supported.
  * Added pybind for BuildStrategy.fuse_gemm_epilogue_.
  * Added UT for fuse_gemm_epilogue_pass.
  * GeLU support and EpilogueSingleton.
    1. Added GeLU support to the fused_gemm_epilogue op.
    2. Added EpilogueSingleton to cache the auxiliary pointer.
    3. Added related UTs.
  * Renamed cublaslt_epilogue_op.* to gemm_epilogue_op.*.
  * Added both train and infer patterns to LinearAct.
    1. Added support for fwd graphs with grad_ops linking to LinearAct.
    2. Added related changes to fuse_gemm_epilogue_pass for the above modification.
  * Changed the CUDA requirement from 11.4 to 11.6 for fuse_gemm_epilogue_pass.
  * Added identity activation support to gemm_epilogue_op.
  * Added Linear fusion (matmul_v2 + ele_add).
    1. Added the matmul_v2 + ele_add pattern to LinearActPattern.
    2. Added matmul_v2 + ele_add support to fuse_gemm_epilogue_pass.
  * Renamed gemm_epilogue_op.* to fused_gemm_epilogue_op.*.
  * Added fused_gemm_epilogue_grad op to support backward epilogue fusion.
  * Added UTs for fused_gemm_epilogue_grad_op.
  * Changed an attribute name in fused_gemm_epilogue_grad_op for clarity.
  * Allowed DX and DBias to be dispensable for the fused_gemm_epilogue_grad op.
  * Added ElementwiseAdd+Matmul+Act graph pattern detection.
  * Fused the backward of Linear(Act(x)).
    1. Added a backward fusion pass for Linear(Act(x)).
    2. Added a backward fusion pass for Linear(x).
  * Added UTs for the backward fusion of Linear(Act(x)).
  * Completed documentation of the arguments to fused_gemm_epilogue_op.
  * Made arguments of some functions pass by reference.
  * Modified code per review comments.
    1. Made arguments of some functions pass by reference.
    2. Removed redundant code.
    3. Followed Google code style.
  * Made the 'const' code style consistent.
  * Fixed the random seed of the Python UTs.
  * Set compile-time constraints for cuBLASLt.
    1. Require CUDA 11.6+.
    2. Remove fuse_gemm_epilogue related tests when CUDA < 11.6.
  * Code review from Paddle.
    1. Renamed the argument is_first_gemm to without_x_gradient for clarity.
    2. Applied PADDLE_THROW in fused_gemm_epilogue_op.
  * Removed EpilogueSingleton; applied ReserveSpace instead to pass auxiliary pointers between FWD and BWD.
  * Fixed a logical error and enhanced UTs.
    1. Added act op count checking in UTs.
    2. Fixed an issue in fusing the backward of ReLU(Linear(X)).
    3. TODO: solve GELU fusion issues.
  * Fixed Linear and GeLU fusion issues.
    1. Modified graph_pattern_detector to fit Linear with either GeLU or ReLU.
    2. Modified the data range in UTs to allow negative values.
  * Removed fused_gemm_epilogue_op.h.
  * Renamed namespace pten to phi.
  * Renamed arguments in fused_gemm_epilogue_op: bias -> Bias, out -> Out, reserve_space -> ReserveSpace.
  * Changed EpiloguePassActivationCache to a local variable.
    1. Removed the singleton in EpiloguePassActivationCache.
    2. Made EpiloguePassActivationCache an argument to each pass function.

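The commit above targets the Act(matmul_v2(x, W) + bias) pattern produced by nn.Linear followed by ReLU/GeLU, switched on through BuildStrategy. Below is a minimal, hedged sketch of that pattern; only the fuse_gemm_epilogue flag comes from the commit text, layer sizes and names are illustrative:

    import paddle

    class LinearReLU(paddle.nn.Layer):
        """Linear + ReLU, i.e. Act(matmul_v2(x, W) + bias), the pattern the pass fuses."""
        def __init__(self):
            super().__init__()
            self.linear = paddle.nn.Linear(128, 64)  # matmul_v2 + elementwise_add
            self.act = paddle.nn.ReLU()              # folded into the cuBLASLt epilogue

        def forward(self, x):
            return self.act(self.linear(x))

    # The commit exposes the switch on BuildStrategy (CUDA 11.6+, matmul_v2 from nn.Linear only).
    build_strategy = paddle.static.BuildStrategy()
    build_strategy.fuse_gemm_epilogue = True
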
- 01 Mar 2022, 1 commit

  Submitted by wenbin
  * remove
  * pass
  * more pass

- 22 Feb 2022, 1 commit

  Submitted by Allen Guo

- 15 Feb 2022, 1 commit

  Submitted by Wangzheee
  [Paddle-Inference] support preln_ernie: add preln_embedding_eltwise_layernorm_fuse_pass, preln_skip_layernorm_fuse_pass (#39508)
  * support preln_ernie
  * support preln_ernie

- 09 Feb 2022, 1 commit

  Submitted by Wangzheee
  * rebuild matmul pass: trt and gpu_cpu
  * rebuild matmul pass: trt and gpu_cpu
  * rebuild matmul pass: trt and gpu_cpu
  * rebuild matmul pass: trt and gpu_cpu

- 26 Jan 2022, 1 commit

  Submitted by Allen Guo
  * sync misc changes
  * apply comments 01
  * fix compile error
  * remove is_ipu_place check
  * add authors (Co-authored-by: Xiaobing Wang <xiaobingw@graphcore.ai>, Allen Guo <alleng@graphcore.ai>, Zhixin Yao <zhixiny@graphcore.ai>, Haicheng Jiang <haichengj@graphcore.ai>, Han Zhao <hanzhao@graphcore.ai>)
  * sync changes
  * restore cmake
  * update ir cmake and setup.py
  * update inference_lib cmake
  * split PR
  Co-authored-by: Xiaobing Wang <xiaobingw@graphcore.ai>
  Co-authored-by: Zhixin Yao <zhixiny@graphcore.ai>
  Co-authored-by: Haicheng Jiang <haichengj@graphcore.ai>
  Co-authored-by: Han Zhao <hanzhao@graphcore.ai>