- April 14, 2022 (3 commits)

Committed by Sławomir Siwek
* Change tensor name to match activation
* Declare fc_eltwise_add pass
* Merge conv_eltwise refactor PR
* First compilable draft
* Unittest feedback tools
* Fuse pass tester
* Move IsReachable() to shared file
* 100% coverage of fuse_pass_tester.cc
* Register pass
* Add bias node
* Improve unit tests / remove bias node from pattern
* Improve fc_eltwiseadd_unittest
* Cancel eltwise_add fuse if act is already fused
* Add elementwise_input scale
* Residual MVP
* Add new FC attrs
* Add more test cases
* Add missing op attrs
* Adapt code to new Elementwise pattern
* Reuse existing fc pattern
* Improve code style
* Remove unused arguments
* Fix typo
* Remove whitespace
* Remove int8-related code
* Remove attributes from base ops
* Style check
* Remove input from base op
* Set attribute during fuse
* UT timeout
* Download and test model
* DRY
* Apply feedback from review
* Cosmetic changes
* Explicitly set residual as output
* VIT-OCR accuracy check
* Trigger CI
* Fix missing data file
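
For readers skimming the entry above: the pass folds the elementwise_add that follows an FC op into the FC itself, passing the added tensor in as a residual input so the sum (and any fused activation) runs inside a single kernel. The NumPy sketch below shows only the arithmetic being fused; the function names and the choice of ReLU are illustrative, not Paddle's API.

```python
import numpy as np

# Unfused graph: fc -> elementwise_add -> relu (three separate ops).
def unfused(x, w, bias, residual):
    fc_out = x @ w + bias              # fc
    added = fc_out + residual          # elementwise_add
    return np.maximum(added, 0.0)      # activation

# Fused FC: the residual tensor becomes an extra FC input, so the add
# and the activation are applied inside the one fused kernel.
def fused_fc_residual(x, w, bias, residual):
    return np.maximum(x @ w + bias + residual, 0.0)

x = np.random.rand(4, 8).astype(np.float32)
w = np.random.rand(8, 16).astype(np.float32)
bias = np.random.rand(16).astype(np.float32)
residual = np.random.rand(4, 16).astype(np.float32)

assert np.allclose(unfused(x, w, bias, residual),
                   fused_fc_residual(x, w, bias, residual))
```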

Committed by baoachun
* Add mkldnn int8 pass [step3]
* Add test for compute_propagate_scales_mkldnn_pass
* Update pass
* Update API comment and Python API
Co-authored-by: wozna <joanna.wozna@intel.com>

Committed by jakpiase
* Added shuffle_channel bf16/fp32 fwd kernel
* Added missing files
* CI fix
* Changed from pten to phi
* tmp save
* Added reviewers' suggestions
* Fix for test

- April 10, 2022 (2 commits)

- April 7, 2022 (1 commit)

Committed by liutiexing
* Profile Executors
* Update
* Fix UT
* Fix names
* Update

- April 6, 2022 (1 commit)

Committed by Allen Guo
* Remove paddle_ipu shared library
* Fix unique_name

- April 4, 2022 (1 commit)

Committed by Sławomir Siwek
* DRY
* Change node names
* Add const prefix
* Change asX to as_x in all files

- April 2, 2022 (1 commit)

Committed by Wangzheee
* Paddle Inference: support the new quant_model

- March 31, 2022 (2 commits)

- March 30, 2022 (1 commit)

Committed by YuanRisheng

- March 24, 2022 (2 commits)

Committed by joanna.wozna.intel
* Correct MultipleQuantizeSquash
* Correct logging

Committed by Chen Weihang
* Add mul phi kernel
* Remove mul op kernel
* Remove original mul grad op
* Fix cinn test
* Fix failed dygraph test

- March 23, 2022 (1 commit)

Committed by Zhanlue Yang
* Removed redundant use of declarations.h
* Fixed minor bug

- March 21, 2022 (2 commits)

Committed by From00
* Move conv-transpose OPs to phi
* Fix CI errors

Committed by Allen Guo
* Sync changes
* Copy sOpNamescope
* Fix UTs
* Add authors
* Fix code format
* Fix compile error
* Add comments for feed_op
Co-authored-by: Xiaobing Wang <xiaobingw@graphcore.ai>
Co-authored-by: Allen Guo <alleng@graphcore.ai>
Co-authored-by: Zhixin Yao <zhixiny@graphcore.ai>
Co-authored-by: Zhaorui Chen <zhaoruic@graphcore.ai>
Co-authored-by: Han Zhao <hanzhao@graphcore.ai>

- March 18, 2022 (1 commit)

Committed by shentanyue
* Add gelu
* Fix gelu
* Add log_softmax
* Add prelu kernel and prelu/gelu/log_softmax infershape
* Misc fixes and CI fixes
* Rewrite log_softmax
* Fix conflict
* Fix compile error
* Fix comment
Co-authored-by: Yan Li <liyan665@gmail.com>

- March 17, 2022 (2 commits)

Committed by TeFeng Chen

Committed by baoachun

- March 16, 2022 (2 commits)

Committed by Zuza
* Quantize elementwise mul op
* Parametrize elementwise functions
* Fix code formatting
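
As background for the quantization entry above: an int8 elementwise mul runs the multiply on quantized tensors and folds the input and output scales into one requantization step. The sketch below shows that arithmetic under a simple symmetric-scale assumption; the scale handling is illustrative, not the exact oneDNN or Paddle quantizer formula.

```python
import numpy as np

def quantize(x, scale):
    # float32 -> int8 with a symmetric per-tensor scale
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def int8_elementwise_mul(qx, qy, scale_x, scale_y, scale_out):
    # Multiply in int32, then requantize with the combined scale.
    prod = qx.astype(np.int32) * qy.astype(np.int32)
    requant = prod * (scale_x * scale_y / scale_out)
    return np.clip(np.round(requant), -128, 127).astype(np.int8)

x = np.random.uniform(-1, 1, (2, 3)).astype(np.float32)
y = np.random.uniform(-1, 1, (2, 3)).astype(np.float32)
sx = sy = sout = 1.0 / 127.0
q_out = int8_elementwise_mul(quantize(x, sx), quantize(y, sy), sx, sy, sout)
print(q_out * sout)  # close to x * y, up to quantization error
```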

Committed by Yulong Ao
* [Auto Parallel] Support the auto completion of while_op
* [Auto Parallel] Improve the completion algorithms
* [Auto Parallel] Fix bugs for ernie inference
* [Auto Parallel] Remove attrs which cannot be pickled
* [Auto Parallel] Make the dims_mappings of LoDTensorArray vars empty
* [Auto Parallel] Fix bugs for ernie inference in pipeline parallel
* [Auto Parallel] Remove unnecessary comments
* [Auto Parallel] Fix a bug in the CMakeLists
* [Auto Parallel] Use the newest APIs to write the unit test
* [Auto Parallel] Remove unnecessary statements

- March 15, 2022 (2 commits)

Committed by Jacek Czaja
* Prototype of third solution
* Compilation, comment, and lint fixes
* Update mkldnn conv_elementwise_add_fuse_pass UT
* NHWC changes to prelu (alpha dims)
* UT fixes
* Added NHWC support to BWD of prelu
* Reverted removal of resetting cu_layout in cache clearing
* Small changes
* Fixes after internal review

Committed by YuanRisheng
* Move activation op
* Adjust code format
* Fix compile bugs
* Fix CI bugs
* Code format adjustments
* Activate CI status
* Modify according to comments
* Move activation kernel
* Revert relu6
* Reduce add code
* Perfect use_phi_functor
* Complete func name
* Fix bugs when running CI
* Fix bugs when running infrt
* Modify infrt get kernel signature

- March 14, 2022 (1 commit)

Committed by Tomasz Socha
* Add elementwise add and activation fuse pass
* Fix copy elision
* More flexible pattern detector
* More flexible fusion pass
* Update lists for pass
* Add support for Pow operator
* Add support for more activation types
* Rename fusion pass
* First version of tests
* Dirty version of pass
* Polished version
* Update pbtxt
* Update names
* Use PADDLE_ENFORCE_EQ
* Save error message to variable
* WO for error checks
* Static style check
* Add missing 'activation_scale' attribute
* Add relu6 and sigmoid activations
* Fix fuse list formatting
* Sync filenames for fuse pass files
* Fix cmake after move
* Fix registration
* Fix pass name in tests
* Add missing activations to checker
* Working mul op
* Working sub
* Working add
* Remove pten includes
* Remove some forward declarations
* Remove includes
* Remove default kernels, then register default kernels
* Add check if post_ops attributes are available
* Code adjustment
* We have year 2022 not 2021...
* Fast review fixes
* Review fix
* Rename one_dnn -> onednn
* Style after review
* Fast and dirty fix for quantization
* Update tests
* Fix mkldnn_quantizer config
* Add Joanna's suggestion
* Check if operator is explicitly disabled on OneDNN
* Try to use unregistered attributes
* Test new framework
* Update test
Co-authored-by: jakpiase <jakpia21@gmail.com>
Co-authored-by: Sylwester Fraczek <sylwester.fraczek@intel.com>
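
To make the idea of this fuse pass concrete: it looks for an activation op that directly consumes the output of elementwise_add and folds it into the add node as a fused-activation attribute, so the graph keeps a single operator. Below is a deliberately tiny, self-contained Python model of that rewrite; the graph representation and attribute names are illustrative only, not Paddle's pattern detector.

```python
# Toy "graph": a list of op dicts; data flow is tracked by tensor name.
graph = [
    {"type": "elementwise_add", "inputs": ["x", "y"], "output": "t"},
    {"type": "relu", "inputs": ["t"], "output": "out"},
]

ACTIVATIONS = {"relu", "relu6", "sigmoid", "tanh"}  # illustrative subset

def fuse_add_activation(ops):
    """Fold an activation that directly consumes an elementwise_add into the add node."""
    fused, skip = [], set()
    for i, op in enumerate(ops):
        if i in skip:
            continue
        nxt = ops[i + 1] if i + 1 < len(ops) else None
        if (op["type"] == "elementwise_add" and nxt is not None
                and nxt["type"] in ACTIVATIONS
                and nxt["inputs"] == [op["output"]]):
            # Replace the add/act pair with one add node carrying an attribute.
            fused.append({"type": "elementwise_add",
                          "inputs": op["inputs"],
                          "output": nxt["output"],
                          "fuse_activation": nxt["type"]})
            skip.add(i + 1)
        else:
            fused.append(op)
    return fused

print(fuse_add_activation(graph))
# -> one elementwise_add node whose fuse_activation attribute is "relu"
```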

- March 11, 2022 (2 commits)

Committed by Sylwester Fraczek

Committed by Chen Weihang
* Remove needless deps in unittests
* Add GPU macro
* Fix other unittests
* Fix kernel name error
* Fix test_prepare_op
* Fix failed dygraph unittests
* Fix failed GPU tests
* Fix failed cinn tests
* Fix dropout tests

- March 10, 2022 (1 commit)

Committed by wawltor
* Add the infer shape meta for graph_send_recv
* Move the infershape code to another file

- March 8, 2022 (1 commit)

Committed by YuanRisheng
[Phi] Move Relu/Cos/Sin/Tan/Acos/Asin/Atan/Sinh/Cosh/Asinh/Acosh/Atanh kernels in Activation to Phi (#40175)
* Move activation op
* Adjust code format
* Fix compile bugs
* Fix CI bugs
* Code format adjustments
* Activate CI status
* Modify according to comments

- March 7, 2022 (1 commit)

Committed by Ming-Xu Huang
* Added cuBlasLtHandle_t to the device context.
* Added fused_gemm_epilogue op to leverage the cuBLASLt epilogue.
  1. Supports fusing Act(X*Y + bias); X's dims >= 2 and Y's dims should be 2.
  2. Act currently only supports ReLU (GeLU will be added in the future).
* Added UT for the fused_gemm_epilogue op.
* Added LinearAct pattern.
  1. Added LinearAct to graph_pattern_detector.* to define the pattern above.
  2. LinearAct detects act(elementwise_add(matmul_v2(x, w), bias)).
  3. act currently only supports ReLU (GeLU will be supported in the future).
* Added FuseGemmEpiloguePass to handle nn.Linear + Act{ReLU} fusion (GeLU will be supported in the future); only matmul_v2 from nn.Linear is supported.
* Added pybind for BuildStrategy.fuse_gemm_epilogue_.
* Added UT for fuse_gemm_epilogue_pass.
* GeLU support and EpilogueSingleton: added GeLU support to the fused_gemm_epilogue op, added EpilogueSingleton to cache the auxiliary pointer, and added related UTs.
* Renamed cublaslt_epilogue_op.* to gemm_epilogue_op.*.
* Added both train and infer patterns to LinearAct: support fwd graphs with grad ops linking to LinearAct, with related changes to fuse_gemm_epilogue_pass.
* Changed the CUDA requirement from 11.4 to 11.6 for fuse_gemm_epilogue_pass.
* Added identity-activation support to gemm_epilogue_op.
* Added Linear fusion (matmul_v2 + elementwise_add): added the pattern to LinearActPattern and the support to fuse_gemm_epilogue_pass.
* Renamed gemm_epilogue_op.* to fused_gemm_epilogue_op.*.
* Added fused_gemm_epilogue_grad op to support backward epilogue fusion, plus UTs.
* Changed an attribute name in fused_gemm_epilogue_grad_op for clarity.
* Allowed DX and DBias to be dispensable in the fused_gemm_epilogue_grad op.
* Added ElementwiseAdd+Matmul+Act graph pattern detection.
* Fused the backward of Linear(Act(x)) and Linear(x), with UTs for backward fusion of Linear(Act(x)).
* Completed the documentation of fused_gemm_epilogue_op arguments.
* Review follow-ups: pass some arguments by reference, remove redundant code, follow Google code style, keep 'const' usage consistent.
* Fixed the random seed of the Python UTs.
* Set compiling constraints for cuBLASLt: require CUDA 11.6+ and remove fuse_gemm_epilogue-related tests when CUDA < 11.6.
* Code review from Paddle: renamed the argument is_first_gemm to without_x_gradient for clarity and applied PADDLE_THROW in fused_gemm_epilogue_op.
* Removed EpilogueSingleton: applied ReserveSpace instead to pass auxiliary pointers between FWD and BWD.
* Fixed a logical error and enhanced UTs: added act-op count checks and fixed fusing the backward of ReLU(Linear(X)); TODO: solve GeLU fusion issues.
* Fixed Linear and GeLU fusion issues: adjusted the graph pattern detector to fit Linear with either GeLU or ReLU, and allowed negative values in the UT data range.
* Removed fused_gemm_epilogue_op.h.
* Renamed namespace pten to phi.
* Renamed fused_gemm_epilogue_op arguments: bias -> Bias, out -> Out, reserve_space -> ReserveSpace.
* Made EpiloguePassActivationCache a local variable: removed its singleton and passed it as an argument to each pass function.
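
To summarize the math this fused op covers in a single cuBLASLt call: Act(X @ Y + bias), where X can have more than two dimensions, Y is 2-D, and Act is ReLU, GeLU, or the identity. A minimal NumPy sketch follows; the flattening of X's leading dimensions and the tanh-approximate GELU are illustrative assumptions, not the exact kernel behavior.

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU, used here only for illustration
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

# One fused call computes Act(X @ Y + bias).
def gemm_epilogue(x, y, bias, act="relu"):
    out = x.reshape(-1, x.shape[-1]) @ y + bias   # X may have >2 dims, Y is 2-D
    if act == "relu":
        out = np.maximum(out, 0.0)
    elif act == "gelu":
        out = gelu(out)
    # act == "none": identity epilogue, i.e. a plain Linear
    return out.reshape(*x.shape[:-1], y.shape[1])

x = np.random.randn(2, 3, 8).astype(np.float32)
y = np.random.randn(8, 4).astype(np.float32)
bias = np.random.randn(4).astype(np.float32)
print(gemm_epilogue(x, y, bias, act="gelu").shape)  # (2, 3, 4)
```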

- March 3, 2022 (1 commit)

Committed by hong
* Add bn cpu version
* Move batch norm to pten
* Fix func::tranpose dependency bug
* Fix compile bugs
* Fix use_op batch_norm bug
* Fix cudnn bn add relu test
* Fix pten context build and double grad bug
* Remove useless code
* Add batch norm gpu fp16 support
* Fix test bn op bug
* Remove output dtype set
* Fix apply-pass-to-program bug
* Fix rocm bug
* Revert operator to develop
* Fix pre_commit
* Fix static check error
* Resolve conflicts
* Analyze batch norm bug; revert batch norm op
* Fix nan/inf and speed bug
* Test expand op
* Polish code
* Change mutable data to ctx alloc
* Make format same as CI

- March 1, 2022 (1 commit)

Committed by wenbin
* Remove
* Pass
* More passes

- February 28, 2022 (1 commit)

Committed by Chen Weihang
* Rename pten_utils to phi_utils
* Rename pten_utils target
* Rename Pten to Phi
* Replace pten with phi
* Resolve conflict

- February 25, 2022 (1 commit)

Committed by Chen Weihang
* Support cudnn kernel moving
* Polish cmake rules
* Add unittest for coverage
* Remove orig kernel
* Remove softmax cudnn kernel
* Fix softmax test failure
* Fix npu func error
* Resolve conflict
* Rename gpu dnn kernels
* Fix name rule error
* Fix compile error
* Update fp16 namespace

- February 24, 2022 (1 commit)

Committed by jakpiase
* Fix for split bf16 inference
* Added test for the pass
* Changes after review

- February 22, 2022 (2 commits)

- February 21, 2022 (1 commit)

Committed by chenjian
* Fix RecordEvent interface
* Modify default level to 4
* Update interface use
* Add const default trace level
* Update RecordEvent interface usage
* Update operator.cc
* Update part 2
* Update part 1
* Fix include of profiler.h header in ps server
* Fix profiler.h header

- February 20, 2022 (1 commit)

Committed by Chen Weihang
* Rename pten dir to phi
* Rename namespace to phi
* Rename infrt pten dir to phi
* Resolve conflict
* Rename pten to phi in cmake
* Revert all infrt changes
* Change needed files
* Fix infrt failure
* Fix inference failure

- February 19, 2022 (1 commit)

Committed by Aurelius84
* Unify paddle/pten::framework::ddim into pten::ddim
* Fix paddle namespace
* Compile successfully
* Fix npu src file
* Fix tensorrt compiler error
* Fix test file conflict
* Fix mlu file conflict
* Fix cinn header file conflict
* Resolve remaining conflicts