- 10 April 2023, 14 commits
-
-
Submitted by JZ-LIANG
* unique id for mesh * rng ctrl * support dropout * register op * adopt for recompute * update unittest * support pp
-
Submitted by LyndonKong
* add PoissonNLLLoss API * update unittests * Fix poisson_nll_loss init and update data type support * remove type comment * Update doc string * Fix doc string error * Fix doc string math equation format * Add float16 and bfloat16 support
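This entry adds a user-facing loss API. A minimal usage sketch follows; the parameter names are an assumption based on the commit notes, not a verified signature:

```python
# Sketch: Poisson negative log-likelihood loss, assuming the new
# paddle.nn.PoissonNLLLoss takes log-space predictions by default.
import paddle

loss_fn = paddle.nn.PoissonNLLLoss(log_input=True, reduction='mean')
pred = paddle.randn([5, 2])                # log-rate predictions
label = paddle.poisson(paddle.exp(pred))   # Poisson-distributed targets
print(loss_fn(pred, label))
```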
-
Submitted by Zhang Ting
* support set master_grad * move register_hook to auto_cast * update unittest * fix fp16 test * update for review comments
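A hypothetical sketch of enabling the new master_grad switch; that it is a flag on paddle.amp.decorate is an assumption inferred from the commit notes, not verified documentation:

```python
# Sketch: keep FP32 "master" gradients while training under AMP.
import paddle

model = paddle.nn.Linear(4, 4)
opt = paddle.optimizer.AdamW(parameters=model.parameters())
# master_grad=True (assumed flag) accumulates gradients in FP32
# for numerically stable parameter updates.
model, opt = paddle.amp.decorate(models=model, optimizers=opt,
                                 level='O2', master_grad=True)

with paddle.amp.auto_cast(level='O2'):
    loss = model(paddle.randn([2, 4])).mean()
loss.backward()
opt.step()
```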
-
Submitted by xiaoxiaohehe001
* Support two inputs of multihead attention named qk_multihead
-
Submitted by HongyuJia
* [Opt Performance] Optimize custom operator performance, reconstruct python API auto-gen, add cache and use const inference * opt AutoGradMeta implementation * remove profiler codes * fix unit test * change year, 2021->2023 * fix int64_t parse bug
-
Submitted by cyberslack_lee
-
Submitted by lzydev
* autogen softmax_with_cross_entropy * fix error in softmax_with_cross_entropy version
-
Submitted by kangguangli
* add strategy force_sequential_run * remove flag * nine follow-up "fix" commits
-
Submitted by Vvsmile
* adjust default tolerance of output and grad * fix a bug in the grad of OpTest * fix the type of setting default value in optest, both forward and backward * add default * fix test_sum_op * adjust tolerance * fix the tolerance of eager * add bf16 and fp16 to the activation tests * remove some fixes * fix activation * fix fp16 * fix gelu * fix the activation tests * add bfloat16 specialization to singrad and cosgrad * fix bugs * fix bugs * add unittest * add skip * add fp/bf to rrelu/rrelu_grad * git add rrelu * fix bugs
-
Submitted by Wilber
-
Submitted by qizhaoaoe
* add fp16 and bf16 support for instance_norm * fix /= operator which does not support bf16 * fix instance_norm_grad kernel and unittests. * fix fp32 unittests. * fix instance_norm_kernel and unittests. * fix instance_norm_grad_kernel and unittest threshold. * add fp16/bf16 for instance_norm_grad_grad op. * add bf16 dtype check. * fix conflicts. * fix cpu support for fp32 op and fix type in instance_norm_grad_kernel. * fix type in instance_norm_kernel. * fix bf16 outputs in unittests and refine codes. * fix dx computation. * delete unused params and header includes. * add fp16/bf16 for static graph. * fix device condition for instance_norm op. * fix instance_norm_grad_grad and bf16 op tests. * fix op_test so that the grad of bf16 can be compared with fp32. * remove updates. * add self-defined grad.
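A minimal sketch exercising the new low-precision path (assumes a GPU build; per the commit notes the CPU kernel remains fp32-only):

```python
# Sketch: instance_norm now dispatches float16/bfloat16 kernels on GPU.
import paddle
import paddle.nn.functional as F

x = paddle.randn([2, 3, 8, 8]).astype('float16')
out = F.instance_norm(x)
print(out.dtype)  # paddle.float16
```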
-
Submitted by Roc
-
Submitted by WangZhen
-
Submitted by 张春乔
* mv WITH_ASCEND_CL * mv WITH_ASCEND * rollback * remove WITH_ASCEND * remove WITH_ASCEND
-
- 09 April 2023, 4 commits
-
-
Submitted by cyber-pioneer
-
Submitted by Chitsing KUI
-
Submitted by ShenLiang
* add seed control * fix bug
-
Submitted by shaojie_wang
-
- 08 April 2023, 2 commits
-
-
Submitted by kangguangli
* add strategy force_sequential_run * five follow-up "fix" commits
-
Submitted by 张春乔
* mv WITH_ASCEND_CL * mv WITH_ASCEND * rollback
-
- 07 April 2023, 12 commits
-
-
Submitted by Wangzheee
* remove matrix_multiply unittest
-
Submitted by kangguangli
* remove run_program * remove FLAGS_USE_STANDALONE_EXECUTOR
-
Submitted by cyberslack_lee
-
Submitted by Zhang Zheng
* [AMP OP&Test] Fix the logic of calling infer_dtype func in op test * add fp16
-
Submitted by WangZhen
-
Submitted by Guanghua Yu
-
Submitted by TaoTao Li
fix merge conflicts
-
Submitted by risemeup1
* remove zeros * apply gcc12 to py3 * fluid api clear * fluid api clean
-
Submitted by feifei-111
* fix dy2s grad name parse * pre-commit * bug fix * Fix grad/ error * Format code --------- Co-authored-by: 0x45f <wangzhen45@baidu.com>
-
Submitted by Roc
* fix mkdir * update
-
Submitted by Happyd99
* [Test MV] standalone_executor * update as * update as * update codestyle
-
Submitted by Wang Xin
-
- 06 April 2023, 8 commits
-
-
Submitted by ceci3
-
Submitted by Sławomir Siwek
* replace matmul with matmul_v2 in fuse passes * Remove fusion logic from matmul * removing fusion methods * add proper name * adjust namespaces * clean attrs in python tests * delete checkpoint and restore matmul version * remove unused code * matmul and reshape/transpose fuses migrated * split MatmulOneDNN headers * fuse activation and eltwise_add * add fuse_activation * matmul_transpose_reshape/reshape_transpose_matmul * matmul + elementwise_add (fused) * activation temporary modification * restore matmul(v1) version 0 * merge newest develop * remove dependency from other PR * revert pbtxt * remove placeholders from matmul_v2 * add description in OPMaker * remove matmul_v2_op.h and all dependencies * remove dims changing in base op * add possibility to fuse already fused_matmul * restart broken CI * Empty-Commit * revert matmul_utils.h * codestyle * adjust imports * add pbtxt file * 100% matmul unit tests coverage * trigger CI with minimal changes to develop * adjust changes to develop * add fused_matmul op * inherit base ops * add "v2" * move OPMaker * Gradually add fused_matmul files * second batch of fused_matmul changes * split infershapes of matmul_v2 and fused_matmul * merge code from other PR * 2023 * inherit fused_matmul from matmul_v2 * Update paddle/phi/backends/onednn/onednn_reuse.h Co-authored-by: Tomasz Socha <tomasz.socha@intel.com> * Update paddle/phi/kernels/fusion/onednn/fused_matmul_kernel.cc Co-authored-by: Tomasz Socha <tomasz.socha@intel.com> * resolve conflicts * codestyle * simplify isgemmlinear * 2023 * remove import * reuse methods * matmul_v2_mkldnn cleanup * simplify ExecuteMatMulV1Grad * matmul refactored * fc * SetOutMemDescWithLogicalLayoutFusesSupport * matmul_v2 * alpha support * group repetitive funcs * matmul utils * execute matmul methods * restore registered kernel names * split header and impl files * remove double negatives * reduce number of modified files * adjust ExecuteMatmul * add scales for ut * dates * limit number of modified files * fluid imports * remove alpha * codestyle --------- Co-authored-by: Tomasz Socha <tomasz.socha@intel.com>
-
Submitted by Sonder
* add kernel functions * update kernel functions * update function parameter names * create codes for gpu device * adjust file locations * fix include error * move dependent files to phi/ * restore fused_attention_op.cu * fix dependence errors * fix dependence errors * fix include error * fix all dependence errors [build success] * remove useless include * recover useless include * use phi::ToNCCLDataType * fix namespace * update new register code * fix error in fused_gemm_epilogue_utils * fix error in FusedAttentionKernel param * finish fused_attention register code [build success] * add paddle::optional * add sig file * fix build error * fix an include error * update CMakeLists * fix parameter sequence * add include file * update #if before include * fix grammar error * update codes for DropoutParam * remove const cast * trans some fluid api to phi api * add #if * update test code * update test codes * recover test codes * trans fused_attention to fluid * move #endif to end * move #endif * delete useless files * use fused attention utils and recover random seed * remove fluid include in phi
-
Submitted by scotty
-
Submitted by Nyakku Shigure
-
Submitted by Zhang Zheng
-
Submitted by Kang Zhao
* feat: add relu composite rule * feat: add relu composite rule, maximum op * feat: add relu composite rule, maximum op * feat: add relu composite rule, polish comments * feat: add relu composite rule, polish comments * feat: add relu composite rule, add python api of relu * feat: add relu composite rule, commit hook * fix: maximum type error & ban cinn test * fix: maximum input sequence bugs * resolve conflicts * fix: code style bugs * add: relu fp16 test * feat: add rsqrt composite rule * feat: add rsqrt composite rule * resolve conflicts of composite rule * fix: delete check eager * feat: add roll grad composite rule * fix minus shift * fix test roll op
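The composite rule lowers relu into primitive ops; per the commit notes it is built on maximum. An illustrative sketch of the decomposition (the helper name is hypothetical, not Paddle's internal registration API):

```python
# Sketch: relu expressed as a maximum against zero, which is the
# decomposition the composite rule encodes.
import paddle

def relu_composite(x):  # hypothetical helper, for illustration only
    return paddle.maximum(x, paddle.zeros_like(x))

x = paddle.to_tensor([-1.5, 0.0, 2.0])
assert bool((relu_composite(x) == paddle.nn.functional.relu(x)).all())
```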
-
Submitted by jiangcheng
* [CINN] disable CINN test_mean_op unittest to pass CINN CI * disable test_mean_op to pass CI
-