- Feb 28, 2023 (9 commits)
Committed by shentanyue
Committed by yuehuayingxueluo
Committed by 张春乔
  * add unittest for nn.DropOut2D
  * add fp16
  * add fp16 in docs of temporal_shift_op.cc
  * Update test_dropout_op.py
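For context, a minimal sketch (not taken from the commit) of the paddle.nn.Dropout2D behaviour such a unit test exercises; the fp16 branch is an assumption and would need a CUDA device:

```python
import paddle

# Not from the commit: illustrative only, assuming a recent Paddle build.
x = paddle.rand([2, 3, 4, 4], dtype='float32')   # NCHW input
drop = paddle.nn.Dropout2D(p=0.5)

y_train = drop(x)   # training mode: whole channels are randomly zeroed
drop.eval()
y_eval = drop(x)    # eval mode: dropout is a pass-through, y_eval equals x

if paddle.is_compiled_with_cuda():
    # assumed fp16 path, mirroring the "add fp16" item above (GPU only)
    y_fp16 = paddle.nn.Dropout2D(p=0.5)(x.astype('float16'))
```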
Committed by zhoutianzi666
  * forbid the tensorrt_engine op's output from being a persistable var
Committed by taixiurong
Committed by Yuanle Liu
Committed by wenbin
  * fix concat bug
  * recommit for ci
Committed by niuliling123
Committed by Jiabin Yang
  * support transpose and reshape
  * support reshape, transpose, cast vjp
  * merge develop
  * recover unused file
  * remove prim base
  * support problem
  * remove additional status setting
  * remove additional status setting
  * fix ut
  * fix ut
  * fix ut
  * fix no grad branch
  * add more test
  * disable fp16 in cpu
  * fix test
- Feb 27, 2023 (18 commits)
Committed by jiangcheng
Committed by zyfncg
  * add interface for getting registered phi kernels
  * change KernelType to KernelKey
  * add test
  * refactor code
Committed by houj04
  * [XPU] add fp16 support for shape op.
  * [XPU] add fp16 support for lookup_table_v2 op.
  * update approval list: add qingshu's id.
Committed by Zhang Jun
Committed by 张春乔
  * remove utils
  * remove utils
  * remove utils
  * remove utils
  * Update get_data_from_tensor.h
  * Update rnn_functor.h
  * Update rnn_grad_kernel.cu.cc
  * Update rnn_kernel.cu.cc
  * Update rnn_kernel.cc
  * Update rnn_grad_kernel.cu.cc
  * Update rnn_functor.h
  * Update rnn_kernel.cu.cc
  * Update rnn_kernel.cc
  * remove utils
  * Update rnn_functor.h
  * remove utils
  * remove utils
  * remove utils
  * remove utils
  * remove utils
  * Update rnn_functor.h
  * Update unsqueeze_op.h
  * Update utils.h
  * roll back
  * Update tensor_utils.h
  * Update tensor_utils.h
  * Update tensor_utils.h
  * Update tensor_utils.h
  * Update tensor_utils.h
  * use TensorToVector
  * use TensorToVector
  * use TensorToVector
  * use TensorToVector
  * use TensorToVector
  * Update rnn_kernel.cc
  * Update rnn_grad_kernel.cc
  * Update rnn_functor.h
  * Update rnn_grad_kernel.cu.cc
  * Update rnn_kernel.cu.cc
  * Update rnn_functor.h
  * Update rnn_grad_kernel.cu.cc
  * Update rnn_kernel.cu.cc
  * Update rnn_functor.h
  * Update rnn_grad_kernel.cu.cc
  * Update rnn_kernel.cu.cc
  * add TensorToVector
  * roll back
  * Update tensor_utils.h
  * Update rnn_functor.h
  * Update rnn_grad_kernel.cu.cc
  * Update tensor_utils.h
  * Update rnn_kernel.cu.cc
  * Update rnn_grad_kernel.cc
  * Update rnn_kernel.cc
  * Update rnn_grad_kernel.cu.cc
  * Update rnn_kernel.cu.cc
  * Update rnn_grad_kernel.cc
  * Update rnn_kernel.cc
  * TensorCopySync to phi::Copy
  * fix codestyle
  * rnn_kernel.cc: add ;
  * replace all GetDataFromTensor with phi::GetVectorFromTensor
  * delete include of util.h
Committed by Wang Bojun
  * add sm version check
  * use GetGPUComputeCapability
Committed by HongyuJia
  * [Tensor Operants & Prim] Tensor pow API uses elementwise_pow
  * unittest change to fill_constant+elementwise_pow
Committed by HongyuJia
  * [Error Msg] Polish error message when GPU kernel not found
  * Only test in GPU environment
Committed by Bo Zhang
  * conflict
  * add UpdateSliceAttrs
Committed by gaoziyuan
Committed by csy0225
Committed by jameszhang
  * [kunlun] support reduce_scatter
  * uncomment unittest
  * update xccl to 1.0.10
Committed by Yiqun Liu
Committed by zhouweiwei2014
Committed by zhangbo9674
  * add TypeUniquer and IrContext
  * refine include code
  * add Type, TypeBase
  * add built-in type
  * add built-in Float32Type
  * refine ut
  * refine code
  * refine code
  * delete type_base
  * rename ImplType to StorageType
  * rename ImplType to StorageType
  * add macros util for register type
  * add macros util for register type
  * refine name
  * refine name
  * change storage manager
  * add multi_thread for ir_ctx
  * rwlock_2_spinlock, add REGISTER_TYPE_2_IRCONTEXT
  * DECLARE_TYPE_UTILITY_FUNCTOR
  * refine ircontext singleton
  * del destructor for ParametricStorageManager
  * refine code
  * Add necessary logs for debugging
  * refine ir_context instance
  * refine type get interface
  * refine code by comment
Committed by wangshengxiang
  * [XPU] bind op scatter_nd_add
  * [XPU] add more data types for ops: clip, transpose2 & assign_value
Committed by shaojie_wang
  * register bfloat16 datatype for squared l2 norm
  * register bfloat16 datatype for softmax with upper triangular mask
  * register bfloat16 for tril triu cuda kernel
Committed by wangzhen38
  * [mv fleet] mv fleet to distributed
  * [mv fleet] for ci
  * [mv fleet] for ci
  * [mv fleet] solve ci of version
- Feb 26, 2023 (2 commits)
Committed by limingshu
  * implementation of matmul using cublasLt instead of cublas
  * Update matmul_kernel_impl_via_blasLt.h
  Co-authored-by: zhangbopd <1299246947@qq.com>
  Co-authored-by: Bo Zhang <105368690+zhangbopd@users.noreply.github.com>
  Co-authored-by: Liu Yiqun <liuyiqun01@baidu.com>
Committed by Yiqun Liu
  * Enable matmul + bias fusion in fused_gat_attention.
  * Add a variable to control whether to use fused matmul + bias.
- Feb 25, 2023 (1 commit)
Committed by zyfncg
  * rename elementwise_heaviside to heaviside
  * delete __init__.py
  * fix bug
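For context, a minimal sketch (not taken from the commit) of the public paddle.heaviside API whose backing kernel is renamed here from elementwise_heaviside:

```python
import paddle

# Not from the commit: illustrative only, assuming a Paddle build that
# already exposes paddle.heaviside.
x = paddle.to_tensor([-1.5, 0.0, 2.0])
y = paddle.to_tensor(0.5)          # value returned where x == 0
out = paddle.heaviside(x, y)       # -> [0.0, 0.5, 1.0]
print(out)
```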
- Feb 24, 2023 (10 commits)
Committed by yunyaoXYY
Committed by zhoutianzi666
  * allow fallback to fp16 when int8
  * refine code
  * refine code
  * refine code
Committed by Sławomir Siwek
  * ConvertToFusedOp
  * change static to inline
  Co-authored-by: Tomasz Socha <tomasz.socha@intel.com>
Committed by niuliling123
Committed by Jiabin Yang
  * change amp with to_prim
  * fix prim amp
  * fix rules
  * fix linear
  * add amp test
  * add test
  * disable this test on cpu
  * disable this test on cpu
  Co-authored-by: cyber-pioneer <chenzhuo@tju.edu.cn>
Committed by Charles-hit
Committed by Yuanle Liu
Committed by HappyHeavyRain
  * support 'backend' in static ops
  * change bitwise_xx comment in python
  * change bitwise_xxx comment in python
  * change 'backend' and 'data_type' in GetExpectedKernelType
Committed by YuanRisheng
Committed by xiaoguoguo626807
  * support prim test in OpTest
  * fix cmake
  * fix op test
  * fix test_input_spec
  * disable cinn in reduce_sum unit test
  * add bfloat16 dtype for sum
  * add approve rules
  * polish code
  * add clear jit program function
  * convert grad out from tensor to numpy
  * remove unnecessary code
  * add only_prim flag
  * fix flag
  * fix op test
  * add attr
  * fix optest comp inplace error
  * fix op test
  * fix op test with guard
  * add initialization of check_comp flag
  * fix comp inplace error in op test
  * rename check_comp with check_prim and add bfloat16 dtype convert
  * rename comp_op_type to prim_op_type
  * rename comp to prim
  * remove useless code
  * skip ci check for only prim
  * add no_grad_vars and grad_outputs in prim test
  * fix var_dict
  * fix op test for only_prim
  * fix dy2static bugs
  * polish some code
  * temp
  * modify op test
  * except cinn test
  * modify bfp16
  * modify pad grad
  * add pad_grad dtype
  * start cinn part
  Co-authored-by: Charles-hit <wanghao107@baidu.com>