- 21 Mar 2023 (9 commits)
-
Committed by iSerendipity
* move DataType from paddle::experimental to phi
* convert namespace
* convert namespace
* convert namespace
* clarify namespace
* convert more datatype
* Revert "convert more datatype" (reverts commit 083b462959e6a22d4d8767707b628b95b396642e)
* convert more in auto_code_generator
* fix conflicts for XPU
* fix namespace conflicts
* fix errors
* Revert "fix errors" (reverts commit f9d9958b54ee32141112274c8a5c3c381ab0f876)
* fix errors
* fix formatting
-
Committed by ShenLiang
* fix flash_attention
* Update mp_layers.py
-
Committed by Zhang Zheng
-
Committed by zhouweiwei2014
* [Zero-Dim] Support output 0D for argmin/argmax/median/kthvalue/mode/equal_all/allclose
* fix CI
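A minimal sketch of the new 0-D behavior (assuming a Paddle build that includes this change): full reductions now return a 0-D tensor of shape [] rather than a 1-element 1-D tensor.

    import paddle

    x = paddle.to_tensor([1.0, 3.0, 2.0])

    # argmax over all elements now yields a 0-D tensor (shape []),
    # not a 1-element 1-D tensor (shape [1]).
    idx = paddle.argmax(x)
    print(idx.shape)  # expected: []

    # equal_all reduces to a single 0-D boolean tensor.
    same = paddle.equal_all(x, x)
    print(same.shape)  # expected: []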
-
Committed by Siming Dai
* add fp16 unittest
* support bf16 and add unittest
* fix according to review
-
Committed by houj04
* [XPU] add fp16 support for compare ops.
* fix ci.
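A short usage sketch of what this enables (assumes an XPU build of Paddle; on other builds the default device applies):

    import paddle

    paddle.set_device("xpu")  # assumption: an XPU build is available

    a = paddle.to_tensor([1.0, 2.0], dtype="float16")
    b = paddle.to_tensor([2.0, 2.0], dtype="float16")

    # Compare ops now accept fp16 inputs on XPU and return bool tensors.
    print(paddle.greater_than(a, b))  # [False, False]
    print(paddle.equal(a, b))         # [False, True]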
-
Committed by zhouweiwei2014
-
Committed by Bo Zhang
* with printf
* add DropOutNdForwardKernel
* PR comment
-
Committed by zhouweiwei2014
[Zero-Dim] Support 0D for numel/rank/size/optimizer/create_parameter/create_global_var, fix some usage to adapt 0D (#51566)
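For illustration, a minimal sketch of the 0-D results from #51566 (assuming a build with the change):

    import paddle

    x = paddle.rand([3, 4])

    # numel and rank now return 0-D tensors (shape []) instead of shape [1].
    print(paddle.numel(x).shape)  # expected: []
    print(paddle.rank(x).shape)   # expected: []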
-
- 20 Mar 2023 (20 commits)
-
Committed by HappyHeavyRain
-
Committed by 201716010711
-
Committed by chenxujun
-
Committed by YuanRisheng
* remove init
* delete fluid in context pool
* fix custom op bugs
* fix profiler bugs
* fix ci bugs
* fix Windows compile bugs
* fix Windows bugs
* fix Windows bugs
-
Committed by xiaoguoguo626807
* Add flatten composite rule
* get the right xshape and pass func test
* add cinn unit test
* Remove cinn test, wait for it to be added after repair
* add comp test to test_flatten_contiguous_range_op.py
* remove func test on composite_ops
* Add comments to maybe_wrap_dim func
* remove commented code
* fix the problem with the 0D tensor case
* add flatten split rule comment
* fix syntax issues
* block flatten on resnet_prim_cinn
* init change
* tmp commit
* add layer_norm InferMeta check
* cast type modify
* [CINN] Enhance CacheKey hash logic by considering input dtypes (#50557): add unittest; fix typos; fix map.at and find; fix CINN cache key structure; use an ordered map for attributes; add test per review advice
* [prim] enable dygraph_to_static to support custom_vjp
* Pr 50885 (#7): fix code in a dy2static-friendly way; add hooker for prim
* fix cast prim and vjp dtype mapping error bug
* recover
* big tol
* Cxx prim custom vjp (#8): fix dy2static CI errors
* [Prim] enable whitelist and blacklist for custom_vjp
* debug log; clear log
* less memory; recover utils
* modify threshold value
* skip layer_norm for test_bert; back to bert success state
* add epsilon
* delete unnecessary compute
* modify amp dtype
* delete sqrt check and fp16

Co-authored-by: xuyongsheng <xuyongsheng@baidu.com>
Co-authored-by: xysheng-baidu <121540080+xysheng-baidu@users.noreply.github.com>
Co-authored-by: Aurelius84 <zhangliujie@baidu.com>
Co-authored-by: jiangcheng <thisjiang@qq.com>
Co-authored-by: cxxly <chenxx_id@163.com>
Co-authored-by: xiongkun <807377414@qq.com>
-
Committed by duanyanhui
-
Committed by Zhang Na
-
Committed by limingshu
* optimization for fused linear op
* fix code format
* optimization for linear fused forward
* merge with develop
* fix bugs for gemm epilogue
* wrap cublasLt epilogue type with enum
* final fix before code reviewing
* fix missed fusedType typo
* fix code according to review suggestions
* fix windows ci error
* change location of MatmulPlanner
* add some changes for compiler error fix
-
Committed by iSerendipity
* fix -Werror in roi_align_grad_kernel
* adopt a better way
-
Committed by zyfncg
* register some custom kernels
* fix bug
-
Committed by HongyuJia
-
Committed by Weilong Wu
-
Committed by mayang002
-
Committed by tianshuo78520a
-
Committed by FormlessUnit
shape support bf16
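A tiny sketch of the behavior (assuming a build where the shape kernel registers bfloat16):

    import paddle

    # astype to bfloat16 assumes the running device supports that dtype
    x = paddle.ones([2, 3]).astype("bfloat16")
    print(paddle.shape(x))  # a 1-D int32 tensor holding [2, 3]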
-
Committed by zhouweiwei2014
-
Committed by ykkk2333
* add xpu tile and concat kernel int64, test=kunlun
* fix previous xpu dataloader bug, and add maxpool3dgrad special dim support, test=kunlun
-
Committed by Huang Jiyi
-
Committed by Jiabin Yang
-
Committed by HongyuJia
* [Tensor Operants & Prim-Relevant] Tensor supports compare operants
* fix dependence of test_comp_static
* fix unit test
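The change itself is on the C++ tensor-operants side, but the semantics mirror Python elementwise comparison; a sketch of the expected behavior (Python shown for brevity):

    import paddle

    x = paddle.to_tensor([1.0, 2.0, 3.0])
    y = paddle.to_tensor([3.0, 2.0, 1.0])

    # Elementwise comparison yields a bool tensor; composite (prim) rules
    # can now express the same comparisons via C++ tensor operants.
    print(x > y)   # [False, False, True]
    print(x == y)  # [False, True, False]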
-
- 19 Mar 2023 (2 commits)
- 17 Mar 2023 (7 commits)
-
Committed by denglianbin
* finish task
* fix some issues
* fix error
* change unittest: zeroDim
-
Committed by Infinity_lee
-
Committed by PuQing
* add multinomial output defs
* fix registration on GPU
-
Committed by Zhang Zheng
* [AMP OP&Test] Support float & bfloat16 when using cub
* fix compile error
* fix
* fix rocm compile error
-
Committed by chenxujun
-
Committed by gouzil
* [phi][jit] rm Softmax StrideScal
* [phi][jit] rm kStrideScal
* [phi][jit] fix Softmax clean omission
* [phi][jit] fix Softmax clean omission
* [phi][jit] fix StrideScal clean omission
* [phi][jit] fix mkl SoftmaxKernel clean omission
* [phi][jit] fix test error
* [phi][jit] fix test error
* [phi][jit] rm NCHW16CMulNC
* [phi][jit] fix test error
* [phi][jit] rm HSum HMax
* [phi][jit] fix test error
* [phi][jit] rm StrideASum
* add AUTHORS.md
* [phi][jit] fix test error
-
Committed by cyber-pioneer
* add bn vjp
* fix example
* fix code
* fix code
* fix cinn case
* fix code
* fix example
* fix code
* fix example
* fix example
-
- 16 Mar 2023 (2 commits)
-
Committed by HongyuJia
* init unit test commit, contains register thinking
* support inplace
* get inplaced x.grad
* Try to support inplace and hook at the same time
* Support inplace, need debug
* Support inplace successfully
* Inplace uses Tensor&, consistent with Tensor*
* fix MapPlainOutputs bug
* fix double grad inplace error
-
Committed by Chitsing KUI
* rename flash_attn_raw to flash_attn_unpadded
* fix static api
* fix static return
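A hedged usage sketch for context: the log does not show flash_attn_unpadded's signature, so the high-level flash_attention wrapper is used below; its argument list here is an assumption.

    import paddle
    from paddle.nn.functional.flash_attention import flash_attention

    # Shapes: [batch, seqlen, num_heads, head_dim]; flash attention kernels
    # generally expect fp16/bf16 inputs on GPU (assumption).
    q = paddle.randn([2, 128, 8, 64], dtype="float16")
    out, _ = flash_attention(q, q, q, dropout=0.0, causal=True)
    print(out.shape)  # [2, 128, 8, 64]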
-