- 20 Mar 2023, 3 commits

Committed by xiaoguoguo626807
* Add flatten composite rule
* get the right xshape and pass func test
* add cinn unit test
* Remove cinn test, wait for it to be added after repair
* add comp test to test_flatten_contiguous_range_op.py
* remove func test on composite_ops
* Add comments to maybe_wrap_dim func
* remove commented code
* fix the problem with 0D tensor case
* add flatten split rule comment
* fix syntax issues
* block flatten on resnet_prim_cinn
* init change
* tmp commit
* add layer_norm InferMeta check
* cast type modify
* [CINN]Enhance CacheKey hash logic by considering input dtypes (#50557)
* [CINN]Enhance CacheKey hash logic by considering input dtypes
* add unittest
* fix typo
* fix typo
* fix map.at
* fix find
* fix test
* fix cinn cache key structure realize
* using ordered map for attributes
* add test by review advice
---------
Co-authored-by: jiangcheng <thisjiang@qq.com>
* [prim] enable dygraph_to_static to support custom_vjp
* Pr 50885 (#7)
* [CINN]Enhance CacheKey hash logic by considering input dtypes (#50557)
* [CINN]Enhance CacheKey hash logic by considering input dtypes
* add unittest
* fix typo
* fix typo
* fix map.at
* fix find
* fix test
* fix cinn cache key structure realize
* using ordered map for attributes
* add test by review advice
---------
Co-authored-by: jiangcheng <thisjiang@qq.com>
* [prim] enable dygraph_to_static to support custom_vjp
* fix code in a dy2static-friendly way.
* [dystatic] add hooker for prim
---------
Co-authored-by: Aurelius84 <zhangliujie@baidu.com>
Co-authored-by: jiangcheng <thisjiang@qq.com>
Co-authored-by: cxxly <chenxx_id@163.com>
* [prim] enable dygraph_to_static to support custom_vjp
* fix cast prim and vjp dtype mapping error bug
* recover
* big tol
* [CINN]Enhance CacheKey hash logic by considering input dtypes (#50557)
* [CINN]Enhance CacheKey hash logic by considering input dtypes
* add unittest
* fix typo
* fix typo
* fix map.at
* fix find
* fix test
* fix cinn cache key structure realize
* using ordered map for attributes
* add test by review advice
---------
Co-authored-by: jiangcheng <thisjiang@qq.com>
* [prim] enable dygraph_to_static to support custom_vjp
* Pr 50885 (#7)
* [CINN]Enhance CacheKey hash logic by considering input dtypes (#50557)
* [CINN]Enhance CacheKey hash logic by considering input dtypes
* add unittest
* fix typo
* fix typo
* fix map.at
* fix find
* fix test
* fix cinn cache key structure realize
* using ordered map for attributes
* add test by review advice
---------
Co-authored-by: jiangcheng <thisjiang@qq.com>
* [prim] enable dygraph_to_static to support custom_vjp
* fix code in a dy2static-friendly way.
* [dystatic] add hooker for prim
---------
Co-authored-by: Aurelius84 <zhangliujie@baidu.com>
Co-authored-by: jiangcheng <thisjiang@qq.com>
Co-authored-by: cxxly <chenxx_id@163.com>
* [prim] enable dygraph_to_static to support custom_vjp
* fix cast prim and vjp dtype mapping error bug
* Cxx prim custom vjp (#8)
* [CINN]Enhance CacheKey hash logic by considering input dtypes (#50557)
---------
Co-authored-by: jiangcheng <thisjiang@qq.com>
* [prim] enable dygraph_to_static to support custom_vjp
* Pr 50885 (#7)
* [CINN]Enhance CacheKey hash logic by considering input dtypes (#50557)
* [CINN]Enhance CacheKey hash logic by considering input dtypes
---------
Co-authored-by: jiangcheng <thisjiang@qq.com>
* [prim] enable dygraph_to_static to support custom_vjp
* fix code in a dy2static-friendly way.
* [dystatic] add hooker for prim
---------
Co-authored-by: Aurelius84 <zhangliujie@baidu.com>
Co-authored-by: jiangcheng <thisjiang@qq.com>
Co-authored-by: cxxly <chenxx_id@163.com>
* [prim] enable dygraph_to_static to support custom_vjp
* fix cast prim and vjp dtype mapping error bug
* [dy2static-ci] fix dy2static ci errors.
---------
Co-authored-by: Aurelius84 <zhangliujie@baidu.com>
Co-authored-by: jiangcheng <thisjiang@qq.com>
Co-authored-by: cxxly <chenxx_id@163.com>
* [Prim] enable whitelist and blacklist for custom_vjp
* debug log
* clear log
* fix
* nothing
* less memory
* recover utils
* fix
* modify threshold value
* skip layer_norm for test_bert
* back to bert success state
* add epsilon
* delete unnecessary compute
* modify amp dtype
* modify
* order
* delete sqrt check and fp16
---------
Co-authored-by: xuyongsheng <xuyongsheng@baidu.com>
Co-authored-by: xysheng-baidu <121540080+xysheng-baidu@users.noreply.github.com>
Co-authored-by: Aurelius84 <zhangliujie@baidu.com>
Co-authored-by: jiangcheng <thisjiang@qq.com>
Co-authored-by: cxxly <chenxx_id@163.com>
Co-authored-by: xiongkun <807377414@qq.com>
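
The commit above adds a composite (prim) rule for the flatten op. For context, a minimal sketch of the public paddle.flatten API that the rule decomposes; the tensor shape below is made up for illustration and is not part of the commit:

```python
import paddle

x = paddle.rand([2, 3, 4, 5])
# Collapse axes 1..2 into one axis: [2, 3, 4, 5] -> [2, 12, 5]
y = paddle.flatten(x, start_axis=1, stop_axis=2)
print(y.shape)  # [2, 12, 5]
```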

Committed by mayang002

Committed by zhouweiwei2014

- 16 Mar 2023, 2 commits

Committed by Chitsing KUI
* rename flash_attn_raw to flash_attn_unpadded
* fix static api
* fix static return

Committed by Infinity_lee
* fix atan2
* fix
* fix
* fix
* fix error
* fix error
* fix
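
For reference, a minimal sketch of the paddle.atan2 API this fix touches; the input values are made up, and the behavior shown is the documented element-wise arctangent of x/y with quadrant handling:

```python
import paddle

x = paddle.to_tensor([-1.0, 1.0, 1.0])
y = paddle.to_tensor([-1.0, -1.0, 1.0])
# Element-wise arctangent of x/y, taking the quadrant into account
out = paddle.atan2(x, y)
print(out.numpy())  # approx. [-2.356, 2.356, 0.785]
```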

- 15 Mar 2023, 2 commits

Committed by Infinity_lee
* fix eig
* fix
* fix
* fix
* fix
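
For reference, a minimal sketch of paddle.linalg.eig, which this fix presumably concerns; the matrix is made up, and eig returns complex eigenvalues and eigenvectors for a general square matrix:

```python
import paddle

a = paddle.to_tensor([[1.0, 2.0],
                      [3.0, 4.0]])
# Eigen-decomposition of a general (non-symmetric) square matrix
eigenvalues, eigenvectors = paddle.linalg.eig(a)
print(eigenvalues.dtype)   # paddle.complex64
print(eigenvectors.shape)  # [2, 2]
```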

Committed by iSerendipity
* Revert "Revert "【Hackathon No.67】remove operator.h in blas.h (#50989)" (#51467)" This reverts commit b9d91531. * remove cout * add header * fix missing header * fix refer fluid error * fix missing header * 更新 repeat_interleave_grad_kernel_impl.h Change to phi style datatype. * 更新 repeat_interleave_grad_kernel_impl.h Fix missing header * datatype fluid -> phi * paddle::experimental -> phi * fix reference error * fix reference error * fix reference error * fix errors * fix missing FLAGS * fix missing headers * fix missing headers * fix missing headers * fix missing headers * fix missing header * fix missing header * fix errors

- 14 Mar 2023, 2 commits

Committed by wangxiaoning

Committed by Infinity_lee

- 13 Mar 2023, 3 commits

Committed by TaoTao Li
* add all_gather and fix conflicts
* fix code format
* fix ut
* fix broadcast ut
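
For context, a minimal sketch of the paddle.distributed.all_gather collective added here; the launch command, file name, and tensor contents are illustrative assumptions:

```python
# Illustrative only; run with something like:
#   python -m paddle.distributed.launch --gpus "0,1" all_gather_demo.py
import paddle
import paddle.distributed as dist

dist.init_parallel_env()
# Each rank contributes a tensor filled with its own rank id
x = paddle.full([2, 2], float(dist.get_rank()))
gathered = []
dist.all_gather(gathered, x)
# gathered now holds one [2, 2] tensor per participating rank
print([t.numpy() for t in gathered])
```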

Committed by YuanRisheng
* remove transpose infershape
* fix ci bugs
* fix ci bugs
* delete transpose infershape
* fix ci bugs
* fix ci bugs

Committed by iSerendipity
* remove fused_matmul from list
* add infermeta for fused matmul

- 09 Mar 2023, 3 commits

Committed by iSerendipity
* Add output defs for sgd kernel
* add datatype infer for sgd
* add infer logic

Committed by TaoTao Li
* add comm context for device context
* add broadcast phi operator kernel and api
* add broadcast support dtype, update ut
* fix broadcast bfloat16 type
* fix ut
* update test_collective_broadcast_api timeout to 300
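
A minimal sketch of the paddle.distributed.broadcast collective that the new phi kernel backs; ranks, values, and dtype are made-up assumptions:

```python
# Illustrative only; run under paddle.distributed.launch with 2+ ranks.
import paddle
import paddle.distributed as dist

dist.init_parallel_env()
if dist.get_rank() == 0:
    data = paddle.to_tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
else:
    data = paddle.zeros([2, 3], dtype="float32")
# After the call, every rank holds rank 0's tensor
dist.broadcast(data, src=0)
print(data.numpy())
```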

Committed by 张春乔

- 08 Mar 2023, 1 commit

Committed by niuliling123

- 06 Mar 2023, 1 commit

Committed by niuliling123

- 03 Mar 2023, 1 commit

Committed by niuliling123

- 01 Mar 2023, 2 commits

Committed by Chitsing KUI
* flash attn
* seed
* almost
* softmax
* fix workspace
* add unittest; linux only
* fix setup
* fix datatype include
* fix setup typo
* fix def scope
* new error api
* use paddle fork
* fix attr bug; complete ut
* update flash hash
* fix rng reset
* fix offset
* fix comments

Committed by niuliling123

- 27 Feb 2023, 1 commit

Committed by zhouweiwei2014

- 24 Feb 2023, 1 commit

Committed by yunyaoXYY

- 23 Feb 2023, 1 commit

Committed by csy0225

- 22 Feb 2023, 3 commits

Committed by zhouweiwei2014

Committed by Shuangchi He
* Fix some typos.
  Signed-off-by: Yulv-git <yulvchi@qq.com>
* pre-commit
  Signed-off-by: Yulv-git <yulvchi@qq.com>
---------
Signed-off-by: Yulv-git <yulvchi@qq.com>

Committed by zhupengyang

- 18 Feb 2023, 1 commit

Committed by zhouweiwei2014

- 17 Feb 2023, 2 commits

Committed by yuehuayingxueluo
* rename multi_tensor_adam to fused_adam
* fix some bugs
* fix CI coverage
* rename test_fused_adam.py
* fix some bug
* add test_fused_adam_op.py
* fix some bugs
* fix fused_adam_op.cc
* fix CI bugs
* fix CI bug
* fix CI bug

Committed by zhupengyang
[XPU] add multi_encoder_xpu_slice_fuse_pass, generate_sequence_xpu_fuse_pass, generate_sequence_xpu kernel (#50570)

- 16 Feb 2023, 2 commits

Committed by Chen Weihang
* add logspace yaml
* update by comments
* resolve test framework conflict
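
For reference, a minimal sketch of the paddle.logspace API that the YAML entry above wires up; the values are illustrative:

```python
import paddle

# 4 points spaced evenly on a log10 scale between 10**0 and 10**3
x = paddle.logspace(0, 3, 4, base=10.0, dtype="float32")
print(x.numpy())  # [   1.   10.  100. 1000.]
```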

Committed by zhupengyang

- 10 Feb 2023, 2 commits

Committed by Infinity_lee

Committed by zhupengyang

- 09 Feb 2023, 1 commit

Committed by yuehuayingxueluo
* add multi_tensor_adam
* update multi_tensor_base.py, test_multi_tensor_adam.py, adamw.py
* fix adam.py optimizer.py
* fix adamw.py
* fix test_multi_tensor_adam.py
* fix CI bug
* fix CI coverage
* fix ci bug
* fix betapow
* fix some bugs
* fix test_adamw_op.py
* fix CI coverage
* fix multi_tensor_adam_kernel.cc
* fix CI bug
* fix multi_tensor_adam_op.cc and test_multi_tensor_adam.py
* fix code style
* update C++ parts
* remove python parts modification temporarily
* add C++ ut
* update betapow copy code logic
* fix ci ut
* fix windows ci
* fix coverage ci
* improve coverage rate
---------
Co-authored-by: sneaxiy <sneaxiy@126.com>

- 08 Feb 2023, 1 commit

Committed by Zhong Hui

- 07 Feb 2023, 1 commit

Committed by 张春乔

- 06 Feb 2023, 2 commits

Committed by 张春乔

Committed by duanboqiang

- 01 Feb 2023, 2 commits

Committed by RedContritio
* add check for input of slice
* add unittest
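
For context, a minimal sketch of the paddle.slice call whose inputs the new check validates; the tensor and index ranges are made up:

```python
import paddle

x = paddle.arange(24, dtype="float32").reshape([2, 3, 4])
# Take positions 0:2 along axis 1 and 1:3 along axis 2
y = paddle.slice(x, axes=[1, 2], starts=[0, 1], ends=[2, 3])
print(y.shape)  # [2, 2, 2]
```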

Committed by Zhong Hui
* fix 0-d tensor for arg_min_max op.
* fix xpu.
* fix zero dims
* fix
* Update arg_min_max_kernel.cc
* Update arg_min_max_kernel.cc
* Update arg_min_max_kernel.cc
* Update test_zero_dim_tensor.py
* Update test_zero_dim_tensor_xpu.py
* Update test_zero_dim_tensor.py
* Update arg_min_max_kernel.cc
* Update arg_min_max_kernel.cc
* Update arg_min_max_kernel.cc
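
For context, a minimal sketch of the 0-D tensor behavior this fix targets; the exact output semantics are assumed from the 0-D tensor effort, namely that argmax on a 0-D input returns a 0-D index tensor:

```python
import paddle

x = paddle.to_tensor(3.14)   # 0-D (scalar) tensor, shape []
idx = paddle.argmax(x)       # assumed: also 0-D after this fix
print(x.shape, idx.shape)    # [] []
print(int(idx))              # 0
```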