- April 19, 2023, 1 commit

Committed by Wang Xin
* add autogen code support for mean_all op
* bug fixed
* bug fixed
* bug fixed
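Entries like "add autogen code support for mean_all op" refer to migrating a hand-written operator registration into Paddle's YAML-driven code generation. As a rough illustration only, a minimal sketch of what such an operator entry can look like (field names follow the phi YAML schema; the exact mean_all signature and function names here are assumptions, not taken from this log):

```yaml
# Hypothetical sketch of an autogen operator entry in ops.yaml.
# The args/output signature and infer_meta/kernel names are illustrative.
- op : mean_all
  args : (Tensor x)
  output : Tensor(out)
  infer_meta :
    func : MeanAllInferMeta
  kernel :
    func : mean_all
  backward : mean_all_grad
```

From such an entry, the build generates the C++ API, argument mapping, and dygraph/static registration code that previously had to be written by hand, which is why these commits typically also delete legacy `.cc` files.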
- April 18, 2023, 3 commits

Committed by cyber-pioneer
* add gn vjp
* fix 0
* fix args num
* fix type
* debug2
* remove unused expand
* support fp16
* fix typo
* fix reshape bug
* test3
* test4
* fix bug3
* add comment

Committed by LoneRanger
* add autogen code support for lu
* fix bug
* fix bug
* fix bug
* fix bug

Committed by Xiaoxu Chen
- April 17, 2023, 1 commit

Committed by LoneRanger
- April 14, 2023, 1 commit

Committed by zhangyuqin1998
- April 13, 2023, 2 commits

- April 12, 2023, 1 commit

Committed by YepKong
* add autogen code support for squared_l2_norm_op
* Update ops.yaml
- April 11, 2023, 3 commits

Committed by zhangyuqin1998

Committed by RedContritio
* support auto generate for flatten (flatten_contiguous_range)
* add data_type for flatten_grad

Committed by Wang Xin
* add autogen code support for reverse op
* bug fixed
- April 10, 2023, 6 commits

Committed by lzydev
* autogen segment_pool
* delete legacy_dygraph about segment_pool

Committed by gouzil
* add autogen code bilinear_tensor_product
* [phi] rm cc file

Committed by lzydev
* autogen softmax_with_cross_entropy
* fix error in softmax_with_cross_entropy version

Committed by Wang Xin

Committed by cyberslack_lee

Committed by Wang Xin
* add autogen code support for affine_grid op
* update op_compat.yaml for affine_grid
* update op_compat.yaml for affine_grid
* fix AffineGridGradInferMeta
* fix CI error
* update AffineGridInferMeta
- April 7, 2023, 1 commit

Committed by Zhenghai Zhang
- April 6, 2023, 1 commit

Committed by zhangyuqin1998
* Rename conv2d transpose grad grad
* fix
- April 4, 2023, 2 commits

Committed by cyberslack_lee
* bce_loss
* fix error
* fix
* fix
* fix
* resolve conflict

Committed by zhangyuqin1998
* rename_bilinear_tensor_product
* fix
- April 3, 2023, 1 commit

Committed by zhangyuqin1998
- March 31, 2023, 2 commits

Committed by zhangyuqin1998

Committed by chenjian
* first commit
* add registry
* add unit test
* fix format
* add unit test
* fix bug
* replace unsqueeze with reshape
* fix
* fix unit test
* update test
* update test
* fix unit test
* fix
* fix
- March 30, 2023, 3 commits

Committed by Wang Xin
* add autogen code support for spectral_norm
* bug fixed
* fix PR-CI-Static-Check fail

Committed by Ainavo
* support auto generate for prelu
* add input parameters in op_compat
* del attrs; add kernel data_type
* add PreluGradInferMeta

Committed by gouzil
* add autogen code support for sigmoid_cross_entropy_with_logits
* add inplace
- March 28, 2023, 5 commits

Committed by cyberslack_lee

Committed by 张春乔
* mv cumprod
* add attrs
* Update backward.yaml
* Update backward.yaml

Committed by cyberslack_lee
* fix huber_loss
* fix
* fix ops.yaml, add intermediate
* fix
* fix test

Committed by Wang Xin

Committed by RedContritio
* support auto generate for log_softmax
* add data_type
- March 23, 2023, 1 commit

Committed by xiaoguoguo626807
* delete prim flag for matmul_2_grad
* delete prim flag for matmul_2_grad
* add new setgradoutmeta for matmul_double_grad_node
* modify test and delete log
* deal with review
- March 22, 2023, 3 commits

Committed by Wang Xin
* add autogen code for index_add op
* bug fixed

Committed by RedContritio
* support auto generate p_norm
* fix bug in backward

Committed by wangxiaoning
* max comp
* fix
* add test
* fix
* fix
* fix
* fix
* fix test
* fix api
- March 20, 2023, 1 commit

Committed by xiaoguoguo626807
* Add flatten composite rule
* get the right xshape and pass func test
* add cinn unit test
* Remove cinn test, wait for it to be added after repair
* add comp test to test_flatten_contiguous_range_op.py
* remove func test on composite_ops
* Add comments to maybe_wrap_dim func
* remove commented code
* fix the problem with 0D tensor case
* add flatten split rule comment
* fix syntax issues
* block flatten on resnet_prim_cinn
* init change
* tmp commit
* add layer_norm InferMeta check
* cast type modify
* [CINN] Enhance CacheKey hash logic by considering input dtypes (#50557)
* [prim] enable dygraph_to_static to support custom_vjp
* Pr 50885 (#7)
* fix code in a dy2static-friendly way
* [dy2static] add hooker for prim
* fix cast prim and vjp dtype mapping error bug
* recover
* big tol
* Cxx prim custom vjp (#8)
* [dy2static-ci] fix dy2static ci errors
* [Prim] enable whitelist and blacklist for custom_vjp
* debug log
* clear log
* fix
* nothing
* less memory
* recover utils
* fix
* modify threshold value
* skip layer_norm for test_bert
* back to bert success state
* add epsilon
* delete unnecessary compute
* modify amp dtype
* modify
* order
* delete sqrt check and fp16

Co-authored-by: xuyongsheng <xuyongsheng@baidu.com>
Co-authored-by: xysheng-baidu <121540080+xysheng-baidu@users.noreply.github.com>
Co-authored-by: Aurelius84 <zhangliujie@baidu.com>
Co-authored-by: jiangcheng <thisjiang@qq.com>
Co-authored-by: cxxly <chenxx_id@163.com>
Co-authored-by: xiongkun <807377414@qq.com>
- March 17, 2023, 1 commit

Committed by cyber-pioneer
* add bn vjp
* fix example
* fix code
* fix code
* fix cinn case
* fix code
* fix example
* fix code
* fix example
* fix example
- March 15, 2023, 1 commit

Committed by SylarTiaNII
* add assign composite backward op
* fix log msg
* code style
* fix comp rule
* replace assign with by_pass