- 15 Mar 2023, 39 commits
-
-
Committed by umiswing
-
Committed by Infinity_lee
* fix eig
* fix
* fix
* fix
* fix
-
Committed by SylarTiaNII
* add assign composite backward op
* fix log msg
* code style
* fix comp rule
* replace assign with by_pass
-
Committed by Jiabin Yang
* [CINN] Enhance CacheKey hash logic by considering input dtypes (#50557)
* [CINN] Enhance CacheKey hash logic by considering input dtypes
* add unittest
* fix typo
* fix typo
* fix map.at
* fix find
* fix test
* fix cinn cache key structure realize
* using ordered map for attributes
* add test by review advice
---------
Co-authored-by: jiangcheng <thisjiang@qq.com>
* [prim] enable dygraph_to_static to support custom_vjp
* Pr 50885 (#7)
* [CINN] Enhance CacheKey hash logic by considering input dtypes (#50557)
* [CINN] Enhance CacheKey hash logic by considering input dtypes
* add unittest
* fix typo
* fix typo
* fix map.at
* fix find
* fix test
* fix cinn cache key structure realize
* using ordered map for attributes
* add test by review advice
---------
Co-authored-by: jiangcheng <thisjiang@qq.com>
* [prim] enable dygraph_to_static to support custom_vjp
* fix code in a dy2static-friendly way.
* [dystatic] add hooker for prim
---------
Co-authored-by: Aurelius84 <zhangliujie@baidu.com>
Co-authored-by: jiangcheng <thisjiang@qq.com>
Co-authored-by: cxxly <chenxx_id@163.com>
* [prim] enable dygraph_to_static to support custom_vjp
* fix cast prim and vjp dtype mapping error bug
* Cxx prim custom vjp (#8)
* [CINN] Enhance CacheKey hash logic by considering input dtypes (#50557)
---------
Co-authored-by: jiangcheng <thisjiang@qq.com>
* [prim] enable dygraph_to_static to support custom_vjp
* Pr 50885 (#7)
* [CINN] Enhance CacheKey hash logic by considering input dtypes (#50557)
* [CINN] Enhance CacheKey hash logic by considering input dtypes
---------
Co-authored-by: jiangcheng <thisjiang@qq.com>
* [prim] enable dygraph_to_static to support custom_vjp
* fix code in a dy2static-friendly way.
* [dystatic] add hooker for prim
---------
Co-authored-by: Aurelius84 <zhangliujie@baidu.com>
Co-authored-by: jiangcheng <thisjiang@qq.com>
Co-authored-by: cxxly <chenxx_id@163.com>
* [prim] enable dygraph_to_static to support custom_vjp
* fix cast prim and vjp dtype mapping error bug
* [dy2static-ci] fix dy2static ci errors.
---------
Co-authored-by: Aurelius84 <zhangliujie@baidu.com>
Co-authored-by: jiangcheng <thisjiang@qq.com>
Co-authored-by: cxxly <chenxx_id@163.com>
* [Prim] enable whitelist and blacklist for custom_vjp
* support softmax grad
* remove additional code
* add test back
---------
Co-authored-by: Aurelius84 <zhangliujie@baidu.com>
Co-authored-by: jiangcheng <thisjiang@qq.com>
Co-authored-by: cxxly <chenxx_id@163.com>
Co-authored-by: xiongkun <807377414@qq.com>
-
Committed by limingshu
-
Committed by risemeup1
* add option for setup.py
* add option for setup.py
* add option for setup.py
* add option for setup.py
* add env_dict.py and dist/ to .gitignore
* add env_dict.py and dist/ to .gitignore
* modify .gitignore
-
Committed by risemeup1
* optimizing setup.py develop command
* add libpaddle.so
* modify setup.py
* add python/paddle/distributed/fleet/.gitignore
* add libpaddle.so to .gitignore
* add *.so to python/paddle/libs/.gitignore
* add new gitignore
-
Committed by thunder95
* untracked files
* prelu_perf
* remove unused files
* upd
* fix bug
-
Committed by umiswing
-
Committed by Leo Chen
-
Committed by Leo Chen
* support set_default_dtype bf16
* support float
-
Committed by Kang Zhao
* feat: add relu composite rule
* feat: add relu composite rule, maximum op
* feat: add relu composite rule, maximum op
* feat: add relu composite rule, polish comments
* feat: add relu composite rule, polish comments
* feat: add relu composite rule, add python api of relu
* feat: add relu composite rule, commit hook
* fix: maximum type error & ban cinn test
* fix: maximum input sequence bugs
* resolve conflicts
* fix: code style bugs
* add: relu fp16 test
* feat: add rsqrt composite rule
* feat: add rsqrt composite rule
* resolve conflicts of composite rule
* fix: delete check eager
-
Committed by ronnywang
* [XPU] add int32, fp32 support for conv2d_transpose*
* update
-
Committed by kangguangli
* remove unit tests about GraphExecutionOptimizer
* remove test file
-
Committed by JingZhuangzhuang
-
Committed by HongyuJia
-
Committed by iSerendipity
* Revert "Revert "【Hackathon No.67】remove operator.h in blas.h (#50989)" (#51467)" This reverts commit b9d91531.
* remove cout
* add header
* fix missing header
* fix refer fluid error
* fix missing header
* Update repeat_interleave_grad_kernel_impl.h: change to phi style datatype.
* Update repeat_interleave_grad_kernel_impl.h: fix missing header
* datatype fluid -> phi
* paddle::experimental -> phi
* fix reference error
* fix reference error
* fix reference error
* fix errors
* fix missing FLAGS
* fix missing headers
* fix missing headers
* fix missing headers
* fix missing headers
* fix missing header
* fix missing header
* fix errors
-
Committed by pangengzheng
-
Committed by zhangyuqin1998
* Delete randperm raw op
* fix
-
Committed by RedContritio
-
Committed by HappyHeavyRain
* test_get_kernel
* add invoke signature
* change reduce_max
* change frobenius_norm
* reset reduce_max according to composite and change reduce_all
* fix the bug when Scalar(*)
* fix 'scalar when support_tensor'
* change code according to review
* change 'keep_signature' to 'manual_signature' and add some error info
-
Committed by kangguangli
* remove parallel_executor related unit tests
* fix CI
-
Committed by pangyoki
-
Committed by wangna11BD
* support to_static for SpectralNorm
-
Committed by wangxiaoning
-
Committed by Jiabin Yang
* support amp logic for layer_norm and softmax
* fix layer_norm amp
* fix layernorm api and dropout fp16
* fix layernorm api and dropout fp16
* fix bn, ln dtype in float16
* fix dropout fp16
* fix comment
-
Committed by kangguangli
* remove with_data_parallel in example code
* revert python/paddle/fluid/data_feeder.py
* fix static.nn.fc api
-
Committed by Weilong Wu
* support gather test on prim and cinn
* reset timeout for gather
-
Committed by chenjian
* add pow composite rule
* fix test
* fix unit test
* update test
* fix test
* update
-
Committed by Weilong Wu
-
Committed by Yuang Liu
-
Committed by Guanghua Yu
-
Committed by Siming Dai
* add fp16 test for divide, matmul, pnorm
* add cumsum fp16 unittest
* fix threshold
* revert cumsum
* fix code-style
* fix according to review
* fix kernel not found
-
Committed by Guoxia Wang
-
Committed by WangZhen
-
Committed by zhangyuqin1998
* Delete hardswish_raw op
* fix ut
-
Committed by ronnywang
* [CustomDevice] fix SyncDefaultStream for process_group_custom
* update
-
Committed by wanghuancoder
* refine _found_inf
-
Committed by xiaoguoguo626807
* modify_yaml
* delete default param
* add output for matmul_double_grad
-
- 14 Mar 2023, 1 commit
-
-
Committed by zhouweiwei2014
-