- May 5, 2023 (1 commit)
Committed by co63oc
* Add addmm tests
* Fix code
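A minimal sketch of what the new addmm tests exercise; the shapes and coefficients here are illustrative, not taken from the test suite:

```python
import paddle

# addmm computes beta * input + alpha * (x @ y)
inp = paddle.ones([2, 2])
x = paddle.ones([2, 3])
y = paddle.ones([3, 2])

out = paddle.addmm(inp, x, y, beta=0.5, alpha=2.0)
print(out)  # every element is 0.5 * 1 + 2.0 * 3 = 6.5
```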
- May 4, 2023 (5 commits)
Committed by Roc
Committed by tianshuo78520a
Committed by co63oc
Committed by hua-zi
Committed by Aurelius84
* [Perf] Removed useless assign op in while_loop
* Refine assign
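For context, a small static-graph while_loop example (essentially the documented usage pattern); the commit above only trims a redundant assign op that such loops used to emit:

```python
import paddle

paddle.enable_static()

def cond(i, ten):
    return i < ten

def body(i, ten):
    return [i + 1, ten]

main = paddle.static.Program()
with paddle.static.program_guard(main):
    i = paddle.full([1], 0, dtype='int64')
    ten = paddle.full([1], 10, dtype='int64')
    i, ten = paddle.static.nn.while_loop(cond, body, [i, ten])

exe = paddle.static.Executor(paddle.CPUPlace())
print(exe.run(main, fetch_list=[i]))  # [array([10])]
```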
- April 30, 2023 (1 commit)
Committed by zhouweiwei2014
- April 28, 2023 (12 commits)
Committed by zhurou603
Committed by GGBond8488
* add 0d support for dist, trace, paddle.linalg.cond test=allcase
* add_0d_output_support_for_det
* test=allcase
* support_0d_output_for_linalg.norm
* support linalg.norm 0d output, test=allcase
* fix 0D test
* fix zero dim test, test=allcase
* fix 0D test
* fix tests, test=allcase
* fix error, test=allcase
* fix errors, test=allcase
* add static backward, test=allcase
* add static backward test, test=allcase
* fix pr-ci-build error; test=document_fix (#53060)
* [Cherry-Pick] Unique support float16&bfloat16 (#53023): unique supports the float16 and bfloat16 data types and improves the related unit tests.
* slogdet_support_0D_output
* add new case
* fix tests, test=allcase
* fix p_norm related test, test=allcase
* fix some err, test=allcase
* test=allcase
* move out trace
* open some case, test=allcase
* fix norm all case, test=allcase
* fix some test error, test=allcase
* fix typo, test=allcase
* fix test err, test=allcase
* test=allcase
* test
* fix test error, test=allcase
* fix test error, test=allcase
* fallback norm, test=allcase
Co-authored-by: tianshuo78520a <707759223@qq.com>
Co-authored-by: Zhang Zheng <32410583+ZzSean@users.noreply.github.com>
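A quick sketch of the behavior this PR targets, assuming a build that includes it: these reductions now return 0-D (scalar) tensors, and paddle.unique accepts low-precision input:

```python
import paddle

x = paddle.rand([3, 3])
y = paddle.rand([3, 3])

# shape [] (0-D) rather than [1] after this change
print(paddle.dist(x, y).shape)
print(paddle.trace(x).shape)
print(paddle.linalg.norm(x).shape)
print(paddle.linalg.cond(x).shape)

if paddle.is_compiled_with_cuda():
    # float16/bfloat16 support comes from the cherry-picked #53023 (GPU kernel)
    z = paddle.to_tensor([1.0, 2.0, 2.0], dtype='float16')
    print(paddle.unique(z))
```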
Committed by zqw_1997
* test=allcase
* test=allcase
* test=allcase
* test=allcase
* test=allcase
* fix test cases, test=allcase
* fix test cases, test=allcase
* modify the test_squeeze to not use Tensor type axis, test=allcase
* add grad check for unbind and unstack, test=allcase
* check for squeeze axis tensor type, test=allcase
* fix bug, test=allcase
Committed by Meteor Liu
Committed by megemini
* [Hackathon 4th No.12] Add the Cauchy API to Paddle
* [Change] Revise the initialization method and type checks
* [Change] Move the test cases to a new directory
* [Change] Adapt to 0-D to_tensor
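A hedged sketch of the new API, assuming it landed as paddle.distribution.Cauchy(loc, scale) with the usual distribution methods:

```python
import paddle
from paddle.distribution import Cauchy

# Assumed constructor: Cauchy(loc, scale), both given as tensors here
dist = Cauchy(loc=paddle.to_tensor(0.0), scale=paddle.to_tensor(1.0))

print(dist.sample([3]))                  # three draws from the distribution
print(dist.prob(paddle.to_tensor(0.0)))  # density at 0 for the standard Cauchy: 1/pi
print(dist.entropy())
```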
Committed by dasen
Committed by Zhan Rongrui
Committed by iSerendipity
Committed by Roc
Make it synchronize at the first recv operator. If all send and recv operators were wrapped in a single group start/end, the received tensor would be incomplete.
Committed by co63oc
Committed by superwinner1
* 'fmin'
* 'fix'
* 'fix'
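For reference, paddle.fmin is the elementwise minimum that prefers the non-NaN operand; a tiny example:

```python
import paddle

x = paddle.to_tensor([1.0, float('nan'), 5.0])
y = paddle.to_tensor([2.0, 3.0, float('nan')])

print(paddle.fmin(x, y))  # [1., 3., 5.]: NaN loses to any number
print(paddle.fmin(x, x))  # [1., nan, 5.]: NaN only when both sides are NaN
```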
Committed by xiaoguoguo626807
* add mul double grad
* add sub_double_grad
* add add/sub high-order test
* add multiply test
* modify other unsqueeze
* delete api.yaml
* only for make ci run
* modify unsqueeze
* modify unsqueeze
* tmp
* modify operants gen
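The double-grad rules above are what make second-order differentiation of multiply and subtract work; a minimal dynamic-graph check with illustrative values, not taken from the PR:

```python
import paddle

x = paddle.to_tensor(3.0, stop_gradient=False)
y = paddle.to_tensor(4.0, stop_gradient=False)
out = x * y - x

# First-order grad, keeping the graph so it can be differentiated again
(dx,) = paddle.grad(out, x, create_graph=True)  # dx = y - 1 = 3
(d2,) = paddle.grad(dx, y)                      # d(dx)/dy = 1
print(float(dx), float(d2))                     # 3.0 1.0
```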
- April 27, 2023 (14 commits)
Committed by WangZhen
[Dy2St] Get grad names when calling append_backward to fix high-order gradients (#53250)
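The fix concerns the grad variable names that append_backward produces when dynamic-to-static needs higher-order gradients; a plain static-graph sketch of that API, not of the Dy2St path itself:

```python
import paddle

paddle.enable_static()
main = paddle.static.Program()
with paddle.static.program_guard(main):
    x = paddle.static.data('x', shape=[None, 4], dtype='float32')
    y = paddle.static.nn.fc(x, size=1)
    loss = paddle.mean(y)
    # append_backward returns (param, grad) pairs; the grad variable
    # names are what the fix above needs to recover
    for param, grad in paddle.static.append_backward(loss):
        print(param.name, '->', grad.name)
```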
Committed by yangguohao
Committed by NetPunk
* support fp16 for maxout op
* format code
* change api
* add test for static float16
* format code
* formatting code
* atol alignment
* experiment-1
* experiment-2
* experiment-3
* format code
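maxout takes the maximum across groups of channels; the commit adds a float16 kernel, which this sketch only casts to when a CUDA build is available:

```python
import paddle
import paddle.nn.functional as F

x = paddle.rand([2, 4, 8, 8])
if paddle.is_compiled_with_cuda():
    x = x.astype('float16')  # the new fp16 path is a GPU kernel

out = F.maxout(x, groups=2, axis=1)  # 4 channels -> 2 by taking the max inside each group
print(out.shape)  # [2, 2, 8, 8]
```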
Committed by Sonder
* trans fused_feedforward Compute function to phi
* add register info
* remove maxfunctor
* move fused_feedforward to phi
* remove sig file
* remove fluid include
* add include
* add include
* add sig file
* add output register info
* fix sig file
* Update fused_feedforward_sig.cc
* fix grad kernel
* update output register info
* fix
* open fused_feedforward static build
* add optional and fix code style
* fix output info for fused attention
* add optional param
* merge
Committed by Zhang Ting
* support OD level and skip dynamic loss scaling for bf16
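A rough sketch of the bf16 autocast flow this touches. Treat the new 'OD' level string as an assumption about what auto_cast accepts; the sketch uses the established 'O1' level and simply omits the GradScaler, which is the dynamic loss scaling being skipped for bf16:

```python
import paddle

model = paddle.nn.Linear(4, 4)
opt = paddle.optimizer.SGD(learning_rate=0.1, parameters=model.parameters())
x = paddle.rand([8, 4])

# level='O1' shown here; the commit adds a lighter-weight 'OD' level (assumed string)
with paddle.amp.auto_cast(dtype='bfloat16', level='O1'):
    loss = model(x).mean()

# No paddle.amp.GradScaler: bf16 keeps the fp32 exponent range,
# so dynamic loss scaling is unnecessary.
loss.backward()
opt.step()
opt.clear_grad()
```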
Committed by HongyuJia
* [Fix CppExtension Unittest] Change CUDAExtension to CppExtension if necessary
* Temporarily test cpp_extension under GPU
* Split mixed_extension unittest
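The distinction under test: CppExtension builds CPU-only custom operators, while CUDAExtension also compiles .cu sources. A setup.py sketch with a hypothetical source file name:

```python
# setup.py (custom_relu.cc is a placeholder source file name)
from paddle.utils.cpp_extension import CppExtension, setup

setup(
    name='custom_cpu_ops',
    ext_modules=CppExtension(sources=['custom_relu.cc']),
)
```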
Committed by HydrogenSulfate
* add jacobian and hessian in paddle.autograd
* disable unit test 'func_multi_input' for a bug in the high-order gradient of multiply
* add dimension checks
* add support for 0-D tensor
* change return type from Jacobian to Hessian in the hessian function
* refine Jacobian _flatten function for single xs
* refine support for 0-D tensor
* 1. add 'func_multi_input' unit test since the multiply_grad_kernel bug is fixed already. 2. support non-inplace math operations via magic method overriding.
* add unit test for math operations and raise an error when a 0-D tensor is indexed
* add ndim check on ys and xs according to is_batched, and add one unit test
* refine docstring of jacobian and hessian
* move paddle.incubate.autograd.Jacobian/Hessian to paddle.incubate.autograd.functional.Jacobian/Hessian
* remove single_input unit test case because the numerical differentiation is wrong
* remove 3 unit tests whose numerical (reference) results are wrong
* 1. rename autodiff.py to autograd.py 2. increase TIMEOUT to 100
* cancel modification for functional Jacobian/Hessian
* 1. use tuple as return type instead of list 2. refine docstring
* add more unit test cases to improve coverage
* remove 2 unit tests of Hessian whose numerical results are wrong
* remove 1 unit test of Hessian whose numerical result is wrong
* remove 1 unit test of Hessian whose numerical result is wrong
* change unit test to shape check
* correct doc and replace incubate API with stable API in _grad
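A small usage sketch of the class-based API mentioned above, assuming Jacobian remains importable from paddle.incubate.autograd; indexing the object evaluates the matrix lazily:

```python
import paddle
from paddle.incubate.autograd import Jacobian

def f(x):
    return x * x  # elementwise, so the Jacobian is diag(2x)

x = paddle.to_tensor([1.0, 2.0, 3.0], stop_gradient=False)
J = Jacobian(f, x)
print(J[:])  # [[2., 0., 0.], [0., 4., 0.], [0., 0., 6.]]
```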
Committed by xiaoguoguo626807
* modify concat_grad, add sum comp rule
* modify opcompat
Committed by hua-zi
* Update Adamw.py: out.backward() -> loss.backward()
* Update adamw.py
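The docstring fix above boils down to calling backward() on the scalar loss rather than on the raw model output; a minimal AdamW step for reference:

```python
import paddle

model = paddle.nn.Linear(10, 1)
opt = paddle.optimizer.AdamW(learning_rate=1e-3,
                             parameters=model.parameters(),
                             weight_decay=0.01)

x = paddle.rand([4, 10])
loss = model(x).mean()
loss.backward()   # backward on the loss, not on the output tensor
opt.step()
opt.clear_grad()
```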
Committed by JYChen
Committed by houj04
* [XPU] remove scale_loss in parallel.py
* [XPU] throw Unimplemented when using Reducer
Committed by superwinner1
Committed by cyberslack_lee
Committed by mengziheng
* add pad op
* add_some_code
* modify some code
* add some code
* add some code
* modify some code
* add some code
* modify some code
* Update composite_backward_api.h
* modify some code
* add some code
* add some code
* add some code
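The composite backward rule added here is for the pad op; for orientation, a tiny forward example of constant padding:

```python
import paddle
import paddle.nn.functional as F

x = paddle.ones([2, 2])
# one element of zero padding on every side of both dimensions
out = F.pad(x, pad=[1, 1, 1, 1], mode='constant', value=0.0)
print(out.shape)  # [4, 4]
```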
- April 26, 2023 (7 commits)
Committed by zhouweiwei2014
Committed by zhenhailiu
* polish * polish * polish * polish * polish * polish * polish * polish * polish * polish * polish * polish * polish * polish * polish * polish * polish * polish * polish * polish * polish
Committed by mhy-666
* add scatter_nd_add comp
* add scatter_nd_add prim
* fix
* fix
* add public_python_api in TestScatterNdAddSimpleOp setup function
* fix composite_backward_api.h
* fix composite_backward
* add test cases
* fix composite_backward_api.h, unittest
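scatter_nd_add accumulates updates into x at the given indices; the PR adds its composite (prim) backward rule. A small forward example:

```python
import paddle

x = paddle.zeros([4], dtype='float32')
index = paddle.to_tensor([[1], [2], [1]], dtype='int64')
updates = paddle.to_tensor([9.0, 10.0, 11.0])

out = paddle.scatter_nd_add(x, index, updates)
print(out)  # [0., 20., 10., 0.]  (index 1 receives 9 + 11)
```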
Committed by zqw_1997
* add test cases, test=allcase
* fix test cases, test=allcase
* fix test cases, test=allcase
* assert_allclose, test=allcase
* 1e-5 to 1e-4, test=allcase
* change rtol from 1e-4 to 1e-3, test=allcase
Committed by lijialin03
* modify numel in lbfgs and add a new test case. test=develop
* change param 'lr' to 'learning_rate' in lbfgs and its test
* add opt LBFGS and change test
Committed by 骑马小猫
Committed by ShenLiang