Committed by HydrogenSulfate
* add Jacobian and Hessian in paddle.autograd
* disable the 'func_multi_input' unit test due to a bug in the high-order gradient of multiply
* add dimension checks
* add support for 0-D Tensor
* change the return type from Jacobian to Hessian in the hessian function
* refine the Jacobian _flatten function for a single xs
* refine support for 0-D Tensor
* 1. re-add the 'func_multi_input' unit test now that the multiply_grad_kernel bug is fixed. 2. support non-inplace math operations by overriding magic methods
* add unit tests for math operations and raise an error when a 0-D Tensor is indexed
* add ndim checks on ys and xs according to is_batched, and add one unit test
* refine the docstrings of Jacobian and Hessian
* move paddle.incubate.autograd.Jacobian/Hessian to paddle.incubate.autograd.functional.Jacobian/Hessian
* remove the single_input unit test case because its numerical differentiation is wrong
* remove 3 unit tests whose numerical (reference) results are wrong
* 1. rename autodiff.py to autograd.py 2. increase TIMEOUT to 100
* cancel the modification to the functional Jacobian/Hessian
* 1. use tuple instead of list as the return type 2. refine docstrings
* add more unit test cases to improve coverage
* remove 2 Hessian unit tests whose numerical results are wrong
* remove 1 Hessian unit test whose numerical result is wrong
* remove 1 Hessian unit test whose numerical result is wrong
* change a unit test to a shape check
* correct docs and replace incubate APIs with stable APIs in _grad
e8d296ef
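A minimal usage sketch of the Jacobian/Hessian classes this commit adds. It assumes the classes are exposed as paddle.incubate.autograd.Jacobian and paddle.incubate.autograd.Hessian with the signature (func, xs, is_batched=False) and support lazy slicing; the example functions and shapes below are illustrative and not taken from the commit, and the exact import path may differ across Paddle versions.

```python
import paddle

# Hypothetical example functions (not from the commit).
def vector_func(x):
    # R^{3x3} -> R^{3x3} mapping whose Jacobian we want.
    return paddle.tanh(paddle.matmul(x, x))

def scalar_func(x):
    # R^{3x3} -> scalar mapping; Hessian expects a single (0-D) output.
    return paddle.sum(paddle.matmul(x, x))

x = paddle.rand([3, 3])

# Jacobian is built lazily; entries are evaluated when sliced.
J = paddle.incubate.autograd.Jacobian(vector_func, x, is_batched=False)
print(J[:, :].shape)  # flattened 2-D Jacobian, expected shape [9, 9]

# Hessian of a scalar-valued function over the flattened input.
H = paddle.incubate.autograd.Hessian(scalar_func, x, is_batched=False)
print(H[:, :].shape)  # expected shape [9, 9]
```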