- 02 April 2022, 1 commit

Committed by Xiaoxu Chen
Enhance vjp/jvp/Jacobian/Hessian API to support dynamic graph, static graph, and batched/unbatched modes (#40692)
* modify vjp/jvp for both dynamic and static graph
* enforce jacobian class for supporting first/last batch
* add unittest for jvp, jacobian with last batch, jacobian with first batch
* fix the incorrect shape when multi-index Jacobian
* enforce Hessian class for supporting dynamic graph
* add Hessian class unittest
* bugfix: jvp double_backward_trick zeros_like returns stop_gradient=True in static graph
* add API beta warnings
* add white_list for cuda11.x ci windows
* optimize some code snippets and documents
* set unittest timeout to 100 seconds
* move vjp, jvp, Jacobian, Hessian to incubate
* fix vjp, jvp import path in sample code
* fix code style error of autograd/__init__ file
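
The log above names the four functional-autograd entry points but contains no usage snippet. The sketch below is only a rough illustration of how the Jacobian and Hessian classes are driven in dynamic graph mode after the move to paddle.incubate.autograd; the is_batched flag and the J[:] indexing pattern follow the publicly documented API of later releases and may not match this exact snapshot.

```python
import paddle

# Toy functions used only for illustration.
def func(x):
    return paddle.tanh(x)                 # R^3 -> R^3

def scalar_func(x):
    return paddle.sum(paddle.tanh(x))     # R^3 -> R

x = paddle.rand([3])
x.stop_gradient = False

# Lazily evaluated Jacobian; indexing with [:] materializes the full matrix.
J = paddle.incubate.autograd.Jacobian(func, x)
print(J[:])                               # shape [3, 3]

# Hessian of a scalar-valued function.
H = paddle.incubate.autograd.Hessian(scalar_func, x)
print(H[:])                               # shape [3, 3]

# Batched mode added in this commit: the leading axis is treated as the batch axis.
xb = paddle.rand([4, 3])
xb.stop_gradient = False
Jb = paddle.incubate.autograd.Jacobian(func, xb, is_batched=True)
print(Jb[:].shape)                        # roughly [4, 3, 3]: (batch, output, input)
```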

- 30 March 2022, 1 commit

Committed by wanghuancoder
* Supported Complex2Real Conversion for Eager Dygraph
* Supported Complex2Real Conversion for Eager Dygraph
* Enabled complex type promotion test for matmul_v2
* pylayer, test=develop
* Fix CI issues
* Support initializing specific grad tensors to zero for selected operators
* finish forward, test=develop
* create grad node finish, test=develop
* Merged adj_edges_ with GradSlotMeta
* Fixed minor issue
* backward finish, start dbg, test=develop
* Adjusted num runs
* Recovered Eager performance tests configurations
* Recovered Eager performance tests configurations
* finish, test=develop
* polish, test=develop
* polish, test=develop
* refine, test=develop
* eager, test=develop
* Adjusted performance tests configurations
* Fixed Minor Issues with performance tests
* [Phi] Fix macro name typo
* support set_materialize_grads, test=develop
* support mark_non_differentiable, test=develop
* support once_differentiable, test=develop
* refine, test=develop
* refine, test=develop
* Moved out Edge from GradSlotMeta
* Fixed issues from merge
* Fixed typo
* Addressed review comments
* Fixed merge issues
* Fixed minor issues
* Fixed minor issue
* refine, test=develop
* refine, test=develop
* refine, test=develop
* Fixed major issues and enabled auto_prune test cases
* Fixed issues from merge
* refine, test=develop
* refine, test=develop
* refine, test=develop
* refine, test=develop
* refine, test=develop

Co-authored-by: jim19930609 <jim19930609@gmail.com>
Co-authored-by: Aurelius84 <zhangliujie@baidu.com>
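
Two of the items above, set_materialize_grads and mark_non_differentiable, are user-facing PyLayer context hooks. The snippet below is only a minimal sketch of how such hooks are typically combined, based on how these methods are documented in later releases; it is not code from this PR.

```python
import paddle
from paddle.autograd import PyLayer

class TanhAndSign(PyLayer):
    @staticmethod
    def forward(ctx, x):
        # Do not materialize missing output grads as zero tensors (hook named in this log).
        ctx.set_materialize_grads(False)
        y = paddle.tanh(x)
        sign = paddle.sign(x)
        # The sign output carries no useful gradient.
        ctx.mark_non_differentiable(sign)
        ctx.save_for_backward(y)
        return y, sign

    @staticmethod
    def backward(ctx, grad_y, grad_sign):
        # grad_sign arrives as None or zeros; only the gradient w.r.t. x is returned.
        y, = ctx.saved_tensor()
        return grad_y * (1.0 - y * y)   # d tanh(x)/dx = 1 - tanh(x)^2

x = paddle.rand([4])
x.stop_gradient = False
y, s = TanhAndSign.apply(x)
y.sum().backward()
print(x.grad)
```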

- 25 March 2022, 1 commit

Committed by Jiabin Yang
* refactor eager flags
* fix flags error when we switch from eager to dygraph
* fix ci problem
* fix ci
* fix ci
* merge develop and fix code style
* merge develop and fix code style
* fix op test error
* fix op test error
* fix op test error
* fix op test error
* fix op test error
* merge develop

- 14 March 2022, 1 commit

Committed by Jiabin Yang
* eager, test=develop
* fix bug, test=develop
* eager, test=develop
* merge legacy to fluid
* eager, test=develop
* eager, test=develop
* Refactor TensorAdd func by template and remove gradient_accumulation in eager
* Remove needless target name
* eager, test=develop
* eager, test=develop
* Use overload instead of template
* Remove legacy code
* Remove legacy code
* selectedrows, test=develop
* Remove DataType test
* eager, test=develop
* eager, test=develop
* support gan, test=develop
* Using Tensor directly instead of using EagerTensor
* support gradient_accumulation
* make test_imperative_lod_tensor_to_selected_rows longer
* make test_imperative_lod_tensor_to_selected_rows longer
* refine code
* ptb, test=develop
* Rename all EagerTensor to Tensor
* Rename some EagerTensor to Tensor
* rename EagerTensor to EagerVariable
* eager, test=develop
* eager, test=develop
* eager, test=develop
* eager, test=develop
* add more test
* eager, test=develop
* Support copyable selected rows and merge develop
* save load, eager, test=develop
* save load, eager, test=develop
* refine, test=develop
* remove useless _set_value method
* refine, test=develop
* refine, test=develop
* revert static_runner, test=develop
* EagerTensor to Tensor, test=develop
* refine, test=develop
* refine, test=develop
* clear grad, test=develop
* merge, develop
* merge, develop
* merge, test=develop
* merge, test=develop
* Support quant and part of slice
* support legacy static save
* extend slim tests time
* remove imperative on inference
* remove imperative on inference
* merge develop
* fix typo
* fix typo
* split slice related code into 2 parts for imperative and eager
* split slice from inference
* split slice from inference
* fix test_tensor_register_hook
* support custom op in eager mode
* fix inference deps error
* split eager utils from custom operator
* fix type match
* fix typo

Co-authored-by: Wang Huan <wanghuan29@baidu.com>
Co-authored-by: Weilong Wu <veyron_wu@163.com>
Co-authored-by: wanghuancoder <wanghuancoder@163.com>

- 27 February 2022, 1 commit

Committed by Leo Chen
* fix pylayer problem with amp
* add ut
* refine code

- 29 January 2022, 1 commit

Committed by Tongxin Bai
* [autograd] static Jacobian pass tests.
* [autograd] apply CR suggested changes.
* [autograd] more tests.
* [autograd] add CPUPlace in tests.
* [autograd] bug fixes.
* [autograd] reformatted.
* [autograd] adding Hessian, in progress.
* [autograd] Hessian passes. A double grad bug fixed.
* [autograd] fix renaming conflict in double backward pass.
* [autograd] polish tests.
* fix a bug when using brackets
* debug for ci
* [autograd] fixing Hessian test.
* polish format.

Co-authored-by: levi131 <83750468+levi131@users.noreply.github.com>
Co-authored-by: levi131 <limaolin01@baidu.com>
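
These commits add the static-graph Jacobian/Hessian pass. As a hedged sketch of how such a pass is typically exercised, the snippet below builds the Jacobian of a toy function inside a static program and fetches it through an executor; the construction and fetch pattern follow later public documentation and are an assumption for this snapshot.

```python
import numpy as np
import paddle

paddle.enable_static()

main = paddle.static.Program()
startup = paddle.static.Program()
with paddle.static.program_guard(main, startup):
    x = paddle.static.data(name="x", shape=[3], dtype="float32")
    x.stop_gradient = False
    # Build the Jacobian graph of a toy function; J[:] is a Variable for the full matrix.
    J = paddle.incubate.autograd.Jacobian(paddle.tanh, x)
    jac = J[:]

exe = paddle.static.Executor(paddle.CPUPlace())
exe.run(startup)
(jac_val,) = exe.run(main,
                     feed={"x": np.random.rand(3).astype("float32")},
                     fetch_list=[jac])
print(jac_val)   # 3x3 numpy array
```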

- 24 January 2022, 1 commit

Committed by Tongxin Bai
* [autograd] static Jacobian pass tests.
* [autograd] apply CR suggested changes.
* [autograd] more tests.
* [autograd] add CPUPlace in tests.
* [autograd] bug fixes.
* [autograd] reformatted.

- 23 December 2021, 1 commit

Committed by wuhuanzhou
* add control/status API, test=develop
* fix import error, test=develop
* add is_grad_enabled unittest, test=develop
* add code comment for example code and API, test=develop
* add checking for type, test=develop
* add api description, test=develop
* fix docs index_en, test=document_fix
* fix doc of is_floating_point, test=document_fix
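
For context, the control/status APIs touched here are small enough to show directly; the snippet below is an illustrative sketch of paddle.is_grad_enabled together with the related gradient switches and the is_floating_point helper, as they behave in releases of this era.

```python
import paddle

x = paddle.rand([2, 3])

# Status query added in this commit: is gradient recording currently enabled?
print(paddle.is_grad_enabled())        # True by default in dygraph

with paddle.no_grad():
    print(paddle.is_grad_enabled())    # False inside no_grad

with paddle.set_grad_enabled(False):
    print(paddle.is_grad_enabled())    # False while explicitly disabled

# The dtype helper whose docs are fixed in this commit.
print(paddle.is_floating_point(x))                    # True for float32
print(paddle.is_floating_point(x.astype("int32")))    # False for integer tensors
```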

- 29 November 2021, 1 commit

Committed by Weilong Wu
* native commit for triple grad of sigmoid
* Updated unittests files
* init functional jacobian api
* Updated trible_test func
* Updated gradient_checker & test_script
* finish test with dtype float32
* add float64 test case
* polish code
* use atol=1e-5 with dtype float64
* fix for ci
* set timeout for test_jacobian
* fix dygraph grad to support high differential
* polish API docstring
* Updated gradient checker and some related files
* fix double grad strip error for high differential
* fix double grad strip error for high differential
* Add Sigmoid triple grad tests
* fix dygraph double grad dtype error when calling for high differential scenario
* Updated triple grad tests func
* Use np.random to initialize ddx
* Updated triple_grad_check func
* add todo for gradient checker and refine some comments
* remove additional code
* add test for warning in backward.py
* format python code
* support multi input in triple gradient checker
* Add matmul triple grad kernel
* Updated comments of TODO
* Supported some special tests
* Change code-format to follow CI std
* Updated gradient_checker.py
* Fix conflicts
* Removed unnecessary printing log
* Change code style to follow CI std
* support batch in jacobian and hessian
* add batch jacobian and batch hessian
* Add batch_jacobian test, draft version
* [New features] Add elementwise_mul triple grad kernel (#37152)
* Add elementwise_mul triple grad kernel
* Removed InplaceInferer and polished code
* Add numerical_batch_jacobian, numerical_batch_hessian and tests
* Support batch_jacobian and batch_numerical
* Use pre-commit to check code format
* Update doc, polish code, add unit test
* Reset the TIMEOUT properties of test_jacobian to pass CI

Co-authored-by: levi131 <limaolin01@baidu.com>
Co-authored-by: Jiabin Yang <360788950@qq.com>
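
The triple-grad kernels listed above are exercised by nesting paddle.grad calls with create_graph=True. The short sketch below shows that pattern on sigmoid; it assumes a Paddle build in which the sigmoid double/triple grad kernels from this work are present.

```python
import paddle

x = paddle.rand([4])
x.stop_gradient = False

y = paddle.nn.functional.sigmoid(x)

# First order; keep the graph so the gradient itself stays differentiable.
(dx,) = paddle.grad(y, x, create_graph=True)
# Second order (double grad).
(ddx,) = paddle.grad(dx, x, create_graph=True)
# Third order (triple grad), the case these kernels target.
(dddx,) = paddle.grad(ddx, x)

print(dx, ddx, dddx)
```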

- 18 October 2021, 2 commits

Committed by levi131
* init functional jacobian api
* finish test with dtype float32
* add float64 test case
* polish code
* use atol=1e-5 with dtype float64
* fix for ci
* set timeout for test_jacobian
* init hessian API
* save status
* polish API docstring
* modify docstring
* add utils.py
* save status
* fix dygraph double grad dtype error when calling for high differential scenario
* reinvoke ci
* test_hessian.py is ok
* polish hessian API
* init vhp
* Revert "init vhp" (this reverts commit cbd4d3b66abe82b0ac10721b9eddeb7d82e0a1c8)
* init vhp
* finish vhp API logically
* add test for partial_engine.cc
* modify numerical_delta with dtype float32
* merge fix for dtype float64
* spell fix
* save status
* polish code
* rm _stop_gradient_pre_process
* save status
* add example for vhp interface
* add _compute_numerical_vjp and _compute_numerical_vhp
* test is ok
* vhp is ok
* add testVHPFloat64
* modify for comments
* modify format
* modify format
* save status
* test_vhp is ok
* finish code polish
* small modify for v is None

Co-authored-by: JiabinYang <360788950@qq.com>
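
The vhp (vector-Hessian-product) interface developed in these commits can be understood as two nested gradient calls. The snippet below illustrates the idea directly with paddle.grad rather than the functional API, since the import path of these helpers changed in later commits; the function and vector are toy placeholders.

```python
import paddle

def f(x):
    # Scalar-valued toy function, used only to illustrate the product H(x) @ v.
    return paddle.sum(paddle.tanh(x))

x = paddle.rand([3])
x.stop_gradient = False
v = paddle.rand([3])        # the vector in the vector-Hessian product

(gx,) = paddle.grad(f(x), x, create_graph=True)   # gradient of f at x
gv = paddle.sum(gx * v)                           # inner product <grad f(x), v>
(vhp,) = paddle.grad(gv, x)                       # d<grad f, v>/dx = H(x) @ v

print(vhp)   # same shape as x
```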

Committed by Tongxin Bai
* autograd.functional passed pylint checker.
* autograd.functional: fix import errors.
* autograd.functional: fixed unit tests.
* autograd.functional minor format change
* [autograd.functional] Fixed vjp and jvp's v=None bug.
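
For context on the v=None fix: vjp and jvp accept an optional cotangent/tangent vector and fall back to an all-ones vector when it is omitted. The sketch below uses the paddle.incubate.autograd path these helpers later settled in; the module path at the time of this commit (paddle.autograd.functional) differs, and the exact return structure is an assumption.

```python
import paddle
from paddle.incubate.autograd import vjp, jvp

def func(x):
    return paddle.tanh(x)

x = paddle.rand([3])
x.stop_gradient = False
v = paddle.ones_like(x)

y, x_cot = vjp(func, x, v)   # vector-Jacobian product: v^T J
y, x_tan = jvp(func, x, v)   # Jacobian-vector product: J v

# The case fixed here: omitting v should behave like passing all-ones.
y, x_cot_default = vjp(func, x)
print(x_cot, x_cot_default)
```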

- 14 October 2021, 1 commit

Committed by levi131

- 13 October 2021, 1 commit

Committed by levi131
* modify format
* modify format

- 12 October 2021, 1 commit

Committed by Tongxin Bai
* autograd.functional passed pylint checker.
* autograd.functional: fix import errors.
* autograd.functional: fixed unit tests.
* autograd.functional minor format change

- 29 September 2021, 1 commit

Committed by levi131
* init functional jacobian api
* finish test with dtype float32
* add float64 test case
* polish code
* use atol=1e-5 with dtype float64
* fix for ci
* set timeout for test_jacobian
* init hessian API
* save status
* polish API docstring
* modify docstring
* add utils.py
* save status
* fix dygraph double grad dtype error when calling for high differential scenario
* reinvoke ci
* test_hessian.py is ok
* polish hessian API
* init vhp
* Revert "init vhp" (this reverts commit cbd4d3b66abe82b0ac10721b9eddeb7d82e0a1c8)
* add test for partial_engine.cc
* modify numerical_delta with dtype float32
* merge fix for dtype float64
* spell fix
* polish code
* rm _stop_gradient_pre_process

Co-authored-by: JiabinYang <360788950@qq.com>

- 27 September 2021, 1 commit

Committed by levi131
* init functional jacobian api
* finish test with dtype float32
* add float64 test case
* polish code
* use atol=1e-5 with dtype float64
* fix for ci
* set timeout for test_jacobian
* polish API docstring
* modify docstring

- 16 September 2021, 1 commit

Committed by chentianyu03
* remove autograd/grad api
* import grad, no_grad, set_grad_enable from autograd module
* modify import no_grad_ as no_grad

- 04 August 2021, 1 commit

Committed by chentianyu03
* add gradients_with_optimizer api
* modify gradients_with_optimizer
* add gradients_with_optimizer api into paddle.autograd.backward_mode
* add gradients_with_optimizer test case
* add doc for gradients_with_optimizer
* add doc for gradients_with_optimizer
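
The exact signature of gradients_with_optimizer is not shown in this log; conceptually it bundles static-graph gradient computation with the optimizer's apply step. The sketch below reproduces that combination by hand with paddle.static.gradients and Optimizer.apply_gradients, purely as an illustration of the idea rather than of the new API itself.

```python
import numpy as np
import paddle

paddle.enable_static()

main = paddle.static.Program()
startup = paddle.static.Program()
with paddle.static.program_guard(main, startup):
    x = paddle.static.data(name="x", shape=[4, 8], dtype="float32")
    w = paddle.static.create_parameter(shape=[8, 1], dtype="float32")
    loss = paddle.mean(paddle.matmul(x, w))

    # What gradients_with_optimizer bundles: compute grads for the parameters,
    # then let the optimizer append its update ops for the (param, grad) pairs.
    grads = paddle.static.gradients([loss], [w])
    opt = paddle.optimizer.SGD(learning_rate=0.1)
    opt.apply_gradients(list(zip([w], grads)))

exe = paddle.static.Executor(paddle.CPUPlace())
exe.run(startup)
exe.run(main, feed={"x": np.random.rand(4, 8).astype("float32")})
```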

- 11 June 2021, 1 commit

Committed by zhiboniu
* update 2.0 public api in all left files
* reverse device.py all list; fix some flake8 errors

- 25 April 2021, 1 commit

Committed by WeiXin

- 15 April 2021, 1 commit

Committed by WeiXin
* custom python backward
* polish up the code
* polish up the code
* polish up the code.
* Fix code format and comments.
* Delete redundant files.
* add unittest.
* edit unittest.
* edit unittest.
* Remove redundant header files.
* Improve coverage and remove redundant code.
* support saving for backward.
* polish code according to comments.
* Add support type for PyLayer.
* Modify the DOC.
* polish Doc.
* polish Doc.
* polish Doc.
* polish Doc.
* polish Doc.
* polish Doc.
* polish code and make the code robust.
* Modify the code format.
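
The PyLayer mechanism added here lets users write the backward pass in Python. A minimal sketch of the pattern, including the "support saving for backward" item above, using the documented paddle.autograd.PyLayer interface on a toy cube function:

```python
import paddle
from paddle.autograd import PyLayer

class Cube(PyLayer):
    @staticmethod
    def forward(ctx, x):
        y = x * x * x
        ctx.save_for_backward(x)   # "saving for backward" from this PR
        return y

    @staticmethod
    def backward(ctx, grad_y):
        x, = ctx.saved_tensor()
        return grad_y * 3.0 * x * x   # d(x^3)/dx = 3x^2

x = paddle.rand([4])
x.stop_gradient = False
y = Cube.apply(x)
y.mean().backward()
print(x.grad)
```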

- 01 April 2021, 1 commit

Committed by chentianyu03
* add custom init grad for backward function
* add custom init grad for backward function
* handle when the grad_tensor is none
* handle when the grad_tensor is none
* fix the args type error on windows platform
* modify the args order and doc
* format code
* add grad_tensor to xpu
* modify the grad_tensor type check
* add paddle.backward api to support multi tensors gradient compute
* add paddle.backward api to support multi tensors gradient compute
* add paddle.autograd module and backward api
* change tensor.backward func args
* modify tensor backward api
* remove create_graph inputs args
* add doc and example code for backward api
* when have the same tensor, throw error
* modify test Init func args
* modify the execute.Init func args in test files
* add paddle.autograd package in setup.py.in
* modify error msg, remove _run_backward method in class Tensor
* add test cases for backward api
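
A short sketch of the two entry points described above: Tensor.backward with a custom initial gradient, and paddle.autograd.backward over several tensors. Argument names follow the public API of later releases and are assumptions for this snapshot.

```python
import paddle

# Custom initial gradient passed to Tensor.backward.
x = paddle.to_tensor([1.0, 2.0, 3.0], stop_gradient=False)
y = x * 2
y.backward(grad_tensor=paddle.to_tensor([1.0, 2.0, 3.0]))
print(x.grad)    # [2., 4., 6.]

# Multi-tensor variant: paddle.autograd.backward(tensors, grad_tensors).
a = paddle.to_tensor([1.0, 1.0, 1.0], stop_gradient=False)
u = a * 3
w = a + 1
paddle.autograd.backward([u, w], grad_tensors=[paddle.ones([3]), paddle.ones([3])])
print(a.grad)    # 3 + 1 = [4., 4., 4.]
```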