- 01 September 2022, 1 commit
Submitted by Xiaoxu Chen:
* add erf_p primitive operators
* add gelu orig2prim rule
- 20 August 2022, 1 commit
Submitted by Sing_chan:
* add max_p without test
* add test of max_p
* make max_p consistent with paddle.maximum
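For reference, the semantics max_p is being aligned with are those of paddle.maximum: an elementwise maximum with broadcasting. A minimal sketch (input values are arbitrary examples, not from the tests):

```python
import paddle

# Elementwise maximum with broadcasting; the max_p primitive follows these semantics.
x = paddle.to_tensor([[1.0, 2.0], [3.0, 4.0]])
y = paddle.to_tensor([2.5, 2.5])      # broadcast against the first axis of x
print(paddle.maximum(x, y))           # [[2.5, 2.5], [3.0, 4.0]]
```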
- 16 August 2022, 1 commit
Submitted by Sing_chan:
* add select_p
* fix bugs
* add custom test for select_p; modify select_p primrules
* modify according to Xiaoxu's comment
* add eq_p, select_p, pow_p, use autograd to test higher-order grad
* add requirement of autograd, modify expected type of eq
* modify according to Xiaoxu's comment
* import primops to use primops.pow
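The "autograd" added as a test requirement here is the third-party autograd package, presumably used as an oracle for higher-order gradients. A minimal sketch of that kind of reference check (the function and evaluation point are illustrative only):

```python
import autograd.numpy as anp
from autograd import grad

# Reference second derivative via the autograd package; unit tests can compare
# the framework's higher-order gradients against values computed this way.
f = lambda x: anp.tanh(x)
d2f = grad(grad(f))      # compose grad to obtain the second derivative
print(d2f(1.0))          # d^2/dx^2 tanh(x) evaluated at x = 1
```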
- 08 August 2022, 1 commit
Submitted by Sing_chan:
* add log_p for auto_grad
* add log_p_op.cc in prim_op_test srcs
* fix bug of wrong op name; add test in test_primops
* add test case of log in testprimapi
* fix bug of test_without_guard
* no need to fix test_without_guard
- 26 July 2022, 1 commit
Submitted by Xiaoxu Chen.
- 18 July 2022, 2 commits
Submitted by Xiaoxu Chen.
Submitted by levi131.
- 15 July 2022, 1 commit
Submitted by zlsh80826:
* Fix test_functional_conv2d_transpose random seed
* Fix random seed and use np.testing
* Fix random seed for test_lu_unpack_op
* Fix test_autograd_functional_dynamic random seed
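As context for these test fixes, the pattern applied is: pin the framework and NumPy seeds so the case is reproducible, then assert with np.testing so mismatches are reported elementwise. A minimal sketch (the op, shapes, and tolerances are illustrative, not taken from the affected tests):

```python
import numpy as np
import paddle

# Pin seeds so the test inputs are deterministic across runs.
paddle.seed(2022)
np.random.seed(2022)

x = paddle.randn([4, 8])
out = paddle.nn.functional.relu(x)
expected = np.maximum(x.numpy(), 0.0)

# np.testing reports which elements differ, unlike assertTrue(np.allclose(...)).
np.testing.assert_allclose(out.numpy(), expected, rtol=1e-6, atol=1e-6)
```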
- 14 July 2022, 1 commit
Submitted by levi131:
* hide prim2orig in executor
* add some test cases without param guard
* fix spelling error: param into program
* Use absolute path when importing paddle.incubate.autograd.prim2orig
- 13 July 2022, 1 commit
Submitted by levi131.
- 11 July 2022, 1 commit
Submitted by Xiaoxu Chen:
* move _gradients to primapi and rename it to grad
* modify jvp to call forward_grad in primitive mode
* add primapi unittest and remove some unused test cases
* fix circular import problem
* move paddle/autograd/functional into paddle/incubate/autograd/functional
* remove unused JacobianBatchLast class
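A rough sketch of how the renamed primapi entry points are used in static-graph primitive mode. The paths and signatures are as exposed under paddle.incubate.autograd around this point in the history; treat the details (and the toy program) as assumptions rather than a stable contract:

```python
import numpy as np
import paddle

paddle.enable_static()
paddle.incubate.autograd.enable_prim()   # run autodiff transforms on primitive ops

startup = paddle.static.Program()
main = paddle.static.Program()
with paddle.static.program_guard(main, startup):
    x = paddle.static.data('x', shape=[1], dtype='float32')
    x.stop_gradient = False
    y = x * x
    # forward_grad: forward-mode AD, which jvp delegates to in primitive mode.
    y_dot = paddle.incubate.autograd.forward_grad(y, x)
    # grad: reverse-mode AD, the renamed _gradients entry point.
    x_bar = paddle.incubate.autograd.grad(y, x)
    paddle.incubate.autograd.prim2orig()  # lower primitive ops back to original ops

exe = paddle.static.Executor()
exe.run(startup)
fwd, rev = exe.run(main,
                   feed={'x': np.array([2.0], dtype='float32')},
                   fetch_list=[y_dot, x_bar])
print(fwd, rev)  # both should equal [4.], i.e. dy/dx at x = 2

paddle.incubate.autograd.disable_prim()
paddle.disable_static()
```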
- 28 June 2022, 1 commit
Submitted by Xiaoxu Chen:
* enable Jacobian, Hessian to support the new autograd
* fix prim mode failure in PR-CI-Windows
* add forward_gradients api
* add forward_gradients api
* skip test_autograd_functional_prim in Windows CI
* fix test_autograd_functional_prim timeout
* remove the block parameter in prim2orig method
* remove duplicate to_tensors code snippet # test=allcases
- 14 June 2022, 1 commit
Submitted by Wilber:
* cmake-lint
* update
- 05 June 2022, 1 commit
Submitted by Sing_chan:
* use yapf to format all Python files
* yapf excludes two unittest files because they rely on writing and reading files, and formatting would break them
* disable diff_py_file because too many changed files cause the following command to fail
- 04 June 2022, 1 commit
Submitted by Sing_chan.
- 02 June 2022, 1 commit
Submitted by Weilong Wu:
* [Eager] set FLAGS_retain_grad to false
* Add FLAGS_retain_grad_ for some tests
* Add FLAGS_retain_grad_ to some tests
* modified set_flags
* modified set_flags
* fix windows-ci and windows-openblas-ci
* import paddle.fluid
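For context, set_flags is the usual way such global flags are toggled from a test. The exact flag name below (FLAGS_retain_grad_for_all_tensor) is an assumption based on the FLAGS_retain_grad family these commits reference, not something stated in the log:

```python
import paddle

# Toggle a global framework flag from Python, the mechanism these tests adjust.
# NOTE: the flag name is assumed; substitute the one the test actually needs.
paddle.set_flags({'FLAGS_retain_grad_for_all_tensor': False})
print(paddle.get_flags(['FLAGS_retain_grad_for_all_tensor']))
```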
- 18 May 2022, 1 commit
Submitted by WangZhen:
* Updated triple_grad_check func * add todo for gradient checker and refine some comments * remove additional code * add test for warnging in backward.py * format python code * support multi input in triple gradient checker * Add matmul triple grad kernel * Updated comments of TODO * Supported some special tests * Change code-format to follow CI std * Updated gradient_checker.py * Fix conflicts * Removed unnecessary printing log * Change code style to follow CI std * merge upstream * add priops.py * add_p * rm useless files * add sub_p mul_p div_p * add sqrt_p and tanh_p * add reshape_p * add broadcast_p * Add python primitive wrappers. * Jvp rules updated. * JVP rules done for all the 17 primops. * quick check and fixes. * add jvp(op, *args) * add broadcast_p fill_constant_p matmul_p reduce_p reshape_p transpose_p * add split_p and concat_p * add gather_p and scatter_add_p * add slice_select_p and slice_assign_p * Add transpose rules. * add multi input check for add_p, sub_p, mul_p, div_p * update concat_p * Linearize and transpose in progress.. * refine gather_p and scatter_add_p * updated. * update transpose. * refine slice_assign_p and slice_select_p * init commit for lower * Merged with primitive ops. * small update * add rules for orig2prim and prim2orig * add 9 test for prim ops * add more test and fix some bug * add more test * register proto * Adding primops test. * add shape valid check for broadcast_p op, and add keepdim attr into reduce_p op proto * support multi input and multi output for split_p and concat_p * Test updated. * update * fix slice bug for slice_select_p and slice_assign_p * updated. * Ops updated. * Refactor and bug fixes. * updated. * finish orig2prim and prim2orig rules * dtype for axis attr should be long int * update dtype for axis attr int64_t * update for iscan CI * Update primx. * Refactor vars in primx. * update for lower transform * add more shape and dtype check * update primx.py * change IndexTensor into int32 dtype * update * Fix linearize and transpose. * Update is_dot * Update is_dot * Update is_dot * add gradient aggregation, fix add_transpose. * pass first linearize+transpose test. * update test * refactor op registration and primx. * update rule for slice_assign * try test lower * update orig2prim and prim2orig * pass simple lower pass * update * Update input types in the unit test. * orig2prim segfault. * 50% for adam.minimize * test updated. * temp fix erros in removing vars. * primx updated. * update for matmul_v2 and reshape2 orig2prim * update for minimize * Refine primrules * Remove some code * supporting unused and unreachable vars. * update for use prim2orig in minimize * fix gather and scatter_add transpose. * Add rules UT * update scatter_add * Refine UT code * fix nonetype check in topo * Update gather_p pywrapper. * remove useless print * Merge tongxin PR and refine code * readd some test * rm useless print * polish code. * fix bug in minimize * add get_input_var_list and get_output_var_list and use it in lower * Fix scatter_add_p prim2orig * Update code and fix orig2prim/prim2orig UT * delete vars after block.desc._remove * Improve ops and vars clean up logics. * fix some bug in linearize and lower * update tanh transpose. * use set instead of list for var2remove * test updated. * polish code. * fix dot2bar delete. * merge tx/ad * add indextensor_dot for gather and scatter_add * add sorted for set * Fix scale_orig2prim params * fix some syntax bug * add golbal_lower_update list * Better handling of unused vars. * update tests. 
* Fix elementwise_sub orig2prim * support none for transpose rule * Merge and add transform UT * fix a bug in transpose * Fix transpose and UT * a hacky fix for cancat op * Fix exector place * Refine variable name * Add elementwise_mul orig2prim and support p_norm when p=1 * Add sqrt orig2prim rule and UT * merge wz test * rename files, add enable_prim, disable_prim, prim_enabled, delete global_lower_update * fix a bug in test_ad_transform_trans * revert modify in framework.py * add paddle.fluid.incubate.ad_transform to python/setup.py.in * Fix remove vars error * Fix p_norm_orig2prim * merge wz * Modify the code directory * Add utils.py and remove get_input/output_vars functions * Update maolin code * Rename UT and refine test_ad_transform_primops * Fix div_p jvp rule * Add higher derivatives UT * Remove UT to autograd dir * Fix comments * import paddle in primops.py * Add some error message for assert * Refine UT class name and refine some comments in primreg.py * update minimize of paddle/optimizer for supporting new autograd * resolve cicular importing between backward.py and optimizer.py * fill gradients and minimize unittest * Replace `assert isinstance` with `raise TypeError` * Add some assert message for primx.py * Polish variable name * Add some assert message * add some docstring * refine some name * update the format of english documents * Split test_transform.py to two files to avoid ci error * fix the document format of enable_prim/disable_prim/prim2orig/prim_enabled * polish test_gradients_and_minimize * add default value for prim_enabled api doc * Remove some UT to avoid windows ci error * Enlarge test_gradients_and_minimize limit time * Fix ut limit time Co-authored-by: Nveyron95 <veyron_wu@163.com> Co-authored-by: NJiabin Yang <360788950@qq.com> Co-authored-by: Nlevi131 <limaolin01@baidu.com> Co-authored-by: NTongxin Bai <waffle.bai@gmail.com> Co-authored-by: NXiaoxu Chen <chenxx_id@163.com> Co-authored-by: Nlevi131 <83750468+levi131@users.noreply.github.com>
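A condensed sketch of the workflow this change set establishes: enable_prim switches gradient construction to primitive operators, and minimize() performs the orig2prim transform, builds the backward pass on primitives, and lowers back via prim2orig. The program structure and the toy network below are illustrative, not taken from the patch:

```python
import numpy as np
import paddle
from paddle.incubate.autograd import enable_prim, disable_prim, prim_enabled

paddle.enable_static()
enable_prim()
assert prim_enabled()

main, startup = paddle.static.Program(), paddle.static.Program()
with paddle.static.program_guard(main, startup):
    x = paddle.static.data('x', shape=[4, 8], dtype='float32')
    linear = paddle.nn.Linear(8, 1)
    loss = paddle.sum(paddle.tanh(linear(x)))
    # With prim enabled, minimize() rewrites the graph with orig2prim, builds
    # gradients on primitive ops, then lowers them back with prim2orig.
    paddle.optimizer.Adam(learning_rate=0.01).minimize(loss)

exe = paddle.static.Executor()
exe.run(startup)
exe.run(main, feed={'x': np.random.rand(4, 8).astype('float32')})

disable_prim()
paddle.disable_static()
```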
- 14 April 2022, 1 commit
Submitted by Zhanlue Yang:
* [DoubleGrad] Enabled double grad test cases in eager_mode for test_imperative_double_grad
* Fixed elementwise issue
* Addressed CI failures
* [DoubleGrad] Enabled test_imperative_triple_grad test cases under eager_mode
* [DoubleGrad] Enabled test_autograd_functional_dynamic.py under eager mode
* Enabled more test cases
* Fixed performance issues
* Fixed minor issue
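These suites exercise higher-order differentiation in dynamic (eager) mode. A minimal sketch of the double-grad pattern they cover, using the public paddle.grad API (the function and values are illustrative):

```python
import paddle

x = paddle.to_tensor(2.0, stop_gradient=False)
y = x ** 3

# create_graph=True keeps the backward graph so it can be differentiated again,
# which is exactly what the double/triple grad tests exercise.
(dy_dx,) = paddle.grad(y, x, create_graph=True)
(d2y_dx2,) = paddle.grad(dy_dx, x, create_graph=False)

print(dy_dx)    # 3 * x**2 = 12
print(d2y_dx2)  # 6 * x    = 12
```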
- 08 April 2022, 1 commit
Submitted by Xiaoxu Chen.
- 04 April 2022, 1 commit
Submitted by Zhanlue Yang.
- 02 April 2022, 1 commit
Submitted by Xiaoxu Chen:
Enhance vjp/jvp/Jacobian/Hessian API to support dynamic and static graph as well as batched and unbatched mode (#40692)
* modify vjp/jvp for both dynamic and static graph
* enhance the Jacobian class to support a first/last batch dimension
* add unittests for jvp and for Jacobian with last-batch and first-batch layouts
* fix the incorrect shape for multi-index Jacobian
* enhance the Hessian class to support dynamic graph
* add Hessian class unittest
* bugfix: in the jvp double_backward_trick, zeros_like returns stop_gradient=True in static graph
* add API beta warnings
* add white_list for cuda11.x Windows CI
* optimize some code snippets and documents
* set unittest timeout to 100 seconds
* move vjp, jvp, Jacobian, Hessian to incubate
* fix vjp, jvp import path in sample code
* fix code style error in the autograd/__init__ file
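A minimal dynamic-graph sketch of the functional-autograd APIs this PR reworks, as exposed under paddle.incubate.autograd after the move to incubate; treat the exact signatures and shapes as assumptions rather than a stable contract:

```python
import paddle

def func(x):
    return paddle.matmul(x, x)

x = paddle.rand([2, 2])
x.stop_gradient = False

# Lazily evaluated Jacobian: entries are computed when indexed.
J = paddle.incubate.autograd.Jacobian(func, x)
print(J[:, :].shape)   # [4, 4] for the flattened output/input

# Hessian of a scalar-valued function.
H = paddle.incubate.autograd.Hessian(lambda x: paddle.sum(func(x)), x)
print(H[:].shape)      # [4, 4]

# Forward-mode (jvp) and reverse-mode (vjp) products; v defaults to ones.
_, jvp_out = paddle.incubate.autograd.jvp(func, x)
_, vjp_out = paddle.incubate.autograd.vjp(func, x)
```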
- 29 January 2022, 1 commit
Submitted by Tongxin Bai:
* [autograd] static Jacobian pass tests.
* [autograd] apply CR suggested changes.
* [autograd] more tests.
* [autograd] add CPUPlace in tests.
* [autograd] bug fixes.
* [autograd] reformatted.
* [autograd] adding Hessian, in progress.
* [autograd] Hessian passes. A double grad bug fixed.
* [autograd] fix renaming conflict in double backward pass.
* [autograd] polish tests.
* fix a bug when using brackets
* debug for ci
* [autograd] fixing Hessian test.
* polish format.
Co-authored-by: levi131 <83750468+levi131@users.noreply.github.com>
Co-authored-by: levi131 <limaolin01@baidu.com>
- 24 January 2022, 1 commit
Submitted by Tongxin Bai:
* [autograd] static Jacobian pass tests.
* [autograd] apply CR suggested changes.
* [autograd] more tests.
* [autograd] add CPUPlace in tests.
* [autograd] bug fixes.
* [autograd] reformatted.
- 29 November 2021, 1 commit
Submitted by Weilong Wu:
* native commit for triple grad of sigmod * Updated unittests files * init functional jacobian api * Updated trible_test func * Updated gradient_checker & test_script * finish test with dtype float32 * add float64 test case * polish code * use atol=1e-5 with dtype float64 * fix for ci * set timeout for test_jacobian * fix dygraph grad to support high differential * polish API docstring * Updated gradient checker and some related files * fix double grad strip error for high differential * fix double grad strip error for high differential * Add Sigmoid triple grad tests * fix dygraph double grad dtype error when calling for high differential senario * Updated triple grad teses func * Use np.random to initialize ddx * Updated triple_grad_check func * add todo for gradient checker and refine some comments * remove additional code * add test for warnging in backward.py * format python code * support multi input in triple gradient checker * Add matmul triple grad kernel * Updated comments of TODO * Supported some special tests * Change code-format to follow CI std * Updated gradient_checker.py * Fix conflicts * Removed unnecessary printing log * Change code style to follow CI std * support batch in jacobian and hessian * add batch jacobian and batch hessian * Add batch_jacobian test, draft version * [New features] Add elementwise_mul triple grad kernel (#37152) * Add elementwise_mul triple grad kernel * Removed InplaceInferer and polished code * Add numerical_batch_jacobian,numerical_batch_hessian and tests * Support batch_jacobian and batch_numerical * Use pre-commit to check code format * Update doc, polish code, add unit test * Reset the TIMEOUT properties of test_jacobian to pass CI Co-authored-by: Nlevi131 <limaolin01@baidu.com> Co-authored-by: NJiabin Yang <360788950@qq.com>
- 18 October 2021, 2 commits
Submitted by levi131:
* init functional jacobian api * finish test with dtype float32 * add float64 test case * polish code * use atol=1e-5 with dtype float64 * fix for ci * set timeout for test_jacobian * init hessian API * save status * polish API docstring * modify docstring * add utils.py * save status * fix dygraph double grad dtype error when calling for high differential senario * reinvoke ci * test_hessian.py is ok * polish hessian API * init vhp * Revert "init vhp" This reverts commit cbd4d3b66abe82b0ac10721b9eddeb7d82e0a1c8. * init vhp * finish vhp API logically * add test for partial_engine.cc * modify numerical_delta with dtype float32 * merge fix for dtype float64 * spell fix * save status * polish code * rm _stop_gradient_pre_process * save status * add example for vhp interface * add _compute_numerical_vjp and _compute_numerical_vhp * test is ok * vhp is ok * add testVHPFloat64 * modify for comments * modify format * modify format * save status * test_vhp is ok * finish code polish * small modify for v is None Co-authored-by: NJiabinYang <360788950@qq.com>
Submitted by Tongxin Bai:
* autograd.functional passed pylint checker.
* autograd.functional: fix import errors.
* autograd.functional: fixed unit tests.
* autograd.functional minor format change
* [autograd.functional] Fixed vjp and jvp's v=None bug.
- 14 October 2021, 1 commit
Submitted by levi131.
- 13 October 2021, 1 commit
Submitted by levi131:
* modify format
* modify format
- 12 October 2021, 1 commit
Submitted by Tongxin Bai:
* autograd.functional passed pylint checker.
* autograd.functional: fix import errors.
* autograd.functional: fixed unit tests.
* autograd.functional minor format change
- 30 September 2021, 1 commit
Submitted by levi131.
- 29 September 2021, 1 commit
Submitted by levi131:
* init functional jacobian api * finish test with dtype float32 * add float64 test case * polish code * use atol=1e-5 with dtype float64 * fix for ci * set timeout for test_jacobian * init hessian API * save status * polish API docstring * modify docstring * add utils.py * save status * fix dygraph double grad dtype error when calling for high differential senario * reinvoke ci * test_hessian.py is ok * polish hessian API * init vhp * Revert "init vhp" This reverts commit cbd4d3b66abe82b0ac10721b9eddeb7d82e0a1c8. * add test for partial_engine.cc * modify numerical_delta with dtype float32 * merge fix for dtype float64 * spell fix * polish code * rm _stop_gradient_pre_process Co-authored-by: NJiabinYang <360788950@qq.com>
- 28 September 2021, 1 commit
Submitted by Jiabin Yang:
* fix dygraph double grad dtype error when calling for the high-order differential scenario
* reinvoke ci
* add test for partial_engine.cc
- 27 September 2021, 1 commit
Submitted by levi131:
* init functional jacobian api
* finish test with dtype float32
* add float64 test case
* polish code
* use atol=1e-5 with dtype float64
* fix for ci
* set timeout for test_jacobian
* polish API docstring
* modify docstring