- 16 May 2023, 3 commits

Committed by zhouweiwei2014

Committed by xiaoguoguo626807
* add rules
* modify no kernel yaml parse
* success op generate
* success test_silu_double
* modify bug
* modify static error
* modify silu_grad input
* modify kernel signature
* modify kernel signature
* code style
* code style
* review
* delete opinfo modify
* modify gradOpMaker
* modify gradOpMaker
* modify generated-j2
* add approve rules
* modify autograd_functional_static_test

Committed by cxxly

- 12 May 2023, 1 commit

Committed by Xiaoxu Chen
* [Dy2St] Fix x grad names when high order gradient
* Polish error msg
* Add inputs var to backward in dy2st
* Fix error
* Get grad names for backward API
* Fix save load
* Polish code
* Add ut
* [prim] fix not support optional grad bugs in higher order autodiff
* [prim] remove duplicate fill_any_like caused by infershape_for_composite
* fix _strip_grad_suffix_ bugs in higher-order autodiff
* [prim] create output for test_static_prim.cc

Co-authored-by: 0x45f <wangzhen45@baidu.com>

- 28 Apr 2023, 1 commit

Committed by xiaoguoguo626807
* add mul double grad
* add sub_double_grad
* add add/sub high-order test
* add multiply test
* modify other unsqueeze
* delete api.yaml
* only for make ci run
* modify unsqueeze
* modify unsqueeze
* tmp
* modify operants gen

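The commit above adds double-grad (second-order) support and tests for elementwise add/sub/multiply. As a rough illustration of what such tests exercise, here is a minimal sketch of the standard double-backward pattern with `paddle.grad` and `create_graph=True`; it is not the commit's own test code.

```python
import paddle

# Minimal sketch: second-order gradient of y = x * x.
x = paddle.to_tensor([3.0], stop_gradient=False)
y = x * x

# First-order gradient dy/dx = 2x. create_graph=True keeps the backward
# graph so it can itself be differentiated.
(dx,) = paddle.grad(y, x, create_graph=True)

# Second-order gradient d2y/dx2 = 2.
(d2x,) = paddle.grad(dx, x)

print(dx.numpy(), d2x.numpy())  # [6.] [2.]
```
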
- 27 Apr 2023, 1 commit

Committed by WangZhen
[Dy2St] Get grad names when calling append_backward to fix high-order gradient (#53250)

- 20 Apr 2023, 1 commit

Committed by co63oc

- 11 Apr 2023, 1 commit

Committed by Xiaoxu Chen

- 18 Feb 2023, 1 commit

Committed by zhouweiwei2014

- 16 Feb 2023, 1 commit

Committed by xiongkun
* [dy2static-bugfix] fix backward gradient aggregation bugs (Yolov3 and Yolov5 both hit the same problem)
* remove set_device
* code review fix

- 02 Feb 2023, 1 commit

Committed by Xiaoxu Chen
[PRIM] Support using the operator's output metadata info when constructing the static backward composite (#50043)
* [prim] support custom target_gradients
* support infershape after appending one grad op
* [prim] add simple net test
* fix test_loop segment fault bug
* [prim] fix infer shape segment fault bug when output of grad_op_desc is empty

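The first bullet above mentions custom `target_gradients` for the static backward build. A minimal static-graph sketch of what that argument does, assuming the public `paddle.static.gradients(targets, inputs, target_gradients=...)` signature; this is illustrative only, not the composite-prim path added by the commit.

```python
import paddle

paddle.enable_static()

main = paddle.static.Program()
startup = paddle.static.Program()
with paddle.static.program_guard(main, startup):
    x = paddle.static.data(name="x", shape=[2, 3], dtype="float32")
    x.stop_gradient = False
    y = paddle.tanh(x)

    # Seed the backward pass with a custom initial gradient for y
    # instead of the implicit all-ones tensor.
    dy = paddle.full(shape=[2, 3], fill_value=0.5, dtype="float32")
    (dx,) = paddle.static.gradients([y], [x], target_gradients=[dy])
```
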
- 20 Jan 2023, 1 commit

Committed by Jiabin Yang

- 17 Jan 2023, 2 commits

Committed by Jiabin Yang

Committed by Huihuang Zheng
Support 0-D Tensor in ConditionalBlockOp:
1. Add dygraph 0-D tensor support for ConditionalBlockOp
2. Set the scalar loss shape when `append_backward` is called

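The entry above covers the case where `append_backward` receives a scalar (0-D) loss. A hedged static-graph sketch of that call pattern, assuming the public `paddle.static.append_backward` API; it does not reproduce the ConditionalBlockOp change itself.

```python
import paddle

paddle.enable_static()

main = paddle.static.Program()
startup = paddle.static.Program()
with paddle.static.program_guard(main, startup):
    x = paddle.static.data(name="x", shape=[4, 8], dtype="float32")
    w = paddle.create_parameter(shape=[8, 1], dtype="float32")

    # paddle.mean reduces to a scalar (0-D) loss, which is the case this
    # commit makes append_backward handle when it sets the loss shape.
    loss = paddle.mean(paddle.matmul(x, w))
    param_grads = paddle.static.append_backward(loss)
```
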
- 16 Dec 2022, 1 commit

Committed by hong
* change StaticRNN to while
* update code
* fix rnn bug
* update
* fix _find_op_path_ bugs in append_backward
* polish code
* revert op proto
* update
* update while
* format
* revert test while loop op
* fix create array
* fix windows error
* fix bug
* update
* fix array write bug

Co-authored-by: xiongkun <xiongkun03@baidu.com>

- 22 Nov 2022, 1 commit

Committed by Nyakku Shigure
[CodeStyle][py36-][E722] remove import handling for collections.abc in different python versions (#48165)

- 08 Nov 2022, 1 commit

Committed by Nyakku Shigure
* [CodeStyle][py2][U004] unnecessary explicit `object` inheritance in class definition
* fix an increment

- 01 Nov 2022, 1 commit

Committed by Nyakku Shigure
* [CodeStyle][E711] use `is`/`is not` for comparison with `None`
* `self.assertTrue($A is None)` -> `self.assertIsNone($A)`
* `self.assertTrue($A is not None)` -> `self.assertIsNotNone($A)`
* `self.assertFalse($A is None)` -> `self.assertIsNotNone($A)`
* `self.assertEqual($A, None)` -> `self.assertIsNone($A)`
* `self.assertNotEqual($A, None)` -> `self.assertIsNotNone($A)`

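The mappings above are mechanical; for reference, a tiny runnable example of the target idioms (standard `unittest`, nothing Paddle-specific).

```python
import unittest


class ExampleTest(unittest.TestCase):
    def test_none_checks(self):
        result = None
        # Preferred: dedicated assertions instead of comparing with None.
        self.assertIsNone(result)
        self.assertIsNotNone("something")
        # Outside the unittest helpers, identity checks use `is` / `is not`.
        assert result is None


if __name__ == "__main__":
    unittest.main()
```
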
- 25 Oct 2022, 1 commit

Committed by Nyakku Shigure
* [CodeStyle][py2] remove `paddle.compat`
* remove compat from `paddle.__init__`
* enable_static in sample code
* Revert "enable_static in sample code" (reverts commit ffccaa633900154ea5f3d056e746aae9a1927399)
* enable_static in sample code

- 23 Oct 2022, 1 commit

Committed by Nyakku Shigure
* update config
* re-blacken python code
* temporarily disable date and diff_py_file
* skip a format

- 19 Oct 2022, 1 commit

Committed by Nyakku Shigure

- 18 Oct 2022, 1 commit

Committed by Nyakku Shigure
* [CodeStyle][py2] remove `compat` module (to_text)
* remove some unnecessary decode
* remove to_text definition and unittest
* Revert "remove to_text definition and unittest" (reverts commit a6b69cb8dca8b9b031ce10ea32d1040e7e0dd267)
* remove an assertion
* empty commit

- 17 Oct 2022, 1 commit

Committed by Nyakku Shigure
* [CodeStyle][py2] remove `compat` module (to_bytes)
* remove some unused imports
* clean up to_bytes definition and unittests
* Revert "clean up to_bytes definition and unittests" (reverts commit e726539e1768172a411ff60e63fab82f164343cf)
* use `b` prefix instead of `encode()`

- 27 Sep 2022, 1 commit

Committed by Nyakku Shigure
* [CodeStyle] remove all future imports
* revert test_error.py
* restore future imports in example code

- 14 Sep 2022, 1 commit

Committed by Nyakku Shigure
* trim trailing whitespace
* fix `.cmake-format.py`
* revert npu ut changes to avoid npu ci errors

- 07 Sep 2022, 1 commit

Committed by Charles-hit
* replace fill_zeros_like op with fill_any_like op in backward.py and tensor.py
* Remove unnecessary comments
* modify create op_desc param

- 23 Aug 2022, 1 commit

Committed by xiongkun

- 11 Jul 2022, 1 commit

Committed by Xiaoxu Chen
* move _gradients to primapi and rename it to grad
* modify jvp to call forward_grad in primitive mode
* add primapi unittest and remove some unused test cases
* fix circular import problem
* move paddle/autograd/functional into paddle/incubate/autograd/functional
* remove unused JacobianBatchLast class

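These entries reorganize the functional autograd APIs (grad, jvp, forward_grad) under the incubate namespace. A hedged sketch of forward-mode differentiation through that interface, assuming the `paddle.incubate.autograd.jvp(func, xs, v)` signature; treat it as an assumption-laden illustration rather than the commit's own code.

```python
import paddle
from paddle.incubate.autograd import jvp


def f(x):
    return paddle.tanh(x)


x = paddle.rand([3])
v = paddle.ones_like(x)  # tangent vector

# Forward-mode product: returns f(x) and J_f(x) @ v.
y, jvp_result = jvp(f, x, v)
print(y.shape, jvp_result.shape)
```
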
- 07 Jun 2022, 1 commit

Committed by Yuang Liu

- 05 Jun 2022, 1 commit

Committed by Sing_chan
* use yapf to format all python files
* exclude two unittest files from yapf because they rely on writing and reading files, and formatting would break them
* disable diff_py_file because too many diff files caused the following command to fail

- 30 May 2022, 1 commit

Committed by zhaoyingli
* use original id in dist_op_context.grad_op_id_to_op_id
* del assert
* remove redundant map

- 18 May 2022, 1 commit

Committed by WangZhen
* Updated triple_grad_check func
* add todo for gradient checker and refine some comments
* remove additional code
* add test for warning in backward.py
* format python code
* support multi input in triple gradient checker
* Add matmul triple grad kernel
* Updated comments of TODO
* Supported some special tests
* Change code-format to follow CI std
* Updated gradient_checker.py
* Fix conflicts
* Removed unnecessary printing log
* Change code style to follow CI std
* merge upstream
* add priops.py
* add_p
* rm useless files
* add sub_p mul_p div_p
* add sqrt_p and tanh_p
* add reshape_p
* add broadcast_p
* Add python primitive wrappers.
* Jvp rules updated.
* JVP rules done for all the 17 primops.
* quick check and fixes.
* add jvp(op, *args)
* add broadcast_p fill_constant_p matmul_p reduce_p reshape_p transpose_p
* add split_p and concat_p
* add gather_p and scatter_add_p
* add slice_select_p and slice_assign_p
* Add transpose rules.
* add multi input check for add_p, sub_p, mul_p, div_p
* update concat_p
* Linearize and transpose in progress..
* refine gather_p and scatter_add_p
* updated.
* update transpose.
* refine slice_assign_p and slice_select_p
* init commit for lower
* Merged with primitive ops.
* small update
* add rules for orig2prim and prim2orig
* add 9 tests for prim ops
* add more tests and fix some bugs
* add more tests
* register proto
* Adding primops test.
* add shape valid check for broadcast_p op, and add keepdim attr into reduce_p op proto
* support multi input and multi output for split_p and concat_p
* Test updated.
* update
* fix slice bug for slice_select_p and slice_assign_p
* updated.
* Ops updated.
* Refactor and bug fixes.
* updated.
* finish orig2prim and prim2orig rules
* dtype for axis attr should be long int
* update dtype for axis attr int64_t
* update for iscan CI
* Update primx.
* Refactor vars in primx.
* update for lower transform
* add more shape and dtype checks
* update primx.py
* change IndexTensor into int32 dtype
* update
* Fix linearize and transpose.
* Update is_dot
* Update is_dot
* Update is_dot
* add gradient aggregation, fix add_transpose.
* pass first linearize+transpose test.
* update test
* refactor op registration and primx.
* update rule for slice_assign
* try test lower
* update orig2prim and prim2orig
* pass simple lower pass
* update
* Update input types in the unit test.
* orig2prim segfault.
* 50% for adam.minimize
* test updated.
* temp fix errors in removing vars.
* primx updated.
* update for matmul_v2 and reshape2 orig2prim
* update for minimize
* Refine primrules
* Remove some code
* supporting unused and unreachable vars.
* update for use prim2orig in minimize
* fix gather and scatter_add transpose.
* Add rules UT
* update scatter_add
* Refine UT code
* fix nonetype check in topo
* Update gather_p pywrapper.
* remove useless print
* Merge tongxin PR and refine code
* readd some tests
* rm useless print
* polish code.
* fix bug in minimize
* add get_input_var_list and get_output_var_list and use them in lower
* Fix scatter_add_p prim2orig
* Update code and fix orig2prim/prim2orig UT
* delete vars after block.desc._remove
* Improve ops and vars clean-up logic.
* fix some bugs in linearize and lower
* update tanh transpose.
* use set instead of list for var2remove
* test updated.
* polish code.
* fix dot2bar delete.
* merge tx/ad
* add indextensor_dot for gather and scatter_add
* add sorted for set
* Fix scale_orig2prim params
* fix some syntax bugs
* add global_lower_update list
* Better handling of unused vars.
* update tests.
* Fix elementwise_sub orig2prim
* support none for transpose rule
* Merge and add transform UT
* fix a bug in transpose
* Fix transpose and UT
* a hacky fix for concat op
* Fix executor place
* Refine variable name
* Add elementwise_mul orig2prim and support p_norm when p=1
* Add sqrt orig2prim rule and UT
* merge wz test
* rename files; add enable_prim, disable_prim, prim_enabled; delete global_lower_update
* fix a bug in test_ad_transform_trans
* revert modify in framework.py
* add paddle.fluid.incubate.ad_transform to python/setup.py.in
* Fix remove vars error
* Fix p_norm_orig2prim
* merge wz
* Modify the code directory
* Add utils.py and remove get_input/output_vars functions
* Update maolin code
* Rename UT and refine test_ad_transform_primops
* Fix div_p jvp rule
* Add higher derivatives UT
* Remove UT to autograd dir
* Fix comments
* import paddle in primops.py
* Add some error messages for assert
* Refine UT class name and refine some comments in primreg.py
* update minimize of paddle/optimizer for supporting new autograd
* resolve circular importing between backward.py and optimizer.py
* fill gradients and minimize unittest
* Replace `assert isinstance` with `raise TypeError`
* Add some assert messages for primx.py
* Polish variable name
* Add some assert messages
* add some docstrings
* refine some names
* update the format of English documents
* Split test_transform.py to two files to avoid ci error
* fix the document format of enable_prim/disable_prim/prim2orig/prim_enabled
* polish test_gradients_and_minimize
* add default value for prim_enabled api doc
* Remove some UT to avoid windows ci error
* Enlarge test_gradients_and_minimize limit time
* Fix ut limit time

Co-authored-by: veyron95 <veyron_wu@163.com>
Co-authored-by: Jiabin Yang <360788950@qq.com>
Co-authored-by: levi131 <limaolin01@baidu.com>
Co-authored-by: Tongxin Bai <waffle.bai@gmail.com>
Co-authored-by: Xiaoxu Chen <chenxx_id@163.com>
Co-authored-by: levi131 <83750468+levi131@users.noreply.github.com>

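Among many other things, the commit above introduces the `enable_prim` / `disable_prim` / `prim_enabled` switches that control lowering to primitive operators for the new autograd transforms. A hedged sketch of toggling them follows; the `paddle.incubate.autograd` module path is my assumption (the commit itself wires the code under `paddle.fluid.incubate.ad_transform`), and the primitive-based transforms operate on static-graph programs.

```python
import paddle
from paddle.incubate.autograd import disable_prim, enable_prim, prim_enabled

# The primitive-operator transforms apply to static-graph programs.
paddle.enable_static()

enable_prim()
print(prim_enabled())   # True: lowering to primitive ops is active
disable_prim()
print(prim_enabled())   # False: back to the original operator set
```
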
- 06 May 2022, 1 commit

Committed by zhaoyingli
* add default_ctx in backward.py
* record grad_var_to_var with grad_times
* fix backward
* update annotation
* add complete_high_order_grad in complete_forward
* add dist slice op
* update grad_var_to_var type
* update partition_block: init mapping before loss op
* update compatibility for 'XShape' & update 'allreduce_vars'
* add dist reshape op when input dim equals output dim
* update 'set_grad_var_shape' with grad_var_to_var
* fix dist slice
* fix set_grad_var_shape
* add dist pnorm op
* fix dist pnorm dist_attr
* fix engine startprogram & adapt high-order grad
* fix set_grad_var_shape when mp
* update unittest
* update cmakelist
* default strategy in engine: dp
* bug fix
* tiny fix
* flatten outputs
* fix default strategy
* init default ctx
* tiny fix
* test=allcase

- 28 Apr 2022, 1 commit

Committed by pangyoki
* fix collections.Sequence in python3.10
* fix format

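Background for the fix above: Python 3.10 removed the old `collections.Sequence` alias, so the ABC must be imported from `collections.abc`. A small self-contained example of the portable form:

```python
# The ABCs have lived in collections.abc since Python 3.3, and the old
# aliases in collections were removed in Python 3.10.
from collections.abc import Sequence


def is_sequence_but_not_str(obj):
    """True for list/tuple-like containers, False for plain strings."""
    return isinstance(obj, Sequence) and not isinstance(obj, (str, bytes))


print(is_sequence_but_not_str([1, 2, 3]))  # True
print(is_sequence_but_not_str("abc"))      # False
```
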
- 25 Apr 2022, 1 commit

Committed by Yilingyelu
Fix the English docs of some APIs (gradients, scope_guard, cuda_places, name_scope, device_guard, load_program_state, scale, ParamAttr and WeightNormParamAttr) (#41604)
* Update scope_guard; test=document_fix
* gradients; test=document_fix
* gradients; test=document_fix
* name_scope; test=document_fix
* cpu_places; test=document_fix
* WeightNormParamAttr; test=document_fix
* cuda_places; test=document_fix
* load_program_state; test=document_fix
* device_guard; test=document_fix
* device_guard; test=document_fix
* ParamAttr; test=document_fix
* scale; test=document_fix
* scale; test=document_fix
* update code example; test=document_fix

Co-authored-by: Chen Long <1300851984@qq.com>

- 20 Apr 2022, 1 commit

Committed by Yilingyelu
* gradients; test=document_fix
* fix VarType; test=document_fix
* fix vartype; test=document_fix
* cumsum; test=document_fix
* t; test=document_fix

- 04 Apr 2022, 1 commit

Committed by hong
* add dropout slice yaml
* remove useless code
* fix infer shape error
* skip infrt compile for dropout

- 28 Mar 2022, 1 commit

Committed by Aurelius84
* Fix bug while specifying target grad in high-order gradient
* add more unittests
* add more unittests

- 29 Jan 2022, 1 commit

Committed by Tongxin Bai
* [autograd] static Jacobian pass tests.
* [autograd] apply CR suggested changes.
* [autograd] more tests.
* [autograd] add CPUPlace in tests.
* [autograd] bug fixes.
* [autograd] reformatted.
* [autograd] adding Hessian, in progress.
* [autograd] Hessian passes. A double grad bug fixed.
* [autograd] fix renaming conflict in double backward pass.
* [autograd] polish tests.
* fix a bug when using brackets
* debug for ci
* [autograd] fixing Hessian test.
* polish format.

Co-authored-by: levi131 <83750468+levi131@users.noreply.github.com>
Co-authored-by: levi131 <limaolin01@baidu.com>

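The commit above builds out static-graph Jacobian and Hessian support. As a rough usage sketch, here is how that functionality is exposed through the later `paddle.incubate.autograd.Jacobian` / `Hessian` interface; the module path and indexing style are assumptions on my part, since the code added by this commit predates that packaging.

```python
import paddle
from paddle.incubate.autograd import Hessian, Jacobian


def scalar_func(x):
    return paddle.sum(paddle.tanh(x))


x = paddle.rand([4])
x.stop_gradient = False

# Lazily evaluated Jacobian of tanh at x (a 4x4 matrix) and Hessian of a
# scalar-valued function; slicing materializes the requested entries.
J = Jacobian(paddle.tanh, x)
H = Hessian(scalar_func, x)
print(J[:, :].shape, H[:, :].shape)
```
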
- 30 Dec 2021, 1 commit

Committed by Yulong Ao
* [Auto parallel] Make the ids of var and op unique
* [Auto Parallel] Rename dist_context back to distop_context