- 07 September 2022, 1 commit

Submitted by Charles-hit

* replace fill_zeros_like op with fill_any_like op in backward.py and tensor.py
* Remove unnecessary comments
* modify create op_desc param

(see the sketch below)
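For orientation, a minimal sketch of the user-facing APIs that sit on top of these two ops, assuming the usual mapping (paddle.full_like is backed by fill_any_like, which subsumes fill_zeros_like because the fill value is a free parameter); this is illustrative context, not the commit's own code:

```python
import paddle

x = paddle.randn([2, 3])
z1 = paddle.zeros_like(x)        # an all-zero tensor with x's shape and dtype
z2 = paddle.full_like(x, 0.0)    # full_like takes an arbitrary fill value; 0.0 reproduces zeros_like
print(bool((z1 == z2).all()))    # True
```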
- 23 August 2022, 1 commit

Submitted by xiongkun
- 11 July 2022, 1 commit

Submitted by Xiaoxu Chen

* move _gradients to primapi and rename it to grad
* modify jvp to call forward_grad in primitive mode
* add primapi unittest and remove some unused test cases
* fix circular import problem
* move paddle/autograd/functional into paddle/incubate.autograd/functional
* remove unused JacobianBatchLast class

(see the sketch below)
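A minimal sketch of the forward-mode jvp API this entry refers to, assuming the paddle.incubate.autograd.jvp(func, xs, v) functional form; in primitive mode the same computation is routed through forward_grad, per the commit description:

```python
import paddle
from paddle.incubate.autograd import jvp

def func(x):
    return paddle.tanh(x)

x = paddle.randn([3])
v = paddle.ones_like(x)        # tangent vector
out, x_dot = jvp(func, x, v)   # forward-mode Jacobian-vector product
print(out.shape, x_dot.shape)
```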
- 07 June 2022, 1 commit

Submitted by Yuang Liu
- 05 June 2022, 1 commit

Submitted by Sing_chan

* use yapf to format all Python files
* exclude two unittest files from yapf because they rely on writing and reading files, and reformatting would break them
* disable diff_py_file because too many diff files cause the subsequent command to fail
- 30 May 2022, 1 commit

Submitted by zhaoyingli

* use original id in dist_op_context.grad_op_id_to_op_id
* del assert
* remove redundant map
- 18 May 2022, 1 commit

Submitted by WangZhen

* Updated triple_grad_check func
* add todo for gradient checker and refine some comments
* remove additional code
* add test for warning in backward.py
* format python code
* support multi input in triple gradient checker
* Add matmul triple grad kernel
* Updated comments of TODO
* Supported some special tests
* Change code-format to follow CI std
* Updated gradient_checker.py
* Fix conflicts
* Removed unnecessary printing log
* Change code style to follow CI std
* merge upstream
* add priops.py
* add_p
* rm useless files
* add sub_p mul_p div_p
* add sqrt_p and tanh_p
* add reshape_p
* add broadcast_p
* Add python primitive wrappers.
* Jvp rules updated.
* JVP rules done for all the 17 primops.
* quick check and fixes.
* add jvp(op, *args)
* add broadcast_p fill_constant_p matmul_p reduce_p reshape_p transpose_p
* add split_p and concat_p
* add gather_p and scatter_add_p
* add slice_select_p and slice_assign_p
* Add transpose rules.
* add multi input check for add_p, sub_p, mul_p, div_p
* update concat_p
* Linearize and transpose in progress..
* refine gather_p and scatter_add_p
* updated.
* update transpose.
* refine slice_assign_p and slice_select_p
* init commit for lower
* Merged with primitive ops.
* small update
* add rules for orig2prim and prim2orig
* add 9 test for prim ops
* add more test and fix some bug
* add more test
* register proto
* Adding primops test.
* add shape valid check for broadcast_p op, and add keepdim attr into reduce_p op proto
* support multi input and multi output for split_p and concat_p
* Test updated.
* update
* fix slice bug for slice_select_p and slice_assign_p
* updated.
* Ops updated.
* Refactor and bug fixes.
* updated.
* finish orig2prim and prim2orig rules
* dtype for axis attr should be long int
* update dtype for axis attr int64_t
* update for iscan CI
* Update primx.
* Refactor vars in primx.
* update for lower transform
* add more shape and dtype check
* update primx.py
* change IndexTensor into int32 dtype
* update
* Fix linearize and transpose.
* Update is_dot
* Update is_dot
* Update is_dot
* add gradient aggregation, fix add_transpose.
* pass first linearize+transpose test.
* update test
* refactor op registration and primx.
* update rule for slice_assign
* try test lower
* update orig2prim and prim2orig
* pass simple lower pass
* update
* Update input types in the unit test.
* orig2prim segfault.
* 50% for adam.minimize
* test updated.
* temp fix errors in removing vars.
* primx updated.
* update for matmul_v2 and reshape2 orig2prim
* update for minimize
* Refine primrules
* Remove some code
* supporting unused and unreachable vars.
* update for use prim2orig in minimize
* fix gather and scatter_add transpose.
* Add rules UT
* update scatter_add
* Refine UT code
* fix nonetype check in topo
* Update gather_p pywrapper.
* remove useless print
* Merge tongxin PR and refine code
* readd some test
* rm useless print
* polish code.
* fix bug in minimize
* add get_input_var_list and get_output_var_list and use it in lower
* Fix scatter_add_p prim2orig
* Update code and fix orig2prim/prim2orig UT
* delete vars after block.desc._remove
* Improve ops and vars clean up logics.
* fix some bug in linearize and lower
* update tanh transpose.
* use set instead of list for var2remove
* test updated.
* polish code.
* fix dot2bar delete.
* merge tx/ad
* add indextensor_dot for gather and scatter_add
* add sorted for set
* Fix scale_orig2prim params
* fix some syntax bug
* add golbal_lower_update list
* Better handling of unused vars.
* update tests.
* Fix elementwise_sub orig2prim
* support none for transpose rule
* Merge and add transform UT
* fix a bug in transpose
* Fix transpose and UT
* a hacky fix for concat op
* Fix executor place
* Refine variable name
* Add elementwise_mul orig2prim and support p_norm when p=1
* Add sqrt orig2prim rule and UT
* merge wz test
* rename files, add enable_prim, disable_prim, prim_enabled, delete global_lower_update
* fix a bug in test_ad_transform_trans
* revert modify in framework.py
* add paddle.fluid.incubate.ad_transform to python/setup.py.in
* Fix remove vars error
* Fix p_norm_orig2prim
* merge wz
* Modify the code directory
* Add utils.py and remove get_input/output_vars functions
* Update maolin code
* Rename UT and refine test_ad_transform_primops
* Fix div_p jvp rule
* Add higher derivatives UT
* Remove UT to autograd dir
* Fix comments
* import paddle in primops.py
* Add some error message for assert
* Refine UT class name and refine some comments in primreg.py
* update minimize of paddle/optimizer for supporting new autograd
* resolve circular importing between backward.py and optimizer.py
* fill gradients and minimize unittest
* Replace `assert isinstance` with `raise TypeError`
* Add some assert message for primx.py
* Polish variable name
* Add some assert message
* add some docstring
* refine some name
* update the format of english documents
* Split test_transform.py to two files to avoid ci error
* fix the document format of enable_prim/disable_prim/prim2orig/prim_enabled
* polish test_gradients_and_minimize
* add default value for prim_enabled api doc
* Remove some UT to avoid windows ci error
* Enlarge test_gradients_and_minimize limit time
* Fix ut limit time

Co-authored-by: veyron95 <veyron_wu@163.com>
Co-authored-by: Jiabin Yang <360788950@qq.com>
Co-authored-by: levi131 <limaolin01@baidu.com>
Co-authored-by: Tongxin Bai <waffle.bai@gmail.com>
Co-authored-by: Xiaoxu Chen <chenxx_id@163.com>
Co-authored-by: levi131 <83750468+levi131@users.noreply.github.com>

(see the sketch below)
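A minimal sketch of the enable_prim / prim_enabled / disable_prim switches this work introduces, assuming they live under paddle.incubate.autograd as the commit notes suggest (static-graph mode):

```python
import paddle
from paddle.incubate.autograd import enable_prim, disable_prim, prim_enabled

paddle.enable_static()

enable_prim()            # lower to primitive ops when gradients are built
print(prim_enabled())    # True
disable_prim()
print(prim_enabled())    # False
```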
- 06 May 2022, 1 commit

Submitted by zhaoyingli

* add default_ctx in backward.py
* record grad_var_to_var with grad_times
* fix backward
* update annotation
* add complete_high_order_grad in complete_forward
* add dist slice op
* update grad_var_to_var type
* update partition_block init mapping before loss op
* update compatible for 'XShape' & update 'allreduce_vars'
* add dist reshape op when input dim equal to output dim
* update 'set_grad_var_shape' with grad_var_to_var
* fix dist slice
* fix set_grad_var_shape
* add dist pnorm op
* fix dist pnorm dist_attr
* fix engine startprogram & adapt highorder grad
* fix set_grad_var_shape when mp
* update unittest
* update cmakelist
* default strategy in engine: dp
* bug fix
* tiny fix
* flatten outputs
* fix default strategy
* init default ctx
* tiny fix
* test=allcase
- 28 April 2022, 1 commit

Submitted by pangyoki

* fix collections.Sequence in python3.10
* fix format

(see the sketch below)
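The Python 3.10 issue behind this fix, in one self-contained snippet: the ABC aliases were dropped from the top-level collections module, so code must import them from collections.abc instead:

```python
import collections
import collections.abc

try:
    Sequence = collections.Sequence        # works only on Python <= 3.9
except AttributeError:
    Sequence = collections.abc.Sequence    # portable location since Python 3.3

print(isinstance([1, 2, 3], Sequence))     # True
```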
- 25 April 2022, 1 commit

Submitted by Yilingyelu

Fix English docs of some APIs (gradients, scope_guard, cuda_places, name_scope, device_guard, load_program_state, scale, ParamAttr and WeightNormParamAttr) (#41604)

* Update scope_guard; test=document_fix
* gradients; test=document_fix
* gradients; test=document_fix
* name_scope; test=document_fix
* cpu_places; test=document_fix
* WeightNormParamAttr; test=document_fix
* cuda_places; test=document_fix
* load_program_state; test=document_fix
* device_guard; test=document_fix
* device_guard; test=document_fix
* ParamAttr; test=document_fix
* scale; test=document_fix
* scale; test=document_fix
* update code example; test=document_fix

Co-authored-by: Chen Long <1300851984@qq.com>
- 20 April 2022, 1 commit

Submitted by Yilingyelu

* gradients; test=document_fix
* fix VarType; test=document_fix
* fix vartype; test=document_fix
* cumsum; test=document_fix
* t; test=document_fix
- 04 April 2022, 1 commit

Submitted by hong

* add dropout slice yaml
* remove useless code
* fix infer shape error
* skip infrt compile for dropout
- 28 March 2022, 1 commit

Submitted by Aurelius84

* Fix bug when specifying target grad in high-order gradient
* add more unittests
* add more unittests

(see the sketch below)
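A hedged static-graph sketch of the case the fix targets: passing an explicit target gradient into a higher-order paddle.static.gradients call. Names and shapes are illustrative, and the exact failing pattern is an assumption, not taken from the commit:

```python
import paddle
import paddle.static as static

paddle.enable_static()

main_prog, startup_prog = static.Program(), static.Program()
with static.program_guard(main_prog, startup_prog):
    x = static.data(name="x", shape=[None, 1], dtype="float32")
    x.stop_gradient = False
    y = paddle.tanh(x)
    dx = static.gradients(y, x)                            # first-order grad
    seed = static.data(name="seed", shape=[None, 1], dtype="float32")
    ddx = static.gradients(dx, x, target_gradients=seed)   # second-order grad with a specified target grad
```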
- 29 January 2022, 1 commit

Submitted by Tongxin Bai

* [autograd] static Jacobian pass tests.
* [autograd] apply CR suggested changes.
* [autograd] more tests.
* [autograd] add CPUPlace in tests.
* [autograd] bug fixes.
* [autograd] reformatted.
* [autograd] adding Hessian, in progress.
* [autograd] Hessian passes. A double grad bug fixed.
* [autograd] fix renaming conflict in double backward pass.
* [autograd] polish tests.
* fix a bug when using brackets
* debug for ci
* [autograd] fixing Hessian test.
* polish format.

Co-authored-by: levi131 <83750468+levi131@users.noreply.github.com>
Co-authored-by: levi131 <limaolin01@baidu.com>
- 30 December 2021, 1 commit

Submitted by Yulong Ao

* [Auto Parallel] Make the id of var and op unique
* [Auto Parallel] Rename dist_context back to distop_context
- 20 October 2021, 1 commit

Submitted by JZ-LIANG

* default dist op
* add dist_attr for dist op
* add unittest
* update inputname
* update function name
* add unittest
* update CMakeLists.txt for CI
* fix dis_matmul
* fix compile error
* update matmul to matmul_v2
* unify api
* unify api
* todo
* update distop forward func
* update distop forward func
* auto parallel backward
* update dist op
* autoparallel backward
* add backward for embedding
* temp1
* temp2
* temp3
* temp4
* backward done1
* backward done2
* backward done3
* dist embedding remove mp mode
* dist matmul remove mp mode
* update dist embedding
* dist op init1
* dist op init 2
* update unittest
* context remove parallel mode
* partitioner remove parallel mode
* update unittest
* a more general method to support varying mesh in pipeline parallel
* support varying mesh in pipeline parallel
* embedding support varying mesh in pipeline parallel
* matmul support varying mesh in pipeline parallel
* default dist op support varying mesh in pipeline parallel
* dist attribute for startup program
* default dist op support varying mesh in pipeline parallel 2
* partitioner support varying mesh in pipeline parallel
* revise logic for auto completion
* revise framework.py
* revise reshard unittest
* revise unittest for parallelize
* chmod
* fixed bug for dist embedding name mapping

Co-authored-by: zhaoyingli <zhaoyingli@baidu.com>
- 19 October 2021, 1 commit

Submitted by WangXi
- 13 October 2021, 1 commit

Submitted by Jiabin Yang

* native commit for triple grad of sigmoid
* Updated unittest files
* init functional jacobian api
* Updated trible_test func
* Updated gradient_checker & test_script
* finish test with dtype float32
* add float64 test case
* polish code
* use atol=1e-5 with dtype float64
* fix for ci
* set timeout for test_jacobian
* fix dygraph grad to support high differential
* polish API docstring
* Updated gradient checker and some related files
* fix double grad strip error for high differential
* fix double grad strip error for high differential
* Add Sigmoid triple grad tests
* fix dygraph double grad dtype error when calling for high differential scenario
* Updated triple grad tests func
* Use np.random to initialize ddx
* Updated triple_grad_check func
* add todo for gradient checker and refine some comments
* remove additional code
* add test for warning in backward.py
* format python code

Co-authored-by: veyron95 <veyron_wu@163.com>
Co-authored-by: levi131 <limaolin01@baidu.com>

(see the sketch below)
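A minimal dygraph sketch of the triple-grad pattern exercised here, chaining paddle.grad with create_graph=True (assuming a build that ships the sigmoid triple-grad kernel this work adds):

```python
import paddle
import paddle.nn.functional as F

x = paddle.to_tensor([0.5, 1.0, 2.0], stop_gradient=False)
y = F.sigmoid(x)

dy_dx, = paddle.grad(y, x, create_graph=True)         # 1st-order gradient
d2y_dx2, = paddle.grad(dy_dx, x, create_graph=True)   # 2nd-order gradient
d3y_dx3, = paddle.grad(d2y_dx2, x)                    # 3rd-order gradient
print(d3y_dx3.numpy())
```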
- 28 September 2021, 1 commit

Submitted by xiayanming

* [HIP] fix op not support AMD GPU bug, the flag PADDLE_WITH_ROCM is invalid
* [HIP] fix op not support AMD GPU bug, the flag PADDLE_WITH_ROCM is invalid
* [HIP] fix op not support AMD GPU bug
* [hybrid] seed and dropout op support force-cpu
* [hybrid] seed and dropout op support force-cpu
* [hybrid] seed and dropout op support force-cpu
* [hybrid] seed and dropout op support force-cpu
* [hybrid] seed and dropout op support force-cpu
* [hybrid] fix seed ci failed issue
* add AsExtra for force_cpu of seed op
- 05 August 2021, 1 commit

Submitted by WangXi
- 04 August 2021, 1 commit

Submitted by chentianyu03

* add gradients_with_optimizer api
* modify gradients_with_optimizer
* add gradients_with_optimizer api into paddle.auto.backward_mode
* add gradients_with_optimizer test case
* add doc for gradients_with_optimizer
* add doc for gradients_with_optimizer
- 14 July 2021, 1 commit

Submitted by ShenLiang
- 05 July 2021, 1 commit

Submitted by WangXi
- 02 July 2021, 1 commit

Submitted by WangXi
- 09 June 2021, 1 commit

Submitted by wanghuancoder

* modify API nn.Bilinear's doc, test=develop

(see the sketch below)
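For reference, a short dygraph sketch of the paddle.nn.Bilinear layer whose docstring the commit touches (the sizes are illustrative):

```python
import paddle

layer = paddle.nn.Bilinear(in1_features=5, in2_features=4, out_features=3)
x1 = paddle.randn([2, 5])
x2 = paddle.randn([2, 4])
out = layer(x1, x2)    # bilinear form over (x1, x2) per output feature, plus bias
print(out.shape)       # [2, 3]
```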
- 26 April 2021, 1 commit

Submitted by xiemoyuan

* Modified params of some APIs to support tuple and list.
* fixed bug.
- 07 April 2021, 1 commit

Submitted by JZ-LIANG
- 02 April 2021, 1 commit

Submitted by JZ-LIANG
- 12 January 2021, 1 commit

Submitted by JZ-LIANG
- 24 December 2020, 1 commit

Submitted by tangwei12

* oneps (3/4)

Co-authored-by: MrChengmo <cmchengmo@163.com>
Co-authored-by: malin10 <malin10@baidu.com>
Co-authored-by: chengmo <chengmo@baidu.com>
- 26 November 2020, 1 commit

Submitted by Chen Weihang

* add static_only for static api
* add static_only for class init
* remove static_only for default_main_program
* remove creater_parameter & startup_program
* remove failed apis
* revert py_func import
* remove global scope
* remove some api
* remove cuda pinned place
- 14 October 2020, 1 commit

Submitted by Yiqun Liu
- 28 September 2020, 1 commit

Submitted by Aurelius84

* modify sample code
* variable -> tensor
* migrate program_guard sample code
* refine error message
* migrate program_guard
* refine comment style
* fix indent

(see the sketch below)
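A small sketch of the paddle.static.program_guard pattern these sample codes were migrated to (static-graph mode; the layer and names are illustrative):

```python
import paddle

paddle.enable_static()

main_prog = paddle.static.Program()
startup_prog = paddle.static.Program()
with paddle.static.program_guard(main_prog, startup_prog):
    # ops/tensors built here are recorded into main_prog / startup_prog
    x = paddle.static.data(name="x", shape=[None, 4], dtype="float32")
    y = paddle.static.nn.fc(x, size=2)

print(len(main_prog.global_block().ops) > 0)    # True
```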
- 21 September 2020, 1 commit

Submitted by Leo Chen

* support using add instead of sum to do gradient accumulation
* add inplace addto pass
* add grad_add op and inplace addto pass
* remove debug code
* code refine
* fix bug when several sum ops insert at the same op_idx
* fix Flags type
* add addto attribute for conv3d
* fix ut
* code clean
* fix type

(see the sketch below)
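A hedged sketch of switching on the inplace addto accumulation path described above, assuming the BuildStrategy.enable_addto flag (runtime behavior additionally depends on flags such as FLAGS_max_inplace_grad_add, which is an assumption here, not taken from the commit):

```python
import paddle

paddle.enable_static()

build_strategy = paddle.static.BuildStrategy()
build_strategy.enable_addto = True    # accumulate gradients with add/addto instead of a sum op
# The strategy would then be passed to a CompiledProgram / Executor as usual.
```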
- 11 September 2020, 1 commit

Submitted by Aurelius84

* fix calcu_gradients
* fix code place
* fix embedding interface usage
- 13 July 2020, 1 commit

Submitted by liym27

[while grad] Support pruning ops in find_op_path for the while sub-block when appending backward (#25330)

Prune ops that are not related to the loss in the while sub-block when constructing the backward op path. (see the sketch below)
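A hedged static-graph sketch of the situation the pruning targets: a while sub-block containing an op with no path to the loss, which append_backward should skip when building the backward op path (the example is illustrative, not the PR's own test case):

```python
import paddle
import paddle.static as static

paddle.enable_static()

main_prog, startup_prog = static.Program(), static.Program()
with static.program_guard(main_prog, startup_prog):
    i = paddle.zeros([1], dtype="int64")
    limit = paddle.full([1], 3, dtype="int64")
    w = static.create_parameter(shape=[1], dtype="float32", name="w")

    def cond(i, v):
        return paddle.less_than(i, limit)

    def body(i, v):
        unrelated = v * 0.0 + 1.0      # lives in the sub-block but has no path to the loss
        return i + 1, v * 2.0

    _, out = static.nn.while_loop(cond, body, [i, w])
    loss = paddle.mean(out)
    static.append_backward(loss)       # backward path should exclude the unrelated op
```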
- 14 May 2020, 1 commit

Submitted by Cindy Cai

* test=develop, test=document_fix
* test=develop, test=document_fix

Co-authored-by: swtkiwi <1208425345@qq.com>
- 30 April 2020, 1 commit

Submitted by qingqing01

Rename internal gradient variables in multiple backward passes so that they have different names from the previous backward. For example, with y = x * x and grad = fluid.gradients(fluid.gradients(y, x) + y * y, x), the gradient variable names of the partial forward network (y * y) in the second-time backward may collide with those of the first-time fluid.gradients(y, x). test=develop (the scenario is sketched below)
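The scenario described above, written out as a runnable static-graph sketch (using the paddle.static spelling of the fluid.gradients API; variable names are illustrative):

```python
import paddle
import paddle.static as static

paddle.enable_static()

main_prog, startup_prog = static.Program(), static.Program()
with static.program_guard(main_prog, startup_prog):
    x = static.data(name="x", shape=[None, 1], dtype="float32")
    x.stop_gradient = False
    y = x * x
    dx = static.gradients(y, x)                  # first-time backward
    grad = static.gradients(dx[0] + y * y, x)    # second-time backward over the partial forward net (y * y)
```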
- 15 April 2020, 1 commit

Submitted by mapingshuo

* allow amp and recompute to work together

(see the sketch below)
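A hedged sketch of requesting AMP and recompute together with today's fleet.DistributedStrategy switches; the checkpoint tensor names are placeholders, and mapping this onto the original fluid-era optimizer wrappers is an assumption:

```python
import paddle.distributed.fleet as fleet

strategy = fleet.DistributedStrategy()
strategy.amp = True            # mixed-precision training
strategy.recompute = True      # recompute (gradient checkpointing)
strategy.recompute_configs = {"checkpoints": ["fc_0.tmp_0", "fc_1.tmp_0"]}  # placeholder names
# The strategy is then passed to fleet.distributed_optimizer(...) as usual.
```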
- 10 April 2020, 1 commit

Submitted by Aurelius84

* API/OP (append_backward) error message enhancement; test=develop
* polish check_type; test=develop
* fix failed unittest; test=develop
* merge develop; test=develop

(see the sketch below)
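A minimal static-graph sketch of the append_backward API whose error messages this commit improves (shapes and names are illustrative):

```python
import paddle
import paddle.static as static

paddle.enable_static()

main_prog, startup_prog = static.Program(), static.Program()
with static.program_guard(main_prog, startup_prog):
    x = static.data(name="x", shape=[None, 13], dtype="float32")
    y = static.data(name="y", shape=[None, 1], dtype="float32")
    pred = static.nn.fc(x, size=1)
    loss = paddle.mean(paddle.nn.functional.square_error_cost(pred, y))
    param_grads = static.append_backward(loss)    # appends gradient ops, returns (param, grad) pairs
```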