1. 11 Jul, 2022 (1 commit)
    • reorganize the higher order autodiff api (#44119) · dd63e5b4
      Committed by Xiaoxu Chen (usage sketch below)
      * move _gradients to primapi and rename to grad
      
      * modify jvp to call forward_grad in primitive mode
      
      * add primapi unittest and remove some unused test cases.
      
      * fix circular import problem
      
      * move paddle/autograd/functional into paddle/incubate/autograd/functional
      
      * remove unused JacobianBatchLast class
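      A minimal usage sketch of the reorganized API, assuming the paddle.incubate.autograd layout these commits describe (grad, forward_grad, prim2orig); namespaces and exact behavior vary across Paddle versions:

```python
import numpy as np
import paddle
from paddle.incubate import autograd

paddle.enable_static()
autograd.enable_prim()  # primitive mode: ops are decomposed into primitive ops

main, startup = paddle.static.Program(), paddle.static.Program()
with paddle.static.program_guard(main, startup):
    x = paddle.static.data('x', shape=[1], dtype='float32')
    x.stop_gradient = False
    y = paddle.tanh(x)
    x_grad = autograd.grad(y, x)         # reverse mode (formerly _gradients)
    y_dot = autograd.forward_grad(y, x)  # forward mode, what jvp now calls
    autograd.prim2orig()                 # lower primitive ops back to runnable ops

exe = paddle.static.Executor()
exe.run(startup)
print(exe.run(main, feed={'x': np.array([1.0], 'float32')},
              fetch_list=[x_grad, y_dot]))
```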
  2. 07 Jun, 2022 (1 commit)
  3. 05 Jun, 2022 (1 commit)
    • [code format check upgrade] step 2: yapf (#42944) · a072fca8
      Committed by Sing_chan (illustration below)
      * use yapf to format all python file
      
      * exclude two unittest files from yapf because they rely on writing and reading files, and reformatting would break them
      
      * disable diff_py_file because too many diff files caused the following command to fail
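      For illustration, a toy before/after of what running yapf over a file does; the repository pins its own style config, which is not reproduced here:

```python
# Before yapf: inconsistent spacing and line breaks.
def scaled_add(x,y,alpha = 1.0,beta=1.0 ):
    return alpha*x+ beta *y

# After `yapf --in-place`: identical semantics, canonical layout.
def scaled_add(x, y, alpha=1.0, beta=1.0):
    return alpha * x + beta * y
```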
  4. 30 May, 2022 (1 commit)
  5. 18 May, 2022 (1 commit)
    • Add support for forward and reverse high-order automatic differentiation mechanism (#41919) · f6ee202f
      Committed by WangZhen (usage sketch below)
      * Updated triple_grad_check func
      
      * add todo for gradient checker and refine some comments
      
      * remove additional code
      
      * add test for warning in backward.py
      
      * format python code
      
      * support multi input in triple gradient checker
      
      * Add matmul triple grad kernel
      
      * Updated comments of TODO
      
      * Supported some special tests
      
      * Change code-format to follow CI std
      
      * Updated gradient_checker.py
      
      * Fix conflicts
      
      * Removed unnecessary printing log
      
      * Change code style to follow CI std
      
      * merge upstream
      
      * add primops.py
      
      * add_p
      
      * rm useless files
      
      * add sub_p mul_p div_p
      
      * add sqrt_p and tanh_p
      
      * add reshape_p
      
      * add broadcast_p
      
      * Add python primitive wrappers.
      
      * Jvp rules updated.
      
      * JVP rules done for all the 17 primops.
      
      * quick check and fixes.
      
      * add jvp(op, *args)
      
      * add broadcast_p fill_constant_p matmul_p reduce_p reshape_p transpose_p
      
      * add split_p and concat_p
      
      * add gather_p and scatter_add_p
      
      * add slice_select_p and slice_assign_p
      
      * Add transpose rules.
      
      * add multi input check for add_p, sub_p, mul_p, div_p
      
      * update concat_p
      
      * Linearize and transpose in progress..
      
      * refine gather_p and scatter_add_p
      
      * updated.
      
      * update transpose.
      
      * refine slice_assign_p and slice_select_p
      
      * init commit for lower
      
      * Merged with primitive ops.
      
      * small update
      
      * add rules for orig2prim and prim2orig
      
      * add 9 test for prim ops
      
      * add more test and fix some bug
      
      * add more test
      
      * register proto
      
      * Adding primops test.
      
      * add shape valid check for broadcast_p op, and add keepdim attr into reduce_p op proto
      
      * support multi input and multi output for split_p and concat_p
      
      * Test updated.
      
      * update
      
      * fix slice bug for slice_select_p and slice_assign_p
      
      * updated.
      
      * Ops updated.
      
      * Refactor and bug fixes.
      
      * updated.
      
      * finish orig2prim and prim2orig rules
      
      * dtype for axis attr should be long int
      
      * update dtype for axis attr int64_t
      
      * update for iscan CI
      
      * Update primx.
      
      * Refactor vars in primx.
      
      * update for lower transform
      
      * add more shape and dtype check
      
      * update primx.py
      
      * change IndexTensor into int32 dtype
      
      * update
      
      * Fix linearize and transpose.
      
      * Update is_dot
      
      * Update is_dot
      
      * Update is_dot
      
      * add gradient aggregation, fix add_transpose.
      
      * pass first linearize+transpose test.
      
      * update test
      
      * refactor op registration and primx.
      
      * update rule for slice_assign
      
      * try test lower
      
      * update orig2prim and prim2orig
      
      * pass simple lower pass
      
      * update
      
      * Update input types in the unit test.
      
      * orig2prim segfault.
      
      * 50% for adam.minimize
      
      * test updated.
      
      * temp fix errors in removing vars.
      
      * primx updated.
      
      * update for matmul_v2 and reshape2 orig2prim
      
      * update for minimize
      
      * Refine primrules
      
      * Remove some code
      
      * supporting unused and unreachable vars.
      
      * update for use prim2orig in minimize
      
      * fix gather and scatter_add transpose.
      
      * Add rules UT
      
      * update scatter_add
      
      * Refine UT code
      
      * fix NoneType check in topo
      
      * Update gather_p pywrapper.
      
      * remove useless print
      
      * Merge tongxin PR and refine code
      
      * re-add some tests
      
      * rm useless print
      
      * polish code.
      
      * fix bug in minimize
      
      * add get_input_var_list and get_output_var_list and use it in lower
      
      * Fix scatter_add_p prim2orig
      
      * Update code and fix orig2prim/prim2orig UT
      
      * delete vars after block.desc._remove
      
      * Improve ops and vars clean up logics.
      
      * fix some bug in linearize and lower
      
      * update tanh transpose.
      
      * use set instead of list for var2remove
      
      * test updated.
      
      * polish code.
      
      * fix dot2bar delete.
      
      * merge tx/ad
      
      * add indextensor_dot for gather and scatter_add
      
      * add sorted for set
      
      * Fix scale_orig2prim params
      
      * fix some syntax bug
      
      * add global_lower_update list
      
      * Better handling of unused vars.
      
      * update tests.
      
      * Fix elementwise_sub orig2prim
      
      * support none for transpose rule
      
      * Merge and add transform UT
      
      * fix a bug in transpose
      
      * Fix transpose and UT
      
      * a hacky fix for concat op
      
      * Fix executor place
      
      * Refine variable name
      
      * Add elementwise_mul orig2prim and support p_norm when p=1
      
      * Add sqrt orig2prim rule and UT
      
      * merge wz test
      
      * rename files, add enable_prim, disable_prim, prim_enabled, delete global_lower_update
      
      * fix a bug in test_ad_transform_trans
      
      * revert modify in framework.py
      
      * add paddle.fluid.incubate.ad_transform to python/setup.py.in
      
      * Fix remove vars error
      
      * Fix p_norm_orig2prim
      
      * merge wz
      
      * Modify the code directory
      
      * Add utils.py and remove get_input/output_vars functions
      
      * Update maolin code
      
      * Rename UT and refine test_ad_transform_primops
      
      * Fix div_p jvp rule
      
      * Add higher derivatives UT
      
      * Move UT to autograd dir
      
      * Fix comments
      
      * import paddle in primops.py
      
      * Add some error message for assert
      
      * Refine UT class name and refine some comments in primreg.py
      
      * update minimize of paddle/optimizer for supporting new autograd
      
      * resolve circular import between backward.py and optimizer.py
      
      * fill gradients and minimize unittest
      
      * Replace `assert isinstance` with `raise TypeError`
      
      * Add some assert message for primx.py
      
      * Polish variable name
      
      * Add some assert message
      
      * add some docstring
      
      * refine some name
      
      * update the format of english documents
      
      * Split test_transform.py to two files to avoid ci error
      
      * fix the document format of enable_prim/disable_prim/prim2orig/prim_enabled
      
      * polish test_gradients_and_minimize
      
      * add default value for prim_enabled api doc
      
      * Remove some UT to avoid windows ci error
      
      * Enlarge test_gradients_and_minimize limit time
      
      * Fix ut limit time
      Co-authored-by: veyron95 <veyron_wu@163.com>
      Co-authored-by: Jiabin Yang <360788950@qq.com>
      Co-authored-by: levi131 <limaolin01@baidu.com>
      Co-authored-by: Tongxin Bai <waffle.bai@gmail.com>
      Co-authored-by: Xiaoxu Chen <chenxx_id@163.com>
      Co-authored-by: levi131 <83750468+levi131@users.noreply.github.com>
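      A hedged sketch of the flow this PR enables, using the enable_prim/prim_enabled names from its commit messages; the exact wiring of the optimizer is an assumption, not the PR's own test code:

```python
import numpy as np
import paddle
from paddle.incubate.autograd import enable_prim, prim_enabled

paddle.enable_static()
enable_prim()  # route autodiff through the primitive ops (add_p, mul_p, ...)
assert prim_enabled()

main, startup = paddle.static.Program(), paddle.static.Program()
with paddle.static.program_guard(main, startup):
    x = paddle.static.data('x', shape=[2, 2], dtype='float32')
    w = paddle.create_parameter(shape=[2, 2], dtype='float32')
    loss = paddle.sum(paddle.tanh(paddle.matmul(x, w)))
    # per the commit messages, minimize() applies orig2prim, the
    # linearize/transpose passes, and prim2orig under primitive mode
    paddle.optimizer.SGD(learning_rate=0.1).minimize(loss)

exe = paddle.static.Executor()
exe.run(startup)
exe.run(main, feed={'x': np.ones((2, 2), 'float32')}, fetch_list=[loss])
```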
  6. 06 May, 2022 (1 commit)
    • [AutoParallel] adapt for 2d laplace (#41601) · c043a21b
      Committed by zhaoyingli (illustrative sketch below)
      * add default_ctx in backward.py
      
      * record grad_var_to_var with grad_times
      
      * fix backward
      
      * update annotation
      
      * add complete_high_order_grad in complete_forward
      
      * add dist slice op
      
      * update grad_var_to_var type
      
      * update partition_block init mapping before loss op
      
      * update compatibility for 'XShape' & update 'allreduce_vars'
      
      * add dist reshape op when input dim equal to output dim
      
      * update 'set_grad_var_shape' with grad_var_to_var
      
      * fix dist slice
      
      * fix set_grad_var_shape
      
      * add dist pnorm op
      
      * fix dist pnorm dist_attr
      
      * fix engine startup program & adapt high-order grad
      
      * fix set_grad_var_shape when mp
      
      * update unittest
      
      * update cmakelist
      
      * default strategy in engine: dp
      
      * bug fix
      
      * tiny fix
      
      * flatten outputs
      
      * fix default strategy
      
      * init default ctx
      
      * tiny fix
      
      * test=allcase
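      The grad_var_to_var record above maps each gradient variable, across repeated backward passes, back to its forward variable so shapes and dist attrs can be propagated. A hypothetical plain-Python illustration of that bookkeeping, not Paddle's actual data structure:

```python
# Hypothetical sketch: grad_times -> {grad_var_name: forward_var_name}.
grad_var_to_var = {}

def record(grad_times, grad_name, fwd_name):
    grad_var_to_var.setdefault(grad_times, {})[grad_name] = fwd_name

record(1, "u@GRAD", "u")            # first-order backward
record(2, "u@GRAD@GRAD", "u@GRAD")  # second-order term in a 2d Laplace loss

# set_grad_var_shape-style use: every grad var inherits its forward var's shape.
shapes = {"u": [8, 8], "u@GRAD": [8, 8]}
for mapping in grad_var_to_var.values():
    for grad_name, fwd_name in mapping.items():
        shapes[grad_name] = shapes[fwd_name]
print(shapes["u@GRAD@GRAD"])  # [8, 8]
```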
  7. 28 Apr, 2022 (1 commit)
  8. 25 Apr, 2022 (1 commit)
    • fix en docs of some Apis (gradients, scope_guard, cuda_places, name_scope, device_guard, load_program_state, scale, ParamAttr and WeightNormParamAttr) (#41604) · 6dd9dd39
      Committed by Yilingyelu (usage sketch below)
      
      * Update scope_guard; test=document_fix
      
      * gradients; test=document_fix
      
      * gradients; test=document_fix
      
      * name_scope; test=document_fix
      
      * cpu_places; test=document_fix
      
      * WeightNormParamAttr; test=document_fix
      
      * cuda_places; test=document_fix
      
      * load_program_state; test=document_fix
      
      * device_guard; test=document_fix
      
      * device_guard; test=document_fix
      
      * ParamAttr; test=document_fix
      
      * scale; test=document_fix
      
      * scale; test=document_fix
      
      * update code example;test=document_fix
      Co-authored-by: Chen Long <1300851984@qq.com>
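      Two of the touched APIs in a minimal example; the calls below use today's paddle.static equivalents of the fluid entry points whose docs were fixed:

```python
import numpy as np
import paddle

paddle.enable_static()
main = paddle.static.Program()
with paddle.static.program_guard(main):
    x = paddle.static.data('x', shape=[2], dtype='float32')
    with paddle.static.device_guard('cpu'):  # pin the scale op to CPU
        y = paddle.scale(x, scale=2.0, bias=1.0)

scope = paddle.static.Scope()
with paddle.static.scope_guard(scope):       # execute inside a private scope
    exe = paddle.static.Executor(paddle.CPUPlace())
    print(exe.run(main, feed={'x': np.ones(2, 'float32')}, fetch_list=[y]))
```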
  9. 20 Apr, 2022 (1 commit)
  10. 04 Apr, 2022 (1 commit)
    • Add dropout yaml (#41355) · 1c7001e7
      Committed by hong (usage sketch below)
      * add dropout slice yaml
      
      * remove useless code
      
      * fix infer shape error
      
      * skip infrt compile for dropout
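      The op gaining a final-state yaml signature here, exercised through its public wrapper; the yaml change itself only affects operator registration, not this API:

```python
import paddle

x = paddle.ones([2, 4])
# default mode 'upscale_in_train': kept values are scaled by 1/(1-p) while
# training, so inference needs no rescaling
y_train = paddle.nn.functional.dropout(x, p=0.5, training=True)
y_eval = paddle.nn.functional.dropout(x, p=0.5, training=False)  # identity
print(y_train.numpy(), y_eval.numpy())
```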
  11. 28 Mar, 2022 (1 commit)
  12. 29 Jan, 2022 (1 commit)
    • Symbolic Hessian (#39221) · 64e7c715
      Committed by Tongxin Bai (usage sketch below)
      * [autograd] static Jacobian pass tests.
      
      * [autograd] apply CR suggested changes.
      
      * [autograd] more tests.
      
      * [autograd] add CPUPlace in tests.
      
      * [autograd] bug fixes.
      
      * [autograd] reformatted.
      
      * [autograd] adding Hessian, in progress.
      
      * [autograd] Hessian passes. A double grad bug fixed.
      
      * [autograd] fix renaming conflict in double backward pass.
      
      * [autograd] polish tests.
      
      * fix a bug when using brackets
      
      * debug for ci
      
      * [autograd] fixing Hessian test.
      
      * polish format.
      Co-authored-by: levi131 <83750468+levi131@users.noreply.github.com>
      Co-authored-by: levi131 <limaolin01@baidu.com>
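      A hedged sketch of the symbolic (static-graph) Hessian this PR introduces; the class now lives under paddle.incubate.autograd, and the constructor/indexing pattern below is taken from that later API:

```python
import numpy as np
import paddle
from paddle.incubate.autograd import Hessian

paddle.enable_static()
main, startup = paddle.static.Program(), paddle.static.Program()
with paddle.static.program_guard(main, startup):
    x = paddle.static.data('x', shape=[2], dtype='float32')
    x.stop_gradient = False

    def f(x):                # scalar-valued function of x
        return paddle.sum(x * x * x)

    hess = Hessian(f, x)     # lazy: indexing builds the double-grad graph
    h = hess[:, :]           # full 2x2 Hessian, diag(6 * x) for this f

exe = paddle.static.Executor(paddle.CPUPlace())
exe.run(startup)
print(exe.run(main, feed={'x': np.array([1., 2.], 'float32')}, fetch_list=[h]))
```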
  13. 30 Dec, 2021 (1 commit)
  14. 20 Oct, 2021 (1 commit)
    • [Auto Parallel] Generalization for Partition and Completion (#35735) · 797bd40d
      Committed by JZ-LIANG (illustrative sketch below)
      * default dist op
      
      * add dist_attr for dist op
      
      * add unittest
      
      * update inputname
      
      * update function name
      
      * add unittest
      
      * update CMakeLists.txt for CI
      
      * fix dist_matmul
      
      * fix compile error
      
      * update matmul to matmul_v2
      
      * unify api
      
      * unify api
      
      * todo
      
      * update distop forward func
      
      * update distop forward func
      
      * auto parallel backward
      
      * update dist op
      
      * autoparallel backward
      
      * add backward for embedding
      
      * temp1
      
      * temp2
      
      * temp3
      
      * temp4
      
      * backward done1
      
      * backward done2
      
      * backward done3
      
      * dist embedding remove mp mode
      
      * dist matmul remove mp mode
      
      * update dist embedding
      
      * dist op init1
      
      * dist op init 2
      
      * update unittest
      
      * context remove parallel mode
      
      * partitioner remove parallel mode
      
      * update unittest
      
      * a more general method to support varying mesh in pipeline parallel
      
      * support varying mesh in pipeline parallel
      
      * embedding support varying mesh in pipeline parallel
      
      * matmul support varying mesh in pipeline parallel
      
      * default dist op support varying mesh in pipeline parallel
      
      * dist attribute for startup program
      
      * default dist op support varying mesh in pipeline parallel 2
      
      * partitoner support varying mesh in pipeline parallel
      
      * revise logic for auto completion
      
      * revise framework.py
      
      * revise reshard unittest
      
      * revise unittest for parallelize
      
      * chmod
      
      * fixed bug for dist embedding name mapping
      Co-authored-by: zhaoyingli <zhaoyingli@baidu.com>
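      A hypothetical, heavily simplified illustration of the annotations the generalized partitioner consumes: one process mesh per pipeline stage (the meshes may differ in size, the "varying mesh" above) plus a per-tensor dims_mapping where -1 means replicated. Plain Python, not the actual auto-parallel API:

```python
# Hypothetical sketch of dist attributes for two pipeline stages.
mesh_stage0 = {"processes": [0, 1], "topology": [2]}
mesh_stage1 = {"processes": [2, 3, 4, 5], "topology": [4]}  # varying mesh size

dist_attrs = {
    "embedding.w": {"mesh": mesh_stage0, "dims_mapping": [0, -1]},  # row-sharded
    "matmul.x": {"mesh": mesh_stage1, "dims_mapping": [-1, 0]},     # col-sharded
}

def local_shape(shape, attr):
    """Per-rank shape of a tensor under its mesh/dims_mapping annotation."""
    topology = attr["mesh"]["topology"]
    return [dim if axis == -1 else dim // topology[axis]
            for dim, axis in zip(shape, attr["dims_mapping"])]

print(local_shape([1024, 512], dist_attrs["embedding.w"]))  # [512, 512]
print(local_shape([64, 1024], dist_attrs["matmul.x"]))      # [64, 256]
```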
  15. 19 Oct, 2021 (1 commit)
  16. 13 Oct, 2021 (1 commit)
    • [New Feature] Support triple grad in Paddle (#36187) · 2c44ee7e
      Committed by Jiabin Yang (usage sketch below)
      * native commit for triple grad of sigmod
      
      * Updated unittests files
      
      * init functional jacobian api
      
      * Updated triple_test func
      
      * Updated gradient_checker & test_script
      
      * finish test with dtype float32
      
      * add float64 test case
      
      * polish code
      
      * use atol=1e-5 with dtype float64
      
      * fix for ci
      
      * set timeout for test_jacobian
      
      * fix dygraph grad to support higher-order differentiation
      
      * polish API docstring
      
      * Updated gradient checker and some related files
      
      * fix double grad strip error for high differential
      
      * fix double grad strip error for high differential
      
      * Add Sigmoid triple grad tests
      
      * fix dygraph double grad dtype error when called in a higher-order differentiation scenario
      
      * Updated triple grad test func
      
      * Use np.random to initialize ddx
      
      * Updated triple_grad_check func
      
      * add todo for gradient checker and refine some comments
      
      * remove additional code
      
      * add test for warning in backward.py
      
      * format python code
      Co-authored-by: veyron95 <veyron_wu@163.com>
      Co-authored-by: levi131 <limaolin01@baidu.com>
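      The feature in eager code: a hedged sketch of a third derivative via nested paddle.grad calls, where create_graph=True keeps each backward pass differentiable; the PR adds the triple-grad kernels (e.g. sigmoid) that the final call relies on:

```python
import paddle

x = paddle.to_tensor([1.0], stop_gradient=False)
y = paddle.nn.functional.sigmoid(x)

g1, = paddle.grad(y, x, create_graph=True)   # dy/dx
g2, = paddle.grad(g1, x, create_graph=True)  # d2y/dx2
g3, = paddle.grad(g2, x)                     # d3y/dx3, needs the triple-grad kernel
print(g1.item(), g2.item(), g3.item())
```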
  17. 28 Sep, 2021 (1 commit)
    • [hybrid] seed and dropout op support force-cpu (#35820) · 58c8f6b3
      Committed by xiayanming
      * [HIP] fix bug where ops do not support AMD GPUs: the PADDLE_WITH_ROCM flag does not take effect

      * [HIP] fix bug where ops do not support AMD GPUs: the PADDLE_WITH_ROCM flag does not take effect

      * [HIP] fix bug where ops do not support AMD GPUs
      
      * [hybrid] seed and dropout op support force-cpu
      
      * [hybrid] seed and dropout op support force-cpu
      
      * [hybrid] seed and dropout op support force-cpu
      
      * [hybrid] seed and dropout op support force-cpu
      
      * [hybrid] seed and dropout op support force-cpu
      
      * [hybrid] fix seed ci failed issue
      
      * add AsExtra for force_cpu of seed op
  18. 05 Aug, 2021 (1 commit)
  19. 04 Aug, 2021 (1 commit)
    • Add gradient with optimizer API (#34395) · d9e63a81
      Committed by chentianyu03 (usage sketch below)
      * add gradients_with_optimizer api
      
      * modify gradients_with_optimizer
      
      * add gradients_with_optimizer api into paddle.autograd.backward_mode
      
      * add gradients_with_optimizer test case
      
      * add doc for gradients_with_optimizer
      
      * add doc for gradients_with_optimizer
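      A hedged sketch of what the combined API does, written out as its two underlying static-graph steps (paddle.static.gradients followed by the optimizer's apply_gradients); gradients_with_optimizer bundles these, and its exact import path changed over time:

```python
import numpy as np
import paddle

paddle.enable_static()
main, startup = paddle.static.Program(), paddle.static.Program()
with paddle.static.program_guard(main, startup):
    x = paddle.static.data('x', shape=[4, 4], dtype='float32')
    w = paddle.create_parameter(shape=[4, 4], dtype='float32')
    loss = paddle.mean(paddle.matmul(x, w))

    grads = paddle.static.gradients([loss], [w])  # backward ops only
    opt = paddle.optimizer.SGD(learning_rate=0.1)
    opt.apply_gradients(list(zip([w], grads)))    # then append the update ops

exe = paddle.static.Executor()
exe.run(startup)
exe.run(main, feed={'x': np.ones((4, 4), 'float32')}, fetch_list=[loss])
```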
  20. 14 Jul, 2021 (1 commit)
  21. 05 Jul, 2021 (1 commit)
  22. 02 Jul, 2021 (1 commit)
  23. 09 Jun, 2021 (1 commit)
  24. 26 Apr, 2021 (1 commit)
  25. 07 Apr, 2021 (1 commit)
  26. 02 Apr, 2021 (1 commit)
  27. 12 Jan, 2021 (1 commit)
  28. 24 Dec, 2020 (1 commit)
  29. 26 Nov, 2020 (1 commit)
    • Add static_only decorator for static apis (#29015) · d0129fcd
      Committed by Chen Weihang (illustrative sketch below)
      * add static_only for static api
      
      * add static_only for class init
      
      * remove static_only for default_main_program
      
      * remove create_parameter & startup_program
      
      * remove failed apis
      
      * revert py_func import
      
      * remove global scope
      
      * remove some api
      
      * remove cuda pinned place
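      A hypothetical sketch of what such a decorator does; the real one lives in Paddle's framework internals and has different error text:

```python
import functools
import paddle

def static_only(func):
    """Hypothetical: reject calls made while dygraph mode is active."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if paddle.in_dynamic_mode():
            raise AssertionError(
                f"'{func.__name__}' only supports static graph mode; "
                "call paddle.enable_static() first.")
        return func(*args, **kwargs)
    return wrapper

@static_only
def build_input():
    return paddle.static.data('x', shape=[1], dtype='float32')

paddle.enable_static()
build_input()  # fine here; raises if dygraph mode is on
```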
  30. 14 Oct, 2020 (1 commit)
  31. 28 Sep, 2020 (1 commit)
  32. 21 Sep, 2020 (1 commit)
  33. 11 Sep, 2020 (1 commit)
  34. 13 Jul, 2020 (1 commit)
  35. 14 May, 2020 (1 commit)
  36. 30 Apr, 2020 (1 commit)
    • Fix double_grad bug in static graph (#24190) · 84cf5db8
      Committed by qingqing01 (runnable example below)
      * Rename internal gradient variables in repeated backward passes so that they get different names from the previous backward pass. For example, with y = x * x and grad = fluid.gradients(fluid.gradients(y, x) + y * y, x), the gradient variable names of the partial forward network (y * y) built during the second backward pass could collide with names created by the first fluid.gradients(y, x).

      test=develop
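      The example from the commit message, made runnable under the assumption that fluid.gradients corresponds to today's paddle.static.gradients:

```python
import numpy as np
import paddle

paddle.enable_static()
main = paddle.static.Program()
with paddle.static.program_guard(main):
    x = paddle.static.data('x', shape=[1], dtype='float32')
    x.stop_gradient = False
    y = x * x
    dydx = paddle.static.gradients([y], [x])[0]  # first backward: 2x
    # second backward over (dy/dx + y*y): its internal grad vars must not
    # collide with names created by the first backward, the bug fixed here
    grad = paddle.static.gradients([dydx + y * y], [x])[0]

exe = paddle.static.Executor(paddle.CPUPlace())
print(exe.run(main, feed={'x': np.array([3.], 'float32')}, fetch_list=[grad]))
# d/dx (2x + x^4) = 2 + 4x^3 = 110.0 at x = 3
```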
  37. 15 Apr, 2020 (1 commit)
  38. 10 Apr, 2020 (1 commit)
  39. 09 Apr, 2020 (1 commit)
  40. 20 Mar, 2020 (1 commit)
    • Add dygraph double grad implementation (#22939) · a31d7328
      Committed by Zeng Jinle (usage sketch below)
      * add double grad implementation for dygraph, test=develop
      
      * polish code, add uts, test=develop
      
      * fix place bug, test=develop
      
      * polish codes, add more uts for coverages, test=develop
      
      * add no_grad_set, test=develop
      
      * add star gan ut, test=develop
      
      * follow comments, test=develop
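      A minimal eager-mode sketch of the double grad this PR implements; create_graph=True keeps the first backward pass differentiable, as in the StarGAN gradient-penalty unit test mentioned above:

```python
import paddle

x = paddle.to_tensor([2.0], stop_gradient=False)
y = x ** 3

dx, = paddle.grad(y, x, create_graph=True)  # first derivative: 3x^2
d2x, = paddle.grad(dx, x)                   # second derivative: 6x
print(dx.item(), d2x.item())                # 12.0 12.0
```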