1. 06 May 2022, 1 commit
    • [AutoParallel] adapt for 2d laplace (#41601) · c043a21b
      Committed by zhaoyingli
      * add default_ctx in backward.py
      
      * record grad_var_to_var with grad_times
      
      * fix backward
      
      * update annotation
      
      * add complete_high_order_grad in complete_forward
      
      * add dist slice op
      
      * update grad_var_to_var type
      
      * update partition_block init mapping before loss op
      
      * update compatible for 'XShape' & update 'allreduce_vars'
      
      * add dist reshape op when input dim equal to output dim
      
      * update 'set_grad_var_shape' with grad_var_to_var
      
      * fix dist slice
      
      * fix set_grad_var_shape
      
      * add dist pnorm op
      
      * fix dist pnorm dist_attr
      
      * fix engine startprogram & adapt highorder grad
      
      * fix set_grad_var_shape when mp
      
      * update unittest
      
      * update cmakelist
      
      * default strategy in engine: dp
      
      * bug fix
      
      * tiny fix
      
      * flatten outputs
      
      * fix default strategy
      
      * init default ctx
      
      * tiny fix
      
      * test=allcase
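Several bullets in the commit above revolve around recording a grad_var_to_var mapping keyed by how many times backward has run (grad_times), so higher-order passes can trace each gradient variable back to the variable it differentiates. A minimal framework-free sketch of that bookkeeping, with illustrative names only (the real structure in Paddle may differ):

```python
# Hypothetical sketch of per-pass gradient-variable bookkeeping.
# grad_var_to_var maps, for the k-th backward pass, each gradient
# variable name back to the (forward or lower-order grad) variable
# it was created from.

def record_grad_var(grad_var_to_var, grad_times, grad_name, fwd_name):
    """Record grad_name -> fwd_name under the current backward pass index."""
    pass_map = grad_var_to_var.setdefault(grad_times, {})
    pass_map[grad_name] = fwd_name
    return grad_var_to_var

mapping = {}
record_grad_var(mapping, 0, "x@GRAD", "x")             # first backward
record_grad_var(mapping, 1, "x@GRAD@GRAD", "x@GRAD")   # second, higher-order
# mapping is now {0: {"x@GRAD": "x"}, 1: {"x@GRAD@GRAD": "x@GRAD"}}
```

Keying by the pass index keeps first-order and higher-order gradients from overwriting each other in one flat dictionary.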
  2. 28 April 2022, 1 commit
  3. 25 April 2022, 1 commit
    • fix en docs of some Apis (gradients, scope_guard, cuda_places, name_scope,... · 6dd9dd39
      Committed by Yilingyelu
      fix en docs of some Apis (gradients, scope_guard, cuda_places, name_scope, device_guard, load_program_state, scale, ParamAttr and WeightNormParamAttr) (#41604)
      
      * Update scope_guard; test=document_fix
      
      * gradients; test=document_fix
      
      * gradients; test=document_fix
      
      * name_scope; test=document_fix
      
      * cpu_places; test=document_fix
      
      * WeightNormParamAttr; test=document_fix
      
      * cuda_places; test=document_fix
      
      * load_program_state; test=document_fix
      
      * device_guard; test=document_fix
      
      * device_guard; test=document_fix
      
      * ParamAttr; test=document_fix
      
      * scale; test=document_fix
      
      * scale; test=document_fix
      
      * update code example;test=document_fix
      Co-authored-by: Chen Long <1300851984@qq.com>
  4. 20 April 2022, 1 commit
  5. 04 April 2022, 1 commit
    • Add dropout yaml (#41355) · 1c7001e7
      Committed by hong
      * add dropout slice yaml
      
      * remove useless code
      
      * fix infer shape error
      
      * skip infrt compile for dropout
  6. 28 March 2022, 1 commit
  7. 29 January 2022, 1 commit
    • Symbolic Hessian (#39221) · 64e7c715
      Committed by Tongxin Bai
      * [autograd] static Jacobian pass tests.
      
      * [autograd] apply CR suggested changes.
      
      * [autograd] more tests.
      
      * [autograd] add CPUPlace in tests.
      
      * [autograd] bug fixes.
      
      * [autograd] reformatted.
      
      * [autograd] adding Hessian, in progress.
      
      * [autograd] Hessian passes. A double grad bug fixed.
      
      * [autograd] fix renaming conflict in double backward pass.
      
      * [autograd] polish tests.
      
      * fix a bug when using brackets
      
      * debug for ci
      
      * [autograd] fixing Hessian test.
      
      * polish format.
      Co-authored-by: levi131 <83750468+levi131@users.noreply.github.com>
      Co-authored-by: levi131 <limaolin01@baidu.com>
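The Hessian built in the commit above is, mathematically, the Jacobian of the gradient. That relationship can be illustrated without any framework code by recovering an analytic Hessian with nested central differences; for f(x) = x0² · x1 the exact Hessian is [[2·x1, 2·x0], [2·x0, 0]]. This is a plain numerical sketch, not Paddle's symbolic implementation:

```python
# Numerical illustration: the Hessian is the Jacobian of the gradient.
# For f(x) = x0**2 * x1 the analytic Hessian is [[2*x1, 2*x0], [2*x0, 0]];
# we recover it with nested central finite differences.

def f(x):
    return x[0] ** 2 * x[1]

def grad(x, h=1e-5):
    """Central-difference gradient of f at x."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def hessian(x, h=1e-4):
    """Central-difference Jacobian of grad(), i.e. the Hessian of f."""
    rows = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        gp, gm = grad(xp), grad(xm)
        rows.append([(gp[j] - gm[j]) / (2 * h) for j in range(len(x))])
    return rows

H = hessian([3.0, 2.0])
# analytic Hessian at (3, 2) is [[4, 6], [6, 0]]; H matches to ~1e-5
```

The symbolic pass in the commit computes the same object exactly, by transforming the double-backward graph instead of differencing.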
  8. 30 December 2021, 1 commit
  9. 20 October 2021, 1 commit
    • [Auto Parallel] Generalization for Partition and Completion (#35735) · 797bd40d
      Committed by JZ-LIANG
      * default dist op
      
      * add dist_attr for dist op
      
      * add unittest
      
      * update inputname
      
      * update function name
      
      * add unittest
      
      * update CMakeLists.txt for CI
      
      * fix dist_matmul
      
      * fix compile error
      
      * update matmul to matmul_v2
      
      * unify api
      
      * unify api
      
      * todo
      
      * update distop forward func
      
      * update distop forward func
      
      * auto parallel backward
      
      * update dist op
      
      * autoparallel backward
      
      * add backward for embedding
      
      * temp1
      
      * temp2
      
      * temp3
      
      * temp4
      
      * backward done1
      
      * backward done2
      
      * backward done3
      
      * dist embedding remove mp mode
      
      * dist matmul remove mp mode
      
      * update dist embedding
      
      * dist op init1
      
      * dist op init 2
      
      * update unittest
      
      * context remove parallel mode
      
      * partitioner remove parallel mode
      
      * update unittest
      
      * a more general method to support varying mesh in pipeline parallel
      
      * support varying mesh in pipeline parallel
      
      * embedding support varying mesh in pipeline parallel
      
      * matmul support varying mesh in pipeline parallel
      
      * default dist op support varying mesh in pipeline parallel
      
      * dist attribute for startup program
      
      * default dist op support varying mesh in pipeline parallel 2
      
      * partitioner support varying mesh in pipeline parallel
      
      * revise logic for auto completion
      
      * revise framework.py
      
      * revise reshard unittest
      
      * revise unittest for parallelize
      
      * chmod
      
      * fixed bug for dist embedding name mapping
      Co-authored-by: zhaoyingli <zhaoyingli@baidu.com>
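The partitioner's core move (splitting an operator such as matmul across the ranks of a process mesh) can be illustrated outside the framework with a toy column-parallel matmul: each rank holds a shard of the weight's columns, computes its local product, and the full result is the column-wise concatenation. All names below are illustrative; Paddle's partitioner works on program IR, not Python lists:

```python
# Toy column-parallel matmul: shard W's columns across "ranks",
# compute local partial results, then concatenate along columns.

def matmul(X, W):
    """Plain dense matmul on nested lists."""
    return [[sum(x * w for x, w in zip(row, col)) for col in zip(*W)]
            for row in X]

def shard_columns(W, num_ranks):
    """Split W column-wise into num_ranks equal shards (row-major out)."""
    cols = list(zip(*W))
    per = len(cols) // num_ranks
    shards = [cols[r * per:(r + 1) * per] for r in range(num_ranks)]
    return [list(map(list, zip(*s))) for s in shards]

X = [[1.0, 2.0], [3.0, 4.0]]
W = [[1.0, 0.0, 2.0, 1.0],
     [0.0, 1.0, 1.0, 2.0]]

# Each "rank" computes X @ W_shard independently.
partials = [matmul(X, Ws) for Ws in shard_columns(W, 2)]
# Concatenate the partial outputs along columns: equals matmul(X, W).
combined = [sum(rows, []) for rows in zip(*partials)]
```

Supporting a *varying* mesh, as the commit does, amounts to letting each pipeline stage choose its own rank set and shard sizes rather than fixing one global split.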
  10. 19 October 2021, 1 commit
  11. 13 October 2021, 1 commit
    • [New Feature] Support triple grad in Paddle (#36187) · 2c44ee7e
      Committed by Jiabin Yang
      * native commit for triple grad of sigmod
      
      * Updated unittests files
      
      * init functional jacobian api
      
      * Updated triple_test func
      
      * Updated gradient_checker & test_script
      
      * finish test with dtype float32
      
      * add float64 test case
      
      * polish code
      
      * use atol=1e-5 with dtype float64
      
      * fix for ci
      
      * set timeout for test_jacobian
      
      * fix dygraph grad to support high differential
      
      * polish API docstring
      
      * Updated gradient checker and some related files
      
      * fix double grad strip error for high differential
      
      * fix double grad strip error for high differential
      
      * Add Sigmoid triple grad tests
      
      * fix dygraph double grad dtype error when calling for high differential scenario
      
      * Updated triple grad tests func
      
      * Use np.random to initialize ddx
      
      * Updated triple_grad_check func
      
      * add todo for gradient checker and refine some comments
      
      * remove additional code
      
      * add test for warning in backward.py
      
      * format python code
      Co-authored-by: veyron95 <veyron_wu@163.com>
      Co-authored-by: levi131 <limaolin01@baidu.com>
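The commit's first test target, the triple grad of sigmoid, has a closed form that a gradient checker can verify: with s = σ(x), σ′ = s(1−s), σ″ = s(1−s)(1−2s), and σ‴ = s(1−s)(1−6s+6s²). A framework-free sketch of that check, using a third-order central difference instead of Paddle's triple_grad_check:

```python
import math

# Analytic third derivative of sigmoid, cross-checked by a third-order
# central finite difference (mirrors what a triple-grad test verifies,
# without any framework code).

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def d3_sigmoid(x):
    """sigma'''(x) = s(1-s)(1 - 6s + 6s^2) with s = sigma(x)."""
    s = sigmoid(x)
    return s * (1 - s) * (1 - 6 * s + 6 * s * s)

def fd3(f, x, h=1e-3):
    """Third central difference:
    f'''(x) ~ (f(x+2h) - 2f(x+h) + 2f(x-h) - f(x-2h)) / (2h^3)."""
    return (f(x + 2 * h) - 2 * f(x + h) + 2 * f(x - h) - f(x - 2 * h)) / (2 * h ** 3)

analytic = d3_sigmoid(0.3)
numeric = fd3(sigmoid, 0.3)
# the two values agree closely for this step size
```

The real checker does the same comparison against the framework's triple backward graph, which is why the commit also touches gradient_checker.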
  12. 28 September 2021, 1 commit
    • [hybrid] seed and dropout op support force-cpu (#35820) · 58c8f6b3
      Committed by xiayanming
      * [HIP] fix op not support AMD GPU bug, the flag PADDLE_WITH_ROCM is invalid
      
      * [HIP] fix op not support AMD GPU bug, the flag PADDLE_WITH_ROCM is invalid
      
      * [HIP] fix op not support AMD GPU bug
      
      * [hybrid] seed and dropout op support force-cpu
      
      * [hybrid] seed and dropout op support force-cpu
      
      * [hybrid] seed and dropout op support force-cpu
      
      * [hybrid] seed and dropout op support force-cpu
      
      * [hybrid] seed and dropout op support force-cpu
      
      * [hybrid] fix seed ci failed issue
      
      * add AsExtra for force_cpu of seed op
  13. 05 August 2021, 1 commit
  14. 04 August 2021, 1 commit
    • Add gradient with optimizer API (#34395) · d9e63a81
      Committed by chentianyu03
      * add gradients_with_optimizer api
      
      * modify gradients_with_optimizer
      
      * add gradients_with_optimizer api into paddle.auto.backward_mode
      
      * add gradients_with_optimizer test case
      
      * add doc for gradients_with_optimizer
      
      * add doc for gradients_with_optimizer
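Conceptually, a gradients_with_optimizer-style API fuses "compute gradients" and "apply the optimizer update" into one call. A framework-free sketch of that shape, using a hand-written SGD step and numerical gradients (all names here are hypothetical, not Paddle's actual signature):

```python
# Sketch: one call that both computes gradients and applies an SGD
# update, mimicking the shape of a gradients-with-optimizer API.

def numeric_grad(loss_fn, params, h=1e-6):
    """Central-difference gradient of loss_fn at params."""
    grads = []
    for i in range(len(params)):
        p, m = list(params), list(params)
        p[i] += h
        m[i] -= h
        grads.append((loss_fn(p) - loss_fn(m)) / (2 * h))
    return grads

def gradients_with_sgd(loss_fn, params, lr=0.1):
    """Return (grads, updated_params) in a single fused call."""
    grads = numeric_grad(loss_fn, params)
    updated = [w - lr * g for w, g in zip(params, grads)]
    return grads, updated

loss = lambda w: (w[0] - 3.0) ** 2 + (w[1] + 1.0) ** 2
params = [0.0, 0.0]
for _ in range(50):
    _, params = gradients_with_sgd(loss, params)
# params converge toward the minimizer [3, -1]
```

Fusing the two steps lets a framework build the backward graph and the update ops together, which is the point of exposing them as one API.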
  15. 14 July 2021, 1 commit
  16. 05 July 2021, 1 commit
  17. 02 July 2021, 1 commit
  18. 09 June 2021, 1 commit
  19. 26 April 2021, 1 commit
  20. 07 April 2021, 1 commit
  21. 02 April 2021, 1 commit
  22. 12 January 2021, 1 commit
  23. 24 December 2020, 1 commit
  24. 26 November 2020, 1 commit
    • Add static_only decorator for static apis (#29015) · d0129fcd
      Committed by Chen Weihang
      * add static_only for static api
      
      * add static_only for class init
      
      * remove static_only for default_main_program
      
      * remove create_parameter & startup_program
      
      * remove failed apis
      
      * revert py_func import
      
      * remove global scope
      
      * remove some api
      
      * remove cuda pinned place
  25. 14 October 2020, 1 commit
  26. 28 September 2020, 1 commit
  27. 21 September 2020, 1 commit
  28. 11 September 2020, 1 commit
  29. 13 July 2020, 1 commit
  30. 14 May 2020, 1 commit
  31. 30 April 2020, 1 commit
    • Fix double_grad bug in static-graph (#24190) · 84cf5db8
      Committed by qingqing01
      * Rename internal gradient variables in multiple backward passes
      * so that they get different names from the previous backward.
      * For example:
      *   y = x * x, grad = fluid.gradients(fluid.gradients(y, x) + y * y, x)
      * In the second backward pass, gradient variable names of the partial
      * forward network (y * y) may have the same names as those from the
      * first fluid.gradients(y, x).
      
      test=develop
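The fix described above can be mimicked outside the framework: suffix every internal gradient name with a per-backward-pass tag so a second fluid.gradients call cannot collide with names from the first. This is a toy sketch of the renaming idea; the helper and suffix format are illustrative, not Paddle's exact scheme:

```python
# Toy sketch of the renaming fix: internal gradient variable names get
# a pass-specific suffix so repeated backward passes cannot collide.

def rename_internal_grads(grad_names, pass_idx):
    """Append a pass-specific suffix to every internal gradient name."""
    if pass_idx == 0:
        return list(grad_names)  # first backward keeps the plain names
    return ["%s@RENAME@%d" % (name, pass_idx) for name in grad_names]

first = rename_internal_grads(["y@GRAD", "x@GRAD"], 0)
second = rename_internal_grads(["y@GRAD", "x@GRAD"], 1)
assert set(first).isdisjoint(second)  # no cross-pass collisions
```

Without such a suffix, the second pass would silently overwrite the first pass's gradient variables, which is exactly the bug the commit fixes.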
  32. 15 April 2020, 1 commit
  33. 10 April 2020, 1 commit
  34. 09 April 2020, 1 commit
  35. 20 March 2020, 1 commit
    • Add dygraph double grad implementation (#22939) · a31d7328
      Committed by Zeng Jinle
      * add double grad implementation for dygraph, test=develop
      
      * polish code, add uts, test=develop
      
      * fix place bug, test=develop
      
      * polish codes, add more uts for coverages, test=develop
      
      * add no_grad_set, test=develop
      
      * add star gan ut, test=develop
      
      * follow comments, test=develop
  36. 19 March 2020, 1 commit
  37. 17 March 2020, 1 commit
  38. 03 March 2020, 1 commit
  39. 23 February 2020, 1 commit
  40. 10 February 2020, 1 commit