1. 26 Dec 2022, 1 commit
    • [Auto Parallel] Merge the python and c++ impls of ProcessMesh (#47503) · 1c0afa79
      Yulong Ao committed
      * [Auto Parallel] Rename methods of ProcessMesh
      
      * [Auto Parallel] Impl the python process_mesh by the c++ one
      
      * [Auto Parallel] Add some minor modifications
      
      * [Auto Parallel] Rename some methods
      
      * [Auto Parallel] Remove unnecessary codes
      
      * [Auto Parallel] Add back some removed files
      
      * [Auto Parallel] Fix bugs
      
      * [Auto Parallel] Fix a bug
      
      * Update process_mesh.cc
      
      * [Auto Parallel] Fix a bug
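The merge described above makes the Python `ProcessMesh` a thin layer over the C++ implementation. A minimal sketch of that delegation pattern, assuming a pybind11-exported core class; all names here (`_CoreProcessMesh`, the constructor signature) are hypothetical stand-ins, not Paddle's actual API:

```python
class _CoreProcessMesh:
    """Stand-in for a pybind11-exported C++ class (hypothetical)."""

    def __init__(self, shape, process_ids, dim_names):
        self.shape = list(shape)
        self.process_ids = list(process_ids)
        self.dim_names = list(dim_names)


class ProcessMesh(_CoreProcessMesh):
    """Python-facing class whose state lives in the core implementation."""

    def __init__(self, mesh, dim_names=None):
        # Derive the shape from the (possibly nested) list of process ids.
        shape = []
        probe = mesh
        while isinstance(probe, list):
            shape.append(len(probe))
            probe = probe[0]
        # Flatten the nested list into a flat list of process ids.
        flat = mesh
        for _ in range(len(shape) - 1):
            flat = [x for row in flat for x in row]
        if dim_names is None:
            dim_names = ["d%d" % i for i in range(len(shape))]
        super().__init__(shape, flat, dim_names)


mesh = ProcessMesh([[0, 1], [2, 3]])
print(mesh.shape, mesh.process_ids)  # [2, 2] [0, 1, 2, 3]
```

Keeping the state in the core object means the C++ side and the Python side always observe the same mesh, which is the point of merging the two implementations.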
  2. 29 Nov 2022, 1 commit
  3. 24 Nov 2022, 1 commit
  4. 22 Nov 2022, 1 commit
  5. 15 Nov 2022, 2 commits
  6. 14 Nov 2022, 1 commit
  7. 10 Nov 2022, 1 commit
  8. 03 Nov 2022, 1 commit
  9. 23 Oct 2022, 1 commit
  10. 18 Oct 2022, 1 commit
    • [Auto Parallel] Add parallel tuner (#46189) · 3108ba11
      caozhou committed
      * add parallel tuner
      
      * add unittest
      
      * fix unittest
      
      * set timeout of unittest
      
      * set unittest timeout
      
      * fix auto_mode setting
      
      * update unittest
      
      * sync from develop and update unittest
      
      * remove unused import
      
      * update unittest
      
      * update cmakelist
      
      * add unittests
  11. 14 Oct 2022, 1 commit
  12. 12 Oct 2022, 1 commit
    • [CodeStyle][F401] remove unused imports in python/paddle/distributed (#46758) · fe716a0b
      Nyakku Shigure committed
      * [CodeStyle][F401] remove unused import in python/paddle/distributed
      
      * remove pass
      
      * empty commit
      
      * Fix ValueError: list.remove(x): x not in list for meta_optimizer_names.
      
      * Fix split import.
      
      * add noqa after meta_optimizers in factory
      
* restore collective ops
      
      * expand `import *`
      
      * add noqa after required imports
      
      * try to fix APIs without core.ops
      
      * Revert "try to fix APIs without core.ops"
      
      This reverts commit 6172beaf601e84bf61f2490c12c4739f0edaa5eb.
      
      * fix an increment
      
      * empty commit
      
      * add noqa after required imports
      
      * expand `import *`, fix ci error
      Co-authored-by: Shuangchi He <34329208+Yulv-git@users.noreply.github.com>
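Two of the fixes in this entry are common cleanup patterns. A short sketch of both, assuming flake8's F401 check; the function and list names below are illustrative, not the project's actual code:

```python
# 1. An intentional re-export kept despite flake8's F401 ("imported but
#    unused") warning, silenced with a noqa comment:
from os import path as _path  # noqa: F401


# 2. Guarding list.remove() so that a missing element does not raise
#    "ValueError: list.remove(x): x not in list":
def remove_if_present(names, name):
    """Remove `name` from `names` only when it is actually there."""
    if name in names:
        names.remove(name)
    return names


meta_optimizer_names = ["a_opt", "b_opt"]
remove_if_present(meta_optimizer_names, "c_opt")  # no ValueError raised
print(meta_optimizer_names)  # ['a_opt', 'b_opt']
```

The membership check before `remove()` trades one extra scan of the list for not having to wrap the call in `try/except ValueError`.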
  13. 28 Sep 2022, 1 commit
  14. 14 Sep 2022, 2 commits
  15. 13 Sep 2022, 1 commit
  16. 05 Sep 2022, 1 commit
  17. 31 Aug 2022, 1 commit
  18. 25 Aug 2022, 1 commit
  19. 23 Aug 2022, 1 commit
  20. 16 Aug 2022, 2 commits
  21. 12 Aug 2022, 1 commit
  22. 09 Aug 2022, 1 commit
  23. 03 Aug 2022, 1 commit
  24. 29 Jul 2022, 1 commit
  25. 28 Jul 2022, 1 commit
  26. 25 Jul 2022, 1 commit
    • [Auto Parallel] Add dist op cost (#44146) · d0f4465d
      caozhou committed
      * update comp cost
      
      * add dist default op cost
      
      * add dist fill constant batch size like op cost
      
      * add elewise op cost
      
      * add fill_constant_batch_size_like op cost unittest
      
      * add unittest and remove fill_constant_batch_size_like grad op cost
      
      * add to cmakelist
      
      * fix unittest bug
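The per-op cost work above follows a registry pattern: each op type can register its own cost function, and unregistered ops fall back to a default estimate. A minimal sketch of that idea; every name here is hypothetical and the cost formulas are placeholder assumptions, not Paddle's actual model:

```python
# Registry mapping op type -> cost function (illustrative only).
_OP_COST_REGISTRY = {}


def register_op_cost(op_type):
    """Decorator registering a cost function for one op type."""
    def wrapper(fn):
        _OP_COST_REGISTRY[op_type] = fn
        return fn
    return wrapper


@register_op_cost("fill_constant_batch_size_like")
def _fill_constant_cost(num_elements):
    # Placeholder assumption: one write per output element.
    return num_elements


def op_cost(op_type, num_elements):
    """Look up a registered cost fn, else use a default elementwise estimate."""
    fn = _OP_COST_REGISTRY.get(op_type, lambda n: 2 * n)
    return fn(num_elements)


print(op_cost("fill_constant_batch_size_like", 8))  # registered fn: 8
print(op_cost("unknown_op", 8))                     # default fallback: 16
```

The default entry is what "dist default op cost" suggests: ops without a dedicated model still get a rough estimate, so the planner never has to special-case missing entries.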
  27. 13 Jul 2022, 1 commit
  28. 07 Jul 2022, 1 commit
  29. 29 Jun 2022, 1 commit
  30. 05 Jun 2022, 1 commit
    • [code format check upgrade] step 2: yapf (#42944) · a072fca8
      Sing_chan committed
      * use yapf to format all python file
      
* yapf excludes two unittest files because they rely on writing and reading files, and reformatting would break them
      
* disable diff_py_file because too many diff files caused the following command to fail
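A repo-wide yapf migration like this one is usually driven by a `.style.yapf` file at the repository root. A minimal sketch; the key names follow yapf's documented knobs, but the values chosen here are assumptions, not the project's actual settings:

```ini
; Hypothetical .style.yapf sketch (values are assumptions)
[style]
based_on_style = pep8
column_limit = 80
```

With this in place, `yapf -i path/to/file.py` formats files consistently regardless of who runs it.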
  31. 01 Jun 2022, 2 commits
    • [AutoParallel & Science] Miscellaneous improvements (#43139) · f59bcb1c
      JZ-LIANG committed
      * adapt for 10 loss
      
      * partitioner support optimizer
    • [Auto Parallel] Add miscellaneous improvements (#43108) · 010aba33
      Yulong Ao committed
      * [Auto Parallel] Add the parallel tuner
      
      * [Auto Parallel] Improve the parallel tuner and fix some bugs
      
* update cost model
      
      * update import Resharder by dist op
      
      * update cost model
      
      * fix comp cost bug
      
      * update cost model
      
* [Auto Parallel] Amend the dist attr for #processes=1
      
      * update cost model and tuner
      
      * update cost model and tuner
      
      * update cost model and tuner
      
      * update cluster
      
      * update reshard
      
      * [Auto Parallel] Add the estimation from the cost model
      
      * [Auto Parallel] Reimplement the backup and restore functions
      
      * [Auto Parallel] Fix the bugs of the parallel tuner
      
      * [Auto Parallel] Update the engine api and dist context
      
      * [Auto Parallel] Work around the high order grad problem
      
      * [Auto Parallel] Add some miscellaneous improvements
      
      * [Auto Parallel] Add a unittest for DistributedContext
      Co-authored-by: caozhou <caozhou@radi.ac.cn>
  32. 19 May 2022, 2 commits
  33. 10 May 2022, 1 commit
  34. 07 May 2022, 1 commit
  35. 06 May 2022, 1 commit
    • [AutoParallel] adapt for 2d laplace (#41601) · c043a21b
      zhaoyingli committed
      * add default_ctx in backward.py
      
      * record grad_var_to_var with grad_times
      
      * fix backward
      
      * update annotation
      
      * add complete_high_order_grad in complete_forward
      
      * add dist slice op
      
      * update grad_var_to_var type
      
      * update partition_block init mapping before loss op
      
      * update compatible for 'XShape' & update 'allreduce_vars'
      
      * add dist reshape op when input dim equal to output dim
      
      * update 'set_grad_var_shape' with grad_var_to_var
      
      * fix dist slice
      
      * fix set_grad_var_shape
      
      * add dist pnorm op
      
      * fix dist pnorm dist_attr
      
      * fix engine startprogram & adapt highorder grad
      
      * fix set_grad_var_shape when mp
      
      * update unittest
      
      * update cmakelist
      
      * default strategy in engine: dp
      
      * bug fix
      
      * tiny fix
      
      * flatten outputs
      
      * fix default strategy
      
      * init default ctx
      
      * tiny fix
      
      * test=allcase