1. 22 Sep 2022, 1 commit
  2. 20 Sep 2022, 3 commits
  3. 19 Sep 2022, 6 commits
    • Recompute unify incubate (#46073) (#46210) · 4bced24a
      wuhuachaocoding committed
    • [cherry-pick] add abs,mean,sum,ge,gt,pow,etc higher-order differentiation operators (#46184) · ad8beaaf
      Xiaoxu Chen committed
      * [cherry-pick] extend reduce_mean,reduce_sum,eq,ne,ge,abs,pow,etc higher order operators
      
      * add reduce_mean,reduce_sum primitive ops
      * add ne_p gt_p primitive operators
      * add ge_p abs_p primitive operators
      * add cast primitive operators
      * add pow,square prim2orig rules
      * add elementwise_div orig2prim rule
      
      * [cherry-pick] add mean,sum,ge,gt,ne,abs,etc higher-order differentiation operators(#45888)
      
      * add reduce_mean,reduce_sum primitive ops
      
      * add ne_p gt_p primitive operators
      
      * add ge_p abs_p primitive operators
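The cherry-pick above registers orig2prim/prim2orig rules that lower framework operators to a small primitive set for higher-order differentiation. A minimal pure-Python sketch of how such a rule registry can work; the registry, decorator, and tuple-based op representation here are illustrative assumptions, not Paddle's actual implementation (which lives under paddle.incubate.autograd):

```python
# Hypothetical orig-to-primitive lowering registry (illustrative only).
_ORIG2PRIM_RULES = {}

def orig2prim(op_type):
    """Register a lowering rule for an original operator type."""
    def register(rule):
        _ORIG2PRIM_RULES[op_type] = rule
        return rule
    return register

@orig2prim("elementwise_div")
def div_rule(x, y):
    # Lower x / y to mul_p(x, pow_p(y, -1)) in terms of primitives.
    return ("mul_p", x, ("pow_p", y, -1))

@orig2prim("square")
def square_rule(x):
    # Lower square(x) to pow_p(x, 2).
    return ("pow_p", x, 2)

def lower(op_type, *args):
    """Lower an original op to its primitive form, if a rule exists."""
    rule = _ORIG2PRIM_RULES.get(op_type)
    if rule is None:
        raise KeyError(f"no orig2prim rule for {op_type}")
    return rule(*args)
```

Keeping the primitive set small (pow_p, mul_p, reduce_sum_p, …) is what makes repeated transposition for higher-order gradients tractable.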
    • refactor mp. (#45803) (#46121) · e5dc9d61
      wuhuachaocoding committed
      * refactor mp.
      
      * update setup.py.
      
      * update mp_layers.py for compatibility.
      
      * add documents for mp_layers.py
      
      * update init.py
      
      * update collective.py.
      
      * update.
      
      * update mp_ops.py
      
      * update.
      
      * update code style.
      
      * update code style.
    • [Cherry-pick][Auto Parallel] Improve the APIs (#46164) · c5cc4278
      Yulong Ao committed
      * [AutoParallel] adapt gradient merge pass (#45915)
      
      * adapt gradient merge
      
      * fix op_role
      
      * fix strategy
      
      * [Auto Parallel] Gradient Fuse Allreduce (#45643)
      
      * bugfix (#45332)
      
      * dist embedding support lookup table v1
      
      * add unittest
      
      * customize wait_comm
      
      * group gradients
      
      * bugfix
      
      * update program
      
      * [Auto Parallel] Improve the APIs (#45776)
      
      * [Auto Parallel] Use c++ dist attr in the completion process
      
      * [Auto Parallel] Add minor changes
      
      * [Auto Parallel] Use c++ dist attr in the completion process
      
      * [Auto Parallel] Add minor changes
      
      * [Auto Parallel] Add the serialization process for dist attrs
      
      * [Auto Parallel] Remove unnecessary comments
      
      * [Auto Parallel] Fix some bugs
      
      * [Auto Parallel] Fix the code style
      
      * [Auto Parallel] Remove unnecessary impls
      
      * [Auto Parallel] Fix the importing error
      
      * [Auto Parallel] Fix the copy from bugs of op dist attr
      
      * [Auto Parallel] Replace the use of constexpr if
      
      * [Auto Parallel] Redesign the shard_tensor, shard_op and ProcessMesh
      
      * [Auto Parallel] Change API of the completion unittest
      
      * [Auto Parallel] Fix the bug when set_attr an int
      
      * [Auto Parallel] Add the unittest for the serialization
      
      * [Auto Parallel] Add some unit tests
      
      * [Auto Parallel] Unify the strategy
      
      * [Auto Parallel] Improve the engine api
      
      * [Auto Parallel] Reset the changes made to the framework
      
      * [Auto Parallel] Change the engine unittest
      
      * [Auto Parallel] Update API of the completion and partitioner
      
      * [Auto Parallel] Update unit tests using engine api
      
      * update shard annotation
      
      * [Auto Parallel] Remove the modifications of other modules
      
      * [Auto Parallel] Add docs for APIs
      
      * add new strategy
      
      * [Auto Parallel] Replace the logger
      
      * [Auto Parallel] Restore the test_program.py
      
      * [Auto Parallel] Change the import rules
      
      * [Auto Parallel] Add the examples for Engine
      
      * [Auto Parallel] Do some minor changes
      
      * [Auto Parallel] Remove yaml dependency
      
      * [Auto Parallel] Fix the unittests
      
      * add valid after train
      
      * bug fix
      Co-authored-by: zhaoyingli <zhaoyingli@baidu.com>
      Co-authored-by: caozhou <caozhou@radi.ac.cn>
      Co-authored-by: caozhou <48191911+Caozhou1995@users.noreply.github.com>
      
      * [Auto Parallel] Bugfix allreduce fuse for MP (#46086)
      
      * bugfix
      
      * bugfix
      
      * typos fixed
      
      * update strategy (#46138)
      Co-authored-by: zhaoyingli <86812880+zhaoyinglia@users.noreply.github.com>
      Co-authored-by: JZ-LIANG <jianzhongliang10@gmail.com>
      Co-authored-by: zhaoyingli <zhaoyingli@baidu.com>
      Co-authored-by: caozhou <caozhou@radi.ac.cn>
      Co-authored-by: caozhou <48191911+Caozhou1995@users.noreply.github.com>
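The gradient merge pass adapted above accumulates gradients over several micro-batches and applies one optimizer step on the merged result. A framework-agnostic sketch of the idea; the class name, dict-based gradients, and averaging convention are illustrative assumptions, not the actual pass, which rewrites the static graph:

```python
class GradientMerge:
    """Accumulate gradients for k micro-batches, then step once."""

    def __init__(self, k):
        self.k = k
        self.acc = {}      # parameter name -> accumulated gradient
        self.count = 0     # micro-batches seen since last step

    def backward(self, grads):
        # grads: parameter name -> gradient for this micro-batch.
        for name, g in grads.items():
            self.acc[name] = self.acc.get(name, 0.0) + g
        self.count += 1

    def maybe_step(self, apply_fn):
        # Apply the optimizer only on every k-th micro-batch,
        # passing it the averaged accumulated gradients.
        if self.count % self.k != 0:
            return False
        merged = {n: g / self.k for n, g in self.acc.items()}
        apply_fn(merged)
        self.acc.clear()
        return True
```

The same accumulate-then-step structure is why the pass must also tag op roles correctly (the "fix op_role" item above): the framework needs to know which ops belong to accumulation and which to the deferred optimizer update.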
    • Revert "Simplify size op impl (#45808)" (#46168) · dabb8f23
      Chen Weihang committed
      This reverts commit c252b1de.
    • rename fleetx, develop=document_fix (#46141) · 7a6db0a3
      ShenLiang committed
  4. 17 Sep 2022, 1 commit
    • V2.4 - cherry-pick (#46126) · a76fa414
      ziyoujiyi committed
      * back fl
      
      * delete ssl cert
      
      * .
      
      * make warning
      
      * .
      
      * unittest paral degree
      
      * solve unittest
      
      * heter & multi cloud comm ready
      
      * .
      
      * .
      
      * fix gloo compile warning
      
      * adapt for nn fl-ps
  5. 15 Sep 2022, 1 commit
  6. 09 Sep 2022, 3 commits
  7. 08 Sep 2022, 1 commit
  8. 07 Sep 2022, 2 commits
  9. 06 Sep 2022, 2 commits
  10. 05 Sep 2022, 1 commit
  11. 02 Sep 2022, 3 commits
  12. 01 Sep 2022, 3 commits
  13. 31 Aug 2022, 3 commits
  14. 29 Aug 2022, 1 commit
  15. 26 Aug 2022, 3 commits
  16. 25 Aug 2022, 2 commits
    • Fl-PS bug fix (#45413) · f2f3f6e7
      ziyoujiyi committed
      * back fl
      
      * delete ssl cert
      
      * .
      
      * make warning
      
      * .
      
      * unittest paral degree
      
      * solve unittest
      
      * heter & multi cloud comm ready
      
      * .
      
      * .
      
      * fl-ps v1.0
      
      * .
      
      * support N + N mode
      
      * .
      
      * .
      
      * .
      
      * .
      
      * delete print
      
      * .
      
      * .
      
      * .
      
      * .
      
      * fix bug
      
      * .
      
      * .
      
      * fl-ps with coordinator ready
      
      * merge dev
      
      * update message parse only
      
      * update fl client scheduler
      
      * fix bug
      
      * update multithreads sync
      
      * fix ci errors
      
      * update role_maker.py
      
      * update role_maker.py
      
      * fix ci error: windows py import error
      
      * fix ci error: windows py import error
      
      * fix windows ci pylib import error
      
      * add dump fields & params
      
      * try to fix windows import fleet error
      
      * fix ps FLAGS error
      
      * fix logging risk
      
      * fix logging possible risk
      
      * write trainer_desc file
      
      * support split sparse params in local & remote
      
      * fix import paddle.fluid.core.PSGPU
      
      * fix import paddle.fluid.core.PSGPU
      
      * add remote_sparse & local_sparse config
      
      * fix unittest
      
      * fix test_dist_fleet_geo table error
      
      * fix PADDLE_ENFORCE error
      
      * fix other's pr conflict
      
      * forbidden ssd table
      
      * .
      
      * recover ssd table code
      
      * recover file mode
    • [Auto Parallel] Support High Order Differential with Data Parallel Calc-Comm Overlapping (#45388) · bdd0b0f1
      JZ-LIANG committed
      * support high order differential with data parallel overlap
      
      * update unittest
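The calc-comm overlap above hides data-parallel allreduce latency behind the rest of the backward computation: each gradient bucket is handed to the communication stream as soon as it is ready. A toy sketch of that scheduling idea, with a background thread standing in for the comm stream; the bucket handoff, `allreduce` stand-in, and function names are illustrative assumptions (the real pass schedules collectives on separate CUDA streams):

```python
import threading
from queue import Queue

def allreduce(bucket):
    # Stand-in for a collective communication call on one bucket.
    return sum(bucket)

def comm_worker(q, results):
    # Drain gradient buckets and "communicate" them while the
    # main thread keeps computing the rest of the backward pass.
    while True:
        item = q.get()
        if item is None:        # sentinel: backward pass finished
            break
        idx, bucket = item
        results[idx] = allreduce(bucket)

def backward_with_overlap(buckets):
    q, results = Queue(), {}
    t = threading.Thread(target=comm_worker, args=(q, results))
    t.start()
    for idx, bucket in enumerate(buckets):
        # ... gradient computation for this bucket would happen here ...
        q.put((idx, bucket))    # hand off to comm as soon as ready
    q.put(None)
    t.join()
    return [results[i] for i in range(len(buckets))]
```

Supporting higher-order differentiation means the pass must also recognize gradients produced by the double-backward graph, not only first-order ones, before bucketing them for overlap.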
  17. 23 Aug 2022, 4 commits