1. 25 Nov 2021 (1 commit)
  2. 23 Nov 2021 (2 commits)
    • [cherry-pick]Refactor Heterogeneous Pipeline Parameter Server (#37446) · 4dc426f4
      Committed by zmx
      * bug fix for DeserializeSelectedRows. test=develop (#36520)

      * fix SerializeSelectedRows (#36543)
        (follow-up fix/update iterations for both)

      * [Heterps]Refactor Heter Pipeline Parameter Server (#36845)
        (change username; update unittests; update send_and_recv op; add func;
        plus dozens of repeated "fix"/"update" iterations, test=develop and
        notest,test=coverage)

      * Fix unit test for send_and_recv_cpu & send_and_recv_gpu (#37129)

      * [heterps]fix ut for heter_pipeline_trainer.cc (#37136)
        (repeated ut fixes, test=develop)

      * [heterps]bug fix for local training with --heter_worker_num (#37166)
        (repeated fix / fix ut iterations, test=develop)

      * [heterps]Refactor heterogeneous worker (#37244)
        (refactor heter trainer; repeated fix / fix ut iterations, test=develop)

      * [heterps]add heterps mode judgement (#37298)

      * [heterps]change default executor for heter trainer (#37314)
        (fix pslib; add device to train_from_dataset; refine fleet.stop_worker;
        fix executor & ut)

      * [heterps]remove api for heter pipeline ps (#37396)
        (fix api iterations)

      * fix code style. test=release/2.2

      * fix CMakeLists. test=develop (#37454)
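
      For context, the server/worker flow this refactor targets looks roughly like the
      minimal sketch below. The fleet calls and the --heter_worker_num launch flag come
      from the PRs above; the network, the data feeding, and the exact launch flags are
      illustrative assumptions, not the PR's own code.

      ```python
      # Minimal parameter-server skeleton (sketch; launch flags and model are assumed):
      #   python -m paddle.distributed.launch --server_num 1 --worker_num 2 \
      #          --heter_worker_num 1 train.py
      import paddle
      import paddle.distributed.fleet as fleet

      paddle.enable_static()
      fleet.init()  # role is picked up from the launch environment

      x = paddle.static.data(name="x", shape=[-1, 13], dtype="float32")
      y = paddle.static.data(name="y", shape=[-1, 1], dtype="float32")
      pred = paddle.static.nn.fc(x, size=1)
      loss = paddle.mean(paddle.nn.functional.square_error_cost(pred, y))

      strategy = fleet.DistributedStrategy()
      strategy.a_sync = True  # async parameter-server mode
      opt = fleet.distributed_optimizer(paddle.optimizer.SGD(learning_rate=0.01), strategy)
      opt.minimize(loss)

      if fleet.is_server():
          fleet.init_server()
          fleet.run_server()
      else:
          exe = paddle.static.Executor(paddle.CPUPlace())
          exe.run(paddle.static.default_startup_program())
          fleet.init_worker()
          # ... feed data and run the main program (e.g. exe.train_from_dataset) ...
          fleet.stop_worker()  # behavior refined by #37314 above
      ```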
    • cherry pick save/load in the_one_ps (#37461) · 58a51130
      Committed by wangguanqun
      * save/load in ps runtime(the_one_ps) (#36097)
      
      * add trainer desc config to distributed strategy
      
      * code style modified
      
      * data_feed set lod
      
      * fix bug
      
      * code style
      
      * fix bug
      
      * save load (×2)
      
      * save unittest
      
      * add unittest of the_one_ps
      
      * unittest
      
      * add todo in communicator sendsparse
      
      * fix bug in save_inference_model (#37362)
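
      The save/load surface this lands is the fleet runtime one. A short sketch, reusing
      the executor and variables from the sketch above; the paths, the rank-0 guard, and
      the variable choices are placeholder assumptions:

      ```python
      import paddle.distributed.fleet as fleet

      if fleet.is_first_worker():
          # full checkpoint, including the sparse tables held by the servers
          fleet.save_persistables(exe, "./checkpoint")
          # pruned inference program + parameters (#37362 above fixed a bug here)
          fleet.save_inference_model(exe, "./inference_model",
                                     feeded_var_names=[x.name],
                                     target_vars=[pred])
      ```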
  3. 27 Oct 2021 (1 commit)
  4. 26 Oct 2021 (3 commits)
    • [cherry-pick]Support FP16 in HybridParallel and Fix bugs in HybridOptimizer (#36707) · 5b357e02
      Committed by Haohongxiang
      * fix bugs in HybridParallelClipGrad of hybrid_parallel_optimizer (#36237)
      
      * fix bugs in HybridParallelClipGrad of hybrid_parallel_optimizer
      
      * update
      
      * update
      
      * fix bugs in mp_layers, pp_layers and HybridParallelClipGrad (#36144)
      
      * fix calling bug of HybridParallelClipGrad
      
      * fix bugs of HybridParallelClipGrad
      
      * add unittest of pp with HybridParallelClipGrad
      
      * fix bugs in mp_layers.py
      
      * update
      
      * fix bugs in pp_layers.py
      
      * update
      
      * [HybridParallel]Rebuild code for pipeline (#36396)
      
      * add no_sync for parameters sync
      
      * add pipeline for moe
      
      * [HybridParallel]Support fp16 in dygraph hybrid parallel (#36420)
      
      * [HybridParallel]Support fp16 in dygraph hybrid parallel
      
      * update
      
      * update
      
      * update for recompute
      
      * add unittest of pp+fp16
      
      * add unittest of recompute+fp16
      
      * update
      
      * modify ut
      
      * modify ut of cond (#36475)
      
      * fix bugs of ClipGradByGlobalNorm in HybridParallel (#36555)
      
      * fix bugs of ClipGradByGlobalNorm
      
      * add unittests
      
      * add unittests
      
      * [HybridParallel]fix bug of check_inf in fleet_base.py (#36651)
      
      * fix bug of check_inf
      
      * fix allreduce
      
      * support ClipGradByGlobalNorm in sharding (#36012)
      
      * support ClipGradByGlobalNorm in sharding (×2)
      
      * test=allcase
      
      * Update test_linalg_cond.py
      
      * Update hybrid_parallel_util.py
      
      * Update hybrid_parallel_util.py
      Co-authored-by: ShenLiang <1422485404@qq.com>
      Co-authored-by: zhaoyingli <86812880+zhaoyinglia@users.noreply.github.com>
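
      Taken together, these PRs make a loop like the following work; a sketch assuming
      2 GPUs and a toy Linear model (the hybrid degrees, model and data are illustrative,
      and fleet.distributed_scaler reflects our reading of the fp16 wiring added around
      #36420):

      ```python
      # Sketch: dygraph data-parallel step with fp16 + global-norm clipping.
      # Launch: python -m paddle.distributed.launch --gpus "0,1" demo.py
      import paddle
      import paddle.distributed.fleet as fleet

      strategy = fleet.DistributedStrategy()
      # raising mp_degree/pp_degree is where HybridParallelClipGrad (#36237, #36555) kicks in
      strategy.hybrid_configs = {"dp_degree": 2, "mp_degree": 1, "pp_degree": 1}
      fleet.init(is_collective=True, strategy=strategy)

      model = paddle.nn.Linear(16, 16)
      clip = paddle.nn.ClipGradByGlobalNorm(clip_norm=1.0)
      opt = paddle.optimizer.AdamW(learning_rate=1e-4,
                                   parameters=model.parameters(), grad_clip=clip)

      model, opt = paddle.amp.decorate(models=model, optimizers=opt, level='O2')
      model = fleet.distributed_model(model)
      opt = fleet.distributed_optimizer(opt)
      scaler = fleet.distributed_scaler(paddle.amp.GradScaler(init_loss_scaling=2.**16))

      x = paddle.randn([8, 16], dtype='float32')
      with paddle.amp.auto_cast(enable=True, level='O2'):
          loss = model(x).mean()
      scaled = scaler.scale(loss)
      scaled.backward()
      scaler.minimize(opt, scaled)  # the check_inf fix from #36651 lives on this path
      opt.clear_grad()
      ```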
    • [Amp] refine code of amp level (#36362) (#36726) · 1ee4fc32
      Committed by Leo Chen
      * refine amp level
      
      * fix typo
      
      * update tracer._amp_level
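
      The two levels being routed through tracer._amp_level are the public O1/O2 modes;
      a minimal sketch of the user-visible difference (toy model, CUDA place assumed):

      ```python
      import paddle

      model = paddle.nn.Linear(4, 4)
      x = paddle.randn([2, 4])

      # O1: ops on the white list run in fp16, the rest stay fp32
      with paddle.amp.auto_cast(level='O1'):
          out = model(x)

      # O2 ("pure fp16"): the parameters themselves are cast; decorate() keeps
      # fp32 master weights on the optimizer side
      model = paddle.amp.decorate(models=model, level='O2')
      with paddle.amp.auto_cast(level='O2'):
          out = model(x)
      ```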
    • [cherry-pick] Support CPU Parallel in DataParallel Interface by GLOO to speed up training (#35745) (#36605) · beb920cd
      Committed by xiongkun
      
      * User specified backend (#35745)
      
      * remove tensordot
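
      A sketch of what the GLOO backend enables: multi-process, CPU-only DataParallel.
      The backend='gloo' option to spawn is our reading of the user-specified-backend
      PR (#35745); treat the exact option name as an assumption and check your version.

      ```python
      import paddle
      import paddle.distributed as dist

      def train():
          dist.init_parallel_env()
          model = paddle.DataParallel(paddle.nn.Linear(8, 8))
          loss = model(paddle.randn([4, 8])).mean()
          loss.backward()  # gradients all-reduced over GLOO, no GPU required

      if __name__ == "__main__":
          dist.spawn(train, nprocs=2, backend='gloo')
      ```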
  5. 25 Oct 2021 (1 commit)
    • [cherry-pick 2.2] static model parallel dropout support deterministic RandomSeedGenerator (#36682) · 59615fff
      Committed by WangXi
      * Revert "Add fused_dropout wrapper to ease use. (#36185) (#36640)"
      
      This reverts commit 05d7e2fd.
      
      * [hybrid] seed and dropout op support force-cpu (#35820)
      
      * [HIP] fix op not support AMD GPU bug, the flag PADDLE_WITH_ROCM is invalid (×2)

      * [HIP] fix op not support AMD GPU bug

      * [hybrid] seed and dropout op support force-cpu (×5)
      
      * [hybrid] fix seed ci failed issue
      
      * add AsExtra for force_cpu of seed op
      
      * Add fused_dropout wrapper to ease use. (#36185)
      
      * [hybrid] static model parallel dropout support deterministic RandomSeedGenerator (#36228)
      Co-authored-by: xiayanming <41795079@qq.com>
      Co-authored-by: Li Min <11663212+limin2021@users.noreply.github.com>
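
      The point of the RandomSeedGenerator work is that dropout in static model parallel
      draws from a named, deterministic seed stream, so all ranks agree on the mask. As a
      stand-in, the basic determinism knob looks like this (the named-generator API itself
      is internal plumbing, so this sketch only shows program-level seeding):

      ```python
      import paddle
      paddle.enable_static()

      main = paddle.static.Program()
      startup = paddle.static.Program()
      main.random_seed = 2021     # seeded ops (e.g. dropout) derive from this
      startup.random_seed = 2021

      with paddle.static.program_guard(main, startup):
          x = paddle.static.data(name="x", shape=[4, 8], dtype="float32")
          y = paddle.nn.functional.dropout(x, p=0.5)  # same mask on every run
      ```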
  6. 30 Sep 2021 (1 commit)
  7. 26 Sep 2021 (1 commit)
  8. 22 Sep 2021 (1 commit)
  9. 17 Sep 2021 (3 commits)
    • [AMP] Support pure fp16 training mode for dygraph (#35521) · adaeee4d
      Committed by zhangbo9674
      * add pure fp16 major function in auto_cast & tracer
      
      * support master weight in dygraph for pure fp16
      
      * check mix dtype of fp16&fp32 for check_finite_and_unscale op
      
      * change pure fp16 function name
      
      * refine some bugs in auto_cast
      
      * refine auto_cast interface logic
      
      * add param _casted_by_pure_fp16 for class Layer
      
      * support state_dict hook for saving the model in a user-appointed dtype in pure_fp16_decorator
      
      * refine pure_fp16_decorator as decorator
      
      * add unittest
      
      * add comment
      
      * add comment
      
      * support recompute
      
      * add comment for auto_cast and decorator
      
      * support to_static_state_dict for paddle.jit.save
      
      * remove limit on models num and optimizers num
      
      * add lookup_table in black_list
      
      * fix momentum and layer state_dict
      
      * fix bug in layer state_dict
      
      * fix bug in layer state_dict_helper
      
      * refine unittest
      
      * refine test_momentum_op
      
      * refine interface and some code
      
      * refine amp_decorator interface
      
      * refine pure fp16 interface
      
      * refine master weight interface
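
      The commits above assemble into the pure-fp16 (O2) dygraph loop below; a sketch with
      a toy model, assuming a CUDA place (multi_precision is the master-weight path touched
      by the momentum fixes):

      ```python
      import paddle

      model = paddle.nn.Linear(16, 16)
      opt = paddle.optimizer.Momentum(learning_rate=0.1,
                                      parameters=model.parameters(),
                                      multi_precision=True)  # fp32 master weights
      # decorate casts parameters to fp16 and hooks state_dict for saving
      model, opt = paddle.amp.decorate(models=model, optimizers=opt, level='O2')
      scaler = paddle.amp.GradScaler(init_loss_scaling=2.**15)

      for _ in range(3):
          x = paddle.randn([8, 16], dtype='float32')
          with paddle.amp.auto_cast(level='O2'):
              loss = model(x).mean()
          scaled = scaler.scale(loss)
          scaled.backward()
          scaler.minimize(opt, scaled)  # check_finite_and_unscale + param update
          opt.clear_grad()
      ```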
    • test=document_fix (#35824) · 177bf52f
      Committed by Guoxia Wang
    • add launch doc (#35634) · 5548061b
      Committed by Guoxia Wang
      * add launch doc
  10. 16 Sep 2021 (2 commits)
  11. 15 Sep 2021 (2 commits)
  12. 14 Sep 2021 (2 commits)
  13. 13 Sep 2021 (2 commits)
  14. 10 Sep 2021 (2 commits)
  15. 08 Sep 2021 (2 commits)
    • [Auto Parallel] Integrate all modules (#35483) · 12155358
      Committed by Yulong Ao
      * add auto_parallel dir
      
      * mv to paddle.distributed
      
      * add shard_xx api
      
      * add distributed attrs for var
      
      * add ut, test=develop
      
      * add dist
      
      * update (×21 iterative commits, some test=develop)
      
      * delete unused proto
      
      * restore op_desc
      
      * restore type_defs
      
      * update var_desc
      
      * remove dims_mapping for proto_pybind
      
      * update interface.py
      
      * update framework.py
      
      * update (×2)
      
      * [WIP] Add the auto completion feature and related codes
      
      * [WIP] Improve the auto completion and related codes
      
      * [WIP] Make the auto completion to support data-parallel
      
      * [WIP] Make the completion support mp and dp+mp
      
      * [WIP] Refactor auto completion unit test for MLP
      
      * [WIP] Refactor the implementation of DistributedOperatorImpl
      
      * [WIP] Improve dims_mapping update rule and fix a bug
      
      * [WIP] Support auto completion for one transformer decoder layer
      
      * [WIP] Add a minor change
      
      * [WIP] Fix a bug within the unit test
      
      * Shard XShape tensor, add embedding completion and refactor code
      
      * Add the distributed_operators dir to setup.py.in
      
      * Improve the completion process and add the unittest for gpt
      
      * fix process_mesh ut (×2)
      
      * update
      
      * update, test=develop
      
      * Add support for automatically completing distributed attrs of special ops
      
      * update (×3)
      
      * fix doc sample codes, test=develop
      
      * improve coverage, test=develop
      
      * add static_mode check, test=develop
      
      * Model the cluster for cost model and physical mapping
      
      * update, test=develop
      
      * add set_placement, test=develop
      
      * Add the check to make sure the candidate tensors' size is greater than zero
      
      * update doc, test=develop (×4)
      
      * update, test=develop
      
      * Auto mark dist attrs annotated by user
      
      * update ndarray to nested list, test=develop
      
      * update, test=develop
      
      * Add auto-completion module for auto-parallel (based on PR#33804)
      
      * Remove unnecessary files
      
      * Remove unrelated files for the auto completion pr
      
      * Update the unit test to improve the coverage
      
      * Modify codes based on reviews
      
      * Minor changes for CI
      
      * Improve some codes based on new comments
      
      * Fix bugs caused by shallow copy in attributes.py
      * Improve amend_distributed_attr_for_program in context.py
      * Other changes for weihang's comments
      
      * support shard reader (×2)
      
      * add parallel mode
      
      * update process mesh
      
      * add method to compute comm_group
      
      * implement dist_embedding forward func
      
      * implement dist matmul forward func
      
      * implement dist reshape forward func
      
      * add transpiler framework
      
      * add transpiler forward
      
      * implement transpiler forward
      
      * implement transpiler backward & update
      
      * add process
      
      * add unittest

      * chmod (×3)

      * update unittest

      * add unittest for gpt
      
      * remove unused print
      
      * rename transpiler --> partitioner (×2)

      * chmod (×2)
      
      * bug fixed
      
      * remove amp function
      
      * update case for dp mode (×2)
      
      * [Auto Parallel] Integrate all parts with the newest code
      
      * Integrate all parts of auto parallel and improve codes
      
      * Integrate all parts by AutoParallelizer
      * Add unit test for AutoParallelizer
      * Improve auto completion module for pipeline parallel
      * Add support for matmul_v2 in dist_matmul
      * Correct the typo "stratergy" to "strategy"
      
      * Modify distributed_strategy.proto to conform the main stream
      
      * Restore parts of distributed_strategy to conform the develop branch
      Co-authored-by: sandyhouse <lilong12@baidu.com>
      Co-authored-by: JZ-LIANG <jianzhongliang10@gmail.com>
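
      The user-facing piece of this series is the shard annotation API. A heavily hedged
      sketch: the 2.2-era signature churned across these PRs, so treat the argument layout
      of shard_tensor below as an assumption:

      ```python
      import paddle
      import paddle.distributed as dist
      paddle.enable_static()

      mesh = dist.ProcessMesh([[0, 1], [2, 3]])  # 2x2 mesh of ranks
      x = paddle.static.data(name="x", shape=[8, 1024], dtype="float32")
      # shard dim 0 of x along mesh axis 0; -1 means "not sharded"
      dist.shard_tensor(x, mesh, [0, -1])
      # the partitioner then splits the program per rank at parallelization time
      ```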
    • Enable program passes on Fleet APIs (#34955) · 5f369881
      Committed by Zeng Jinle
      * add fleet api for program pass
      
      * turn on apply pass for CI test
      
      * fix disable fuse_all_optimizer bug
      
      * try to test ci
      
      * fix CI
      
      * fill unspecified op role
      
      * fix fuse_allreduce
      
      * add ut to improve coverage
      
      * remove useless change
      
      * improve c++ coverage
      
      * follow some comments
      
      * test ir pass pipeline
      
      * update doc
      
      * reduce ut time again
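
      In user code the passes are switched on through DistributedStrategy before
      fleet.distributed_optimizer; a sketch (flag names follow distributed_strategy.proto,
      and which passes are available varies by version):

      ```python
      import paddle
      import paddle.distributed.fleet as fleet

      paddle.enable_static()
      fleet.init(is_collective=True)

      strategy = fleet.DistributedStrategy()
      strategy.fuse_all_reduce_ops = True           # allreduce-fusion pass
      strategy.fuse_grad_size_in_MB = 32
      strategy.gradient_merge = True                # gradient-merge pass
      strategy.gradient_merge_configs = {"k_steps": 4, "avg": True}

      opt = fleet.distributed_optimizer(paddle.optimizer.SGD(learning_rate=0.01),
                                        strategy)
      # opt.minimize(loss) then rewrites the program with the enabled passes
      ```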
  16. 01 Sep 2021 (2 commits)
  17. 25 Aug 2021 (1 commit)
  18. 20 Aug 2021 (1 commit)
  19. 18 Aug 2021 (2 commits)
  20. 17 Aug 2021 (1 commit)
  21. 13 Aug 2021 (1 commit)
  22. 12 Aug 2021 (1 commit)
  23. 11 Aug 2021 (3 commits)
  24. 10 Aug 2021 (2 commits)