1. 26 Oct 2021, 2 commits
    • [cherry-pick]Support FP16 in HybridParallel and Fix bugs in HybridOptimizer (#36707) · 5b357e02
      Committed by Haohongxiang
      * fix bugs in HybridParallelClipGrad of hybrid_parallel_optimizer (#36237)
      
      * fix bugs in HybridParallelClipGrad of hybrid_parallel_optimizer
      
      * update
      
      * update
      
      * fix bugs in mp_layers, pp_layers and HybridParallelClipGrad (#36144)
      
      * fix calling bug of HybridParallelClipGrad
      
      * fix bugs of HybridParallelClipGrad
      
      * add unittest of pp with HybridParallelClipGrad
      
      * fix bugs in mp_layers.py
      
      * update
      
      * fix bugs in pp_layers.py
      
      * update
      
      * [HybridParallel]Rebuild code for pipeline (#36396)
      
      * add no_sync for parameters sync
      
      * add pipeline for moe
      
      * [HybridParallel]Support fp16 in dygraph hybrid parallel (#36420)
      
      * [HybridParallel]Support fp16 in dygraph hybrid parallel
      
      * update
      
      * update
      
      * update for recompute
      
      * add unittest of pp+fp16
      
      * add unittest of recompute+fp16
      
      * update
      
      * modify ut
      
      * modify ut of cond (#36475)
      
      * fix bugs of ClipGradByGlobalNorm in HybridParallel (#36555)
      
      * fix bugs of ClipGradByGlobalNorm
      
      * add unittests
      
      * add unittests
      
      * [HybridParallel]fix bug of check_inf in fleet_base.py (#36651)
      
      * fix bug of check_inf
      
      * fix allreduce
      
      * support ClipGradByGlobalNorm in sharding (#36012)
      
      * support ClipGradByGlobalNorm in sharding
      
      * support ClipGradByGlobalNorm in sharding
      
      * test=allcase
      
      * Update test_linalg_cond.py
      
      * Update hybrid_parallel_util.py
      
      * Update hybrid_parallel_util.py
      Co-authored-by: ShenLiang <1422485404@qq.com>
      Co-authored-by: zhaoyingli <86812880+zhaoyinglia@users.noreply.github.com>
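      The clipping fixes in this entry all touch HybridParallelClipGrad, the wrapper that fleet substitutes for a user-supplied ClipGradByGlobalNorm once the optimizer is wrapped for hybrid parallelism. A minimal dygraph sketch of that setup (the Linear model, the 2x2 parallel degrees, and the hyperparameters are illustrative placeholders, not from the commits; the script must be started via paddle.distributed.launch with enough ranks):

      ```python
      import paddle
      from paddle.distributed import fleet

      # Hybrid-parallel setup; the dp/mp/pp degrees here are placeholders.
      strategy = fleet.DistributedStrategy()
      strategy.hybrid_configs = {"dp_degree": 2, "mp_degree": 2, "pp_degree": 1}
      fleet.init(is_collective=True, strategy=strategy)

      model = paddle.nn.Linear(8, 8)  # stand-in for a real network
      clip = paddle.nn.ClipGradByGlobalNorm(clip_norm=1.0)
      opt = paddle.optimizer.AdamW(learning_rate=1e-3,
                                   parameters=model.parameters(),
                                   grad_clip=clip)

      # Wrap for hybrid parallelism; inside the wrapped optimizer the
      # global-norm clip is carried out by HybridParallelClipGrad, the
      # class these commits fix.
      model = fleet.distributed_model(model)
      opt = fleet.distributed_optimizer(opt)

      loss = model(paddle.rand([4, 8])).mean()
      loss.backward()
      opt.step()
      opt.clear_grad()
      ```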
    • [Amp] refine code of amp level (#36362) (#36726) · 1ee4fc32
      Committed by Leo Chen
      * refine amp level
      
      * fix typo
      
      * update tracer._amp_level
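      The amp-level refactor corresponds to the level argument of paddle.amp.auto_cast: 'O1' runs white-listed ops in fp16 and leaves the rest in fp32, while 'O2' is pure fp16. A minimal O1 training sketch with loss scaling, using a toy layer as a placeholder:

      ```python
      import paddle

      model = paddle.nn.Linear(4, 4)
      opt = paddle.optimizer.SGD(learning_rate=0.1, parameters=model.parameters())
      scaler = paddle.amp.GradScaler(init_loss_scaling=1024)

      # O1: white-listed ops (e.g. matmul) run in fp16, the rest stay fp32.
      with paddle.amp.auto_cast(level='O1'):
          loss = model(paddle.rand([2, 4])).mean()

      scaled = scaler.scale(loss)   # scale the loss to avoid fp16 underflow
      scaled.backward()
      scaler.minimize(opt, scaled)  # unscale, check for inf/nan, then step
      opt.clear_grad()
      ```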
  2. 17 Sep 2021, 1 commit
    • [AMP] Support pure fp16 training mode for dygraph (#35521) · adaeee4d
      Committed by zhangbo9674
      * add pure fp16 major function in auto_cast & tracer
      
      * support master weight in dygraph for pure fp16
      
      * check mix dtype of fp16&fp32 for check_finite_and_unscale op
      
      * change pure fp16 function name
      
      * fix some bugs in auto_cast
      
      * refine auto_cast interface logic
      
      * add param _casted_by_pure_fp16 for class Layer
      
      * support a state_dict hook so models can be saved in a user-appointed dtype by pure_fp16_decorator
      
      * refine pure_fp16_decorator as decorator
      
      * add unittest
      
      * add comment
      
      * add comment
      
      * support recompute
      
      * add comment for auto_cast and decorator
      
      * support to_static_state_dict for paddle.jit.save
      
      * remove the limit on the number of models and optimizers
      
      * add lookup_table in black_list
      
      * fix momentum and layer state_dict
      
      * fix bug in layer state_dict
      
      * fix bug in layer state_dict_helper
      
      * refine unittest
      
      * refine test_momentum_op
      
      * refine interface and some code
      
      * refine amp_decorator interface
      
      * refine pure fp16 interface
      
      * refine master weight interface
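      Pure fp16 ('O2') training as added here revolves around the decorator, which casts layer parameters to fp16 while the optimizer keeps fp32 master weights; it is exposed publicly as paddle.amp.decorate. A minimal sketch, with the toy model and hyperparameters as placeholders:

      ```python
      import paddle

      model = paddle.nn.Linear(4, 4)
      opt = paddle.optimizer.Momentum(learning_rate=0.1,
                                      parameters=model.parameters())

      # Cast parameters to fp16; fp32 master weights live in the optimizer.
      model, opt = paddle.amp.decorate(models=model, optimizers=opt, level='O2')
      scaler = paddle.amp.GradScaler(init_loss_scaling=2.**16)

      with paddle.amp.auto_cast(level='O2'):  # pure fp16 compute
          loss = model(paddle.rand([2, 4])).mean()

      scaled = scaler.scale(loss)
      scaled.backward()
      scaler.minimize(opt, scaled)  # runs check_finite_and_unscale before stepping
      opt.clear_grad()
      ```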
  3. 10 Sep 2021, 1 commit
  4. 13 Aug 2021, 1 commit
  5. 12 Aug 2021, 1 commit
  6. 24 May 2021, 1 commit
  7. 06 May 2021, 1 commit
  8. 03 May 2021, 1 commit
  9. 25 Apr 2021, 1 commit