1. 14 October 2021, 2 commits
  2. 13 October 2021, 1 commit
    • Merge lars op (#35476) · 0c31579c
      Committed by limingshu
      * First attempt at cudaLaunchCooperativeKernel
      
      * fix bugs
      
      * Totally replace the lars cuda kernel
      
      * Fix bugs
      
      * a test for lars merge
      
      * Adding lars_op_momentum infer_shape
      
      * Fix codes
      
      * use avg_numel instead of max_numel to acquire grid num
      
      * modify unittest files about lars op
      
      * Finally converges when merged-lars works
      
      * fix ctest files
      
      * add merged_operation kernel when cuda version is older than 11
      
      * Fix code style
      
      * fix ctest failure
      
      * fix error
      
      * fix all ctest error and change lars compute code of cpu
      
      * fix bugs on v100.
      
      * revert python modification about lars
      
      * revert python modification codes
      0c31579c
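The commits above merge many per-parameter LARS updates into one cooperative kernel launch. As a plain-Python sketch of the LARS momentum rule itself (hypothetical helper name and default coefficients, not Paddle's actual kernel):

```python
import math

def lars_momentum_step(param, grad, velocity, lr,
                       mu=0.9, lars_coeff=0.001, weight_decay=0.0005):
    """One LARS momentum update for a single parameter (illustrative sketch).

    local_lr = lr * lars_coeff * ||w|| / (||g|| + weight_decay * ||w||)
    v        = mu * v + local_lr * (g + weight_decay * w)
    w        = w - v
    """
    p_norm = math.sqrt(sum(x * x for x in param))
    g_norm = math.sqrt(sum(x * x for x in grad))
    local_lr = lr * lars_coeff * p_norm / (g_norm + weight_decay * p_norm + 1e-12)
    new_v = [mu * v + local_lr * (g + weight_decay * p)
             for v, g, p in zip(velocity, grad, param)]
    new_p = [p - v for p, v in zip(param, new_v)]
    return new_p, new_v
```

The merged op's point is to apply this same per-parameter update to many parameters in a single kernel launch (via cudaLaunchCooperativeKernel on CUDA 11+, with a fallback kernel for older CUDA versions, per the commit messages).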
  3. 12 October 2021, 2 commits
  4. 21 September 2021, 1 commit
    • Reuse OneDNN handler for SGD and SUM for SelectedRows input tensors. (#35510) · 799f3861
      Committed by Adam Osewski
      * Create stateful OneDNNAXPYHandler object.
      
      This makes it possible to call it multiple times without recreating the
      oneDNN primitives every time.
      
      * Prepare SGDOpKernel to reuse its implementation from OneDNN kernel.
      
      * OneDNN SGD kernel.
      
      * Update call to use new OneDNNAXPYHandler object api.
      
      * Setup seed in proper place.
      
      * Enable OneDNN kernel only for single case.
      
      * For dense param and sparse grad.
      
      * Small refactor.
      
      * Enable oneDNN by op attr or by cmd line flag.
      
      * Use int64_t type for number of elements.
      
      * Support dense param and grad from OneDNN kernel.
      
      * Enable SGD OneDNN kernel when use MP BF16 optimizer.
      
      * Force non-copyable/movable OneDNNAXPYHandler.
      
      * Reuse OneDNNAXPYHandler for sparse tensors in SUM op.
      
      * Fix SFINAE rules.
      
      * Remove recording event inside AXPY.
      
      * Get rid of internal primitive caching.
      
      * Stop using the PP cache mechanism to store memory and primitive objects.
      * Handler object stores and reuses the needed descriptors & primitives.
      
      * Do not derive from MKLDNNHandlerT
      799f3861
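The handler above wraps an AXPY primitive (y ← a·x + y). A minimal sketch of how repeated AXPY calls accumulate SelectedRows-style sparse inputs in a SUM op (function names and data layout are illustrative assumptions, not the oneDNN API):

```python
def axpy(a, x, y):
    """y <- a * x + y, the BLAS-style operation the handler wraps (sketch)."""
    for i in range(len(y)):
        y[i] += a * x[i]
    return y

def sum_selected_rows(rows_list, values_list, width):
    """Accumulate several SelectedRows-style inputs into a dense row table.

    Each input is a (row indices, row values) pair; rows referenced by
    multiple inputs are accumulated via repeated AXPY.
    """
    dense = {}
    for rows, values in zip(rows_list, values_list):
        for r, row_vals in zip(rows, values):
            dest = dense.setdefault(r, [0.0] * width)
            axpy(1.0, row_vals, dest)
    return dense
```

The stated point of the stateful OneDNNAXPYHandler is that one handler object is reused across all of these inner AXPY calls, instead of recreating oneDNN primitives for every row.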
  5. 18 September 2021, 1 commit
  6. 17 September 2021, 2 commits
    • [AMP] Support pure fp16 training mode for dygraph (#35521) · adaeee4d
      Committed by zhangbo9674
      * add pure fp16 major function in auto_cast & tracer
      
      * support master weight in dygraph for pure fp16
      
      * check mix dtype of fp16&fp32 for check_finite_and_unscale op
      
      * change pure fp16 function name
      
      * refine some bug in auto_cast
      
      * refine auto_cast interface logic
      
      * add param _casted_by_pure_fp16 for class Layer
      
      * support state_dict hook for save model by user appointed dtype in pure_fp16_decorator
      
      * refine pure_fp16_decorator as decorator
      
      * add unittest
      
      * add comment
      
      * add comment
      
      * support recompute
      
      * add comment for auto_cast and decorator
      
      * support to_static_state_dict for paddle.jit.save
      
      * remove limit on the number of models and optimizers
      
      * add lookup_table in black_list
      
      * fix momentum and layer state_dict
      
      * fix bug in layer state_dict
      
      * fix bug in layer state_dict_helper
      
      * refine unittest
      
      * refine test_momentum_op
      
      * refine interface and some code
      
      * refine amp_decorator interface
      
      * refine pure fp16 interface
      
      * refine master weight interface
      adaeee4d
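The pure-fp16 commits above keep an fp32 "master weight" per parameter. A minimal sketch of why, assuming a plain SGD update (the `to_fp16` helper simulates half-precision storage via `struct`; all names here are hypothetical, not Paddle's API):

```python
import struct

def to_fp16(x):
    """Round a Python float to IEEE-754 half precision (simulated fp16 storage)."""
    return struct.unpack('e', struct.pack('e', x))[0]

def sgd_step_master_weight(master_w, grad_fp16, lr):
    """One SGD step in pure-fp16 training with a master weight (sketch).

    Forward/backward use fp16 values, but the optimizer update is applied
    to an fp32 master copy, so small updates are not lost to fp16 rounding.
    Returns (updated fp32 master weight, fp16 copy for the next forward pass).
    """
    master_w = master_w - lr * grad_fp16   # accumulate in fp32
    return master_w, to_fp16(master_w)
```

Near 1.0 the fp16 spacing is about 5e-4, so an update of ~1e-4 would vanish entirely if applied directly to an fp16 weight; the fp32 master copy retains it across steps.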
    • Support EMA in Paddle2.x and Fleet (#35673) · fb4d5689
      Committed by Haohongxiang
      * Support EMA in Paddle2.x and Fleet
      
      * update
      
      * update
      
      * update
      
      * modify ut of ema
      
      * modify docs
      
      * fix bugs
      
      * update
      
      * update
      
      * update
      
      * modify ut
      fb4d5689
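EMA here means tracking an exponential moving average of model parameters for evaluation. A minimal sketch of the update rule (a hypothetical class, not Paddle's actual ExponentialMovingAverage API):

```python
class EMA:
    """Exponential moving average of named parameters (illustrative sketch)."""

    def __init__(self, decay=0.999):
        self.decay = decay
        self.shadow = {}  # name -> averaged value

    def update(self, params):
        """shadow = decay * shadow + (1 - decay) * param, per parameter."""
        for name, value in params.items():
            if name not in self.shadow:
                self.shadow[name] = value
            else:
                d = self.decay
                self.shadow[name] = d * self.shadow[name] + (1 - d) * value
```

At evaluation time the shadow values are swapped in for the raw parameters; supporting this in Fleet means each worker maintains the same shadow copies for its parameters.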
  7. 16 September 2021, 2 commits
  8. 15 September 2021, 1 commit
  9. 10 September 2021, 1 commit
  10. 08 September 2021, 3 commits
  11. 02 September 2021, 1 commit
  12. 27 August 2021, 1 commit
  13. 23 August 2021, 1 commit
  14. 20 August 2021, 1 commit
  15. 17 August 2021, 1 commit
  16. 14 August 2021, 1 commit
  17. 11 August 2021, 1 commit
  18. 09 August 2021, 1 commit
  19. 28 July 2021, 2 commits
  20. 27 July 2021, 1 commit
  21. 20 July 2021, 2 commits
  22. 19 July 2021, 3 commits
  23. 16 July 2021, 1 commit
  24. 15 July 2021, 1 commit
  25. 13 July 2021, 1 commit
  26. 12 July 2021, 1 commit
  27. 08 July 2021, 2 commits
  28. 21 June 2021, 2 commits