1. 23 Nov 2022: 1 commit
  2. 21 Nov 2022: 4 commits
  3. 19 Nov 2022: 1 commit
  4. 18 Nov 2022: 3 commits
    • correct sync behavior for XPU distributed training (#47882) · aafa9820
      Committed by james
      * correct sync behavior for XPU distributed training
      
      XPU supports an event mechanism similar to CUDA events, so it is
      advisable to use an event to sync compute/comm streams for performance.
      However, this mechanism was never fully tested, and inconsistent
      loss/ending-epoch values were reported. Therefore, this PR replaces
      event sync with stream waiting as a temporary solution.
      
      * remove compile warning
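The trade-off the commit describes can be sketched with a toy model. All class and method names below are hypothetical stand-ins, not the Paddle XPU API: the point is that an event only orders against work queued *before* it was recorded, while a stream wait conservatively orders against everything queued so far, which is why it is the safer fallback.

```python
class Event:
    """Hypothetical marker recorded at a point in a stream's queue."""
    def __init__(self):
        self.recorded_at = None  # (stream name, queue position)

class Stream:
    """Hypothetical in-order work queue standing in for an XPU stream."""
    def __init__(self, name):
        self.name = name
        self.queue = []

    def launch(self, op):
        self.queue.append(op)

    def record_event(self, event):
        event.recorded_at = (self.name, len(self.queue))

    def wait_event(self, event):
        # Fine-grained: only orders against work queued before the record.
        stream, pos = event.recorded_at
        self.queue.append(f"wait({stream}[:{pos}])")

    def wait_stream(self, other):
        # Coarse but simple: orders against all work queued on `other`.
        self.queue.append(f"wait({other.name}[:{len(other.queue)}])")

compute, comm = Stream("compute"), Stream("comm")
compute.launch("matmul")
ev = Event()
compute.record_event(ev)
compute.launch("relu")          # queued after the event: not covered by it

comm.wait_event(ev)             # event sync: waits only for "matmul"
print(comm.queue[-1])           # wait(compute[:1])
comm.wait_stream(compute)       # stream wait: also covers "relu"
print(comm.queue[-1])           # wait(compute[:2])
```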
    • fix device id issue for xpu eager mode (#48076) · 3b18d96b
      Committed by james
      * fix device id issue for xpu eager
      
      The xpu device id is not correctly set in eager mode, so vars land on
      dev0 unless XPUDeviceGuard is called, leading to this error message on
      every node with rank != 0:
      "NotImplementedError: (Unimplemented) Place Place(xpu:0) is not supported."
      
      * fix typo
      
      * fix pybind error
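The bug pattern here is a missing scoped device switch: every allocation falls through to device 0 unless a guard sets the current device first. A minimal Python sketch of that RAII-style guard (the function name, `alloc_tensor`, and the module-level `_current_device` are illustrative stand-ins, not Paddle internals):

```python
import contextlib

_current_device = 0  # illustrative process-wide "current XPU" id

@contextlib.contextmanager
def device_guard(dev_id):
    """Hypothetical scoped guard mirroring the C++ XPUDeviceGuard."""
    global _current_device
    prev = _current_device
    _current_device = dev_id
    try:
        yield
    finally:
        _current_device = prev  # always restore, even on error

def alloc_tensor():
    # Stand-in for an allocation that lands on the current device.
    return {"device": f"xpu:{_current_device}"}

print(alloc_tensor()["device"])        # xpu:0  (the bug: no guard, wrong rank)
with device_guard(3):
    print(alloc_tensor()["device"])    # xpu:3  (correct placement)
print(alloc_tensor()["device"])        # xpu:0  (restored on exit)
```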
  5. 17 Nov 2022: 1 commit
  6. 16 Nov 2022: 1 commit
  7. 14 Nov 2022: 3 commits
  8. 10 Nov 2022: 2 commits
    • XPU multi-card support eager mode (#47445) · 3b91f8f3
      Committed by james
      * XPU support eager mode
      
      * add unittest for XPU eager mode
      
      * minor bugfix
      
      * minor bugfix, test=kunlun
      
      * correct copyright info
      
      * 1. remove unused vars/funcs
      2. ProcessGroupBKCL inherits from ProcessGroupStream
      
      * bugfix for fp16 in eager mode multi-card, test=kunlun
      
      * rebase & fix a few issues
      
      * use new processgroup interface, test=kunlun
      
      * fix compile issue, test=kunlun
    • Refactor collective communication P2P C++ API (#47801) · d926c270
      Committed by Wen Sun
      * refactor: send, recv, send_partial, recv_partial
      
      * refactor: rm useless const ref
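The `*_partial` variants named in this refactor transfer one contiguous shard of a flattened tensor rather than the whole buffer. A sketch of the shard arithmetic involved (the function name and the offset convention are assumptions for illustration, not the refactored C++ API):

```python
def partial_shard(flat, nranks, rank_id):
    """Return the contiguous slice a given rank would send or receive.

    Assumes numel divides evenly by nranks, as a partial transfer
    typically requires; name and convention are illustrative only.
    """
    numel = len(flat)
    assert numel % nranks == 0, "partial ops need an even split"
    per_rank = numel // nranks
    offset = per_rank * rank_id
    return flat[offset:offset + per_rank]

data = list(range(8))
print(partial_shard(data, nranks=4, rank_id=2))  # [4, 5]
```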
  9. 09 Nov 2022: 1 commit
  10. 08 Nov 2022: 1 commit
  11. 07 Nov 2022: 1 commit
  12. 04 Nov 2022: 2 commits
  13. 01 Nov 2022: 1 commit
  14. 31 Oct 2022: 1 commit
  15. 28 Oct 2022: 2 commits
  16. 17 Oct 2022: 1 commit
    • Support BF16 training for sharding (#46846) · 0b39b244
      Committed by Ghost Screaming
      * Fix a bug in the reduce_sum op: when input.numel() > INT32_MAX, its
      result is wrong.
      
      * support pure bfloat16
      
      * support bf16 linear
      
      * update PR to pass CI
      
      * tiny fix where_grad_kernel.cu
      
      * Support bfloat16 type for reducer and sharding.
      
      * Fix some bugs.
      
      * Polish code.
      
      * Polish code.
      
      * Add bfloat16 datatype in fill_grad kernels.
      Co-authored-by: sneaxiy <sneaxiy@126.com>
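Pure-bf16 training works because bfloat16 keeps float32's 8-bit exponent (so the dynamic range of gradients survives) while dropping the low 16 mantissa bits. A quick sketch of that precision loss, using simple bit truncation (real conversions round to nearest even rather than truncate, so this is a simplification):

```python
import struct

def bf16_truncate(x):
    """Zero out the low 16 bits of a float32: a simplified bf16 cast."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

print(bf16_truncate(1.0))         # 1.0 (exactly representable)
print(bf16_truncate(3.14159265))  # 3.140625 (only ~3 decimal digits kept)
```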
  17. 11 Oct 2022: 2 commits
  18. 10 Oct 2022: 1 commit
  19. 08 Oct 2022: 1 commit
  20. 30 Sep 2022: 1 commit
  21. 29 Sep 2022: 1 commit
  22. 21 Sep 2022: 1 commit
  23. 16 Sep 2022: 1 commit
  24. 07 Sep 2022: 1 commit
  25. 06 Sep 2022: 1 commit
  26. 01 Sep 2022: 1 commit
  27. 31 Aug 2022: 1 commit
  28. 26 Aug 2022: 1 commit
  29. 25 Aug 2022: 1 commit