1. 28 Nov 2022, 1 commit
    • 【fluid api clear】Remove reduce sum (#48330) · 8d00f76e
      Committed by xiaoguoguo626807
      * remove fluid.reduce_sum
      
      * remove fluid.reduce_sum
      
      * modify axis and import paddle
      
      * modify keepdim and out_name
      
      * modify unittest
      
      * modify unittest
      
      * modify CI_static and loss.py
      
      * modify test_mse_loss
      
      * modify static ci
      
      * modify static ci datatype
      
      * add import paddle in test
      
      * fix conflict
      
      * fix conflict
      
      * modify ci
      
      * modify ci
      
      * fix_conflict
      
      * fix bug
      
      * code_style
      8d00f76e
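The PR above removes `fluid.layers.reduce_sum` in favor of `paddle.sum`; the commit messages ("modify axis and import paddle", "modify keepdim and out_name") suggest the `dim`/`keep_dim` arguments were renamed to `axis`/`keepdim`. A minimal sketch of those semantics, using plain Python lists so it runs without Paddle installed (the argument mapping in the comments is an inference from the commits, not quoted from the PR):

```python
# Hedged sketch of the fluid.layers.reduce_sum -> paddle.sum migration:
#   old (removed):  fluid.layers.reduce_sum(x, dim=1, keep_dim=True)
#   new:            paddle.sum(x, axis=1, keepdim=True)
# Plain Python mimics the reduction semantics on a 2-D "tensor".

x = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0]]

# axis=1, keepdim=False -> one scalar per row
row_sums = [sum(row) for row in x]

# axis=1, keepdim=True -> the reduced axis survives with size 1
row_sums_keepdim = [[sum(row)] for row in x]

# axis=None -> reduce over all elements
total = sum(sum(row) for row in x)

print(row_sums, row_sums_keepdim, total)
```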
  2. 25 Nov 2022, 2 commits
  3. 21 Nov 2022, 1 commit
  4. 17 Nov 2022, 2 commits
  5. 14 Nov 2022, 1 commit
  6. 10 Nov 2022, 1 commit
  7. 08 Nov 2022, 1 commit
  8. 05 Nov 2022, 1 commit
  9. 04 Nov 2022, 1 commit
  10. 03 Nov 2022, 2 commits
  11. 02 Nov 2022, 2 commits
  12. 01 Nov 2022, 1 commit
  13. 23 Oct 2022, 1 commit
  14. 21 Oct 2022, 1 commit
  15. 20 Oct 2022, 3 commits
  16. 19 Oct 2022, 1 commit
  17. 17 Oct 2022, 2 commits
    • Add enable_partial_send_recv switch in pipeline_configs (#46992) · b9a2f29c
      Committed by Ghost Screaming
      * Fix a bug in the reduce_sum op: when input.numel() > INT32_MAX, its
      result is wrong.
      
      * Support the allow_partial switch, which can be configured in
      pipeline_configs. If the tensors sent from different hosts are not
      the same, they shouldn't be sent partially and then concatenated
      into a whole tensor.
      
      * Change name allow_partial to enable_partial_send_recv.
      
      * Add global variable _enable_partial_send_recv
      b9a2f29c
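A hedged sketch of where such a switch would be set, via `fleet.DistributedStrategy`. The surrounding setup and the other `pipeline_configs` keys shown here are assumptions for illustration, not taken from the PR:

```python
# Sketch only: assumes a working Paddle fleet environment.
import paddle.distributed.fleet as fleet

strategy = fleet.DistributedStrategy()
strategy.pipeline = True
strategy.pipeline_configs = {
    "micro_batch_size": 2,       # illustrative values
    "accumulate_steps": 4,
    # Switch from PR #46992: turn off partial send/recv when the tensors
    # sent from different hosts are not the same.
    "enable_partial_send_recv": False,
}
```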
    • Support BF16 training for sharding (#46846) · 0b39b244
      Committed by Ghost Screaming
      * Fix a bug in the reduce_sum op: when input.numel() > INT32_MAX, its
      result is wrong.
      
      * support pure bfloat16
      
      * support bf16 linear
      
      * update PR to pass CI
      
      * tiny fix where_grad_kernel.cu
      
      * Support bfloat16 type for reducer and sharding.
      
      * Fix some bug.
      
      * Polish code.
      
      * Polish code.
      
      * Add bfloat16 datatype in fill_grad kernels.
      Co-authored-by: sneaxiy <sneaxiy@126.com>
      0b39b244
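For context on the BF16 support above: bfloat16 is the upper 16 bits of an IEEE float32 (same sign bit and 8-bit exponent, but only 7 mantissa bits). This pure-Python round-trip is an illustration of that format, not Paddle's implementation, and shows the precision loss BF16 training accepts (overflow to infinity on rounding is not handled in this sketch):

```python
# Illustrative bfloat16 conversion: keep the top 16 bits of a float32,
# rounding to nearest-even on the 16 bits that are dropped.
import struct

def float_to_bf16_bits(x: float) -> int:
    """Return the bfloat16 bit pattern for a float32 value."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    # round-to-nearest-even on the dropped low 16 bits
    bits += 0x7FFF + ((bits >> 16) & 1)
    return (bits >> 16) & 0xFFFF

def bf16_bits_to_float(b: int) -> float:
    """Expand a bfloat16 bit pattern back to a Python float."""
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x

val = 1.2345678
bf = bf16_bits_to_float(float_to_bf16_bits(val))
print(val, "->", bf)  # only ~2-3 decimal digits survive
```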
  18. 13 Oct 2022, 1 commit
  19. 12 Oct 2022, 2 commits
  20. 09 Oct 2022, 1 commit
  21. 08 Oct 2022, 1 commit
  22. 28 Sep 2022, 2 commits
  23. 22 Sep 2022, 1 commit
  24. 21 Sep 2022, 1 commit
  25. 20 Sep 2022, 2 commits
  26. 19 Sep 2022, 1 commit
  27. 16 Sep 2022, 1 commit
    • refactor mp. (#45803) · fa97e5ba
      Committed by wuhuachaocoding
      * refactor mp.
      
      * update setup.py.
      
      * update mp_layers.py for compatibility.
      
      * add documents for mp_layers.py
      
      * update init.py
      
      * update collective.py.
      
      * update.
      
      * update mp_ops.py
      
      * update.
      
      * update code style.
      
      * update code style.
      fa97e5ba
  28. 14 Sep 2022, 1 commit
  29. 09 Sep 2022, 2 commits