1. 18 Nov 2022, 1 commit
  2. 16 Nov 2022, 1 commit
  3. 12 Aug 2022, 1 commit
  4. 01 Aug 2022, 1 commit
    •
      Release swap buffers for persisted params (#2089) · 2210ebe7
      Committed by Olatunji Ruwase
      * Split parameter offload from z3
      
      * Format fixes
      
      * Bug fixes
      
      * Cleanup
      
      * Remove dead code
      
      * Release swap buffers for persisted params
      
      * Format fixes
      
      * Format fixes
      
      * Pass args correctly
      
      * Use pinned memory for nvme offload
      
      * Merge with master
      
      * Fix missing import
      
      * model persistence params
      
      * Fix merge issues
      
      * Handle None device
      
      * Use log_dist
      2210ebe7
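The "Release swap buffers for persisted params" change above can be illustrated with a minimal buffer-pool sketch in plain Python. `SwapBufferPool`, `acquire`, and `release` are illustrative names, not DeepSpeed's actual API; the point is only that a parameter kept persistently in device memory no longer needs its NVMe swap buffer, which can then be returned to the pool.

```python
# Minimal sketch of a swap-buffer pool that returns a buffer to the free
# list once its parameter is marked "persisted" (kept in device memory and
# therefore no longer swapped to NVMe). Names are illustrative.

class SwapBufferPool:
    def __init__(self, num_buffers):
        self.free = list(range(num_buffers))   # indices of available buffers
        self.in_use = {}                       # param_id -> buffer index

    def acquire(self, param_id):
        buf = self.free.pop()
        self.in_use[param_id] = buf
        return buf

    def release(self, param_id):
        # Called when a param becomes persisted: its swap buffer is
        # no longer needed and can be reused by other params.
        self.free.append(self.in_use.pop(param_id))

pool = SwapBufferPool(num_buffers=2)
pool.acquire(param_id=0)
pool.acquire(param_id=1)
pool.release(param_id=0)        # param 0 persisted: its buffer is freed
buf = pool.acquire(param_id=2)  # param 2 reuses the freed buffer
```

Without the release step, a pool sized for the working set would run out of buffers as soon as some parameters became persistent.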
  5. 27 Jul 2022, 1 commit
    •
      Refactor ZeRO configs to use Pydantic (#2004) · 59975896
      Committed by Michael Wyatt
      * first pass at pydanticifying Zero Configs
      
      * added pydantic to reqs
      
      * fixed bug with deprecated values not being type-checked
      
      * fixing zero config bugs from unit tests
      
      * fixed access of Config values
      
      * removing zero constants
      
      * formatting/fix broken import
      
      * fixed bad merge
      
      * fixed issue with missing aliased field
      
      * fix for failing tests
      
      * fix how deprecated fields are processed
      
      * only process dep params when they are set
      
      * fix mistyped field name
      
      * fixes, docs, removed more constants
      
      * fix merge
      
      * more fixes after merge w master
      
      * added unit tests
      
      * formatting
      
      * added fix for transformers unit tests
      
      * separated offload config from zero config
      
      * fixed bad import
      
      * formatting and flake fixes
      
      * implement suggestion from review
      Co-authored-by: Olatunji Ruwase <olruwase@microsoft.com>
      Co-authored-by: Jeff Rasley <jerasley@microsoft.com>
      59975896
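The deprecated-field handling mentioned in this commit ("only process dep params when they are set", "fix how deprecated fields are processed") can be sketched without Pydantic itself: old config keys are remapped to their new names, with a warning, before validation. The mapping below borrows the `stage3_gather_fp16_weights_on_model_save` rename that appears elsewhere in this log; the helper and the `cpu_offload` entry are illustrative, not DeepSpeed's actual code.

```python
# Sketch of deprecated-key processing: remap old config keys to their new
# names (with a warning) before validation. Mappings are illustrative.
import warnings

DEPRECATED_KEYS = {
    "cpu_offload": "offload_optimizer",  # illustrative mapping
    "stage3_gather_fp16_weights_on_model_save":
        "stage3_gather_16bit_weights_on_model_save",
}

def process_deprecated(config: dict) -> dict:
    out = dict(config)
    for old, new in DEPRECATED_KEYS.items():
        if old in out:  # only process deprecated params when they are set
            warnings.warn(f"'{old}' is deprecated, use '{new}'")
            out.setdefault(new, out.pop(old))
    return out

cfg = process_deprecated({"stage": 3,
                          "stage3_gather_fp16_weights_on_model_save": True})
```

Pydantic expresses the same idea declaratively with field aliases and validators; the imperative version above just makes the remapping step explicit.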
  6. 21 Jan 2022, 1 commit
    •
      Various ZeRO Stage3 Optimizations + Improvements (including bfloat16 support) (#1453) · 4912e0ad
      Committed by Justin Chiu
      * Changes for bfloat16 Zero2
      
      * ZeRO stage3 optimizations, with some bug fixes
      
      optimizations for stage3:
      - prefetching improvements
      - batching allgather calls to amortize fixed overhead and improve
        bandwidth utilization
      - batching reduce_scatter calls to amortize fixed overhead and
        improve bandwidth utilization
      - using *_base variants of allgather and reduce scatter to reduce memory
        allocations and data movement
      - more fine grained synchronization for communication that allows
        blocking on less work
      - precomputation of fetching code - using a fetch queue rather than
        deciding what to (pre)fetch at each iteration
      - limiting queued coalesced communication ops to reduce memory pressure
        on pytorch cuda caching allocator (not elegant solution)
      
      optimizations for stage3-offload:
      - made some host-device tensor copies async to improve performance
      
      bug fixes and qol improvements:
      - fix init context method when parent modules modify child weights
      - speed up model initialization by moving model to GPU before weight
        initialization
      - fixed unit test imports so that unit tests can be run from any
        directory
      - change performance logging to include memory consumption
      - add logging w/ model size when done partitioning model
      
      new features:
      - bfloat16 support for ZeRO 3
      
      * fix import in ut
      
      * ran yapf
      
      * improvements to cache flush warn log
      
      * backwards compatibility with older versions of pytorch
      
      * handle edge case where reduced tensor smaller than world size
      
      * moved event synchronization to allgather handle wait() call
      
      * removed unnecessary barrier call
      
      * formatting fix after resolving merge conflict
      
      * skip nvme prefetch when trace not complete
      
      * opportunistically avoid memory allocation in allgather coalesced where possible
      
      * fix indentation after merge
      
      * fixes to account for parameter offload
      
      * accounting for torch.cuda.memory_stats not being available
      
      * moved partition_all_params to optimizer step
      
      * allgathering on params before item gets called
      
      * fix param status checks
      
      needed after moving partition_all_parameters call to optimizer step
      
      * fix grad accumulation with optimizer offload
      
      * grad norm computation fix for optimizer offload
      
      * change post divide in reduce-scatter to pre divide
      
      * fix gradient race condition w/ optimizer offload
      
      * improve inf/nan gradient tracking
      
      * don't prefetch when not in training mode
      
      * format fix after merging
      
      * fix prefetching issue when using NVME offload
      
      * improved defragmentation for fp16 parameters
      
      * relative imports for bf16 tests
      
      * changes for bwd compatibility with pytorch 1.2
      
      * remove buffered_reduce_fallback
      
      * removed unused parameter offset bookkeeping
      
      * fixed tracking for multiple param groups
      
      * unbroke bfloat16 config after merge conflict
      
      * using base allgather params when only 1 param
      
      * cleanup/fixes for fp16 partition defragmentation
      
      * switch to CRLF
      
      * convert to same new-line style as master
      
      * align new line with master
      
      * Fix merge issues
      
      * switch to CRLF
      
      * fix to LF line endings
      
      * minor merge fixes
      
      * remove extra bfloat16_enabled definition
      
      * asserting params inflight for AllGatherHandle
      
      * remove get_cuda_mem_allocated_str
      
      * Format fixes
      
      * fix bfloat16 zero stage check (broken after merge commit)
      
      * +self.communication_data_type, -self.allreduce_always_fp32; delete dead code
      
      * Add self.reduce_scatter
      
      * Format fix
      
      * Fix merge issues
      
      * iterate over params_to_fetch rather than make another iterator
      
      * add some TODOs
      
      * remove unnecessary division by micro_step_id
      
      * rename config keys "bfloat16" -> "bf16"
      
      * rename stage3_gather_fp16_weights_on_model_save -> stage3_gather_16bit_weights_on_model_save
      
      * add unit test to check backwards compatibility for gather_16bit_weights
      
      * added test to confirm bf16 key bwd compatibility
      
      * Format fixes
      Co-authored-by: Rana Ali Amjad <raamjad@amazon.com>
      Co-authored-by: Justin Chiu <justchiu@amazon.com>
      Co-authored-by: Olatunji Ruwase <olruwase@microsoft.com>
      Co-authored-by: Jeff Rasley <jerasley@microsoft.com>
      4912e0ad
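The "precomputation of fetching code — using a fetch queue rather than deciding what to (pre)fetch at each iteration" optimization can be sketched in plain Python. After one recorded iteration the fetch order of submodule parameters is known, so upcoming fetches can be issued from a precomputed queue. The trace format and function names here are illustrative, not DeepSpeed's implementation.

```python
from collections import deque

# Sketch of trace-driven prefetching: once the fetch order from a recorded
# iteration is known, the next few params can be prefetched ahead of use
# instead of being chosen anew at every step. Names are illustrative.

trace = ["layer0.w", "layer1.w", "layer2.w", "layer3.w"]  # recorded fetch order

def run_with_prefetch(trace, prefetch_depth=2):
    queue = deque(trace)            # precomputed fetch queue
    prefetched, served = set(), []
    while queue:
        param = queue.popleft()
        # issue prefetches for the next few params in the queue
        for upcoming in list(queue)[:prefetch_depth]:
            prefetched.add(upcoming)
        served.append((param, param in prefetched))
    return served

order = run_with_prefetch(trace)
# only the very first fetch misses; every later param was prefetched
```

The `prefetch_depth` cap corresponds to the commit's note about limiting queued coalesced communication ops to reduce memory pressure on the caching allocator.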
  7. 01 Dec 2021, 1 commit
  8. 29 Jul 2021, 2 commits
  9. 09 Jun 2021, 1 commit
  10. 20 May 2021, 1 commit
  11. 08 May 2021, 1 commit
  12. 25 Apr 2021, 1 commit
    •
      Add find_unused_parameters option to DeepSpeedEngine (#945) · d0b61f18
      Committed by hamlet
      * Add find_unused_parameters option
      
      Since unused parameters in modules may sometimes be unexpected,
      add an explicit error message when this occurs, plus an option to avoid the error: https://github.com/microsoft/DeepSpeed/issues/707
      
      * Add find_unused_parameters option
      
      Since unused parameters in modules may sometimes be unexpected,
      add an explicit error message when this occurs, plus an option to avoid the error: https://github.com/microsoft/DeepSpeed/issues/707
      
      * Fix syntax error
      
      * Fix yapf error
      
      * Fix yapf error
      
      * Fix yapf error
      
      * Fix yapf error
      
      * Move stage2 find_unused_parameters to config file
      
      * Add stage2 find_unused_parameters
      
      * Add stage2 find_unused_parameters
      
      * Add stage2_find_unused_parameters option
      
      * Change error msg to reflect zero_optimization config change
      
      * Fix yapf error
      
      * Fix yapf errors
      
      * Change find_unused_parameters option name
      
      * Change find_unused_parameters option name
      
      * Change find_unused_parameters option name
      
      * Change find_unused_parameters option name
      
      * Change find_unused_parameters option name
      
      * Add UnusedParametersModel for test option find_unused_parameters
      
      * Add unit test for stage2 find_unused_parameters
      
      * Add cpu-adam compatible check
      
      * Remove dups import
      
      * Trim spaces
      
      * Fix yapf errors
      
      * Trim spaces
      
      * Add False Positive test check
      
      * Fix find_unused_parameters test
      
      * Trim spaces
      
      * Fix yapf error
      d0b61f18
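The commits above move the option into the `zero_optimization` section of the DeepSpeed JSON config, but the log shows the key being renamed several times without stating its final name. A hedged sketch of the resulting config, assuming the key ended up as `find_unused_parameters` under `zero_optimization`:

```python
import json

# Hedged sketch of a DeepSpeed-style JSON config enabling the option added
# in this PR. The key name "find_unused_parameters" and its placement are
# assumptions inferred from the commit messages, not verified against the
# final code.
config_json = """
{
  "train_batch_size": 8,
  "zero_optimization": {
    "stage": 2,
    "find_unused_parameters": true
  }
}
"""
config = json.loads(config_json)
zero_cfg = config["zero_optimization"]
```

With the flag unset, stage-2 ZeRO raises the explicit error described in issue #707 when a module leaves parameters unused in the backward pass.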
  13. 19 Apr 2021, 1 commit
  14. 08 Apr 2021, 1 commit
  15. 27 Mar 2021, 1 commit
  16. 17 Mar 2021, 1 commit
  17. 09 Mar 2021, 1 commit
  18. 12 Dec 2020, 1 commit
  19. 18 Nov 2020, 1 commit
  20. 10 Sep 2020, 1 commit
  21. 02 Sep 2020, 1 commit
  22. 14 Jul 2020, 1 commit
  23. 05 Jun 2020, 1 commit
    •
      Add log util (#230) · e1ad8803
      Committed by Chunyang Wen
      * Add log util
      
      * replace all occurrences of print and logging
      
      * address format
      
      * disable propagate to avoid duplicate log
      e1ad8803
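The "disable propagate to avoid duplicate log" fix can be sketched with the standard `logging` module: give the library its own handler and turn off propagation so records are not also handled by the root logger, which would print every message twice. `create_logger` is an illustrative name; DeepSpeed's actual helper may differ.

```python
import logging

# Sketch of the log-util pattern: a named logger with its own handler and
# propagation disabled, so messages are emitted exactly once rather than
# also bubbling up to the root logger's handlers.
def create_logger(name="deepspeed", level=logging.INFO):
    logger = logging.getLogger(name)
    logger.propagate = False          # avoid duplicate output via root logger
    if not logger.handlers:           # don't stack handlers on repeated calls
        handler = logging.StreamHandler()
        handler.setFormatter(
            logging.Formatter("[%(name)s] %(levelname)s: %(message)s"))
        logger.addHandler(handler)
    logger.setLevel(level)
    return logger

logger = create_logger()
logger.info("initialized")
```

The `if not logger.handlers` guard matters because `logging.getLogger(name)` returns the same object on every call, so an unguarded `addHandler` would itself cause duplicated output.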
  24. 28 May 2020, 1 commit
  25. 19 May 2020, 1 commit