- June 8, 2023 (6 commits)
-
-
Committed by Conglong Li
DeepSpeed overview in Japanese
-
Committed by john li
* Small tweak on CUDA version mismatch documentation
* Clarify that minor versions should also match (see the sketch below)

Co-authored-by: Logan Adams <114770087+loadams@users.noreply.github.com>
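A minimal sketch of the version check the docs describe (illustrative, not DeepSpeed's own code); it assumes `nvcc` is on the PATH and compares the CUDA version torch was built against with the installed toolkit, minor version included.

```python
import subprocess

import torch

# CUDA version torch was compiled with, e.g. "11.7" (None on CPU-only builds).
torch_cuda = torch.version.cuda

# CUDA version of the system toolkit, taken from `nvcc --version`.
nvcc_out = subprocess.check_output(["nvcc", "--version"], text=True)
release_line = next(line for line in nvcc_out.splitlines() if "release" in line)

print(f"torch built with CUDA {torch_cuda}")
print(release_line)  # e.g. "Cuda compilation tools, release 11.7, V11.7.99"
```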
-
Committed by Michael Wyatt
* Fix typo and missing epsilon value
* Touch file to re-build
* Revert changes
* Touch file to re-build
* Format

Co-authored-by: Logan Adams <114770087+loadams@users.noreply.github.com>
Co-authored-by: Logan Adams <loadams@microsoft.com>
-
Committed by digger yu
Co-authored-by: Logan Adams <114770087+loadams@users.noreply.github.com>
-
Committed by Reza Yazdani
* Fix gpt-j inference issue for mlp_gemm_func call
* Bring back the gpt-j inference test
* Fix formatting
* Fix the neox and pythia injection issue
-
Committed by Logan Adams
This reverts commit f2f5f21b.
-
- June 7, 2023 (5 commits)
-
-
Committed by tensor-tang
Co-authored-by: Logan Adams <114770087+loadams@users.noreply.github.com>
Co-authored-by: Michael Wyatt <michaelwyatt@microsoft.com>
-
Committed by Logan Adams
Co-authored-by: Michael Wyatt <michaelwyatt@microsoft.com>
-
Committed by Abhilash Majumder
Co-authored-by: Logan Adams <114770087+loadams@users.noreply.github.com>
-
Committed by Byungsoo Oh
Co-authored-by: Logan Adams <114770087+loadams@users.noreply.github.com>
Co-authored-by: Olatunji Ruwase <olruwase@microsoft.com>
-
Committed by Ramya Ramineni
Co-authored-by: Olatunji Ruwase <olruwase@microsoft.com>
Co-authored-by: Michael Wyatt <michaelwyatt@microsoft.com>
Co-authored-by: Logan Adams <114770087+loadams@users.noreply.github.com>
-
- June 6, 2023 (3 commits)
-
-
Committed by Siddharth Singh
-
Committed by Olatunji Ruwase
* Use logger in accelerator
* Handle pre-build cases
* Explain possible import failure
-
Committed by digger yu
* Fix typos in deepspeed/runtime
* Fix some typos

Co-authored-by: Olatunji Ruwase <olruwase@microsoft.com>
-
- June 5, 2023 (1 commit)
-
-
Committed by Zhen Zhang
* Fix MiCS save-checkpoint hanging
* MiCS load_checkpoint
* Copyright
* Fix for torch 1.9.0, where the all_reduce_coalesced API does not support the NCCL backend (see the sketch below)
* Naming alignment
* Add more test conditions for MiCS shard size
* Test with different shard sizes
* Add assertion for a better error message

Co-authored-by: Zhen Zhang <zhzhn@amazon.com>
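A minimal sketch of that kind of backend guard, assuming a hypothetical helper name: `torch.distributed.all_reduce_coalesced` exists in recent torch releases, but on torch 1.9.0 it cannot be used with the NCCL backend, so there the tensors are reduced one by one instead.

```python
import torch
import torch.distributed as dist

def robust_all_reduce(tensors):
    # Hypothetical fallback: coalesced all-reduce where supported,
    # per-tensor all-reduce on torch 1.9.0 with the NCCL backend.
    nccl_on_1_9 = (dist.get_backend() == "nccl"
                   and torch.__version__.startswith("1.9"))
    if hasattr(dist, "all_reduce_coalesced") and not nccl_on_1_9:
        dist.all_reduce_coalesced(tensors)
    else:
        for t in tensors:
            dist.all_reduce(t)
```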
-
- June 3, 2023 (3 commits)
-
-
Committed by Jeff Rasley
-
Committed by Buğra
* Refactor check_enabled root validator in DeepSpeedMonitorConfig
* Formatting
* Formatting

Co-authored-by: Michael Wyatt <michaelwyatt@microsoft.com>
Co-authored-by: Michael Wyatt <mrwyattii@gmail.com>
-
Committed by digger yu
Co-authored-by: Olatunji Ruwase <olruwase@microsoft.com>
-
- June 2, 2023 (5 commits)
-
-
Committed by 郭叶军
When activation checkpointing is enabled, most of the forward pass is re-computed, so the FLOPS calculation should be updated with recompute_fwd_factor=1.0. There is no way to pass the option from the model script to the DeepSpeed engine, so the option is added directly to flops_profiler (see the config sketch below).

Co-authored-by: Cheng Li <pistasable@gmail.com>
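A minimal ds_config sketch: the `recompute_fwd_factor` key is the one this commit describes; the other keys are standard DeepSpeed config and the values are illustrative.

```python
ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "flops_profiler": {
        "enabled": True,
        # Activation checkpointing recomputes the forward pass during
        # backward, so count the forward FLOPS one extra time.
        "recompute_fwd_factor": 1.0,
    },
}
```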
-
Committed by digger yu
* Fix spelling errors in deepspeed/runtime/
* Fix typos in docs/
* Fix typos in comments in deepspeed/
* Fix typos in deepspeed/
* Update constants.py: remove the space after nebula

Co-authored-by: Logan Adams <114770087+loadams@users.noreply.github.com>
Co-authored-by: Michael Wyatt <michaelwyatt@microsoft.com>
-
Committed by Michael Wyatt
Co-authored-by: Olatunji Ruwase <olruwase@microsoft.com>
-
Committed by Haodong Lyu
Co-authored-by: Olatunji Ruwase <olruwase@microsoft.com>
-
Committed by 郭叶军
Co-authored-by: Michael Wyatt <michaelwyatt@microsoft.com>
-
- June 1, 2023 (3 commits)
-
-
Committed by Micah Zoltu
"Code" (in this context) is a mass noun, and thus has no plural form.

Co-authored-by: Michael Wyatt <michaelwyatt@microsoft.com>
-
Committed by Will Jessup
Grammar fix.

Co-authored-by: Michael Wyatt <michaelwyatt@microsoft.com>
-
Committed by Michael Wyatt
* Skip tests for docs-only changes
* Add missing skip for blog changes
-
- May 31, 2023 (3 commits)
-
-
Committed by CurryRice233
* Add Ascend NPU accelerator support
* Clean up code

Co-authored-by: jializheng <jializheng@huawei.com>
Co-authored-by: Olatunji Ruwase <olruwase@microsoft.com>
-
Committed by 郭叶军
This change also aligns with the logic before reduce_scatter_coalesced.

Co-authored-by: Olatunji Ruwase <olruwase@microsoft.com>
-
Committed by Jeff Rasley
* Temporarily remove launcher args
* Add exclude list for env variables on aisc
* Add comment
-
- May 27, 2023 (1 commit)
-
-
Committed by Danny Semiat
* Align InferenceEngine to store ms in _model_times

When using cuda_events, the measured model time is stored in ms. When not using cuda_events, the measured model time was stored in seconds. This commit fixes the units and aligns them to store ms, the same as the elapsed() function (see the sketch below).

This was observed when running the following pytest:
unit/inference/test_model_profiling.py::TestModelProfiling::test[False-True-roberta-base-fill-mask]

Returned values were:
count=0 e2e_t=895.174312 model_t=0.8529715538024902
count=1 e2e_t=7.500252 model_t=0.0041310787200927734
count=2 e2e_t=3.887346 model_t=0.0018568038940429688
count=3 e2e_t=3.577845 model_t=0.0016334056854248047
count=4 e2e_t=3.43976 model_t=0.0016703605651855469
count=5 e2e_t=3.310903 model_t=0.0016107559204101562
count=6 e2e_t=3.299556 model_t=0.001603841781616211
count=7 e2e_t=3.605722 model_t=0.0015969276428222656
count=8 e2e_t=3.273741 model_t=0.0015516281127929688
count=9 e2e_t=3.46306 model_t=0.0016617774963378906

The units difference is visible here: model_t is on the order of 1e-3 relative to e2e_t.

* Update engine.py

Co-authored-by: Michael Wyatt <michaelwyatt@microsoft.com>
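A sketch of the unit alignment (illustrative; `timed_forward` is a hypothetical helper, not the engine code): CUDA events already report milliseconds via `elapsed_time()`, so the wall-clock path should scale its seconds to ms to match.

```python
import time

def timed_forward(model, *inputs):
    # Wall-clock fallback when CUDA events are not used: time.time()
    # returns seconds, so scale by 1000 to store milliseconds.
    start = time.time()
    output = model(*inputs)
    model_time_ms = (time.time() - start) * 1000.0
    return output, model_time_ms
```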
-
- May 26, 2023 (3 commits)
-
-
Committed by Quentin Anthony
Co-authored-by: Jeff Rasley <jerasley@microsoft.com>
-
Committed by Olatunji Ruwase
-
Committed by Conglong Li
-
- May 25, 2023 (1 commit)
-
-
Committed by Nikita Shulga
-
- May 24, 2023 (2 commits)
-
-
Committed by Lev Kurilenko
This PR fixes Hybrid Engine (HE) support for the BLOOM model, which was accidentally broken during the HE refactor in GH-3425. The BLOOM container now inherits the HybridEngineContainer feature and defines the set_lora_params() function necessary for the feature to work. get_lora_params() is correspondingly removed from the BLOOM policy class. GPT-NeoX was also cleaned up by removing a get_lora_params() function from its policy, since it is no longer used.
-
Committed by Joe Mayer
* Fix bf16 test that was missing a config
* Change train_batch_size to train_micro_batch_size_per_gpu
* Change all train_batch_size to train_micro_batch_size_per_gpu (see the config sketch below)
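An illustrative config sketch of why the rename matters: DeepSpeed ties the three batch settings together as train_batch_size = train_micro_batch_size_per_gpu * gradient_accumulation_steps * world_size, so a test that pins train_batch_size can break on a different GPU count, while pinning only the per-GPU micro-batch size does not (the values below are assumptions, not the test's actual config).

```python
ds_config = {
    # Pin only the per-GPU micro-batch; DeepSpeed derives the global
    # train_batch_size from this, gradient_accumulation_steps and world size.
    "train_micro_batch_size_per_gpu": 2,
    "gradient_accumulation_steps": 1,
    "bf16": {"enabled": True},
}
```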
-
- May 19, 2023 (1 commit)
-
-
Committed by mzl
-
- May 17, 2023 (2 commits)
-
-
Committed by Ramya Ramineni
Co-authored-by: Logan Adams <114770087+loadams@users.noreply.github.com>
Co-authored-by: Olatunji Ruwase <olruwase@microsoft.com>
-
Committed by Olatunji Ruwase
* Clone tensors to avoid torch.save bloat (see the sketch below)
* Add docs
* Fix clang-formatting
* Update docs/code-docs/source/model-checkpointing.rst
* Update deepspeed/checkpoint/utils.py
* Update deepspeed/checkpoint/utils.py
* Fix url
* url fix
* Tweak docs

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
Co-authored-by: Logan Adams <114770087+loadams@users.noreply.github.com>
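A minimal repro of the bloat (not the checkpoint code itself): torch.save serializes a tensor's whole underlying storage, so saving a small view of a large tensor writes the entire large buffer, while a clone owns a compact storage of its own.

```python
import os

import torch

big = torch.zeros(1_000_000)           # ~4 MB of float32 storage
view = big[:10]                         # shares storage with `big`

torch.save(view, "view.pt")             # ~4 MB file: whole storage is saved
torch.save(view.clone(), "clone.pt")    # tiny file: just the 10 elements

print(os.path.getsize("view.pt"), os.path.getsize("clone.pt"))
```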
-
- May 16, 2023 (1 commit)
-
-
Committed by Ma, Guokai
* Add fallback path for kernels used in Megatron
* Temporary numactl WA for SPR 56-core
* Adapt core allocation according to number of ranks
* Add switch to turn on numactl
* Detect number of cores on the system
* Allow selecting a subset of the cores on the system to bind
* Remove unneeded changes
* Add ccl backend
* Change nccl to ccl
* Remove unused code
* Add comm/ccl to ops
* Initial ccl comm support
* First broadcast case passed
* Add CCL_Backend to DeepSpeed
* Support comm timer for CPU
* Support barrier for comm backend
* Support specifying master address from the deepspeed command line
* Support pytorch 2.0
* Remove 'block' from api
* Tweak for debug
* Remove unnecessary directory
* Add bf16 kernel support for inference
* Add temporary torch implementation for cpu inference
* Add softmax ops cpu fallback for inference
* Bind cores to numa domain as well
* Merge latest change in gma/numactl
* Initial bf16 kernel support with fallback path
* Initial fallback path for bloom kernel injection
* Fix softmax attn mask
* Check KMP_AFFINITY to avoid conflict with numactl
* New CCLBackend which utilizes TorchBackend for initialization
* Roll back last change because of a result error
* Fix bloom injection policy TP not working: injection_policy={BloomBlock: ("self_attention.dense", "mlp.dense_4h_to_h")}
* Use TorchBackend to initialize CCLBackend, making behavior consistent
* Remove comm under deepspeed/ops
* Add license header
* Code clean up
* Fix format issue
* Remove magic number in main address
* Add caching support, but do not turn it on by default
* Change name of inference_cuda_module to inference_module
* Check for is_synchronized_device in accelerator before getting Event
* Fix typo
* Fix fallback path of softmax kernel on CUDA device for BF16 data type: because CUDA tril does not support BF16, enforce the fp32 data type
* Add cpu backend files
* Change CPU_Accelerator op_builder_dir
* Remove cpu_kernel_path
* Use CPU_Accelerator on non-cuda device
* Fix deepspeed.op_builder => deepspeed.ops.op_builder
* Add alias for num_gpus: num_accelerators
* Allow loading cpu_builder in build stage
* Assume cuda available if torch not installed
* Add oneccl_binding_pt to requirements
* Move oneccl-binding-pt to separate requirements-cpu.txt
* Add missing file
* Use dependency_links in setuptools.setup() call for additional dependency links
* Install oneccl_bind_pt in workflows
* Change oneccl_bind_pt's version from 1.13 to 2.0
* Use intel_extension_for_pytorch as indicator that CPU_Accelerator should be used
* Add indicator for Accelerator used
* Change foo.c to foo.cpp
* Exclude 'cpu' directory in CUDA op builder reflection
* Add a cpu-inference workflow
* Run cpu-inference workflow on self-hosted instance
* Change cpu runs-on node to v100 node
* Print out python version in workflow
* Add verbose in pip command to understand oneccl_bind_pt install issue
* Update cpu-inference workflow
* Add a stage to detect instance instruction sets
* Add back bf16 support for CPU inference
* Enable autoTP for bloom
* Update workflow to detect cpu instruction sets
* Temporary WA for Intel Extension for PyTorch AVX2 instruction set detection
* Change cpu-inference workflow machine to ubuntu-20.04
* Add sharded checkpoint loading for AutoTP path to reduce peak memory in the initialization stage
* Enable policy for llama
* Use a special build of ipex to test avx2 detection fix
* Fix format
* Fix test failure issue
* Fix gptj sharded checkpoint loading problem
* Return a not-implemented build in get_op_builder in cpu_backend
* Support cpu device in tests
* Use cpuinfo to extract number of CPUs
* Use ~/tmp as transformer cache rather than /blob/
* Add support for mpich launcher with prefer_deepspeed_comm
* Add missing modification in accelerator
* Enable IMPI launcher
* Remove unused file and fix formatting
* Clean up ccl.cpp
* Less confusing error message when certain op builders are not implemented
* Fix license header
* Add license header
* Add license headers
* Add license header
* Fix cuda-specific code in test
* Update CPU workflow
* Use numactl to bind to core
* Allow bind_cores_to_rank in multi-node impi runner
* Fix format error
* Remove InferenceBuilder
* Fix format error in numa.py
* Check whether op is in installed ops in ds_report.py
* Allow overriding the accelerator with DS_ACCELERATOR='cuda', 'cpu' or 'xpu' (see the sketch below)
* Lazily init class_dict in CUDA_Accelerator to avoid cyclic initialization of CUDA_Accelerator
* Put short path at the beginning in real_accelerator.py
* device_count returns number of NUMA nodes
* Fix typo
* Install numactl in cpu workflow
* Follow comments
* Better implementation of device_count() and current_device()
* Remove dependency_link for Intel Extension for DeepSpeed
* Use check is_synchronized_device in timer only once
* Remove env mapping WA in cpu_accelerator
* Fix duplicate definition
* Fix format error
* Refine ccl backend selection
* Move comments to the right place
* Remove prefer_deepspeed_comm, use CCLBackend by default
* Refactor fallback path
* Fix execution failure in kernel injection path
* Do not refactor kernel injection fallback path in residual_add because it contains a function call with side effects
* Guard residual_add fallback path with environment variable DS_KI_FALLBACK=True
* Fix format error
* Add test for allreduce on CPU workflow
* Fix format error
* Fall back to TorchBackend if CCLBackend kernels are not implemented
* Update Intel Extension for PyTorch installation link
* Don't specify version number of Intel Extension for PyTorch
* Install oneCCL for CCLBackend
* Fix link path for CPU comm kernels
* Fix source oneCCL environment
* Source oneCCL env before running UT
* Give more specific instruction when CCL_ROOT is not defined

Signed-off-by: Cao, Zhong Z <zhong.z.cao@intel.com>
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Co-authored-by: sdp <sdp@aia-sdp-spr-108864.jf.intel.com>
Co-authored-by: Cao, Zhong Z <zhong.z.cao@intel.com>
Co-authored-by: Zhenhuan Chen <zhenhuan.chen@intel.com>
Co-authored-by: baodii <di.bao@intel.com>
Co-authored-by: Wang, Yi A <yi.a.wang@intel.com>
Co-authored-by: jianan-gu <jianan.gu@intel.com>
Co-authored-by: Olatunji Ruwase <olruwase@microsoft.com>
Co-authored-by: Logan Adams <114770087+loadams@users.noreply.github.com>
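A minimal sketch of the DS_ACCELERATOR override mentioned above; the variable must be set before deepspeed is imported so accelerator detection can see it, and the expected output is an assumption based on the commit description.

```python
import os

# Force the CPU accelerator; 'cuda' and 'xpu' are the other values
# named in this commit.
os.environ["DS_ACCELERATOR"] = "cpu"

from deepspeed.accelerator import get_accelerator

print(get_accelerator().device_name())  # expected: "cpu"
```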
-