Unverified commit 9cb64a1f authored by Jeff Rasley, committed by GitHub

MoE read the docs update (#1312)

Parent a1de767a
@@ -16,9 +16,6 @@ import typing
class MoE(torch.nn.Module):
'''
DeepSpeed MOE API: This defines a simple API that can be used from client-side code.
'''
def __init__(self,
hidden_size,
expert,
@@ -30,10 +27,9 @@ class MoE(torch.nn.Module):
min_capacity=4,
noisy_gate_policy: typing.Optional[str] = None):
"""Initialize an MoE layer.
TODO: add details about input/output dimension assumptions
Arguments:
hidden_size (int): the hidden dimension of the model.
hidden_size (int): the hidden dimension of the model, importantly this is also the input and output dimension.
expert (torch.nn.Module): the torch module that defines the expert (e.g., MLP, torch.linear).
@@ -81,15 +77,20 @@ class MoE(torch.nn.Module):
self.dropout = torch.nn.Dropout(output_dropout_prob)
def forward(self, hidden_states, used_token=None):
"""
""" MoE forward
Arguments:
hidden_states (Tensor): input to the layer
used_token (Tensor, optional): default: None, mask only used tokens
Returns:
output (Tensor): output of the model
l_aux (Tensor): gate loss value
exp_counts (int): expert count
A tuple including output, gate loss, and expert count.
* output (Tensor): output of the model
* l_aux (Tensor): gate loss value
* exp_counts (int): expert count
"""
output = self.deepspeed_moe(hidden_states, used_token)
output = self.dropout(output)
......
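To make the documented forward contract concrete, here is a minimal usage sketch rather than DeepSpeed's own sample code: the MLP expert, the num_experts and k keywords, and the 'RSample' gate policy are assumptions taken from the wider MoE API, not from this diff, and the layer expects torch.distributed/DeepSpeed process groups to be initialized before construction::

    import torch
    import deepspeed

    hidden_size = 1024

    # The expert may be any torch.nn.Module whose input and output widths
    # both equal hidden_size (here: a plain two-layer MLP).
    expert = torch.nn.Sequential(
        torch.nn.Linear(hidden_size, 4 * hidden_size),
        torch.nn.ReLU(),
        torch.nn.Linear(4 * hidden_size, hidden_size))

    # Assumes distributed groups were already created, e.g. via
    # deepspeed.init_distributed() and deepspeed.utils.groups.initialize().
    moe_layer = deepspeed.moe.layer.MoE(
        hidden_size=hidden_size,
        expert=expert,
        num_experts=8,                 # assumed keyword from the MoE API
        k=1,                           # top-1 gating
        min_capacity=4,
        noisy_gate_policy='RSample')   # assumed policy name

    hidden_states = torch.randn(8, 16, hidden_size)   # (batch, seq, hidden)
    output, l_aux, exp_counts = moe_layer(hidden_states)
    # output matches the shape of hidden_states, l_aux is the gate
    # (load-balancing) loss, and exp_counts reports tokens routed per expert.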
@@ -69,10 +69,39 @@ def ensure_divisibility(numerator, denominator):
def initialize(ep_size=1, mpu=None):
""" if mpu is provided, intialize groups using mpu.
otherwise, we have two cases:
1. If called from DeepSpeed.initialize(), initialize groups with mp_size=1 and ep_size=1
2. If called from an application, initialize groups with mp_size=1 and ep_size=ep_size provided by the application
"""
Process groups initialization supporting expert (E), data (D), and model (M) parallelism. DeepSpeed considers
the following scenarios w.r.t. process group creation.
* S1: There is no expert parallelism or model parallelism, only data (D)::
model = my_model(args)
engine = deepspeed.initialize(model) # initialize groups without mpu
* S2: There is expert parallelism but no model parallelism (E+D)::
deepspeed.utils.groups.initialize(ep_size) # groups will be initialized here
model = my_model(args)
engine = deepspeed.initialize(model)
* S3: There is model parallelism but no expert parallelism (M)::
mpu.init() # client initializes its model parallel unit
model = my_model(args)
engine = deepspeed.initialize(model, mpu=mpu) # init w. mpu but ep_size = dp_world_size
* S4: There is model, data, and expert parallelism (E+D+M)::
mpu.init() # client initializes its model parallel unit
deepspeed.utils.groups.initialize(ep_size, mpu) # initialize expert groups wrt mpu
model = my_model(args)
engine = deepspeed.initialize(model, mpu=mpu) # passing mpu is optional in this case
Arguments:
ep_size (int, optional): default=1, expert parallel size
mpu (module, optional): default=None, model parallel unit (e.g., from Megatron)
that describes model/data parallel ranks.
"""
if mpu is not None:
log_dist(message="initializing deepspeed groups using mpu", ranks=[0])
......
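As a concrete sketch of scenario S2 above (expert plus data parallelism, no model parallelism), the calls can be ordered as follows; my_model, args, and ds_config are hypothetical placeholders for the application's model and DeepSpeed configuration, and deepspeed.init_distributed() is assumed to have set up torch.distributed on every rank::

    import deepspeed
    from deepspeed.utils import groups

    deepspeed.init_distributed()          # set up torch.distributed first

    ep_size = 4                           # ranks per expert-parallel group
    groups.initialize(ep_size=ep_size)    # create expert/data process groups

    model = my_model(args)                # placeholder: application model with MoE layers
    engine, optimizer, _, _ = deepspeed.initialize(
        model=model,
        model_parameters=model.parameters(),
        config=ds_config)                 # placeholder DeepSpeed config dict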
@@ -42,7 +42,12 @@ ZeRO API
zero3
Mixture of Experts (MoE)
------------------------
.. toctree::
:maxdepth: 2
moe
Transformer Kernel API
----------------------
......
Mixture of Experts (MoE)
========================
Layer specification
--------------------
.. autoclass:: deepspeed.moe.layer.MoE
:members:
Groups initialization
---------------------
.. autofunction:: deepspeed.utils.groups.initialize