Commit 9cb64a1f (unverified)
MoE read the docs update (#1312)

Authored on Aug 17, 2021 by Jeff Rasley; committed via GitHub on Aug 17, 2021.
Parent: a1de767a

Showing 4 changed files with 60 additions and 13 deletions (+60 -13):

* deepspeed/moe/layer.py (+10 -9)
* deepspeed/utils/groups.py (+33 -4)
* docs/code-docs/source/index.rst (+5 -0)
* docs/code-docs/source/moe.rst (+12 -0)
deepspeed/moe/layer.py:

```diff
@@ -16,9 +16,6 @@ import typing
 class MoE(torch.nn.Module):
     '''
     DeepSpeed MOE API: This defines a simple API that can be used from client-side code.
     '''

     def __init__(self,
                  hidden_size,
                  expert,
@@ -30,10 +27,9 @@ class MoE(torch.nn.Module):
                  min_capacity=4,
                  noisy_gate_policy: typing.Optional[str] = None):
         """Initialize an MoE layer.

         TODO: add details about input/output dimension assumptions

         Arguments:
-            hidden_size (int): the hidden dimension of the model.
+            hidden_size (int): the hidden dimension of the model, importantly this is also the input and output dimension.

             expert (torch.nn.Module): the torch module that defines the expert (e.g., MLP, torch.linear).
@@ -81,15 +77,20 @@ class MoE(torch.nn.Module):
         self.dropout = torch.nn.Dropout(output_dropout_prob)

     def forward(self, hidden_states, used_token=None):
-        """
+        """ MoE forward

         Arguments:
             hidden_states (Tensor): input to the layer
             used_token (Tensor, optional): default: None, mask only used tokens

         Returns:
-            output (Tensor): output of the model
-            l_aux (Tensor): gate loss value
-            exp_counts (int): expert count
+            A tuple including output, gate loss, and expert count.
+
+            * output (Tensor): output of the model
+            * l_aux (Tensor): gate loss value
+            * exp_counts (int): expert count
         """
         output = self.deepspeed_moe(hidden_states, used_token)
         output = self.dropout(output)
```
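For context, a minimal sketch of how this MoE layer is used from client code. Only hidden_size, expert, min_capacity, noisy_gate_policy, and the (output, l_aux, exp_counts) return tuple appear in the hunks above; the expert definition and the num_experts, k, and noisy_gate_policy values below are illustrative assumptions, and a real multi-process run would initialize process groups first (see deepspeed/utils/groups.py below).

```python
import torch
from deepspeed.moe.layer import MoE

hidden_size = 1024

# Hypothetical expert: any torch.nn.Module whose input and output
# dimensions equal hidden_size, per the docstring change above.
expert = torch.nn.Sequential(
    torch.nn.Linear(hidden_size, 4 * hidden_size),
    torch.nn.ReLU(),
    torch.nn.Linear(4 * hidden_size, hidden_size),
)

# num_experts and k are not shown in this diff; the values here are assumptions.
moe = MoE(hidden_size=hidden_size,
          expert=expert,
          num_experts=4,
          k=1,
          min_capacity=4,
          noisy_gate_policy='RSample')

hidden_states = torch.randn(8, 16, hidden_size)  # (batch, seq, hidden)
output, l_aux, exp_counts = moe(hidden_states)   # tuple per the new docstring
```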
deepspeed/utils/groups.py:

```diff
@@ -69,10 +69,39 @@ def ensure_divisibility(numerator, denominator):
 def initialize(ep_size=1, mpu=None):
-    """ if mpu is provided, intialize groups using mpu.
-    otherwise, we have two cases:
-    1. If called from DeepSpeed.initialize(), initialize groups with mp_size=1 and ep_size=1
-    2. If called from an application, initialize groups with mp_size=1 and ep_size=ep_size provided by the application
+    """
+    Process groups initialization supporting expert (E), data (D), and model (M) parallelism. DeepSpeed considers
+    the following scenarios w.r.t. process group creation.
+
+    * S1: There is no expert parallelism or model parallelism, only data (D)::
+
+        model = my_model(args)
+        engine = deepspeed.initialize(model) # initialize groups without mpu
+
+    * S2: There is expert parallelism but no model parallelism (E+D)::
+
+        deepspeed.utils.groups.initialize(ep_size) # groups will be initialized here
+        model = my_model(args)
+        engine = deepspeed.initialize(model)
+
+    * S3: There is model parallelism but no expert parallelism (M)::
+
+        mpu.init() # client initializes its model parallel unit
+        model = my_model(args)
+        engine = deepspeed.initialize(model, mpu=mpu) # init w. mpu but ep_size = dp_world_size
+
+    * S4: There is model, data, and expert parallelism (E+D+M)::
+
+        mpu.init() # client initializes its model parallel unit
+        deepspeed.utils.groups.initialize(ep_size, mpu) # initialize expert groups wrt mpu
+        model = my_model(args)
+        engine = deepspeed.initialize(model, mpu=mpu) # passing mpu is optional in this case
+
+    Arguments:
+        ep_size (int, optional): default=1, expert parallel size
+        mpu (module, optional): default=None, model parallel unit (e.g., from Megatron)
+            that describes model/data parallel ranks.
     """
     if mpu is not None:
         log_dist(message="initializing deepspeed groups using mpu", ranks=[0])
```
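To make the E+D scenarios in the new docstring concrete, here is a small illustrative sketch (not DeepSpeed code; the helper name is hypothetical) of the contiguous-block grouping that expert parallelism implies when ep_size divides the world size.

```python
def expert_group_ranks(world_size: int, ep_size: int):
    """Hypothetical helper: partition ranks into expert-parallel groups.

    Each contiguous block of ep_size ranks shares one expert-parallel
    group, e.g. world_size=8, ep_size=2 yields [[0, 1], [2, 3], [4, 5], [6, 7]].
    """
    assert world_size % ep_size == 0, "world size must be divisible by ep_size"
    return [list(range(i, i + ep_size)) for i in range(0, world_size, ep_size)]

print(expert_group_ranks(8, 2))  # [[0, 1], [2, 3], [4, 5], [6, 7]]
```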
docs/code-docs/source/index.rst:

```diff
@@ -42,7 +42,12 @@ ZeRO API
    zero3

 Mixture of Experts (MoE)
 ------------------------
+
+.. toctree::
+   :maxdepth: 2
+
+   moe

 Transformer Kernel API
 ----------------------
```
docs/code-docs/source/moe.rst (new file, mode 100644):

```rst
Mixture of Experts (MoE)
========================

Layer specification
-------------------

.. autoclass:: deepspeed.moe.layer.MoE
    :members:

Groups initialization
---------------------

.. autofunction:: deepspeed.utils.groups.initialize
```