Commit da8b16fc
Authored Dec 09, 2021 by quqi.liu

docs(mge/distributed): change the allreduce related functions document

Parent: 695d24f2
Showing 1 changed file with 113 additions and 51 deletions (+113, -51)

imperative/python/megengine/distributed/functional.py
@@ -389,31 +389,52 @@ def reduce_scatter_sum(
 def all_reduce_sum(
     inp: Tensor, group: Optional[Group] = WORLD, device: Optional[str] = None,
 ) -> Tensor:
-    r"""Reduce tensors across the specified group by sum.
+    r"""Reduce tensors with sum operation on each value across the specified group.
+
+    Note:
+        ``inp`` tensor must have identical shape in all processes across the group.
 
     Args:
-        inp: Input tensor.
-        group: The process group to work on.
-            The default group is WORLD which means all processes available.
-            You can use a list of process ranks to create new group to work on it, e.g. [1, 3, 5].
-        device: The specific device to execute this operator.
-            None default device means the device of inp will be used.
-            Specify "gpu0:1" to execute this operator on diffrent cuda stream,
-            1 is stream id, and default stream id is 0.
+        inp (Tensor): tensor to be reduced.
+
+    Keyword args:
+        group (Group or sequence of ints): the process group to work on. Default: ``WORLD``.
+            ``WORLD`` group selects all processes available.
+            A list of process ranks as parameter will create a new group to work on.
+        device (:attr:`.Tensor.device`): the specific device to execute this operator. Default: ``None``.
+            ``None`` will select the device of ``inp`` to execute.
+            Specially, ``GPU`` device can assign a different stream to execute
+            by adding a number right after a colon following the device name;
+            ``:0`` denotes the default stream of the GPU and is used when no stream is given.
 
     Returns:
-        Result tensor.
+        A tensor with sum operation on each value across the group.
+
+        The shape of the output tensor must be the same as ``inp``, and the output
+        tensor is going to be bitwise identical in all processes across the group.
 
     Examples:
-        .. code-block::
-
-            input = Tensor(rank)
-            # Rank 0 # input: Tensor(0)
-            # Rank 1 # input: Tensor(1)
-            output = all_reduce_sum(input)
-            # Rank 0 # output: Tensor(1)
-            # Rank 1 # output: Tensor(1)
+        >>> # We execute all_reduce_sum on rank 0 and rank 1
+        >>> input = F.arange(2) + 1 + 2 * rank
+        >>> input
+        Tensor([1. 2.], device=xpux:0) # Rank 0
+        Tensor([3. 4.], device=xpux:0) # Rank 1
+        >>> F.distributed.all_reduce_sum(input, group=[0, 1])
+        Tensor([4. 6.], device=xpux:0) # Rank 0
+        Tensor([4. 6.], device=xpux:0) # Rank 1
+
+        >>> # We execute all_reduce_sum on gpu0 with cuda stream 1
+        >>> megengine.set_default_device("gpu0")
+        >>> input = F.arange(2) + 1 + 2 * rank
+        >>> input
+        Tensor([1. 2.], device=gpu0:0) # Rank 0
+        Tensor([3. 4.], device=gpu0:0) # Rank 1
+        >>> F.distributed.all_reduce_sum(input, device="gpu0:1")
+        Tensor([4. 6.], device=gpu0:0) # Rank 0
+        Tensor([4. 6.], device=gpu0:0) # Rank 1
     """
     mode = CollectiveComm.Mode.ALL_REDUCE_SUM
     return collective_comm(inp, mode, group, device)
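The docstring example above assumes a two-process distributed context in which ``rank`` is already defined. A minimal, self-contained sketch of how such an example might be launched, assuming MegEngine's ``megengine.distributed.launcher`` decorator and ``get_rank()`` helper are available (as in MegEngine 1.x; exact APIs may differ between versions):

# Hypothetical launcher-based sketch for running the all_reduce_sum example
# across two processes; launcher/get_rank are assumed from MegEngine 1.x.
import megengine.distributed as dist
import megengine.functional as F


@dist.launcher(n_gpus=2)  # spawn one worker per GPU; the WORLD group is set up automatically
def worker():
    rank = dist.get_rank()
    # Same data as the docstring: rank 0 holds [1, 2], rank 1 holds [3, 4].
    inp = F.arange(2) + 1 + 2 * rank
    out = F.distributed.all_reduce_sum(inp, group=[0, 1])
    print(f"rank {rank}: {out.numpy()}")  # both ranks print [4. 6.]


if __name__ == "__main__":
    worker()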
@@ -422,32 +443,53 @@ def all_reduce_sum(
 def all_reduce_max(
     inp: Tensor, group: Optional[Group] = WORLD, device: Optional[str] = None,
 ) -> Tensor:
-    r"""Reduce tensors across the specified group by max.
+    r"""Reduce tensors with max operation on each value across the specified group.
+
+    Note:
+        ``inp`` tensor must have identical shape in all processes across the group.
 
     Args:
-        inp: Input tensor.
-        group: The process group to work on.
-            The default group is WORLD which means all processes available.
-            You can use a list of process ranks to create new group to work on it, e.g. [1, 3, 5].
-        device: The specific device to execute this operator.
-            None default device means the device of inp will be used.
-            Specify "gpu0:1" to execute this operator on diffrent cuda stream,
-            1 is stream id, and default stream id is 0.
+        inp (Tensor): tensor to be reduced.
+
+    Keyword args:
+        group (Group or sequence of ints): the process group to work on. Default: ``WORLD``.
+            ``WORLD`` group selects all processes available.
+            A list of process ranks as parameter will create a new group to work on.
+        device (:attr:`.Tensor.device`): the specific device to execute this operator. Default: ``None``.
+            ``None`` will select the device of ``inp`` to execute.
+            Specially, ``GPU`` device can assign a different stream to execute
+            by adding a number right after a colon following the device name;
+            ``:0`` denotes the default stream of the GPU and is used when no stream is given.
 
     Returns:
-        Result tensor.
+        A tensor with max operation on each value across the group.
+
+        The shape of the output tensor must be the same as ``inp``, and the output
+        tensor is going to be bitwise identical in all processes across the group.
 
     Examples:
-        .. code-block::
-
-            input = Tensor(rank)
-            # Rank 0 # input: Tensor(0)
-            # Rank 1 # input: Tensor(1)
-            output = all_reduce_max(input)
-            # Rank 0 # output: Tensor(1)
-            # Rank 1 # output: Tensor(1)
+        >>> # We execute all_reduce_max on rank 0 and rank 1
+        >>> input = F.arange(2) + 1 + 2 * rank
+        >>> input
+        Tensor([1. 2.], device=xpux:0) # Rank 0
+        Tensor([3. 4.], device=xpux:0) # Rank 1
+        >>> F.distributed.all_reduce_max(input, group=[0, 1])
+        Tensor([3. 4.], device=xpux:0) # Rank 0
+        Tensor([3. 4.], device=xpux:0) # Rank 1
+
+        >>> # We execute all_reduce_max on gpu0 with cuda stream 1
+        >>> megengine.set_default_device("gpu0")
+        >>> input = F.arange(2) + 1 + 2 * rank
+        >>> input
+        Tensor([1. 2.], device=gpu0:0) # Rank 0
+        Tensor([3. 4.], device=gpu0:0) # Rank 1
+        >>> F.distributed.all_reduce_max(input, device="gpu0:1")
+        Tensor([3. 4.], device=gpu0:0) # Rank 0
+        Tensor([3. 4.], device=gpu0:0) # Rank 1
     """
     mode = CollectiveComm.Mode.ALL_REDUCE_MAX
    return collective_comm(inp, mode, group, device)
@@ -455,31 +497,51 @@ def all_reduce_max(
 def all_reduce_min(
     inp: Tensor, group: Optional[Group] = WORLD, device: Optional[str] = None,
 ) -> Tensor:
-    r"""Reduce tensors across the specified group by min.
+    r"""Reduce tensors with min operation on each value across the specified group.
+
+    Note:
+        ``inp`` tensor must have identical shape in all processes across the group.
 
     Args:
-        inp: Input tensor.
-        group: The process group to work on.
-            The default group is WORLD which means all processes available.
-            You can use a list of process ranks to create new group to work on it, e.g. [1, 3, 5].
-        device: The specific device to execute this operator.
-            None default device means the device of inp will be used.
-            Specify "gpu0:1" to execute this operator on diffrent cuda stream,
-            1 is stream id, and default stream id is 0.
+        inp (Tensor): tensor to be reduced.
+
+    Keyword args:
+        group (Group or sequence of ints): the process group to work on. Default: ``WORLD``.
+            ``WORLD`` group selects all processes available.
+            A list of process ranks as parameter will create a new group to work on.
+        device (:attr:`.Tensor.device`): the specific device to execute this operator. Default: ``None``.
+            ``None`` will select the device of ``inp`` to execute.
+            Specially, ``GPU`` device can assign a different stream to execute
+            by adding a number right after a colon following the device name;
+            ``:0`` denotes the default stream of the GPU and is used when no stream is given.
 
     Returns:
-        Result tensor.
+        A tensor with min operation on each value across the group.
+
+        The shape of the output tensor must be the same as ``inp``, and the output
+        tensor is going to be bitwise identical in all processes across the group.
 
     Examples:
-        .. code-block::
-
-            input = Tensor(rank)
-            # Rank 0 # input: Tensor(0)
-            # Rank 1 # input: Tensor(1)
-            output = all_reduce_min(input)
-            # Rank 0 # output: Tensor(0)
-            # Rank 1 # output: Tensor(0)
+        >>> # We execute all_reduce_min on rank 0 and rank 1
+        >>> input = F.arange(2) + 1 + 2 * rank
+        >>> input
+        Tensor([1. 2.], device=xpux:0) # Rank 0
+        Tensor([3. 4.], device=xpux:0) # Rank 1
+        >>> F.distributed.all_reduce_min(input, group=[0, 1])
+        Tensor([1. 2.], device=xpux:0) # Rank 0
+        Tensor([1. 2.], device=xpux:0) # Rank 1
+
+        >>> # We execute all_reduce_min on gpu0 with cuda stream 1
+        >>> megengine.set_default_device("gpu0")
+        >>> input = F.arange(2) + 1 + 2 * rank
+        >>> input
+        Tensor([1. 2.], device=gpu0:0) # Rank 0
+        Tensor([3. 4.], device=gpu0:0) # Rank 1
+        >>> F.distributed.all_reduce_min(input, device="gpu0:1")
+        Tensor([1. 2.], device=gpu0:0) # Rank 0
+        Tensor([1. 2.], device=gpu0:0) # Rank 1
     """
     mode = CollectiveComm.Mode.ALL_REDUCE_MIN
     return collective_comm(inp, mode, group, device)
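all_reduce_max and all_reduce_min follow exactly the same call pattern as all_reduce_sum; only the reduction mode differs. A sketch under the same assumptions as the previous one (a hypothetical two-process launcher setup), showing both reductions on the inputs used in the docstrings:

# Sketch reusing the assumed launcher setup above; APIs as in MegEngine 1.x.
import megengine.distributed as dist
import megengine.functional as F


@dist.launcher(n_gpus=2)
def worker():
    rank = dist.get_rank()
    inp = F.arange(2) + 1 + 2 * rank        # rank 0: [1. 2.], rank 1: [3. 4.]
    hi = F.distributed.all_reduce_max(inp)  # both ranks receive [3. 4.]
    lo = F.distributed.all_reduce_min(inp)  # both ranks receive [1. 2.]
    # Per the docstrings, a non-default CUDA stream can be requested via the
    # device argument, e.g. device="gpu0:1" (stream 1 on gpu0; ":0" is the default stream).
    print(f"rank {rank}: max={hi.numpy()} min={lo.numpy()}")


if __name__ == "__main__":
    worker()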