MegEngine 天元 / MegEngine

Commit 1a24fb29
Authored: Sep 22, 2020
Author: Megvii Engine Team
perf(mge/allreduce): put allreduce on another cuda stream
GitOrigin-RevId: 2e778dfa0444ac2c2870b9dcfa72cfe7271fbc1a
Parent: 4a5e3170

Showing 2 changed files with 4 additions and 1 deletion (+4 −1):

  imperative/python/megengine/distributed/helper.py    +3 −0
  imperative/python/megengine/functional/param_pack.py +1 −1
imperative/python/megengine/distributed/helper.py

@@ -88,6 +88,7 @@ class AllreduceCallback:
         self._futures_dict = dict()
         self._packing_list = defaultdict(list)
         self._packing_size = defaultdict(int)
+        self._grad_origin_device = dict()
 
     def _pack(self, dtype):
         grad_list = [self._gradients_dict[p] for p in self._packing_list[dtype]]
@@ -109,6 +110,7 @@ class AllreduceCallback:
         self._params.append(param)
         self._futures_dict[param] = TensorFuture(ack=False)
         self._gradients_dict[param] = grad
+        self._grad_origin_device[param] = str(grad.device)
         dtype_str = str(np.dtype(param.dtype))
         dtype_size = np.dtype(param.dtype).itemsize
@@ -123,6 +125,7 @@ class AllreduceCallback:
             self._pack(dtype)
         for param in self._params:
             grad = self._gradients_dict[param]
+            grad = copy(grad, self._grad_origin_device[param])
             self._futures_dict[param].set(grad)
         self._reset()
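The helper.py change records each gradient's original device before packing, then copies the reduced result back to that device, so the allreduce itself can run on a separate comp node (another cuda stream) without changing where consumers see the gradient. A minimal sketch of that bookkeeping, using plain Python objects — `FakeTensor`, `copy_to`, and `AllreduceSketch` are hypothetical stand-ins for MegEngine's tensor, `copy`, and `AllreduceCallback`:

```python
class FakeTensor:
    """Hypothetical stand-in for a tensor: a value plus a device tag."""
    def __init__(self, value, device):
        self.value = value
        self.device = device

def copy_to(t, device):
    # stand-in for copying a tensor onto a given device
    return FakeTensor(t.value, device)

class AllreduceSketch:
    def __init__(self, reduce_device="gpu0:1"):
        # reduce_device models the dedicated comp node / cuda stream
        self._grad_origin_device = dict()
        self._reduce_device = reduce_device
        self._names = []

    def add_grad(self, name, grad):
        # remember where each gradient originally lived
        self._names.append(name)
        self._grad_origin_device[name] = str(grad.device)

    def flush(self, all_workers_grads):
        """all_workers_grads: one {name: value} dict per worker."""
        out = {}
        for name in self._names:
            # "allreduce-sum" on the dedicated device
            reduced = FakeTensor(
                sum(w[name] for w in all_workers_grads), self._reduce_device
            )
            # copy the result back to the gradient's origin device
            out[name] = copy_to(reduced, self._grad_origin_device[name])
        return out
```

The copy-back step is what keeps the optimization transparent: downstream code still receives gradients on the device it handed them in on.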
imperative/python/megengine/functional/param_pack.py

@@ -27,7 +27,7 @@ def pack_allreduce_split(pack_list, shapes, group, reduce_method):
     offsets_val = get_offsets(shapes)
     offsets = Tensor(offsets_val)
     packed_grads = param_pack_concat(pack_list, offsets, offsets_val)
-    packed_grads = all_reduce_sum(packed_grads, group)
+    packed_grads = all_reduce_sum(packed_grads, group, group.comp_node)
     if reduce_method == "mean":
         packed_grads /= group.size
     grads = param_pack_split(packed_grads, offsets_val, shapes)
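`pack_allreduce_split` fuses many small gradients into one flat buffer so a single collective covers all of them; the patch additionally directs `all_reduce_sum` to execute on `group.comp_node`, i.e. the dedicated cuda stream. The packing arithmetic itself can be sketched in plain NumPy — `get_offsets` below is a simplified stand-in for MegEngine's version, and the "allreduce" is simulated by summing over a list of workers:

```python
import numpy as np

def get_offsets(shapes):
    # [start, end) offset of each flattened tensor inside the packed buffer
    offsets, acc = [], 0
    for shape in shapes:
        n = int(np.prod(shape))
        offsets.append((acc, acc + n))
        acc += n
    return offsets

def pack_allreduce_split(grad_lists, shapes, reduce_method="mean"):
    """grad_lists: one list of gradients per simulated worker."""
    offsets = get_offsets(shapes)
    # pack: flatten and concatenate each worker's gradients into one buffer
    packed = [np.concatenate([g.ravel() for g in grads]) for grads in grad_lists]
    # allreduce-sum across workers: one fused collective instead of many small ones
    reduced = np.sum(packed, axis=0)
    if reduce_method == "mean":
        reduced /= len(grad_lists)
    # split: slice the buffer back into the original shapes
    return [reduced[a:b].reshape(s) for (a, b), s in zip(offsets, shapes)]
```

The real implementation does the same concat/reduce/split dance with `param_pack_concat`, `all_reduce_sum`, and `param_pack_split` on device tensors; the only behavioral change in this commit is where the reduction runs.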