magicwindyyd / mindspore (forked from MindSpore / mindspore)
Commit 69574f38
Authored Jun 18, 2020 by Xiaoda Zhang
fix the bprob error of embeddinglookup
Parent 373832d0
Showing 2 changed files with 38 additions and 3 deletions:

    mindspore/ops/_grad/grad_array_ops.py                +2   -2
    tests/ut/python/parallel/test_embeddinglookup.py     +36  -1
mindspore/ops/_grad/grad_array_ops.py

@@ -203,7 +203,7 @@ def get_bprop_embedding_lookup(self):
             actual_dout = elu_grad(dout, split_num)
         else:
             actual_dout = dout
-        new_indices = host_sub(indices - offset)
+        new_indices = host_sub(indices, offset)
         # Reshape the 'new_indices'
         new_indices_shape_changed = (size_op(new_indices),)
         new_indices = host_reshape(new_indices, new_indices_shape_changed)
@@ -211,7 +211,7 @@ def get_bprop_embedding_lookup(self):
         x_shp_tail = x_shp[1:]
         actual_dout_shape_changed = new_indices_shape_changed + x_shp_tail
         actual_dout = host_reshape(actual_dout, actual_dout_shape_changed)
-        return (new_indices, actual_dout, x_shp), zeros_like(new_indices), zeros_like(axis), \
+        return (new_indices, actual_dout, x_shp), zeros_like(indices), zeros_like(offset), \
             zeros_like(reduce_scatter_flag), zeros_like(split_num)
     return bprop_sparse
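The change to this file is twofold: P.Sub is a binary primitive, so the local indices are now computed as host_sub(indices, offset) instead of host_sub(indices - offset), and the zero gradients returned by bprop_sparse now line up with the bprop's actual inputs, indices and offset, rather than with new_indices and axis. Since a bprop hands back one gradient term per forward input, the old return values are what the commit title calls the "bprob" (bprop) error. Below is a minimal, self-contained NumPy sketch of the index and gradient reshaping the sparse bprop performs; it is an illustration under assumed shapes, not MindSpore source.

# Hypothetical NumPy sketch of the fixed sparse-bprop index handling: local row ids
# are the global indices shifted by the shard offset, and the upstream gradient is
# reshaped to one row per looked-up index.
import numpy as np

def sparse_bprop_sketch(x_shape, indices, offset, dout):
    new_indices = (indices - offset).reshape(-1)    # host_sub(indices, offset) + host_reshape
    actual_dout = dout.reshape(new_indices.shape + x_shape[1:])
    return new_indices, actual_dout

ids, grads = sparse_bprop_sketch(
    x_shape=(8, 4),                          # this shard's slice of the embedding table
    indices=np.array([[9, 10], [11, 12]]),   # global row ids passed to the lookup
    offset=8,                                # first global row held by the shard
    dout=np.ones((2, 2, 4)))                 # upstream gradient, one slice per lookup
print(ids)           # [1 2 3 4]
print(grads.shape)   # (4, 4)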
tests/ut/python/parallel/test_embeddinglookup.py

@@ -16,12 +16,20 @@ import numpy as np
 import mindspore as ms
 import mindspore.nn as nn
-from mindspore import Tensor
 from mindspore.common.api import _executor
 from mindspore.ops import operations as P
+from mindspore.ops import composite as C
 from mindspore.ops.operations import _inner_ops as inner
+from mindspore import Tensor, context
 from tests.ut.python.ops.test_math_ops import VirtualLoss
 
+
+class GradWrap(nn.Cell):
+    def __init__(self, network):
+        super(GradWrap, self).__init__()
+        self.network = network
+
+    def construct(self, x, y):
+        return C.grad_all(self.network)(x, y)
+
 
 class NetWithLoss(nn.Cell):
     def __init__(self, network):
@@ -73,3 +81,30 @@ def test_embeddinglookup_reducescatter_true():
     x = Tensor(np.ones([64, 32]), dtype=ms.float32)
     y = Tensor(np.ones([8, 32, 8]), dtype=ms.float32)
     _executor.compile(net, x, y)
+
+
+def test_embeddinglookup_reducescatter_false_grad():
+    shape = [8, 8]
+    offset = 8
+    reduce_scatter_flag = False
+    split_num = 1
+    net = GradWrap(NetWithLoss(Net(shape, offset, reduce_scatter_flag, split_num)))
+    net.set_auto_parallel()
+    x = Tensor(np.ones([64, 32]), dtype=ms.float32)
+    y = Tensor(np.ones([8, 32, 8]), dtype=ms.float32)
+    _executor.compile(net, x, y)
+
+
+def test_embeddinglookup_reducescatter_true_grad():
+    context.set_context(save_graphs=True)
+    shape = [64, 8]
+    offset = 8
+    reduce_scatter_flag = True
+    split_num = 8
+    net = GradWrap(NetWithLoss(Net(shape, offset, reduce_scatter_flag, split_num)))
+    net.set_auto_parallel()
+    x = Tensor(np.ones([64, 32]), dtype=ms.float32)
+    y = Tensor(np.ones([8, 32, 8]), dtype=ms.float32)
+    _executor.compile(net, x, y)
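The two new *_grad cases mirror the existing forward-only tests but wrap the network in GradWrap, so compiling them builds the EmbeddingLookup bprop and exercises the corrected return values above. Assuming the standard pytest-based MindSpore unit-test setup, they can be run individually from the repository root, for example:

# Hypothetical invocation; assumes pytest is installed and the repository root is the
# working directory so that tests.ut.python.ops.test_math_ops is importable.
import pytest

pytest.main([
    "-q",
    "tests/ut/python/parallel/test_embeddinglookup.py::test_embeddinglookup_reducescatter_false_grad",
    "tests/ut/python/parallel/test_embeddinglookup.py::test_embeddinglookup_reducescatter_true_grad",
])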