Commit 3a14857b: fix dp completion (#47804)

Authored by zhaoyingli, committed via GitHub on Nov 10, 2022.
Parent commit: 831db343
Repository: BaiXuePrincess/Paddle (fork of PaddlePaddle/Paddle)

Showing 3 changed files with 27 additions and 3 deletions (+27 / -3):

- python/paddle/distributed/auto_parallel/completion.py (+12 / -1)
- python/paddle/distributed/auto_parallel/operators/dist_slice.py (+14 / -2)
- python/paddle/fluid/tests/unittests/auto_parallel/test_dist_slice.py (+1 / -0)
python/paddle/distributed/auto_parallel/completion.py

@@ -980,7 +980,7 @@ class Completer:
             # Copy the corresponding distributed attribute from graph to serial_main_program
             self._dist_context.copy_dist_attr_from_graph_to_program()
         else:
-            self._logger.info("Default data parallel will be set.")
+            self._logger.info("Default distributed attributed will be set.")
             self._dist_context.initialize(with_graph=False)
             # A fast and special completion for data parallel
             self._update_dist_attr_for_dp()
@@ -1050,6 +1050,17 @@ class Completer:
         for arg_name in serial_op.output_arg_names:
             op_dist_attr = dist_op.dist_attr
             serial_tensor = dist_op.get_serial_output(arg_name)
+            if serial_op.type in ["fill_constant"]:
+                old_dims_mapping = op_dist_attr.get_output_dims_mapping(
+                    arg_name
+                )
+                if len(old_dims_mapping) > 0:
+                    new_dims_mapping = [0] + [
+                        -1 for _ in range(len(old_dims_mapping) - 1)
+                    ]
+                    op_dist_attr.set_output_dims_mapping(
+                        arg_name, new_dims_mapping
+                    )
             dist_tensor = self._dist_context.get_dist_tensor_for_program(serial_tensor)
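The new branch assigns `fill_constant` outputs a plain data-parallel mapping: shard dimension 0 across the process mesh (mapping entry `0`) and replicate every other dimension (entry `-1`). A minimal standalone sketch of that rule, using a hypothetical helper name (`dp_dims_mapping` is not Paddle API):

```python
def dp_dims_mapping(old_dims_mapping):
    """Sketch of the added rule: shard dim 0, replicate the rest.

    In dims_mapping notation, 0 means "split along mesh dimension 0"
    and -1 means "replicated". Hypothetical helper, not Paddle API.
    """
    if len(old_dims_mapping) == 0:
        return old_dims_mapping  # 0-d outputs are left untouched
    return [0] + [-1 for _ in range(len(old_dims_mapping) - 1)]

print(dp_dims_mapping([-1, -1, -1]))  # [0, -1, -1]
print(dp_dims_mapping([]))            # []
```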
python/paddle/distributed/auto_parallel/operators/dist_slice.py

@@ -39,10 +39,16 @@ class DistributedSliceImpl(DistributedOperatorImpl):
         op_desc = dist_op.serial_op.desc
         op_dist_attr = dist_op.dist_attr
         in_name = op_desc.input('Input')[0]
+        out_name = op_desc.output('Out')[0]
+        in_var = dist_op.serial_op.block.var(in_name)
+        out_var = dist_op.serial_op.block.var(out_name)
         axes = op_desc.attr('axes')
         in_dims_mapping = op_dist_attr.get_input_dims_mapping(in_name)
         for axis in axes:
-            if is_dim_shard(in_dims_mapping[axis]):
+            if (
+                is_dim_shard(in_dims_mapping[axis])
+                and in_var.shape[axis] != out_var.shape[axis]
+            ):
                 return False

         return True
@@ -51,6 +57,8 @@ class DistributedSliceImpl(DistributedOperatorImpl):
         op_dist_attr = dist_op.dist_attr
         in_name = op_desc.input('Input')[0]
         out_name = op_desc.output('Out')[0]
+        in_var = dist_op.serial_op.block.var(in_name)
+        out_var = dist_op.serial_op.block.var(out_name)
         axes = op_desc.attr('axes')
         decrease_axis = op_desc.attr('decrease_axis')
         in_dims_mapping = op_dist_attr.get_input_dims_mapping(in_name)
@@ -67,7 +75,11 @@ class DistributedSliceImpl(DistributedOperatorImpl):
         else:
             for i in range(len(out_dims_mapping)):
                 ref_index = ref_indices[i]
-                if ref_index in axes and is_dim_shard(out_dims_mapping[i]):
+                if (
+                    ref_index in axes
+                    and is_dim_shard(out_dims_mapping[i])
+                    and in_var.shape[ref_index] != out_var.shape[ref_index]
+                ):
                     return False

         return True
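Both compatibility checks now also compare input and output shapes along each sliced axis: slicing a sharded dimension stays compatible only when the slice keeps that dimension's full extent, since otherwise a rank could need elements it does not own locally. A minimal standalone sketch of the input-side check, with `is_dim_shard` reimplemented under the assumption that any non-negative mapping entry means "sharded":

```python
def is_dim_shard(mapping_entry):
    # In a dims_mapping, -1 means replicated; >= 0 names a mesh axis.
    return mapping_entry != -1

def slice_is_compatible(in_shape, out_shape, axes, in_dims_mapping):
    """Reject a slice that shrinks any sharded axis (sketch of the new rule)."""
    for axis in axes:
        if is_dim_shard(in_dims_mapping[axis]) and in_shape[axis] != out_shape[axis]:
            return False
    return True

# x sharded on dim 0: x[:4] keeps dim 0 whole, so the slice is allowed.
print(slice_is_compatible([4, 5, 6], [4, 2, 2], [0, 1, 2], [0, -1, -1]))  # True
# x[:2] shrinks the sharded dim 0, so the slice is rejected.
print(slice_is_compatible([4, 5, 6], [2, 2, 2], [0, 1, 2], [0, -1, -1]))  # False
```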
python/paddle/fluid/tests/unittests/auto_parallel/test_dist_slice.py

@@ -32,6 +32,7 @@ def make_program_dp2():
         tmp_1 = x[:, 0, :]
         tmp_2 = x[:, :, 1]
         tmp_3 = x[:2, :2, :2]
+        tmp_3 = x[:4, :2, :2]
     return main_program, start_program
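The added test line exercises exactly this boundary: assuming `x` has batch size 4 sharded over 2 ranks on dim 0 (the dp2 setup the function name suggests), `x[:4]` keeps the sharded dimension whole, while `x[:2]` shrinks it and fails the dist_slice compatibility check. A sketch of the ownership argument under an assumed even block layout:

```python
def local_rows(global_rows, rank, world_size):
    """Rows of dim 0 owned by `rank` under an even block shard (assumed layout)."""
    per = global_rows // world_size
    return list(range(rank * per, (rank + 1) * per))

# Batch of 4 sharded over 2 ranks:
print(local_rows(4, 0, 2))  # [0, 1]
print(local_rows(4, 1, 2))  # [2, 3]
# x[:4] needs rows 0..3, and every rank already owns its part locally.
# x[:2] needs only rows 0..1, which rank 1 does not own, so slicing the
# sharded dimension is only compatible when the slice keeps its full extent.
```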