BaiXuePrincess / Paddle (in sync with fork source; forked from PaddlePaddle / Paddle)
Commit f3ce7dda (unverified)
Authored on Mar 10, 2020 by WangXi; committed via GitHub on Mar 10, 2020

Close fuse when use dgc & move DGC strategy from PE to compiler, test=develop (#22917)

Parent: cec3cfba
Showing 3 changed files with 10 additions and 13 deletions (+10 −13):

python/paddle/fluid/compiler.py (+10 −0)
python/paddle/fluid/parallel_executor.py (+0 −9)
python/paddle/fluid/tests/unittests/test_dist_base.py (+0 −4)
python/paddle/fluid/compiler.py

@@ -356,6 +356,16 @@ class CompiledProgram(object):
         if self._build_strategy.sync_batch_norm:
             self._build_strategy.enable_sequential_execution = True
 
+        if self._program is not None and self._program._enable_dgc:
+            assert use_cuda, "DGC only used under cuda"
+            assert self._build_strategy.num_trainers * len(places) > 1, \
+                "DGC is not useful for single card training"
+            assert self._build_strategy.reduce_strategy == BuildStrategy. \
+                ReduceStrategy.AllReduce, "DGC only used for AllReduce BuildStrategy"
+
+            # DGC doesn't support fuse for now, close fuse.
+            self._build_strategy.fuse_all_reduce_ops = False
+
         self._persistable_vars = []
         for node in self._graph.nodes():
             if node.is_var() and node.var() is not None and node.var().persistable() and \
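The hunk above adds DGC (Deep Gradient Compression) precondition checks to `CompiledProgram`: DGC needs CUDA, more than one card in total, the AllReduce reduce strategy, and fused all-reduce disabled. A minimal standalone sketch of that validation logic, using a hypothetical `StrategyStub` class and `apply_dgc_checks` helper (not Paddle's actual API):

```python
class StrategyStub:
    """Hypothetical stand-in for paddle.fluid.BuildStrategy."""
    def __init__(self, num_trainers=1, reduce_strategy="AllReduce"):
        self.num_trainers = num_trainers
        self.reduce_strategy = reduce_strategy
        self.fuse_all_reduce_ops = True  # fuse is on by default


def apply_dgc_checks(strategy, use_cuda, num_places):
    """Mirror the asserts the commit adds to CompiledProgram for DGC programs."""
    assert use_cuda, "DGC only used under cuda"
    assert strategy.num_trainers * num_places > 1, \
        "DGC is not useful for single card training"
    assert strategy.reduce_strategy == "AllReduce", \
        "DGC only used for AllReduce BuildStrategy"
    # DGC doesn't support fuse for now, close fuse.
    strategy.fuse_all_reduce_ops = False
    return strategy


strategy = apply_dgc_checks(StrategyStub(num_trainers=2), use_cuda=True, num_places=1)
print(strategy.fuse_all_reduce_ops)  # False: fuse was closed automatically
```

Because the compiler now flips `fuse_all_reduce_ops` itself, callers no longer have to remember to do it, which is exactly why the manual override in `test_dist_base.py` below could be deleted.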
python/paddle/fluid/parallel_executor.py

@@ -175,15 +175,6 @@ class ParallelExecutor(object):
             ) if use_cuda else framework.cpu_places()
         self._scope = scope if scope is not None else executor.global_scope()
 
-        if main_program is not None and main_program._enable_dgc:
-            assert build_strategy.num_trainers > 1, "dgc is not useful when num_trainers <= 1"
-            assert build_strategy.reduce_strategy == BuildStrategy. \
-                ReduceStrategy.AllReduce, "dgc only used for allreduce"
-            assert build_strategy.num_trainers * len(self._places) > 1, \
-                "dgc is not useful for single card training"
-            assert use_cuda, "dgc only used under cuda"
-
         main_program = main_program if main_program is not None \
             else framework.default_main_program()
python/paddle/fluid/tests/unittests/test_dist_base.py

@@ -334,10 +334,6 @@ class TestDistRunnerBase(object):
         build_stra.num_trainers = 1
         build_stra.trainer_id = 0
 
-        if args.use_dgc:
-            # fuse_all_reduce_ops require that gradients should not be sparse types
-            build_stra.fuse_all_reduce_ops = False
-
         print_to_err(type(self).__name__, "begin to compile with data parallel")
         binary = compiler.CompiledProgram(trainer_prog).with_data_parallel(
             loss_name=avg_cost.name,
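Both removed assertion blocks and the new compiler-side check share one condition: DGC only pays off when the total number of gradient-exchanging devices, trainers times places, exceeds one. A hypothetical helper (`dgc_is_useful` is an illustrative name, not a Paddle function) makes the arithmetic explicit:

```python
def dgc_is_useful(num_trainers, num_places):
    """DGC compresses gradient communication, so it only helps when there
    is more than one device exchanging gradients in total."""
    return num_trainers * num_places > 1


print(dgc_is_useful(1, 1))  # False: single card, nothing to communicate
print(dgc_is_useful(1, 4))  # True: one trainer process, four local GPUs
print(dgc_is_useful(2, 1))  # True: two trainer processes, one GPU each
```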