Unverified commit 1c8531ce
BaiXuePrincess / Paddle, forked from PaddlePaddle / Paddle
Authored Jan 13, 2023 by wuhuachaocoding; committed by GitHub, Jan 13, 2023

fix a bug of stage2 offload. (#49767)

Parent: d58cca9e
Showing 2 changed files with 28 additions and 3 deletions (+28 −3):

python/paddle/distributed/fleet/meta_parallel/sharding/group_sharded_optimizer_stage2.py (+6 −0)
python/paddle/fluid/tests/unittests/collective/fleet/dygraph_group_sharded_stage2_offload.py (+22 −3)
python/paddle/distributed/fleet/meta_parallel/sharding/group_sharded_optimizer_stage2.py

@@ -149,6 +149,11 @@ class GroupShardedOptimizerStage2(Optimizer):
         self._rank = self._group.rank
         self._global_root_rank = self._group.ranks[0]
 
+        if self._dp_group is not None and self._dp_group.nranks > 1:
+            assert (
+                not offload
+            ), "Not support! when using offload with sharding stage2, please use pure sharding stage2, exclude data parallel."
+
         # Synchronous all ranks models
         if pertrain_sync_models:
             self._sync_params_and_buffers()

@@ -164,6 +169,7 @@ class GroupShardedOptimizerStage2(Optimizer):
             if (
                 hcg
                 and hcg.get_parallel_mode() is not ParallelMode.DATA_PARALLEL
                 and not offload
             ):
                 self._optim._grad_clip = HybridParallelClipGrad(
                     self._optim._grad_clip, hcg
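In plain terms, this file now fails fast on an unsupported configuration: stage2 offload combined with an extra multi-rank data-parallel group. It also stops wrapping an offloaded optimizer's grad clip in HybridParallelClipGrad. Below is a minimal, runnable sketch of just the new guard, for illustration only; the class name and the SimpleNamespace stand-in for the group object are our own, not Paddle's API.

from types import SimpleNamespace

class Stage2OffloadGuardSketch:
    """Distilled illustration of the check this commit adds (not Paddle's class)."""

    def __init__(self, dp_group=None, offload=False):
        # Mirrors the new guard: a data-parallel group with more than one
        # rank is incompatible with offload under sharding stage2.
        if dp_group is not None and dp_group.nranks > 1:
            assert (
                not offload
            ), "Not support! when using offload with sharding stage2, please use pure sharding stage2, exclude data parallel."
        self._offload = offload

# Rejected: offload together with a multi-rank data-parallel group.
try:
    Stage2OffloadGuardSketch(dp_group=SimpleNamespace(nranks=2), offload=True)
except AssertionError as e:
    print("rejected:", e)

# Accepted: pure sharding stage2 (no dp_group) with offload.
Stage2OffloadGuardSketch(dp_group=None, offload=True)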
python/paddle/fluid/tests/unittests/collective/fleet/dygraph_group_sharded_stage2_offload.py

@@ -37,17 +37,29 @@ np.random.seed(seed)
 paddle.seed(seed)
 
 
-def train_mlp(model, offload=False):
+def train_mlp(model, offload=False, test=False):
     optimizer = optimizer_setting(model=model, use_pure_fp16=True)
 
     model = paddle.amp.decorate(models=model, level='O2', save_dtype='float32')
     scaler = paddle.amp.GradScaler(init_loss_scaling=1024)
     scaler = GroupShardedScaler(scaler)
 
+    dp_group = (
+        None
+        if not test
+        else paddle.distributed.new_group(
+            list(range(paddle.distributed.get_world_size()))
+        )
+    )
     optimizer = GroupShardedOptimizerStage2(
-        params=optimizer._parameter_list, optim=optimizer, offload=offload
+        params=optimizer._parameter_list,
+        optim=optimizer,
+        offload=offload,
+        dp_group=dp_group,
     )
-    model = GroupShardedStage2(model, optimizer, buffer_max_size=2**21)
+    model = GroupShardedStage2(
+        model, optimizer, buffer_max_size=2**21, dp_group=dp_group
+    )
 
     paddle.seed(2023)
     np.random.seed(2023)

@@ -103,6 +115,13 @@ def test_sharding_stage2_offload():
         rtol=5e-3,
         atol=5e-3,
     )
+
+    # just to test assert error for the rate of coverage
+    try:
+        train_mlp(mlp_offload, offload=True, test=True)
+    except Exception as e:
+        assert isinstance(e, AssertionError)
+
     return
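The test change adds a test flag to train_mlp: when set, it builds a communication group spanning every rank, so the optimizer's _dp_group has nranks > 1 and the new assertion fires; the try/except in test_sharding_stage2_offload then checks that the failure is specifically an AssertionError. Note that this error path is only meaningful when the script runs with more than one rank. A small sketch of that precondition follows, with a typical multi-GPU launch line in the comment; the launch flags are standard paddle.distributed.launch usage, not something this commit adds, and the helper function is our own.

# A typical launch for such collective tests would be something like:
#
#   python -m paddle.distributed.launch --gpus=0,1 \
#       dygraph_group_sharded_stage2_offload.py
import paddle.distributed as dist

def should_exercise_error_path() -> bool:
    # Mirrors the test's implicit assumption: a group built over all ranks
    # only has nranks > 1 (and thus trips the guard) when world_size > 1.
    return dist.get_world_size() > 1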