magicwindyyd / mindspore (forked from MindSpore / mindspore, in sync with the fork source)
Commit a301fc17
Authored on Aug 05, 2020 by mindspore-ci-bot; committed by Gitee on Aug 05, 2020.
!3916 remove loss_scale range check which is a temp fix.
Merge pull request !3916 from xychow/remove-loss-scale-range-check
Parents: 5c7712ca, 7d31deb6
Showing 2 changed files with 27 additions and 5 deletions (+27, -5).
mindspore/core/abstract/abstract_value.cc (+18, -3)
tests/ut/python/optimizer/test_optimizer_with_loss_scale.py (+9, -2)
mindspore/core/abstract/abstract_value.cc
...
@@ -38,9 +38,24 @@ bool AbstractBase::operator==(const AbstractBase &other) const {
           << this->ToString() << ", other: " << other.ToString();
   }
-  bool value_equal = *value_ == *other.value_;
-  bool type_equal = *type_ == *other.type_;
-  bool shape_equal = *shape_ == *other.shape_;
+  bool value_equal = false;
+  if (value_ == other.value_) {
+    value_equal = true;
+  } else if (*value_ == *other.value_) {
+    value_equal = true;
+  }
+  bool type_equal = false;
+  if (type_ == other.type_) {
+    type_equal = true;
+  } else if (*type_ == *other.type_) {
+    type_equal = true;
+  }
+  bool shape_equal = false;
+  if (shape_ == other.shape_) {
+    shape_equal = true;
+  } else if (*shape_ == *other.shape_) {
+    shape_equal = true;
+  }
   return value_equal && type_equal && shape_equal;
 }
...
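As the hunk above shows, AbstractBase::operator== no longer unconditionally dereferences value_, type_ and shape_: each member is first compared directly (both sides holding the same object counts as equal), and only when that identity check fails does the code fall back to comparing the pointed-to values. A minimal sketch of the same idiom, with a hypothetical helper name and written in Python purely for illustration (this is not MindSpore code):

    def members_equal(a, b):
        # Identity first (the undereferenced comparison in the C++ hunk),
        # then fall back to value equality.
        return a is b or a == b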
tests/ut/python/optimizer/test_optimizer_with_loss_scale.py
...
@@ -276,7 +276,7 @@ def test_compile_fp16_lr_overflow_dynamic_graph():
     print("the result is ", output)

-def test_adam_compile():
+def adam_compile(loss_scale=1.0):
     inputs = Tensor(np.ones([15, 1]).astype(np.float32))
     label = Tensor(np.zeros([15, 1]).astype(np.float32))
     scaling_sens = Tensor(np.full((1), 1.0), dtype=mstype.float32)
...
@@ -284,10 +284,17 @@ def test_adam_compile():
     loss = MSELoss()
     optimizer = Adam(net.trainable_params(), learning_rate=1e-3, beta1=0.9, beta2=0.999, eps=1e-8, use_locking=False,
-                     use_nesterov=False, weight_decay=0.0, loss_scale=1.0)
+                     use_nesterov=False, weight_decay=0.0, loss_scale=loss_scale)
     net_with_loss = WithLossCell(net, loss)
     train_network = TrainOneStepWithLossScaleCell(net_with_loss, optimizer)
     train_network.set_train()
     output = train_network(inputs, label, scaling_sens)
     print("the result is ", output)

+def test_adam_compile():
+    adam_compile()
+
+def test_adam_loss_scale_compile():
+    """ test setting loss_scale to 1e-40 """
+    adam_compile(loss_scale=1e-40)
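With the range check gone, the refactored adam_compile helper exercises the same compile path for any loss_scale, and the new test_adam_loss_scale_compile checks that even loss_scale=1e-40 compiles. The following is a minimal standalone sketch along the same lines, not code from the repository: nn.Dense(1, 1) is a hypothetical stand-in for the net built in the elided part of the test, the imports are assumed to mirror what the test file already uses, and the call convention (passing scaling_sens at call time) follows the test as written at this commit.

    import numpy as np
    import mindspore.nn as nn
    import mindspore.common.dtype as mstype
    from mindspore import Tensor
    from mindspore.nn import WithLossCell, TrainOneStepWithLossScaleCell
    from mindspore.nn.optim import Adam

    # Hypothetical stand-in for the `net` constructed in the elided part of the test.
    net = nn.Dense(1, 1)

    inputs = Tensor(np.ones([15, 1]).astype(np.float32))
    label = Tensor(np.zeros([15, 1]).astype(np.float32))
    scaling_sens = Tensor(np.full((1), 1.0), dtype=mstype.float32)

    # With the range check removed, a very small loss_scale such as 1e-40 should still compile.
    optimizer = Adam(net.trainable_params(), learning_rate=1e-3, beta1=0.9, beta2=0.999, eps=1e-8,
                     use_locking=False, use_nesterov=False, weight_decay=0.0, loss_scale=1e-40)

    net_with_loss = WithLossCell(net, nn.MSELoss())
    train_network = TrainOneStepWithLossScaleCell(net_with_loss, optimizer)
    train_network.set_train()
    output = train_network(inputs, label, scaling_sens)
    print("the result is ", output)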