BaiXuePrincess / Paddle (forked from PaddlePaddle / Paddle)
Commit 2105d146 (unverified)
fix api sigmoid_focal_loss to final state (#45207)
Author: wanghuancoder · Committed by GitHub on Aug 17, 2022
Parent: a79d4a75
Showing 1 changed file with 44 additions and 15 deletions:
python/paddle/nn/functional/loss.py (+44, -15)
@@ -2616,23 +2616,54 @@ def sigmoid_focal_loss(logit,
            "Expected one dimension of normalizer in sigmoid_focal_loss but got {}."
            .format(normalizer_dims))

    if _non_static_mode():
        if in_dygraph_mode():
            place = _current_expected_place()
            one = _C_ops.final_state_full(logit.shape, float(1.0), logit.dtype,
                                          place)
            loss = _C_ops.final_state_sigmoid_cross_entropy_with_logits(
                logit, label, False, -100)
            pred = _C_ops.final_state_sigmoid(logit)
            p_t = _C_ops.final_state_add(
                _C_ops.final_state_multiply(pred, label),
                _C_ops.final_state_multiply(
                    _C_ops.final_state_subtract(one, pred),
                    _C_ops.final_state_subtract(one, label)))

            alpha = fluid.dygraph.base.to_variable([alpha], dtype=loss.dtype)
            alpha_t = _C_ops.final_state_add(
                _C_ops.final_state_multiply(alpha, label),
                _C_ops.final_state_multiply(
                    _C_ops.final_state_subtract(one, alpha),
                    _C_ops.final_state_subtract(one, label)))
            loss = _C_ops.final_state_multiply(alpha_t, loss)

            gamma = fluid.dygraph.base.to_variable([gamma], dtype=loss.dtype)
            gamma_t = _C_ops.final_state_pow(
                _C_ops.elementwise_sub(one, p_t), gamma)
            loss = _C_ops.final_state_multiply(gamma_t, loss)

            if normalizer is not None:
                loss = _C_ops.final_state_divide(loss, normalizer)

            if reduction == "sum":
                return _C_ops.final_state_sum(loss, [], None, False)
            elif reduction == "mean":
                return _C_ops.final_state_mean_all(loss)
            return loss

        elif _in_legacy_dygraph():
            one = _varbase_creator(dtype=logit.dtype)
            _C_ops.fill_constant(one, 'value', float(1.0), 'force_cpu', False,
                                 'dtype', one.dtype, 'str_value', '1.0',
                                 'shape', logit.shape)
            loss = _C_ops.sigmoid_cross_entropy_with_logits(logit, label)
            pred = _C_ops.sigmoid(logit)
            p_t = _C_ops.elementwise_add(
                _C_ops.elementwise_mul(pred, label),
                _C_ops.elementwise_mul(
                    _C_ops.elementwise_sub(one, pred),
    ...
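The chain of `final_state_*` ops in the hunk above is the standard sigmoid focal loss: a sigmoid cross-entropy term, weighted by an alpha class-balance factor and a `(1 - p_t) ** gamma` focal modulation, with an optional normalizer and reduction. As a plain reference, a minimal NumPy sketch of the same computation (the helper name `sigmoid_focal_loss_ref` is hypothetical, not Paddle API):

```python
import numpy as np

def sigmoid_focal_loss_ref(logit, label, alpha=0.25, gamma=2.0,
                           normalizer=None, reduction="sum"):
    # Numerically stable sigmoid cross-entropy with logits:
    # max(x, 0) - x*z + log(1 + exp(-|x|))
    loss = np.maximum(logit, 0) - logit * label + np.log1p(np.exp(-np.abs(logit)))

    pred = 1.0 / (1.0 + np.exp(-logit))                  # sigmoid(logit)
    p_t = pred * label + (1 - pred) * (1 - label)        # prob of the true class
    alpha_t = alpha * label + (1 - alpha) * (1 - label)  # class-balance weight

    loss = alpha_t * loss
    loss = ((1 - p_t) ** gamma) * loss                   # focal modulation

    if normalizer is not None:
        loss = loss / normalizer
    if reduction == "sum":
        return loss.sum()
    elif reduction == "mean":
        return loss.mean()
    return loss
```

The op order mirrors the diff: cross-entropy first, then the `alpha_t` multiply, then the `gamma_t` multiply, then division by `normalizer` and the reduction.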
@@ -2656,8 +2687,6 @@ def sigmoid_focal_loss(logit,
         if reduction == "sum":
             return _C_ops.reduce_sum(loss, 'reduce_all', True)
         elif reduction == "mean":
-            if in_dygraph_mode():
-                return _C_ops.final_state_mean_all(loss)
             return _C_ops.mean(loss)
         return loss
    ...