Commit e732d40a (unverified)
Authored Aug 25, 2021 by Bin Lu; committed via GitHub on Aug 25, 2021

Update deephashloss.py

Parent: 9323b147

Showing 1 changed file with 58 additions and 5 deletions (+58 -5)

ppcls/loss/deephashloss.py (+58 -5) @ e732d40a
# do binarize
if self.config["Global"].get("feature_binarize") == "round":
    batch_feas = paddle.round(batch_feas).astype("float32") * 2.0 - 1.0
#copyright (c) 2021 PaddlePaddle Authors. All Rights Reserve.
#
#Licensed under the Apache License, Version 2.0 (the "License");
#you may not use this file except in compliance with the License.
#You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
#Unless required by applicable law or agreed to in writing, software
#distributed under the License is distributed on an "AS IS" BASIS,
#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#See the License for the specific language governing permissions and
#limitations under the License.

import paddle
import paddle.nn as nn


class DSHSDLoss(nn.Layer):
    """
    # DSHSD(IEEE ACCESS 2019)
    # paper [Deep Supervised Hashing Based on Stable Distribution](https://ieeexplore.ieee.org/document/8648432/)
    # [DSHSD] epoch:70, bit:48, dataset:cifar10-1, MAP:0.809, Best MAP: 0.809
    # [DSHSD] epoch:250, bit:48, dataset:nuswide_21, MAP:0.809, Best MAP: 0.815
    # [DSHSD] epoch:135, bit:48, dataset:imagenet, MAP:0.647, Best MAP: 0.647
    """

    def __init__(self, n_class, bit, alpha, multi_label=False):
        super(DSHSDLoss, self).__init__()
        self.m = 2 * bit
        self.alpha = alpha
        self.multi_label = multi_label
        self.n_class = n_class
        self.fc = paddle.nn.Linear(bit, n_class, bias_attr=False)

    def forward(self, input, label):
        feature = input["features"]
        feature = feature.tanh().astype("float32")

        dist = paddle.sum(
            paddle.square(
                (paddle.unsqueeze(feature, 1) - paddle.unsqueeze(feature, 0))),
            axis=2)

        # label to one-hot
        label = paddle.flatten(label)
        label = paddle.nn.functional.one_hot(
            label, self.n_class).astype("float32")

        s = (paddle.matmul(
            label, label, transpose_y=True) == 0).astype("float32")
        Ld = (1 - s) / 2 * dist + s / 2 * (self.m - dist).clip(min=0)
        Ld = Ld.mean()

        logits = self.fc(feature)
        if self.multi_label:
            # multiple labels classification loss
            Lc = (logits - label * logits +
                  ((1 + (-logits).exp()).log())).sum(axis=1).mean()
        else:
            # single labels classification loss
            Lc = (-paddle.nn.functional.softmax(logits).log() *
                  label).sum(axis=1).mean()

        return {"dshsdloss": Lc + Ld * self.alpha}
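For reference, DSHSDLoss expects a dict carrying a "features" tensor plus integer class labels, and returns the weighted sum of the classification term Lc and the pairwise distance term Ld. Below is a minimal sketch of how the loss might be exercised with random inputs; it is not part of this commit, and the batch size, bit width, class count, and alpha value are illustrative assumptions only.

# Illustrative sketch: shapes and hyperparameters are assumptions, not values
# taken from this commit or its configs.
import paddle
from ppcls.loss.deephashloss import DSHSDLoss

batch_size, bit, n_class = 8, 48, 10
loss_fn = DSHSDLoss(n_class=n_class, bit=bit, alpha=0.05)

features = paddle.randn([batch_size, bit])         # hash-layer outputs
labels = paddle.randint(0, n_class, [batch_size])  # integer class ids

out = loss_fn({"features": features}, labels)
print(out["dshsdloss"])                            # scalar loss tensor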
if self.config["Global"].get("feature_binarize") == "sign":
    batch_feas = paddle.sign(batch_feas).astype("float32")
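Both fragments above binarize real-valued features into ±1 hash codes, differing only in the assumed feature range. The standalone sketch below (hand-written tensors, with the surrounding config handling omitted) shows what the two modes produce; the feature-range assumptions noted in the comments are inferences, not statements from this commit.

# Sketch only: standalone tensors, no Global config; values are made up.
import paddle

# "round" mode: assumes features roughly in [0, 1] (e.g. sigmoid-like outputs);
# round() -> {0, 1}, then * 2 - 1 -> {-1, +1}
sigmoid_feas = paddle.to_tensor([[0.2, 0.7, 0.9], [0.4, 0.1, 0.6]])
round_codes = paddle.round(sigmoid_feas).astype("float32") * 2.0 - 1.0
print(round_codes.numpy())  # [[-1.  1.  1.] [-1. -1.  1.]]

# "sign" mode: assumes zero-centered features (e.g. tanh outputs);
# sign() maps negatives to -1 and positives to +1 directly
tanh_feas = paddle.to_tensor([[-0.8, 0.3, 0.9], [0.1, -0.2, -0.6]])
sign_codes = paddle.sign(tanh_feas).astype("float32")
print(sign_codes.numpy())   # [[-1.  1.  1.] [ 1. -1. -1.]]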