PaddlePaddle / PaddleClas
Commit f6c5625f
Authored July 05, 2021 by cuicheng01

Fix LeVit export_model bugs

Parent: 77557082
Showing 1 changed file with 22 additions and 15 deletions:

ppcls/arch/backbone/model_zoo/levit.py (+22, -15)

--- a/ppcls/arch/backbone/model_zoo/levit.py
+++ b/ppcls/arch/backbone/model_zoo/levit.py
@@ -45,12 +45,13 @@ __all__ = list(MODEL_URLS.keys())
 def cal_attention_biases(attention_biases, attention_bias_idxs):
     gather_list = []
     attention_bias_t = paddle.transpose(attention_biases, (1, 0))
-    for idx in attention_bias_idxs:
-        gather = paddle.gather(attention_bias_t, idx)
+    nums = attention_bias_idxs.shape[0]
+    for idx in range(nums):
+        gather = paddle.gather(attention_bias_t, attention_bias_idxs[idx])
         gather_list.append(gather)
     shape0, shape1 = attention_bias_idxs.shape
-    return paddle.transpose(
-        paddle.concat(gather_list), (1, 0)).reshape((0, shape0, shape1))
+    gather = paddle.concat(gather_list)
+    return paddle.transpose(gather, (1, 0)).reshape((0, shape0, shape1))


 class Conv2d_BN(nn.Sequential):
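The rewritten loop above iterates over a fixed integer `nums` instead of iterating a tensor directly, which the static-graph exporter can trace. The result is equivalent to plain integer fancy indexing on the bias table; a minimal NumPy sketch of the same gather-transpose-reshape logic (shapes and values are illustrative, not from the commit):

```python
import numpy as np

def cal_attention_biases_np(attention_biases, attention_bias_idxs):
    # attention_biases: (num_heads, num_points); attention_bias_idxs: int map
    bias_t = attention_biases.T                        # (num_points, num_heads)
    gather_list = [bias_t[attention_bias_idxs[i]]      # gather one row of idxs
                   for i in range(attention_bias_idxs.shape[0])]
    shape0, shape1 = attention_bias_idxs.shape
    gathered = np.concatenate(gather_list)             # (shape0*shape1, heads)
    return gathered.T.reshape(-1, shape0, shape1)      # (heads, shape0, shape1)

biases = np.arange(12, dtype=np.float32).reshape(2, 6)  # 2 heads, 6 offsets
idxs = np.array([[0, 1], [2, 3]])
out = cal_attention_biases_np(biases, idxs)
```

The loop-and-concat form produces exactly `biases[:, idxs]`; the loop exists only so each gather has a statically known index tensor.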
@@ -127,11 +128,12 @@ class Residual(nn.Layer):

     def forward(self, x):
         if self.training and self.drop > 0:
-            return x + self.m(x) * paddle.rand(
-                x.size(0), 1, 1,
-                device=x.device).ge_(self.drop).div(1 - self.drop).detach()
+            return paddle.add(x,
+                              self.m(x) * paddle.rand(
+                                  x.size(0), 1, 1, device=x.device).ge_(
+                                      self.drop).div(1 - self.drop).detach())
         else:
-            return x + self.m(x)
+            return paddle.add(x, self.m(x))


 class Attention(nn.Layer):
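`Residual.forward` implements a per-sample stochastic-depth residual: during training each sample's branch output is kept with probability `1 - drop` and rescaled so the expectation matches inference. A hedged NumPy sketch of that behavior (function name and shapes are my own, not from the commit):

```python
import numpy as np

def residual_drop_path(x, branch_out, drop, training, rng):
    # Per-sample drop-path: keep the branch with prob (1 - drop), rescale
    # survivors by 1/(1 - drop) so E[output] equals the inference output.
    if training and drop > 0:
        keep = (rng.random((x.shape[0], 1, 1)) >= drop).astype(x.dtype)
        return x + branch_out * keep / (1 - drop)
    return x + branch_out  # inference: plain residual add

rng = np.random.default_rng(0)
x = np.ones((4, 3, 3))
out = residual_drop_path(x, np.ones_like(x), drop=0.5, training=False, rng=rng)
train_out = residual_drop_path(x, np.ones_like(x), drop=0.5, training=True,
                               rng=rng)
```

In training mode each sample is either `x` (branch dropped) or `x + 2 * branch` (kept and rescaled by `1 / (1 - 0.5)`).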
@@ -203,9 +205,9 @@ class Attention(nn.Layer):
                 self.attention_bias_idxs)
         else:
             attention_biases = self.ab
-        attn = ((q @ k_transpose) * self.scale + attention_biases)
+        attn = (paddle.matmul(q, k_transpose) * self.scale + attention_biases)
         attn = F.softmax(attn)
-        x = paddle.transpose(attn @ v, perm=[0, 2, 1, 3])
+        x = paddle.transpose(paddle.matmul(attn, v), perm=[0, 2, 1, 3])
         x = paddle.reshape(x, [B, N, self.dh])
         x = self.proj(x)
         return x
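This hunk swaps the `@` operator for an explicit `matmul` call; both compute the same batched matrix product, and the change only affects how the operation is traced for export. A NumPy sketch of that equivalence (shapes are illustrative):

```python
import numpy as np

# `a @ b` and np.matmul(a, b) are the same batched matrix product; the
# commit replaces the operator form with the function form for export.
rng = np.random.default_rng(0)
q = rng.standard_normal((2, 4, 8, 16))            # (B, heads, N, key_dim)
k_transpose = rng.standard_normal((2, 4, 16, 8))  # (B, heads, key_dim, N)

attn_op = q @ k_transpose
attn_fn = np.matmul(q, k_transpose)
```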
@@ -219,8 +221,11 @@ class Subsample(nn.Layer):

     def forward(self, x):
         B, N, C = x.shape
-        x = paddle.reshape(x, [B, self.resolution, self.resolution,
-                               C])[:, ::self.stride, ::self.stride]
+        #x = paddle.reshape(x, [B, self.resolution, self.resolution,
+        #                       C])[:, ::self.stride, ::self.stride]
+        x = paddle.reshape(x, [B, self.resolution, self.resolution, C])
+        end1, end2 = x.shape[1], x.shape[2]
+        x = x[:, 0:end1:self.stride, 0:end2:self.stride]
         x = paddle.reshape(x, [B, -1, C])
         return x
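The `Subsample` fix replaces open-ended `::stride` slices with explicit `0:end:stride` bounds taken from the tensor's shape, a form the exporter can resolve. The two slicings select the same elements; a NumPy sketch (shapes are illustrative):

```python
import numpy as np

# A (B, resolution, resolution, C) feature map, mirroring Subsample.forward.
x = np.arange(2 * 4 * 4 * 3).reshape(2, 4, 4, 3)
stride = 2

implicit = x[:, ::stride, ::stride]              # original open-ended slicing
end1, end2 = x.shape[1], x.shape[2]
explicit = x[:, 0:end1:stride, 0:end2:stride]    # export-friendly form
```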
@@ -315,13 +320,14 @@ class AttentionSubsample(nn.Layer):
         else:
             attention_biases = self.ab

-        attn = (q @ paddle.transpose(
-            k, perm=[0, 1, 3, 2])) * self.scale + attention_biases
+        attn = (paddle.matmul(
+            q, paddle.transpose(
+                k, perm=[0, 1, 3, 2]))) * self.scale + attention_biases
         attn = F.softmax(attn)

         x = paddle.reshape(
             paddle.transpose(
-                (attn @ v), perm=[0, 2, 1, 3]), [B, -1, self.dh])
+                paddle.matmul(attn, v), perm=[0, 2, 1, 3]), [B, -1, self.dh])
         x = self.proj(x)
         return x
@@ -422,6 +428,7 @@ class LeViT(nn.Layer):
         x = paddle.transpose(x, perm=[0, 2, 1])
         x = self.blocks(x)
         x = x.mean(1)
+        x = paddle.reshape(x, [-1, 384])
         if self.distillation:
             x = self.head(x), self.head_dist(x)
             if not self.training: