s920243400 / PaddleDetection
Forked from PaddlePaddle / PaddleDetection (in sync with the fork source)
Commit a13180fc
Unverified commit a13180fc, authored on Jul 08, 2021 by Wenyu; committed via GitHub on Jul 08, 2021
fix import problem of _convert_attention_mask (#3631)
Parent: c8f8a3b0
Changes: 2 changed files, with 22 additions and 4 deletions (+22 −4)

ppdet/modeling/layers.py (+21 −2)
ppdet/modeling/transformers/detr_transformer.py (+1 −2)
ppdet/modeling/layers.py @ a13180fc

@@ -31,8 +31,6 @@ from . import ops
 from .initializer import xavier_uniform_, constant_
 from paddle.vision.ops import DeformConv2D
-from paddle.nn.layer import transformer
-_convert_attention_mask = transformer._convert_attention_mask


 def _to_list(l):
...
@@ -1195,6 +1193,27 @@ class Concat(nn.Layer):
         return 'dim={}'.format(self.dim)


+def _convert_attention_mask(attn_mask, dtype):
+    """
+    Convert the attention mask to the target dtype we expect.
+    Parameters:
+        attn_mask (Tensor, optional): A tensor used in multi-head attention
+            to prevents attention to some unwanted positions, usually the
+            paddings or the subsequent positions. It is a tensor with shape
+            broadcasted to `[batch_size, n_head, sequence_length, sequence_length]`.
+            When the data type is bool, the unwanted positions have `False`
+            values and the others have `True` values. When the data type is
+            int, the unwanted positions have 0 values and the others have 1
+            values. When the data type is float, the unwanted positions have
+            `-INF` values and the others have 0 values. It can be None when
+            nothing wanted or needed to be prevented attention to. Default None.
+        dtype (VarType): The target type of `attn_mask` we expect.
+    Returns:
+        Tensor: A Tensor with shape same as input `attn_mask`, with data type `dtype`.
+    """
+    return nn.layer.transformer._convert_attention_mask(attn_mask, dtype)
+
+
 class MultiHeadAttention(nn.Layer):
     """
     Attention mapps queries and a set of key-value pairs to outputs, and
...
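The wrapper added above delegates to paddle's private helper rather than importing it at module load time. As a rough illustration of the semantics the docstring describes — a bool/int mask becomes additive, with 0 for kept positions and a large negative value for masked ones — here is a plain-NumPy sketch; the function name `convert_attention_mask` and the `-1e9` scale are assumptions for illustration, not paddle's actual internals:

```python
import numpy as np

def convert_attention_mask(attn_mask, dtype=np.float32):
    """Sketch of the mask-conversion semantics (not paddle's code).

    bool/int masks use True/1 for kept positions and False/0 for masked
    ones; the converted mask is additive: 0.0 keeps, large negative drops.
    Float masks are assumed to already be additive and are only cast.
    """
    if attn_mask is None:
        return None
    if attn_mask.dtype == np.bool_ or np.issubdtype(attn_mask.dtype, np.integer):
        # True/1 -> 0.0, False/0 -> -1e9 (the exact negative scale used
        # internally by paddle is an assumption here)
        return (attn_mask.astype(dtype) - 1.0) * 1e9
    return attn_mask.astype(dtype)

mask = np.array([True, False, True])
additive = convert_attention_mask(mask)  # middle position is masked out
```

A caller would then add `additive` to raw attention logits, so the converted values need no further branching on dtype downstream.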
ppdet/modeling/transformers/detr_transformer.py @ a13180fc

@@ -18,11 +18,10 @@ from __future__ import print_function
 import paddle
 import paddle.nn as nn
-from paddle.nn.layer.transformer import _convert_attention_mask
 import paddle.nn.functional as F

 from ppdet.core.workspace import register
-from ..layers import MultiHeadAttention
+from ..layers import MultiHeadAttention, _convert_attention_mask
 from .position_encoding import PositionEmbedding
 from .utils import *
 from ..initializer import *
...
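For context on why the mask ends up in this 0 / large-negative form: in scaled dot-product attention the converted mask is added to the attention logits before softmax, so masked positions receive (numerically) zero weight. A minimal NumPy illustration of that mechanism — this is generic attention math, not PaddleDetection code:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def masked_attention_weights(scores, additive_mask):
    """Apply an additive attention mask (0.0 = keep, -1e9 = drop) to raw
    logits, then normalize over the key axis with softmax."""
    return softmax(scores + additive_mask, axis=-1)

# One query over three keys; the last key is masked out.
scores = np.array([[2.0, 1.0, 3.0]])
mask = np.array([[0.0, 0.0, -1e9]])
weights = masked_attention_weights(scores, mask)
# weights over the two unmasked keys sum to 1; the masked key gets ~0.
```

Because the mask participates only through addition, the same code path handles "no mask" (`additive_mask` of zeros) and any broadcastable mask shape without special cases.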