PaddlePaddle / DeepSpeech
Commit ed80b0e2
Authored August 30, 2022 by tianhao zhang
fix multigpu training test=asr
Parent: 733ec7f2
Showing 5 changed files with 39 additions and 30 deletions.
paddlespeech/s2t/models/u2/u2.py  (+2, -2)
paddlespeech/s2t/modules/attention.py  (+20, -15)
paddlespeech/s2t/modules/conformer_convolution.py  (+6, -4)
paddlespeech/s2t/modules/encoder.py  (+3, -3)
paddlespeech/s2t/modules/encoder_layer.py  (+8, -6)
paddlespeech/s2t/models/u2/u2.py

@@ -605,8 +605,8 @@ class U2BaseModel(ASRInterface, nn.Layer):
             xs: paddle.Tensor,
             offset: int,
             required_cache_size: int,
-            att_cache: paddle.Tensor,
-            cnn_cache: paddle.Tensor,
+            att_cache: paddle.Tensor,  # paddle.zeros([0, 0, 0, 0])
+            cnn_cache: paddle.Tensor,  # paddle.zeros([0, 0, 0, 0])
     ) -> Tuple[paddle.Tensor, paddle.Tensor, paddle.Tensor]:
         """ Export interface for c++ call, give input chunk xs, and return
             output from time 0 to current chunk.
...
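The comments added above document the placeholder tensors a caller is expected to pass for att_cache and cnn_cache on the first chunk. A minimal sketch of constructing them, copied from those comments (the variable names are illustrative, not from the source):

    import paddle

    # Empty caches for the first decoded chunk, as documented in the new comments.
    att_cache = paddle.zeros([0, 0, 0, 0])  # no attention key/value cached yet
    cnn_cache = paddle.zeros([0, 0, 0, 0])  # no convolution context cached yet

    print(att_cache.shape, cnn_cache.shape)  # [0, 0, 0, 0] [0, 0, 0, 0]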
paddlespeech/s2t/modules/attention.py

@@ -86,7 +86,8 @@ class MultiHeadedAttention(nn.Layer):
             self,
             value: paddle.Tensor,
             scores: paddle.Tensor,
-            mask: paddle.Tensor,
+            mask: paddle.Tensor,  # paddle.ones([0, 0, 0], dtype=paddle.bool)
     ) -> paddle.Tensor:
         """Compute attention context vector.
         Args:
             value (paddle.Tensor): Transformed value, size
...

@@ -126,13 +127,15 @@ class MultiHeadedAttention(nn.Layer):
         return self.linear_out(x)  # (batch, time1, d_model)

-    def forward(self,
-                query: paddle.Tensor,
-                key: paddle.Tensor,
-                value: paddle.Tensor,
-                mask: paddle.Tensor,
-                pos_emb: paddle.Tensor,
-                cache: paddle.Tensor) -> Tuple[paddle.Tensor, paddle.Tensor]:
+    def forward(
+            self,
+            query: paddle.Tensor,
+            key: paddle.Tensor,
+            value: paddle.Tensor,
+            mask: paddle.Tensor,  # paddle.ones([0,0,0], dtype=paddle.bool)
+            pos_emb: paddle.Tensor,  # paddle.empty([0])
+            cache: paddle.Tensor  # paddle.zeros([0,0,0,0])
+    ) -> Tuple[paddle.Tensor, paddle.Tensor]:
         """Compute scaled dot product attention.
         Args:
             query (paddle.Tensor): Query tensor (#batch, time1, size).
...

@@ -241,13 +244,15 @@ class RelPositionMultiHeadedAttention(MultiHeadedAttention):
         return x

-    def forward(self,
-                query: paddle.Tensor,
-                key: paddle.Tensor,
-                value: paddle.Tensor,
-                mask: paddle.Tensor,
-                pos_emb: paddle.Tensor,
-                cache: paddle.Tensor) -> Tuple[paddle.Tensor, paddle.Tensor]:
+    def forward(
+            self,
+            query: paddle.Tensor,
+            key: paddle.Tensor,
+            value: paddle.Tensor,
+            mask: paddle.Tensor,  # paddle.ones([0,0,0], dtype=paddle.bool)
+            pos_emb: paddle.Tensor,  # paddle.empty([0])
+            cache: paddle.Tensor  # paddle.zeros([0,0,0,0])
+    ) -> Tuple[paddle.Tensor, paddle.Tensor]:
         """Compute 'Scaled Dot Product Attention' with rel. positional encoding.
         Args:
             query (paddle.Tensor): Query tensor (#batch, time1, size).
...
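The hunks above add comments documenting the placeholder arguments for mask, pos_emb and cache in MultiHeadedAttention.forward and its RelPosition variant. A hedged usage sketch follows, assuming a constructor of the form MultiHeadedAttention(n_head, n_feat, dropout_rate) and illustrative tensor sizes (neither appears in the diff); the placeholder values themselves are taken from the comments:

    import paddle
    from paddlespeech.s2t.modules.attention import MultiHeadedAttention

    # Assumed constructor arguments; the hunks above only show forward()'s signature.
    attn = MultiHeadedAttention(n_head=4, n_feat=256, dropout_rate=0.0)

    q = paddle.randn([1, 10, 256])  # (#batch, time1, size), sizes are illustrative
    k = paddle.randn([1, 10, 256])
    v = paddle.randn([1, 10, 256])

    # Placeholders copied from the comments added in the diff: an "empty" mask,
    # an empty positional embedding and an empty cache.
    mask = paddle.ones([0, 0, 0], dtype=paddle.bool)
    pos_emb = paddle.empty([0])
    cache = paddle.zeros([0, 0, 0, 0])

    out, new_cache = attn(q, k, v, mask, pos_emb, cache)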
paddlespeech/s2t/modules/conformer_convolution.py

@@ -105,10 +105,12 @@ class ConvolutionModule(nn.Layer):
         )
         self.activation = activation

-    def forward(self,
-                x: paddle.Tensor,
-                mask_pad: paddle.Tensor,
-                cache: paddle.Tensor) -> Tuple[paddle.Tensor, paddle.Tensor]:
+    def forward(
+            self,
+            x: paddle.Tensor,
+            mask_pad: paddle.Tensor,  # paddle.ones([0,0,0], dtype=paddle.bool)
+            cache: paddle.Tensor  # paddle.zeros([0,0,0,0])
+    ) -> Tuple[paddle.Tensor, paddle.Tensor]:
         """Compute convolution module.
         Args:
             x (paddle.Tensor): Input tensor (#batch, time, channels).
...
paddlespeech/s2t/modules/encoder.py

@@ -190,9 +190,9 @@ class BaseEncoder(nn.Layer):
             xs: paddle.Tensor,
             offset: int,
             required_cache_size: int,
-            att_cache: paddle.Tensor,
-            cnn_cache: paddle.Tensor,
-            att_mask: paddle.Tensor,
+            att_cache: paddle.Tensor,  # paddle.zeros([0,0,0,0])
+            cnn_cache: paddle.Tensor,  # paddle.zeros([0,0,0,0]),
+            att_mask: paddle.Tensor,  # paddle.ones([0,0,0], dtype=paddle.bool)
     ) -> Tuple[paddle.Tensor, paddle.Tensor, paddle.Tensor]:
         """ Forward just one chunk
         Args:
...
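The encoder hunk documents the same placeholders for the streaming chunk interface of BaseEncoder (encoder.py:190). A hedged sketch of threading that state through successive chunks; the method name forward_chunk and the order of the returned tuple are assumptions, while the parameter list and placeholder values come from the diff:

    import paddle

    def stream_chunks(encoder, chunks, required_cache_size: int = -1):
        """Hypothetical streaming loop; `encoder` and `chunks` are assumed inputs."""
        offset = 0
        att_cache = paddle.zeros([0, 0, 0, 0])                # per the diff comments
        cnn_cache = paddle.zeros([0, 0, 0, 0])
        att_mask = paddle.ones([0, 0, 0], dtype=paddle.bool)
        outputs = []
        for xs in chunks:
            # Assumed return order: (output, updated att_cache, updated cnn_cache).
            ys, att_cache, cnn_cache = encoder.forward_chunk(
                xs, offset, required_cache_size, att_cache, cnn_cache, att_mask)
            offset += ys.shape[1]
            outputs.append(ys)
        return paddle.concat(outputs, axis=1)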
paddlespeech/s2t/modules/encoder_layer.py

@@ -76,9 +76,10 @@ class TransformerEncoderLayer(nn.Layer):
             x: paddle.Tensor,
             mask: paddle.Tensor,
             pos_emb: paddle.Tensor,
-            mask_pad: paddle.Tensor,
-            att_cache: paddle.Tensor,
-            cnn_cache: paddle.Tensor,
+            mask_pad: paddle.Tensor,  # paddle.ones([0, 0, 0], dtype=paddle.bool)
+            att_cache: paddle.Tensor,  # paddle.zeros([0, 0, 0, 0])
+            cnn_cache: paddle.Tensor,  # paddle.zeros([0, 0, 0, 0])
     ) -> Tuple[paddle.Tensor, paddle.Tensor, paddle.Tensor, paddle.Tensor]:
         """Compute encoded features.
         Args:
...

@@ -194,9 +195,10 @@ class ConformerEncoderLayer(nn.Layer):
             x: paddle.Tensor,
             mask: paddle.Tensor,
             pos_emb: paddle.Tensor,
-            mask_pad: paddle.Tensor,
-            att_cache: paddle.Tensor,
-            cnn_cache: paddle.Tensor,
+            mask_pad: paddle.Tensor,  # paddle.ones([0, 0, 0], dtype=paddle.bool)
+            att_cache: paddle.Tensor,  # paddle.zeros([0, 0, 0, 0])
+            cnn_cache: paddle.Tensor,  # paddle.zeros([0, 0, 0, 0])
     ) -> Tuple[paddle.Tensor, paddle.Tensor, paddle.Tensor, paddle.Tensor]:
         """Compute encoded features.
         Args:
...