PaddlePaddle / DeepSpeech
Commit f866059b
Authored Jan 14, 2022 by Junkun

config and formalize

Parent: 43aad7a0
Showing 3 changed files with 20 additions and 12 deletions (+20 -12):

  examples/ted_en_zh/st0/conf/tuning/decode.yaml  (+2 -1)
  examples/ted_en_zh/st1/conf/tuning/decode.yaml  (+2 -1)
  paddlespeech/s2t/models/u2_st/u2_st.py          (+16 -10)
examples/ted_en_zh/st0/conf/tuning/decode.yaml:

-batch_size: 5
+batch_size: 1
 error_rate_type: char-bleu
 decoding_method: fullsentence  # 'fullsentence', 'simultaneous'
 beam_size: 10
 word_reward: 0.7
 maxlen_ratio: 0.3
 decoding_chunk_size: -1  # decoding chunk size. Defaults to -1.
                          # <0: for decoding, use full chunk.
                          # >0: for decoding, use fixed chunk size as set.
...
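As a rough guide to the tuning knobs in this config, here is a minimal sketch (plain Python, not PaddleSpeech code; the function names are hypothetical) of how `maxlen_ratio` and `word_reward` conventionally enter an attention-based translation beam search: the ratio caps the output length as a fraction of the encoder frames, and the reward adds a per-token bonus to offset the bias of log-probability scores toward short outputs.

```python
def max_decode_len(num_encoder_frames: int, maxlen_ratio: float) -> int:
    """Cap the decoded sequence length as a fraction of the encoder frames."""
    return max(1, int(num_encoder_frames * maxlen_ratio))

def rescore(log_prob: float, num_tokens: int, word_reward: float) -> float:
    """Add a per-token reward so longer hypotheses are not unfairly penalized."""
    return log_prob + word_reward * num_tokens

# With the values from decode.yaml (maxlen_ratio: 0.3, word_reward: 0.7):
print(max_decode_len(100, 0.3))  # 30
print(rescore(-12.0, 5, 0.7))    # -8.5
```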
examples/ted_en_zh/st1/conf/tuning/decode.yaml:

-batch_size: 5
+batch_size: 1
 error_rate_type: char-bleu
 decoding_method: fullsentence  # 'fullsentence', 'simultaneous'
 beam_size: 10
 word_reward: 0.7
 maxlen_ratio: 0.3
 decoding_chunk_size: -1  # decoding chunk size. Defaults to -1.
                          # <0: for decoding, use full chunk.
                          # >0: for decoding, use fixed chunk size as set.
...
paddlespeech/s2t/models/u2_st/u2_st.py:

...
@@ -310,7 +310,12 @@ class U2STBaseModel(nn.Layer):
             ys = paddle.ones((len(hyps), i), dtype=paddle.long)

             if hyps[0]["cache"] is not None:
-                cache = [paddle.ones((len(hyps), i - 1, hyps[0]["cache"][0].shape[-1]), dtype=paddle.float32) for _ in range(len(hyps[0]["cache"]))]
+                cache = [
+                    paddle.ones(
+                        (len(hyps), i - 1, hyp_cache.shape[-1]),
+                        dtype=paddle.float32)
+                    for hyp_cache in hyps[0]["cache"]
+                ]
             for j, hyp in enumerate(hyps):
                 ys[j, :] = paddle.to_tensor(hyp["yseq"])
             if hyps[0]["cache"] is not None:
...
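The rewritten comprehension above changes how the per-layer decoder cache is preallocated: the old code reused layer 0's feature width (`hyps[0]["cache"][0].shape[-1]`) for every layer, while the new code takes each layer's own `shape[-1]`. A minimal stand-in sketch (plain Python objects in place of paddle tensors; the two-layer widths 256 and 512 are made-up values chosen only to make the difference visible):

```python
class FakeTensor:
    """Stand-in for a paddle tensor: carries only a .shape tuple."""
    def __init__(self, shape):
        self.shape = shape

def ones(shape):
    return FakeTensor(shape)

# One hypothesis whose cache has two decoder layers with different widths.
hyps = [{"cache": [FakeTensor((1, 256)), FakeTensor((1, 512))]}]
i = 4  # current decoding step

# Old comprehension: every layer gets layer 0's last dim (256).
old_cache = [ones((len(hyps), i - 1, hyps[0]["cache"][0].shape[-1]))
             for _ in range(len(hyps[0]["cache"]))]

# New comprehension: each layer keeps its own last dim.
new_cache = [ones((len(hyps), i - 1, hyp_cache.shape[-1]))
             for hyp_cache in hyps[0]["cache"]]

print([c.shape for c in old_cache])  # [(1, 3, 256), (1, 3, 256)]
print([c.shape for c in new_cache])  # [(1, 3, 256), (1, 3, 512)]
```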
@@ -319,17 +324,18 @@ class U2STBaseModel(nn.Layer):
             ys_mask = subsequent_mask(i).unsqueeze(0).to(device)
-            logp, cache = self.st_decoder.forward_one_step(encoder_out.repeat(len(hyps), 1, 1), encoder_mask.repeat(len(hyps), 1, 1), ys, ys_mask, cache)
+            logp, cache = self.st_decoder.forward_one_step(
+                encoder_out.repeat(len(hyps), 1, 1),
+                encoder_mask.repeat(len(hyps), 1, 1), ys, ys_mask, cache)

             hyps_best_kept = []
             for j, hyp in enumerate(hyps):
-                top_k_logp, top_k_index = logp[j:j + 1].topk(beam_size)
+                top_k_logp, top_k_index = logp[j:j + 1].topk(beam_size)
                 for b in range(beam_size):
                     new_hyp = {}
                     new_hyp["score"] = hyp["score"] + float(top_k_logp[0, b])
                     new_hyp["yseq"] = [0] * (1 + len(hyp["yseq"]))
-                    new_hyp["yseq"][:len(hyp["yseq"])] = hyp["yseq"]
+                    new_hyp["yseq"][:len(hyp["yseq"])] = hyp["yseq"]
                     new_hyp["yseq"][len(hyp["yseq"])] = int(top_k_index[0, b])
                     new_hyp["cache"] = [cache_[j] for cache_ in cache]
                     # will be (2 x beam) hyps at most
...
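The expansion loop in this hunk grows each beam hypothesis by the top-k next tokens, accumulating log-probability into the score. A minimal sketch of that step in plain Python (the `expand` helper is hypothetical, not a PaddleSpeech function; tensor indexing is replaced by plain lists):

```python
def expand(hyp, top_k_logp, top_k_index):
    """Extend one hypothesis with each of its top-k candidate tokens."""
    new_hyps = []
    for logp, tok in zip(top_k_logp, top_k_index):
        new_hyp = {}
        # Accumulate the token's log-probability into the running score.
        new_hyp["score"] = hyp["score"] + logp
        # Copy the old token sequence and append the new token, mirroring
        # the slice-assignment pattern used in the diff.
        new_hyp["yseq"] = [0] * (1 + len(hyp["yseq"]))
        new_hyp["yseq"][:len(hyp["yseq"])] = hyp["yseq"]
        new_hyp["yseq"][len(hyp["yseq"])] = int(tok)
        new_hyps.append(new_hyp)
    return new_hyps

hyp = {"score": -1.0, "yseq": [0, 7]}
out = expand(hyp, [-0.5, -2.0], [3, 9])
print(out[0])  # {'score': -1.5, 'yseq': [0, 7, 3]}
print(out[1])  # {'score': -3.0, 'yseq': [0, 7, 9]}
```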