PaddlePaddle / DeepSpeech · Commit 63aeb747
Commit 63aeb747
Authored Sep 09, 2022 by Hui Zhang

Commit message: more comment

Parent: a7c6c54e
Changes: 2 changed files, 11 additions and 6 deletions (+11 −6)

  paddlespeech/s2t/exps/u2/model.py    +5 −5
  paddlespeech/s2t/modules/encoder.py  +6 −1
paddlespeech/s2t/exps/u2/model.py

@@ -492,10 +492,9 @@ class U2Tester(U2Trainer):
         ]
         infer_model.forward_encoder_chunk = paddle.jit.to_static(
             infer_model.forward_encoder_chunk, input_spec=input_spec)
-        # paddle.jit.save(static_model, self.args.export_path, combine_params=True)

         ######################### infer_model.forward_attention_decoder ########################
         # TODO: 512(encoder_output) be configable. 1 for BatchSize
         input_spec = [
             paddle.static.InputSpec(shape=[None, None], dtype='int64'),
             paddle.static.InputSpec(shape=[None], dtype='int64'),
@@ -503,7 +502,6 @@ class U2Tester(U2Trainer):
         ]
         infer_model.forward_attention_decoder = paddle.jit.to_static(
             infer_model.forward_attention_decoder, input_spec=input_spec)
-        # paddle.jit.save(static_model, self.args.export_path, combine_params=True)

         ######################### infer_model.ctc_activation ########################
         # TODO: 512(encoder_output) be configable
@@ -513,8 +511,10 @@ class U2Tester(U2Trainer):
         infer_model.ctc_activation = paddle.jit.to_static(
             infer_model.ctc_activation, input_spec=input_spec)
-        paddle.jit.save(infer_model, './export.jit', combine_params=True, skip_forward=True)
+        # jit save
+        paddle.jit.save(infer_model, self.args.export_path, combine_params=True, skip_forward=True)

+        # test dy2static
         def flatten(out):
             if isinstance(out, paddle.Tensor):
                 return [out]
@@ -541,7 +541,7 @@ class U2Tester(U2Trainer):
         from paddle.jit.layer import Layer
         layer = Layer()
-        layer.load('./export.jit', paddle.CPUPlace())
+        layer.load(self.args.export_path, paddle.CPUPlace())

         xs1 = paddle.full([1, 7, 80], 0.1, dtype='float32')
         offset = paddle.to_tensor([0], dtype='int32')
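The diff truncates the `flatten` helper it introduces under "# test dy2static", which is used to compare outputs of the dynamic-graph model against the jit-loaded static one. The visible lines show it returns `[out]` for a single `paddle.Tensor` leaf; a framework-free sketch of what such a recursive flattener plausibly looks like (paddle's `isinstance(out, paddle.Tensor)` leaf check is replaced by a plain-Python one, so this is an assumption, not the repository's exact code):

```python
def flatten(out):
    """Collapse an arbitrarily nested structure of lists/tuples into a
    flat list, so two model outputs can be compared element-wise."""
    # Leaf value (the real helper checks isinstance(out, paddle.Tensor)).
    if not isinstance(out, (list, tuple)):
        return [out]
    flat = []
    for item in out:
        flat.extend(flatten(item))  # recurse into nested containers
    return flat

print(flatten([1, (2, [3, 4]), 5]))  # -> [1, 2, 3, 4, 5]
```

With the outputs flattened, a dy2static check reduces to zipping the two lists and asserting element-wise closeness.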
paddlespeech/s2t/modules/encoder.py

@@ -251,7 +251,12 @@ class BaseEncoder(nn.Layer):
         for i, layer in enumerate(self.encoders):
             # att_cache[i:i+1] = (1, head, cache_t1, d_k*2)
             # cnn_cache[i:i+1] = (1, B=1, hidden-dim, cache_t2)
-            # zeros([0,0,0,0]) support [i:i+1] slice
+            # WARNING: eliminate if-else cond op in graph
+            # tensor zeros([0,0,0,0]) support [i:i+1] slice, will return zeros([0,0,0,0]) tensor
+            # raw code as below:
+            # att_cache=att_cache[i:i+1] if elayers > 0 else att_cache,
+            # cnn_cache=cnn_cache[i:i+1] if paddle.shape(cnn_cache)[0] > 0 else cnn_cache,
             xs, _, new_att_cache, new_cnn_cache = layer(
                 xs, att_mask, pos_emb,
                 att_cache=att_cache[i:i+1],
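The expanded comments explain why the per-layer cache slicing no longer needs an if/else: slicing a zero-sized tensor simply yields another zero-sized tensor, so the single expression `att_cache[i:i+1]` is valid whether the cache is empty (first chunk) or populated, and no conditional op ends up in the exported static graph. A small sketch of the same indexing behavior, with NumPy standing in for paddle (shapes are illustrative, not the model's real cache sizes):

```python
import numpy as np

# An "empty" cache: zero length along every axis, as in zeros([0,0,0,0]).
empty_cache = np.zeros((0, 0, 0, 0))
# A populated cache: one entry per encoder layer along axis 0.
full_cache = np.zeros((12, 4, 16, 128))

i = 3
# The same slice expression works for both; on the empty cache it just
# returns another zero-sized array, so no branch is required.
print(empty_cache[i:i + 1].shape)  # -> (0, 0, 0, 0)
print(full_cache[i:i + 1].shape)   # -> (1, 4, 16, 128)
```

This is the standard trick for keeping traced/static graphs branch-free: encode the "no cache yet" case in the tensor's shape rather than in Python control flow.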