PaddlePaddle / DeepSpeech
Commit c1fbfe92
Authored on Aug 04, 2022 by Hui Zhang

add test

Parent: 05bc2588
Showing 1 changed file with 49 additions and 0 deletions.

paddlespeech/s2t/exps/u2/model.py (+49, -0)
@@ -512,3 +512,52 @@ class U2Tester(U2Trainer):
            infer_model.ctc_activation, input_spec=input_spec)
        paddle.jit.save(infer_model, './export.jit', combine_params=True)

        def flatten(out):
            if isinstance(out, paddle.Tensor):
                return [out]
            flatten_out = []
            for var in out:
                if isinstance(var, (list, tuple)):
                    flatten_out.extend(flatten(var))
                else:
                    flatten_out.append(var)
            return flatten_out

        xs1 = paddle.rand(shape=[1, 67, 80], dtype='float32')
        offset = paddle.to_tensor([0], dtype='int32')
        required_cache_size = -16
        att_cache = paddle.zeros([0, 0, 0, 0])
        cnn_cache = paddle.zeros([0, 0, 0, 0])
        # xs, att_cache, cnn_cache = infer_model.forward_encoder_chunk(xs1, offset, required_cache_size, att_cache, cnn_cache)
        # xs2 = paddle.rand(shape=[1, 67, 80], dtype='float32')
        # offset = paddle.to_tensor([16], dtype='int32')
        # out1 = infer_model.forward_encoder_chunk(xs2, offset, required_cache_size, att_cache, cnn_cache)
        # print(out1)
        xs, att_cache, cnn_cache = infer_model.forward_encoder_chunk(xs1, offset, att_cache, cnn_cache)
        xs2 = paddle.rand(shape=[1, 67, 80], dtype='float32')
        offset = paddle.to_tensor([16], dtype='int32')
        out1 = infer_model.forward_encoder_chunk(xs2, offset, att_cache, cnn_cache)
        print(out1)
        # from paddle.jit.layer import Layer
        # layer = Layer()
        # layer.load('./export.jit', paddle.CPUPlace())
        # offset = paddle.to_tensor([0], dtype='int32')
        # att_cache = paddle.zeros([0, 0, 0, 0])
        # cnn_cache = paddle.zeros([0, 0, 0, 0])
        # xs, att_cache, cnn_cache = layer.forward_encoder_chunk(xs1, offset, att_cache, cnn_cache)
        # offset = paddle.to_tensor([16], dtype='int32')
        # out2 = layer.forward_encoder_chunk(xs2, offset, att_cache, cnn_cache)
        # # print(out2)
        # out1 = flatten(out1)
        # out2 = flatten(out2)
        # for i in range(len(out1)):
        #     print(np.equal(out1[i].numpy(), out2[i].numpy()).all())
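The `flatten` helper added in this diff unrolls an arbitrarily nested list/tuple of tensors into a flat list, so that eager outputs and loaded-`jit.layer` outputs can be compared element by element. A minimal, Paddle-free sketch of the same recursion (the `paddle.Tensor` base case is replaced here by a generic non-container check, purely for illustration):

```python
def flatten(out):
    # Base case: anything that is not a list/tuple is treated as a leaf
    # (the diff tests isinstance(out, paddle.Tensor) here instead).
    if not isinstance(out, (list, tuple)):
        return [out]
    flatten_out = []
    for var in out:
        if isinstance(var, (list, tuple)):
            # Recurse into nested containers and splice the results in.
            flatten_out.extend(flatten(var))
        else:
            flatten_out.append(var)
    return flatten_out

print(flatten([1, (2, [3, 4]), 5]))  # -> [1, 2, 3, 4, 5]
```

Flattening both output structures first lets the (commented-out) comparison loop walk two plain lists in lockstep instead of matching nested structures.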