PaddlePaddle / DeepSpeech

Unverified commit 5890c84c, authored by Jackwaterveg on Aug 30, 2021, committed via GitHub on Aug 30, 2021.

Merge pull request #793 from PaddlePaddle/seed

ds2 offline cer 6p4287

Parents: 1f53d54f, 341038b6

Showing 7 changed files with 33 additions and 29 deletions (+33, -29)
deepspeech/models/ds2/conv.py                  +0  -7
deepspeech/modules/subsampling.py              +7  -7
examples/aishell/s0/README.md                  +1  -1
examples/aishell/s0/conf/deepspeech2.yaml      +1  -1
examples/aishell/s0/local/train.sh             +1  -1
utils/avg.sh                                   +18 -8
utils/tarball.sh                               +5  -4
deepspeech/models/ds2/conv.py

@@ -41,13 +41,6 @@ def conv_output_size(I, F, P, S):
     return (I - F + 2 * P - S) // S
 
 
-# receptive field calculator
-# https://fomoro.com/research/article/receptive-field-calculator
-# https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks#hyperparameters
-# https://distill.pub/2019/computing-receptive-fields/
-# Rl-1 = Sl * Rl + (Kl - Sl)
-
-
 class ConvBn(nn.Layer):
     """Convolution layer with batch normalization.
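The comment block removed above cited the standard receptive-field recurrence, `Rl-1 = Sl * Rl + (Kl - Sl)`. As a minimal sketch (not part of the commit), walking that recurrence from the top layer back to the input gives the input-side receptive field of a conv stack:

```python
# Sketch of the receptive-field recurrence R_{l-1} = S_l * R_l + (K_l - S_l)
# from the deleted comment. Not repo code; layer shapes are assumptions.

def receptive_field(layers):
    """layers: list of (kernel_size, stride) pairs, ordered input -> output."""
    r = 1  # start from a single unit at the topmost layer
    for k, s in reversed(layers):
        r = s * r + (k - s)
    return r

# Two 3x3 stride-2 convs (the Conv2dSubsampling4 front end):
print(receptive_field([(3, 2), (3, 2)]))  # 7
```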
deepspeech/modules/subsampling.py

@@ -108,8 +108,8 @@ class Conv2dSubsampling4(BaseSubsampling):
             nn.Linear(odim * (((idim - 1) // 2 - 1) // 2), odim))
         self.subsampling_rate = 4
         # The right context for every conv layer is computed by:
-        # (kernel_size - 1) / 2 * stride * frame_rate_of_this_layer
-        # 6 = (3 - 1) / 2 * 2 * 1 + (3 - 1) / 2 * 2 * 2
+        # (kernel_size - 1) * frame_rate_of_this_layer
+        # 6 = (3 - 1) * 1 + (3 - 1) * 2
         self.right_context = 6

     def forward(self,
                 x: paddle.Tensor,
                 x_mask: paddle.Tensor,
                 offset: int=0

@@ -160,10 +160,10 @@ class Conv2dSubsampling6(BaseSubsampling):
         # when Padding == 0, O = (I - F - S) // S
         self.linear = nn.Linear(odim * (((idim - 1) // 2 - 2) // 3), odim)
         # The right context for every conv layer is computed by:
-        # (kernel_size - 1) / 2 * stride * frame_rate_of_this_layer
-        # 14 = (3 - 1) / 2 * 2 * 1 + (5 - 1) / 2 * 3 * 2
+        # (kernel_size - 1) * frame_rate_of_this_layer
+        # 10 = (3 - 1) * 1 + (5 - 1) * 2
         self.subsampling_rate = 6
-        self.right_context = 14
+        self.right_context = 10

     def forward(self,
                 x: paddle.Tensor,
                 x_mask: paddle.Tensor,
                 offset: int=0) -> Tuple[paddle.Tensor, paddle.Tensor, paddle.Tensor]:

@@ -214,8 +214,8 @@ class Conv2dSubsampling8(BaseSubsampling):
             odim)
         self.subsampling_rate = 8
         # The right context for every conv layer is computed by:
-        # (kernel_size - 1) / 2 * stride * frame_rate_of_this_layer
-        # 14 = (3 - 1) / 2 * 2 * 1 + (3 - 1) / 2 * 2 * 2 + (3 - 1) / 2 * 2 * 4
+        # (kernel_size - 1) * frame_rate_of_this_layer
+        # 14 = (3 - 1) * 1 + (3 - 1) * 2 + (3 - 1) * 4
         self.right_context = 14

     def forward(self,
                 x: paddle.Tensor,
                 x_mask: paddle.Tensor,
                 offset: int=0
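The corrected comments all instantiate the same rule: right context is the sum, over conv layers, of `(kernel_size - 1) * frame_rate_of_this_layer`. A short sketch (not part of the commit; the layer lists are assumptions read off the kernel sizes and subsampling rates in the diff) reproduces the three corrected values:

```python
# Right-context rule from the corrected comments:
# right_context = sum over conv layers of (kernel_size - 1) * frame_rate.
# Illustrative only; not code from this repo.

def right_context(layers):
    """layers: list of (kernel_size, frame_rate) pairs, one per conv layer."""
    return sum((k - 1) * rate for k, rate in layers)

print(right_context([(3, 1), (3, 2)]))          # Conv2dSubsampling4 -> 6
print(right_context([(3, 1), (5, 2)]))          # Conv2dSubsampling6 -> 10
print(right_context([(3, 1), (3, 2), (3, 4)]))  # Conv2dSubsampling8 -> 14
```

This is why the commit changes `Conv2dSubsampling6.right_context` from 14 to 10: the old value was computed with the stride-halved formula.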
examples/aishell/s0/README.md

@@ -10,7 +10,7 @@
 | Model | Params | Release | Config | Test set | Loss | CER |
 | --- | --- | --- | --- | --- | --- | --- |
-| DeepSpeech2 | 58.4M | 2.2.0 | conf/deepspeech2.yaml + spec aug + new datapipe | test | 6.396368026733398 | 0.068382 |
+| DeepSpeech2 | 58.4M | 2.2.0 | conf/deepspeech2.yaml + spec aug | test | 5.71956205368042 | 0.064287 |
 | DeepSpeech2 | 58.4M | 2.1.0 | conf/deepspeech2.yaml + spec aug | test | 7.483316898345947 | 0.077860 |
 | DeepSpeech2 | 58.4M | 2.1.0 | conf/deepspeech2.yaml | test | 7.299022197723389 | 0.078671 |
 | DeepSpeech2 | 58.4M | 2.0.0 | conf/deepspeech2.yaml | test | - | 0.078977 |
examples/aishell/s0/conf/deepspeech2.yaml

@@ -42,7 +42,7 @@ model:
     share_rnn_weights: False

 training:
-  n_epoch: 50
+  n_epoch: 80
   lr: 2e-3
   lr_decay: 0.83
   weight_decay: 1e-06
examples/aishell/s0/local/train.sh

@@ -19,7 +19,7 @@ fi
 mkdir -p exp

-seed=1024
+seed=10086
 if [ ${seed} ]; then
     export FLAGS_cudnn_deterministic=True
 fi
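The seed plus `FLAGS_cudnn_deterministic=True` pair exists so that two training runs produce identical results. A hypothetical Python illustration of the same idea (not this repo's training code; Paddle reads the flag from the environment before cuDNN ops run):

```python
# Illustration of the reproducibility setup in train.sh, not repo code.
import os
import random

# Paddle picks this env var up at startup and forces deterministic cuDNN kernels.
os.environ["FLAGS_cudnn_deterministic"] = "True"

seed = 10086  # the value train.sh now sets
random.seed(seed)
# paddle.seed(seed) would also seed Paddle's global RNG (paddle not imported here).

# Same seed, same draws:
a = random.random()
random.seed(seed)
b = random.random()
assert a == b
```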
utils/avg.sh

 #! /usr/bin/env bash

-if [ $# != 2 ]; then
-    echo "usage: ${0} ckpt_dir avg_num"
+if [ $# != 3 ]; then
+    echo "usage: ${0} [best|latest] ckpt_dir avg_num"
     exit -1
 fi

 ckpt_dir=${1}
-average_num=${2}
+avg_mode=${2} # best,latest
+average_num=${3}
 decode_checkpoint=${ckpt_dir}/avg_${average_num}.pdparams

-avg_model.py \
---dst_model ${decode_checkpoint} \
---ckpt_dir ${ckpt_dir} \
---num ${average_num} \
---val_best
+if [ $avg_mode == best ]; then
+    # best
+    avg_model.py \
+    --dst_model ${decode_checkpoint} \
+    --ckpt_dir ${ckpt_dir} \
+    --num ${average_num} \
+    --val_best
+else
+    # latest
+    avg_model.py \
+    --dst_model ${decode_checkpoint} \
+    --ckpt_dir ${ckpt_dir} \
+    --num ${average_num}
+fi

 if [ $? -ne 0 ]; then
     echo "Failed in avg ckpt!"
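Conceptually, `avg_model.py` averages the parameters of the N selected checkpoints (the `--val_best` branch appears to select by validation loss, the other by recency). A minimal sketch of parameter averaging, with hypothetical data structures that are not the script's actual ones:

```python
# Illustrative sketch of checkpoint averaging; avg_model.py holds the real logic,
# and the dict-of-lists representation here is a stand-in for parameter tensors.

def average_checkpoints(ckpts):
    """ckpts: list of dicts mapping parameter name -> list of floats."""
    n = len(ckpts)
    return {
        name: [sum(vals) / n for vals in zip(*(c[name] for c in ckpts))]
        for name in ckpts[0]
    }

avg = average_checkpoints([{"w": [0.0, 2.0]}, {"w": [2.0, 4.0]}])
print(avg)  # {'w': [1.0, 3.0]}
```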
utils/tarball.sh

 #!/bin/bash

-if [ $# != 4 ]; then
-    echo "usage: $0 ckpt_prefix model_config mean_std vocab"
+if [ $# != 5 ]; then
+    echo "usage: $0 ckpt_prefix model_config mean_std vocab pack_name"
     exit -1
 fi

@@ -9,6 +9,7 @@ ckpt_prefix=$1
 model_config=$2
 mean_std=$3
 vocab=$4
+pack_name=$5

 output=release

@@ -27,6 +28,6 @@ cp ${ckpt_prefix}.* ${output}
 # model config, mean std, vocab
 cp ${model_config} ${mean_std} ${vocab} ${output}

-tar zcvf release.tar.gz ${output}
+tar zcvf ${pack_name}.release.tar.gz ${output}

-echo "tarball done!"
+echo "tarball: ${pack_name}.release.tar.gz done!"
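The updated tar step just packs the `release` directory under a caller-chosen name. A rough stdlib equivalent, as an illustration rather than the repo's tooling:

```python
# Rough Python equivalent of the updated tarball.sh step; function name and
# defaults are assumptions, not part of the repo.
import tarfile

def make_tarball(pack_name, output_dir="release"):
    """Pack output_dir into <pack_name>.release.tar.gz and return the filename."""
    name = f"{pack_name}.release.tar.gz"
    with tarfile.open(name, "w:gz") as tar:
        tar.add(output_dir)
    return name
```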