PaddlePaddle / DeepSpeech

Commit 86811af7
Authored on Oct 11, 2017 by Xinghai Sun

Reset default values of batch_size and num_proc_data, and fix an invalid URL for DS2.

Parent: 2e5e9b8c

Showing 14 changed files with 18 additions and 17 deletions:

examples/aishell/run_test.sh               +1  -1
examples/aishell/run_test_golden.sh        +1  -1
examples/aishell/run_train.sh              +1  -1
examples/librispeech/run_test.sh           +1  -1
examples/librispeech/run_test_golden.sh    +1  -1
examples/librispeech/run_train.sh          +2  -2
examples/librispeech/run_tune.sh           +1  -1
examples/tiny/run_test.sh                  +1  -1
examples/tiny/run_test_golden.sh           +1  -1
infer.py                                   +1  -1
models/librispeech/download_model.sh       +1  -1
test.py                                    +2  -2
tools/tune.py                              +3  -2
train.py                                   +1  -1

examples/aishell/run_test.sh
@@ -18,7 +18,7 @@ python -u test.py \
 --trainer_count=8 \
 --beam_size=300 \
 --num_proc_bsearch=8 \
---num_proc_data=4 \
+--num_proc_data=8 \
 --num_conv_layers=2 \
 --num_rnn_layers=3 \
 --rnn_layer_size=1024 \

examples/aishell/run_test_golden.sh
@@ -27,7 +27,7 @@ python -u test.py \
 --trainer_count=8 \
 --beam_size=300 \
 --num_proc_bsearch=8 \
---num_proc_data=4 \
+--num_proc_data=8 \
 --num_conv_layers=2 \
 --num_rnn_layers=3 \
 --rnn_layer_size=1024 \

examples/aishell/run_train.sh
@@ -9,7 +9,7 @@ python -u train.py \
 --batch_size=64 \
 --trainer_count=8 \
 --num_passes=50 \
---num_proc_data=8 \
+--num_proc_data=16 \
 --num_conv_layers=2 \
 --num_rnn_layers=3 \
 --rnn_layer_size=1024 \

examples/librispeech/run_test.sh
@@ -18,7 +18,7 @@ python -u test.py \
 --trainer_count=8 \
 --beam_size=500 \
 --num_proc_bsearch=8 \
---num_proc_data=4 \
+--num_proc_data=8 \
 --num_conv_layers=2 \
 --num_rnn_layers=3 \
 --rnn_layer_size=2048 \

examples/librispeech/run_test_golden.sh
@@ -27,7 +27,7 @@ python -u test.py \
 --trainer_count=8 \
 --beam_size=500 \
 --num_proc_bsearch=8 \
---num_proc_data=4 \
+--num_proc_data=8 \
 --num_conv_layers=2 \
 --num_rnn_layers=3 \
 --rnn_layer_size=2048 \

examples/librispeech/run_train.sh
@@ -6,10 +6,10 @@ cd ../.. > /dev/null
 # if you wish to resume from an exists model, uncomment --init_model_path
 CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
 python -u train.py \
---batch_size=512 \
+--batch_size=160 \
 --trainer_count=8 \
 --num_passes=50 \
---num_proc_data=8 \
+--num_proc_data=16 \
 --num_conv_layers=2 \
 --num_rnn_layers=3 \
 --rnn_layer_size=2048 \

examples/librispeech/run_tune.sh
@@ -6,7 +6,7 @@ cd ../.. > /dev/null
 CUDA_VISIBLE_DEVICES=0,1,2,3 \
 python -u tools/tune.py \
 --num_batches=-1 \
---batch_size=256 \
+--batch_size=128 \
 --trainer_count=8 \
 --beam_size=500 \
 --num_proc_bsearch=12 \

examples/tiny/run_test.sh
@@ -18,7 +18,7 @@ python -u test.py \
 --trainer_count=8 \
 --beam_size=500 \
 --num_proc_bsearch=8 \
---num_proc_data=4 \
+--num_proc_data=8 \
 --num_conv_layers=2 \
 --num_rnn_layers=3 \
 --rnn_layer_size=2048 \

examples/tiny/run_test_golden.sh
@@ -27,7 +27,7 @@ python -u test.py \
 --trainer_count=8 \
 --beam_size=500 \
 --num_proc_bsearch=8 \
---num_proc_data=4 \
+--num_proc_data=8 \
 --num_conv_layers=2 \
 --num_rnn_layers=3 \
 --rnn_layer_size=2048 \

infer.py
@@ -17,7 +17,7 @@ add_arg = functools.partial(add_arguments, argparser=parser)
 add_arg('num_samples',      int,    10,     "# of samples to infer.")
 add_arg('trainer_count',    int,    8,      "# of Trainers (CPUs or GPUs).")
 add_arg('beam_size',        int,    500,    "Beam search width.")
-add_arg('num_proc_bsearch', int,    12,     "# of CPUs for beam search.")
+add_arg('num_proc_bsearch', int,    8,      "# of CPUs for beam search.")
 add_arg('num_conv_layers',  int,    2,      "# of convolution layers.")
 add_arg('num_rnn_layers',   int,    3,      "# of recurrent layers.")
 add_arg('rnn_layer_size',   int,    2048,   "# of recurrent cells per layer.")

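The Python files in this commit (infer.py, test.py, tools/tune.py, train.py) all register their command-line defaults through the add_arg helper visible in the hunk header above (add_arg = functools.partial(add_arguments, argparser=parser)). A minimal sketch of that pattern, assuming add_arguments is a thin wrapper around argparse; the repository's actual helper is not shown in this diff and may differ:

import argparse
import functools

def add_arguments(argname, type, default, help, argparser):
    # Register one "--<argname>" flag on the shared parser with a typed default.
    argparser.add_argument(
        "--" + argname,
        type=type,
        default=default,
        help=help + " Default: %(default)s.")

parser = argparse.ArgumentParser(description=__doc__)
add_arg = functools.partial(add_arguments, argparser=parser)

# The values changed in this commit are then ordinary argparse defaults, e.g.:
add_arg('num_proc_bsearch', int, 8, "# of CPUs for beam search.")
args = parser.parse_args()

Under that reading, the new numbers (num_proc_bsearch=8, num_proc_data=8 or 16, batch_size=160/128) only take effect when the corresponding flag is not passed explicitly on the command line.
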
models/librispeech/download_model.sh
@@ -2,7 +2,7 @@
 . ../../utils/utility.sh
-URL='http://cloud.dlnel.org/filepub/?uuid=8e3cf742-2ff3-41ce-a49d-f6158cc06a23'
+URL='http://cloud.dlnel.org/filepub/?uuid=6020a634-5399-4423-b021-c5ed32680fff'
 MD5=2ef08f8b608a7c555592161fc14d81a6
 TARGET=./librispeech_model.tar.gz

test.py
@@ -17,8 +17,8 @@ add_arg = functools.partial(add_arguments, argparser=parser)
 add_arg('batch_size',       int,    128,    "Minibatch size.")
 add_arg('trainer_count',    int,    8,      "# of Trainers (CPUs or GPUs).")
 add_arg('beam_size',        int,    500,    "Beam search width.")
-add_arg('num_proc_bsearch', int,    12,     "# of CPUs for beam search.")
-add_arg('num_proc_data',    int,    4,      "# of CPUs for data preprocessing.")
+add_arg('num_proc_bsearch', int,    8,      "# of CPUs for beam search.")
+add_arg('num_proc_data',    int,    8,      "# of CPUs for data preprocessing.")
 add_arg('num_conv_layers',  int,    2,      "# of convolution layers.")
 add_arg('num_rnn_layers',   int,    3,      "# of recurrent layers.")
 add_arg('rnn_layer_size',   int,    2048,   "# of recurrent cells per layer.")

tools/tune.py
@@ -27,7 +27,8 @@ add_arg('num_batches', int, -1, "# of batches tuning on. "
 add_arg('batch_size',       int,    256,    "# of samples per batch.")
 add_arg('trainer_count',    int,    8,      "# of Trainers (CPUs or GPUs).")
 add_arg('beam_size',        int,    500,    "Beam search width.")
-add_arg('num_proc_bsearch', int,    12,     "# of CPUs for beam search.")
+add_arg('num_proc_bsearch', int,    8,      "# of CPUs for beam search.")
+add_arg('num_proc_data',    int,    8,      "# of CPUs for data preprocessing.")
 add_arg('num_conv_layers',  int,    2,      "# of convolution layers.")
 add_arg('num_rnn_layers',   int,    3,      "# of recurrent layers.")
 add_arg('rnn_layer_size',   int,    2048,   "# of recurrent cells per layer.")
@@ -86,7 +87,7 @@ def tune():
         mean_std_filepath=args.mean_std_path,
         augmentation_config='{}',
         specgram_type=args.specgram_type,
-        num_threads=1)
+        num_threads=args.num_proc_data)
     audio_data = paddle.layer.data(
         name="audio_spectrogram",

train.py
@@ -16,7 +16,7 @@ add_arg = functools.partial(add_arguments, argparser=parser)
 add_arg('batch_size',       int,    256,    "Minibatch size.")
 add_arg('trainer_count',    int,    8,      "# of Trainers (CPUs or GPUs).")
 add_arg('num_passes',       int,    200,    "# of training epochs.")
-add_arg('num_proc_data',    int,    8,      "# of CPUs for data preprocessing.")
+add_arg('num_proc_data',    int,    16,     "# of CPUs for data preprocessing.")
 add_arg('num_conv_layers',  int,    2,      "# of convolution layers.")
 add_arg('num_rnn_layers',   int,    3,      "# of recurrent layers.")
 add_arg('rnn_layer_size',   int,    2048,   "# of recurrent cells per layer.")

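For reference, the defaults touched by this commit, gathered from the hunks above into one place. This is a recap only: the dictionary keys are informal labels, and the (old, new) pairs are copied from the diffs.

# (old, new) default values changed by commit 86811af7, keyed informally
# as "<file>:<flag or argument>"; gathered from the diffs above.
changed_defaults = {
    "examples/aishell/run_test.sh:--num_proc_data":            (4, 8),
    "examples/aishell/run_test_golden.sh:--num_proc_data":     (4, 8),
    "examples/aishell/run_train.sh:--num_proc_data":           (8, 16),
    "examples/librispeech/run_test.sh:--num_proc_data":        (4, 8),
    "examples/librispeech/run_test_golden.sh:--num_proc_data": (4, 8),
    "examples/librispeech/run_train.sh:--batch_size":          (512, 160),
    "examples/librispeech/run_train.sh:--num_proc_data":       (8, 16),
    "examples/librispeech/run_tune.sh:--batch_size":           (256, 128),
    "examples/tiny/run_test.sh:--num_proc_data":               (4, 8),
    "examples/tiny/run_test_golden.sh:--num_proc_data":        (4, 8),
    "infer.py:num_proc_bsearch":                                (12, 8),
    "test.py:num_proc_bsearch":                                 (12, 8),
    "test.py:num_proc_data":                                    (4, 8),
    "tools/tune.py:num_proc_bsearch":                           (12, 8),
    "tools/tune.py:num_proc_data":                              (None, 8),  # newly added argument
    "tools/tune.py:num_threads":                                (1, "args.num_proc_data"),
    "train.py:num_proc_data":                                   (8, 16),
}
# models/librispeech/download_model.sh only swaps the stale download URL
# (uuid 8e3cf742-... -> 6020a634-...); no numeric default is involved.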