PaddlePaddle / DeepSpeech
Commit be1fbc68
Authored on Oct 07, 2017 by Xinghai Sun
Parent: 64ab19c1

Set process daemon property and reset default value of num_proc_data arguments.

Showing 5 changed files with 6 additions and 4 deletions (+6, -4).
Files changed:
  data_utils/utility.py              +2 -0
  examples/aishell/run_train.sh      +1 -1
  examples/librispeech/run_train.sh  +1 -1
  test.py                            +1 -1
  train.py                           +1 -1
data_utils/utility.py
@@ -133,6 +133,7 @@ def xmap_readers_mp(mapper, reader, process_num, buffer_size, order=False):
     # start a read worker in a process
     target = order_read_worker if order else read_worker
     p = Process(target=target, args=(reader, in_queue))
+    p.daemon = True
     p.start()
     # start handle_workers with multiple processes
@@ -143,6 +144,7 @@ def xmap_readers_mp(mapper, reader, process_num, buffer_size, order=False):
         Process(target=target, args=args)
         for _ in xrange(process_num)
     ]
     for w in workers:
+        w.daemon = True
         w.start()
     # get results
examples/aishell/run_train.sh
@@ -9,7 +9,7 @@ python -u train.py \
 --batch_size=64 \
 --trainer_count=8 \
 --num_passes=50 \
---num_proc_data=12 \
+--num_proc_data=8 \
 --num_conv_layers=2 \
 --num_rnn_layers=3 \
 --rnn_layer_size=1024 \
examples/librispeech/run_train.sh
@@ -9,7 +9,7 @@ python -u train.py \
 --batch_size=512 \
 --trainer_count=8 \
 --num_passes=50 \
---num_proc_data=12 \
+--num_proc_data=8 \
 --num_conv_layers=2 \
 --num_rnn_layers=3 \
 --rnn_layer_size=2048 \
test.py
@@ -18,7 +18,7 @@ add_arg('batch_size', int, 128, "Minibatch size.")
 add_arg('trainer_count', int, 8, "# of Trainers (CPUs or GPUs).")
 add_arg('beam_size', int, 500, "Beam search width.")
 add_arg('num_proc_bsearch', int, 12, "# of CPUs for beam search.")
-add_arg('num_proc_data', int, 12, "# of CPUs for data preprocessing.")
+add_arg('num_proc_data', int, 4, "# of CPUs for data preprocessing.")
 add_arg('num_conv_layers', int, 2, "# of convolution layers.")
 add_arg('num_rnn_layers', int, 3, "# of recurrent layers.")
 add_arg('rnn_layer_size', int, 2048, "# of recurrent cells per layer.")
train.py
@@ -16,7 +16,7 @@ add_arg = functools.partial(add_arguments, argparser=parser)
 add_arg('batch_size', int, 256, "Minibatch size.")
 add_arg('trainer_count', int, 8, "# of Trainers (CPUs or GPUs).")
 add_arg('num_passes', int, 200, "# of training epochs.")
-add_arg('num_proc_data', int, 12, "# of CPUs for data preprocessing.")
+add_arg('num_proc_data', int, 8, "# of CPUs for data preprocessing.")
 add_arg('num_conv_layers', int, 2, "# of convolution layers.")
 add_arg('num_rnn_layers', int, 3, "# of recurrent layers.")
 add_arg('rnn_layer_size', int, 2048, "# of recurrent cells per layer.")
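The `add_arg(...)` calls in test.py and train.py come from binding `argparser=parser` into a shared helper with `functools.partial`, as the hunk header `add_arg = functools.partial(add_arguments, argparser=parser)` shows. A hedged reconstruction of that pattern (the exact signature and help formatting of the repo's `add_arguments` helper are assumed, not copied):

```python
import argparse
import functools


def add_arguments(argname, type, default, help, argparser, **kwargs):
    # Assumed shape of the repo's helper: registers --argname with a
    # type and default, appending the default value to the help text.
    # ('type' intentionally shadows the builtin to mirror the call sites.)
    argparser.add_argument(
        "--" + argname,
        default=default,
        type=type,
        help=help + " Default: %(default)s.",
        **kwargs)


parser = argparse.ArgumentParser()
add_arg = functools.partial(add_arguments, argparser=parser)
add_arg('num_proc_data', int, 8, "# of CPUs for data preprocessing.")

args = parser.parse_args([])  # no CLI flags, so the new default applies
print(args.num_proc_data)     # 8
```

Because every call site goes through one helper, changing a default (as this commit does for `num_proc_data`) is a one-line edit per script.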