PaddlePaddle / DeepSpeech, commit aea1e92a
Authored Nov 29, 2021 by Junkun
Commit message: update cmd.sh
Parent: 3e5fc3dd
1 changed file, +89 -0: examples/ted_en_zh/st1/cmd.sh (new file, mode 100644)
# ====== About run.pl, queue.pl, slurm.pl, and ssh.pl ======
# Usage: <cmd>.pl [options] JOB=1:<nj> <log> <command...>
# e.g.
# run.pl --mem 4G JOB=1:10 echo.JOB.log echo JOB
#
# Options:
# --time <time>: Limit the maximum time to execute.
# --mem <mem>: Limit the maximum memory usage.
# --max-jobs-run <njob>: Limit the number of parallel jobs. This is ignored for non-array jobs.
# --num-threads <nthreads>: Specify the number of CPU cores.
# --gpu <ngpu>: Specify the number of GPU devices.
# --config: Change the configuration file from the default.
#
# "JOB=1:10" is used for "array jobs", and it controls the number of parallel jobs.
# The string left of "=", i.e. "JOB", is replaced by <N> (the Nth job) in the command and the log file name,
# e.g. "echo JOB" is changed to "echo 3" for the 3rd job and "echo 8" for the 8th job, respectively.
# Note that the job index must start with a positive number, so you can't use "JOB=0:10", for example.
#
# run.pl, queue.pl, slurm.pl, and ssh.pl have a unified interface that does not depend on the backend.
# These options are mapped to backend-specific options,
# as configured by "conf/queue.conf" and "conf/slurm.conf" by default.
# If jobs fail, your configuration might be wrong for your environment.
#
#
# The official documentation for run.pl, queue.pl, slurm.pl, and ssh.pl:
# "Parallelization in Kaldi": http://kaldi-asr.org/doc/queue.html
# =========================================================
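The `JOB=1:<nj>` expansion described above can be sketched in plain shell. This is a hypothetical illustration only (the real run.pl also parses options, tracks exit codes, and limits concurrency); it shows how "JOB" is substituted into both the log name and the command for each job index:

```shell
# Illustration only: emulate what run.pl does for "JOB=1:3 echo.JOB.log echo JOB".
# Each occurrence of "JOB" becomes the job index N; jobs run in parallel.
nj=3
for n in $(seq 1 ${nj}); do
    log="echo.${n}.log"       # "echo.JOB.log" with JOB -> N
    echo "${n}" > "${log}" &  # "echo JOB"     with JOB -> N, run in background
done
wait                          # run.pl likewise waits for all array jobs to finish
```

After this runs, `echo.1.log`, `echo.2.log`, and `echo.3.log` each contain their own job index, matching the per-job log files run.pl would produce.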
# Select the backend used by run.sh from "local", "sge", "slurm", or "ssh"
cmd_backend='local'

# Local machine, without any job scheduling system
if [ "${cmd_backend}" = local ]; then

    # The other usage
    export train_cmd="run.pl"
    # Used for "*_train.py": "--gpu" is appended optionally by run.sh
    export cuda_cmd="run.pl"
    # Used for "*_recog.py"
    export decode_cmd="run.pl"

# "qsub" (SGE, Torque, PBS, etc.)
elif [ "${cmd_backend}" = sge ]; then
    # The default setting is written in conf/queue.conf.
    # You must change "-q g.q" to the "queue" for your environment.
    # To see the "queue" names, type "qhost -q".
    # Note that to use "--gpu *", you have to set up "complex_value" for the system scheduler.
    export train_cmd="queue.pl"
    export cuda_cmd="queue.pl"
    export decode_cmd="queue.pl"

# "sbatch" (Slurm)
elif [ "${cmd_backend}" = slurm ]; then
    # The default setting is written in conf/slurm.conf.
    # You must change "-p cpu" and "-p gpu" to the "partition" names for your environment.
    # To see the "partition" names, type "sinfo".
    # You can use "--gpu *" by default for slurm, and it is interpreted as "--gres gpu:*".
    # The devices are allocated exclusively using "${CUDA_VISIBLE_DEVICES}".
    export train_cmd="slurm.pl"
    export cuda_cmd="slurm.pl"
    export decode_cmd="slurm.pl"

elif [ "${cmd_backend}" = ssh ]; then
    # You have to create ".queue/machines" to specify the hosts on which to execute jobs.
    # e.g. .queue/machines
    #   host1
    #   host2
    #   host3
    # This assumes you can log in to them without a password, i.e. you have to set up ssh keys.
    export train_cmd="ssh.pl"
    export cuda_cmd="ssh.pl"
    export decode_cmd="ssh.pl"

# This is an example of specifying several unique options for the JHU CLSP cluster setup.
# Users can modify/add their own command options according to their cluster environments.
elif [ "${cmd_backend}" = jhu ]; then
    export train_cmd="queue.pl --mem 2G"
    export cuda_cmd="queue-freegpu.pl --mem 2G --gpu 1 --config conf/gpu.conf"
    export decode_cmd="queue.pl --mem 4G"

else
    echo "$0: Error: Unknown cmd_backend=${cmd_backend}" 1>&2
    return 1
fi
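The if-chain above reduces to a mapping from backend name to launcher command. The sketch below is a hypothetical condensed view (train_cmd only, with the same names as the script) that can be used to sanity-check which launcher a given cmd_backend selects:

```shell
# Condensed, illustrative view of the backend selection above (train_cmd only).
backend_to_train_cmd() {
    case "$1" in
        local) echo "run.pl" ;;
        sge)   echo "queue.pl" ;;
        slurm) echo "slurm.pl" ;;
        ssh)   echo "ssh.pl" ;;
        jhu)   echo "queue.pl --mem 2G" ;;
        *)     echo "unknown cmd_backend=$1" 1>&2; return 1 ;;
    esac
}
backend_to_train_cmd local   # prints: run.pl
```

A recipe would then use the selected value as a command prefix, e.g. `${train_cmd} JOB=1:4 <log> <command...>`, so switching cmd_backend reroutes every job to the chosen scheduler without touching the recipe itself.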