PaddlePaddle / DeepSpeech · Issue #318
Opened April 08, 2019 by saxon_zh (Guest)

Transfer learning for different language

Created by: mtbadakhshan

Hello, I want to create an ASR system to recognise Persian speech. I can train a model from scratch, but I want to resume my training from a pre-trained model (such as the LibriSpeech model or the BaiduEN8k model), so I set the "--init_model_path" argument to the path of the LibriSpeech model parameters (models/librispeech/params.tar.gz). But I got the following output:

-----------  Configuration Arguments -----------
augment_conf_path: conf/augmentation.config
batch_size: 16
dev_manifest: /home/DeepSpeech2-PaddlePaddle/train-persian/manifest.persian
init_model_path: models/librispeech/params.tar.gz
is_local: 1
learning_rate: 1e-05
max_duration: 27.0
mean_std_path: /home/DeepSpeech2-PaddlePaddle/train-persian/mean_std.npz
min_duration: 0.0
num_conv_layers: 2
num_iter_print: 100
num_passes: 20
num_proc_data: 1
num_rnn_layers: 3
output_model_dir: /home/DeepSpeech2-PaddlePaddle/train-persian/checkpoints/libri-based
rnn_layer_size: 2048
share_rnn_weights: 1
shuffle_method: batch_shuffle_clipped
specgram_type: linear
test_off: 0
train_manifest: /home/DeepSpeech2-PaddlePaddle/train-persian/manifest.persian
trainer_count: 1
use_gpu: 1
use_gru: 0
use_sortagrad: 1
vocab_path: /home/DeepSpeech2-PaddlePaddle/train-persian/vocab.txt
------------------------------------------------
I0408 13:50:13.156404  6138 Util.cpp:166] commandline:  --use_gpu=1 --rnn_use_batch=True --log_clipping=True --trainer_count=1 
[INFO 2019-04-08 13:50:13,723 layers.py:2606] output for __conv_0__: c = 32, h = 81, w = 54, size = 139968
[INFO 2019-04-08 13:50:13,724 layers.py:3133] output for __batch_norm_0__: c = 32, h = 81, w = 54, size = 139968
[INFO 2019-04-08 13:50:13,724 layers.py:7224] output for __scale_sub_region_0__: c = 32, h = 81, w = 54, size = 139968
[INFO 2019-04-08 13:50:13,725 layers.py:2606] output for __conv_1__: c = 32, h = 41, w = 54, size = 70848
[INFO 2019-04-08 13:50:13,725 layers.py:3133] output for __batch_norm_1__: c = 32, h = 41, w = 54, size = 70848
[INFO 2019-04-08 13:50:13,726 layers.py:7224] output for __scale_sub_region_1__: c = 32, h = 41, w = 54, size = 70848
I0408 13:50:16.769909  6138 GradientMachine.cpp:94] Initing parameters..
I0408 13:50:18.651311  6138 GradientMachine.cpp:101] Init parameters done.
F0408 13:50:22.542716  6138 TensorApply.h:126] Check failed: lhs_.getWidth() == rhs_.getWidth() (118784 vs. 139264) 
*** Check failure stack trace: ***
    @     0x7feb1472b04d  google::LogMessage::Fail()
    @     0x7feb1472d398  google::LogMessage::SendToLog()
    @     0x7feb1472ab5b  google::LogMessage::Flush()
    @     0x7feb1472e26e  google::LogMessageFatal::~LogMessageFatal()
    @     0x7feb148969c8  paddle::adamApply()
    @     0x7feb1488f8c9  paddle::AdamParameterOptimizer::update()
    @     0x7feb1488fcf2  paddle::OptimizerWithGradientClipping::update()
    @     0x7feb14871e7e  paddle::SgdThreadUpdater::updateImpl()
    @     0x7feb14700731  ParameterUpdater::update()
    @     0x7feb141b3e87  _wrap_ParameterUpdater_update
    @           0x4c468a  PyEval_EvalFrameEx
    @           0x4c9d8f  PyEval_EvalFrameEx
    @           0x4c2765  PyEval_EvalCodeEx
    @           0x4ca099  PyEval_EvalFrameEx
    @           0x4c2765  PyEval_EvalCodeEx
    @           0x4ca099  PyEval_EvalFrameEx
    @           0x4c2765  PyEval_EvalCodeEx
    @           0x4ca8d1  PyEval_EvalFrameEx
    @           0x4c2765  PyEval_EvalCodeEx
    @           0x4ca8d1  PyEval_EvalFrameEx
    @           0x4c2765  PyEval_EvalCodeEx
    @           0x4c2509  PyEval_EvalCode
    @           0x4f1def  (unknown)
    @           0x4ec652  PyRun_FileExFlags
    @           0x4eae31  PyRun_SimpleFileExFlags
    @           0x49e14a  Py_Main
    @     0x7feb308f2830  __libc_start_main
    @           0x49d9d9  _start
    @              (nil)  (unknown)
Aborted (core dumped)
Fail in training!

The error line by itself:

F0408 13:50:22.542716  6138 TensorApply.h:126] Check failed: lhs_.getWidth() == rhs_.getWidth() (118784 vs. 139264) 
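The two widths in this check are consistent with a vocabulary-size mismatch in the CTC output projection. This is a hedged reading, since the log does not name the failing parameter: it assumes the projection width is rnn_output_width × (vocab_size + 1), where the +1 is the CTC blank label and rnn_output_width is 2 × 2048 for a bidirectional RNN with rnn_layer_size=2048:

```python
# Hedged sanity check: assume the failing parameter is the CTC output
# projection, of width rnn_output_width * (vocab_size + 1 CTC blank).
rnn_output_width = 2 * 2048           # bidirectional RNN, rnn_layer_size=2048
print(118784 / rnn_output_width)      # 29.0 -> 28 English characters + blank
print(139264 / rnn_output_width)      # 34.0 -> 33 vocab entries + blank
```

Under that assumption, 118784 matches LibriSpeech's 28-character English vocabulary plus blank, while 139264 matches a 33-entry vocabulary plus blank, presumably the Persian vocab.txt. This supports the vocabulary hypothesis in the postscript below.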

P.S. It is worth mentioning that if I resume my training from checkpoints that were created while I was training from scratch, I do not get any errors.

P.P.S. I am pretty sure that using a different vocabulary causes this problem, but I have no idea how to fix it.
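If the different vocabulary is indeed the cause, one common workaround is to initialize only the vocabulary-independent layers (conv and RNN) from the pretrained archive and let the output layer start fresh. The flags shown above do not expose a partial-load option, so the sketch below edits the parameter archive directly. It is a minimal sketch under two assumptions that should be verified first: that params.tar.gz is a compressed tar with one member per parameter (as paddle.v2's Parameters.to_tar produces), and that the __ctc_fc__ names are hypothetical placeholders for the real output-layer parameters, which step 1 prints:

```python
import tarfile

SRC = "models/librispeech/params.tar.gz"
DST = "models/librispeech/params_no_vocab_layer.tar.gz"

# Step 1: list the archive members to find the output-layer parameters
# (their sizes should match the vocabulary-dependent width in the error).
with tarfile.open(SRC, "r:*") as tar:
    for member in tar.getmembers():
        print(member.name, member.size)

# Step 2: repack the archive without those parameters. The names below
# are HYPOTHETICAL placeholders; substitute the real names from step 1.
DROP = {"__ctc_fc__.w0", "__ctc_fc__.wbias"}
with tarfile.open(SRC, "r:*") as src, tarfile.open(DST, "w:gz") as dst:
    for member in src.getmembers():
        if member.name in DROP:
            continue
        dst.addfile(member, src.extractfile(member))
```

Whether the trainer then accepts an archive with missing parameters depends on the Paddle version; if it insists on a complete parameter set, the dropped members would instead need to be replaced with randomly initialized tensors of the new shape, serialized in the same format. Either way, pointing --init_model_path at the edited archive should avoid loading the vocabulary-dependent weights while keeping the pretrained acoustic layers.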
