GPU execution requested, but not compiled with GPU support F0315 14:52:42.020200 70689 hl_warpctc_wrap.cc:131] Check failed: CTC_STATUS_SUCCESS == dynload::compute_ctc_loss(batchInput, batchGrad, cpuLabels, cpuLabelLengths, cpuInputLengths, numClasse...
Created by: bolt163
Resolved (https://github.com/PaddlePaddle/DeepSpeech/issues/176): the GPU option was not enabled when Paddle was compiled. Paddle has now been recompiled with the GPU option enabled and reinstalled, but swig_decoders is still the build from yesterday. I am not sure whether the error below is caused by swig_decoders not having been rebuilt; it seems strange. Hoping someone can help.

/DeepSpeech/examples/aishell> sh run_train.sh
----------- Configuration Arguments -----------
augment_conf_path: conf/augmentation.config
batch_size: 64
dev_manifest: data/aishell/manifest.dev
init_model_path: None
is_local: 1
learning_rate: 0.0005
max_duration: 27.0
mean_std_path: data/aishell/mean_std.npz
min_duration: 0.0
num_conv_layers: 2
num_iter_print: 100
num_passes: 50
num_proc_data: 16
num_rnn_layers: 3
output_model_dir: ./checkpoints/aishell
rnn_layer_size: 1024
share_rnn_weights: 0
shuffle_method: batch_shuffle_clipped
specgram_type: linear
test_off: 0
train_manifest: data/aishell/manifest.train
trainer_count: 4
use_gpu: 1
use_gru: 1
use_sortagrad: 1
vocab_path: data/aishell/vocab.txt
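If the stale swig_decoders build is the suspect, rebuilding it against the freshly installed GPU-enabled Paddle would rule that out. A minimal sketch, assuming the DeepSpeech repo's usual layout with the decoder build script under decoders/swig/ (the path, the artifact names, and setup.sh are assumptions about the checkout, not verified against this exact revision):

```shell
# Rebuild swig_decoders so it links against the newly installed Paddle.
# All paths below are assumptions based on the DeepSpeech repo layout.
rebuild_decoders() {
  cd "$1/decoders/swig" || { echo "decoders dir not found"; return 1; }
  rm -rf build dist      # drop yesterday's build artifacts
  sh setup.sh            # run the repo's own build/install script
}

# Usage: rebuild_decoders /path/to/DeepSpeech
```

After rebuilding, rerunning sh run_train.sh would show whether the failure below persists.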
Log from the run (after yesterday's cudnn problem):

I0315 14:52:31.625025 70572 Util.cpp:166] commandline: --use_gpu=1 --rnn_use_batch=True --log_clipping=True --trainer_count=4
[INFO 2018-03-15 14:52:33,805 layers.py:2714] output for conv_0: c = 32, h = 81, w = 54, size = 139968
[INFO 2018-03-15 14:52:33,806 layers.py:3282] output for batch_norm_0: c = 32, h = 81, w = 54, size = 139968
[INFO 2018-03-15 14:52:33,807 layers.py:7454] output for scale_sub_region_0: c = 32, h = 81, w = 54, size = 139968
[INFO 2018-03-15 14:52:33,808 layers.py:2714] output for conv_1: c = 32, h = 41, w = 54, size = 70848
[INFO 2018-03-15 14:52:33,808 layers.py:3282] output for batch_norm_1: c = 32, h = 41, w = 54, size = 70848
[INFO 2018-03-15 14:52:33,809 layers.py:7454] output for scale_sub_region_1: c = 32, h = 41, w = 54, size = 70848
I0315 14:52:33.829530 70572 MultiGradientMachine.cpp:99] numLogicalDevices=1 numThreads=4 numDevices=4
I0315 14:52:33.940189 70572 GradientMachine.cpp:94] Initing parameters..
I0315 14:52:37.948616 70572 GradientMachine.cpp:101] Init parameters done.
GPU execution requested, but not compiled with GPU support
GPU execution requested, but not compiled with GPU support
F0315 14:52:42.020200 70689 hl_warpctc_wrap.cc:131] Check failed: CTC_STATUS_SUCCESS == dynload::compute_ctc_loss(batchInput, batchGrad, cpuLabels, cpuLabelLengths, cpuInputLengths, numClasses, numSequences, cpuCosts, workspace, *options) (0 vs. 3) warp-ctc [version 2] Error: execution failed
*** Check failure stack trace: ***
    @  0x7f28c1bd3dad  google::LogMessage::Fail()
    @  0x7f28c1bd7f6c  google::LogMessage::SendToLog()
    @  0x7f28c1bd38d3  google::LogMessage::Flush()
    @  0x7f28c1bd89be  google::LogMessageFatal::~LogMessageFatal()
    @  0x7f28c1b8bf91  hl_warpctc_compute_loss()
    @  0x7f28c17fab9f  paddle::WarpCTCLayer::forward()
    @  0x7f28c19410fd  paddle::NeuralNetwork::forward()
    @  0x7f28c194c334  paddle::TrainerThread::forward()
    @  0x7f28c194d625  paddle::TrainerThread::computeThread()
    @  0x7f290b204870  (unknown)
    @  0x7f291697bdc5  start_thread
    @  0x7f2915fa029d  __clone
    @  (nil)  (unknown)
run_train.sh: line 34: 70572 Aborted CUDA_VISIBLE_DEVICES=0,1,2,3 python -u train.py --batch_size=64 --trainer_count=4 --num_passes=50 --num_proc_data=16 --num_conv_layers=2 --num_rnn_layers=3 --rnn_layer_size=1024 --num_iter_print=100 --learning_rate=5e-4 --max_duration=27.0 --min_duration=0.0 --test_off=False --use_sortagrad=True --use_gru=True --use_gpu=True --is_local=True --share_rnn_weights=False --train_manifest='data/aishell/manifest.train' --dev_manifest='data/aishell/manifest.dev' --mean_std_path='data/aishell/mean_std.npz' --vocab_path='data/aishell/vocab.txt' --output_model_dir='./checkpoints/aishell' --augment_conf_path='conf/augmentation.config' --specgram_type='linear' --shuffle_method='batch_shuffle_clipped'
Failed in training!
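Since the fatal check fires inside hl_warpctc_wrap.cc, a quick way to tell whether the warp-ctc shared library itself was built with CUDA is to inspect what it links against. A generic diagnostic sketch (the location of libwarpctc.so inside your install is an assumption; locate it first and adjust the path):

```shell
# Report whether a shared library references CUDA at all.
# A GPU-enabled warp-ctc build should pull in libcudart or similar.
check_gpu_build() {
  if ldd "$1" 2>/dev/null | grep -qi cuda; then
    echo "GPU build"
  else
    echo "CPU-only build"
  fi
}

# Example (path is an assumption; find libwarpctc.so in your Paddle install):
# check_gpu_build /usr/local/lib/libwarpctc.so
```

If this reports a CPU-only build, the "GPU execution requested, but not compiled with GPU support" lines above would be explained by warp-ctc, independent of swig_decoders.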