sh train.sh reports an error
Created by: 18811737901
My parameters:

```
----------- Configuration Arguments -----------
augment_conf_path: conf/augmentation.config
batch_size: 2
dev_manifest: data/aishell/manifest.dev
init_model_path: None
is_local: 1
learning_rate: 0.05
max_duration: 27.0
mean_std_path: data/aishell/mean_std.npz
min_duration: 0.0
num_conv_layers: 2
num_iter_print: 100
num_passes: 50
num_proc_data: 3
num_rnn_layers: 3
output_model_dir: ./checkpoints/aishell
rnn_layer_size: 128
share_rnn_weights: 0
shuffle_method: batch_shuffle_clipped
specgram_type: linear
test_off: 0
train_manifest: data/aishell/manifest.train
trainer_count: 1
use_gpu: 0
use_gru: 1
use_sortagrad: 1
vocab_path: data/aishell/vocab.txt
```

Run output:

```
I0921 16:48:21.802156 1551 Util.cpp:166] commandline: --use_gpu=0 --rnn_use_batch=True --log_clipping=True --trainer_count=1
W0921 16:48:21.802270 1551 CpuId.h:112] PaddlePaddle wasn't compiled to use avx instructions, but these are available on your machine and could speed up CPU computations via CMAKE .. -DWITH_AVX=ON
[INFO 2018-09-21 16:48:21,821 layers.py:2716] output for conv_0: c = 32, h = 81, w = 54, size = 139968
[INFO 2018-09-21 16:48:21,822 layers.py:3361] output for batch_norm_0: c = 32, h = 81, w = 54, size = 139968
[INFO 2018-09-21 16:48:21,823 layers.py:7533] output for scale_sub_region_0: c = 32, h = 81, w = 54, size = 139968
[INFO 2018-09-21 16:48:21,825 layers.py:2716] output for conv_1: c = 32, h = 41, w = 54, size = 70848
[INFO 2018-09-21 16:48:21,827 layers.py:3361] output for batch_norm_1: c = 32, h = 41, w = 54, size = 70848
[INFO 2018-09-21 16:48:21,829 layers.py:7533] output for scale_sub_region_1: c = 32, h = 41, w = 54, size = 70848
I0921 16:48:22.006682 1551 GradientMachine.cpp:94] Initing parameters..
I0921 16:48:22.224419 1551 GradientMachine.cpp:101] Init parameters done.
F0921 16:48:27.090147 1551 hl_cuda_device.cc:565] Check failed: cudaSuccess == cudaStat (0 vs. 35) Cuda Error: CUDA driver version is insufficient for CUDA runtime version
```
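The fatal check is raised from hl_cuda_device.cc, so the installed PaddlePaddle build appears to be GPU-enabled and still initializes CUDA even though --use_gpu=0 was passed, and error code 35 (cudaErrorInsufficientDriver) means the NVIDIA driver on the machine is older than the CUDA runtime that wheel was built against. A minimal diagnostic sketch to confirm the mismatch is below; it assumes libcuda.so and libcudart.so can be found by the loader (on some systems the driver library is only available as libcuda.so.1), and it only uses the standard CUDA calls cuDriverGetVersion and cudaRuntimeGetVersion, nothing from this repo.

```python
import ctypes

# Diagnostic sketch: compare the installed CUDA driver version with the
# CUDA runtime version. Assumption: libcuda.so / libcudart.so are on the
# loader path (use "libcuda.so.1" if the dev symlink is missing).
cuda = ctypes.CDLL("libcuda.so")      # CUDA driver API
cudart = ctypes.CDLL("libcudart.so")  # CUDA runtime API

driver_ver = ctypes.c_int()
runtime_ver = ctypes.c_int()
cuda.cuDriverGetVersion(ctypes.byref(driver_ver))
cudart.cudaRuntimeGetVersion(ctypes.byref(runtime_ver))

# Versions are encoded as major*1000 + minor*10, e.g. 8000 == CUDA 8.0.
# Error 35 (cudaErrorInsufficientDriver) occurs when runtime > driver.
print("driver version :", driver_ver.value)
print("runtime version:", runtime_ver.value)
```

If the runtime version printed is higher than the driver version, the options are to upgrade the NVIDIA driver or to install a CPU-only PaddlePaddle build, since training here is requested with use_gpu: 0 anyway.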