When I run run_infer_golden.sh inside Docker, I get the error below. What is causing it?
Created by: MrCuiHao
At runtime only one GPU was used: a single GTX 1080 Ti with 11 GB of memory.
root@1e103b8c3b19:/DeepSpeech/examples/tiny# sh run_infer_golden.sh
----------- Configuration Arguments -----------
alpha: 2.5
beam_size: 500
beta: 0.3
cutoff_prob: 1.0
cutoff_top_n: 40
decoding_method: ctc_beam_search
error_rate_type: wer
infer_manifest: data/tiny/manifest.test-clean
lang_model_path: models/lm/common_crawl_00.prune01111.trie.klm
mean_std_path: models/librispeech/mean_std.npz
model_path: models/librispeech/params.tar.gz
num_conv_layers: 2
num_proc_bsearch: 8
num_rnn_layers: 3
num_samples: 10
rnn_layer_size: 2048
share_rnn_weights: 1
specgram_type: linear
trainer_count: 1
use_gpu: 1
use_gru: 0
vocab_path: models/librispeech/vocab.txt
I0923 13:29:14.188905 36 Util.cpp:166] commandline: --use_gpu=1 --rnn_use_batch=True --trainer_count=1
[INFO 2019-09-23 13:29:20,204 layers.py:2606] output for conv_0: c = 32, h = 81, w = 54, size = 139968
[INFO 2019-09-23 13:29:20,207 layers.py:3133] output for batch_norm_0: c = 32, h = 81, w = 54, size = 139968
[INFO 2019-09-23 13:29:20,209 layers.py:7224] output for scale_sub_region_0: c = 32, h = 81, w = 54, size = 139968
[INFO 2019-09-23 13:29:20,211 layers.py:2606] output for conv_1: c = 32, h = 41, w = 54, size = 70848
[INFO 2019-09-23 13:29:20,214 layers.py:3133] output for batch_norm_1: c = 32, h = 41, w = 54, size = 70848
[INFO 2019-09-23 13:29:20,215 layers.py:7224] output for scale_sub_region_1: c = 32, h = 41, w = 54, size = 70848
[INFO 2019-09-23 13:29:24,914 model.py:243] begin to initialize the external scorer for decoding
[INFO 2019-09-23 13:33:51,108 model.py:253] language model: is_character_based = 0, max_order = 5, dict_size = 400000
[INFO 2019-09-23 13:33:51,589 model.py:254] end initializing scorer
[INFO 2019-09-23 13:33:51,589 infer.py:103] start inference ...
F0923 13:33:57.753219 36 hl_cuda_device.cc:273] Check failed: cudaSuccess == cudaStat (0 vs. 2) Cuda Error: out of memory
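For context, the failure (`Cuda Error: out of memory`, error code 2 = cudaErrorMemoryAllocation) occurs right after "start inference", i.e. during decoding rather than model loading. A minimal sketch of what I have been trying to reduce GPU memory pressure: re-running with smaller values for the decoding flags that appear in the configuration dump above (whether these flags dominate memory use is my assumption, not something stated in the docs):

```
# Hypothetical tweak, not from run_infer_golden.sh itself: invoke infer.py
# with smaller decoding settings. All flag names are taken verbatim from
# the "Configuration Arguments" dump printed in the log above.
python infer.py \
    --use_gpu=1 \
    --trainer_count=1 \
    --num_samples=2 \
    --beam_size=200 \
    --num_proc_bsearch=4
```

Would lowering these be the right direction, or is something else (e.g. the 2048-unit RNN layers) the real memory consumer on an 11 GB card?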