How do I fix the UnavailableError reported when running demo_server.py with the DeepSpeech2 model?
Created by: YuHuasong123
```
python demo_server.py
----------- Configuration Arguments -----------
alpha: 2.5
beam_size: 500
beta: 0.3
cutoff_prob: 1.0
cutoff_top_n: 40
decoding_method: ctc_beam_search
host_ip: localhost
host_port: 8086
lang_model_path: ./models/lm/zh_giga.no_cna_cmn.prune01244.klm
mean_std_path: ./data/aishell/mean_std.npz
model_path: ./checkpoints/aishell/step_final
num_conv_layers: 2
num_rnn_layers: 3
rnn_layer_size: 2048
share_rnn_weights: True
specgram_type: linear
speech_save_dir: demo_cache
use_gpu: True
use_gru: False
vocab_path: ./data/aishell/vocab.txt
warmup_manifest: ./data/aishell/manifest.test

2020-07-29 17:10:03,314-INFO: begin to initialize the external scorer for decoding
2020-07-29 17:10:23,598-INFO: language model: is_character_based = 1, max_order = 5, dict_size = 0
2020-07-29 17:10:23,598-INFO: end initializing scorer
Warming up ...
('Warm-up Test Case %d: %s', 0, u'./dataset/data_aishell/wav/test/S0913/BAC009S0913W0464.wav')
W0729 17:10:23.813068 1439 device_context.cc:252] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 9.2, Runtime API Version: 9.0
W0729 17:10:23.879007 1439 device_context.cc:260] device: 0, cuDNN Version: 7.3.
/opt/conda/envs/python27-paddle120-env/lib/python2.7/site-packages/paddle/fluid/executor.py:1070: UserWarning: The following exception is not an EOF exception.
  "The following exception is not an EOF exception.")
Traceback (most recent call last):
  File "demo_server.py", line 240, in <module>
    main()
  File "demo_server.py", line 236, in main
    start_server()
  File "demo_server.py", line 221, in start_server
    num_test_cases=3)
  File "demo_server.py", line 138, in warm_up_test
    transcript = audio_process_handler(sample['audio_filepath'])
  File "demo_server.py", line 197, in file_to_transcript
    feeding_dict=data_generator.feeding)
  File "/home/aistudio/DeepSpeech/model_utils/model.py", line 411, in infer_batch_probs
    self.init_from_pretrained_model(exe, infer_program)
  File "/home/aistudio/DeepSpeech/model_utils/model.py", line 161, in init_from_pretrained_model
    filename="params.pdparams")
  File "/opt/conda/envs/python27-paddle120-env/lib/python2.7/site-packages/paddle/fluid/io.py", line 876, in load_params
    filename=filename)
  File "/opt/conda/envs/python27-paddle120-env/lib/python2.7/site-packages/paddle/fluid/io.py", line 750, in load_vars
    filename=filename)
  File "/opt/conda/envs/python27-paddle120-env/lib/python2.7/site-packages/paddle/fluid/io.py", line 804, in load_vars
    executor.run(load_prog)
  File "/opt/conda/envs/python27-paddle120-env/lib/python2.7/site-packages/paddle/fluid/executor.py", line 1071, in run
    six.reraise(*sys.exc_info())
  File "/opt/conda/envs/python27-paddle120-env/lib/python2.7/site-packages/paddle/fluid/executor.py", line 1066, in run
    return_merged=return_merged)
  File "/opt/conda/envs/python27-paddle120-env/lib/python2.7/site-packages/paddle/fluid/executor.py", line 1154, in _run_impl
    use_program_cache=use_program_cache)
  File "/opt/conda/envs/python27-paddle120-env/lib/python2.7/site-packages/paddle/fluid/executor.py", line 1229, in _run_program
    fetch_var_name)
paddle.fluid.core_avx.EnforceNotMet:

C++ Call Stacks (More useful to developers):
0   std::string paddle::platform::GetTraceBackString<std::string const&>(std::string const&, char const*, int)
1   paddle::platform::EnforceNotMet::EnforceNotMet(std::string const&, char const*, int)
2   paddle::operators::LoadCombineOpKernel<paddle::platform::CUDADeviceContext, float>::LoadParamsFromBuffer(paddle::framework::ExecutionContext const&, paddle::platform::Place const&, std::istream*, bool, std::vector<std::string, std::allocator<std::string> > const&) const
3   paddle::operators::LoadCombineOpKernel<paddle::platform::CUDADeviceContext, float>::Compute(paddle::framework::ExecutionContext const&) const
4   std::_Function_handler<void (paddle::framework::ExecutionContext const&), paddle::framework::OpKernelRegistrarFunctor<paddle::platform::CUDAPlace, false, 0ul, paddle::operators::LoadCombineOpKernel<paddle::platform::CUDADeviceContext, float>, paddle::operators::LoadCombineOpKernel<paddle::platform::CUDADeviceContext, double>, paddle::operators::LoadCombineOpKernel<paddle::platform::CUDADeviceContext, int>, paddle::operators::LoadCombineOpKernel<paddle::platform::CUDADeviceContext, signed char>, paddle::operators::LoadCombineOpKernel<paddle::platform::CUDADeviceContext, long> >::operator()(char const*, char const*, int) const::{lambda(paddle::framework::ExecutionContext const&)#1}>::_M_invoke(std::_Any_data const&, paddle::framework::ExecutionContext const&)
5   paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&, paddle::framework::RuntimeContext*) const
6   paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&) const
7   paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, paddle::platform::Place const&)
8   paddle::framework::Executor::RunPartialPreparedContext(paddle::framework::ExecutorPrepareContext*, paddle::framework::Scope*, long, long, bool, bool, bool)
9   paddle::framework::Executor::RunPreparedContext(paddle::framework::ExecutorPrepareContext*, paddle::framework::Scope*, bool, bool, bool)
10  paddle::framework::Executor::Run(paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool, std::vector<std::string, std::allocator<std::string> > const&, bool, bool)

Python Call Stacks (More useful to users):
  File "/opt/conda/envs/python27-paddle120-env/lib/python2.7/site-packages/paddle/fluid/framework.py", line 2610, in append_op
    attrs=kwargs.get("attrs", None))
  File "/opt/conda/envs/python27-paddle120-env/lib/python2.7/site-packages/paddle/fluid/io.py", line 802, in load_vars
    'model_from_memory': vars_from_memory
  File "/opt/conda/envs/python27-paddle120-env/lib/python2.7/site-packages/paddle/fluid/io.py", line 750, in load_vars
    filename=filename)
  File "/opt/conda/envs/python27-paddle120-env/lib/python2.7/site-packages/paddle/fluid/io.py", line 876, in load_params
    filename=filename)
  File "/home/aistudio/DeepSpeech/model_utils/model.py", line 161, in init_from_pretrained_model
    filename="params.pdparams")
  File "/home/aistudio/DeepSpeech/model_utils/model.py", line 411, in infer_batch_probs
    self.init_from_pretrained_model(exe, infer_program)
  File "demo_server.py", line 197, in file_to_transcript
    feeding_dict=data_generator.feeding)
  File "demo_server.py", line 138, in warm_up_test
    transcript = audio_process_handler(sample['audio_filepath'])
  File "demo_server.py", line 221, in start_server
    num_test_cases=3)
  File "demo_server.py", line 236, in main
    start_server()
  File "demo_server.py", line 240, in <module>
    main()

Error Message Summary:
UnavailableError: Not allowed to load partial data via load_combine_op, please use load_op instead.
  [Hint: Expected buffer->eof() == true, but received buffer->eof():0 != true:1.]
  at (/paddle/paddle/fluid/operators/load_combine_op.h:115)
  [operator < load_combine > error]
```
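The error is raised inside `load_combine_op` while `init_from_pretrained_model` loads `./checkpoints/aishell/step_final` with `filename="params.pdparams"`: the combined parameter file ends before all parameters of the inference program have been read. That usually points to a truncated or incomplete checkpoint, a checkpoint exported with a different network configuration (`num_conv_layers` / `num_rnn_layers` / `rnn_layer_size`), or parameters that were saved as one file per variable instead of a single combined file. The sketch below is not the project's code, only a hypothetical `load_pretrained` helper for diagnosing which case applies; it assumes the Paddle 1.x `fluid.io.load_params` API seen in the traceback.

```python
# Hypothetical diagnostic helper (an assumption, not DeepSpeech's own code):
# choose the loading mode based on what the checkpoint directory actually contains.
import os
import paddle.fluid as fluid

def load_pretrained(exe, program, model_path):
    """Load parameters saved either as one combined file or one file per variable."""
    combined = os.path.join(model_path, "params.pdparams")
    if os.path.isfile(combined):
        # Combined file: every parameter of `program` must be present in it,
        # otherwise load_combine_op hits EOF early and raises the error above.
        fluid.io.load_params(exe, model_path, main_program=program,
                             filename="params.pdparams")
    else:
        # Separate per-variable files: load via load_op, as the error hint suggests,
        # by omitting `filename`.
        fluid.io.load_params(exe, model_path, main_program=program)
```

If the directory only holds separate per-variable files, dropping `filename` (the `else` branch) may be enough. If `params.pdparams` exists but still triggers the EOF hint, re-downloading or re-exporting the checkpoint, and making sure the `num_conv_layers` / `num_rnn_layers` / `rnn_layer_size` flags match the model that produced it, is more likely the real fix.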