Error: OP(LoadCombine) fail to open file models/BaiduCN1.2k/baidu_cn1.2k_model_fluid.tar.gz/params.pdparams
Created by: bigcash
My goal is to run a speech-to-text test against audio files using the baidu_cn1.2k model, on CentOS 7 in a CPU-only environment, installed directly rather than via Docker. When starting the server I hit the following error:
```
(baidu27) [lb@centos702 DeepSpeech]$ python deploy/demo_server.py --host_ip 172.16.1.190 --host_port 10010 --warmup_manifest data/aishell/manifest.test --mean_std_path models/BaiduCN1.2k/mean_std.npz --vocab_path models/BaiduCN1.2k/vocab.txt --model_path models/BaiduCN1.2k/baidu_cn1.2k_model_fluid.tar.gz --lang_model_path models/lm/zh_giga.no_cna_cmn.prune01244.klm --use_gpu False
----------- Configuration Arguments -----------
alpha: 2.5
beam_size: 500
beta: 0.3
cutoff_prob: 1.0
cutoff_top_n: 40
decoding_method: ctc_beam_search
host_ip: 172.16.1.190
host_port: 10010
lang_model_path: models/lm/zh_giga.no_cna_cmn.prune01244.klm
mean_std_path: models/BaiduCN1.2k/mean_std.npz
model_path: models/BaiduCN1.2k/baidu_cn1.2k_model_fluid.tar.gz
num_conv_layers: 2
num_rnn_layers: 3
rnn_layer_size: 2048
share_rnn_weights: True
specgram_type: linear
speech_save_dir: demo_cache
use_gpu: 0
use_gru: False
vocab_path: models/BaiduCN1.2k/vocab.txt
warmup_manifest: data/aishell/manifest.test
2019-12-31 10:16:49,765-INFO: begin to initialize the external scorer for decoding
2019-12-31 10:16:56,318-INFO: language model: is_character_based = 1, max_order = 5, dict_size = 0
2019-12-31 10:16:56,318-INFO: end initializing scorer
Warming up ...
('Warm-up Test Case %d: %s', 0, u'./dataset/aishell/data_aishell/wav/test/S0913/BAC009S0913W0464.wav')
/home/lingbao/anaconda3/envs/baidu27/lib/python2.7/site-packages/paddle/fluid/executor.py:779: UserWarning: The following exception is not an EOF exception.
  "The following exception is not an EOF exception.")
Traceback (most recent call last):
  File "deploy/demo_server.py", line 238, in <module>
    main()
  File "deploy/demo_server.py", line 234, in main
    start_server()
  File "deploy/demo_server.py", line 219, in start_server
    num_test_cases=3)
  File "deploy/demo_server.py", line 136, in warm_up_test
    transcript = audio_process_handler(sample['audio_filepath'])
  File "deploy/demo_server.py", line 195, in file_to_transcript
    feeding_dict=data_generator.feeding)
  File "/home/lingbao/work/DeepSpeech/deploy/../model_utils/model.py", line 412, in infer_batch_probs
    self.init_from_pretrained_model(exe, infer_program)
  File "/home/lingbao/work/DeepSpeech/deploy/../model_utils/model.py", line 161, in init_from_pretrained_model
    filename="params.pdparams")
  File "/home/lingbao/anaconda3/envs/baidu27/lib/python2.7/site-packages/paddle/fluid/io.py", line 798, in load_params
    filename=filename)
  File "/home/lingbao/anaconda3/envs/baidu27/lib/python2.7/site-packages/paddle/fluid/io.py", line 682, in load_vars
    filename=filename)
  File "/home/lingbao/anaconda3/envs/baidu27/lib/python2.7/site-packages/paddle/fluid/io.py", line 726, in load_vars
    executor.run(load_prog)
  File "/home/lingbao/anaconda3/envs/baidu27/lib/python2.7/site-packages/paddle/fluid/executor.py", line 780, in run
    six.reraise(*sys.exc_info())
  File "/home/lingbao/anaconda3/envs/baidu27/lib/python2.7/site-packages/paddle/fluid/executor.py", line 775, in run
    use_program_cache=use_program_cache)
  File "/home/lingbao/anaconda3/envs/baidu27/lib/python2.7/site-packages/paddle/fluid/executor.py", line 822, in _run_impl
    use_program_cache=use_program_cache)
  File "/home/lingbao/anaconda3/envs/baidu27/lib/python2.7/site-packages/paddle/fluid/executor.py", line 899, in _run_program
    fetch_var_name)
paddle.fluid.core_avx.EnforceNotMet:

C++ Call Stacks (More useful to developers):
0   std::string paddle::platform::GetTraceBackString<char const*>(char const*&&, char const*, int)
1   paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int)
2   paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, float>::Compute(paddle::framework::ExecutionContext const&) const
3   std::_Function_handler<void (paddle::framework::ExecutionContext const&), paddle::framework::OpKernelRegistrarFunctor<paddle::platform::CPUPlace, false, 0ul, paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, float>, paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, double>, paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, int>, paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, signed char>, paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, long> >::operator()(char const*, char const*, int) const::{lambda(paddle::framework::ExecutionContext const&)#1}>::_M_invoke(std::_Any_data const&, paddle::framework::ExecutionContext const&)
4   paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&, paddle::framework::RuntimeContext*) const
5   paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&) const
6   paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, paddle::platform::Place const&)
7   paddle::framework::Executor::RunPreparedContext(paddle::framework::ExecutorPrepareContext*, paddle::framework::Scope*, bool, bool, bool)
8   paddle::framework::Executor::Run(paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool, std::vector<std::string, std::allocator<std::string> > const&, bool)

Python Call Stacks (More useful to users):
  File "/home/lingbao/anaconda3/envs/baidu27/lib/python2.7/site-packages/paddle/fluid/framework.py", line 2488, in append_op
    attrs=kwargs.get("attrs", None))
  File "/home/lingbao/anaconda3/envs/baidu27/lib/python2.7/site-packages/paddle/fluid/io.py", line 725, in load_vars
    attrs={'file_path': os.path.join(load_dirname, filename)})
  File "/home/lingbao/anaconda3/envs/baidu27/lib/python2.7/site-packages/paddle/fluid/io.py", line 682, in load_vars
    filename=filename)
  File "/home/lingbao/anaconda3/envs/baidu27/lib/python2.7/site-packages/paddle/fluid/io.py", line 798, in load_params
    filename=filename)
  File "/home/lingbao/work/DeepSpeech/deploy/../model_utils/model.py", line 161, in init_from_pretrained_model
    filename="params.pdparams")
  File "/home/lingbao/work/DeepSpeech/deploy/../model_utils/model.py", line 412, in infer_batch_probs
    self.init_from_pretrained_model(exe, infer_program)
  File "deploy/demo_server.py", line 195, in file_to_transcript
    feeding_dict=data_generator.feeding)
  File "deploy/demo_server.py", line 136, in warm_up_test
    transcript = audio_process_handler(sample['audio_filepath'])
  File "deploy/demo_server.py", line 219, in start_server
    num_test_cases=3)
  File "deploy/demo_server.py", line 234, in main
    start_server()
  File "deploy/demo_server.py", line 238, in <module>
    main()

Error Message Summary:
Error: OP(LoadCombine) fail to open file models/BaiduCN1.2k/baidu_cn1.2k_model_fluid.tar.gz/params.pdparams, please check whether the model file is complete or damaged. at (/paddle/paddle/fluid/operators/load_combine_op.h:46) [operator < load_combine > error]
```
The job fails at startup, and the message suggests it could not read the model file from the tar.gz. However, I downloaded the model from the README on GitHub, and repeated downloads all have the same byte count, so the file itself should be fine. I have also installed pkg-config, flac, ogg, vorbis, boost, and swig on CentOS 7. Could someone help me figure out where I went wrong?
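For what it's worth, the failing path `models/BaiduCN1.2k/baidu_cn1.2k_model_fluid.tar.gz/params.pdparams` suggests the loader treats `--model_path` as a directory and appends `params.pdparams` to it, so passing the `.tar.gz` archive directly may be the real problem. Below is a minimal sketch of what I plan to try, assuming `params.pdparams` sits at the top level of the archive; the extraction directory name is my own choice, not something from the README:

```bash
# Inspect the archive first to see where params.pdparams actually lives.
tar -tzf models/BaiduCN1.2k/baidu_cn1.2k_model_fluid.tar.gz | head

# Extract into a plain directory and point --model_path at that directory.
mkdir -p models/BaiduCN1.2k/baidu_cn1.2k_model_fluid
tar -xzf models/BaiduCN1.2k/baidu_cn1.2k_model_fluid.tar.gz \
    -C models/BaiduCN1.2k/baidu_cn1.2k_model_fluid

# Re-run the server against the extracted directory instead of the .tar.gz.
python deploy/demo_server.py \
    --host_ip 172.16.1.190 \
    --host_port 10010 \
    --warmup_manifest data/aishell/manifest.test \
    --mean_std_path models/BaiduCN1.2k/mean_std.npz \
    --vocab_path models/BaiduCN1.2k/vocab.txt \
    --model_path models/BaiduCN1.2k/baidu_cn1.2k_model_fluid \
    --lang_model_path models/lm/zh_giga.no_cna_cmn.prune01244.klm \
    --use_gpu False
```

If extraction puts `params.pdparams` into a nested subdirectory instead, I assume `--model_path` would need to point at that subdirectory.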