PaddlePaddle / DeepSpeech · Issue #455
Opened May 25, 2020 by saxon_zh (Guest)

InvalidArgumentError: Cannot parse tensor desc [Hint: Expected desc.ParseFromArray(buf.get(), size) == true, but received desc.ParseFromArray(buf.get(), size):0 != true:1.] at (/paddle/paddle/fluid/framework/tensor_util.cc:527) [operator < load_com...

Created by: libraster

When I try to run demo_server.py or infer.py, I get the error above. I am using paddlepaddle==1.8.0 on CPU; a quick environment check is sketched below, followed by the complete traceback.
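A minimal sketch of such a check, assuming a plain paddlepaddle==1.8.0 CPU wheel; `paddle.__version__`, `fluid.is_compiled_with_cuda()`, and `fluid.install_check.run_check()` are stock PaddlePaddle helpers and the expected values in the comments reflect the setup described here:

```python
# Minimal environment check (a sketch, assuming a plain paddlepaddle==1.8.0
# CPU wheel): confirms which build the scripts actually import.
import paddle
import paddle.fluid as fluid

print(paddle.__version__)              # expected: 1.8.0
print(fluid.is_compiled_with_cuda())   # expected: False for a CPU-only build

# Builds and runs a tiny program on the installed device to verify the
# installation itself works before attempting the failing parameter load.
fluid.install_check.run_check()
```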

```
[libprotobuf ERROR /paddle/build/third_party/protobuf/src/extern_protobuf/src/google/protobuf/message_lite.cc:119] Can't parse message of type "paddle.framework.proto.VarType.TensorDesc" because it is missing required fields: data_type
/home/miniconda3/envs/baidu/lib/python2.7/site-packages/paddle/fluid/executor.py:1070: UserWarning: The following exception is not an EOF exception.
  "The following exception is not an EOF exception.")
Traceback (most recent call last):
  File "demo_server.py", line 238, in <module>
    main()
  File "demo_server.py", line 234, in main
    start_server()
  File "demo_server.py", line 219, in start_server
    num_test_cases=1)
  File "demo_server.py", line 136, in warm_up_test
    transcript = audio_process_handler(sample['audio_filepath'])
  File "demo_server.py", line 195, in file_to_transcript
    feeding_dict=data_generator.feeding)
  File "/home/Desktop/baidu/DeepSpeech/deploy/../model_utils/model.py", line 411, in infer_batch_probs
    self.init_from_pretrained_model(exe, infer_program)
  File "/home/Desktop/baidu/DeepSpeech/deploy/../model_utils/model.py", line 161, in init_from_pretrained_model
    filename="params.pdparams")
  File "/home/miniconda3/envs/baidu/lib/python2.7/site-packages/paddle/fluid/io.py", line 876, in load_params
    filename=filename)
  File "/home/miniconda3/envs/baidu/lib/python2.7/site-packages/paddle/fluid/io.py", line 750, in load_vars
    filename=filename)
  File "/home/miniconda3/envs/baidu/lib/python2.7/site-packages/paddle/fluid/io.py", line 804, in load_vars
    executor.run(load_prog)
  File "/home/miniconda3/envs/baidu/lib/python2.7/site-packages/paddle/fluid/executor.py", line 1071, in run
    six.reraise(*sys.exc_info())
  File "/home/miniconda3/envs/baidu/lib/python2.7/site-packages/paddle/fluid/executor.py", line 1066, in run
    return_merged=return_merged)
  File "/home/miniconda3/envs/baidu/lib/python2.7/site-packages/paddle/fluid/executor.py", line 1154, in _run_impl
    use_program_cache=use_program_cache)
  File "/home/miniconda3/envs/baidu/lib/python2.7/site-packages/paddle/fluid/executor.py", line 1229, in _run_program
    fetch_var_name)
paddle.fluid.core_avx.EnforceNotMet:

C++ Call Stacks (More useful to developers):

0   std::string paddle::platform::GetTraceBackString<std::string const&>(std::string const&, char const*, int)
1   paddle::platform::EnforceNotMet::EnforceNotMet(std::string const&, char const*, int)
2   paddle::framework::TensorFromStream(std::istream&, paddle::framework::Tensor*, paddle::platform::DeviceContext const&)
3   paddle::framework::DeserializeFromStream(std::istream&, paddle::framework::LoDTensor*, paddle::platform::DeviceContext const&)
4   paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, float>::LoadParamsFromBuffer(paddle::framework::ExecutionContext const&, paddle::platform::Place const&, std::istream*, bool, std::vector<std::string, std::allocator<std::string> > const&) const
5   paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, float>::Compute(paddle::framework::ExecutionContext const&) const
6   std::_Function_handler<void (paddle::framework::ExecutionContext const&), paddle::framework::OpKernelRegistrarFunctor<paddle::platform::CPUPlace, false, 0ul, paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, float>, paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, double>, paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, int>, paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, signed char>, paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, long> >::operator()(char const*, char const*, int) const::{lambda(paddle::framework::ExecutionContext const&)#1}>::_M_invoke(std::_Any_data const&, paddle::framework::ExecutionContext const&)
7   paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&, paddle::framework::RuntimeContext*) const
8   paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&) const
9   paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, paddle::platform::Place const&)
10  paddle::framework::Executor::RunPartialPreparedContext(paddle::framework::ExecutorPrepareContext*, paddle::framework::Scope*, long, long, bool, bool, bool)
11  paddle::framework::Executor::RunPreparedContext(paddle::framework::ExecutorPrepareContext*, paddle::framework::Scope*, bool, bool, bool)
12  paddle::framework::Executor::Run(paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool, std::vector<std::string, std::allocator<std::string> > const&, bool, bool)

Python Call Stacks (More useful to users):

  File "/home/miniconda3/envs/baidu/lib/python2.7/site-packages/paddle/fluid/framework.py", line 2610, in append_op
    attrs=kwargs.get("attrs", None))
  File "/home/miniconda3/envs/baidu/lib/python2.7/site-packages/paddle/fluid/io.py", line 802, in load_vars
    'model_from_memory': vars_from_memory
  File "/home/miniconda3/envs/baidu/lib/python2.7/site-packages/paddle/fluid/io.py", line 750, in load_vars
    filename=filename)
  File "/home/miniconda3/envs/baidu/lib/python2.7/site-packages/paddle/fluid/io.py", line 876, in load_params
    filename=filename)
  File "/home/Desktop/baidu/DeepSpeech/deploy/../model_utils/model.py", line 161, in init_from_pretrained_model
    filename="params.pdparams")
  File "/home/Desktop/baidu/DeepSpeech/deploy/../model_utils/model.py", line 411, in infer_batch_probs
    self.init_from_pretrained_model(exe, infer_program)
  File "demo_server.py", line 195, in file_to_transcript
    feeding_dict=data_generator.feeding)
  File "demo_server.py", line 136, in warm_up_test
    transcript = audio_process_handler(sample['audio_filepath'])
  File "demo_server.py", line 219, in start_server
    num_test_cases=1)
  File "demo_server.py", line 234, in main
    start_server()
  File "demo_server.py", line 238, in <module>
    main()

Error Message Summary:

InvalidArgumentError: Cannot parse tensor desc
  [Hint: Expected desc.ParseFromArray(buf.get(), size) == true, but received desc.ParseFromArray(buf.get(), size):0 != true:1.] at (/paddle/paddle/fluid/framework/tensor_util.cc:527)
  [operator < load_combine > error]
```
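For reference, the Python call stack shows the failure happens inside `fluid.io.load_params` called with `filename="params.pdparams"`, which loads all parameters from one combined file through a single `load_combine` op. A minimal sketch of that loading step, with a placeholder checkpoint directory and program rather than the actual DeepSpeech code, looks like this:

```python
# Illustrative sketch of the loading step in the traceback; the checkpoint
# directory and the program below are placeholders, not the real DeepSpeech code.
import paddle.fluid as fluid

place = fluid.CPUPlace()                      # CPU-only run, as in the report
exe = fluid.Executor(place)

infer_program = fluid.default_main_program()  # stands in for the real inference program

# Passing `filename` switches load_params into combined-file mode: one
# load_combine op reads every persistable parameter from
# <dirname>/params.pdparams in a single pass.
fluid.io.load_params(
    executor=exe,
    dirname="checkpoints/deepspeech",         # hypothetical checkpoint directory
    main_program=infer_program,
    filename="params.pdparams")
```

In combined-file mode the whole params.pdparams must be a valid serialized parameter bundle, so an incomplete download or a checkpoint saved in per-variable format can produce this kind of "Cannot parse tensor desc" failure from the load_combine operator.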
