PaddlePaddle / Paddle · Issue #24871
Opened June 03, 2020 by saxon_zh (Guest)

After using fluid.io.load_inference_model, running inference fails with an error with a certain probability

Created by: Edwardwaw

Environment: Python 3.7, Paddle 1.7.1, AI Studio. When a model is loaded with load_inference_model and used for prediction, the run fails with a certain probability. I have confirmed that the problem appears in models saved after certain changes to the training code, for example adding image normalization in the dataloader.
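
For reference, the normalization change mentioned above looks roughly like the following sketch; the mean222/std222 constants match the reproduction script below, but the reader function itself is hypothetical, not the actual training code.

import numpy as np

mean222 = np.array([0.485, 0.456, 0.406]).reshape((1, 1, 3))
std222 = np.array([0.229, 0.224, 0.225]).reshape((1, 1, 3))

def normalize(img):
    # img: HWC uint8 image -> float32 in [0, 1], then channel-wise standardization
    img = img.astype('float32') / 255.0
    return ((img - mean222) / std222).astype('float32')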

import numpy as np
from PIL import Image
import paddle.fluid as fluid
import matplotlib.pyplot as plt
import zipfile

# Normalization constants used in the training dataloader (not applied in this
# inference script).
mean222 = np.array([0.485, 0.456, 0.406]).reshape((1, 1, 3))
std222 = np.array([0.229, 0.224, 0.225]).reshape((1, 1, 3))

# Collect the member names of the test archive (skip the leading directory entry).
test_zfile = zipfile.ZipFile("data/data37389/test_new.zip")
l_test = []
for test_fname in test_zfile.namelist()[1:]:
    l_test.append(test_fname)

# Read one test image directly from the archive.
test_img = Image.open(test_zfile.open(l_test[88]))

plt.imshow(test_img)
plt.show()

# Resize, scale to [0, 1] and rearrange into the 4-D float32 batch fed to the model.
test_img = test_img.resize((640, 480))
test_im = np.array(test_img)
test_im = test_im / 255.0
test_im = test_im.transpose().reshape(1, 3, 640, 480).astype('float32')

new_scope = fluid.Scope()
place = fluid.CUDAPlace(0)
infer_exe = fluid.Executor(place)
# infer_exe.run(fluid.default_startup_program())
path = "./test_modelProblem/0/"
with fluid.scope_guard(new_scope):
    # Load the saved inference program and run a single prediction.
    [inference_program, feed_target_names, fetch_targets] = fluid.io.load_inference_model(dirname=path, executor=infer_exe)
    result = infer_exe.run(inference_program,
                           feed={feed_target_names[0]: test_im},
                           fetch_list=fetch_targets)
    print(result[0].shape)
    res = result[0][0]
    # print(result)
    # plt.imshow(result,cmap=CM.jet)
    print(np.sum(result[0][0][0]))
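
Running the cell above fails intermittently with the traceback reproduced below. As a quick sanity check (a sketch only; it reuses new_scope, infer_exe and path from the script above), one can dump the operators of the loaded program to see whether training-only ops survived the export:

with fluid.scope_guard(new_scope):
    [inference_program, feed_target_names, fetch_targets] = fluid.io.load_inference_model(dirname=path, executor=infer_exe)
    # Inputs the program still expects to be fed.
    print(feed_target_names)
    # Operator types in the main block; an elementwise_sub coming from the loss
    # would indicate that the program was not pruned to the prediction output.
    for op in inference_program.global_block().ops:
        print(op.type)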

/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/executor.py:782: UserWarning: The following exception is not an EOF exception.
  "The following exception is not an EOF exception.")
---------------------------------------------------------------------------
EnforceNotMet                             Traceback (most recent call last)
<ipython-input> in <module>
     29 with fluid.scope_guard(new_scope):
     30     [inference_program, feed_target_names, fetch_targets] = fluid.io.load_inference_model(dirname=path, executor=infer_exe)
---> 31     result=infer_exe.run(inference_program,feed={feed_target_names[0]: test_im},fetch_list=fetch_targets)
     32     print(result[0].shape)
     33     res = result[0][0]

/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/executor.py in run(self, program, feed, fetch_list, feed_var_name, fetch_var_name, scope, return_numpy, use_program_cache)
    781             warnings.warn(
    782                 "The following exception is not an EOF exception.")
--> 783             six.reraise(*sys.exc_info())
    784
    785     def _run_impl(self, program, feed, fetch_list, feed_var_name,

/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/six.py in reraise(tp, value, tb)
    691             if value.__traceback__ is not tb:
    692                 raise value.with_traceback(tb)
--> 693             raise value
    694         finally:
    695             value = None

/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/executor.py in run(self, program, feed, fetch_list, feed_var_name, fetch_var_name, scope, return_numpy, use_program_cache)
    776                 scope=scope,
    777                 return_numpy=return_numpy,
--> 778                 use_program_cache=use_program_cache)
    779         except Exception as e:
    780             if not isinstance(e, core.EOFException):

/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/executor.py in _run_impl(self, program, feed, fetch_list, feed_var_name, fetch_var_name, scope, return_numpy, use_program_cache)
    829                 scope=scope,
    830                 return_numpy=return_numpy,
--> 831                 use_program_cache=use_program_cache)
    832
    833         program._compile(scope, self.place)

/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/executor.py in _run_program(self, program, feed, fetch_list, feed_var_name, fetch_var_name, scope, return_numpy, use_program_cache)
    903         if not use_program_cache:
    904             self._default_executor.run(program.desc, scope, 0, True, True,
--> 905                                        fetch_var_name)
    906         else:
    907             self._default_executor.run_prepared_ctx(ctx, scope, False, False,

EnforceNotMet:


C++ Call Stacks (More useful to developers):

0   std::string paddle::platform::GetTraceBackString<std::string const&>(std::string const&, char const*, int)
1   paddle::platform::EnforceNotMet::EnforceNotMet(std::string const&, char const*, int)
2   paddle::framework::Tensor::check_memory_size() const
3   float const* paddle::framework::Tensor::data() const
4   void paddle::operators::ElementwiseComputeEx<paddle::operators::SubFunctor<float, void>, paddle::platform::CUDADeviceContext, float, float>(paddle::framework::ExecutionContext const&, paddle::framework::Tensor const*, paddle::framework::Tensor const*, int, paddle::operators::SubFunctor<float, void>, paddle::framework::Tensor*)
5   void paddle::operators::default_elementwise_sub<paddle::platform::CUDADeviceContext, float>(paddle::framework::ExecutionContext const&, paddle::framework::Tensor const*, paddle::framework::Tensor const*, paddle::framework::Tensor*)
6   paddle::operators::ElementwiseSubKernel<paddle::platform::CUDADeviceContext, float>::Compute(paddle::framework::ExecutionContext const&) const
7   std::_Function_handler<void (paddle::framework::ExecutionContext const&), paddle::framework::OpKernelRegistrarFunctor<paddle::platform::CUDAPlace, false, 0ul, paddle::operators::ElementwiseSubKernel<paddle::platform::CUDADeviceContext, float>, paddle::operators::ElementwiseSubKernel<paddle::platform::CUDADeviceContext, paddle::platform::float16>, paddle::operators::ElementwiseSubKernel<paddle::platform::CUDADeviceContext, double>, paddle::operators::ElementwiseSubKernel<paddle::platform::CUDADeviceContext, int>, paddle::operators::ElementwiseSubKernel<paddle::platform::CUDADeviceContext, long> >::operator()(char const*, char const*, int) const::{lambda(paddle::framework::ExecutionContext const&)#1}>::_M_invoke(std::_Any_data const&, paddle::framework::ExecutionContext const&)
8   paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&, paddle::framework::RuntimeContext*) const
9   paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&) const
10  paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, paddle::platform::Place const&)
11  paddle::framework::Executor::RunPreparedContext(paddle::framework::ExecutorPrepareContext*, paddle::framework::Scope*, bool, bool, bool)
12  paddle::framework::Executor::Run(paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool, std::vector<std::string, std::allocator<std::string> > const&, bool, bool)


Python Call Stacks (More useful to users):

File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/framework.py", line 2525, in append_op attrs=kwargs.get("attrs", None)) File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layer_helper.py", line 43, in append_op return self.main_program.current_block().append_op(*args, **kwargs) File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layers/loss.py", line 343, in square_error_cost outputs={'Out': [minus_out]}) File "", line 14, in cost1 = fluid.layers.square_error_cost(input=predict, label=label) File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3265, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3183, in run_ast_nodes if (yield from self.run_code(code, result)): File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3018, in run_cell_async interactivity=interactivity, compiler=compiler, result=result) File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/IPython/core/async_helpers.py", line 67, in _pseudo_sync_runner coro.send(None) File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 2843, in _run_cell return runner(coro) File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 2817, in run_cell raw_cell, store_history, silent, shell_futures) File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/ipykernel/zmqshell.py", line 536, in run_cell return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs) File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/ipykernel/ipkernel.py", line 294, in do_execute res = shell.run_cell(code, store_history=store_history, silent=silent) File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/tornado/gen.py", line 326, in wrapper yielded = next(result) File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/ipykernel/kernelbase.py", line 534, in execute_request user_expressions, allow_stdin, File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/tornado/gen.py", line 326, in wrapper yielded = next(result) File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/ipykernel/kernelbase.py", line 267, in dispatch_shell yield gen.maybe_future(handler(stream, idents, msg)) File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/tornado/gen.py", line 326, in wrapper yielded = next(result) File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/ipykernel/kernelbase.py", line 357, in process_one yield gen.maybe_future(dispatch(*args)) File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/tornado/gen.py", line 1147, in run yielded = self.gen.send(value) File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/tornado/gen.py", line 1233, in inner self.run() File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/tornado/stack_context.py", line 300, in null_wrapper return fn(*args, **kwargs) File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/tornado/ioloop.py", line 758, in _run_callback ret = callback() File 
"/opt/conda/envs/python35-paddle120-env/lib/python3.7/asyncio/events.py", line 88, in _run self._context.run(self._callback, *self._args) File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/asyncio/base_events.py", line 1771, in _run_once handle._run() File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/asyncio/base_events.py", line 534, in run_forever self._run_once() File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/tornado/platform/asyncio.py", line 132, in start self.asyncio_loop.run_forever() File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/ipykernel/kernelapp.py", line 505, in start self.io_loop.start() File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/traitlets/config/application.py", line 664, in launch_instance app.start() File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/ipykernel_launcher.py", line 16, in app.launch_new_instance() File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/runpy.py", line 193, in _run_module_as_main "main", mod_spec)


Error Message Summary:

Error: Tensor holds no memory. Call Tensor::mutable_data first. [Hint: holder_ should not be null.] at (/paddle/paddle/fluid/framework/tensor.cc:23) [operator < elementwise_sub > error]
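
The Python call stack above points at square_error_cost (cost1 = fluid.layers.square_error_cost(input=predict, label=label)), which suggests the elementwise_sub that fails was created by the training loss: its label input is never fed at inference time, so the tensor holds no memory. If the loss really ended up in the exported model, one possible fix on the saving side is to prune the program to the prediction output before export. A minimal sketch with a placeholder network (image, label, predict and the conv layer are illustrative, not taken from the issue):

import paddle.fluid as fluid

# Placeholder graph standing in for the real training network described above.
image = fluid.layers.data(name='image', shape=[3, 640, 480], dtype='float32')
label = fluid.layers.data(name='label', shape=[1, 640, 480], dtype='float32')
predict = fluid.layers.conv2d(image, num_filters=1, filter_size=3, padding=1)
cost = fluid.layers.square_error_cost(input=predict, label=label)

place = fluid.CUDAPlace(0)
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())

# Prune to the prediction output: the label variable and the loss operators are
# dropped, so the exported program only needs `image` at inference time.
fluid.io.save_inference_model(
    dirname="./test_modelProblem/0/",
    feeded_var_names=[image.name],
    target_vars=[predict],
    executor=exe,
    main_program=fluid.default_main_program().clone(for_test=True))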
