PaddlePaddle / Paddle · Issue #14624
Closed · Opened November 27, 2018 by saxon_zh (Guest)

holder_ should not be null

Created by: dubhex

Version and environment info:
1) PaddlePaddle version: 1.02
3) GPU: K40
4) OS: CentOS 6.5

  • Training setup: 1) single machine, single GPU card
  • Problem description: Training works fine on CPU, but after switching to GPU it fails with paddle.fluid.core.EnforceNotMet: holder_ should not be null / Tensor holds no memory. The full error output is:

Traceback (most recent call last):
  File "train_fan_score.py", line 174, in <module>
    train_score()
  File "train_fan_score.py", line 94, in train_score
    fetch_list = [lr.name, avg_cost.name, label_softmax.name, main_label.name], feed = feeder.feed(data))
  File "/opt/python/cp27-cp27mu/lib/python2.7/site-packages/paddle/fluid/executor.py", line 470, in run
    self.executor.run(program.desc, scope, 0, True, True)
paddle.fluid.core.EnforceNotMet: holder_ should not be null
Tensor holds no memory. Call Tensor::mutable_data first. at [/paddle/paddle/fluid/framework/tensor.cc:22]
PaddlePaddle Call Stacks:
0   0x7fa7f3a4b2d6p paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int) + 486
1   0x7fa7f4c3d502p paddle::framework::Tensor::check_memory_size() const + 226
2   0x7fa7f3a50ef9p float const* paddle::framework::Tensor::data<float>() const + 25
3   0x7fa7f452d8aep paddle::operators::BatchNormGradKernel<paddle::platform::CUDADeviceContext, float>::Compute(paddle::framework::ExecutionContext const&) const + 1886
4   0x7fa7f452eb33p _ZNSt17_Function_handlerIFvRKN6paddle9framework16ExecutionContextEEZNKS1_24OpKernelRegistrarFunctorINS0_8platform9CUDAPlaceELb0ELm0EJNS0_9operators19BatchNormGradKernelINS7_17CUDADeviceContextEfEENSA_ISB_dEEEEclEPKcSG_EUlS4_E_E9_M_invokeERKSt9_Any_dataS4_ + 35
5   0x7fa7f4b94f83p paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, paddle::platform::CUDAPinnedPlace, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> const&) const + 531
6   0x7fa7f4b9205cp paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, paddle::platform::CUDAPinnedPlace, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> const&) + 252
7   0x7fa7f3b16469p paddle::framework::Executor::RunPreparedContext(paddle::framework::ExecutorPrepareContext*, paddle::framework::Scope*, bool, bool, bool) + 393
8   0x7fa7f3b16f00p paddle::framework::Executor::Run(paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool) + 128
9   0x7fa7f3a30dcdp
10  0x7fa7f3a5be54p pybind11::cpp_function::dispatcher(_object*, _object*, _object*) + 2596
11  0x7fa8300abce8p PyEval_EvalFrameEx + 28264
12  0x7fa8300ae37dp PyEval_EvalCodeEx + 2061
13  0x7fa8300abd70p PyEval_EvalFrameEx + 28400
14  0x7fa8300abe9ep PyEval_EvalFrameEx + 28702
15  0x7fa8300ae37dp PyEval_EvalCodeEx + 2061
16  0x7fa8300ae4b2p PyEval_EvalCode + 50
17  0x7fa8300d81c2p PyRun_FileExFlags + 146
18  0x7fa8300d9559p PyRun_SimpleFileExFlags + 217
19  0x7fa8300ef1ddp Py_Main + 3149
20  0x7fa82f382d1dp __libc_start_main + 253
21  0x4006b1p
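The stack pins down the failure path: BatchNormGradKernel::Compute calls Tensor::data<float>() on an input tensor whose holder_ is null (its memory was never allocated via mutable_data), and Tensor::check_memory_size() raises the enforce error. One commonly reported way to hit this in Fluid 1.x is to build batch_norm with is_test=True in a program that is then used for training; whether that is what happens in train_fan_score.py cannot be confirmed from the report alone. A hypothetical minimal sketch of that failure mode (names and shapes are illustrative, not from the reporter's script):

```python
# Sketch of one known trigger (an assumption about the cause, not the
# reporter's code): batch_norm built with is_test=True inside a training program.
import paddle.fluid as fluid

image = fluid.layers.data(name='image', shape=[8, 16, 16], dtype='float32')
label = fluid.layers.data(name='label', shape=[1], dtype='int64')

conv = fluid.layers.conv2d(image, num_filters=8, filter_size=3)
# is_test=True makes batch_norm use accumulated statistics and skip writing
# SavedMean/SavedVariance; batch_norm_grad later reads those never-allocated
# tensors. For training this must be is_test=False (the default).
bn = fluid.layers.batch_norm(conv, is_test=True)
fc = fluid.layers.fc(bn, size=2)
loss = fluid.layers.softmax_with_cross_entropy(logits=fc, label=label)
avg_cost = fluid.layers.mean(loss)
fluid.optimizer.SGD(learning_rate=0.01).minimize(avg_cost)
# Running this program on fluid.CUDAPlace(0) would then fail in the
# batch_norm_grad op during the backward pass.
```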

After setting export GLOG_v=4 and export GLOG_logtostderr=1 and rerunning, the output at the bottom is:

I1127 08:05:34.341182 6304 operator.cc:683] expected_kernel_key:data_type[float]:data_layout[ANY_LAYOUT]:place[CUDAPlace(0)]:library_type[PLAIN]
I1127 08:05:34.342895 6304 operator.cc:152] CUDAPlace(0) Op(softmax_with_cross_entropy), inputs:{Label[main_label:int64_t100, 1], Logits[softmax_0.tmp_0:float100, 2]}, outputs:{Loss[softmax_with_cross_entropy_0.tmp_1100, 1], Softmax[softmax_with_cross_entropy_0.tmp_0100, 2]}.
I1127 08:05:34.342943 6304 operator.cc:140] CUDAPlace(0) Op(mean), inputs:{X[softmax_with_cross_entropy_0.tmp_1:float100, 1]}, outputs:{Out[mean_0.tmp_0-1]}.
I1127 08:05:34.342981 6304 operator.cc:683] expected_kernel_key:data_type[float]:data_layout[ANY_LAYOUT]:place[CUDAPlace(0)]:library_type[PLAIN]
I1127 08:05:34.343063 6304 operator.cc:152] CUDAPlace(0) Op(mean), inputs:{X[softmax_with_cross_entropy_0.tmp_1:float100, 1]}, outputs:{Out[mean_0.tmp_01]}.
I1127 08:05:34.343122 6304 operator.cc:140] CUDAPlace(0) Op(cast), inputs:{X[@LR_DECAY_COUNTER@:int64_t1]}, outputs:{Out[cast_0.tmp_0-1]}.
I1127 08:05:34.343166 6304 operator.cc:683] expected_kernel_key:data_type[int64_t]:data_layout[ANY_LAYOUT]:place[CPUPlace]:library_type[PLAIN]
I1127 08:05:34.343215 6304 operator.cc:152] CUDAPlace(0) Op(cast), inputs:{X[@LR_DECAY_COUNTER@:int64_t1]}, outputs:{Out[cast_0.tmp_01]}.
I1127 08:05:34.343246 6304 operator.cc:140] CUDAPlace(0) Op(fill_constant), inputs:{}, outputs:{Out[tmp_11-1]}.
I1127 08:05:34.343308 6304 operator.cc:152] CUDAPlace(0) Op(fill_constant), inputs:{}, outputs:{Out[tmp_111]}.
I1127 08:05:34.343349 6304 operator.cc:140] CUDAPlace(0) Op(elementwise_div), inputs:{X[cast_0.tmp_0:float1], Y[tmp_11:float1]}, outputs:{Out[tmp_12-1]}.
I1127 08:05:34.343382 6304 operator.cc:683] expected_kernel_key:data_type[float]:data_layout[ANY_LAYOUT]:place[CUDAPlace(0)]:library_type[PLAIN]
I1127 08:05:34.343408 6304 operator.cc:780] Transform Variable cast_0.tmp_0 from data_type[float]:data_layout[NCHW]:place[CPUPlace]:library_type[PLAIN] to data_type[float]:data_layout[ANY_LAYOUT]:place[CUDAPlace(0)]:library_type[PLAIN]
I1127 08:05:34.343427 6304 scope.cc:129] Create variable cast_0.tmp_0
I1127 08:05:34.343453 6304 data_device_transform.cc:21] DeviceTransform in, src_place CPUPlace dst_place: CUDAPlace(0)
I1127 08:05:34.343483 6304 tensor_util.cu:107] TensorCopySync 1 from CPUPlace to CUDAPlace(0)
('Place: ', <paddle.fluid.core.CUDAPlace object at 0x7faefbfa6ba0>)
params at './params_e294' is loaded
=== Epoch 0 ===
I1127 08:05:34.343655 6304 operator.cc:152] CUDAPlace(0) Op(elementwise_div), inputs:{X[cast_0.tmp_0:float1], Y[tmp_11:float1]}, outputs:{Out[tmp_121]}.
I1127 08:05:34.343698 6304 operator.cc:140] CUDAPlace(0) Op(floor), inputs:{X[tmp_12:float1]}, outputs:{Out[floor_0.tmp_0-1]}.
I1127 08:05:34.343729 6304 operator.cc:683] expected_kernel_key:data_type[float]:data_layout[ANY_LAYOUT]:place[CUDAPlace(0)]:library_type[PLAIN]
I1127 08:05:34.343818 6304 operator.cc:152] CUDAPlace(0) Op(floor), inputs:{X[tmp_12:float1]}, outputs:{Out[floor_0.tmp_01]}.
I1127 08:05:34.343854 6304 operator.cc:140] CUDAPlace(0) Op(fill_constant), inputs:{}, outputs:{Out[tmp_13-1]}.
I1127 08:05:34.343924 6304 operator.cc:152] CUDAPlace(0) Op(fill_constant), inputs:{}, outputs:{Out[tmp_131]}.
I1127 08:05:34.343964 6304 operator.cc:140] CUDAPlace(0) Op(elementwise_pow), inputs:{X[tmp_13:float1], Y[floor_0.tmp_0:float1]}, outputs:{Out[tmp_14-1]}.
I1127 08:05:34.343998 6304 operator.cc:683] expected_kernel_key:data_type[float]:data_layout[ANY_LAYOUT]:place[CUDAPlace(0)]:library_type[PLAIN]
I1127 08:05:34.344120 6304 operator.cc:152] CUDAPlace(0) Op(elementwise_pow), inputs:{X[tmp_13:float1], Y[floor_0.tmp_0:float1]}, outputs:{Out[tmp_141]}.
I1127 08:05:34.344158 6304 operator.cc:140] CUDAPlace(0) Op(fill_constant), inputs:{}, outputs:{Out[tmp_15-1]}.
I1127 08:05:34.344218 6304 operator.cc:152] CUDAPlace(0) Op(fill_constant), inputs:{}, outputs:{Out[tmp_151]}.
I1127 08:05:34.344257 6304 operator.cc:140] CUDAPlace(0) Op(elementwise_mul), inputs:{X[tmp_14:float1], Y[tmp_15:float1]}, outputs:{Out[tmp_16-1]}.
I1127 08:05:34.344290 6304 operator.cc:683] expected_kernel_key:data_type[float]:data_layout[ANY_LAYOUT]:place[CUDAPlace(0)]:library_type[PLAIN]
I1127 08:05:34.344394 6304 operator.cc:152] CUDAPlace(0) Op(elementwise_mul), inputs:{X[tmp_14:float1], Y[tmp_15:float1]}, outputs:{Out[tmp_161]}.
I1127 08:05:34.344431 6304 operator.cc:140] CUDAPlace(0) Op(fill_constant), inputs:{}, outputs:{Out[mean_0.tmp_0@GRAD-1]}.
I1127 08:05:34.344512 6304 operator.cc:152] CUDAPlace(0) Op(fill_constant), inputs:{}, outputs:{Out[mean_0.tmp_0@GRAD1]}.
I1127 08:05:34.344555 6304 operator.cc:140] CUDAPlace(0) Op(mean_grad), inputs:{Out@GRAD[mean_0.tmp_0@GRAD:float1], X[softmax_with_cross_entropy_0.tmp_1:float100, 1]}, outputs:{X@GRAD[softmax_with_cross_entropy_0.tmp_1@GRAD-1]}.
I1127 08:05:34.344588 6304 operator.cc:683] expected_kernel_key:data_type[float]:data_layout[ANY_LAYOUT]:place[CUDAPlace(0)]:library_type[PLAIN]
I1127 08:05:34.344660 6304 operator.cc:152] CUDAPlace(0) Op(mean_grad), inputs:{Out@GRAD[mean_0.tmp_0@GRAD:float1], X[softmax_with_cross_entropy_0.tmp_1:float100, 1]}, outputs:{X@GRAD[softmax_with_cross_entropy_0.tmp_1@GRAD100, 1]}.
I1127 08:05:34.344708 6304 operator.cc:140] CUDAPlace(0) Op(softmax_with_cross_entropy_grad), inputs:{Label[main_label:int64_t100, 1], Loss[softmax_with_cross_entropy_0.tmp_1:float100, 1], Loss@GRAD[softmax_with_cross_entropy_0.tmp_1@GRAD:float100, 1], Softmax[softmax_with_cross_entropy_0.tmp_0:float100, 2], Softmax@GRAD[softmax_with_cross_entropy_0.tmp_0@GRAD[uninited]]}, outputs:{Logits@GRAD[softmax_0.tmp_0@GRAD-1]}.
I1127 08:05:34.344746 6304 operator.cc:683] expected_kernel_key:data_type[float]:data_layout[ANY_LAYOUT]:place[CUDAPlace(0)]:library_type[PLAIN]
I1127 08:05:34.344839 6304 operator.cc:152] CUDAPlace(0) Op(softmax_with_cross_entropy_grad), inputs:{Label[main_label:int64_t100, 1], Loss[softmax_with_cross_entropy_0.tmp_1:float100, 1], Loss@GRAD[softmax_with_cross_entropy_0.tmp_1@GRAD:float100, 1], Softmax[softmax_with_cross_entropy_0.tmp_0:float100, 2], Softmax@GRAD[softmax_with_cross_entropy_0.tmp_0@GRAD[uninited]]}, outputs:{Logits@GRAD[softmax_0.tmp_0@GRAD100, 2]}.
I1127 08:05:34.344882 6304 operator.cc:140] CUDAPlace(0) Op(softmax_grad), inputs:{Out[softmax_0.tmp_0:float100, 2], Out@GRAD[softmax_0.tmp_0@GRAD:float100, 2]}, outputs:{X@GRAD[fc_2.tmp_1@GRAD-1]}.
I1127 08:05:34.344921 6304 operator.cc:683] expected_kernel_key:data_type[float]:data_layout[ANY_LAYOUT]:place[CUDAPlace(0)]:library_type[CUDNN]
I1127 08:05:34.345041 6304 operator.cc:152] CUDAPlace(0) Op(softmax_grad), inputs:{Out[softmax_0.tmp_0:float100, 2], Out@GRAD[softmax_0.tmp_0@GRAD:float100, 2]}, outputs:{X@GRAD[fc_2.tmp_1@GRAD100, 2]}.
I1127 08:05:34.345115 6304 operator.cc:140] CUDAPlace(0) Op(elementwise_add_grad), inputs:{Out@GRAD[fc_2.tmp_1@GRAD:float100, 2], Y[fc_2.b_0:float2]}, outputs:{X@GRAD[fc_2.tmp_0@GRAD-1], Y@GRAD[fc_2.b_0@GRAD-1]}.
I1127 08:05:34.345158 6304 operator.cc:683] expected_kernel_key:data_type[float]:data_layout[ANY_LAYOUT]:place[CUDAPlace(0)]:library_type[PLAIN]
I1127 08:05:34.345260 6304 operator.cc:152] CUDAPlace(0) Op(elementwise_add_grad), inputs:{Out@GRAD[fc_2.tmp_1@GRAD:float100, 2], Y[fc_2.b_0:float2]}, outputs:{X@GRAD[fc_2.tmp_0@GRAD100, 2], Y@GRAD[fc_2.b_0@GRAD2]}.
I1127 08:05:34.345312 6304 operator.cc:140] CUDAPlace(0) Op(mul_grad), inputs:{Out[fc_2.tmp_0:float100, 2], Out@GRAD[fc_2.tmp_0@GRAD:float100, 2], X[relu_42.tmp_0:float100, 32], Y[fc_2.w_0:float32, 2]}, outputs:{X@GRAD[relu_42.tmp_0@GRAD-1], Y@GRAD[fc_2.w_0@GRAD-1]}.
I1127 08:05:34.345353 6304 operator.cc:683] expected_kernel_key:data_type[float]:data_layout[ANY_LAYOUT]:place[CUDAPlace(0)]:library_type[PLAIN]
I1127 08:05:34.345494 6304 operator.cc:152] CUDAPlace(0) Op(mul_grad), inputs:{Out[fc_2.tmp_0:float100, 2], Out@GRAD[fc_2.tmp_0@GRAD:float100, 2], X[relu_42.tmp_0:float100, 32], Y[fc_2.w_0:float32, 2]}, outputs:{X@GRAD[relu_42.tmp_0@GRAD100, 32], Y@GRAD[fc_2.w_0@GRAD32, 2]}.
I1127 08:05:34.345538 6304 operator.cc:140] CUDAPlace(0) Op(relu_grad), inputs:{Out[relu_42.tmp_0:float100, 32], Out@GRAD[relu_42.tmp_0@GRAD:float100, 32]}, outputs:{X@GRAD[fc_1.tmp_1@GRAD-1]}.
I1127 08:05:34.345571 6304 operator.cc:683] expected_kernel_key:data_type[float]:data_layout[ANY_LAYOUT]:place[CUDAPlace(0)]:library_type[PLAIN]
I1127 08:05:34.345657 6304 operator.cc:152] CUDAPlace(0) Op(relu_grad), inputs:{Out[relu_42.tmp_0:float100, 32], Out@GRAD[relu_42.tmp_0@GRAD:float100, 32]}, outputs:{X@GRAD[fc_1.tmp_1@GRAD100, 32]}.
I1127 08:05:34.345705 6304 operator.cc:140] CUDAPlace(0) Op(elementwise_add_grad), inputs:{Out@GRAD[fc_1.tmp_1@GRAD:float100, 32], Y[fc_1.b_0:float32]}, outputs:{X@GRAD[fc_1.tmp_0@GRAD-1], Y@GRAD[fc_1.b_0@GRAD-1]}.
I1127 08:05:34.345738 6304 operator.cc:683] expected_kernel_key:data_type[float]:data_layout[ANY_LAYOUT]:place[CUDAPlace(0)]:library_type[PLAIN]
I1127 08:05:34.345818 6304 operator.cc:152] CUDAPlace(0) Op(elementwise_add_grad), inputs:{Out@GRAD[fc_1.tmp_1@GRAD:float100, 32], Y[fc_1.b_0:float32]}, outputs:{X@GRAD[fc_1.tmp_0@GRAD100, 32], Y@GRAD[fc_1.b_0@GRAD32]}.
I1127 08:05:34.345866 6304 operator.cc:140] CUDAPlace(0) Op(mul_grad), inputs:{Out[fc_1.tmp_0:float100, 32], Out@GRAD[fc_1.tmp_0@GRAD:float100, 32], X[relu_41.tmp_0:float100, 8, 4, 4], Y[fc_1.w_0:float128, 32]}, outputs:{X@GRAD[relu_41.tmp_0@GRAD-1], Y@GRAD[fc_1.w_0@GRAD-1]}.
I1127 08:05:34.345907 6304 operator.cc:683] expected_kernel_key:data_type[float]:data_layout[ANY_LAYOUT]:place[CUDAPlace(0)]:library_type[PLAIN]
I1127 08:05:34.346025 6304 operator.cc:152] CUDAPlace(0) Op(mul_grad), inputs:{Out[fc_1.tmp_0:float100, 32], Out@GRAD[fc_1.tmp_0@GRAD:float100, 32], X[relu_41.tmp_0:float100, 8, 4, 4], Y[fc_1.w_0:float128, 32]}, outputs:{X@GRAD[relu_41.tmp_0@GRAD100, 8, 4, 4], Y@GRAD[fc_1.w_0@GRAD128, 32]}.
I1127 08:05:34.346067 6304 operator.cc:140] CUDAPlace(0) Op(relu_grad), inputs:{Out[relu_41.tmp_0:float100, 8, 4, 4], Out@GRAD[relu_41.tmp_0@GRAD:float100, 8, 4, 4]}, outputs:{X@GRAD[batch_norm_59.tmp_2@GRAD-1]}.
I1127 08:05:34.346125 6304 operator.cc:683] expected_kernel_key:data_type[float]:data_layout[ANY_LAYOUT]:place[CUDAPlace(0)]:library_type[PLAIN]
I1127 08:05:34.346205 6304 operator.cc:152] CUDAPlace(0) Op(relu_grad), inputs:{Out[relu_41.tmp_0:float100, 8, 4, 4], Out@GRAD[relu_41.tmp_0@GRAD:float100, 8, 4, 4]}, outputs:{X@GRAD[batch_norm_59.tmp_2@GRAD100, 8, 4, 4]}.
I1127 08:05:34.346276 6304 operator.cc:140] CUDAPlace(0) Op(batch_norm_grad), inputs:{Bias[batch_norm_59.b_0:float8], SavedMean[batch_norm_59.tmp_0:float8], SavedVariance[batch_norm_59.tmp_1:float8], Scale[batch_norm_59.w_0:float8], X[conv2d_60.tmp_1:float100, 8, 4, 4], Y@GRAD[batch_norm_59.tmp_2@GRAD:float100, 8, 4, 4]}, outputs:{Bias@GRAD[batch_norm_59.b_0@GRAD-1], Scale@GRAD[batch_norm_59.w_0@GRAD-1], X@GRAD[conv2d_60.tmp_1@GRAD-1]}.
I1127 08:05:34.346321 6304 operator.cc:683] expected_kernel_key:data_type[float]:data_layout[ANY_LAYOUT]:place[CUDAPlace(0)]:library_type[PLAIN]
I1127 08:05:34.346484 6304 operator.cc:152] CUDAPlace(0) Op(batch_norm_grad), inputs:{Bias[batch_norm_59.b_0:float8], SavedMean[batch_norm_59.tmp_0:float8], SavedVariance[batch_norm_59.tmp_1:float8], Scale[batch_norm_59.w_0:float8], X[conv2d_60.tmp_1:float100, 8, 4, 4], Y@GRAD[batch_norm_59.tmp_2@GRAD:float100, 8, 4, 4]}, outputs:{Bias@GRAD[batch_norm_59.b_0@GRAD8], Scale@GRAD[batch_norm_59.w_0@GRAD8], X@GRAD[conv2d_60.tmp_1@GRAD100, 8, 4, 4]}.
I1127 08:05:34.346534 6304 operator.cc:140] CUDAPlace(0) Op(elementwise_add_grad), inputs:{Out@GRAD[conv2d_60.tmp_1@GRAD:float100, 8, 4, 4], Y[conv2d_60.b_0:float8]}, outputs:{X@GRAD[conv2d_60.tmp_0@GRAD-1], Y@GRAD[conv2d_60.b_0@GRAD-1]}.
I1127 08:05:34.346570 6304 operator.cc:683] expected_kernel_key:data_type[float]:data_layout[ANY_LAYOUT]:place[CUDAPlace(0)]:library_type[PLAIN]
I1127 08:05:34.346657 6304 operator.cc:152] CUDAPlace(0) Op(elementwise_add_grad), inputs:{Out@GRAD[conv2d_60.tmp_1@GRAD:float100, 8, 4, 4], Y[conv2d_60.b_0:float8]}, outputs:{X@GRAD[conv2d_60.tmp_0@GRAD100, 8, 4, 4], Y@GRAD[conv2d_60.b_0@GRAD8]}.
I1127 08:05:34.346709 6304 operator.cc:140] CUDAPlace(0) Op(conv2d_grad), inputs:{Bias[], Filter[conv2d_60.w_0:float8, 8, 3, 3], Input[relu_40.tmp_0:float100, 8, 8, 8], Output[conv2d_60.tmp_0:float100, 8, 4, 4], Output@GRAD[conv2d_60.tmp_0@GRAD:float100, 8, 4, 4]}, outputs:{Bias@GRAD[], Filter@GRAD[conv2d_60.w_0@GRAD-1], Input@GRAD[relu_40.tmp_0@GRAD-1]}.
I1127 08:05:34.346750 6304 operator.cc:683] expected_kernel_key:data_type[float]:data_layout[ANY_LAYOUT]:place[CUDAPlace(0)]:library_type[CUDNN]
I1127 08:05:34.347064 6304 operator.cc:152] CUDAPlace(0) Op(conv2d_grad), inputs:{Bias[], Filter[conv2d_60.w_0:float8, 8, 3, 3], Input[relu_40.tmp_0:float100, 8, 8, 8], Output[conv2d_60.tmp_0:float100, 8, 4, 4], Output@GRAD[conv2d_60.tmp_0@GRAD:float100, 8, 4, 4]}, outputs:{Bias@GRAD[], Filter@GRAD[conv2d_60.w_0@GRAD8, 8, 3, 3], Input@GRAD[relu_40.tmp_0@GRAD100, 8, 8, 8]}.
I1127 08:05:34.347127 6304 operator.cc:140] CUDAPlace(0) Op(relu_grad), inputs:{Out[relu_40.tmp_0:float100, 8, 8, 8], Out@GRAD[relu_40.tmp_0@GRAD:float100, 8, 8, 8]}, outputs:{X@GRAD[batch_norm_58.tmp_2@GRAD-1]}.
I1127 08:05:34.347163 6304 operator.cc:683] expected_kernel_key:data_type[float]:data_layout[ANY_LAYOUT]:place[CUDAPlace(0)]:library_type[PLAIN]
I1127 08:05:34.347237 6304 operator.cc:152] CUDAPlace(0) Op(relu_grad), inputs:{Out[relu_40.tmp_0:float100, 8, 8, 8], Out@GRAD[relu_40.tmp_0@GRAD:float100, 8, 8, 8]}, outputs:{X@GRAD[batch_norm_58.tmp_2@GRAD100, 8, 8, 8]}.
I1127 08:05:34.347296 6304 operator.cc:140] CUDAPlace(0) Op(batch_norm_grad), inputs:{Bias[batch_norm_58.b_0:float8], SavedMean[batch_norm_58.tmp_0:float8], SavedVariance[batch_norm_58.tmp_1:float8], Scale[batch_norm_58.w_0:float8], X[conv2d_59.tmp_1:float100, 8, 8, 8], Y@GRAD[batch_norm_58.tmp_2@GRAD:float100, 8, 8, 8]}, outputs:{Bias@GRAD[batch_norm_58.b_0@GRAD-1], Scale@GRAD[batch_norm_58.w_0@GRAD-1], X@GRAD[conv2d_59.tmp_1@GRAD-1]}.
I1127 08:05:34.347331 6304 operator.cc:683] expected_kernel_key:data_type[float]:data_layout[ANY_LAYOUT]:place[CUDAPlace(0)]:library_type[PLAIN]
I1127 08:05:34.347450 6304 operator.cc:152] CUDAPlace(0) Op(batch_norm_grad), inputs:{Bias[batch_norm_58.b_0:float8], SavedMean[batch_norm_58.tmp_0:float8], SavedVariance[batch_norm_58.tmp_1:float8], Scale[batch_norm_58.w_0:float8], X[conv2d_59.tmp_1:float100, 8, 8, 8], Y@GRAD[batch_norm_58.tmp_2@GRAD:float100, 8, 8, 8]}, outputs:{Bias@GRAD[batch_norm_58.b_0@GRAD8], Scale@GRAD[batch_norm_58.w_0@GRAD8], X@GRAD[conv2d_59.tmp_1@GRAD100, 8, 8, 8]}.
I1127 08:05:34.347506 6304 operator.cc:140] CUDAPlace(0) Op(elementwise_add_grad), inputs:{Out@GRAD[conv2d_59.tmp_1@GRAD:float100, 8, 8, 8], Y[conv2d_59.b_0:float8]}, outputs:{X@GRAD[conv2d_59.tmp_0@GRAD-1], Y@GRAD[conv2d_59.b_0@GRAD-1]}.
I1127 08:05:34.347544 6304 operator.cc:683] expected_kernel_key:data_type[float]:data_layout[ANY_LAYOUT]:place[CUDAPlace(0)]:library_type[PLAIN]
I1127 08:05:34.347628 6304 operator.cc:152] CUDAPlace(0) Op(elementwise_add_grad), inputs:{Out@GRAD[conv2d_59.tmp_1@GRAD:float100, 8, 8, 8], Y[conv2d_59.b_0:float8]}, outputs:{X@GRAD[conv2d_59.tmp_0@GRAD100, 8, 8, 8], Y@GRAD[conv2d_59.b_0@GRAD8]}.
I1127 08:05:34.347681 6304 operator.cc:140] CUDAPlace(0) Op(conv2d_grad), inputs:{Bias[], Filter[conv2d_59.w_0:float8, 16, 3, 3], Input[tmp_4:float100, 16, 16, 16], Output[conv2d_59.tmp_0:float100, 8, 8, 8], Output@GRAD[conv2d_59.tmp_0@GRAD:float100, 8, 8, 8]}, outputs:{Bias@GRAD[], Filter@GRAD[conv2d_59.w_0@GRAD-1], Input@GRAD[tmp_4@GRAD-1]}.
I1127 08:05:34.347718 6304 operator.cc:683] expected_kernel_key:data_type[float]:data_layout[ANY_LAYOUT]:place[CUDAPlace(0)]:library_type[CUDNN]
I1127 08:05:34.347928 6304 operator.cc:152] CUDAPlace(0) Op(conv2d_grad), inputs:{Bias[], Filter[conv2d_59.w_0:float8, 16, 3, 3], Input[tmp_4:float100, 16, 16, 16], Output[conv2d_59.tmp_0:float100, 8, 8, 8], Output@GRAD[conv2d_59.tmp_0@GRAD:float100, 8, 8, 8]}, outputs:{Bias@GRAD[], Filter@GRAD[conv2d_59.w_0@GRAD8, 16, 3, 3], Input@GRAD[tmp_4@GRAD100, 16, 16, 16]}.
I1127 08:05:34.347978 6304 operator.cc:140] CUDAPlace(0) Op(elementwise_add_grad), inputs:{Out@GRAD[tmp_4@GRAD:float100, 16, 16, 16], Y[tmp_3:float100, 16, 16, 16]}, outputs:{X@GRAD[batch_norm_24.tmp_2@GRAD-1], Y@GRAD[tmp_3@GRAD@RENAME@0-1]}.
I1127 08:05:34.348013 6304 operator.cc:683] expected_kernel_key:data_type[float]:data_layout[ANY_LAYOUT]:place[CUDAPlace(0)]:library_type[PLAIN]
I1127 08:05:34.348117 6304 operator.cc:152] CUDAPlace(0) Op(elementwise_add_grad), inputs:{Out@GRAD[tmp_4@GRAD:float100, 16, 16, 16], Y[tmp_3:float100, 16, 16, 16]}, outputs:{X@GRAD[batch_norm_24.tmp_2@GRAD100, 16, 16, 16], Y@GRAD[tmp_3@GRAD@RENAME@0100, 16, 16, 16]}.
I1127 08:05:34.348183 6304 operator.cc:140] CUDAPlace(0) Op(batch_norm_grad), inputs:{Bias[batch_norm_24.b_0:float16], SavedMean[batch_norm_24.tmp_0:-1], SavedVariance[batch_norm_24.tmp_1:-1], Scale[batch_norm_24.w_0:float16], X[conv2d_24.tmp_1:float100, 16, 16, 16], Y@GRAD[batch_norm_24.tmp_2@GRAD:float100, 16, 16, 16]}, outputs:{Bias@GRAD[batch_norm_24.b_0@GRAD-1], Scale@GRAD[batch_norm_24.w_0@GRAD-1], X@GRAD[conv2d_24.tmp_1@GRAD-1]}.
I1127 08:05:34.348225 6304 operator.cc:683] expected_kernel_key:data_type[float]:data_layout[ANY_LAYOUT]:place[CUDAPlace(0)]:library_type[PLAIN]
Traceback (most recent call last):
  File "train_fan_score.py", line 174, in <module>
    train_score()
  File "train_fan_score.py", line 94, in train_score
    fetch_list = [lr.name, avg_cost.name, label_softmax.name, main_label.name], feed = feeder.feed(data))
  File "/opt/python/cp27-cp27mu/lib/python2.7/site-packages/paddle/fluid/executor.py", line 470, in run
    self.executor.run(program.desc, scope, 0, True, True)
paddle.fluid.core.EnforceNotMet: holder_ should not be null
Tensor holds no memory. Call Tensor::mutable_data first. at [/paddle/paddle/fluid/framework/tensor.cc:22]
PaddlePaddle Call Stacks:
0   0x7faf5d8422d6p paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int) + 486
1   0x7faf5ea34502p paddle::framework::Tensor::check_memory_size() const + 226
2   0x7faf5d847ef9p float const* paddle::framework::Tensor::data<float>() const + 25
3   0x7faf5e3248aep paddle::operators::BatchNormGradKernel<paddle::platform::CUDADeviceContext, float>::Compute(paddle::framework::ExecutionContext const&) const + 1886
4   0x7faf5e325b33p _ZNSt17_Function_handlerIFvRKN6paddle9framework16ExecutionContextEEZNKS1_24OpKernelRegistrarFunctorINS0_8platform9CUDAPlaceELb0ELm0EJNS0_9operators19BatchNormGradKernelINS7_17CUDADeviceContextEfEENSA_ISB_dEEEEclEPKcSG_EUlS4_E_E9_M_invokeERKSt9_Any_dataS4_ + 35
5   0x7faf5e98bf83p paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, paddle::platform::CUDAPinnedPlace, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> const&) const + 531
6   0x7faf5e98905cp paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, paddle::platform::CUDAPinnedPlace, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> const&) + 252
7   0x7faf5d90d469p paddle::framework::Executor::RunPreparedContext(paddle::framework::ExecutorPrepareContext*, paddle::framework::Scope*, bool, bool, bool) + 393
8   0x7faf5d90df00p paddle::framework::Executor::Run(paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool) + 128
9   0x7faf5d827dcdp
10  0x7faf5d852e54p pybind11::cpp_function::dispatcher(_object*, _object*, _object*) + 2596
11  0x7faf99b65ce8p PyEval_EvalFrameEx + 28264
12  0x7faf99b6837dp PyEval_EvalCodeEx + 2061
13  0x7faf99b65d70p PyEval_EvalFrameEx + 28400
14  0x7faf99b65e9ep PyEval_EvalFrameEx + 28702
15  0x7faf99b6837dp PyEval_EvalCodeEx + 2061
16  0x7faf99b684b2p PyEval_EvalCode + 50
17  0x7faf99b921c2p PyRun_FileExFlags + 146
18  0x7faf99b93559p PyRun_SimpleFileExFlags + 217
19  0x7faf99ba91ddp Py_Main + 3149
20  0x7faf98e3cd1dp __libc_start_main + 253
21  0x4006b1p
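Reading the verbose log backwards localizes the fault: for batch_norm_59 and batch_norm_58 the batch_norm_grad inputs SavedMean/SavedVariance are logged with shape float8, but for batch_norm_24 they are logged as -1, i.e. never allocated, and the enforce fires on exactly that op. A hedged diagnostic sketch (using the Fluid 1.x Program/Operator API; not from the original report) that lists each batch_norm op in the training program together with its is_test attribute:

```python
# Diagnostic sketch: print every batch_norm op with its is_test attribute and
# SavedMean output name; an op with is_test=True leaves SavedMean/SavedVariance
# unallocated, which matches the "-1" shapes in the log above.
import paddle.fluid as fluid

prog = fluid.default_main_program()
for block in prog.blocks:
    for op in block.ops:
        if op.type == 'batch_norm':
            print(op.output('SavedMean'), 'is_test =', op.attr('is_test'))
```

If every attribute is already False, the -1 shapes still point at how batch_norm_24's forward outputs were produced (for example, the program whose params were loaded from './params_e294') rather than at the GPU kernel itself.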

Reference: paddlepaddle/Paddle#14624