PaddlePaddle / Paddle · Issue #15484
Closed
Opened January 23, 2019 by saxon_zh (Guest)

ptr fails to allocate GPU memory

Created by: lucywsq

I'm getting an error saying ptr fails to allocate GPU memory; could someone please point me in the right direction?

usr/bin/python2.7 /home/yang/PycharmProjects/note7/code/train.py
[INFO 2019-01-05 18:23:52,363 layers.py:2716] output for conv_0: c = 16, h = 80, w = 180, size = 230400
[INFO 2019-01-05 18:23:52,364 layers.py:3361] output for batch_norm_0: c = 16, h = 80, w = 180, size = 230400
[INFO 2019-01-05 18:23:52,364 layers.py:2716] output for conv_1: c = 16, h = 80, w = 180, size = 230400
[INFO 2019-01-05 18:23:52,365 layers.py:3361] output for batch_norm_1: c = 16, h = 80, w = 180, size = 230400
[INFO 2019-01-05 18:23:52,366 layers.py:2858] output for pool_0: c = 16, h = 40, w = 90, size = 57600
[INFO 2019-01-05 18:23:52,367 layers.py:2716] output for conv_2: c = 32, h = 40, w = 90, size = 115200
[INFO 2019-01-05 18:23:52,368 layers.py:3361] output for batch_norm_2: c = 32, h = 40, w = 90, size = 115200
[INFO 2019-01-05 18:23:52,368 layers.py:2716] output for conv_3: c = 32, h = 40, w = 90, size = 115200
[INFO 2019-01-05 18:23:52,369 layers.py:3361] output for batch_norm_3: c = 32, h = 40, w = 90, size = 115200
[INFO 2019-01-05 18:23:52,369 layers.py:2858] output for pool_1: c = 32, h = 20, w = 45, size = 28800
[INFO 2019-01-05 18:23:52,370 layers.py:2716] output for conv_4: c = 64, h = 20, w = 45, size = 57600
[INFO 2019-01-05 18:23:52,371 layers.py:3361] output for batch_norm_4: c = 64, h = 20, w = 45, size = 57600
[INFO 2019-01-05 18:23:52,371 layers.py:2716] output for conv_5: c = 64, h = 20, w = 45, size = 57600
[INFO 2019-01-05 18:23:52,372 layers.py:3361] output for batch_norm_5: c = 64, h = 20, w = 45, size = 57600
[INFO 2019-01-05 18:23:52,372 layers.py:2858] output for pool_2: c = 64, h = 10, w = 23, size = 14720
[INFO 2019-01-05 18:23:52,373 layers.py:2716] output for conv_6: c = 128, h = 10, w = 23, size = 29440
[INFO 2019-01-05 18:23:52,374 layers.py:3361] output for batch_norm_6: c = 128, h = 10, w = 23, size = 29440
[INFO 2019-01-05 18:23:52,374 layers.py:2716] output for conv_7: c = 128, h = 10, w = 23, size = 29440
[INFO 2019-01-05 18:23:52,375 layers.py:3361] output for batch_norm_7: c = 128, h = 10, w = 23, size = 29440
[INFO 2019-01-05 18:23:52,376 layers.py:2858] output for pool_3: c = 128, h = 5, w = 12, size = 7680
I0105 18:23:52.460633 6178 Util.cpp:166] commandline: --use_gpu=True --trainer_count=1
F0105 18:23:52.470379 6178 Allocator.h:89] Check failed: ptr Fail to allocate GPU memory 1024 bytes

*** Check failure stack trace: ***

@ 0x7f207029beed google::LogMessage::Fail()
@ 0x7f207029f99c google::LogMessage::SendToLog()
@ 0x7f207029ba13 google::LogMessage::Flush()
@ 0x7f20702a0eae google::LogMessageFatal::~LogMessageFatal()
@ 0x7f207020124e paddle::GpuAllocator::alloc()
@ 0x7f207020204f paddle::PoolAllocator::alloc()
@ 0x7f20701fab7f paddle::GpuMemoryHandle::GpuMemoryHandle()
@ 0x7f20701c39ae paddle::GpuVectorT<>::GpuVectorT()
@ 0x7f20701c3b68 paddle::VectorT<>::create()
@ 0x7f20701c3c19 paddle::VectorT<>::createParallelVector()
@ 0x7f206ff19d66 paddle::Parameter::enableType()
@ 0x7f206ff20eda paddle::parameterInitNN()
@ 0x7f206ff23e1e paddle::NeuralNetwork::init()
@ 0x7f206ff1e04f paddle::GradientMachine::create()
@ 0x7f2070278935 GradientMachine::createFromPaddleModelPtr()
@ 0x7f2070278b1f GradientMachine::createByConfigProtoStr()
@ 0x7f206fed5857 _wrap_GradientMachine_createByConfigProtoStr
@ 0x5632e56828db (unknown)
@ 0x5632e5679d0a (unknown)
@ 0x5632e5681c38 (unknown)
@ 0x5632e5679d0a (unknown)
@ 0x5632e56815fe (unknown)
@ 0x5632e5679d0a (unknown)
@ 0x5632e56957e6 (unknown)
@ 0x5632e56ae0de (unknown)
@ 0x5632e56adcea (unknown)
@ 0x5632e566a86b (unknown)
@ 0x5632e5681420 (unknown)
@ 0x5632e5679d0a (unknown)
@ 0x5632e5681c38 (unknown)
@ 0x5632e5679d0a (unknown)
@ 0x5632e5679629 (unknown)

Process finished with exit code 134 (interrupted by signal 6: SIGABRT)
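For context on the failure itself: the fatal check fires in paddle::GpuAllocator::alloc while GradientMachine::create is still initializing parameters, and the very first GPU request of only 1024 bytes already fails. That pattern usually points to the process not being able to use the device at all (driver/CUDA mismatch, another job occupying the GPU, or a restrictive CUDA_VISIBLE_DEVICES setting) rather than to the model being too large. A minimal diagnostic sketch, assuming nvidia-smi is installed and on the PATH:

# Diagnostic sketch: check that this environment can see a CUDA device
# with free memory. The query flags are standard nvidia-smi options.
import subprocess

print(subprocess.check_output([
    "nvidia-smi",
    "--query-gpu=index,name,memory.total,memory.used,memory.free",
    "--format=csv",
]))

If the device looks healthy, running the same script on CPU is a quick way to confirm the failure is GPU-specific. A sketch using the v2-era initialization that corresponds to the logged command line (--use_gpu=True --trainer_count=1); the rest of train.py would stay unchanged:

# Sketch: run the same network on CPU. paddle.init mirrors the
# --use_gpu / --trainer_count flags in the log; with use_gpu=False,
# parameter initialization never touches the GpuAllocator.
import paddle.v2 as paddle

paddle.init(use_gpu=False, trainer_count=1)
# ... build and train the same network as in train.py ...

If the CPU run completes, the problem is isolated to GPU memory allocation rather than the network configuration.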

Reference: paddlepaddle/Paddle#15484