ptr fails to allocate GPU memory
Created by: lucywsq
I'm getting a "Check failed: ptr Fail to allocate GPU memory" error; could someone point me in the right direction?

/usr/bin/python2.7 /home/yang/PycharmProjects/note7/code/train.py
[INFO 2019-01-05 18:23:52,363 layers.py:2716] output for conv_0: c = 16, h = 80, w = 180, size = 230400
[INFO 2019-01-05 18:23:52,364 layers.py:3361] output for batch_norm_0: c = 16, h = 80, w = 180, size = 230400
[INFO 2019-01-05 18:23:52,364 layers.py:2716] output for conv_1: c = 16, h = 80, w = 180, size = 230400
[INFO 2019-01-05 18:23:52,365 layers.py:3361] output for batch_norm_1: c = 16, h = 80, w = 180, size = 230400
[INFO 2019-01-05 18:23:52,366 layers.py:2858] output for pool_0: c = 16, h = 40, w = 90, size = 57600
[INFO 2019-01-05 18:23:52,367 layers.py:2716] output for conv_2: c = 32, h = 40, w = 90, size = 115200
[INFO 2019-01-05 18:23:52,368 layers.py:3361] output for batch_norm_2: c = 32, h = 40, w = 90, size = 115200
[INFO 2019-01-05 18:23:52,368 layers.py:2716] output for conv_3: c = 32, h = 40, w = 90, size = 115200
[INFO 2019-01-05 18:23:52,369 layers.py:3361] output for batch_norm_3: c = 32, h = 40, w = 90, size = 115200
[INFO 2019-01-05 18:23:52,369 layers.py:2858] output for pool_1: c = 32, h = 20, w = 45, size = 28800
[INFO 2019-01-05 18:23:52,370 layers.py:2716] output for conv_4: c = 64, h = 20, w = 45, size = 57600
[INFO 2019-01-05 18:23:52,371 layers.py:3361] output for batch_norm_4: c = 64, h = 20, w = 45, size = 57600
[INFO 2019-01-05 18:23:52,371 layers.py:2716] output for conv_5: c = 64, h = 20, w = 45, size = 57600
[INFO 2019-01-05 18:23:52,372 layers.py:3361] output for batch_norm_5: c = 64, h = 20, w = 45, size = 57600
[INFO 2019-01-05 18:23:52,372 layers.py:2858] output for pool_2: c = 64, h = 10, w = 23, size = 14720
[INFO 2019-01-05 18:23:52,373 layers.py:2716] output for conv_6: c = 128, h = 10, w = 23, size = 29440
[INFO 2019-01-05 18:23:52,374 layers.py:3361] output for batch_norm_6: c = 128, h = 10, w = 23, size = 29440
[INFO 2019-01-05 18:23:52,374 layers.py:2716] output for conv_7: c = 128, h = 10, w = 23, size = 29440
[INFO 2019-01-05 18:23:52,375 layers.py:3361] output for batch_norm_7: c = 128, h = 10, w = 23, size = 29440
[INFO 2019-01-05 18:23:52,376 layers.py:2858] output for pool_3: c = 128, h = 5, w = 12, size = 7680
I0105 18:23:52.460633 6178 Util.cpp:166] commandline: --use_gpu=True --trainer_count=1
F0105 18:23:52.470379 6178 Allocator.h:89] Check failed: ptr Fail to allocate GPU memory 1024 bytes
*** Check failure stack trace: ***
@ 0x7f207029beed google::LogMessage::Fail()
@ 0x7f207029f99c google::LogMessage::SendToLog()
@ 0x7f207029ba13 google::LogMessage::Flush()
@ 0x7f20702a0eae google::LogMessageFatal::~LogMessageFatal()
@ 0x7f207020124e paddle::GpuAllocator::alloc()
@ 0x7f207020204f paddle::PoolAllocator::alloc()
@ 0x7f20701fab7f paddle::GpuMemoryHandle::GpuMemoryHandle()
@ 0x7f20701c39ae paddle::GpuVectorT<>::GpuVectorT()
@ 0x7f20701c3b68 paddle::VectorT<>::create()
@ 0x7f20701c3c19 paddle::VectorT<>::createParallelVector()
@ 0x7f206ff19d66 paddle::Parameter::enableType()
@ 0x7f206ff20eda paddle::parameterInitNN()
@ 0x7f206ff23e1e paddle::NeuralNetwork::init()
@ 0x7f206ff1e04f paddle::GradientMachine::create()
@ 0x7f2070278935 GradientMachine::createFromPaddleModelPtr()
@ 0x7f2070278b1f GradientMachine::createByConfigProtoStr()
@ 0x7f206fed5857 _wrap_GradientMachine_createByConfigProtoStr
@ 0x5632e56828db (unknown)
@ 0x5632e5679d0a (unknown)
@ 0x5632e5681c38 (unknown)
@ 0x5632e5679d0a (unknown)
@ 0x5632e56815fe (unknown)
@ 0x5632e5679d0a (unknown)
@ 0x5632e56957e6 (unknown)
@ 0x5632e56ae0de (unknown)
@ 0x5632e56adcea (unknown)
@ 0x5632e566a86b (unknown)
@ 0x5632e5681420 (unknown)
@ 0x5632e5679d0a (unknown)
@ 0x5632e5681c38 (unknown)
@ 0x5632e5679d0a (unknown)
@ 0x5632e5679629 (unknown)

Process finished with exit code 134 (interrupted by signal 6: SIGABRT)
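Since the abort happens at the very first parameter allocation, and for only 1024 bytes, the GPU is most likely fully occupied by another process or not visible to the process at all, rather than the model being too large. Below is a minimal diagnostic sketch under that assumption; it uses the PaddlePaddle v2 API that the py_paddle/GradientMachine frames in the trace suggest train.py is built on, and is not part of the original script.

```python
# Hypothetical diagnostic sketch, assuming the PaddlePaddle v2 API.
import subprocess

# Step 1: failing to allocate even 1024 bytes usually means the device is
# already full or invisible to the process, so inspect the GPU's free
# memory and the processes currently holding it.
print(subprocess.check_output(["nvidia-smi"]))

# Step 2: reproduce only the initialization step in isolation. If this
# alone aborts with the same "Check failed: ptr" message, the problem is
# the environment (occupied GPU, CUDA/driver mismatch), not the network.
import paddle.v2 as paddle
paddle.init(use_gpu=True, trainer_count=1)
print("GPU initialization succeeded")
```

If the isolated init also fails, freeing the GPU (stopping the processes nvidia-smi lists) or temporarily running with use_gpu=False should confirm whether the device itself is the problem.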