Aborted at (unix time) try "date -d @" if you are using GNU date ***
Created by: lawup
Hello Paddle maintainers. Following the official instructions, I set up a Paddle environment on Ubuntu 14.04 (64-bit) by installing PaddlePaddle from the deb package (paddle-0.8.0b0-Linux-cpu.deb) on a Baidu Cloud BCC server (8 cores, 8 GB RAM). The quickstart example runs fine, but when I train the image classification demo ( http://www.paddlepaddle.org/doc/demo/image_classification/index.html ) with `sh train.sh`, it fails with the error below. I have searched for a solution without success and would appreciate any guidance. Since the server is CPU-only, the only change I made to the demo was setting use_gpu=1 to use_gpu=0. The key error line is: *** Aborted at 1478770503 (unix time) try "date -d @1478770503" if you are using GNU date ***
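For reference, the change amounts to a single flag. This is a sketch of the trainer invocation reconstructed from the command line in the log below; the actual layout of `train.sh` may differ, and only `--use_gpu` was modified:

```shell
#!/bin/sh
# Sketch of the train.sh invocation (reconstructed from the logged command line).
# The only edit to the demo: --use_gpu changed from 1 to 0 (CPU-only server).
paddle train \
  --config=vgg_16_cifar.py \
  --dot_period=10 \
  --log_period=100 \
  --test_all_data_in_one_period=1 \
  --use_gpu=0 \
  --trainer_count=1 \
  --num_passes=200 \
  --save_dir=./cifar_vgg_model
```

All other flags match the defaults shipped with the demo.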
The full error log follows:
root@instance-89rrmi6d:~/Paddle-develop/demo/image_classification# sh train.sh
I1110 17:35:02.720129 13086 Util.cpp:144] commandline: /usr/bin/../opt/paddle/bin/paddle_trainer --config=vgg_16_cifar.py --dot_period=10 --log_period=100 --test_all_data_in_one_period=1 --use_gpu=0 --trainer_count=1 --num_passes=200 --save_dir=./cifar_vgg_model
I1110 17:35:02.720441 13086 Util.cpp:113] Calling runInitFunctions
I1110 17:35:02.720787 13086 Util.cpp:126] Call runInitFunctions done.
[INFO 2016-11-10 17:35:02,762 layers.py:1430] channels=3 size=3072
[INFO 2016-11-10 17:35:02,763 layers.py:1430] output size for __conv_0__ is 32
[INFO 2016-11-10 17:35:02,764 layers.py:1430] channels=64 size=65536
[INFO 2016-11-10 17:35:02,765 layers.py:1430] output size for __conv_1__ is 32
[INFO 2016-11-10 17:35:02,766 layers.py:1490] output size for __pool_0__ is 16*16
[INFO 2016-11-10 17:35:02,767 layers.py:1430] channels=64 size=16384
[INFO 2016-11-10 17:35:02,767 layers.py:1430] output size for __conv_2__ is 16
[INFO 2016-11-10 17:35:02,769 layers.py:1430] channels=128 size=32768
[INFO 2016-11-10 17:35:02,769 layers.py:1430] output size for __conv_3__ is 16
[INFO 2016-11-10 17:35:02,770 layers.py:1490] output size for __pool_1__ is 8*8
[INFO 2016-11-10 17:35:02,771 layers.py:1430] channels=128 size=8192
[INFO 2016-11-10 17:35:02,771 layers.py:1430] output size for __conv_4__ is 8
[INFO 2016-11-10 17:35:02,773 layers.py:1430] channels=256 size=16384
[INFO 2016-11-10 17:35:02,773 layers.py:1430] output size for __conv_5__ is 8
[INFO 2016-11-10 17:35:02,775 layers.py:1430] channels=256 size=16384
[INFO 2016-11-10 17:35:02,775 layers.py:1430] output size for __conv_6__ is 8
[INFO 2016-11-10 17:35:02,776 layers.py:1490] output size for __pool_2__ is 4*4
[INFO 2016-11-10 17:35:02,777 layers.py:1430] channels=256 size=4096
[INFO 2016-11-10 17:35:02,777 layers.py:1430] output size for __conv_7__ is 4
[INFO 2016-11-10 17:35:02,779 layers.py:1430] channels=512 size=8192
[INFO 2016-11-10 17:35:02,779 layers.py:1430] output size for __conv_8__ is 4
[INFO 2016-11-10 17:35:02,781 layers.py:1430] channels=512 size=8192
[INFO 2016-11-10 17:35:02,781 layers.py:1430] output size for __conv_9__ is 4
[INFO 2016-11-10 17:35:02,782 layers.py:1490] output size for __pool_3__ is 2*2
[INFO 2016-11-10 17:35:02,783 layers.py:1490] output size for __pool_4__ is 1*1
[INFO 2016-11-10 17:35:02,785 networks.py:960] The input order is [image, label]
[INFO 2016-11-10 17:35:02,786 networks.py:963] The output order is [__cost_0__]
I1110 17:35:02.795364 13086 Trainer.cpp:169] trainer mode: Normal
I1110 17:35:02.825625 13086 PyDataProvider2.cpp:219] loading dataprovider image_provider::processData
[INFO 2016-11-10 17:35:02,890 image_provider.py:52] Image size: 32
[INFO 2016-11-10 17:35:02,891 image_provider.py:53] Meta path: data/cifar-out/batches/batches.meta
[INFO 2016-11-10 17:35:02,891 image_provider.py:58] DataProvider Initialization finished
I1110 17:35:02.891543 13086 PyDataProvider2.cpp:219] loading dataprovider image_provider::processData
[INFO 2016-11-10 17:35:02,891 image_provider.py:52] Image size: 32
[INFO 2016-11-10 17:35:02,891 image_provider.py:53] Meta path: data/cifar-out/batches/batches.meta
[INFO 2016-11-10 17:35:02,892 image_provider.py:58] DataProvider Initialization finished
I1110 17:35:02.892290 13086 GradientMachine.cpp:134] Initing parameters..
I1110 17:35:03.474557 13086 GradientMachine.cpp:141] Init parameters done.
Current Layer forward/backward stack is
*** Aborted at 1478770503 (unix time) try "date -d @1478770503" if you are using GNU date ***
Current Layer forward/backward stack is
PC: @ 0x7f60e41676e2 (unknown)
Current Layer forward/backward stack is
*** SIGSEGV (@0x7f6000000009) received by PID 13086 (TID 0x7f60e50c7780) from PID 9; stack trace: ***
Current Layer forward/backward stack is
@ 0x7f60e49aa340 (unknown)
Current Layer forward/backward stack is
@ 0x7f60e41676e2 (unknown)
Current Layer forward/backward stack is
@ 0x62393a paddle::DenseScanner::fill()
Current Layer forward/backward stack is
@ 0x627ebc paddle::PyDataProvider2::getNextBatchInternal()
Current Layer forward/backward stack is
@ 0x61e8c2 paddle::DataProvider::getNextBatch()
Current Layer forward/backward stack is
@ 0x65b9c7 paddle::Trainer::trainOnePass()
Current Layer forward/backward stack is
@ 0x65f2d7 paddle::Trainer::train()
Current Layer forward/backward stack is
@ 0x50b0a3 main
Current Layer forward/backward stack is
@ 0x7f60e2ff4ec5 (unknown)
Current Layer forward/backward stack is
@ 0x516095 (unknown)
Current Layer forward/backward stack is
@ 0x0 (unknown)
/usr/bin/paddle: line 46: 13086 Segmentation fault (core dumped) ${DEBUGGER} $MYDIR/../opt/paddle/bin/paddle_trainer ${@:2}
No data to plot. Exiting!
root@instance-89rrmi6d:~/Paddle-develop/demo/image_classification#