Is it necessary to handle "SIGFPE, Arithmetic exception"?
Created by: zuowang
I got "SIGFPE, Arithmetic exception." Is it necessary to handle such exception?
Program received signal SIGFPE, Arithmetic exception.
[Switching to Thread 0x7fffe9e38700 (LWP 1662)]
__expf_finite () at ../sysdeps/x86_64/fpu/e_expf.S:132
132 ../sysdeps/x86_64/fpu/e_expf.S: No such file or directory.
(gdb) bt
#0 __expf_finite () at ../sysdeps/x86_64/fpu/e_expf.S:132
#1 0x00007ffff6502a42 in __GI___expf (x=91.3864059)
at ../sysdeps/ieee754/flt-32/w_expf.c:26
#2 0x00000000009c65de in std::exp (__x=91.3864059)
at /usr/include/c++/4.8/cmath:242
#3 0x0000000000ca4b7b in paddle::binary::vTanh<float>::cpuOperator (
this=0x7fffe9e37920, a=@0x7fff804a0b48: -45.693203,
b=@0x7fff804a0b48: -45.693203)
at /root/paddle/paddle/math/MathFunctions.cpp:163
#4 0x0000000000ca45d9 in hl_cpu_apply_binary_op<float, paddle::binary::vTanh<float>, false, false> (op=..., A_h=0x7fff80461040, B_h=0x7fff80461040, dimM=1,
dimN=102400, lda=102400, ldb=102400)
at /root/paddle/paddle/cuda/include/hl_cpu_matrix_kernel.cuh:49
#5 0x0000000000ca3f18 in paddle::vTanh<float> (n=102400, a=0x7fff80461040,
r=0x7fff80461040) at /root/paddle/paddle/math/MathFunctions.cpp:166
#6 0x0000000000c655d7 in paddle::CpuMatrix::tanh (this=0x7fff80460f68,
output=...) at /root/paddle/paddle/math/Matrix.cpp:3233
#7 0x0000000000a52cc9 in paddle::tanhActivation::forward (this=0x3884170,
act=...)
at /root/paddle/paddle/gserver/activations/ActivationFunction.cpp:219
#8 0x00000000009f82ce in paddle::Layer::forwardActivation (this=0x387d640)
at /root/paddle/paddle/gserver/layers/Layer.cpp:332
#9 0x000000000098fd7a in paddle::FullyConnectedLayer::forward (
this=0x387d640, passType=paddle::enumeration_wrapper::PASS_TRAIN)
at /root/paddle/paddle/gserver/layers/FullyConnectedLayer.cpp:100
#10 0x0000000000aaa70c in paddle::NeuralNetwork::forward (this=0x33bcdb0,
inArgs=std::vector of length 8, capacity 8 = {...}, outArgs=0x33b2a08,
passType=paddle::enumeration_wrapper::PASS_TRAIN)
at /root/paddle/paddle/gserver/gradientmachines/NeuralNetwork.cpp:242
#11 0x0000000000ae75c0 in paddle::TrainerThread::forward (this=0x33b28e0)
at /root/paddle/paddle/gserver/gradientmachines/MultiGradientMachine.cpp:581
#12 0x0000000000ae728e in paddle::TrainerThread::computeThread (this=0x33b28e0)
at /root/paddle/paddle/gserver/gradientmachines/MultiGradientMachine.cpp:519
#13 0x0000000000ae6d79 in paddle::TrainerThread::__lambda46::operator() (
__closure=0x3885c60)
at /root/paddle/paddle/gserver/gradientmachines/MultiGradientMachine.cpp:465
#14 0x0000000000aecf22 in std::_Bind_simple<paddle::TrainerThread::start()::__lambda46()>::_M_invoke<>(std::_Index_tuple<>) (this=0x3885c60)
at /usr/include/c++/4.8/functional:1732
#15 0x0000000000aecc81 in std::_Bind_simple<paddle::TrainerThread::start()::__lambda46()>::operator()(void) (this=0x3885c60)
at /usr/include/c++/4.8/functional:1720
#16 0x0000000000aecae2 in std::thread::_Impl<std::_Bind_simple<paddle::TrainerThread::start()::__lambda46()> >::_M_run(void) (this=0x3885c48)
at /usr/include/c++/4.8/thread:115
#17 0x00007ffff6886a60 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#18 0x00007ffff78c2184 in start_thread (arg=0x7fffe9e38700)
at pthread_create.c:312
#19 0x00007ffff5fee37d in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
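Some context on why this trace looks the way it does: frame #2 shows std::exp being called with x = 91.3864059. For single precision, ln(FLT_MAX) is about 88.72, so expf(91.39) overflows float. By default an overflow just returns +inf and sets the FE_OVERFLOW flag; it only turns into a SIGFPE if floating-point exception trapping has been enabled somewhere in the process (for example via glibc's feenableexcept). A minimal standalone sketch, assuming some code in the process enabled trapping:

```cpp
// repro_sigfpe.cpp -- hypothetical reproduction of the crash above.
// feenableexcept is a glibc extension declared in <fenv.h>
// (it needs _GNU_SOURCE, which g++ defines by default on Linux).
#include <fenv.h>
#include <cmath>
#include <cstdio>

int main() {
    // After this call, an overflowing FP operation delivers SIGFPE
    // instead of quietly returning +inf. Paddle itself, or some library
    // it links against, would have to do something equivalent for the
    // crash in the backtrace to occur.
    feenableexcept(FE_OVERFLOW);

    // ln(FLT_MAX) ~= 88.72, so expf(91.3864059f) overflows float;
    // this is the exact value seen in frame #2 of the backtrace.
    float x = 91.3864059f;
    float y = std::exp(x);                // SIGFPE raised here

    std::printf("exp(%f) = %f\n", x, y);  // not reached with trapping on
    return 0;
}
```

Run this under gdb and the backtrace bottoms out in __expf_finite, just like the one above. Without the feenableexcept call the same std::exp silently returns +inf and no signal is raised, which is the usual default behavior.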
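As for where the overflowing value comes from: frames #3-#5 show paddle::vTanh in MathFunctions.cpp with a = -45.693203, and 2 × 45.693203 = 91.386406, exactly the x passed to exp in frame #2. That strongly suggests tanh is being computed through exp(-2a), which overflows for large negative a even though tanh itself is bounded. A common guard (a generic sketch, not Paddle's actual vTanh code; safe_tanh and the clamp constant 88 are illustrative choices) is to clamp the exp argument:

```cpp
#include <cmath>
#include <cstdio>

// Hypothetical overflow-safe variant of the tanh-via-exp pattern in
// frame #3. Clamping keeps the exp() argument below ln(FLT_MAX) ~= 88.72,
// so FE_OVERFLOW is never raised even with trapping enabled.
inline float safe_tanh(float a) {
    // tanh(a) = (1 - exp(-2a)) / (1 + exp(-2a)).
    // For |2a| > ~88 the float result has already saturated at +/-1,
    // so the clamp does not change the rounded answer.
    float t = -2.0f * a;
    if (t >  88.0f) t =  88.0f;
    if (t < -88.0f) t = -88.0f;
    float e = std::exp(t);          // at most exp(88) ~= 1.65e38 < FLT_MAX
    return (1.0f - e) / (1.0f + e);
}

int main() {
    // The exact input from frame #3 of the backtrace: a = -45.693203.
    std::printf("%f\n", safe_tanh(-45.693203f));  // prints -1.000000
    return 0;
}
```

With a clamp like this the activation saturates cleanly at -1 instead of routing an overflowing value through exp, so the question of trapping SIGFPE never arises for this code path.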