error: ‘__m256’ does not name a type
Created by: CodesFarmer
I am compiling Paddle on CentOS 6.10 (kernel 3.10.0_3-0-0-22) with g++/gcc 4.8.2 in order to build a Docker image; the configuration used for the build is shown below. The build broke at about 12% progress with the following error:
/paddle/paddle/legacy/cuda/src/avx_mathfun.h:52:9: error: ‘__m256’ does not name a type
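For context, `__m256` is the 256-bit AVX vector type from `<immintrin.h>`, and the compiler only declares it when AVX code generation is enabled (for GCC, `-mavx` or an `-march` that implies it). A quick way to check whether the toolchain in the container can build AVX code at all is a standalone probe like the one below (the file name `avx_probe.cpp` is just for illustration):

```cpp
// avx_probe.cpp -- minimal check that the compiler exposes AVX intrinsics.
//
// Build and run with:   g++ -mavx avx_probe.cpp -o avx_probe && ./avx_probe
// If compilation fails with "'__m256' does not name a type", AVX is not
// enabled for this compiler invocation -- the same symptom as the Paddle build.
#include <immintrin.h>
#include <cstdio>

int main() {
    __m256 a = _mm256_set1_ps(1.0f);  // broadcast 1.0f into all 8 float lanes
    __m256 b = _mm256_add_ps(a, a);   // b = 2.0f in every lane
    float out[8];
    _mm256_storeu_ps(out, b);         // unaligned store back to plain floats
    std::printf("lane 0 = %f\n", out[0]);
    return 0;
}
```

If this compiles and runs on the build machine, the toolchain itself is fine, and the question becomes whether `-mavx` actually reaches the compile line for the file that includes `avx_mathfun.h`.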
Configuring cmake in /paddle/build ...
-DCMAKE_BUILD_TYPE=Release
-DPYTHON_EXECUTABLE:FILEPATH=/opt/python/cp27-cp27mu/bin/python
-DPYTHON_INCLUDE_DIR:PATH=/opt/python/cp27-cp27mu/include/python2.7
-DPYTHON_LIBRARIES:FILEPATH=/opt/_internal/cpython-2.7.11-ucs4/lib/libpython2.7.so
-DWITH_DSO=ON
-DWITH_DOC=OFF
-DWITH_GPU=ON
-DWITH_AMD_GPU=OFF
-DWITH_DISTRIBUTE=ON
-DWITH_MKL=ON
-DWITH_AVX=ON
-DWITH_GOLANG=OFF
-DCUDA_ARCH_NAME=All
-DWITH_C_API=OFF
-DWITH_PYTHON=ON
-DWITH_SWIG_PY=ON
-DCUDNN_ROOT=/usr/
-DWITH_TESTING=OFF
-DWITH_FAST_BUNDLE_TEST=ON
-DCMAKE_MODULE_PATH=/opt/rocm/hip/cmake
-DCMAKE_EXPORT_COMPILE_COMMANDS=ON
-DWITH_FLUID_ONLY=OFF
-DWITH_CONTRIB=ON
-DWITH_INFERENCE=ON
-DWITH_INFERENCE_API_TEST=ON
-DINFERENCE_DEMO_INSTALL_DIR=/root/.cache/inference_demo
-DWITH_ANAKIN=OFF
-DPY_VERSION=2.7
-DCMAKE_INSTALL_PREFIX=/paddle/build
========================================
cmake .. -DCMAKE_BUILD_TYPE=Release -DPYTHON_EXECUTABLE:FILEPATH=/opt/python/cp27-cp27mu/bin/python -DPYTHON_INCLUDE_DIR:PATH=/opt/python/cp27-cp27mu/include/python2.7 -DPYTHON_LIBRARIES:FILEPATH=/opt/_internal/cpython-2.7.11-ucs4/lib/libpython2.7.so -DWITH_DSO=ON -DWITH_DOC=OFF -DWITH_GPU=ON -DWITH_AMD_GPU=OFF -DWITH_DISTRIBUTE=ON -DWITH_MKL=ON -DWITH_AVX=ON -DWITH_GOLANG=OFF -DCUDA_ARCH_NAME=All -DWITH_SWIG_PY=ON -DWITH_C_API=OFF -DWITH_PYTHON=ON -DCUDNN_ROOT=/usr/ -DWITH_TESTING=OFF -DWITH_FAST_BUNDLE_TEST=ON -DCMAKE_MODULE_PATH=/opt/rocm/hip/cmake -DWITH_FLUID_ONLY=OFF -DCMAKE_EXPORT_COMPILE_COMMANDS=ON -DWITH_CONTRIB=ON -DWITH_INFERENCE=ON -DWITH_INFERENCE_API_TEST=ON -DINFERENCE_DEMO_INSTALL_DIR=/root/.cache/inference_demo -DWITH_ANAKIN=OFF -DPY_VERSION=2.7 -DCMAKE_INSTALL_PREFIX=/paddle/build
-- Found Paddle host system: centos, version: 6.10
-- Found Paddle host system's CPU: 8 cores
-- CXX compiler: /opt/rh/devtoolset-2/root/usr/bin/c++, version: GNU 4.8.2
-- C compiler: /opt/rh/devtoolset-2/root/usr/bin/cc, version: GNU 4.8.2
-- Do not have AVX2 intrinsics and disabled MKL-DNN
-- MKLML_VER: mklml_lnx_2019.0.20180710, MKLML_URL: http://paddlepaddledeps.cdn.bcebos.com/mklml_lnx_2019.0.20180710.tgz
-- Protobuf protoc executable: /paddle/build/third_party/install/protobuf/bin/protoc
-- Protobuf-lite library: /paddle/build/third_party/install/protobuf/lib/libprotobuf-lite.a
-- Protobuf library: /paddle/build/third_party/install/protobuf/lib/libprotobuf.a
-- Protoc library: /paddle/build/third_party/install/protobuf/lib/libprotoc.a
-- Protobuf version: 3.1
-- Found cblas and lapack in MKLML (include: /paddle/build/third_party/install/mklml/include, library: /paddle/build/third_party/install/mklml/lib/libmklml_intel.so)
-- BLAS library: /paddle/build/third_party/install/mklml/lib/libmklml_intel.so
-- BLAS Include: /paddle/build/third_party/install/mklml/include
-- BOOST_TAR: boost_1_41_0, BOOST_URL: http://paddlepaddledeps.cdn.bcebos.com/boost_1_41_0.tar.gz
-- warp-ctc library: /paddle/build/third_party/install/warpctc/lib/libwarpctc.so
-- Use grpc framework.
-- Current cuDNN header is /usr/local/cuda/include/cudnn.h. Current cuDNN version is v7.
-- Enable Intel OpenMP with /paddle/build/third_party/install/mklml/lib/libiomp5.so
-- CUDA detected: 9.0
-- Added CUDA NVCC flags for: sm_30 sm_35 sm_50 sm_52 sm_60 sm_61 sm_70
-- Paddle version is 0.0.0
-- Skip compiling with MKLDNNMatrix
-- Skip compiling with MKLDNNLayers and MKLDNNActivations
-- Compile with MKLPackedLayers
-- add pass graph_to_program_pass base
-- add pass graph_viz_pass base
-- add pass fc_fuse_pass inference
-- add pass attention_lstm_fuse_pass inference
-- add pass infer_clean_graph_pass inference
-- add pass fc_lstm_fuse_pass inference
-- add pass embedding_fc_lstm_fuse_pass inference
-- add pass fc_gru_fuse_pass inference
-- add pass seq_concat_fc_fuse_pass inference
-- add pass conv_bn_fuse_pass inference
-- add pass seqconv_eltadd_relu_fuse_pass inference
-- generating grpc send_recv.proto
-- Configuring done
-- Generating done
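Two lines in the log above are worth noting: the compiler is devtoolset-2 GCC 4.8.2, which does support AVX intrinsics when invoked with `-mavx`, and CMake reports "Do not have AVX2 intrinsics" even though `-DWITH_AVX=ON` is set, so the configure-time SIMD detection and the flags reaching individual compile lines may disagree (since `-DCMAKE_EXPORT_COMPILE_COMMANDS=ON` is passed, the generated `compile_commands.json` in `/paddle/build` shows the exact flags used for each file). AVX-only headers are conventionally guarded on the `__AVX__` macro, which GCC defines only when AVX code generation is enabled; the sketch below illustrates that convention (it is not the actual contents of `avx_mathfun.h`):

```cpp
// Illustrative __AVX__ guard (a sketch, not the real avx_mathfun.h source).
// GCC defines __AVX__ only when compiling with -mavx (or a wider -march),
// so code behind this guard never triggers "'__m256' does not name a type".
#ifdef __AVX__
#include <immintrin.h>

// Example AVX helper: halve all 8 float lanes of a __m256 vector.
static inline __m256 half_avx(__m256 x) {
    return _mm256_mul_ps(x, _mm256_set1_ps(0.5f));
}
#else
#error "This file requires AVX; compile with -mavx."
#endif
```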