The examples from the "C++ Inference API" documentation compile, but crash at runtime with: Segmentation fault (core dumped)
Created by: zuokanmingyuli
- Environment info
1) PaddlePaddle version: 1.5.2.post107
2) CPU: -
3) GPU: notebook GTX 1060, CUDA 10.0, cuDNN 7.5.1
4) System: Ubuntu 18.04, Python 3.6.8
- Inference info 1) C++ inference: the inference library package is cuda10.0_cudnn7_avx_mkl; its version.txt reads:
GIT COMMIT ID: a5696153a013552bfaa612ca407eba318e18ba54
WITH_MKL: ON
WITH_MKLDNN: ON
WITH_GPU: ON
CUDA version: 10.0
CUDNN version: v7
2) Full CMake commands for the include path and linked libraries:
INCLUDE_DIRECTORIES(/home/xxx/paddle_cpp/src/include)
target_link_libraries(paddle_NativePredictor
/home/xxx/paddle_cpp/src/lib/libpaddle_fluid.so
/home/xxx/paddle_cpp/src/third_party/install/mkldnn/lib/libmkldnn.so.0
/home/xxx/paddle_cpp/src/third_party/install/mklml/lib/libmklml_intel.so
)
3) Inference library source: downloaded from the official website
- Reproduction info: The first two usage samples in the documentation "Introduction to the C++ Inference API" (https://www.paddlepaddle.org.cn/documentation/docs/zh/advanced_usage/deploy/inference/native_infer.html) fail for me. I put the sample code from the "NativePredictor usage sample" section into a single paddle_NativePredictor.cpp without any changes (a rough sketch of what that sample does is given right below), wrote the CMakeLists.txt as shown above, and the build succeeded; running the executable then fails with: Segmentation fault (core dumped). Doing the same with the code from the "AnalysisPredictor usage sample" section gives essentially the same error.
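For context, the NativePredictor sample exercises the NativeConfig path. The sketch below is reconstructed from the public NativeConfig / CreatePaddlePredictor / PaddleTensor API rather than copied from the docs, and the [1, 3, 224, 224] input shape is my own assumption for a mobilenet-style model:

#include <vector>
#include "paddle_inference_api.h"

int main() {
  // Configure the legacy native predictor.
  paddle::NativeConfig config;
  config.model_dir = "./mobilenet";      // directory containing the model files
  config.use_gpu = true;
  config.fraction_of_gpu_memory = 0.15;  // fraction of GPU memory to reserve
  config.device = 0;                     // GPU id

  // Create the predictor (this is roughly where my run crashes).
  auto predictor = paddle::CreatePaddlePredictor<paddle::NativeConfig>(config);

  // Feed one all-zero image of shape [1, 3, 224, 224].
  std::vector<float> input(1 * 3 * 224 * 224, 0.f);
  paddle::PaddleTensor tensor;
  tensor.shape = {1, 3, 224, 224};
  tensor.data = paddle::PaddleBuf(input.data(), input.size() * sizeof(float));
  tensor.dtype = paddle::PaddleDType::FLOAT32;

  std::vector<paddle::PaddleTensor> inputs{tensor};
  std::vector<paddle::PaddleTensor> outputs;
  predictor->Run(inputs, &outputs, /*batch_size=*/1);
  return 0;
}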
- Problem description: the compiled "NativePredictor usage sample" binary crashes with Segmentation fault (core dumped). gdb output:
Temporary breakpoint 1, main ()
at /home/abc/paddle_cpp/src/paddle_NativePredictor.cpp:54
54 int main() {
(gdb) n
56 paddle::RunNative(1, "./mobilenet");
(gdb)
Program received signal SIGSEGV, Segmentation fault.
0x00007fffe018319f in std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(std::string const&) ()
from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
(gdb)
Single stepping until exit from function _ZNSsC2ERKSs,
which has no line number information.
The second example, the "AnalysisPredictor usage sample", also fails. Using the same approach and only swapping the .cpp contents for the code from that section of the docs, compilation stops with: error: 'accumulate' is not a member of 'std'. I therefore added "#include <numeric>" at the top of the file (the sample calls std::accumulate; see the sketch after the log below), after which the build fails at link time:
CMakeFiles/paddle_NativePredictor.dir/paddle_NativePredictor.cpp.o: in function ‘paddle::CreateConfig(paddle::AnalysisConfig*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)’:
/home/abc/paddle_cpp2/src/paddle_NativePredictor.cpp:7: undefined reference to ‘paddle::AnalysisConfig::SetModel(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)’
collect2: error: ld returned 1 exit status
CMakeFiles/paddle_NativePredictor.dir/build.make:86: recipe for target 'paddle_NativePredictor' failed
make[2]: *** [paddle_NativePredictor] Error 1
CMakeFiles/Makefile2:72: recipe for target 'CMakeFiles/paddle_NativePredictor.dir/all' failed
make[1]: *** [CMakeFiles/paddle_NativePredictor.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2
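For reference, this is roughly the stage of the AnalysisPredictor sample where std::accumulate appears: once the predictor exists, it is driven through ZeroCopyTensor, and the output element count is computed from the tensor shape. The helper name RunOnce and the [1, 3, 224, 224] input shape below are my own assumptions; the sketch is based on the public ZeroCopyTensor API, not copied from the docs:

#include <functional>
#include <numeric>
#include <vector>
#include "paddle_inference_api.h"

// Run one zero-copy inference pass on an already-created AnalysisPredictor.
void RunOnce(paddle::PaddlePredictor* predictor) {
  // Feed an all-zero image of shape [1, 3, 224, 224] into the first input.
  auto input_names = predictor->GetInputNames();
  auto input_t = predictor->GetInputTensor(input_names[0]);
  std::vector<float> input(1 * 3 * 224 * 224, 0.f);
  input_t->Reshape({1, 3, 224, 224});
  input_t->copy_from_cpu(input.data());

  predictor->ZeroCopyRun();

  // Size the output buffer from the output tensor's shape; this is the
  // std::accumulate call that needs <numeric> (and <functional> for
  // std::multiplies).
  auto output_names = predictor->GetOutputNames();
  auto output_t = predictor->GetOutputTensor(output_names[0]);
  std::vector<int> output_shape = output_t->shape();
  int out_num = std::accumulate(output_shape.begin(), output_shape.end(), 1,
                                std::multiplies<int>());
  std::vector<float> out_data(out_num);
  output_t->copy_to_cpu(out_data.data());
}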
I then rewrote the code as follows:
#include "paddle_inference_api.h"
#include <numeric>
int main() {
paddle::AnalysisConfig config;
int batch_size = 1;
std::string model_dirname = "./mobilenet";
config.SetModel(model_dirname);
config.EnableUseGpu(1000, 0);
config.SwitchUseFeedFetchOps(false);
config.SwitchSpecifyInputNames(true);
// config.SwitchIrDebug(true);
auto predictor = CreatePaddlePredictor(config);
return 0;
}
This compiles, but running it again crashes with: Segmentation fault (core dumped). gdb output:
64 int main() {
(gdb) n
66 paddle::AnalysisConfig config;
(gdb)
68 int batch_size = 1;
(gdb)
69 std::string model_dirname = "./mobilenet";
(gdb)
71 config.SetModel(model_dirname);
(gdb)
72 config.EnableUseGpu(10, 0 );
(gdb)
Program received signal SIGSEGV, Segmentation fault.
__memmove_avx_unaligned_erms ()
at ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:249
249 ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S: No such file or directory.