Inference-time error related to fraction_of_gpu_memory_to_use
Created by: houj04
System environment:
Linux ubuntu 5.0.0-23-generic #24~18.04.1-Ubuntu SMP Mon Jul 29 16:12:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
gcc version:
gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1)
Inference library source: https://www.paddlepaddle.org.cn/documentation/docs/zh/1.5/advanced_usage/deploy/inference/build_and_install_lib_cn.html — the cpu_avx_mkl build of version 1.5.1, downloaded from the link above.
Inference library version:
GIT COMMIT ID: bc9fd1fc WITH_MKL: ON WITH_MKLDNN: ON WITH_GPU: OFF
Code being compiled:
#include "paddle_inference_api.h"

int main() {
  // create a config and modify associated options
  paddle::NativeConfig config;
  config.model_dir = "xxx";
  config.use_gpu = false;
  config.fraction_of_gpu_memory = 0.15;
  config.device = 0;
  // create a native PaddlePredictor
  auto predictor = paddle::CreatePaddlePredictor<paddle::NativeConfig>(config);
  return 0;
}
Compile and run commands (the inference library was extracted to ~/fluid_inference/):
#!/bin/bash
set -x
# compile
g++ -I ~/fluid_inference/paddle/include/ simple.cpp \
    -L ~/fluid_inference/paddle/lib/ -lpaddle_fluid \
    -Wl,-rpath ~/fluid_inference/third_party/install/mklml/lib/ \
    -Wl,-rpath ~/fluid_inference/third_party/install/mkldnn/lib/ \
    -Wl,-rpath ~/fluid_inference/paddle/lib/
# prepare running libs
export LD_LIBRARY_PATH=~/fluid_inference/third_party/install/mklml/lib/:~/fluid_inference/third_party/install/mkldnn/lib/:~/fluid_inference/paddle/lib/
# run
./a.out
Runtime error message:
ERROR: unknown command line flag 'fraction_of_gpu_memory_to_use'
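A plausible explanation, given that the library was built with WITH_GPU: OFF, is that the gflag `fraction_of_gpu_memory_to_use` only exists in GPU-enabled builds, yet setting `config.fraction_of_gpu_memory` makes the predictor pass it anyway. A workaround worth trying (a sketch, not a confirmed fix) is to leave the GPU-related fields at their defaults when running CPU-only inference:

```cpp
#include "paddle_inference_api.h"

int main() {
  // CPU-only config: do not set fraction_of_gpu_memory or device,
  // so no GPU-related flag should be forwarded to the flag parser.
  paddle::NativeConfig config;
  config.model_dir = "xxx";
  config.use_gpu = false;

  auto predictor = paddle::CreatePaddlePredictor<paddle::NativeConfig>(config);
  return 0;
}
```

This keeps the rest of the reproduction unchanged; only the two GPU fields are dropped.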