Win7 with the official v1.7 prebuilt library: build error "cannot open file libpaddle_fluid.lib"
Created by: ZHOUKAI1
Win7 + VS2015. I downloaded the official prebuilt Paddle v1.7 library to load a model and run inference, but the project will not link against the library. Official library: https://www.paddlepaddle.org.cn/documentation/docs/zh/advanced_guide/inference_deployment/inference/windows_cpp_inference.html
I tried both the CUDA 10.0 + cuDNN 7 build and the CUDA 9 build; both fail with:
Error LNK1104: cannot open file "D:\paddle\lib\libpaddle_fluid.lib"  paddlecuda  Y:\DL\paddle\paddle_infer_vs2015\paddlecuda\LINK  1
The CUDA paths and the Paddle include and lib paths are all configured, so I can't tell where the problem is. Any help is appreciated, thanks!
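For reference, the relevant VS2015 project settings amount to roughly the following (the D:\paddle paths mirror the one in the error message above; treat them as placeholders for my actual layout):

- C/C++ → General → Additional Include Directories: D:\paddle\include
- Linker → General → Additional Library Directories: D:\paddle\lib
- Linker → Input → Additional Dependencies: libpaddle_fluid.lib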
If I remove the libpaddle_fluid.lib dependency, the build instead fails with:
Error LNK2001: unresolved external symbol "class std::unique_ptr<class paddle::PaddlePredictor,struct std::default_delete<class paddle::PaddlePredictor> > __cdecl paddle::CreatePaddlePredictor<struct paddle::AnalysisConfig>(struct paddle::AnalysisConfig const &)" (??$CreatePaddlePredictor@UAnalysisConfig@paddle@@@paddle@@YA?AV?$unique_ptr@VPaddlePredictor@paddle@@U?$default_delete@VPaddlePredictor@paddle@@@std@@@std@@AEBUAnalysisConfig@0@@Z)  paddlecuda  Y:\DL\paddle\paddle_infer_vs2015\paddlecuda\windows_mobilenet.obj  1
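That symbol should live in libpaddle_fluid.lib itself, which is why removing the lib only trades LNK1104 for LNK2001. As a sanity check (run from a VS developer command prompt; this is just a generic way to inspect a .lib, not something from the Paddle docs), the symbol can be searched for like this:

```
dumpbin /symbols D:\paddle\lib\libpaddle_fluid.lib | findstr CreatePaddlePredictor
```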
The demo code is the official one:

```cpp
// Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include <algorithm>
#include <chrono>
#include <iostream>
#include <memory>
#include <numeric>
#include <string>
#include <vector>

#include <gflags/gflags.h>
#include <glog/logging.h>

#include "paddle/include/paddle_inference_api.h"
DEFINE_string(modeldir, "", "Directory of the inference model.");
DEFINE_bool(use_gpu, false, "Whether use gpu.");

namespace paddle {
namespace demo {
void RunAnalysis() {
  // 1. create AnalysisConfig
  AnalysisConfig config;
  if (FLAGS_modeldir.empty()) {
    LOG(INFO) << "Usage: path\\mobilenet --modeldir=path/to/your/model";
    exit(1);
  }

  // CreateConfig(&config);
  if (FLAGS_use_gpu) {
    config.EnableUseGpu(100, 0);
  }
  config.SetModel(FLAGS_modeldir + "/model", FLAGS_modeldir + "/params");

  // to use ZeroCopyTensor, this must be set to false
  config.SwitchUseFeedFetchOps(false);
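  // (Note: the ZeroCopy API below reads and writes the tensor buffers
  // directly, bypassing the feed/fetch ops, which is why they are
  // switched off here.)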
  // 2. create predictor, prepare input data
  std::unique_ptr<PaddlePredictor> predictor = CreatePaddlePredictor(config);
  int batch_size = 1;
  int channels = 3;
  int height = 300;
  int width = 300;
  int nums = batch_size * channels * height * width;

  float* input = new float[nums];
  for (int i = 0; i < nums; ++i) input[i] = 0;

  // 3. create input tensor, use ZeroCopyTensor
  auto input_names = predictor->GetInputNames();
  auto input_t = predictor->GetInputTensor(input_names[0]);
  input_t->Reshape({batch_size, channels, height, width});
  input_t->copy_from_cpu(input);

  // 4. run predictor
  predictor->ZeroCopyRun();
  // 5. get output
  std::vector<float> out_data;
  auto output_names = predictor->GetOutputNames();
  auto output_t = predictor->GetOutputTensor(output_names[0]);
  std::vector<int> output_shape = output_t->shape();
  int out_num = std::accumulate(output_shape.begin(), output_shape.end(), 1,
                                std::multiplies<int>());

  out_data.resize(out_num);
  output_t->copy_to_cpu(out_data.data());
  delete[] input;
}
}  // namespace demo
}  // namespace paddle

int main(int argc, char** argv) {
  google::ParseCommandLineFlags(&argc, &argv, true);
  paddle::demo::RunAnalysis();
  std::cout << "=========================Runs successfully===================="
            << std::endl;
  return 0;
}
```
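For completeness, this is how the demo would be invoked once it links (the model directory below is a placeholder for my actual path):

```
paddlecuda.exe --modeldir=D:\models\mobilenet
```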