---
layout: post
title: Compiling Lite for x86
---
* TOC
{:toc}

Paddle-Lite supports building the x86 inference library in a Docker or Linux environment. For environment setup, see [Environment Preparation](../source_compile).

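If you prefer a containerized environment, the sketch below shows one way to set it up. It is only a minimal sketch assuming a plain `ubuntu:18.04` image and a generic tool list; the image and dependency versions recommended in the environment guide above take precedence.

```bash
# Minimal sketch: start a container and install common build tools.
# The official environment guide may instead provide a dedicated Docker image.
docker run -it --name paddle-lite-x86 -v $PWD:/work ubuntu:18.04 /bin/bash

# Inside the container:
apt-get update
apt-get install -y git gcc g++ make cmake python
```
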
## Compilation

1. Download the code
```bash
git clone https://github.com/PaddlePaddle/Paddle-Lite.git
cd Paddle-Lite
# Check out a tag at or after release/v2.0.0
git checkout <release_tag>
```

2. Build from source

```bash
./lite/tools/build.sh x86
```

## Build output

The x86 build output is located in `build.lite.x86/inference_lite_lib`.
**Contents**:

1. `bin` directory: the executable tool `test_model_bin`

2. `cxx` directory: C++ library files and the corresponding header files

- `include`: header files
- `lib`: library files
  - Bundled static libraries:
    - `libpaddle_api_full_bundled.a`: static library containing both the full_api and light_api
    - `libpaddle_api_light_bundled.a`: static library containing only the light_api
  - Shared libraries:
    - `libpaddle_full_api_shared.so`: shared library containing both the full_api and light_api
    - `libpaddle_light_api_shared.so`: shared library containing only the light_api

3. `third_party` directory: third-party library files

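To use the generated library from your own program, add `cxx/include` to the include path and link one of the libraries listed above. The command below is only a minimal sketch: `demo.cc` is a placeholder source file, `-lgflags` is needed only because the example in the next section uses gflags, and depending on build options you may additionally need the third-party libraries (e.g. MKLML) shipped under `third_party`.

```bash
# Minimal sketch: compile a program against the full-API static library.
# Adjust LITE_ROOT to wherever your build output lives; demo.cc is a placeholder.
LITE_ROOT=build.lite.x86/inference_lite_lib
g++ demo.cc -std=c++11 -O2 \
    -I${LITE_ROOT}/cxx/include \
    ${LITE_ROOT}/cxx/lib/libpaddle_api_full_bundled.a \
    -lgflags -lpthread -ldl \
    -o demo
```
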
## x86 inference API usage example

```c++
#include <gflags/gflags.h>
#include <iostream>
#include <vector>
#include "paddle_api.h"          // NOLINT
#include "paddle_use_kernels.h"  // NOLINT
#include "paddle_use_ops.h"      // NOLINT
#include "paddle_use_passes.h"   // NOLINT

using namespace paddle::lite_api;  // NOLINT

DEFINE_string(model_dir, "", "Model dir path.");
DEFINE_string(optimized_model_dir, "", "Optimized model dir.");
DEFINE_bool(prefer_int8_kernel, false, "Prefer to run model with int8 kernels");

int64_t ShapeProduction(const shape_t& shape) {
  int64_t res = 1;
  for (auto i : shape) res *= i;
  return res;
}
void RunModel() {
  // 1. Set CxxConfig
  CxxConfig config;
  config.set_model_file(FLAGS_model_dir + "model");
  config.set_param_file(FLAGS_model_dir + "params");

  config.set_valid_places({
    lite_api::Place{TARGET(kX86), PRECISION(kFloat)},
    lite_api::Place{TARGET(kHost), PRECISION(kFloat)}
  });

  // 2. Create PaddlePredictor by CxxConfig
  std::shared_ptr<PaddlePredictor> predictor =
      CreatePaddlePredictor<CxxConfig>(config);

  // 3. Prepare input data
  std::unique_ptr<Tensor> input_tensor(std::move(predictor->GetInput(0)));
  input_tensor->Resize(shape_t({1, 3, 224, 224}));
  auto* data = input_tensor->mutable_data<float>();
  for (int i = 0; i < ShapeProduction(input_tensor->shape()); ++i) {
    data[i] = 1;
  }

  // 4. Run predictor
  predictor->Run();

  // 5. Get output
  std::unique_ptr<const Tensor> output_tensor(
      std::move(predictor->GetOutput(0)));
  std::cout << "Output dim: " << output_tensor->shape()[1] << std::endl;
  for (int i = 0; i < ShapeProduction(output_tensor->shape()); i += 100) {
    std::cout << "Output[" << i << "]:" << output_tensor->data<float>()[i] << std::endl;
  }
}

int main(int argc, char** argv) {
  google::ParseCommandLineFlags(&argc, &argv, true);
  RunModel();
  return 0;
}
```
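
Assuming the program above has been compiled into an executable named `demo` (the name and model path below are placeholders), it can be run by pointing `--model_dir` at a directory containing the combined `model` and `params` files. Note that `set_model_file`/`set_param_file` simply append to `model_dir`, so the value should end with a trailing `/`.

```bash
# Illustrative invocation; "demo" and the model directory are placeholders.
./demo --model_dir=./mobilenet_v1/
```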