Unverified commit 4fed9d0b authored by silingtong123 and committed by GitHub

modify the doc of Windows inference library (#1663)

Parent fd9d77cd
@@ -29,17 +29,20 @@
--------------
Users can also build the C++ inference library from the PaddlePaddle core code by configuring the following compile options at build time:
|Option | Value |
|:-------------|:-------------------|
|CMAKE_BUILD_TYPE | Release |
|ON_INFER | ON (recommended) |
|WITH_GPU | ON/OFF |
|WITH_MKL | ON/OFF |
|WITH_PYTHON | OFF |
|Option |Description | Value |
|:-------------|:-------|:------------|
|CMAKE_BUILD_TYPE | Build type for the configuration generator; the Windows inference library currently supports only Release | Release |
|ON_INFER | Whether to build the inference library; must be set to ON when compiling the inference library | ON |
|WITH_GPU | Whether to support GPU | ON/OFF |
|WITH_MKL | Whether to use Intel MKL (Math Kernel Library) | ON/OFF |
|WITH_PYTHON | Whether to embed the Python interpreter | OFF (recommended) |
|MSVC_STATIC_CRT| Whether to compile with the /MT runtime; Windows compiles with /MT by default |ON/OFF|
|CUDA_TOOKIT_ROOT_DIR| Root directory of CUDA; must be set when compiling the GPU inference library |YOUR_CUDA_PATH|
Please follow the recommended values to avoid linking unnecessary libraries. Set the other optional compile options as needed.
For the meaning of more compile options, see the [compile options table](../../../beginners_guide/install/Tables.html/#Compile).
Steps to install and compile the inference library on Windows (run the following commands in the Windows command prompt):
1. Clone the PaddlePaddle source code into the Paddle folder under the current directory and enter it:
@@ -49,18 +52,22 @@ Steps to install and compile the inference library on Windows (run the following commands in the Windows command prompt)
```
2. Run cmake:
- Compile the CPU inference library
```bash
# Create the build directory for compilation
# Create and enter the build directory
mkdir build
cd build
cmake .. -G "Visual Studio 14 2015" -A x64 -T host=x64 -DCMAKE_BUILD_TYPE=Release -DWITH_MKL=OFF -DWITH_GPU=OFF -DON_INFER=ON -DWITH_PYTHON=OFF
# -DWITH_GPU controls whether GPU is used and -DWITH_MKL controls whether Intel MKL (Math Kernel Library) is used; configure them as needed.
# Windows compiles with /MT by default. To use /MD instead, run the command below; if you are unsure about the difference between the two, use the command above.
cmake .. -G "Visual Studio 14 2015" -A x64 -T host=x64 -DCMAKE_BUILD_TYPE=Release -DWITH_MKL=OFF -DWITH_GPU=OFF -DON_INFER=ON -DWITH_PYTHON=OFF -DMSVC_STATIC_CRT=OFF
```
- Compile the GPU inference library:
```bash
# -DCUDA_TOOKIT_ROOT_DIR is the CUDA root directory, e.g. -DCUDA_TOOKIT_ROOT_DIR="D:\\cuda"
cmake .. -G "Visual Studio 14 2015" -A x64 -T host=x64 -DCMAKE_BUILD_TYPE=Release -DWITH_MKL=ON -DWITH_GPU=ON -DON_INFER=ON -DWITH_PYTHON=OFF -DCUDA_TOOKIT_ROOT_DIR=YOUR_CUDA_PATH
```
3. Open the `paddle.sln` file with Blend for Visual Studio 2015, set the platform to `x64` and the configuration to `Release`, then build the inference_lib_dist project.
How to build: select the corresponding project in Visual Studio, right-click it and choose "Build".
@@ -111,17 +118,17 @@ version.txt records the version information of this inference library, including the Git Commit ID, the
git clone https://github.com/PaddlePaddle/Paddle.git
```
### Compile the demo
Steps to compile the inference demo on Windows (run the following commands in the Windows command prompt):
#### Compile and run with the script
Open a cmd window and run the following commands:
Enter the demo_ci directory and run the script `run_windows_demo.bat`, entering parameters as prompted:
```dos
# path is the directory where Paddle was downloaded
cd path\Paddle\paddle\fluid\inference\api\demo_ci
run_windows_demo.bat
```
Some of the options of run_windows_demo.bat are listed below; enter parameters as prompted.
Some of the options of run_windows_demo.bat are listed below:
```dos
gpu_inference=Y # whether to use the GPU inference library; the CPU inference library is used by default
@@ -134,101 +141,101 @@ vcvarsall_dir=path\vc\vcvarsall.bat # set the path of the Visual Studio native tools command prompt
```
#### Compile and run manually
Open a cmd window and run the following commands:
```dos
# path is the directory where Paddle was downloaded
cd path\Paddle\paddle\fluid\inference\api\demo_ci
mkdir build
cd build
```
`cmake .. -G "Visual Studio 14 2015" -A x64 -T host=x64 -DWITH_GPU=OFF -DWITH_MKL=ON -DWITH_STATIC_LIB=ON -DCMAKE_BUILD_TYPE=Release -DDEMO_NAME=simple_on_word2vec -DPADDLE_LIB=path_to_the_paddle_lib -DMSVC_STATIC_CRT=ON`
Note:
-DDEMO_NAME is the file to be compiled
-DPADDLE_LIB is the path of fluid_inference_install_dir, for example
-DPADDLE_LIB=D:\fluid_inference_install_dir
CMake can be [downloaded from the official site](https://cmake.org/download/) and added to the environment variables.
After the command finishes, the build directory looks like the figure below; open the solution file that the arrow points to:
<p align="center">
<img src="https://raw.githubusercontent.com/PaddlePaddle/FluidDoc/develop/doc/fluid/advanced_usage/deploy/inference/image/image3.png">
</p>
Change the build configuration to `Release`.
<p align="center">
<img src="https://raw.githubusercontent.com/PaddlePaddle/FluidDoc/develop/doc/fluid/advanced_usage/deploy/inference/image/image6.png">
</p>
<p align="center">
<img src="https://raw.githubusercontent.com/PaddlePaddle/FluidDoc/develop/doc/fluid/advanced_usage/deploy/inference/image/image7.png">
</p>
[Download the model](http://paddle-inference-dist.bj.bcebos.com/word2vec.inference.model.tar.gz) and decompress it into the current directory, then run the commands:
1. Enable GLOG
`set GLOG_v=100`
2. Run inference; path is the directory where the model was decompressed
`Release\simple_on_word2vec.exe --dirname=path\word2vec.inference.model`
1. Enter the demo_ci directory, then create and enter the build directory
```dos
# path is the directory where Paddle was downloaded
cd path\Paddle\paddle\fluid\inference\api\demo_ci
mkdir build
cd build
```
<p align="center">
<img src="https://raw.githubusercontent.com/PaddlePaddle/FluidDoc/develop/doc/fluid/advanced_usage/deploy/inference/image/image9.png">
</p>
2. Run cmake (CMake can be [downloaded from the official site](https://cmake.org/download/) and added to the environment variables):
- Compile the demo with the CPU inference library
```dos
# -DDEMO_NAME is the file to be compiled
# -DPADDLE_LIB is the inference library directory, e.g. -DPADDLE_LIB=D:\fluid_inference_install_dir
cmake .. -G "Visual Studio 14 2015" -A x64 -T host=x64 -DWITH_GPU=OFF -DWITH_MKL=ON -DWITH_STATIC_LIB=ON ^
-DCMAKE_BUILD_TYPE=Release -DDEMO_NAME=simple_on_word2vec -DPADDLE_LIB=path_to_the_paddle_lib -DMSVC_STATIC_CRT=ON
```
- Compile the demo with the GPU inference library
```dos
# -DCUDA_LIB is the CUDA library directory, e.g. -DCUDA_LIB=D:\cuda\lib\x64
cmake .. -G "Visual Studio 14 2015" -A x64 -T host=x64 -DWITH_GPU=ON -DWITH_MKL=ON -DWITH_STATIC_LIB=ON ^
-DCMAKE_BUILD_TYPE=Release -DDEMO_NAME=simple_on_word2vec -DPADDLE_LIB=path_to_the_paddle_lib -DMSVC_STATIC_CRT=ON -DCUDA_LIB=YOUR_CUDA_LIB
```
3. Open the `cpp_inference_demo.sln` file with Blend for Visual Studio 2015, set the platform to `x64` and the configuration to `Release`, then build the simple_on_word2vec project.
How to build: select the corresponding project in Visual Studio, right-click it and choose "Build".
4. [Download the model](http://paddle-inference-dist.bj.bcebos.com/word2vec.inference.model.tar.gz) and decompress it into the current directory, then run the commands:
```dos
# Enable GLOG
set GLOG_v=100
# Run inference; path is the directory where the model was decompressed
Release\simple_on_word2vec.exe --dirname=path\word2vec.inference.model
```
## Use AnalysisConfig to manage the inference configuration
### Implement a simple inference demo
[Complete code example](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/inference/api/demo_ci/windows_mobilenet.cc)
This example uses AnalysisConfig to manage the inference configuration of AnalysisPredictor: it covers setting the model path, choosing the device the inference engine runs on, and managing input/output with ZeroCopyTensor. The configuration method is as follows:
#### Create AnalysisConfig
```C++
AnalysisConfig config;
```
**Note:** When using ZeroCopyTensor, you must set `config->SwitchUseFeedFetchOps(false);` when creating the config.
```C++
config->SwitchUseFeedFetchOps(false); // disable the feed and fetch OPs; this is required when using the ZeroCopy interface
config->EnableUseGpu(100 /*set the initial GPU memory pool to 100 MB*/, 0 /*set GPU ID to 0*/); // enable GPU inference
```
#### Set the model and parameter paths
When loading a model from disk, there are two ways to set the model and parameter paths in AnalysisConfig, depending on how the model and parameter files are stored. The combined form is used here:
* Combined form: when the model folder `model_dir` contains only one model file `__model__` and one parameter file `__params__`, pass in the paths of the model file and the parameter file.
```C++
config->SetModel("./model_dir/__model__", "./model_dir/__params__");
```
This example uses AnalysisConfig to manage the inference configuration of AnalysisPredictor: it covers setting the model path, choosing the device the inference engine runs on, and managing input/output with ZeroCopyTensor. The steps are as follows (a complete sketch combining them is given after the closing note of this section):
#### Manage input with ZeroCopyTensor
ZeroCopyTensor is the input/output data structure of AnalysisPredictor.
**Note:** When using ZeroCopyTensor, you must set `config->SwitchUseFeedFetchOps(false);` when creating the config.
1. Create AnalysisConfig
```C++
AnalysisConfig config;
config->SwitchUseFeedFetchOps(false); // disable the feed and fetch OPs; this is required when using the ZeroCopy interface
// config->EnableUseGpu(100 /*set the initial GPU memory pool to 100 MB*/, 0 /*set GPU ID to 0*/); // enable GPU inference
```
```C++
// get the input tensor from the created AnalysisPredictor
auto input_names = predictor->GetInputNames();
auto input_t = predictor->GetInputTensor(input_names[0]);
2. Set the model and parameter paths in the config
// reshape the tensor; channels, height and width must match what the model input requires
input_t->Reshape({batch_size, channels, height, width});
```
When loading a model from disk, there are two ways to set the model and parameter paths in AnalysisConfig, depending on how the model and parameter files are stored. The combined form is used here:
- Non-combined form: when the model folder `model_dir` contains one model file and multiple parameter files, pass in the model folder path; the model file name defaults to `__model__`.
``` c++
config->SetModel("path\\model_dir");
```
- Combined form: when the model folder `model_dir` contains only one model file `__model__` and one parameter file `__params__`, pass in the paths of the model file and the parameter file.
```C++
config->SetModel("path\\model_dir\\__model__", "path\\model_dir\\__params__");
```
3. Create the predictor and prepare the input data
```C++
std::unique_ptr<PaddlePredictor> predictor = CreatePaddlePredictor(config);
int batch_size = 1;
int channels = 3; // channels, height and width must match the shape of the corresponding model input
int height = 300;
int width = 300;
int nums = batch_size * channels * height * width;
float* input = new float[nums];
for (int i = 0; i < nums; ++i) input[i] = 0;
```
4. Manage input with ZeroCopyTensor
```C++
// get the input Tensor (a ZeroCopyTensor) from the created AnalysisPredictor
auto input_names = predictor->GetInputNames();
auto input_t = predictor->GetInputTensor(input_names[0]);
// reshape the Tensor, then copy the prepared input data from the CPU into the ZeroCopyTensor
input_t->Reshape({batch_size, channels, height, width});
input_t->copy_from_cpu(input);
```
#### Run the inference engine
```C++
predictor->ZeroCopyRun();
```
5. Run the inference engine
```C++
predictor->ZeroCopyRun();
```
#### Manage output with ZeroCopyTensor
```C++
auto output_names = predictor->GetOutputNames();
auto output_t = predictor->GetOutputTensor(output_names[0]);
```
6. Manage output with ZeroCopyTensor
```C++
auto output_names = predictor->GetOutputNames();
auto output_t = predictor->GetOutputTensor(output_names[0]);
std::vector<int> output_shape = output_t->shape();
int out_num = std::accumulate(output_shape.begin(), output_shape.end(), 1,
std::multiplies<int>());
std::vector<float> out_data;
out_data.resize(out_num);
output_t->copy_to_cpu(out_data.data()); // copy the data in the ZeroCopyTensor back to the CPU to obtain the output
delete[] input;
```
**Note:** For more information about AnalysisPredictor, please refer to the [introduction to the C++ inference API](./native_infer.html).
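Putting steps 1 to 6 together, the walkthrough above can be condensed into a single small program. The following is only a minimal sketch, not part of the official demo: it assumes the combined model layout described above, uses placeholder model paths and a placeholder 1x3x300x300 input, and assumes the header is available as `paddle_inference_api.h` from the installed library's include directory.
```C++
#include <functional>
#include <memory>
#include <numeric>
#include <vector>

#include "paddle_inference_api.h"  // assumed include name; adjust to your fluid_inference_install_dir layout

int main() {
  // Step 1: create the config and disable the feed/fetch OPs (required by the ZeroCopy interface).
  paddle::AnalysisConfig config;
  config.SwitchUseFeedFetchOps(false);
  // config.EnableUseGpu(100 /*initial GPU memory pool in MB*/, 0 /*GPU ID*/);  // uncomment for GPU inference

  // Step 2: combined form, one __model__ file plus one __params__ file (placeholder paths).
  config.SetModel("path\\model_dir\\__model__", "path\\model_dir\\__params__");

  // Step 3: create the predictor and prepare dummy input data.
  auto predictor = paddle::CreatePaddlePredictor(config);
  const int batch_size = 1, channels = 3, height = 300, width = 300;  // must match the model input
  std::vector<float> input(batch_size * channels * height * width, 0.0f);

  // Step 4: feed the input through a ZeroCopyTensor.
  auto input_names = predictor->GetInputNames();
  auto input_t = predictor->GetInputTensor(input_names[0]);
  input_t->Reshape({batch_size, channels, height, width});
  input_t->copy_from_cpu(input.data());

  // Step 5: run the inference engine.
  predictor->ZeroCopyRun();

  // Step 6: fetch the output through a ZeroCopyTensor.
  auto output_names = predictor->GetOutputNames();
  auto output_t = predictor->GetOutputTensor(output_names[0]);
  std::vector<int> output_shape = output_t->shape();
  int out_num = std::accumulate(output_shape.begin(), output_shape.end(), 1,
                                std::multiplies<int>());
  std::vector<float> out_data(out_num);
  output_t->copy_to_cpu(out_data.data());
  return 0;
}
```
Build and link it the same way as the demo_ci projects described earlier.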
@@ -30,13 +30,17 @@ Build From Source Code
Users can also compile C++ inference libraries from the PaddlePaddle core code by specifying the following compile options at compile time:
|Option | Value |
|:-------------|:-------------------|
|CMAKE_BUILD_TYPE | Release |
|ON_INFER | ON (recommended) |
|WITH_GPU | ON/OFF |
|WITH_MKL | ON/OFF |
|WITH_PYTHON | OFF |
|Option | Description | Value |
|:-------------|:-----|:--------------|
|CMAKE_BUILD_TYPE|Specifies the build type on single-configuration generators; the Windows inference library currently supports only Release| Release |
|ON_INFER|Whether to generate the inference library. Must be set to ON when compiling the inference library. | ON |
|WITH_GPU|Whether to support GPU | ON/OFF |
|WITH_MKL|Whether to use Intel MKL (Math Kernel Library) | ON/OFF |
|WITH_PYTHON|Whether the Python interpreter is embedded | OFF |
|MSVC_STATIC_CRT|Whether to compile with the /MT runtime (Windows uses /MT by default) | ON/OFF |
|CUDA_TOOKIT_ROOT_DIR | When compiling the GPU inference library, you need to set the CUDA root directory | YOUR_CUDA_PATH |
For details on the compilation options, see [the compilation options list](../../../beginners_guide/install/Tables_en.html/#Compile)
**Paddle Windows Inference Library Compilation Steps**
@@ -47,6 +51,8 @@ Users can also compile C++ inference libraries from the PaddlePaddle core code b
```
2. Run Cmake command
- compile CPU inference library
```bash
# create build directory
mkdir build
@@ -54,13 +60,17 @@ Users can also compile C++ inference libraries from the PaddlePaddle core code b
# change to the build directory
cd build
cmake .. -G "Visual Studio 14 2015" -A x64 -T host=x64 -DCMAKE_BUILD_TYPE=Release -DWITH_MKL=OFF -DWITH_GPU=OFF -DON_INFER=ON -DWITH_PYTHON=OFF
# use -DWITH_GPU to control whether the CPU or GPU version is built
# use -DWITH_MKL to select the math library: Intel MKL or OpenBLAS
# By default on Windows we use /MT for the C runtime library. If you want to use /MD, use the command below.
# If you are unsure about the difference between the two, use the command above.
cmake .. -G "Visual Studio 14 2015" -A x64 -T host=x64 -DCMAKE_BUILD_TYPE=Release -DWITH_MKL=OFF -DWITH_GPU=OFF -DON_INFER=ON -DWITH_PYTHON=OFF -DMSVC_STATIC_CRT=OFF
```
- compile GPU inference library
```bash
# -DCUDA_TOOKIT_ROOT_DIR is cuda root directory, such as -DCUDA_TOOKIT_ROOT_DIR="D:\\cuda"
cmake .. -G "Visual Studio 14 2015" -A x64 -T host=x64 -DCMAKE_BUILD_TYPE=Release -DWITH_MKL=ON -DWITH_GPU=ON -DON_INFER=ON -DWITH_PYTHON=OFF -DCUDA_TOOKIT_ROOT_DIR=YOUR_CUDA_PATH
```
3. Open `paddle.sln` using Visual Studio 2015, choose `x64` for Solution Platforms and `Release` for Solution Configurations, then build the `inference_lib_dist` project in the Solution Explorer (right-click the project and click Build).
@@ -113,9 +123,9 @@ git clone https://github.com/PaddlePaddle/Paddle.git
#### Compile with script
Run run_windows_demo.bat from cmd on Windows and enter parameters as prompted.
Open the Windows command line, run `run_windows_demo.bat`, and enter parameters as prompted.
```dos
# Path is the directory where you downloaded paddle.
# Path is the directory of Paddle you downloaded.
cd path\Paddle\paddle\fluid\inference\api\demo_ci
run_windows_demo.bat
```
@@ -133,103 +143,107 @@ vcvarsall_dir=path\vc\vcvarsall.bat # Set the path of the Visual Studio command prompt
#### Compile manually
```dos
# Path is the directory where you downloaded paddle.
cd path\Paddle\paddle\fluid\inference\api\demo_ci
mkdir build
cd build
cmake .. -G "Visual Studio 14 2015" -A x64 -T host=x64 -DWITH_GPU=OFF -DWITH_MKL=OFF -DWITH_STATIC_LIB=ON -DCMAKE_BUILD_TYPE=Release -DDEMO_NAME=simple_on_word2vec -DPADDLE_LIB=path_to_the_patddle\paddle_fluid.lib -DMSVC_STATIC_CRT=ON
```
Note:
-DDEMO_NAME is the file to be built
-DPADDLE_LIB is the path of fluid_install_dir, for example:
-DPADDLE_LIB=D:\fluid_install_dir
CMake can be [downloaded at the official site](https://cmake.org/download/) and added to the environment variables.
After the execution, the build directory is shown in the picture below. Then please open the solution file that the arrow points at:
<p align="center">
<img src="https://raw.githubusercontent.com/PaddlePaddle/FluidDoc/develop/doc/fluid/advanced_usage/deploy/inference/image/image3.png">
</p>
Change the build configuration to `Release`.
<p align="center">
<img src="https://raw.githubusercontent.com/PaddlePaddle/FluidDoc/develop/doc/fluid/advanced_usage/deploy/inference/image/image6.png">
</p>
<p align="center">
<img src="https://raw.githubusercontent.com/PaddlePaddle/FluidDoc/develop/doc/fluid/advanced_usage/deploy/inference/image/image7.png">
</p>
From the dependency packages provided, copy the openblas and model files under the Release directory into the generated Release directory.
<p align="center">
<img src="https://raw.githubusercontent.com/PaddlePaddle/FluidDoc/develop/doc/fluid/advanced_usage/deploy/inference/image/image8.png">
</p>
[Download the model](http://paddle-inference-dist.bj.bcebos.com/word2vec.inference.model.tar.gz) and decompress it to the current directory. Run the commands:
1. Enable GLOG
1. Create and change to the build directory
```dos
# path is the directory where Paddle is downloaded
cd path\Paddle\paddle\fluid\inference\api\demo_ci
mkdir build
cd build
```
2. Run the cmake command (CMake can be [downloaded at the official site](https://cmake.org/download/) and added to the environment variables):
- compile the inference demo with the CPU inference library
```dos
# Path is the directory where you downloaded Paddle.
# -DDEMO_NAME is the file to be built
# -DPADDLE_LIB is the path of fluid_install_dir, for example: -DPADDLE_LIB=D:\fluid_install_dir
cmake .. -G "Visual Studio 14 2015" -A x64 -T host=x64 -DWITH_GPU=OFF -DWITH_MKL=OFF -DWITH_STATIC_LIB=ON -DCMAKE_BUILD_TYPE=Release -DDEMO_NAME=simple_on_word2vec -DPADDLE_LIB=path_to_the_paddle_lib -DMSVC_STATIC_CRT=ON
```
- compile the inference demo with the GPU inference library
```dos
cmake .. -G "Visual Studio 14 2015" -A x64 -T host=x64 -DWITH_GPU=ON -DWITH_MKL=ON -DWITH_STATIC_LIB=ON ^
-DCMAKE_BUILD_TYPE=Release -DDEMO_NAME=simple_on_word2vec -DPADDLE_LIB=path_to_the_paddle_lib -DMSVC_STATIC_CRT=ON -DCUDA_LIB=YOUR_CUDA_LIB
```
3. Open `cpp_inference_demo.sln` using Visual Studio 2015, choose `x64` for Solution Platforms and `Release` for Solution Configurations, then build the `simple_on_word2vec` project in the Solution Explorer (right-click the project and click Build).
`set GLOG_v=100`
From the dependency packages provided, copy the openblas and model files under the Release directory into the generated Release directory.
2. Start inference. Path is the directory where you decompressed the model.
<p align="center">
<img src="https://raw.githubusercontent.com/PaddlePaddle/FluidDoc/develop/doc/fluid/advanced_usage/deploy/inference/image/image8.png">
</p>
`Release\simple_on_word2vec.exe --dirname=path\word2vec.inference.model`
4. [Download the model](http://paddle-inference-dist.bj.bcebos.com/word2vec.inference.model.tar.gz) and decompress it to the current directory. Run the commands:
```dos
# Enable GLOG
set GLOG_v=100
<p align="center">
<img src="https://raw.githubusercontent.com/PaddlePaddle/FluidDoc/develop/doc/fluid/advanced_usage/deploy/inference/image/image9.png">
</p>
# Start inference; path is the directory where you decompressed the model
Release\simple_on_word2vec.exe --dirname=path\word2vec.inference.model
```
## Using AnalysisConfig to manage prediction configurations
### Implementing a simple inference demo
[Complete code example](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/inference/api/demo_ci/windows_mobilenet.cc)
This example uses AnalysisConfig to manage the AnalysisPredictor inference configuration. The configuration method is as follows:
#### Create AnalysisConfig
``` c++
AnalysisConfig config;
```
**Note:** With ZeroCopyTensor, you must set `config->SwitchUseFeedFetchOps(false)` when creating the config.
``` c++
config->SwitchUseFeedFetchOps(false); // Turn off the use of feed and fetch OP, this must be set when using the ZeroCopy interface.
config->EnableUseGpu(100 /*Set the GPU initial memory pool to 100MB*/, 0 /*Set GPU ID to 0*/); // Turn on GPU prediction
```
1. Create AnalysisConfig
``` c++
AnalysisConfig config;
config->SwitchUseFeedFetchOps(false); // Turn off the use of feed and fetch OP, this must be set when using the ZeroCopy interface.
// config->EnableUseGpu(100 /*Set the GPU initial memory pool to 100MB*/, 0 /*Set GPU ID to 0*/); // Turn on GPU prediction
```
#### Set paths of models and parameters
``` c++
config->SetModel("./model_dir/__model__", "./model_dir/__params__");
```
2. Set the paths of the model and parameters
- When there is a model file and multiple parameter files under the model folder `model_dir`, the model folder path is passed in, and the model file name defaults to `__model__`.
``` c++
config->SetModel("path\\model_dir\\__model__", "path\\model_dir\\__params__");
```
#### Manage input with ZeroCopyTensor
ZeroCopyTensor is the input/output data structure of AnalysisPredictor
- When there is only one model file `__model__` and one parameter file `__params__` in the model folder `model_dir`, the paths of the model file and the parameter file are passed in.
```C++
config->SetModel("path\\model_dir\\__model__", "path\\model_dir\\__params__");
```
**Note:** With ZeroCopyTensor, you must set `config->SwitchUseFeedFetchOps(false)` when creating the config.
3. Create predictor and prepare input data
``` C++
std::unique_ptr<PaddlePredictor> predictor = CreatePaddlePredictor(config);
int batch_size = 1;
int channels = 3; // The parameters of channels, height, and width must be the same as those required by the input in the model.
int height = 300;
int width = 300;
int nums = batch_size * channels * height * width;
float* input = new float[nums];
for (int i = 0; i < nums; ++i) input[i] = 0;
```
``` c++
auto input_names = predictor->GetInputNames();
auto input_t = predictor->GetInputTensor(input_names[0]);
4. Manage input with ZeroCopyTensor
```C++
auto input_names = predictor->GetInputNames();
auto input_t = predictor->GetInputTensor(input_names[0]);
// Reshape the input tensor, where the parameters of channels, height, and width must be the same as those required by the input in the model.
input_t->Reshape({batch_size, channels, height, width});
```
// Reshape the input tensor, copy the prepared input data from the CPU to ZeroCopyTensor
input_t->Reshape({batch_size, channels, height, width});
input_t->copy_from_cpu(input);
```
#### Run prediction engine
```C++
predictor->ZeroCopyRun();
```
5. Run prediction engine
```C++
predictor->ZeroCopyRun();
```
#### Manage output with ZeroCopyTensor
```C++
auto output_names = predictor->GetOutputNames();
auto output_t = predictor->GetOutputTensor(output_names[0]);
```
6. Manage output with ZeroCopyTensor
```C++
auto output_names = predictor->GetOutputNames();
auto output_t = predictor->GetOutputTensor(output_names[0]);
std::vector<int> output_shape = output_t->shape();
int out_num = std::accumulate(output_shape.begin(), output_shape.end(), 1,
std::multiplies<int>());
std::vector<float> out_data;
out_data.resize(out_num);
output_t->copy_to_cpu(out_data.data()); // Copy data from ZeroCopyTensor to the CPU to obtain the output
delete[] input;
```
**Note:** For more introduction to AnalysisPredictor, please refer to the [introduction of C++ Prediction API](./native_infer_en.html).
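As a small usage illustration, once `out_data` has been filled you can post-process it on the CPU. The helper below is hypothetical and assumes a classification-style output with one score per class, which depends on your model; it is not part of the demo or of the inference API.
```C++
#include <algorithm>
#include <cstddef>
#include <iterator>
#include <vector>

// Hypothetical helper: index of the largest score in the output buffer.
// Assumes the model emits one score per class; adapt to your model's actual output layout.
std::size_t Top1(const std::vector<float>& out_data) {
  return static_cast<std::size_t>(
      std::distance(out_data.begin(),
                    std::max_element(out_data.begin(), out_data.end())));
}
```
For a real model, inspect `output_shape` first to confirm how the scores are laid out.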