Commit 422168d9 authored by silingtong123, committed by liuwei1031

modify the windows inference doc (#1612)

* modify the windows inference doc

* Remove the complete code sample and add the corresponding link

* change linux style path to windows style path
Parent 3a27ae3d
@@ -2,16 +2,28 @@
Install and Compile C++ Inference Library on Windows
===========================
Download the Installation Package and the Matching Test Environment
-------------

| Version | Inference Libraries (v1.6.1) | Compiler | Build tools | cuDNN | CUDA |
|:---------|:-------------------|:-------------------|:----------------|:--------|:-------|
| cpu_avx_mkl | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.6.1/win-infer/mkl/cpu/fluid_inference_install_dir.zip) | MSVC 2015 update 3 | CMake v3.11.1 | - | - |
| cpu_avx_openblas | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.6.1/win-infer/open/cpu/fluid_inference_install_dir.zip) | MSVC 2015 update 3 | CMake v3.11.1 | - | - |
| cuda9.0_cudnn7_avx_mkl | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.6.1/win-infer/mkl/post97/fluid_inference_install_dir.zip) | MSVC 2015 update 3 | CMake v3.11.1 | 7.3.1 | 9 |
| cuda10.0_cudnn7_avx_mkl | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.6.1/win-infer/mkl/post107/fluid_inference_install_dir.zip) | MSVC 2015 update 3 | CMake v3.11.1 | 7.4.1 | 10 |
| cuda9.0_cudnn7_avx_openblas | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.6.1/win-infer/open/post97/fluid_inference_install_dir.zip) | MSVC 2015 update 3 | CMake v3.11.1 | 7.3.1 | 9 |
### Hardware Environment

Hardware configuration of the test environment:

| CPU | i7-8700K |
|:---------|:-------------------|
| Memory | 16G |
| Hard Disk | 1T HDD + 256G SSD |
| Graphics Card | GTX1080 8G |

The operating system of the test environment is Windows 10 Home Edition.
Build from Source Code
--------------
@@ -43,48 +55,18 @@

Steps to install and compile the inference library on Windows (run the following commands in the Windows command prompt):

```bash
# change to the build directory
cd build
cmake .. -G "Visual Studio 14 2015" -A x64 -T host=x64 -DCMAKE_BUILD_TYPE=Release -DWITH_MKL=OFF -DWITH_GPU=OFF -DON_INFER=ON -DWITH_PYTHON=OFF
# -DWITH_GPU selects whether to build with GPU support, and -DWITH_MKL selects whether to use Intel MKL (Math Kernel Library); configure both as needed.
# Windows builds use the /MT runtime by default. To build with /MD instead, use the command below. If you are unsure of the difference between the two, use the command above.
cmake .. -G "Visual Studio 14 2015" -A x64 -T host=x64 -DCMAKE_BUILD_TYPE=Release -DWITH_MKL=OFF -DWITH_GPU=OFF -DON_INFER=ON -DWITH_PYTHON=OFF -DMSVC_STATIC_CRT=OFF
```
3. Open the `paddle.sln` file with Visual Studio 2015, choose `x64` as the platform and `Release` as the configuration, and build the inference_lib_dist project.
   How to build: select the corresponding project in Visual Studio, right-click it, and choose "Build".
After a successful build, all the dependencies needed to use the C++ inference library (including: (1) the compiled PaddlePaddle inference library and header files; (2) third-party link libraries and header files; (3) version information and build option information) are placed in the `fluid_inference_install_dir` directory.
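If you prefer the command line, the same target can presumably be built without opening the IDE by using CMake's generic build driver; a minimal sketch, run from the build directory created above:

```bash
# drives MSBuild on the generated Visual Studio solution
cmake --build . --config Release --target inference_lib_dist
```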
version.txt records the version information of the inference library, including the Git commit ID, whether OpenBLAS or MKL is used as the math library, and the CUDA/cuDNN version numbers, for example:
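For illustration only, a version.txt from a GPU build might look like the following (field names follow the description above; the values shown are placeholders):

```
GIT COMMIT ID: cc9028b90ef50a825a722c55e5fda4b7cd26b0d6
WITH_MKL: ON
WITH_GPU: ON
CUDA version: 9.0
CUDNN version: v7
```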
@@ -120,15 +102,46 @@
Install Visual Studio 2015. In the installation options, choose "Custom" and select all features related to C, C++, and VC++.
### Other requirements

1. You need to download the Windows inference library directly, or compile the inference library from the Paddle source code, and make sure the Windows inference library exists.

2. You need to download the Paddle source code and make sure the demo files and script files exist:

```bash
git clone https://github.com/PaddlePaddle/Paddle.git
```

### Compile the demo
#### Compile and run with the script

Open a cmd window and use the following commands:

```dos
# "path" is the directory where Paddle was downloaded
cd path\Paddle\paddle\fluid\inference\api\demo_ci
run_windows_demo.bat
```
Some of the options of run_windows_demo.bat are as follows; enter the parameters as prompted:

```dos
gpu_inference=Y # whether to use the GPU inference library; the CPU inference library is used by default
use_mkl=Y # whether the inference library uses MKL; default: Y
use_gpu=Y # whether to run inference on the GPU; default: N. GPU inference requires downloading the GPU version of the inference library
paddle_inference_lib=path\fluid_inference_install_dir # set the path of the Paddle inference library
cuda_lib_dir=path\lib\x64 # set the path of the CUDA libraries
vcvarsall_dir=path\vc\vcvarsall.bat # set the path of the Visual Studio native tools command prompt (vcvarsall.bat)
```
#### Compile and run manually

Open a cmd window and use the following commands:

```dos
# "path" is the directory where Paddle was downloaded
cd path\Paddle\paddle\fluid\inference\api\demo_ci
mkdir build
cd build
cmake .. -G "Visual Studio 14 2015" -A x64 -T host=x64 -DWITH_GPU=OFF -DWITH_MKL=ON -DWITH_STATIC_LIB=ON -DCMAKE_BUILD_TYPE=Release -DDEMO_NAME=simple_on_word2vec -DPADDLE_LIB=path_to_the_paddle_lib -DMSVC_STATIC_CRT=ON
```
Note:
@@ -146,16 +159,6 @@

CMake can be [downloaded from its official site](https://cmake.org/download/) and added to the environment variables.

<p align="center">
<img src="https://raw.githubusercontent.com/PaddlePaddle/FluidDoc/develop/doc/fluid/advanced_usage/deploy/inference/image/image3.png">
</p>
Change the build configuration to `Release`.

@@ -166,18 +169,66 @@

<p align="center">
<img src="https://raw.githubusercontent.com/PaddlePaddle/FluidDoc/develop/doc/fluid/advanced_usage/deploy/inference/image/image7.png">
</p>
[Download the model](http://paddle-inference-dist.bj.bcebos.com/word2vec.inference.model.tar.gz) and extract it to the current directory, then run the commands:
1. Turn on GLOG

   `set GLOG_v=100`
2. Run inference ("path" is the directory where the model was extracted)

   `Release\simple_on_word2vec.exe --dirname=path\word2vec.inference.model`
<p align="center">
<img src="https://raw.githubusercontent.com/PaddlePaddle/FluidDoc/develop/doc/fluid/advanced_usage/deploy/inference/image/image9.png">
</p>
## Use AnalysisConfig to Manage the Inference Configuration
[Complete code example](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/inference/api/demo_ci/windows_mobilenet.cc)
This example uses AnalysisConfig to manage the inference configuration of AnalysisPredictor, covering how to set the model path, select the device the inference engine runs on, and manage input/output with ZeroCopyTensor. The configuration steps are as follows:
#### Create AnalysisConfig
```C++
AnalysisConfig config;
```
**Note:** When using ZeroCopyTensor, you must set `config.SwitchUseFeedFetchOps(false);` when creating the config.
```C++
config.SwitchUseFeedFetchOps(false);  // disable the feed and fetch ops; required by the ZeroCopy interface
config.EnableUseGpu(100 /* initial GPU memory pool, in MB */, 0 /* GPU ID 0 */);  // enable GPU inference
```
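If inference runs on the CPU instead, a minimal sketch of the corresponding configuration (assuming the `DisableGpu` and `SetCpuMathLibraryNumThreads` methods of AnalysisConfig; the thread count is an illustrative value):

```C++
config.DisableGpu();                    // run inference on the CPU
config.SetCpuMathLibraryNumThreads(4);  // illustrative: math library thread count
```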
#### Set the model and parameter paths

When loading the model from disk, there are two ways to set the model and parameter paths in AnalysisConfig, depending on how the model and parameter files are stored. The combined form is used here:

* Combined form: when the model folder `model_dir` contains only one model file `__model__` and one parameter file `__params__`, pass in the paths of the model file and the parameter file.
```C++
config.SetModel("./model_dir/__model__", "./model_dir/__params__");
```
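For comparison, in the non-combined form (one file per parameter variable inside `model_dir`), only the directory is passed; a hedged sketch assuming the single-argument `SetModel` overload:

```C++
config.SetModel("./model_dir");  // non-combined form: directory with __model__ plus per-variable parameter files
```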
#### Manage input with ZeroCopyTensor

ZeroCopyTensor is the input/output data structure of AnalysisPredictor.

**Note:** When using ZeroCopyTensor, you must set `config.SwitchUseFeedFetchOps(false);` when creating the config.
```C++
// get the input tensor from the created AnalysisPredictor
auto input_names = predictor->GetInputNames();
auto input_t = predictor->GetInputTensor(input_names[0]);

// reshape the tensor; channels, height, and width must match what the model input expects
input_t->Reshape({batch_size, channels, height, width});
```
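After the reshape, the input data still needs to be written into the tensor. A minimal sketch, assuming a float input and that ZeroCopyTensor provides a `copy_from_cpu` method in this release:

```C++
std::vector<float> input_data(batch_size * channels * height * width, 0.f);  // illustrative all-zero input
input_t->copy_from_cpu(input_data.data());  // copy host data into the input tensor
```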
#### Run the inference engine
```C++
predictor->ZeroCopyRun();
```
#### Manage output with ZeroCopyTensor
```C++
auto output_names = predictor->GetOutputNames();
auto output_t = predictor->GetOutputTensor(output_names[0]);
```
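To read the results back into host memory, a hedged sketch (assumes a float output and a `copy_to_cpu` method; `<numeric>` and `<functional>` are needed for std::accumulate and std::multiplies):

```C++
std::vector<int> out_shape = output_t->shape();
int out_num = std::accumulate(out_shape.begin(), out_shape.end(), 1, std::multiplies<int>());
std::vector<float> out_data(out_num);    // buffer sized from the output shape
output_t->copy_to_cpu(out_data.data());  // copy the output tensor to host memory
```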
**Note:** For more about AnalysisPredictor, please refer to the [introduction to the C++ inference API](./native_infer.html).
@@ -5,13 +5,25 @@

Install and Compile C++ Inference Library on Windows
Direct Download and Install
-------------
| Version | Inference Libraries (v1.6.1) | Compiler | Build tools | cuDNN | CUDA |
|:---------|:-------------------|:-------------------|:----------------|:--------|:-------|
| cpu_avx_mkl | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.6.1/win-infer/mkl/cpu/fluid_inference_install_dir.zip) | MSVC 2015 update 3 | CMake v3.11.1 | - | - |
| cpu_avx_openblas | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.6.1/win-infer/open/cpu/fluid_inference_install_dir.zip) | MSVC 2015 update 3 | CMake v3.11.1 | - | - |
| cuda9.0_cudnn7_avx_mkl | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.6.1/win-infer/mkl/post97/fluid_inference_install_dir.zip) | MSVC 2015 update 3 | CMake v3.11.1 | 7.3.1 | 9 |
| cuda10.0_cudnn7_avx_mkl | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.6.1/win-infer/mkl/post107/fluid_inference_install_dir.zip) | MSVC 2015 update 3 | CMake v3.11.1 | 7.4.1 | 10 |
| cuda9.0_cudnn7_avx_openblas | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.6.1/win-infer/open/post97/fluid_inference_install_dir.zip) | MSVC 2015 update 3 | CMake v3.11.1 | 7.3.1 | 9 |
### Hardware Environment
Hardware configuration of the experimental environment:

| CPU | i7-8700K |
|:--------------|:-------------------|
| Memory | 16G |
| Hard Disk | 1T HDD + 256G SSD |
| Graphics Card | GTX1080 8G |
The operating system of the experimental environment is Windows 10 Home Edition.
Build From Source Code
--------------
@@ -41,50 +53,20 @@

Users can also compile C++ inference libraries from the PaddlePaddle core code by the following steps:
```bash
# change to the build directory
cd build
cmake .. -G "Visual Studio 14 2015" -A x64 -T host=x64 -DCMAKE_BUILD_TYPE=Release -DWITH_MKL=OFF -DWITH_GPU=OFF -DON_INFER=ON -DWITH_PYTHON=OFF
# use -DWITH_GPU to control whether to build the CPU or the GPU version
# use -DWITH_MKL to select the math library: Intel MKL or OpenBLAS
# By default on Windows we use /MT for the C runtime library. If you want to use /MD, use the command below.
# If you have no idea of the differences between the two, use the command above.
cmake .. -G "Visual Studio 14 2015" -A x64 -T host=x64 -DCMAKE_BUILD_TYPE=Release -DWITH_MKL=OFF -DWITH_GPU=OFF -DON_INFER=ON -DWITH_PYTHON=OFF -DMSVC_STATIC_CRT=OFF
```
3. Open `paddle.sln` using Visual Studio 2015, choose `x64` for Solution Platforms and `Release` for Solution Configurations, then build the `inference_lib_dist` project in the Solution Explorer (right-click the project and click Build).

The inference library will be installed in `fluid_inference_install_dir`.
version.txt contains the detailed configuration of the library, including the git commit ID, the math library, and the CUDA and cuDNN versions:
GIT COMMIT ID: cc9028b90ef50a825a722c55e5fda4b7cd26b0d6
@@ -118,16 +100,46 @@
Install Visual Studio 2015. Please choose "Custom" in the installation options and select all features related to C, C++, and VC++.
### Other requirements
1. You need to download the Windows inference library or compile the inference library from the Paddle source code, and make sure the Windows inference library exists.

2. You need to download the Paddle source code to make sure the demo files and script files exist:
```bash
git clone https://github.com/PaddlePaddle/Paddle.git
```
### Usage of Inference demo
#### Compile with script
Run run_windows_demo.bat from cmd on Windows, entering the parameters as prompted:
```dos
# "path" is the directory where you downloaded Paddle.
cd path\Paddle\paddle\fluid\inference\api\demo_ci
run_windows_demo.bat
```
Some options of the script are as follows:
```dos
gpu_inference=Y # Use the GPU inference library or not (Y/N), default: N.
use_mkl=Y # Use MKL or not (Y/N), default: Y.
use_gpu=Y # Whether to use the GPU for prediction, default: N. GPU prediction requires the GPU version of the inference library.
paddle_inference_lib=path\fluid_inference_install_dir # Set the path of the Paddle inference library.
cuda_lib_dir=path\lib\x64 # Set the path of the CUDA libraries.
vcvarsall_dir=path\vc\vcvarsall.bat # Set the path of the Visual Studio command prompt (vcvarsall.bat).
```
#### Compile manually
```dos
# "path" is the directory where you downloaded Paddle.
cd path\Paddle\paddle\fluid\inference\api\demo_ci
mkdir build
cd build
cmake .. -G "Visual Studio 14 2015" -A x64 -T host=x64 -DWITH_GPU=OFF -DWITH_MKL=OFF -DWITH_STATIC_LIB=ON -DCMAKE_BUILD_TYPE=Release -DDEMO_NAME=simple_on_word2vec -DPADDLE_LIB=path_to_the_paddle\paddle_fluid.lib -DMSVC_STATIC_CRT=ON
```
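If you prefer not to open the generated solution in the IDE, the demo can presumably also be built from the same cmd window with CMake's generic build driver (a hedged alternative to the Visual Studio steps below):

```dos
cmake --build . --config Release
```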
Note:
@@ -145,16 +157,6 @@

After the execution, the directory build is shown in the picture below. Then please open the solution file:

<p align="center">
<img src="https://raw.githubusercontent.com/PaddlePaddle/FluidDoc/develop/doc/fluid/advanced_usage/deploy/inference/image/image3.png">
</p>
Change the build configuration to `Release`.

@@ -171,16 +173,63 @@

In the dependent packages provided, please copy openblas and model files under the Release directory.

<p align="center">
<img src="https://raw.githubusercontent.com/PaddlePaddle/FluidDoc/develop/doc/fluid/advanced_usage/deploy/inference/image/image8.png">
</p>
[Download the model](http://paddle-inference-dist.bj.bcebos.com/word2vec.inference.model.tar.gz) and decompress it to the current directory. Run the command:
1. Turn on GLOG
`set GLOG_v=100`
2. Start inference ("path" is the directory where you decompressed the model)

   `Release\simple_on_word2vec.exe --dirname=path\word2vec.inference.model`
<p align="center">
<img src="https://raw.githubusercontent.com/PaddlePaddle/FluidDoc/develop/doc/fluid/advanced_usage/deploy/inference/image/image9.png">
</p>
## Using AnalysisConfig to manage prediction configurations
[Complete code example](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/inference/api/demo_ci/windows_mobilenet.cc)
This example uses AnalysisConfig to manage the prediction configuration of AnalysisPredictor. The configuration method is as follows:
#### Create AnalysisConfig
``` c++
AnalysisConfig config;
```
**Note:** With ZeroCopyTensor, you must set `config.SwitchUseFeedFetchOps(false)` when creating the config.
``` c++
config.SwitchUseFeedFetchOps(false); // Turn off the feed and fetch ops; this must be set when using the ZeroCopy interface.
config.EnableUseGpu(100 /*Set the GPU initial memory pool to 100MB*/, 0 /*Set GPU ID to 0*/); // Turn on GPU prediction
```
#### Set paths of models and parameters

Depending on how the model and parameter files are stored, there are two ways to set these paths; the combined form (a single `__model__` file plus a single `__params__` file under `model_dir`) is used here:

``` c++
config.SetModel("./model_dir/__model__", "./model_dir/__params__");
```
#### Manage input with ZeroCopyTensor

ZeroCopyTensor is the input/output data structure of AnalysisPredictor.

**Note:** With ZeroCopyTensor, you must set `config.SwitchUseFeedFetchOps(false)` when creating the config.

``` c++
auto input_names = predictor->GetInputNames();
auto input_t = predictor->GetInputTensor(input_names[0]);

// Reshape the input tensor; channels, height, and width must match what the model input expects.
input_t->Reshape({batch_size, channels, height, width});
```
#### Run prediction engine
```C++
predictor->ZeroCopyRun();
```
#### Manage output with ZeroCopyTensor
```C++
auto output_names = predictor->GetOutputNames();
auto output_t = predictor->GetOutputTensor(output_names[0]);
```
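Putting the pieces together, a minimal end-to-end sketch of the ZeroCopy workflow (the model path, the input shape, and the `copy_from_cpu`/`copy_to_cpu` calls are illustrative assumptions rather than part of the demo):

```C++
#include <functional>
#include <memory>
#include <numeric>
#include <vector>
#include "paddle_inference_api.h"

int main() {
  paddle::AnalysisConfig config;
  config.SetModel("./model_dir/__model__", "./model_dir/__params__");  // combined form
  config.DisableGpu();                  // CPU inference; use EnableUseGpu(...) for GPU
  config.SwitchUseFeedFetchOps(false);  // required by the ZeroCopy interface

  auto predictor = paddle::CreatePaddlePredictor(config);

  // Feed a dummy all-zero batch; the shape is an illustrative assumption.
  auto input_names = predictor->GetInputNames();
  auto input_t = predictor->GetInputTensor(input_names[0]);
  input_t->Reshape({1, 3, 224, 224});
  std::vector<float> input(1 * 3 * 224 * 224, 0.f);
  input_t->copy_from_cpu(input.data());

  predictor->ZeroCopyRun();

  auto output_names = predictor->GetOutputNames();
  auto output_t = predictor->GetOutputTensor(output_names[0]);
  std::vector<int> shape = output_t->shape();
  int num = std::accumulate(shape.begin(), shape.end(), 1, std::multiplies<int>());
  std::vector<float> result(num);
  output_t->copy_to_cpu(result.data());
  return 0;
}
```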
**Note:** For more about AnalysisPredictor, please refer to the [introduction of the C++ Prediction API](./native_infer_en.html).