Unverified commit 64dfd3c8, authored by liuwei1031, committed by GitHub

improve windows inference lib doc (#1440)

* improve windows inference lib doc

* update sample images for C runtime library part of English version
Parent f04f0aa5
......
|Option | Value |
|:-------------|:-------------------|
|CMAKE_BUILD_TYPE | Release |
|ON_INFER | ON (recommended) |
|WITH_GPU | ON/OFF |
|WITH_MKL | ON/OFF |
|WITH_PYTHON | OFF |
Set the options to the recommended values above to avoid linking unnecessary libraries; configure the remaining optional flags as needed.
Steps to install and build the inference library on Windows (run the following commands in a Windows command prompt):
1. Clone the PaddlePaddle source code into a `Paddle` folder under the current directory and enter it:
```bash
git clone https://github.com/PaddlePaddle/Paddle.git
cd Paddle
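# (optional; an assumption, not from the original doc) check out the tag
# matching the v1.5.1 prebuilt packages, e.g.:
# git checkout v1.5.1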
```
2. Run cmake:
```bash
# create a build directory for the compilation
mkdir build
cd build
cmake .. -G "Visual Studio 14 2015 Win64" -DCMAKE_BUILD_TYPE=Release -DWITH_MKL=OFF -DWITH_GPU=OFF -DON_INFER=ON -DWITH_PYTHON=OFF
# -DWITH_GPU toggles GPU support and -DWITH_MKL selects Intel MKL as the math library; configure them as needed.
# Windows builds use the /MT C runtime by default; to use /MD instead, run the command below.
# If you are unsure about the difference between the two, use the command above.
cmake .. -G "Visual Studio 14 2015 Win64" -DCMAKE_BUILD_TYPE=Release -DWITH_MKL=OFF -DWITH_GPU=OFF -DON_INFER=ON -DWITH_PYTHON=OFF -DMSVC_STATIC_CRT=OFF
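# (example; an assumption, not from the original doc) a GPU build with MKL
# flips the two flags:
# cmake .. -G "Visual Studio 14 2015 Win64" -DCMAKE_BUILD_TYPE=Release -DWITH_MKL=ON -DWITH_GPU=ON -DON_INFER=ON -DWITH_PYTHON=OFF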
```
3. Open `paddle.sln` with Blend for Visual Studio 2015, set the platform to `x64` and the configuration to `Release`, then build the `inference_lib_dist` project.
How to: select the project in Visual Studio, right-click it, and choose "Build" (a command-line alternative is sketched below).
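If you prefer the command line over the IDE, the same project can be driven with MSBuild. This is a sketch rather than part of the original instructions; it assumes `msbuild.exe` is on the PATH (e.g. in a VS2015 developer command prompt):
```bash
# build just the inference_lib_dist project from the generated solution
msbuild paddle.sln /t:inference_lib_dist /p:Configuration=Release /p:Platform=x64 /m
```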
After a successful build, everything required to use the C++ inference library, namely (1) the compiled PaddlePaddle inference library and headers, (2) the third-party link libraries and headers, and (3) version and build-option information,
is placed in the `fluid_inference_install_dir` directory, with the following layout:
fluid_inference_install_dir/
├── CMakeCache.txt
├── paddle
│   ├── include
......
<img src="https://raw.githubusercontent.com/PaddlePaddle/FluidDoc/develop/doc/fluid/advanced_usage/deploy/inference/image/image3.png">
</p>
Choose `/MT` or `/MD` for the `Runtime Library` property, according to the inference library build you are using:
<p align="center">
<img src="https://raw.githubusercontent.com/PaddlePaddle/FluidDoc/develop/doc/fluid/advanced_usage/deploy/inference/image/image4.png">
......
Model Inference on Windows
===========================
Pre-Built Inference Libraries
-------------
| Version | Inference Libraries (v1.5.1) |
|:---------|:-------------------|
| cpu_avx_mkl | [fluid_inference.zip](https://paddle-inference-lib.bj.bcebos.com/1.5.1-win/cpu_mkl_avx/fluid_inference_install_dir.zip) |
| cpu_avx_openblas | [fluid_inference.zip](https://paddle-inference-lib.bj.bcebos.com/1.5.1-win/cpu_open_avx/fluid_inference_install_dir.zip) |
| cuda8.0_cudnn7_avx_mkl | [fluid_inference.zip](https://paddle-inference-lib.bj.bcebos.com/1.5.1-win/gpu_mkl_avx_8.0/fluid_inference_install_dir.zip) |
| cuda8.0_cudnn7_avx_openblas | [fluid_inference.zip](https://paddle-inference-lib.bj.bcebos.com/1.5.1-win/gpu_open_avx_8.0/fluid_inference_install_dir.zip)|
| cuda9.0_cudnn7_avx_mkl | [fluid_inference.zip](https://paddle-inference-lib.bj.bcebos.com/1.5.1-win/gpu_mkl_avx_9.0/fluid_inference_install_dir.zip) |
| cuda9.0_cudnn7_avx_openblas | [fluid_inference.zip](https://paddle-inference-lib.bj.bcebos.com/1.5.1-win/gpu_open_avx_9.0/fluid_inference_install_dir.zip) |
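For example, the `cpu_avx_mkl` package from the table can be fetched and unpacked as below; this usage sketch assumes `wget` and `unzip` are available (e.g. under Git Bash on Windows):
```bash
# download the prebuilt cpu_avx_mkl inference package listed above
wget https://paddle-inference-lib.bj.bcebos.com/1.5.1-win/cpu_mkl_avx/fluid_inference_install_dir.zip
# unpack it; headers and libraries land under fluid_inference_install_dir/
unzip fluid_inference_install_dir.zip
```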
Build From Source Code
--------------
Important Compilation Flags:
|Option | Value |
|:-------------|:-------------------|
|CMAKE_BUILD_TYPE | Release |
|ON_INFER | ON (recommended) |
|WITH_GPU | ON/OFF |
|WITH_MKL | ON/OFF |
|WITH_PYTHON | OFF |
**Paddle Windows Inference Library Compilation Steps**
1. Clone Paddle source code from GitHub:
```bash
git clone https://github.com/PaddlePaddle/Paddle.git
cd Paddle
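# (optional; an assumption, not from the original doc) check out the tag
# matching the v1.5.1 prebuilt packages above, e.g.:
# git checkout v1.5.1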
```
2. Run the cmake command:
```bash
# create build directory
mkdir build
# change to the build directory
cd build
cmake .. -G "Visual Studio 14 2015 Win64" -DCMAKE_BUILD_TYPE=Release -DWITH_MKL=OFF -DWITH_GPU=OFF -DON_INFER=ON -DWITH_PYTHON=OFF
# use -DWITH_GPU to choose between a CPU and a GPU build
# use -DWITH_MKL to select math library: Intel MKL or OpenBLAS
# By default, Windows builds use /MT for the C runtime library; if you want /MD, use the command below.
# If you are unsure about the difference between the two, use the command above.
cmake .. -G "Visual Studio 14 2015 Win64" -DCMAKE_BUILD_TYPE=Release -DWITH_MKL=OFF -DWITH_GPU=OFF -DON_INFER=ON -DWITH_PYTHON=OFF -DMSVC_STATIC_CRT=OFF
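# (example; an assumption, not from the original doc) a GPU build with MKL
# flips the two flags:
# cmake .. -G "Visual Studio 14 2015 Win64" -DCMAKE_BUILD_TYPE=Release -DWITH_MKL=ON -DWITH_GPU=ON -DON_INFER=ON -DWITH_PYTHON=OFF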
```
3. Open `paddle.sln` with Visual Studio 2015, choose `x64` for Solution Platforms and `Release` for Solution Configurations, then build the `inference_lib_dist` project in the Solution Explorer (right-click the project and click Build; a command-line alternative is sketched below).
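Alternatively, the build can be scripted with MSBuild instead of the IDE; a sketch under the assumption that `msbuild.exe` is available on the PATH:
```bash
# equivalent command-line build of the inference_lib_dist project
msbuild paddle.sln /t:inference_lib_dist /p:Configuration=Release /p:Platform=x64 /m
```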
The inference library will be installed in `fluid_inference_install_dir`:
fluid_inference_install_dir/
├── CMakeCache.txt
├── paddle
│   ├── include
│   │   ├── paddle_anakin_config.h
│   │   ├── paddle_analysis_config.h
│   │   ├── paddle_api.h
│   │   ├── paddle_inference_api.h
│   │   ├── paddle_mkldnn_quantizer_config.h
│   │   └── paddle_pass_builder.h
│   └── lib
│       ├── libpaddle_fluid.a
│       └── libpaddle_fluid.so
├── third_party
│   ├── boost
│   │   └── boost
│   ├── eigen3
│   │   ├── Eigen
│   │   └── unsupported
│   └── install
│       ├── gflags
│       ├── glog
│       ├── mkldnn
│       ├── mklml
│       ├── protobuf
│       ├── snappy
│       ├── snappystream
│       ├── xxhash
│       └── zlib
└── version.txt
version.txt records the detailed build configuration of the library, including the git commit ID, the math library, and the CUDA/cuDNN versions:
GIT COMMIT ID: cc9028b90ef50a825a722c55e5fda4b7cd26b0d6
WITH_MKL: ON
WITH_MKLDNN: ON
WITH_GPU: ON
CUDA version: 8.0
CUDNN version: v7
Inference Demo Compilation
-------------------
### Hardware Environment
......
<img src="https://raw.githubusercontent.com/PaddlePaddle/FluidDoc/develop/doc/fluid/advanced_usage/deploy/inference/image/image3.png">
</p>
Modify `Runtime Library` to `/MT` (the default) or `/MD`, according to the inference library version:
<p align="center">
<img src="https://raw.githubusercontent.com/PaddlePaddle/FluidDoc/develop/doc/fluid/advanced_usage/deploy/inference/image/project_property.png">
</p>
<p align="center">
<img src="https://raw.githubusercontent.com/PaddlePaddle/FluidDoc/develop/doc/fluid/advanced_usage/deploy/inference/image/runtime_library.png">
</p>
Set the build configuration to `Release`.
......