Unverified commit a380686c, authored by zhouwei25, committed by GitHub

fix inference lib package on windows (#1554)

Update the download links of the inference lib package on Windows.

* fix the docs for Windows compilation, installation and the inference library

* fix inference lib package on windows
Parent 4a4bf7b8
@@ -5,13 +5,13 @@
Direct Download and Install
-------------
| Version | Inference Library (v1.6.0) |
|:---------|:-------------------|
-| cpu_avx_mkl | [fluid_inference.zip](https://paddle-inference-lib.bj.bcebos.com/1.5.2-win/cpu_mkl_avx/fluid_inference_install_dir.zip) |
-| cuda8.0_cudnn7_avx_mkl | [fluid_inference.zip](https://paddle-inference-lib.bj.bcebos.com/1.5.2-win/gpu_mkl_avx_8.0/fluid_inference_install_dir.zip) |
-| cuda9.0_cudnn7_avx_mkl | [fluid_inference.zip](https://paddle-inference-lib.bj.bcebos.com/1.5.2-win/gpu_mkl_avx_9.0/fluid_inference_install_dir.zip) |
-| cuda10.0_cudnn7_avx_mkl | [fluid_inference.zip](https://paddle-inference-lib.bj.bcebos.com/1.5.2-win/gpu_mkl_avx_10.0/fluid_inference_install_dir.zip) |
+| cpu_avx_mkl | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.6.0/win-infer/mkl/cpu/fluid_inference_install_dir.zip) |
+| cpu_avx_openblas | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.6.0/win-infer/open/cpu/fluid_inference_install_dir.zip) |
+| cuda9.0_cudnn7_avx_openblas | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.6.0/win-infer/open/post97/fluid_inference_install_dir.zip) |
+| cuda9.0_cudnn7_avx_mkl | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.6.0/win-infer/mkl/post97/fluid_inference_install_dir.zip) |
+| cuda10.0_cudnn7_avx_mkl | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.6.0/win-infer/mkl/post107/fluid_inference_install_dir.zip) |
Build the Inference Library from Source Code
--------------
@@ -43,11 +43,11 @@ Steps to install and build the inference library on Windows (run in the Windows Command Prompt)
cd build
-cmake .. -G "Visual Studio 14 2015 Win64" -DCMAKE_BUILD_TYPE=Release -DWITH_MKL=OFF -DWITH_GPU=OFF -DON_INFER=ON -DWITH_PYTHON=OFF
+cmake .. -G "Visual Studio 14 2015" -A x64 -DCMAKE_BUILD_TYPE=Release -DWITH_MKL=OFF -DWITH_GPU=OFF -DON_INFER=ON -DWITH_PYTHON=OFF
# -DWITH_GPU toggles GPU support and -DWITH_MKL selects Intel MKL (Math Kernel Library) as the math library; configure both as needed.
# By default, Windows builds use the /MT C runtime. To build with /MD instead, use the command below; if you are unsure of the difference, use the command above.
-cmake .. -G "Visual Studio 14 2015 Win64" -DCMAKE_BUILD_TYPE=Release -DWITH_MKL=OFF -DWITH_GPU=OFF -DON_INFER=ON -DWITH_PYTHON=OFF -DMSVC_STATIC_CRT=OFF
+cmake .. -G "Visual Studio 14 2015" -A x64 -DCMAKE_BUILD_TYPE=Release -DWITH_MKL=OFF -DWITH_GPU=OFF -DON_INFER=ON -DWITH_PYTHON=OFF -DMSVC_STATIC_CRT=OFF
```
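For reference, a GPU build that uses Intel MKL only flips the two flags above. This is a sketch, not part of the original doc; it assumes CUDA and cuDNN are already installed on the build machine:
```
# assumes CUDA and cuDNN are installed; all other flags as above
cmake .. -G "Visual Studio 14 2015" -A x64 -DCMAKE_BUILD_TYPE=Release -DWITH_MKL=ON -DWITH_GPU=ON -DON_INFER=ON -DWITH_PYTHON=OFF
```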
3. Open the `paddle.sln` file with Visual Studio 2015, set the Solution Platform to `x64` and the Configuration to `Release`, then build the `inference_lib_dist` project.
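If you prefer to build from the command line instead of the IDE, something along these lines should work; this is a sketch that assumes `msbuild` is on the PATH (e.g. in a VS2015 developer command prompt):
```
# assumes msbuild is on PATH, e.g. a VS2015 developer command prompt
msbuild paddle.sln /m /p:Configuration=Release /p:Platform=x64 /t:inference_lib_dist
```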
@@ -82,8 +82,6 @@ Steps to install and build the inference library on Windows (run in the Windows Command Prompt)
│   ├── mkldnn
│   ├── mklml
│   ├── protobuf
-│   ├── snappy
-│   ├── snappystream
│   ├── xxhash
│   └── zlib
└── version.txt
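For orientation, version.txt might read roughly as follows. This is a hypothetical sample, not verbatim output; the exact fields depend on your build options:
```
GIT COMMIT ID: a380686c
WITH_MKL: ON
WITH_GPU: OFF
```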
@@ -130,7 +128,7 @@ version.txt records the version information of the inference library, including the Git Commit ID, ...
Enter the Paddle/paddle/fluid/inference/api/demo_ci directory, create a build directory and enter it, then use cmake to generate the VS2015 solution file.
The command is:
-`cmake .. -G "Visual Studio 14 2015 Win64" -DWITH_GPU=OFF -DWITH_MKL=ON -DWITH_STATIC_LIB=ON -DCMAKE_BUILD_TYPE=Release -DDEMO_NAME=simple_on_word2vec -DPADDLE_LIB=path_to_the_paddle_lib`
+`cmake .. -G "Visual Studio 14 2015" -A x64 -DWITH_GPU=OFF -DWITH_MKL=ON -DWITH_STATIC_LIB=ON -DCMAKE_BUILD_TYPE=Release -DDEMO_NAME=simple_on_word2vec -DPADDLE_LIB=path_to_the_paddle_lib`
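Once cmake has generated the solution, one way to build and run the demo from the command line is sketched below. The solution name `cpp_inference_demo.sln` and the `--dirname` flag are assumptions, so verify both against the generated files and the demo source:
```
# solution name and --dirname flag are assumptions; check the generated files
msbuild cpp_inference_demo.sln /p:Configuration=Release /p:Platform=x64
Release\simple_on_word2vec.exe --dirname=path_to_the_word2vec_model
```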
Note:
-Model Inference on Windows
+Install and Compile C++ Inference Library on Windows
===========================
-Pre-Built Inference Libraries
+Direct Download and Install
-------------
| Version | Inference Libraries (v1.6.0) |
|:---------|:-------------------|
-| cpu_avx_mkl | [fluid_inference.zip](https://paddle-inference-lib.bj.bcebos.com/1.5.1-win/cpu_mkl_avx/fluid_inference_install_dir.zip) |
-| cpu_avx_openblas | [fluid_inference.zip](https://paddle-inference-lib.bj.bcebos.com/1.5.1-win/cpu_open_avx/fluid_inference_install_dir.zip) |
-| cuda8.0_cudnn7_avx_mkl | [fluid_inference.zip](https://paddle-inference-lib.bj.bcebos.com/1.5.1-win/gpu_mkl_avx_8.0/fluid_inference_install_dir.zip) |
-| cuda8.0_cudnn7_avx_openblas | [fluid_inference.zip](https://paddle-inference-lib.bj.bcebos.com/1.5.1-win/gpu_open_avx_8.0/fluid_inference_install_dir.zip)|
-| cuda9.0_cudnn7_avx_mkl | [fluid_inference.zip](https://paddle-inference-lib.bj.bcebos.com/1.5.1-win/gpu_mkl_avx_9.0/fluid_inference_install_dir.zip) |
-| cuda9.0_cudnn7_avx_openblas | [fluid_inference.zip](https://paddle-inference-lib.bj.bcebos.com/1.5.1-win/gpu_open_avx_9.0/fluid_inference_install_dir.zip) |
+| cpu_avx_mkl | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.6.0/win-infer/mkl/cpu/fluid_inference_install_dir.zip) |
+| cpu_avx_openblas | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.6.0/win-infer/open/cpu/fluid_inference_install_dir.zip) |
+| cuda9.0_cudnn7_avx_openblas | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.6.0/win-infer/open/post97/fluid_inference_install_dir.zip) |
+| cuda9.0_cudnn7_avx_mkl | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.6.0/win-infer/mkl/post97/fluid_inference_install_dir.zip) |
+| cuda10.0_cudnn7_avx_mkl | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.6.0/win-infer/mkl/post107/fluid_inference_install_dir.zip) |
Build From Source Code
--------------
-Important Compilation Flags:
+Users can also compile the C++ inference library from the PaddlePaddle core code by specifying the following options at build time:
|Option | Value |
|:-------------|:-------------------|
|CMAKE_BUILD_TYPE | Release |
|ON_INFER | ON (recommended) |
|WITH_GPU | ON/OFF |
|WITH_MKL | ON/OFF |
|WITH_PYTHON | OFF |
**Paddle Windows Inference Library Compilation Steps**
@@ -43,13 +42,13 @@ Important Compilation Flags:
# change to the build directory
cd build
-cmake .. -G "Visual Studio 14 2015 Win64" -DCMAKE_BUILD_TYPE=Release -DWITH_MKL=OFF -DWITH_GPU=OFF -DON_INFER=ON -DWITH_PYTHON=OFF
+cmake .. -G "Visual Studio 14 2015" -A x64 -DCMAKE_BUILD_TYPE=Release -DWITH_MKL=OFF -DWITH_GPU=OFF -DON_INFER=ON -DWITH_PYTHON=OFF
# use -DWITH_GPU to choose between the CPU and GPU builds
# use -DWITH_MKL to select the math library: Intel MKL or OpenBLAS
# By default, Windows builds use /MT for the C runtime library. If you want to use /MD, use the command below.
# If you are not sure about the difference between the two, use the command above.
-cmake .. -G "Visual Studio 14 2015 Win64" -DCMAKE_BUILD_TYPE=Release -DWITH_MKL=OFF -DWITH_GPU=OFF -DON_INFER=ON -DWITH_PYTHON=OFF -DMSVC_STATIC_CRT=OFF
+cmake .. -G "Visual Studio 14 2015" -A x64 -DCMAKE_BUILD_TYPE=Release -DWITH_MKL=OFF -DWITH_GPU=OFF -DON_INFER=ON -DWITH_PYTHON=OFF -DMSVC_STATIC_CRT=OFF
```
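As an illustration, the corresponding GPU build with MKL simply flips those two flags. This is a sketch, not part of the original doc; it assumes a working CUDA/cuDNN installation:
```
# assumes CUDA and cuDNN are installed; all other flags as above
cmake .. -G "Visual Studio 14 2015" -A x64 -DCMAKE_BUILD_TYPE=Release -DWITH_MKL=ON -DWITH_GPU=ON -DON_INFER=ON -DWITH_PYTHON=OFF
```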
3. Open `paddle.sln` with Visual Studio 2015, choose `x64` for Solution Platforms and `Release` for Solution Configurations, then build the `inference_lib_dist` project in the Solution Explorer (right-click the project and click Build).
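Alternatively, the project can be built without opening the IDE. This is a sketch that assumes `msbuild` is available, e.g. in a VS2015 developer command prompt:
```
# assumes msbuild is on PATH, e.g. a VS2015 developer command prompt
msbuild paddle.sln /m /p:Configuration=Release /p:Platform=x64 /t:inference_lib_dist
```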
@@ -81,8 +80,6 @@ The inference library will be installed in `fluid_inference_install_dir`:
│   ├── mkldnn
│   ├── mklml
│   ├── protobuf
-│   ├── snappy
-│   ├── snappystream
│   ├── xxhash
│   └── zlib
└── version.txt
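As a rough illustration, version.txt could contain entries such as the following. This is a hypothetical sample; the actual fields depend on the build:
```
GIT COMMIT ID: a380686c
WITH_MKL: ON
WITH_GPU: OFF
```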
@@ -130,7 +127,7 @@ Decompress Paddle, Release and fluid_install_dir compressed package.
First enter Paddle/paddle/fluid/inference/api/demo_ci, then create and enter the build directory, and finally use cmake to generate the VS2015 solution file.
The command is as follows:
-`cmake .. -G "Visual Studio 14 2015 Win64" -DWITH_GPU=OFF -DWITH_MKL=OFF -DWITH_STATIC_LIB=ON -DCMAKE_BUILD_TYPE=Release -DDEMO_NAME=simple_on_word2vec -DPADDLE_LIB=path_to_the_patddle\paddle_fluid.lib`
+`cmake .. -G "Visual Studio 14 2015" -A x64 -DWITH_GPU=OFF -DWITH_MKL=OFF -DWITH_STATIC_LIB=ON -DCMAKE_BUILD_TYPE=Release -DDEMO_NAME=simple_on_word2vec -DPADDLE_LIB=path_to_the_paddle\paddle_fluid.lib`
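After cmake generates the solution, a command-line build and run might look like this. The solution name `cpp_inference_demo.sln` and the `--dirname` flag are assumptions to check against the generated files and the demo source:
```
# solution name and --dirname flag are assumptions; check the generated files
msbuild cpp_inference_demo.sln /p:Configuration=Release /p:Platform=x64
Release\simple_on_word2vec.exe --dirname=path_to_the_word2vec_model
```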
Note: