3. Open `paddle.sln` with Visual Studio 2015, choose `x64` for Solution Platforms and `Release` for Solution Configurations, then build the `inference_lib_dist` project in the Solution Explorer (right-click the project and click Build).
[Download model](http://paddle-inference-dist.bj.bcebos.com/word2vec.inference.model.tar.gz) and decompress it to the current directory. Run the command:
1. Turn on GLOG logging
1. Create and change to the build directory
```dos
# path is the directory where Paddle is downloaded
cd path\Paddle\paddle\fluid\inference\api\demo_ci
mkdir build
cd build
```
2. Run the CMake command. CMake can be [downloaded from the official site](https://cmake.org/download/) and added to the environment variables.
- Compile the inference demo with the CPU inference library
```dos
# path is the directory where you downloaded Paddle
# -DDEMO_NAME is the demo file to be built
# -DPADDLE_LIB is the path of fluid_install_dir, for example: -DPADDLE_LIB=D:\fluid_install_dir
cmake .. -G "Visual Studio 14 2015" -A x64 -T host=x64 -DWITH_GPU=OFF -DWITH_MKL=OFF -DWITH_STATIC_LIB=ON -DCMAKE_BUILD_TYPE=Release -DDEMO_NAME=simple_on_word2vec -DPADDLE_LIB=path_to_the_paddle_lib -DMSVC_STATIC_CRT=ON
```
- Compile the inference demo with the GPU inference library
```dos
cmake .. -G "Visual Studio 14 2015" -A x64 -T host=x64 -DWITH_GPU=ON -DWITH_MKL=ON -DWITH_STATIC_LIB=ON ^
  -DCMAKE_BUILD_TYPE=Release -DDEMO_NAME=simple_on_word2vec -DPADDLE_LIB=path_to_the_paddle_lib ^
  -DMSVC_STATIC_CRT=ON -DCUDA_LIB=path_to_cuda_lib -DCUDNN_LIB=path_to_cudnn_lib
```
3. Open `cpp_inference_demo.sln` with Visual Studio 2015, choose `x64` for Solution Platforms and `Release` for Solution Configurations, then build the `simple_on_word2vec` project in the Solution Explorer (right-click the project and click Build).
`set GLOG_v=100`
From the dependency packages provided, copy the OpenBLAS and model files under the `Release` directory into the `Release` directory generated by the build.
2. Start inference. `path` is the directory where you decompressed the model.
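For example, if the demo was built with `-DDEMO_NAME=simple_on_word2vec` as above, the run command would look like the following (the model path shown is a placeholder, not from the source):

```dos
# path is the directory where the model was decompressed
Release\simple_on_word2vec.exe path\word2vec.inference.model
```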
4. [Download model](http://paddle-inference-dist.bj.bcebos.com/word2vec.inference.model.tar.gz) and decompress it to the current directory. Run the command:
- When the model folder `model_dir` contains a model file and multiple parameter files, pass in the model folder path; the model file name defaults to `__model__`.
- When the model folder `model_dir` contains only one model file `__model__` and one parameter file `__params__`, pass in the model file path and the parameter file path.

`ZeroCopyTensor` is the input/output data structure of `AnalysisPredictor`.
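As a sketch of how these two model layouts map onto the `AnalysisConfig` API (the paths below are placeholders for illustration, not from the source):

```cpp
#include "paddle_inference_api.h"  // Paddle fluid inference API

int main() {
  paddle::AnalysisConfig config;

  // Case 1: model folder containing __model__ plus multiple parameter
  // files: pass the folder path; the model file name defaults to __model__.
  config.SetModel("path/to/model_dir");

  // Case 2: a single model file and a single combined parameter file:
  // pass both file paths explicitly.
  // config.SetModel("path/to/model_dir/__model__",
  //                 "path/to/model_dir/__params__");

  auto predictor = paddle::CreatePaddlePredictor(config);

  // ZeroCopyTensor handles for input/output are obtained from the predictor.
  auto input_names = predictor->GetInputNames();
  auto input_tensor = predictor->GetInputTensor(input_names[0]);
  return 0;
}
```

This sketch links against the inference library built above; it compiles only with the Paddle headers and libraries on the include/link paths, which is what the CMake commands in this section set up.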