3. Open `paddle.sln` using Visual Studio 2015, choose `x64` for Solution Platforms and `Release` for Solution Configurations, then build the `inference_lib_dist` project in the Solution Explorer (right-click the project and click Build).
3. Open `cpp_inference_demo.sln` using Visual Studio 2015, choose `x64` for Solution Platforms and `Release` for Solution Configurations, then build the `simple_on_word2vec` project in the Solution Explorer (right-click the project and click Build).
Set the build configuration to `Release`.
From the dependency packages provided, copy the openblas and model files under the `Release` directory into the `Release` directory generated by the build.
4. [Download the model](http://paddle-inference-dist.bj.bcebos.com/word2vec.inference.model.tar.gz) and decompress it to the current directory, then run the command:
```dos
# Open GLOG
set GLOG_v=100

# Start inference; path is the directory where you decompressed the model
Release\simple_on_word2vec.exe --dirname=.\word2vec.inference.model
```
This example uses AnalysisConfig to manage the AnalysisPredictor prediction configuration. The configuration method is as follows:
#### Create AnalysisConfig
``` c++
AnalysisConfig config;
```
**Note:** With ZeroCopyTensor, you must set `config->SwitchUseFeedFetchOps(false)` when creating the config.
``` c++
config->SwitchUseFeedFetchOps(false);  // Turn off the use of feed and fetch OPs; this must be set when using the ZeroCopy interface.
// config->EnableUseGpu(100 /*Set the GPU initial memory pool to 100MB*/, 0 /*Set GPU ID to 0*/);  // Uncomment to turn on GPU prediction
```
#### Set paths of models and parameters
- When there is a model file and multiple parameter files under the model folder `model_dir`, the model folder path is passed in, and the model file name defaults to `__model__` (see the sketch after this list).
- When there is only one model file `__model__` and one parameter file `__params__` in the model folder `model_dir`, the model file path and the parameter file path are passed in.
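A minimal sketch of the two cases, assuming the model is stored under `./model_dir` (the directory name here is illustrative):

``` c++
// Case 1: a model file (named __model__ by default) plus multiple parameter
// files under model_dir: pass in the folder path.
config->SetModel("./model_dir");

// Case 2: a single model file __model__ and a single parameter file __params__:
// pass in both file paths.
config->SetModel("./model_dir/__model__", "./model_dir/__params__");
```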
#### Use ZeroCopyTensor to manage input/output
ZeroCopyTensor is the input/output data structure of AnalysisPredictor.
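A hedged end-to-end sketch of running inference with ZeroCopyTensor, using the `CreatePaddlePredictor` API together with the config prepared above; the input shape and data below are placeholders, not the word2vec demo's real inputs:

``` c++
#include <vector>
#include "paddle_inference_api.h"

// Build the predictor from the AnalysisConfig prepared above.
auto predictor = paddle::CreatePaddlePredictor(config);

// Fill the input tensor; the shape and values here are illustrative only.
auto input_names = predictor->GetInputNames();
auto input_t = predictor->GetInputTensor(input_names[0]);
std::vector<int64_t> input_data = {1, 2, 3, 4};  // placeholder data
input_t->Reshape({4, 1});                        // placeholder shape
input_t->copy_from_cpu(input_data.data());

// Run inference; with SwitchUseFeedFetchOps(false), use ZeroCopyRun().
predictor->ZeroCopyRun();

// Copy the output tensor back to host memory.
auto output_names = predictor->GetOutputNames();
auto output_t = predictor->GetOutputTensor(output_names[0]);
std::vector<int> output_shape = output_t->shape();
int out_num = 1;
for (int d : output_shape) out_num *= d;
std::vector<float> out_data(out_num);
output_t->copy_to_cpu(out_data.data());
```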