MKLDNN: Fluid High level inference API does not account for MKL-DNN
Created by: jczaja
Recently the Inference High Level API was introduced. It currently provides no way to use MKL-DNN ops for inference.
To use MKL-DNN from programs based on the High Level API, a call enabling MKL-DNN needs to be added:
if (FLAGS_use_mkldnn) executor_->EnableMKLDNN(*inference_program_);
Around line 92 of: https://github.com/PaddlePaddle/Paddle/blob/f9f8fbaa5ec43e960c3054f8a76475d945cb678a/paddle/contrib/inference/paddle_inference_api_impl.cc#L88-L94
The other problem is that when Paddle is built with -DWITH_GPU=OFF, GFlags is not initialized in the High Level API at all, i.e. GFlags is only processed when the GPU is used: https://github.com/PaddlePaddle/Paddle/blob/f9f8fbaa5ec43e960c3054f8a76475d945cb678a/paddle/contrib/inference/paddle_inference_api_impl.cc#L279
I would appreciate help with adding FLAGS_use_mkldnn support so that the High Level API can take advantage of it.