[MKL-DNN] Further clean softmax mkl-dnn op
Created by: jczaja
The softmax op works on flattened dims of the Tensor. For this purpose, two input and two output tensors are created that share the same allocation but differ in dims.
This is not needed: since dims for MKL-DNN primitives are stored in the memory descriptor, we can keep just one input and one output tensor and use two different memory descriptors over them (a 2D one for the actual computation, and the other MD with the original dims to be registered on the output).