Created by: pzelazko-intel
Currently, the reordered convolution weights are not saved for subsequent iterations, so the reorder is repeated on every run. In this commit I'm saving the reordered weights for inference.
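A minimal sketch of the "reorder once, reuse afterwards" idea, under the assumption that the reordered data and its layout are stored back into the weights tensor; the names `Tensor`, `Layout`, `ReorderToMkldnn` and `PrepareConvWeights` are illustrative, not the actual PaddlePaddle API:

```cpp
#include <cassert>

enum class Layout { kNCHW, kMKLDNN };

struct Tensor {
  Layout layout = Layout::kNCHW;
  // ... data, dims, etc.
};

// Hypothetical reorder: converts the weight data and marks the new layout.
void ReorderToMkldnn(Tensor* weights) {
  // ... run the MKLDNN reorder primitive on the weight data ...
  weights->layout = Layout::kMKLDNN;
}

// Called on every inference iteration: the expensive reorder runs only the
// first time, because afterwards the weights already carry the MKLDNN layout.
void PrepareConvWeights(Tensor* weights) {
  if (weights->layout != Layout::kMKLDNN) {
    ReorderToMkldnn(weights);
  }
  assert(weights->layout == Layout::kMKLDNN);
}
```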
For training, saving these weights would lower the performance. The reason is that not all operators use MKLDNN layouts (e.g. the momentum update in training), so the cached MKLDNN-format weights would have to be reordered back for those operators, and this change would introduce more reorders overall.
To change the tensor format within an operator, I've added methods for mutable access to tensors from ExecutionContext. Moreover, I had to add an additional case for in-place tensor transformation. Without it, the weights would be transformed from the NCHW format to the MKLDNN one only locally, and the change of format would not be propagated to the next OP calls.
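A self-contained sketch of why mutable access matters here: ExecutionContext is the real framework class, but this stub and the `MutableInput`/`ReorderToMkldnnInPlace` names are assumptions for illustration, not the exact methods added in this commit.

```cpp
#include <string>
#include <unordered_map>

enum class Layout { kNCHW, kMKLDNN };
struct Tensor { Layout layout = Layout::kNCHW; /* data, dims, ... */ };

// Stub standing in for framework::ExecutionContext.
struct ExecutionContext {
  std::unordered_map<std::string, Tensor*> inputs;
  const Tensor* Input(const std::string& name) const { return inputs.at(name); }
  // Hypothetical mutable accessor so the kernel can rewrite the shared tensor.
  Tensor* MutableInput(const std::string& name) { return inputs.at(name); }
};

// Stand-in for the MKLDNN reorder applied to the tensor's own storage.
void ReorderToMkldnnInPlace(Tensor* t) { t->layout = Layout::kMKLDNN; }

// Local-only transformation: a copy is reordered, the shared tensor is not,
// so the next OP call still sees NCHW and has to reorder again.
void ConvKernelLocalReorder(const ExecutionContext& ctx) {
  Tensor local = *ctx.Input("Filter");
  ReorderToMkldnnInPlace(&local);  // change is lost when the kernel returns
}

// In-place transformation through the mutable accessor: the layout change is
// stored in the shared tensor and propagates to subsequent OP calls.
void ConvKernelInPlaceReorder(ExecutionContext& ctx) {
  ReorderToMkldnnInPlace(ctx.MutableInput("Filter"));
}
```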