Created by: wozna
This PR adds an INT8 transpose2 operator for MKL-DNN, used when the previous operator is quantized. There are 12 such patterns in mobilenet_ssd.
In the future, the best improvement would be to run transpose2 in INT8 only when it sits between two quantized operators. For example, mobilenet_ssd contains 12 chains of conv2d->transpose2->reshape2->concat. If reshape2 were also quantized, we would avoid the additional quantize/dequantize steps that can affect accuracy.
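To make the difference concrete, here is a minimal self-contained C++ sketch (not PaddlePaddle's actual pass API; all type and function names are hypothetical) contrasting the rule in this PR with the proposed future rule for deciding when transpose2 should run in INT8:

```cpp
#include <iostream>
#include <string>

struct Op {
  std::string type;
  bool quantized;  // whether this op already runs in INT8
};

// Rule in this PR: transpose2 runs in INT8 whenever the previous op is
// quantized, even if the next op is not.
bool TransposeInt8Current(const Op& prev) { return prev.quantized; }

// Proposed future rule: transpose2 runs in INT8 only when it sits between
// two quantized ops, so no extra quantize/dequantize pair is needed around it.
bool TransposeInt8Future(const Op& prev, const Op& next) {
  return prev.quantized && next.quantized;
}

int main() {
  // One of the 12 mobilenet_ssd chains: conv2d -> transpose2 -> reshape2 -> concat.
  Op conv2d{"conv2d", /*quantized=*/true};
  Op reshape2{"reshape2", /*quantized=*/false};  // not yet quantized

  std::cout << "current rule: transpose2 int8 = "
            << TransposeInt8Current(conv2d) << "\n";  // 1: extra dequantize needed before reshape2
  std::cout << "future rule:  transpose2 int8 = "
            << TransposeInt8Future(conv2d, reshape2) << "\n";  // 0: stays fp32, no extra quant steps
  return 0;
}
```

Under the future rule, once reshape2 gains INT8 support the whole chain would run quantized end to end, with no quantize/dequantize inserted in the middle.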