Created by: wozna
This PR enables quantization of the reshape2 operator. The int8 type is added not in the MKL-DNN operator implementation but in the global implementation.
Reshape and transpose ops are quantized only if the previous or the next operator is quantized. This way we avoid a situation where a single int8 operator sits between two fp32 operators.
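The neighbor check above can be sketched as follows. This is a hypothetical illustration, not the actual pass code; the op representation (a list of dicts with `type` and `quantized` fields) and the function name `should_quantize` are assumptions made for the example.

```python
# Hypothetical sketch of the quantization condition for reshape2/transpose2:
# quantize such an op only when its previous or next op is already quantized,
# so no isolated int8 op ends up between two fp32 ops.

def should_quantize(ops, i):
    """Return True if ops[i] (a reshape2/transpose2 op) should be int8.

    `ops` is a linear chain of ops, each a dict like
    {"type": "conv2d", "quantized": True}.
    """
    if ops[i]["type"] not in ("reshape2", "transpose2"):
        # Non-reshape/transpose ops keep whatever decision was already made.
        return ops[i]["quantized"]
    prev_quantized = i > 0 and ops[i - 1]["quantized"]
    next_quantized = i + 1 < len(ops) and ops[i + 1]["quantized"]
    return prev_quantized or next_quantized


# int8 conv feeding reshape2: reshape2 may be quantized.
chain_a = [
    {"type": "conv2d", "quantized": True},
    {"type": "reshape2", "quantized": False},
    {"type": "fc", "quantized": False},
]

# fp32 neighbors on both sides: reshape2 stays fp32.
chain_b = [
    {"type": "conv2d", "quantized": False},
    {"type": "reshape2", "quantized": False},
    {"type": "fc", "quantized": False},
]

print(should_quantize(chain_a, 1))  # True
print(should_quantize(chain_b, 1))  # False
```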