Add support to save INT8 model after mkldnn quantization passes
Created by: lidanqing-intel
Save INT8 model after mkldnn quantization passes: Currently, the models saved by quant2_mkldnn_pass.py are float32 models, whereas for most other devices the saved models are INT8. Please add support for saving INT8-type models.
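A minimal sketch of how this could look, assuming the existing Quant2Int8MkldnnPass from paddle.fluid.contrib.slim.quantization is applied to a loaded Quant (fake-quantized float32) model and the transformed graph is converted back to a Program before saving. The helper name `save_int8_model` is hypothetical, and the exact keyword arguments of the pass constructor are assumptions based on the fluid-era API:

```python
import paddle.fluid as fluid
from paddle.fluid import core
from paddle.fluid.framework import IrGraph
from paddle.fluid.contrib.slim.quantization import Quant2Int8MkldnnPass


# Hypothetical helper: load a Quant model, run the MKL-DNN INT8
# transformation pass, and save the resulting INT8 model to disk.
def save_int8_model(quant_model_path, int8_save_path, ops_to_quantize):
    place = fluid.CPUPlace()
    exe = fluid.Executor(place)
    scope = fluid.global_scope()
    with fluid.scope_guard(scope):
        # Load the fake-quantized float32 inference model.
        [program, feed_names, fetch_targets] = fluid.io.load_inference_model(
            quant_model_path, exe)

        # Wrap the program in an IrGraph and apply the pass, which
        # replaces fake-quant ops with real INT8 oneDNN (MKL-DNN) ops.
        graph = IrGraph(core.Graph(program.desc), for_test=True)
        mkldnn_pass = Quant2Int8MkldnnPass(  # constructor args are assumed
            ops_to_quantize,
            _scope=scope,
            _place=place,
            _core=core,
            _debug=False)
        graph = mkldnn_pass.apply(graph)

        # Convert the transformed graph back to a Program and save it,
        # so the stored model already contains the INT8 operators.
        int8_program = graph.to_program()
        fluid.io.save_inference_model(int8_save_path, feed_names,
                                      fetch_targets, exe, int8_program)


# Example usage (paths and op set are illustrative):
# save_int8_model('quant_resnet50', 'int8_resnet50', {'conv2d', 'pool2d'})
```

With something like this, the pass output would be persisted as an INT8 model directly, instead of requiring the transformation to be re-run at every inference load.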