Created by: bingyanghuang
This PR converts the IrGraph generated by QuantizationFreezePass into an MKL-DNN-supported, INT8-runnable IrGraph. The following transformations are performed in this pass:
1. Convert the INT8-range weights stored as float (generated by QuantizationFreezePass) back to FP32-range weights of float dtype, using the corresponding scales.
2. Create a new conv2d op with the converted weights, link its output to the output of fake_dequantize_abs_max, and set conv2d's attribute "force_fp32_output" to true.
3. Transform fake_quantize_xx ops into quantize ops.
4. Remove the fake_dequantize_abs_max ops.
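Step 1 can be sketched in a few lines of NumPy. This is a minimal illustration, not the pass implementation: the tensor values and the per-tensor scale are hypothetical, and it assumes the abs_max convention where the stored values lie in the integer range [-127, 127] and dequantize as w * scale / 127.

```python
import numpy as np

# Hypothetical weights as produced by QuantizationFreezePass:
# integer values in [-127, 127] stored in a float32 tensor.
int8_range_weights = np.array([[-127.0, 64.0], [0.0, 127.0]], dtype=np.float32)

# Per-tensor dequantization scale (assumed value, for illustration only).
scale = 0.05

# Step 1 of the pass: map the INT8-range values back to the FP32 range.
fp32_weights = int8_range_weights * scale / 127.0

print(fp32_weights.dtype)  # float32
```

The dtype never changes; only the value range does, which is why check 2 below inspects the values (no longer integers) rather than the tensor type.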
The pass has been verified locally by confirming that:
1. conv2d's output is correctly linked to the fake_dequantize op's original output.
2. conv2d's weights have been converted back to the FP32 range; they are no longer integer values.
3. The graph was checked locally to confirm that the op types were transformed as expected.
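Check 3 above can be approximated without the framework by asserting properties of the op-type list of the transformed graph. The list and the helper below are hypothetical examples, not part of the actual pass:

```python
# Hypothetical op types of a graph after the pass has run.
op_types_after_pass = ["quantize", "conv2d", "conv2d"]

def check_transformed(op_types):
    # No fake_quantize_* / fake_dequantize_* ops should remain (steps 3-4).
    assert not any(t.startswith(("fake_quantize", "fake_dequantize"))
                   for t in op_types), "fake ops were not removed"
    # fake_quantize_* ops should have been replaced by quantize ops (step 3).
    assert "quantize" in op_types, "no quantize op found"
    return True

print(check_transformed(op_types_after_pass))  # True
```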