PaddleSlim and Paddle inference (Intel CPU) support all common quantized models
Created by: juncaipeng
We usually use PaddleSlim to quantize an FP32 model to an INT8 model, and then run the INT8 model in the inference stage.
However, there are some problems between PaddleSlim and Paddle inference (Intel CPU):
- Some quantized models do not provide the output scales needed by Paddle inference (Intel CPU). In detail, the quantized model only contains the input scales of some ops (conv2d, depthwise_conv2d, mul, elementwise_add, pool2d and so on), but the supported quantized ops (conv2d, depthwise_conv2d, mul) also need an output scale. The output scale of an op is normally obtained from the input scale of the next op, so if a quantized op has no next op carrying an input scale, an error is raised (see the diagnostic sketch after this list).
- Paddle inference (Intel CPU) does not support the `fake_channel_wise_dequantize_max_abs` op.
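To make the first problem concrete, here is a rough diagnostic sketch (not part of PaddleSlim or Paddle) that walks a quantized inference program and reports conv2d/depthwise_conv2d/mul ops whose output is not consumed by any op carrying an input scale, i.e. the cases where the Intel CPU pass cannot derive an output scale from the next op. The model path and the set of scale-carrying op types are assumptions for illustration.

```python
import paddle.fluid as fluid

QUANTIZED_OPS = {"conv2d", "depthwise_conv2d", "mul"}
# Ops assumed to carry an input scale in a PaddleSlim-quantized model
# (fake quantize ops); the exact set may differ per quantization strategy.
SCALE_CARRIER_OPS = {"fake_quantize_moving_average_abs_max",
                     "fake_quantize_range_abs_max"}

def find_ops_without_output_scale(program):
    """Return quantized ops whose output scale cannot be taken from a next op."""
    block = program.global_block()
    missing = []
    for op in block.ops:
        if op.type not in QUANTIZED_OPS:
            continue
        outputs = set(op.output_arg_names)
        # Is any of this op's outputs fed into an op that provides an input scale?
        has_scale = any(
            consumer.type in SCALE_CARRIER_OPS
            and outputs & set(consumer.input_arg_names)
            for consumer in block.ops
        )
        if not has_scale:
            missing.append((op.type, sorted(outputs)))
    return missing

if __name__ == "__main__":
    exe = fluid.Executor(fluid.CPUPlace())
    # "quant_model" is a placeholder path to a saved quantized inference model.
    program, feed_names, fetch_targets = fluid.io.load_inference_model("quant_model", exe)
    for op_type, outs in find_ops_without_output_scale(program):
        print("no output scale available for", op_type, outs)
```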
Therefore, PaddleSlim will provide the output threshold for some ops and save it in the op's attributes.
In the inference stage, we can look up all ops, gather the output thresholds, and then transform them into output scales for the quantized ops.
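Below is a minimal sketch of that inference-side step, assuming the threshold is stored in an op attribute named `out_threshold` and that a symmetric INT8 mapping (scale = threshold / 127) is used; both the attribute name and the formula are assumptions here, not the final implementation.

```python
import paddle.fluid as fluid

def collect_output_scales(program, attr_name="out_threshold"):
    """Map each op output variable name to a scale derived from the saved threshold."""
    scales = {}
    for op in program.global_block().ops:
        if not op.has_attr(attr_name):
            continue
        threshold = op.attr(attr_name)
        # Assumed symmetric INT8 quantization: scale = max_abs_value / 127.
        scale = float(threshold) / 127.0
        for var_name in op.output_arg_names:
            scales[var_name] = scale
    return scales
```

The Intel CPU quantization pass could then look up `scales[var]` when it needs the output scale of a conv2d, depthwise_conv2d, or mul op, instead of requiring a fake quantize op to follow it.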
@lidanqing-intel