diff --git a/docs/en/extension/paddle_quantization.md b/docs/en/extension/paddle_quantization.md
index 2054a031d7553658e937d10c96823980deeca8d4..ce0eeed679dcf6493af2a85478a2b587fb62b4ca 100644
--- a/docs/en/extension/paddle_quantization.md
+++ b/docs/en/extension/paddle_quantization.md
@@ -1,7 +1,13 @@
 # Model Quantifization
 
-模型量化是 [PaddleSlim](https://github.com/PaddlePaddle/PaddleSlim) 的特色功能之一,支持动态和静态两种量化训练方式,对权重全局量化和 Channel-Wise 量化,同时以兼容 Paddle-Lite 的格式保存模型。
-[PaddleClas](https://github.com/PaddlePaddle/PaddleClas) 使用该量化工具,量化了78.9%的mobilenet_v3_large_x1_0的蒸馏模型, 量化后SD855上预测速度从19.308ms加速到14.395ms,存储大小从21M减小到10M, top1识别准确率75.9%。
-具体的训练方法可以参见 [PaddleSlim 量化训练](https://paddlepaddle.github.io/PaddleSlim/quick_start/quant_aware_tutorial.html)。
-
 Int8 quantization is one of the key features in [PaddleSlim](https://github.com/PaddlePaddle/PaddleSlim).
+It supports two quantization-aware training strategies, **dynamic** and **static**,
+as well as layer-wise and channel-wise quantization of weights,
+and saves models in a Paddle-Lite-compatible format for deployment.
+
+Using this toolkit, [PaddleClas](https://github.com/PaddlePaddle/PaddleClas) quantized the distilled mobilenet_v3_large_x1_0 model, whose top-1 accuracy is 78.9%.
+After quantization, the prediction time on SD855 is reduced from 19.308 ms to 14.395 ms,
+the model size shrinks from 21 M to 10 M,
+and the top-1 accuracy is 75.9%.
+For detailed training steps, please refer to [PaddleSlim quant aware](https://paddlepaddle.github.io/PaddleSlim/quick_start/quant_aware_tutorial.html).
+