From 6fcbfdc785ee986f10f1b5c267c9dad6c5461c06 Mon Sep 17 00:00:00 2001
From: WuHaobo
Date: Wed, 10 Jun 2020 08:39:57 +0800
Subject: [PATCH] docs/en/extension/paddle_quantization.md

---
 docs/en/extension/paddle_quantization.md | 7 +++++++
 1 file changed, 7 insertions(+)
 create mode 100644 docs/en/extension/paddle_quantization.md

diff --git a/docs/en/extension/paddle_quantization.md b/docs/en/extension/paddle_quantization.md
new file mode 100644
index 00000000..2054a031
--- /dev/null
+++ b/docs/en/extension/paddle_quantization.md
@@ -0,0 +1,7 @@
+# Model Quantization
+
+Model quantization is one of the key features of [PaddleSlim](https://github.com/PaddlePaddle/PaddleSlim). It supports both dynamic and static quantization-aware training, global weight quantization as well as channel-wise weight quantization, and saves the quantized model in a Paddle-Lite-compatible format.
+[PaddleClas](https://github.com/PaddlePaddle/PaddleClas) uses this tool to quantize the distilled mobilenet_v3_large_x1_0 model, whose top-1 accuracy is 78.9%. After quantization, the inference time on SD855 is reduced from 19.308 ms to 14.395 ms, the model size shrinks from 21 M to 10 M, and the top-1 accuracy is 75.9%.
+For the detailed training procedure, please refer to [PaddleSlim quantization-aware training](https://paddlepaddle.github.io/PaddleSlim/quick_start/quant_aware_tutorial.html).
+
+Int8 quantization is one of the key features in [PaddleSlim](https://github.com/PaddlePaddle/PaddleSlim).
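The channel-wise int8 weight quantization described in the added doc can be illustrated conceptually. The sketch below is a minimal NumPy fake-quantization example, not the PaddleSlim implementation: each output channel gets its own scale mapping the channel's maximum weight magnitude to 127, which is why channel-wise quantization typically loses less accuracy than a single global scale.

```python
import numpy as np

def quantize_channelwise_int8(weights):
    """Fake-quantize a float weight tensor to int8, one scale per output channel.

    weights: float array of shape (out_channels, ...).
    Returns (q, scales) such that q * scales[:, None, ...] approximates weights.
    """
    flat = weights.reshape(weights.shape[0], -1)
    # Per-channel scale: map the channel's max magnitude to the int8 limit 127.
    scales = np.abs(flat).max(axis=1) / 127.0
    scales = np.where(scales == 0, 1.0, scales)  # guard all-zero channels
    q = np.clip(np.round(flat / scales[:, None]), -128, 127).astype(np.int8)
    return q.reshape(weights.shape), scales

def dequantize_channelwise_int8(q, scales):
    """Recover approximate float weights from int8 values and per-channel scales."""
    flat = q.reshape(q.shape[0], -1).astype(np.float32)
    return (flat * scales[:, None]).reshape(q.shape)

# Example: a small conv-like weight tensor (8 output channels of 3x3x3 filters).
rng = np.random.default_rng(0)
w = rng.standard_normal((8, 3, 3, 3)).astype(np.float32)
q, s = quantize_channelwise_int8(w)
w_hat = dequantize_channelwise_int8(q, s)
# Rounding error per element is bounded by half of that channel's scale.
max_err = np.abs(w - w_hat).max()
```

Storing `q` (int8) plus one float scale per channel instead of float32 weights is what yields the roughly 2x size reduction quoted above (21 M to 10 M); the speedup comes from the int8 kernels in Paddle-Lite.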