Demo/Documentation for model compression
Created by: prakharg24
I want to compress MobileNet-SSD for object detection, and would like to try the following:
- 8-bit Quantization
- Knowledge Distillation
- Channel Pruning
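To clarify what I mean by the first and third techniques, here is a minimal, framework-agnostic NumPy sketch (not PaddlePaddle code; the function names and the symmetric per-tensor scheme are just my illustration) of 8-bit weight quantization and L1-norm channel pruning:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor 8-bit quantization: map floats to int8
    using a single scale derived from the max absolute value."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return q.astype(np.float32) * scale

def prune_channels(w, ratio):
    """Channel pruning by L1 norm: drop the fraction `ratio` of output
    channels (axis 0 of a conv weight, shape [out, in, kh, kw]) with
    the smallest L1 norms."""
    norms = np.abs(w).reshape(w.shape[0], -1).sum(axis=1)
    n_keep = w.shape[0] - int(w.shape[0] * ratio)
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])
    return w[keep]

rng = np.random.default_rng(0)
w = rng.standard_normal((32, 16, 3, 3)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print("max quantization error:", np.abs(w - w_hat).max())

w_pruned = prune_channels(w, ratio=0.5)
print("pruned shape:", w_pruned.shape)  # half the output channels removed
```

What I'm hoping for is the PaddlePaddle-native equivalent of the above, applied to the trained SSD program/graph rather than raw arrays.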
I believe PaddlePaddle has built-in API functions for some of the above, but I haven't found any example that I can directly replicate to perform the compression.
I have the following MobileNet-SSD pipeline set up: https://github.com/PaddlePaddle/models/tree/develop/PaddleCV/ssd
Could someone briefly explain how to use the built-in API functions for model compression in PaddlePaddle, or point me to a document that covers this?
Thank You