Created by: bingyanghuang
This is the first version that runs the INT8 v2 solution in the current slim. From the user's perspective:
- The user needs to specify the FP32 model path
- The user needs to create a config.yaml file to specify the save path for the model and the desired INT8 quantization strategies (a hedged sketch follows this list)
- The user needs to provide the ImageNet validation dataset location (currently hard-coded)
- After running, the user gets a converted INT8 model
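
For reference, here is a minimal sketch of what such a config.yaml could look like. Every key name and value below is an assumption for illustration (the strategy class name is guessed from the test file name `test_infer_quant_strategy.py`), not the final schema:

```yaml
# Hypothetical config.yaml sketch -- all keys below are assumptions,
# not the final schema.
strategies:
    int8_strategy:
        class: 'InferQuantStrategy'        # assumed name, based on the test file name
        int8_model_save_path: './output/int8'  # assumed key: where the converted INT8 model is saved
compressor:
    strategies:
        - 'int8_strategy'
```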
Supported functionalities:
- Run the INT8 v2 solution from slim. Command line:
python python/paddle/fluid/contrib/slim/tests/test_infer_quant_strategy.py --infer_model_path=/path/to/Paddle/build/third_party/inference_demo/int8v2/resnet50/model --device=CPU
- Save the parameters after running the test (a load/run sketch follows below)
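
To sanity-check the saved result, the following is a minimal, hedged Python sketch that loads the converted INT8 model with the standard fluid inference API and runs one dummy batch. The model directory and the ResNet-50-shaped input are placeholders, not paths produced by the test itself:

```python
# Hedged sketch: load the converted INT8 model and run one dummy batch.
# The save directory and input shape below are assumptions for illustration.
import numpy as np
import paddle.fluid as fluid

place = fluid.CPUPlace()  # the INT8 v2 solution targets CPU (--device=CPU)
exe = fluid.Executor(place)

# Load the converted INT8 model saved by the test run; substitute the
# save path configured in config.yaml.
[program, feed_names, fetch_targets] = fluid.io.load_inference_model(
    dirname='/path/to/saved/int8/model', executor=exe)

# Run one dummy ImageNet-shaped batch (assumed 1x3x224x224 for resnet50).
image = np.random.random((1, 3, 224, 224)).astype('float32')
results = exe.run(program,
                  feed={feed_names[0]: image},
                  fetch_list=fetch_targets)
print(results[0].shape)
```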
There are some hard-coded values in this version; they will be removed once the functionality is more stable (e.g., the Quantizer Config will become independent). This is just for sharing a feasibility plan. @luotao1 @wzzju @wojtuss