# Quantization with ONNXRUNTIME and Neural Compressor

[ONNXRUNTIME](https://github.com/microsoft/onnxruntime) and [Neural Compressor](https://github.com/intel/neural-compressor) are used for quantization in the Zoo.
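
For context, the kind of workflow `quantize.py` automates is static (post-training) quantization, which calibrates activation ranges on a set of sample images. The sketch below shows this done directly with ONNX Runtime's quantization API; the model paths, input name, and preprocessing are placeholder assumptions for illustration, not values from the Zoo.
```python
# Minimal sketch of static quantization with ONNX Runtime (illustrative only).
import os

import cv2
import numpy as np
from onnxruntime.quantization import CalibrationDataReader, QuantType, quantize_static


class FolderCalibrationReader(CalibrationDataReader):
    '''Feeds preprocessed calibration images to the quantizer one at a time.'''

    def __init__(self, image_dir, input_name='input'):  # input name is an assumption
        self.samples = iter(
            {input_name: self._load(os.path.join(image_dir, f))}
            for f in os.listdir(image_dir)
        )

    @staticmethod
    def _load(path):
        # Placeholder preprocessing; match the real model's input size and layout.
        img = cv2.resize(cv2.imread(path), (224, 224)).astype(np.float32)
        return img.transpose(2, 0, 1)[np.newaxis, ...]  # HWC -> NCHW, batch of 1

    def get_next(self):
        return next(self.samples, None)


quantize_static('model_fp32.onnx',    # float32 input model (assumed path)
                'model_int8.onnx',    # quantized output model
                FolderCalibrationReader('/path/to/calibration/images'),
                per_channel=False,
                activation_type=QuantType.QInt8,
                weight_type=QuantType.QInt8)
```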

Install the dependencies before running quantization:
```shell
pip install -r requirements.txt
```

## Usage

Quantize all models in the Zoo:
```shell
python quantize.py
```

Quantize one of the models in the Zoo:
```shell
# python quantize.py <key_in_models>
python quantize.py yunet
```

Customize quantization configs:
```python
# add the model to the `models` dict in quantize.py
models = dict(
    # ...
    model1=Quantize(model_path='/path/to/model1.onnx',
                    calibration_image_dir='/path/to/images',
                    transforms=Compose([''' transforms ''']), # transforms can be found in transforms.py
                    per_channel=False, # set False to quantize in per-tensor style
                    act_type='int8',   # available types: 'int8', 'uint8'
                    wt_type='int8'     # available types: 'int8', 'uint8'
    )
)
# then quantize the added model from the shell:
#   python quantize.py model1
```
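
The `''' transforms '''` placeholder above is filled with preprocessing steps from `transforms.py`. If an extra step is needed, a custom transform can be added; the sketch below assumes `Compose` chains callables that each take and return a NumPy image (check `transforms.py` for the actual interface), and the class name is hypothetical.
```python
# Hypothetical custom transform -- assumes Compose chains callables that each
# take and return a NumPy image array; verify the actual interface in transforms.py.
import cv2
import numpy as np


class HypotheticalGrayscale:
    '''Example preprocessing step: convert BGR calibration images to grayscale.'''

    def __call__(self, img):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        return gray[..., np.newaxis]  # keep an explicit channel dimension


# used in place of the ''' transforms ''' placeholder, e.g.:
# transforms=Compose([HypotheticalGrayscale()])
```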