Commit 207ab6cf authored by huangxu96

Added save_quant_model function

Parent 1a2575bf
@@ -134,3 +134,25 @@ def dy_quant_aware(model,
    imperative_qat.quantize(model)
    return model


def save_quant_model(model,
                     path,
                     input_shape,
                     input_dtype=['float32']):
    # Quantization config: abs_max for weights, moving-average abs_max
    # for activations, applied to the listed layer types.
    imperative_qat = ImperativeQuantAware(
        weight_quantize_type='abs_max',
        activation_quantize_type='moving_average_abs_max',
        quantizable_layer_type=[
            'Conv2D', 'Linear', 'ReLU', 'Pool2D', 'LeakyReLU'
        ])
    # Forward the caller's input_dtype instead of hardcoding ['float32'].
    imperative_qat.save_quantized_model(
        dirname=path,
        model=model,
        input_shape=input_shape,
        input_dtype=input_dtype,
        feed=[0],
        fetch=[0])
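
A minimal usage sketch of the new function. This is illustrative only: MyNet is a hypothetical dygraph Layer standing in for a real model, the output path and input shape are placeholders, and it assumes ImperativeQuantAware is already imported at the top of this file (as the existing dy_quant_aware requires).

import paddle.fluid as fluid

with fluid.dygraph.guard():
    # MyNet is a placeholder dygraph Layer, not part of this commit.
    model = MyNet()
    # ... quantization-aware training (e.g. via dy_quant_aware) elided ...
    # Export the quantized inference model to ./quant_model, declaring a
    # single float32 input; the shape format shown here is an assumption.
    save_quant_model(
        model,
        path='./quant_model',
        input_shape=[(3, 224, 224)],
        input_dtype=['float32'])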