Unverified commit 09267010, authored by guofei, committed by GitHub

Fix the error of save_quantized_model when weight_preprocess is PACT (#680)

Parent dae3c307
@@ -208,6 +208,7 @@ class QAT(object):
             act_quantize_layer=self.act_quantize)

     def quantize(self, model):
+        self._model = copy.deepcopy(model)
         self.imperative_qat.quantize(model)

     def save_quantized_model(self, model, path, input_spec=None):
@@ -230,8 +231,8 @@ class QAT(object):
         with paddle.utils.unique_name.guard():
             if hasattr(model, "_layers"):
                 model = model._layers
-            model.__init__()
+            model = self._model
             self.imperative_qat.quantize(model)
             state_dict = model.state_dict()
             model.set_state_dict(state_dict)
         return model
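The core of the fix is replacing the `model.__init__()` re-initialization with a deep copy of the model taken in `quantize()`, so that `save_quantized_model` can restore a pristine network and re-quantize it even when a weight preprocessor such as PACT has inserted extra state that `__init__()` would not recreate. A minimal, Paddle-free sketch of that pattern (all class and attribute names here are illustrative stand-ins, not PaddleSlim's API):

```python
import copy

class Model:
    """Stand-in for a network; PACT-style preprocessing adds extra state later."""
    def __init__(self):
        self.layers = ["conv", "fc"]
        self.pact_alpha = None  # populated only when PACT preprocessing runs

class QATHelper:
    """Keeps a pristine snapshot so saving does not rely on re-running __init__."""
    def quantize(self, model):
        # Snapshot the model BEFORE quantization mutates it (the fix).
        self._model = copy.deepcopy(model)
        model.pact_alpha = 0.9  # simulate PACT inserting clipping state
        return model

    def save_quantized_model(self, model):
        # Old approach: model.__init__() -- fails to rebuild PACT-added state.
        # Fixed approach: start from the snapshot and re-apply quantization.
        restored = self._model
        restored.pact_alpha = 0.9
        return restored

helper = QATHelper()
m = Model()
helper.quantize(m)
saved = helper.save_quantized_model(m)
print(saved.pact_alpha)  # 0.9
print(saved is not m)    # True: the saved model came from the snapshot
```

The design choice mirrors the diff above: storing `self._model` once makes saving independent of whether the constructor can reproduce preprocessor-injected parameters.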