Commit c07e3bb1 authored by lidanqing, committed by lidanqing-intel

[Bug fix] Do not quantize weight Y when matmul inputs X and Y are both outputs of other ops (#43297)

* For matmul ops where both X and Y are outputs of other ops, do not dequantize Y.

* fix CI format

* address review comments
Parent f8681ffc
@@ -354,10 +354,9 @@ bool QuantDequantMkldnnPass::IsInt8Weight(
   auto* op_desc = op_node->Op();
   auto var_name = op_desc->Input(weight_name)[0];
   auto* var = scope->FindVar(var_name);
-  PADDLE_ENFORCE_NOT_NULL(
-      var, platform::errors::NotFound(
-               "The input persistable [%s] var of [%s] op is not found.",
-               var_name, op_desc->Type()));
+  if (var == nullptr) {
+    return false;
+  }
   auto* weight_tensor = var->GetMutable<LoDTensor>();
   auto* weight_data = weight_tensor->data<float>();
   bool is_int8 = true;
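The guard's effect, as a minimal standalone sketch: when a matmul input Y is produced by another op at runtime, there is no persistable variable for it in the scope, so FindVar returns nullptr and IsInt8Weight now reports false instead of raising NotFound; the pass then leaves Y unquantized. The Scope and Variable types below are simplified hypothetical stand-ins for illustration, not Paddle's actual classes.

#include <string>
#include <unordered_map>

// Hypothetical stand-ins for Paddle's Scope/Variable, illustration only.
struct Variable {};

struct Scope {
  std::unordered_map<std::string, Variable*> vars;
  Variable* FindVar(const std::string& name) const {
    auto it = vars.find(name);
    return it == vars.end() ? nullptr : it->second;
  }
};

// Returns false for a matmul input that is another op's runtime output:
// such an input has no persistable var in the scope, so it must not be
// treated as an int8 weight (the old PADDLE_ENFORCE_NOT_NULL crashed here).
bool IsInt8WeightSketch(const Scope& scope, const std::string& var_name) {
  Variable* var = scope.FindVar(var_name);
  if (var == nullptr) {
    return false;  // Y comes from another op; skip weight dequantization.
  }
  // ... the real pass would inspect the tensor's float values here ...
  return true;
}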