Commit b2f05bf8 authored by Megvii Engine Team

fix(mge/module): fix quantized fold weight value range limit for fused conv/bn modules

GitOrigin-RevId: 007c2f13b68575fcf63324f9e87e9d69bab83fc1
Parent 0df74604
@@ -62,6 +62,7 @@ class _ConvBnActivation2d(Float._ConvBnActivation2d, QATModule):
             self.conv.groups, -1, 1, 1, 1
         )
+        w_fold = self.apply_quant_weight(w_fold)
         # b_fold = gamma * (b - bn_mean) / bn_std + beta
         b_fold = beta + gamma * (conv_bias - bn_mean) * bn_istd
         return w_fold, b_fold
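The folding math behind this diff can be sketched outside MegEngine. The sketch below is a minimal NumPy version of conv/BN weight folding, using the same formula as the diff's comment (`b_fold = gamma * (b - bn_mean) / bn_std + beta`); the function name `fold_conv_bn` and its exact signature are hypothetical, not MegEngine API, and it omits the quantization step (`apply_quant_weight`) that this commit applies to `w_fold`.

```python
import numpy as np

def fold_conv_bn(w, b, gamma, beta, bn_mean, bn_var, eps=1e-5):
    # Hypothetical helper: fold BatchNorm statistics into a conv's
    # weight and bias so the fused op computes conv followed by BN.
    #   w: conv weight, shape (out_channels, in_channels, kh, kw)
    #   b: conv bias, shape (out_channels,)
    #   gamma, beta, bn_mean, bn_var: per-channel BN parameters/statistics
    bn_istd = 1.0 / np.sqrt(bn_var + eps)
    scale = gamma * bn_istd                  # per-output-channel scale
    w_fold = w * scale.reshape(-1, 1, 1, 1)  # broadcast over (in, kh, kw)
    # b_fold = gamma * (b - bn_mean) / bn_std + beta, as in the diff
    b_fold = beta + (b - bn_mean) * scale
    return w_fold, b_fold
```

For a 1x1 conv on a single pixel this reduces to a matrix-vector product, which makes it easy to check that the folded parameters reproduce conv-then-BN exactly.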