conf: allow fuzz in XML with cur balloon > max
Commit 1b1402b9 introduced a regression. Because older libvirt versions silently rounded memory up (until the previous patch) but populated current memory by querying the guest, it was possible for dumpxml to show cur > max by the amount of the rounding. For example, if a user requested 1048570 KiB memory (just shy of 1 GiB), the qemu driver would actually run with 1048576 KiB, and libvirt 0.9.10 would output a current value 6 KiB larger than the maximum.

Situations where this could have an impact include, but are not limited to: migration from old to new libvirt, managedsave in old libvirt followed by start in new libvirt, and snapshot creation in old libvirt followed by revert in new libvirt. Without this patch, the new libvirt would reject the VM because of the rounding discrepancy.

Fix things by adding a fuzz factor and silently clamping current down to maximum in that case, rather than failing to reparse the XML of an existing VM. From a practical standpoint, this has no user impact: 'virsh dumpxml' will continue to query the running guest rather than rely on the incoming XML, so it will see the correct current value; and even if clamping down occurs during parsing, it will be by at most the fuzz factor of a megabyte alignment, and the value is rounded back up when passed to the hypervisor.

Meanwhile, we continue to reject cur > max if the difference exceeds the fuzz factor of the nearest megabyte. This is not a real change in behavior: with 0.9.10, even though the parser allowed it, we would later reject it at the qemu layer, so rejecting it in the parser just moves error detection to a nicer place.

* src/conf/domain_conf.c (virDomainDefParseXML): Don't reject existing XML.

Based on a report by Zhou Peng.
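For illustration, here is a minimal standalone sketch of the clamping rule described above. The names FUZZ_KIB and clamp_cur_balloon are hypothetical and this is not the code added to virDomainDefParseXML; it only demonstrates the "clamp within one MiB, reject beyond" behavior, with sizes in KiB as in the domain XML:

    #include <stdio.h>
    #include <stdbool.h>

    /* Hypothetical sketch, not the actual libvirt implementation.
     * Fuzz is one MiB (1024 KiB), the alignment older qemu drivers
     * silently rounded memory up to. */
    #define FUZZ_KIB 1024ULL

    static bool
    clamp_cur_balloon(unsigned long long max_kib, unsigned long long *cur_kib)
    {
        if (*cur_kib <= max_kib)
            return true;                /* normal case: cur <= max */
        if (*cur_kib - max_kib > FUZZ_KIB)
            return false;               /* beyond the fuzz: reject as before */
        *cur_kib = max_kib;             /* within the fuzz: silently clamp */
        return true;
    }

    int
    main(void)
    {
        /* XML produced by libvirt 0.9.10 for a request of 1048570 KiB:
         * max stayed at 1048570, but the guest really ran with 1048576. */
        unsigned long long max = 1048570, cur = 1048576;

        if (clamp_cur_balloon(max, &cur))
            printf("accepted, current clamped to %llu KiB\n", cur);
        else
            printf("rejected: current exceeds maximum by more than 1 MiB\n");
        return 0;
    }

With these inputs the difference is 6 KiB, well under the fuzz, so the value is clamped and the VM is accepted; a difference larger than 1024 KiB would still be rejected, as it was at the qemu layer before this patch.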