1. 10 December 2014, 1 commit
    • CVE-2014-8131: Fix possible deadlock and segfault in qemuConnectGetAllDomainStats() · 5d8bee6d
      Authored by Martin Kletzander
      When a user doesn't have read access to one of the domains he
      requested, the for loop could exit abruptly or continue and overwrite
      a pointer that pointed to a locked object.
      
      This patch fixes two issues at once.  One is that domflags might have
      had QEMU_DOMAIN_STATS_HAVE_JOB set even when no job was started (this
      is fixed by doing domflags |= QEMU_DOMAIN_STATS_HAVE_JOB only when the
      job was acquired, and by clearing domflags at the start of every loop
      iteration).  The second is that the domain was kept locked when
      virConnectGetAllDomainStatsCheckACL() failed and the loop continued.
      Adding a simple virObjectUnlock() and clearing the pointer ought to do.
      Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
      (cherry picked from commit 57023c0a)
      Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
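The two fixes can be sketched as a simplified C model of the collection loop (the `Dom` struct and `collect_stats` are hypothetical stand-ins, not libvirt's actual code): `domflags` is reset at the top of each iteration, and a domain that fails the ACL check is unlocked and its pointer cleared before continuing.

```c
#include <assert.h>
#include <stddef.h>

#define HAVE_JOB 0x1  /* stand-in for QEMU_DOMAIN_STATS_HAVE_JOB */

typedef struct {
    int locked;    /* models virObjectLock()/virObjectUnlock() */
    int allowed;   /* does this domain pass the ACL check? */
} Dom;

/* Walk all domains; return how many were actually queried. */
static int collect_stats(Dom *doms, size_t ndoms)
{
    Dom *dom = NULL;
    int queried = 0;
    size_t i;

    for (i = 0; i < ndoms; i++) {
        unsigned int domflags = 0;  /* fix 1: reset flags every iteration */

        dom = &doms[i];
        dom->locked = 1;

        if (!dom->allowed) {
            dom->locked = 0;        /* fix 2: unlock before continuing... */
            dom = NULL;             /* ...and clear the dangling pointer */
            continue;
        }

        domflags |= HAVE_JOB;       /* set only once a job is acquired */
        (void)domflags;
        queried++;

        dom->locked = 0;
        dom = NULL;
    }
    return queried;
}
```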
  2. 16 November 2014, 5 commits
    • nwfilter: fix deadlock caused by updating network device and nwfilter · 26a87687
      Authored by Pavel Hrdina
      Commit 6e5c79a1 tried to fix the deadlock between
      nwfilter{Define,Undefine} and guest startup, but the same deadlock
      exists for updating/attaching a network device to a domain.
      
      The deadlock was introduced by removing the global QEMU driver lock,
      because nwfilter was counting on that lock to ensure that all driver
      locks are held inside nwfilter{Define,Undefine}.
      
      This patch extends the usage of virNWFilterReadLockFilterUpdates to
      prevent the deadlock for all possible paths in the QEMU driver. The
      LXC and UML drivers still have a global lock.
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1143780
      Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
      (cherry picked from commit 41127244)
    • qemu: Update fsfreeze status on domain state transitions · d937f1f9
      Authored by Michal Privoznik
      https://bugzilla.redhat.com/show_bug.cgi?id=1160084
      
      As of b6d4dad1 (1.2.5), libvirt keeps track of whether domain disks
      have been frozen. However, this falls into the set of information that
      doesn't survive a domain restart. Therefore, we need to clear the flag
      upon some state transitions. Moreover, once we clear the flag we must
      update the status file too.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      (cherry picked from commit 6ea54769)
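A minimal sketch of the idea (all names here are illustrative, not libvirt's): the "disks frozen" flag is runtime-only state, so any transition away from running must clear it and mark the status file for rewriting.

```c
#include <assert.h>
#include <stdbool.h>

typedef enum { VM_RUNNING, VM_SHUTOFF, VM_CRASHED } vm_state;

typedef struct {
    vm_state state;
    bool quiesced;      /* filesystems frozen via fsfreeze */
    bool status_dirty;  /* status XML needs to be saved */
} vm;

static void vm_transition(vm *v, vm_state next)
{
    v->state = next;
    if (next != VM_RUNNING && v->quiesced) {
        v->quiesced = false;     /* frozen state does not survive restart */
        v->status_dirty = true;  /* ...and the status file must be updated */
    }
}
```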
    • qemu: make advice from numad available when building commandline · 08182c7f
      Authored by Martin Kletzander
      Particularly in qemuBuildNumaArgStr(), the advice was needed due to
      memory backing, which needs to know the nodeset it will be pinned to.
      With newer qemu this caused the following error when starting a
      domain:
      
        error: internal error: Advice from numad is needed in case of
        automatic numa placement
      
      even when starting a perfectly valid domain, e.g.:
      
        ...
        <vcpu placement='auto'>4</vcpu>
        <numatune>
          <memory mode='strict' placement='auto'/>
        </numatune>
        <cpu>
          <numa>
            <cell id='0' cpus='0' memory='524288'/>
            <cell id='1' cpus='1' memory='524288'/>
          </numa>
        </cpu>
        ...
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1138545
      Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
      (cherry picked from commit 11a48758)
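The precondition can be reduced to a small sketch (build_numa_arg is a hypothetical name, not the real qemuBuildNumaArgStr() signature): with automatic placement, the command-line builder must already have the nodeset advised by numad, otherwise it fails exactly as above.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical reduction of the precondition: with <vcpu placement='auto'>
 * the nodeset returned by numad must be available by the time the qemu
 * command line is built. */
static int build_numa_arg(bool auto_placement, const char *numad_nodeset)
{
    if (auto_placement && numad_nodeset == NULL)
        return -1;  /* "Advice from numad is needed in case of automatic
                     * numa placement" */
    return 0;       /* memory can be pinned to the advised nodeset */
}
```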
    • qemu: forbid snapshot-delete --children-only on external snapshot · 3b4b9aee
      Authored by Eric Blake
      https://bugzilla.redhat.com/show_bug.cgi?id=956506 documents that,
      given a domain where an internal snapshot parent has an external
      snapshot child, we lacked a safety check when trying to use the
      --children-only option to snapshot-delete:
      
      $ virsh start dom
      $ virsh snapshot-create-as dom internal
      $ virsh snapshot-create-as dom external --disk-only
      $ virsh snapshot-delete dom external
      error: Failed to delete snapshot external
      error: unsupported configuration: deletion of 1 external disk snapshots not supported yet
      $ virsh snapshot-delete dom internal --children
      error: Failed to delete snapshot internal
      error: unsupported configuration: deletion of 1 external disk snapshots not supported yet
      $ virsh snapshot-delete dom internal --children-only
      Domain snapshot internal children deleted
      
      While I'd still like to see patches that actually do proper external
      snapshot deletion, we should at least fix the inconsistency in the
      meantime.  With this patch:
      
      $ virsh snapshot-delete dom internal --children-only
      error: Failed to delete snapshot internal
      error: unsupported configuration: deletion of 1 external disk snapshots not supported yet
      
      * src/qemu/qemu_driver.c (qemuDomainSnapshotDelete): Fix condition.
      Signed-off-by: Eric Blake <eblake@redhat.com>
      (cherry picked from commit 2086a990)
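The corrected condition can be sketched as follows (illustrative, not qemu_driver.c verbatim): external snapshots must be counted in the subtree that will actually be deleted, so with --children-only the descendants' external count matters even though the snapshot itself survives.

```c
#include <assert.h>
#include <stdbool.h>

/* Should deletion be rejected because it would touch external snapshots? */
static bool deletion_unsupported(bool snapshot_is_external,
                                 int external_descendants,
                                 bool children, bool children_only)
{
    int external = 0;

    if (children || children_only)
        external += external_descendants;   /* subtree is deleted */
    if (!children_only && snapshot_is_external)
        external += 1;                      /* snapshot itself is deleted */

    /* external snapshot deletion is not implemented yet */
    return external > 0;
}
```

With this check, `--children-only` on an internal snapshot that has an external child is rejected, matching the `--children` and plain-delete cases shown above.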
    • qemu: restore: Fix restoring of VM when the restore hook returns empty XML · 3d52d5e6
      Authored by Peter Krempa
      The documentation for the restore hook states that returning empty
      XML is equivalent to copying the input. The code checking the returned
      string had a bug: it checked the string pointer instead of the
      contents. Use the new helper to check whether the string is empty.
      
      (cherry picked from commit e3867799)
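A minimal sketch of the fix, under assumed names (in libvirt the helper is virStringIsEmpty(); `choose_restore_xml` below is hypothetical): the old code effectively tested only the pointer, so an empty string wrongly replaced the input, whereas the hook output must be used only when it has actual content.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Modeled after libvirt's virStringIsEmpty(). */
static bool str_is_empty(const char *s)
{
    return s == NULL || *s == '\0';
}

/* Pick the XML to restore from: hook output if non-empty, else the input. */
static const char *choose_restore_xml(const char *input, const char *hook_out)
{
    return str_is_empty(hook_out) ? input : hook_out;
}
```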
  3. 30 October 2014, 1 commit
    • qemu: x86_64 is good enough for i686 · cd1b72fd
      Authored by Lubomir Rintel
      virt-manager on Fedora sets up i686 hosts with the "/usr/bin/qemu-kvm"
      emulator, which in turn unconditionally execs qemu-system-x86_64;
      querying capabilities then fails:
      
      Error launching details: invalid argument: architecture from emulator 'x86_64' doesn't match given architecture 'i686'
      
      Traceback (most recent call last):
        File "/usr/share/virt-manager/virtManager/engine.py", line 748, in _show_vm_helper
          details = self._get_details_dialog(uri, vm.get_connkey())
        File "/usr/share/virt-manager/virtManager/engine.py", line 726, in _get_details_dialog
          obj = vmmDetails(conn.get_vm(connkey))
        File "/usr/share/virt-manager/virtManager/details.py", line 399, in __init__
          self.init_details()
        File "/usr/share/virt-manager/virtManager/details.py", line 784, in init_details
          domcaps = self.vm.get_domain_capabilities()
        File "/usr/share/virt-manager/virtManager/domain.py", line 518, in get_domain_capabilities
          self.get_xmlobj().os.machine, self.get_xmlobj().type)
        File "/usr/lib/python2.7/site-packages/libvirt.py", line 3492, in getDomainCapabilities
          if ret is None: raise libvirtError ('virConnectGetDomainCapabilities() failed', conn=self)
      libvirtError: invalid argument: architecture from emulator 'x86_64' doesn't match given architecture 'i686'
      
      Journal:
      
      Oct 16 21:08:26 goatlord.localdomain libvirtd[1530]: invalid argument: architecture from emulator 'x86_64' doesn't match given architecture 'i686'
      
      (cherry picked from commit afe8f420)
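The fix can be modeled as relaxing an exact-match architecture check (the enum and function below are illustrative, not libvirt's actual capabilities code): a 64-bit x86 emulator can service a 32-bit x86 guest, so x86_64 is accepted for i686 instead of demanding identity.

```c
#include <assert.h>
#include <stdbool.h>

typedef enum { ARCH_I686, ARCH_X86_64, ARCH_AARCH64 } arch_t;

/* Does this emulator architecture satisfy the requested guest arch? */
static bool emulator_matches_guest(arch_t emulator, arch_t guest)
{
    if (emulator == guest)
        return true;
    /* x86_64 is good enough for i686 */
    if (emulator == ARCH_X86_64 && guest == ARCH_I686)
        return true;
    return false;
}
```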
  4. 30 September 2014, 2 commits
  5. 26 September 2014, 3 commits
  6. 25 September 2014, 3 commits
  7. 24 September 2014, 2 commits
    • qemu: Report better errors from broken backing chains · 639a0098
      Authored by Peter Krempa
      Request erroring out from the backing chain traveller and drop qemu's
      internal backing chain integrity tester.
      
      The backing chain traveller reports errors by itself with possibly more
      detail than qemuDiskChainCheckBroken ever could.
      
      We also need to make sure that we reconnect to existing qemu instances
      even at the cost of losing the backing chain info (this really should be
      stored in the XML rather than reloaded from disk, but that needs some
      work).
    • cputune_event: queue the event for cputune updates · 0dce260c
      Authored by Pavel Hrdina
      Now that we have a universal tunable event, we can use it for
      reporting changes to the user. The cputune values will be prefixed
      with "cputune" to distinguish them from other tunable events.
      Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
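The prefixing can be sketched as follows (the helper is hypothetical; libvirt defines the real field names as constants, e.g. VIR_DOMAIN_TUNABLE_CPU_CPU_SHARES is the string "cputune.cpu_shares"):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build a tunable-event parameter name carrying the "cputune." prefix so
 * consumers can tell cputune changes apart from other tunable events. */
static int cputune_param_name(char *buf, size_t buflen, const char *field)
{
    return snprintf(buf, buflen, "cputune.%s", field);
}
```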
  8. 22 September 2014, 5 commits
  9. 19 September 2014, 2 commits
  10. 18 September 2014, 11 commits
  11. 17 September 2014, 2 commits
  12. 16 September 2014, 3 commits