1. 15 January 2014, 3 commits
    • qemu: Fix job usage in qemuDomainBlockJobImpl · 7354aaf4
      Committed by Jiri Denemark
      CVE-2013-6458
      
      Every API that is going to begin a job should do that before fetching
      data from vm->def.
      
      (cherry picked from commit f93d2caa)
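      A minimal sketch of the ordering this commit enforces, assuming the
      qemu driver's job API (qemuDomainObjBeginJob/qemuDomainObjEndJob); the
      disk lookup helper and the surrounding scaffolding are illustrative
      placeholders, not the verbatim f93d2caa change:

      /* Illustrative only -- begin the job before touching vm->def. */
      static int
      blockJobImplSketch(virQEMUDriverPtr driver, virDomainObjPtr vm,
                         const char *path)
      {
          virDomainDiskDefPtr disk;
          int ret = -1;

          /* Take the job first; while we wait for it the domain lock is
           * dropped, so anything read from vm->def before this point could
           * already have been freed by a concurrent API call. */
          if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0)
              return -1;

          /* Only now is it safe to resolve 'path' against vm->def. */
          if (!(disk = lookupDiskByPath(vm->def, path)))   /* placeholder */
              goto endjob;

          /* ... enter the monitor and drive the block job on 'disk' ... */
          ret = 0;

       endjob:
          if (!qemuDomainObjEndJob(driver, vm))
              vm = NULL;   /* domain went away; real callers skip the unlock */
          return ret;
      }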
    • qemu: Avoid using stale data in virDomainGetBlockInfo · 0e98442e
      Committed by Jiri Denemark
      CVE-2013-6458
      
      Generally, every API that is going to begin a job should do that before
      fetching data from vm->def. However, qemuDomainGetBlockInfo does not
      know whether it will have to start a job or not before checking vm->def.
      To avoid using a disk alias that might have been freed while we were
      waiting for a job, we work with a copy of it (see the sketch below). In
      case the disk was removed in the meantime, we will fail with a "cannot
      find statistics for device '...'" error message.
      
      (cherry picked from commit b7992595)
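      A fragment-level sketch of the copy-the-alias approach, assuming
      libvirt's VIR_STRDUP/VIR_FREE helpers and the qemu job API; the disk
      lookup helper is a placeholder and this is not the verbatim b7992595
      change:

      char *alias = NULL;
      virDomainDiskDefPtr disk;
      bool need_monitor = virDomainObjIsActive(vm);   /* simplification */
      int ret = -1;

      if (!(disk = lookupDiskByPath(vm->def, path)))  /* placeholder */
          goto cleanup;

      /* Keep a private copy: 'disk' and its alias may be freed while we
       * sleep on the job condition below. */
      if (VIR_STRDUP(alias, disk->info.alias) < 0)
          goto cleanup;

      if (need_monitor) {
          if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0)
              goto cleanup;

          /* From here on use only the copied 'alias'; 'disk' may be stale.
           * If the disk was unplugged in the meantime, the monitor query
           * fails with "cannot find statistics for device '...'". */

          if (!qemuDomainObjEndJob(driver, vm))
              vm = NULL;
      }
      ret = 0;

   cleanup:
      VIR_FREE(alias);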
    • qemu: Do not access stale data in virDomainBlockStats · 1bfc35e3
      Committed by Jiri Denemark
      CVE-2013-6458
      https://bugzilla.redhat.com/show_bug.cgi?id=1043069
      
      When virDomainDetachDeviceFlags is called concurrently with
      virDomainBlockStats, libvirtd may crash because qemuDomainBlockStats
      finds a disk in vm->def before getting a job on the domain and uses the
      disk pointer after getting the job. However, the domain is unlocked
      while waiting on the job condition, so the data behind the disk pointer
      may disappear. This happens when thread 1 runs
      virDomainDetachDeviceFlags and enters the monitor to actually remove the
      disk. Then another thread starts running virDomainBlockStats, finds the
      disk in vm->def, and while it is waiting on the job condition (owned by
      the first thread), the first thread finishes the disk removal. When the
      second thread finally gets the job, the memory pointed to by the disk
      pointer is already gone (the racy ordering is sketched below).
      
      That said, every API that is going to begin a job should do that before
      fetching data from vm->def.
      
      (cherry picked from commit db86da5c)
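      The racy pre-fix ordering, annotated with the interleaving described
      above (helper names are placeholders, not the actual code):

      /* thread 2: qemuDomainBlockStats before the fix */
      disk = lookupDiskByPath(vm->def, path);     /* finds the disk */

      /* thread 1: virDomainDetachDeviceFlags already holds the job and is
       * in the monitor removing that same disk */

      if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0)
          goto cleanup;
      /* waiting here releases the domain lock; thread 1 completes the
       * detach and frees the disk definition */

      /* thread 2 now owns the job, but 'disk' points to freed memory:
       * any use of disk->info.alias is a use-after-free -> crash */

      /* The fix (db86da5c) is the same pattern as f93d2caa above: begin
       * the job first, then look the disk up in vm->def. */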
  2. 09 January 2014, 2 commits
  3. 03 December 2013, 1 commit
  4. 07 November 2013, 1 commit
    • Fix race condition reconnecting to vms & loading configs · b044210e
      Committed by Daniel P. Berrange
      The following sequence
      
       1. Define a persistent QEMU guest
       2. Start the QEMU guest
       3. Stop libvirtd
       4. Kill the QEMU process
       5. Start libvirtd
       6. List persistent guests
      
      At the last step, the previously running persistent guest
      will be missing. This is because of a race condition in the
      QEMU driver startup code. It does
      
       1. Load all VM state files
       2. Spawn thread to reconnect to each VM
       3. Load all VM config files
      
      Only at the end of step 3 does the 'virDomainObjPtr' get
      marked as "persistent". There is therefore a window in which
      the thread reconnecting to the VM will remove the persistent
      VM from the list.
      
      The easy fix is to simply switch the order of steps 2 & 3
      (see the sketch below).
      
      In addition to this though, we must only attempt to reconnect
      to a VM which had a non-zero PID loaded from its state file.
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      (cherry picked from commit f26701f5)
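      A sketch of the corrected start-up ordering, with placeholder helper
      names standing in for the real state/config loading and reconnect code
      (this is the shape of the fix, not the verbatim f26701f5 diff):

      /* 1. Load run-time state files of guests that were running. */
      loadDomainStatusFiles(driver);                  /* placeholder */

      /* 2. Load the persistent config files *before* reconnecting, so every
       *    defined guest is already flagged persistent when reconnect
       *    failures start pruning entries from the domain list. */
      loadDomainConfigFiles(driver);                  /* placeholder */

      /* 3. Only now reconnect, and only to guests whose state file recorded
       *    an actual process. */
      for (i = 0; i < ndomains; i++) {
          virDomainObjPtr vm = domains[i];            /* placeholder list */
          if (vm->pid != 0)                           /* skip stale state files */
              spawnReconnectThread(driver, vm);       /* placeholder */
      }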
  5. 30 October 2013, 1 commit
  6. 07 October 2013, 1 commit
    • Remove use of virConnectPtr from all remaining nwfilter code · 5395f0b5
      Committed by Daniel P. Berrange
      The virConnectPtr is passed around loads of nwfilter code in
      order to provide it as a parameter to the callback registered
      by the virt drivers. None of the virt drivers use this param
      though, so it serves no purpose.
      
      Avoiding the need to pass a virConnectPtr means that the
      nwfilterStateReload method no longer needs to open a bogus
      QEMU driver connection. This addresses a race condition that
      can lead to a crash on startup.
      
      The nwfilter driver starts before the QEMU driver and registers
      some callbacks with DBus to detect firewalld reload. If the
      firewalld reload happens while the QEMU driver is still starting
      up though, the nwfilterStateReload method will open a connection
      to the partially initialized QEMU driver and cause a crash.
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      (cherry picked from commit 999d72fb)
  7. 27 September 2013, 1 commit
  8. 26 September 2013, 1 commit
  9. 24 September 2013, 1 commit
  10. 18 September 2013, 1 commit
  11. 17 September 2013, 5 commits
  12. 16 September 2013, 1 commit
  13. 12 September 2013, 1 commit
  14. 10 September 2013, 1 commit
    • qemu: endjob returns a bool · 6cd15482
      Committed by Eric Blake
      Osier Yang pointed out that ever since commit 31cb030a,
      qemuDomainObjEndJob has returned a bool. While comparing the result
      against 0 or > 0 still gives the right answer, it looks fishy; we also
      had one place comparing < 0, which is effectively dead code (see the
      sketch below).
      
      * src/qemu/qemu_migration.c (qemuMigrationPrepareAny): Fix dead
      code bug.
      (qemuMigrationBegin): Use more canonical form of bool check.
      * src/qemu/qemu_driver.c (qemuAutostartDomain)
      (qemuDomainCreateXML, qemuDomainSuspend, qemuDomainResume)
      (qemuDomainShutdownFlags, qemuDomainReboot, qemuDomainReset)
      (qemuDomainDestroyFlags, qemuDomainSetMemoryFlags)
      (qemuDomainSetMemoryStatsPeriod, qemuDomainInjectNMI)
      (qemuDomainSendKey, qemuDomainGetInfo, qemuDomainScreenshot)
      (qemuDomainSetVcpusFlags, qemuDomainGetVcpusFlags)
      (qemuDomainRestoreFlags, qemuDomainGetXMLDesc)
      (qemuDomainCreateWithFlags, qemuDomainAttachDeviceFlags)
      (qemuDomainUpdateDeviceFlags, qemuDomainDetachDeviceFlags)
      (qemuDomainBlockResize, qemuDomainBlockStats)
      (qemuDomainBlockStatsFlags, qemuDomainMemoryStats)
      (qemuDomainMemoryPeek, qemuDomainGetBlockInfo)
      (qemuDomainAbortJob, qemuDomainMigrateSetMaxDowntime)
      (qemuDomainMigrateGetCompressionCache)
      (qemuDomainMigrateSetCompressionCache)
      (qemuDomainMigrateSetMaxSpeed)
      (qemuDomainSnapshotCreateActiveInternal)
      (qemuDomainRevertToSnapshot, qemuDomainSnapshotDelete)
      (qemuDomainQemuMonitorCommand, qemuDomainQemuAttach)
      (qemuDomainBlockJobImpl, qemuDomainBlockCopy)
      (qemuDomainBlockCommit, qemuDomainOpenGraphics)
      (qemuDomainGetBlockIoTune, qemuDomainGetDiskErrors)
      (qemuDomainPMSuspendForDuration, qemuDomainPMWakeup)
      (qemuDomainQemuAgentCommand, qemuDomainFSTrim): Likewise.
      Signed-off-by: Eric Blake <eblake@redhat.com>
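      The change in caller style, in brief; 'vm = NULL' is the usual idiom
      when the domain goes away together with the job:

      /* Old style, when qemuDomainObjEndJob() returned an int: */
      if (qemuDomainObjEndJob(driver, vm) == 0)   /* or '> 0' for success */
          vm = NULL;

      /* Comparing '< 0' can never be true once the return type is bool,
       * so such a branch is dead code. The canonical bool form is: */
      if (!qemuDomainObjEndJob(driver, vm))
          vm = NULL;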
  15. 09 September 2013, 1 commit
    • qemu: don't leak vm on failure · d047b2d9
      Committed by Eric Blake
      Failure to attach to a domain during 'virsh qemu-attach' left
      the list of domains in an odd state:
      
      $ virsh qemu-attach 4176
      error: An error occurred, but the cause is unknown
      
      $ virsh list --all
       Id    Name                           State
      ----------------------------------------------------
       2     foo                            shut off
      
      $ virsh qemu-attach 4176
      error: Requested operation is not valid: domain is already active as 'foo'
      
      $ virsh undefine foo
      error: Failed to undefine domain foo
      error: Requested operation is not valid: cannot undefine transient domain
      
      $ virsh shutdown foo
      error: Failed to shutdown domain foo
      error: invalid argument: monitor must not be NULL
      
      It all stems from leaving the list of domains unmodified on
      the initial failure; we should follow the lead of createXML,
      which removes the vm on failure (the actual initial failure still
      needs to be fixed in a later patch, but at least this patch
      gets us to the point where we aren't stuck with an
      unremovable "shut off" transient domain). The shared cleanup
      pattern is sketched below.
      
      While investigating, I also found a leak in qemuDomainCreateXML;
      the two functions should behave similarly.  Note that there are
      still two unusual paths: if dom is not allocated, the user will
      see an OOM error even though the vm remains registered (but oom
      errors already indicate tricky cleanup); and if the vm starts
      and then quits again all before the job ends, it is possible
      to return a non-NULL dom even though the dom will no longer be
      useful for anything (but this at least lets the user know their
      short-lived vm ran).
      
      * src/qemu/qemu_driver.c (qemuDomainCreateXML): Don't leak vm on
      failure to obtain job.
      (qemuDomainQemuAttach): Match cleanup of qemuDomainCreateXML.
      Signed-off-by: Eric Blake <eblake@redhat.com>
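      A sketch of the shared failure-path cleanup the commit aims for in both
      qemuDomainCreateXML and qemuDomainQemuAttach, assuming the driver's
      qemuDomainRemoveInactive helper for transient domains; this is
      illustrative, not the verbatim d047b2d9 diff:

      if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) {
          /* Don't leave the half-added transient domain in the list: a
           * leaked entry shows up as an unremovable "shut off" guest. */
          if (!vm->persistent)
              qemuDomainRemoveInactive(driver, vm);
          vm = NULL;
          goto cleanup;
      }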
  16. 05 September 2013, 1 commit
  17. 04 September 2013, 1 commit
  18. 26 August 2013, 3 commits
  19. 22 August 2013, 2 commits
  20. 17 August 2013, 1 commit
  21. 31 July 2013, 1 commit
  22. 26 July 2013, 1 commit
  23. 24 July 2013, 2 commits
  24. 23 July 2013, 1 commit
  25. 22 July 2013, 2 commits
  26. 20 July 2013, 1 commit
  27. 18 July 2013, 1 commit
  28. 17 July 2013, 1 commit