1. 23 March 2015, 2 commits
  2. 19 March 2015, 1 commit
  3. 16 March 2015, 2 commits
  4. 02 March 2015, 3 commits
  5. 24 February 2015, 2 commits
  6. 19 January 2015, 1 commit
  7. 15 January 2015, 1 commit
    • Check for domain liveness in qemuDomainObjExitMonitor · dc2fd51f
      Ján Tomko committed
      The domain might disappear while the code is in the monitor and the
      virDomainObjPtr is unlocked, so the caller needs to check whether it
      is still alive.
      
      Since most of the callers are going to need it, put the
      check inside qemuDomainObjExitMonitor and return -1 if
      the domain died in the meantime.
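
      The new contract is easiest to see from a caller's perspective. The sketch below is a
      simplified, self-contained illustration, not the real libvirt code: the stand-in type and
      the doMonitorCall() helper are invented, and only qemuDomainObjExitMonitor() and its new
      -1 return value come from the commit above.

      ```c
      /* Simplified illustration of the caller pattern after this change:
       * qemuDomainObjExitMonitor() reports whether the domain survived the
       * unlocked monitor section by returning -1. */
      #include <stdbool.h>
      #include <stdio.h>

      typedef struct { bool active; } virDomainObj;      /* stand-in type */

      /* stand-in: re-locks the domain and checks it is still alive */
      static int qemuDomainObjExitMonitor(virDomainObj *vm)
      {
          /* ...re-acquire the domain lock here... */
          if (!vm->active) {
              fprintf(stderr, "domain is no longer running\n");
              return -1;            /* domain died while unlocked */
          }
          return 0;
      }

      static int doMonitorCall(virDomainObj *vm)
      {
          /* ...enter the monitor and issue the command here... */
          if (qemuDomainObjExitMonitor(vm) < 0)
              return -1;            /* caller no longer re-checks liveness itself */
          /* safe to keep using vm here */
          return 0;
      }

      int main(void)
      {
          virDomainObj vm = { .active = false };
          return doMonitorCall(&vm) < 0 ? 1 : 0;
      }
      ```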
  8. 14 January 2015, 1 commit
    • Give virDomainDef parser & formatter their own flags · 0ecd6851
      Daniel P. Berrange committed
      The virDomainDefParse* and virDomainDefFormat* methods both
      accept the VIR_DOMAIN_XML_* flags defined in the public API,
      along with a set of other VIR_DOMAIN_XML_INTERNAL_* flags
      defined in domain_conf.c.
      
      This is seriously confusing & error prone for a number of
      reasons:
      
       - VIR_DOMAIN_XML_SECURE, VIR_DOMAIN_XML_MIGRATABLE and
         VIR_DOMAIN_XML_UPDATE_CPU are only relevant for the
         formatting operation
       - Some of the VIR_DOMAIN_XML_INTERNAL_* flags only apply
         to parse or to format, but not both.
      
      This patch cleanly separates out the flags. There are two
      distinct sets, VIR_DOMAIN_DEF_PARSE_* and VIR_DOMAIN_DEF_FORMAT_*,
      used by the corresponding methods. The VIR_DOMAIN_XML_* flags
      received via public API calls must be converted to the
      VIR_DOMAIN_DEF_FORMAT_* flags where needed.
      
      The various calls to virDomainDefParse which hardcoded the
      VIR_DOMAIN_XML_INACTIVE flag are changed to use the
      VIR_DOMAIN_DEF_PARSE_INACTIVE flag.
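
      A rough sketch of the flag split at the driver boundary. The enum values and the
      domainDefFormatConvertXMLFlags() helper are stand-ins chosen for illustration; only the
      flag name families come from the commit above.

      ```c
      /* Public VIR_DOMAIN_XML_* flags are mapped to internal
       * VIR_DOMAIN_DEF_FORMAT_* flags before calling the formatter. */
      #include <stdio.h>

      enum {                               /* public API flags (stand-in values) */
          VIR_DOMAIN_XML_SECURE     = 1 << 0,
          VIR_DOMAIN_XML_INACTIVE   = 1 << 1,
          VIR_DOMAIN_XML_UPDATE_CPU = 1 << 2,
          VIR_DOMAIN_XML_MIGRATABLE = 1 << 3,
      };

      enum {                               /* internal formatter flags (stand-in values) */
          VIR_DOMAIN_DEF_FORMAT_SECURE     = 1 << 0,
          VIR_DOMAIN_DEF_FORMAT_INACTIVE   = 1 << 1,
          VIR_DOMAIN_DEF_FORMAT_UPDATE_CPU = 1 << 2,
          VIR_DOMAIN_DEF_FORMAT_MIGRATABLE = 1 << 3,
      };

      /* hypothetical helper: translate public flags at the driver boundary */
      static unsigned int domainDefFormatConvertXMLFlags(unsigned int xmlFlags)
      {
          unsigned int formatFlags = 0;

          if (xmlFlags & VIR_DOMAIN_XML_SECURE)
              formatFlags |= VIR_DOMAIN_DEF_FORMAT_SECURE;
          if (xmlFlags & VIR_DOMAIN_XML_INACTIVE)
              formatFlags |= VIR_DOMAIN_DEF_FORMAT_INACTIVE;
          if (xmlFlags & VIR_DOMAIN_XML_UPDATE_CPU)
              formatFlags |= VIR_DOMAIN_DEF_FORMAT_UPDATE_CPU;
          if (xmlFlags & VIR_DOMAIN_XML_MIGRATABLE)
              formatFlags |= VIR_DOMAIN_DEF_FORMAT_MIGRATABLE;

          return formatFlags;
      }

      int main(void)
      {
          unsigned int pub = VIR_DOMAIN_XML_SECURE | VIR_DOMAIN_XML_MIGRATABLE;
          printf("format flags: 0x%x\n", domainDefFormatConvertXMLFlags(pub));
          return 0;
      }
      ```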
  9. 12 January 2015, 1 commit
    • qxl: change the default value for vgamem_mb to 16 MiB · 0e502466
      Pavel Hrdina committed
      The default value should be 16 MiB instead of 8 MiB. Only really old
      versions of upstream QEMU used 8 MiB as the default VGA framebuffer
      size.
      
      Without this change, if you update to a libvirt that introduces the
      "vgamem" attribute for the QXL video device, the value will be set to
      8 MiB, but previously your guest had 16 MiB because we didn't pass any
      value on the QEMU command line, which means QEMU used its own default
      of 16 MiB.
      
      This affects all users whose guests use a display resolution higher
      than 1920x1080.
      Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
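
      For illustration only, a tiny sketch of how such a default might be applied during
      post-parse processing when the XML omits vgamem. The field name, the KiB unit and the
      constant are assumptions, not libvirt's actual code.

      ```c
      /* Apply a 16 MiB default only when the XML did not specify a value. */
      #include <stdio.h>

      #define QXL_VGAMEM_DEFAULT_KIB (16 * 1024)   /* 16 MiB expressed in KiB */

      struct videoDevice {
          unsigned int vgamem_kib;   /* 0 means "not set in the XML" (assumption) */
      };

      static void applyQxlDefaults(struct videoDevice *video)
      {
          if (video->vgamem_kib == 0)
              video->vgamem_kib = QXL_VGAMEM_DEFAULT_KIB;
      }

      int main(void)
      {
          struct videoDevice video = { 0 };
          applyQxlDefaults(&video);
          printf("vgamem = %u KiB\n", video.vgamem_kib);
          return 0;
      }
      ```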
  10. 07 January 2015, 1 commit
    • qemu: Don't unref domain after exit from nested async job · 79bb49a8
      Peter Krempa committed
      In commit 540c339a the domain reference
      counting in the qemu driver was completely refactored. Domain jobs
      no longer need to take their own reference on the domain object, as
      they now expect the reference from the calling function.
      
      However, the patch forgot to remove the unref call in the case where we
      exit the monitor while acquiring a nested job. This caused the daemon to
      crash on a subsequent access to the domain object once we had done an
      operation requiring a nested job for monitor access.
      
      An easy reproducer case:
      
      1) Start a vm with qcow disks
      2) virsh snapshot-create-as DOMNAME
      3) virsh dumpxml DOMNAME
      4) the daemon crashes in a semi-random spot while accessing the
      now-removed VM object.
      
      Fortunately, the commit wasn't released yet, so there are no security
      implications.
      Reported-by: Shanzi Yu <shyu@redhat.com>
      Signed-off-by: Peter Krempa <pkrempa@redhat.com>
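
      The ownership rule behind this fix can be shown with a small, self-contained sketch
      (stand-in types and helpers, not the qemu driver code): the caller owns the domain
      reference, so an error path while acquiring a nested job must not drop it.

      ```c
      /* If an error path unrefs a reference owned by the caller, the object
       * can be freed while still in use, leading to the crash described above. */
      #include <stdio.h>
      #include <stdlib.h>

      typedef struct { int refs; } domainObj;

      static void objUnref(domainObj *obj)
      {
          if (--obj->refs == 0) {
              printf("object freed\n");
              free(obj);
          }
      }

      /* Acquiring a nested job may fail; on failure we only report the error.
       * The buggy version also dropped the caller's reference here, which
       * caused a use-after-free later on. */
      static int beginNestedJob(domainObj *obj)
      {
          int failed = 1;              /* pretend acquiring the job failed */
          (void)obj;
          if (failed) {
              fprintf(stderr, "cannot acquire nested job\n");
              return -1;               /* no objUnref() here */
          }
          return 0;
      }

      int main(void)
      {
          domainObj *vm = calloc(1, sizeof(*vm));
          vm->refs = 1;                /* reference owned by the caller */

          if (beginNestedJob(vm) < 0)
              fprintf(stderr, "monitor operation failed\n");

          /* the caller still holds its reference and releases it exactly once */
          objUnref(vm);
          return 0;
      }
      ```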
  11. 21 December 2014, 1 commit
    • qemu: completely rework reference counting · 540c339a
      Martin Kletzander committed
      There is one problem that causes various errors in the daemon.  When a
      domain is waiting for a job, it is unlocked while waiting on the
      condition.  However, if that domain is, for example, transient and is
      being removed in another API (e.g. cancelling an incoming migration), it
      gets unref'd.  If the first call, the one that was waiting, fails to get
      the job, it unrefs the domain object, and because that was the last
      reference, the whole domain object is cleared.  However, when finishing
      the call, the domain must be unlocked, but there is no way for the API
      to know whether it was cleaned up or not (unless there is some ugly
      temporary variable, but let's scratch that).
      
      The root cause is that our APIs don't ref the objects they are using and
      all rely on the implicit reference that the object has while it is in
      the domain list.  That reference can be removed while the API is waiting
      for a job.  And because the APIs don't do their own ref'ing, we end up
      with the ugly checking of the return value of virObjectUnref() that we
      have everywhere.
      
      This patch changes qemuDomObjFromDomain() to ref the domain (using
      virDomainObjListFindByUUIDRef()) and adds qemuDomObjEndAPI(), which
      should be the only function in which the return value of
      virObjectUnref() is checked.  This makes all reference counting
      deterministic and makes the code a bit clearer.
      Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
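
      A minimal sketch of the calling convention this introduces. Only the names
      qemuDomObjFromDomain() and qemuDomObjEndAPI() come from the commit; the types and helper
      bodies are simplified stand-ins.

      ```c
      /* Every API takes its own reference on lookup and drops it exactly once
       * at the end, so reference counting no longer depends on the domain
       * staying in the domain list while the API waits for a job. */
      #include <stdlib.h>

      typedef struct { int refs; } virDomainObj;

      static virDomainObj *objRef(virDomainObj *obj) { obj->refs++; return obj; }

      static void objUnref(virDomainObj *obj)
      {
          if (--obj->refs == 0)
              free(obj);
      }

      /* stand-in for qemuDomObjFromDomain(): look the domain up and return it
       * with an extra reference held for this API call */
      static virDomainObj *qemuDomObjFromDomain(virDomainObj *listEntry)
      {
          return objRef(listEntry);
      }

      /* stand-in for qemuDomObjEndAPI(): the single place that unrefs (and
       * would unlock) the object, clearing the caller's pointer */
      static void qemuDomObjEndAPI(virDomainObj **vm)
      {
          objUnref(*vm);
          *vm = NULL;
      }

      int main(void)
      {
          virDomainObj *inList = calloc(1, sizeof(*inList));
          inList->refs = 1;                 /* reference held by the domain list */

          virDomainObj *vm = qemuDomObjFromDomain(inList);
          /* ... the API body may wait for a job; even if the list drops its
           * reference meanwhile, this call still owns one ... */
          qemuDomObjEndAPI(&vm);

          objUnref(inList);                 /* list reference released elsewhere */
          return 0;
      }
      ```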
  12. 17 December 2014, 1 commit
    • getstats: perform recursion in monitor collection · b1802714
      Eric Blake committed
      When requested in a later patch, the QMP command results are now
      examined recursively.  As qemu_driver will eventually have to
      read items out of the hash table as stored by this patch, the
      computation of the backing alias string is done in a shared location.
      
      * src/qemu/qemu_domain.h (qemuDomainStorageAlias): New prototype.
      * src/qemu/qemu_domain.c (qemuDomainStorageAlias): Implement it.
      * src/qemu/qemu_monitor_json.c
      (qemuMonitorJSONGetOneBlockStatsInfo)
      (qemuMonitorJSONBlockStatsUpdateCapacityOne): Perform recursion.
      (qemuMonitorJSONGetAllBlockStatsInfo)
      (qemuMonitorJSONBlockStatsUpdateCapacity): Update callers.
      Signed-off-by: Eric Blake <eblake@redhat.com>
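
      A simplified sketch of the recursive collection idea: one entry per image in the backing
      chain, keyed by an alias built from the device alias and the backing index. The key format
      and the data structures are assumptions for illustration, not the actual
      qemuDomainStorageAlias() output.

      ```c
      /* Walk a backing chain recursively and emit one stats entry per image,
       * keyed by "<device alias>[<index>]" (assumed format). */
      #include <stdio.h>

      struct blockStats { long long rd_bytes, wr_bytes; };

      struct backingImage {
          struct blockStats stats;
          struct backingImage *backing;   /* next image down the chain, or NULL */
      };

      /* stand-in for the shared alias helper named in the commit */
      static void storageAlias(char *buf, size_t len, const char *alias, unsigned int idx)
      {
          if (idx == 0)
              snprintf(buf, len, "%s", alias);
          else
              snprintf(buf, len, "%s[%u]", alias, idx);
      }

      /* recurse through the chain, one key per backing image */
      static void collectStats(const char *alias, const struct backingImage *img,
                               unsigned int idx)
      {
          char key[128];

          if (!img)
              return;
          storageAlias(key, sizeof(key), alias, idx);
          printf("%s: rd=%lld wr=%lld\n", key, img->stats.rd_bytes, img->stats.wr_bytes);
          collectStats(alias, img->backing, idx + 1);   /* recurse into backing file */
      }

      int main(void)
      {
          struct backingImage base = { { 10, 0 }, NULL };
          struct backingImage top  = { { 100, 50 }, &base };
          collectStats("drive-virtio-disk0", &top, 0);
          return 0;
      }
      ```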
  13. 16 December 2014, 1 commit
  14. 04 December 2014, 1 commit
    • qemu: process: Refactor reconnecting to qemu processes · 3ecebf07
      Peter Krempa committed
      Move entering the job into the thread to simplify the program flow.
      Also, as the code holds a separate reference to the domain object, some
      conditions can be simplified.
      
      After this patch qemuDomainObjTransferJob is no longer needed, so this
      patch removes it.
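
      A rough sketch of the restructuring, with invented names: the job is begun and ended
      inside the reconnect thread itself instead of being started by the spawning code and
      transferred to the thread.

      ```c
      /* The worker acquires and releases its own job; nothing is handed over. */
      #include <pthread.h>
      #include <stdio.h>

      static int beginJob(const char *who)  { printf("%s: job started\n", who); return 0; }
      static void endJob(const char *who)   { printf("%s: job ended\n", who); }

      static void *reconnectWorker(void *opaque)
      {
          (void)opaque;
          if (beginJob("worker") < 0)        /* job is acquired inside the thread */
              return NULL;
          /* ... reconnect to the qemu monitor, refresh state ... */
          endJob("worker");
          return NULL;
      }

      int main(void)
      {
          pthread_t thr;
          /* the spawning code no longer starts a job and transfers it */
          pthread_create(&thr, NULL, reconnectWorker, NULL);
          pthread_join(thr, NULL);
          return 0;
      }
      ```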
  15. 28 November 2014, 1 commit
    • qemu: Don't track quiesced state of FSs · 6085d917
      Michal Privoznik committed
      https://bugzilla.redhat.com/show_bug.cgi?id=1160084
      
      As of b6d4dad1 (1.2.5) we are trying to keep track of the FSFreeze
      status in the guest.  Even though I've tried to fix a couple of corner
      cases (6ea54769), it occurred to me just recently that the approach is
      broken by design.  Firstly, there are many other ways to talk to
      qemu-ga (even through libvirt, e.g. qemu-agent-command) by which
      filesystems can be thawed without libvirt noticing.  Moreover, there are
      plenty of ways to thaw filesystems without even qemu-ga noticing (yes,
      qemu-ga keeps internal track of the FSFreeze status).  So, instead of
      keeping track ourselves, or asking qemu-ga for possibly stale state,
      it's best to let qemu-ga deal with it (and possibly let the guest kernel
      propagate an error).
      
      Moreover, there was one bug in the previous approach: if the fsfreeze
      command failed, we subsequently executed fsthaw.  So issuing
      domfsfreeze in virsh gave the following result:
      
      virsh # domfsfreeze gentoo
      Froze 1 filesystem(s)
      
      virsh # domfsfreeze gentoo
      error: Unable to freeze filesystems
      error: internal error: unable to execute QEMU agent command 'guest-fsfreeze-freeze': The command guest-fsfreeze-freeze has been disabled for this instance
      
      virsh # domfsfreeze gentoo
      Froze 1 filesystem(s)
      
      virsh # domfsfreeze gentoo
      error: Unable to freeze filesystems
      error: internal error: unable to execute QEMU agent command 'guest-fsfreeze-freeze': The command guest-fsfreeze-freeze has been disabled for this instance
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
  16. 25 November 2014, 2 commits
  17. 15 November 2014, 1 commit
  18. 07 November 2014, 1 commit
  19. 03 November 2014, 1 commit
    • qemu: avoid rare race when undefining domain · b629c64e
      Martin Kletzander committed
      When one domain is being undefined and at the same time started, for
      example, there is a possibility of a rare problem occurring.
      
       - Thread 1 does virDomainUndefine(), has the lock, checks whether the
         domain is active and, because it is not, calls
         virDomainObjListRemove().
      
       - Thread 2 does virDomainCreate() and tries to lock the domain.
      
       - Thread 1 needs to lock the domain list in order to remove the domain
         from it, but must unlock the domain first (the proper order is to
         lock the domain list first and the domain itself second).
      
       - Thread 2 grabs the lock, starts the domain and releases the lock.
      
       - Thread 1 grabs the lock and removes the domain from list.
      
      With this patch:
      
       - qemuDomainRemoveInactive() starts a QEMU_JOB_MODIFY job if that's
         possible, but since it must remove the domain from the list either
         way, it continues even when starting the job fails.
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1150505
      Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
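
      A small sketch of the "best effort" behaviour described above, with stand-in helpers:
      try to start a modify job, but remove the domain from the list even if that fails.

      ```c
      /* Starting the job is attempted but never blocks the removal itself. */
      #include <stdbool.h>
      #include <stdio.h>

      static bool beginModifyJob(void)
      {
          /* in the real driver this can fail, e.g. when another job is active */
          return false;
      }

      static void endJob(void)                { printf("job ended\n"); }
      static void removeDomainFromList(void)  { printf("domain removed from list\n"); }

      static void removeInactiveDomain(void)
      {
          bool haveJob = beginModifyJob();
          if (!haveJob)
              fprintf(stderr, "could not start job, removing domain anyway\n");

          removeDomainFromList();             /* must happen either way */

          if (haveJob)
              endJob();
      }

      int main(void)
      {
          removeInactiveDomain();
          return 0;
      }
      ```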
  20. 26 September 2014, 1 commit
    • qemu: Always re-detect backing chain · fe7ef7b1
      Peter Krempa committed
      Since 363e9a68 we track backing chain metadata the right way when
      creating snapshots, even for the inactive configuration.  As we have not
      yet updated the other code paths that modify the backing chain
      (blockpull), the newDef backing chain gets out of sync.
      
      After a VM is stopped, the new definition gets copied to the next-start
      definition, so the new VM then has incorrect backing chain info.  This
      patch switches the backing chain detector to always purge the existing
      backing chain and force re-detection, to avoid this issue until we have
      full backing chain tracking support.
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1144922
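
      A simplified sketch of "purge and re-detect": whatever backing chain is currently
      recorded for the disk is discarded and rebuilt. The types, the detect function and the
      file names are placeholders, not libvirt's implementation.

      ```c
      /* Drop the stored chain unconditionally, then rebuild it from the images. */
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      struct storageSource {
          char *path;
          struct storageSource *backingStore;   /* recorded backing chain */
      };

      static void freeChain(struct storageSource *src)
      {
          while (src) {
              struct storageSource *next = src->backingStore;
              free(src->path);
              free(src);
              src = next;
          }
      }

      /* stand-in for re-reading the image headers on disk */
      static struct storageSource *detectChain(const char *path)
      {
          struct storageSource *src = calloc(1, sizeof(*src));
          src->path = strdup(path);
          return src;
      }

      /* always discard the stored chain and re-detect it */
      static void determineDiskChain(struct storageSource *disk)
      {
          freeChain(disk->backingStore);
          disk->backingStore = detectChain("base.qcow2");   /* placeholder path */
          printf("re-detected backing file: %s\n", disk->backingStore->path);
      }

      int main(void)
      {
          struct storageSource disk = { strdup("top.qcow2"), NULL };
          determineDiskChain(&disk);
          freeChain(disk.backingStore);
          free(disk.path);
          return 0;
      }
      ```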
  21. 24 September 2014, 3 commits
    • qemu: Report better errors from broken backing chains · 639a0098
      Peter Krempa committed
      Request erroring out from the backing chain traveller and drop qemu's
      internal backing chain integrity tester.
      
      The backing chain traveller reports errors by itself with possibly more
      detail than qemuDiskChainCheckBroken ever could.
      
      We also need to make sure that we reconnect to existing qemu instances
      even at the cost of losing the backing chain info (this really should be
      stored in the XML rather than reloaded from disk, but that needs some
      work).
    • qemu: Sanitize argument names and empty disk check in qemuDomainDetermineDiskChain · 172ca0e7
      Peter Krempa committed
      Reuse virStorageSourceIsEmpty and rename "force" argument to
      "force_probe".
    • util: storage: Allow metadata crawler to report useful errors · b8549877
      Peter Krempa committed
      Add a new parameter to virStorageFileGetMetadata that will stop the
      backing chain detection process and report a useful error message,
      rather than having to use virStorageFileChainGetBroken.
      
      This patch just introduces the option; usage will be provided
      separately.
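
      A sketch of what such an option can look like; the parameter name report_broken and
      everything else here are illustrative assumptions, not the real
      virStorageFileGetMetadata() signature.

      ```c
      /* The extra boolean makes the crawler stop and raise an error as soon as
       * an image in the chain cannot be read, instead of silently ending the
       * chain there. */
      #include <stdbool.h>
      #include <stdio.h>

      /* pretend only "base.qcow2" is readable */
      static bool imageReadable(const char *path)
      {
          return path[0] == 'b';
      }

      static int getMetadataRecurse(const char *path, bool report_broken)
      {
          if (!imageReadable(path)) {
              if (report_broken) {
                  fprintf(stderr, "cannot read backing file '%s'\n", path);
                  return -1;                 /* caller gets a useful error */
              }
              return 0;                      /* old behaviour: chain just ends here */
          }
          /* ... parse the header, then recurse into the backing file ... */
          return 0;
      }

      int main(void)
      {
          if (getMetadataRecurse("missing.qcow2", true) < 0)
              fprintf(stderr, "backing chain is broken\n");
          return 0;
      }
      ```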
  22. 23 September 2014, 1 commit
  23. 19 September 2014, 2 commits
  24. 18 September 2014, 2 commits
  25. 16 September 2014, 1 commit
  26. 10 September 2014, 4 commits
  27. 08 September 2014, 1 commit