1. 03 Aug 2015 (2 commits)
  2. 31 Jul 2015 (2 commits)
  3. 30 Jul 2015 (1 commit)
  4. 29 Jul 2015 (1 commit)
    • qemu: Adjust VM id allocation · b2960501
      Authored by Erik Skultety
      Our atomic increment (virAtomicIntInc) uses the gcc
      __sync_add_and_fetch builtin (if available). In the qemu driver,
      though, we would profit more from the __sync_fetch_and_add builtin.
      To keep things simple, this patch adjusts the qemu driver
      initialization rather than adding a new atomic increment macro.
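
      The distinction the message refers to: __sync_add_and_fetch returns the
      value after the increment, while __sync_fetch_and_add returns the value
      before it. A minimal illustrative sketch of the two allocation styles
      (plain C, not libvirt code):

      #include <stdio.h>

      /* Illustrative only: shows the difference between the two gcc builtins.
       * __sync_add_and_fetch returns the NEW value (post-increment),
       * __sync_fetch_and_add returns the OLD value (pre-increment). */
      static int next_id;

      static int alloc_id_post(void)
      {
          return __sync_add_and_fetch(&next_id, 1);   /* first caller gets 1 */
      }

      static int alloc_id_pre(void)
      {
          return __sync_fetch_and_add(&next_id, 1);   /* first caller gets 0 */
      }

      int main(void)
      {
          printf("add_and_fetch style: %d\n", alloc_id_post());
          next_id = 0;
          printf("fetch_and_add style: %d\n", alloc_id_pre());
          return 0;
      }

      Which builtin is used determines whether the first allocated id is 0 or
      1, which is presumably why the commit compensates in the driver
      initialization instead of introducing a second increment macro.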
  5. 27 Jul 2015 (1 commit)
  6. 25 Jul 2015 (1 commit)
    • qemu: reorganize loop in qemuDomainAssignPCIAddresses · 07268782
      Authored by Laine Stump
      This loop occurs just after we've ensured that all devices that
      require a PCI address have been assigned one and that all necessary
      PCI controllers have been added. It is the perfect place to add other
      potentially auto-generated PCI controller attributes that are
      dependent on the controller's PCI address (upcoming patch).
      
      There is a convenient loop through all controllers at the end of the
      function, but the patch to add new functionality will be cleaner if we
      first rearrange that loop a bit.
      
      Note that the loop originally was accessing info.addr.pci.bus prior to
      determining that the pci part of the object was valid. This isn't
      dangerous in any way, but seemed a bit ugly, so I fixed it.
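
      The "ugly" access the note describes is reading the pci member of the
      address union before confirming the address is actually a PCI address.
      A hedged sketch of the safer ordering (struct and enum names here are
      illustrative; the real code uses libvirt's virDomainDeviceInfo):

      /* Illustrative only, modeled loosely on a device-info structure. */
      enum addr_type { ADDR_TYPE_NONE, ADDR_TYPE_PCI, ADDR_TYPE_OTHER };

      struct device_info {
          enum addr_type type;
          union {
              struct { unsigned int domain, bus, slot, function; } pci;
              /* ... other address kinds ... */
          } addr;
      };

      /* Validate the address type first, then touch the pci part of the
       * union; the original loop read info->addr.pci.bus before this check. */
      static int controller_bus(const struct device_info *info)
      {
          if (info->type != ADDR_TYPE_PCI)
              return -1;                  /* not a PCI-addressed controller */
          return (int)info->addr.pci.bus;
      }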
  7. 24 Jul 2015 (2 commits)
  8. 22 Jul 2015 (2 commits)
  9. 21 Jul 2015 (1 commit)
    • qemu: Update state of block job to READY only if it actually is ready · eae59247
      Authored by Peter Krempa
      A few parts of the code looked at the current progress and assumed
      that a two-phase block job is in the _READY state as soon as the
      progress reached 100% (info.cur == info.end). In current versions of
      qemu this assumption is invalid: qemu exposes a new 'ready' flag in
      the query-block-jobs output that is set to true only once the job is
      actually ready to be completed.
      
      This patch adds internal data handling for reading the 'ready' flag and
      acting appropriately as long as the flag is present.
      
      While this still doesn't fix the virsh client problem with two-phase
      block jobs and the --pivot option, it at least improves the error
      message from:
      
      $ virsh blockcommit  --wait --verbose vm vda  --base vda[1] --active --pivot
      Block commit: [100 %]error: failed to pivot job for disk vda
      error: internal error: unable to execute QEMU command 'block-job-complete': The active block job for device 'drive-virtio-disk0' cannot be completed
      
      to
      
      $ virsh blockcommit  --wait --verbose VM vda  --base vda[1] --active --pivot
      Block commit: [100 %]error: failed to pivot job for disk vda
      error: block copy still active: disk 'vda' not ready for pivot yet
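
      A hedged sketch of the state decision described above (struct and
      function names are illustrative, not the actual libvirt data
      structures): progress alone no longer implies readiness; the 'ready'
      flag is trusted whenever qemu reports it.

      #include <stdbool.h>

      /* Illustrative only: mirrors the logic in the commit message. */
      struct block_job_info {
          unsigned long long cur;   /* bytes processed so far */
          unsigned long long end;   /* total bytes */
          bool ready_present;       /* true if qemu reported a 'ready' field */
          bool ready;               /* value of that 'ready' field */
      };

      static bool block_job_is_ready(const struct block_job_info *info)
      {
          /* Newer qemu: trust the explicit flag from query-block-jobs. */
          if (info->ready_present)
              return info->ready;

          /* Older qemu: fall back to the progress-based heuristic. */
          return info->cur == info->end;
      }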
  10. 20 Jul 2015 (2 commits)
    • qemu: Reject updating unsupported disk information · 717c99f3
      Authored by Martin Kletzander
      If one calls update-device with information that is not updatable,
      libvirt reports success even though no data were updated.  The example
      used in the bug linked below updates a device with <boot order='2'/>,
      which, in my opinion, is a valid thing to request from the user's
      perspective, mainly since we properly error out if the user wants to
      update such data on, for example, a network device.

      And since there are many things that might change (update-device on a
      disk basically only knows how to change removable media), check what
      is actually changing; moreover, since the function might be usable in
      other drivers (updating only the disk path is a valid possibility),
      let's abstract it to compare any two disks.

      We can't possibly check for everything, since for many fields our code
      does not properly differentiate between default and unspecified values.
      Even though this could be changed, I don't feel it's worth the
      complexity, so it's not the aim of this patch.
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1007228
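
      A hedged sketch of the kind of check the message describes: diff the
      old and new disk definitions and refuse anything beyond the supported
      (removable-media) change. The struct and field names are illustrative;
      the real code compares virDomainDiskDef objects.

      #include <stdbool.h>
      #include <string.h>

      /* Illustrative stand-in for a disk definition. */
      struct disk_def {
          char *dst;          /* target device name, e.g. "vda" */
          char *src;          /* source path / media */
          int   boot_order;   /* 0 == unspecified */
          bool  readonly;
      };

      /* Return true if only fields we know how to update live (the media
       * source) differ; false if anything unsupported changed, so the caller
       * can report an error instead of silently pretending success. */
      static bool disk_update_is_supported(const struct disk_def *olddef,
                                           const struct disk_def *newdef)
      {
          if (strcmp(olddef->dst, newdef->dst) != 0)
              return false;               /* target can never change */
          if (olddef->boot_order != newdef->boot_order)
              return false;               /* boot order is not live-updatable */
          if (olddef->readonly != newdef->readonly)
              return false;
          /* olddef->src differing from newdef->src is fine: that is the
           * removable-media change update-device knows how to perform. */
          return true;
      }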
    • qemu: Use heads parameter for QXL driver · 7b401c3b
      Authored by Frediano Ziglio
      Allows specifying the maximum number of heads for the QXL driver.

      This can actually be a compatibility problem, as heads in the XML
      configuration was previously set to '1' by default.
      Signed-off-by: Frediano Ziglio <fziglio@redhat.com>
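
      For context, the heads value comes from the video model element of the
      domain XML; a minimal illustrative snippet (other attributes such as
      ram/vram omitted):

      <video>
        <model type='qxl' heads='4'/>
      </video>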
  11. 15 Jul 2015 (3 commits)
  12. 14 Jul 2015 (7 commits)
    • qemu: process: Improve update of maximum balloon state at startup · c212e0c7
      Authored by Peter Krempa
      In commit 641a145d I added code that resets the balloon memory value
      to full size prior to resuming the vCPUs, since the size certainly
      had not been reduced at that point.

      Since qemuProcessStart is also used in code paths with already booted
      guests (migration, save/restore), the assumption is not entirely true,
      as the guest might already have been running before.

      This patch adds a function that queries the monitor rather than
      assuming the full size, since a balloon event would not be reissued
      when we are recovering a saved migration state.

      Additionally, the new function is also used when reconnecting to a VM
      after a libvirtd restart, since we might have missed a few balloon
      events while libvirtd was not running.
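
      A hedged sketch of the refresh logic described above (all names are
      illustrative; the real code lives in the libvirt qemu driver and talks
      to the qemu monitor): ask the monitor for the current balloon size and
      fall back to the configured maximum only when ballooning is absent.

      struct domain {
          unsigned long long max_mem;   /* configured maximum (KiB) */
          unsigned long long cur_mem;   /* libvirt's view of the balloon (KiB) */
      };

      /* Mocked monitor query for the sketch: pretend the guest ballooned
       * down to 2 GiB. Returns 0 on success, -2 if no balloon device. */
      static int monitor_get_balloon(struct domain *dom, unsigned long long *actual)
      {
          (void)dom;
          *actual = 2048 * 1024;
          return 0;
      }

      static int refresh_balloon_size(struct domain *dom)
      {
          unsigned long long actual;
          int rc = monitor_get_balloon(dom, &actual);

          if (rc == -2) {
              /* No balloon device: the guest sees the full amount. */
              dom->cur_mem = dom->max_mem;
              return 0;
          }
          if (rc < 0)
              return -1;

          /* Trust the monitor instead of blindly assuming full size. */
          dom->cur_mem = actual;
          return 0;
      }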
    • qemuDomainSetNumaParamsLive: Check for NUMA mode more wisely · 1cf25f63
      Authored by Michal Privoznik
      https://bugzilla.redhat.com/show_bug.cgi?id=1232663
      
      In one of my previous patches (bcd9a564) I tried to fix the problem
      that we blindly assumed strict NUMA mode for guests. This led to
      several problems, like us pinning a domain onto a nodeset via libnuma
      along with CGroups. Once the nodeset was changed by the user, it did
      not have the desired effect. See the original commit for more info.
      But the commit I wrote had a bug: when NUMA parameters are changed on
      a running domain, we require the domain to be strictly pinned onto a
      nodeset, and due to a typo that condition was mis-evaluated.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
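
      A hedged illustration of the kind of guard involved (enum and function
      names are illustrative, not the libvirt ones): a live nodeset change is
      only allowed when the domain's memory mode is strict, so an inverted
      comparison here makes the check pass or fail for the wrong mode.

      enum numa_mode { NUMA_MEM_STRICT, NUMA_MEM_PREFERRED, NUMA_MEM_INTERLEAVE };

      /* Reject a live nodeset change unless the memory mode is strict. */
      static int check_live_numa_change(enum numa_mode mode)
      {
          if (mode != NUMA_MEM_STRICT)
              return -1;   /* report: live change only supported in strict mode */
          return 0;
      }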
    • nodeinfo: Add sysfs_prefix to nodeGetMemoryStats · c71f0654
      Authored by John Ferlan
      Add the sysfs_prefix argument to the call to allow tests to set the
      path to something other than SYSFS_SYSTEM_PATH.
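
      This and the following nodeinfo commits share the same testability
      pattern: the entry point gains a sysfs_prefix argument so tests can
      point the code at a fake sysfs tree instead of the compiled-in
      SYSFS_SYSTEM_PATH. A hedged sketch of the pattern (the function name
      and path layout are illustrative):

      #include <stdio.h>

      #define SYSFS_SYSTEM_PATH "/sys/devices/system"

      /* NULL prefix means "use the real sysfs"; tests pass a fixture dir. */
      static int node_get_memory_stats(const char *sysfs_prefix,
                                       char *buf, size_t buflen)
      {
          const char *prefix = sysfs_prefix ? sysfs_prefix : SYSFS_SYSTEM_PATH;

          /* The real code would open and parse files under this path. */
          return snprintf(buf, buflen, "%s/node/node0/meminfo", prefix) <
                 (int)buflen ? 0 : -1;
      }

      int main(void)
      {
          char path[256];

          node_get_memory_stats(NULL, path, sizeof(path));              /* production */
          printf("%s\n", path);
          node_get_memory_stats("/tmp/fake-sysfs", path, sizeof(path)); /* test */
          printf("%s\n", path);
          return 0;
      }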
    • nodeinfo: Add sysfs_prefix to nodeCapsInitNUMA · b97b3048
      Authored by John Ferlan
      Add the sysfs_prefix argument to the call to allow tests to set the
      path to something other than SYSFS_CPU_PATH, which is derived from
      SYSFS_SYSTEM_PATH.

      Use cpupath for nodeCapsInitNUMAFake and remove SYSFS_CPU_PATH.
    • nodeinfo: Add sysfs_prefix to nodeGetInfo · 29e4f224
      Authored by John Ferlan
      Add the sysfs_prefix argument to the call to allow tests to set the
      path to something other than SYSFS_SYSTEM_PATH.
    • nodeinfo: Add sysfs_prefix to nodeGetCPUMap · f1c6179f
      Authored by John Ferlan
      Add the sysfs_prefix argument to the call to allow tests to set the
      path to something other than SYSFS_SYSTEM_PATH.
    • nodeinfo: Add sysfs_prefix to nodeGetCPUCount · f1a43a0f
      Authored by John Ferlan
      Add the sysfs_prefix argument to the call to allow tests to set the
      path to something other than SYSFS_SYSTEM_PATH.
  13. 13 Jul 2015 (1 commit)
    • qemuProcessHandleMigrationStatus: Update migration status more frequently · 45cc2fca
      Authored by Michal Privoznik
      After Jirka's migration patches, libvirt listens for migration events
      from qemu instead of actively polling the monitor. There is, however,
      a small regression (introduced in 6d2edb6a). The problem is that the
      current status of the migration job is updated in
      qemuProcessHandleMigrationStatus if and only if a migration job was
      started. But eventually every asynchronous job may result in a
      migration. Therefore, when such a job is not strictly a migration job,
      the internal state was not updated and later checks failed:
      
        virsh # save fedora22 /tmp/fedora22_ble.save
        error: Failed to save domain fedora22 to /tmp/fedora22_ble.save
        error: operation failed: domain save job: is not active
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
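
      A hedged sketch of the handler change described above (enum, struct,
      and function names are illustrative, not the actual libvirt handler):
      record the reported status for any active asynchronous job, not only
      for an explicit outgoing migration.

      enum async_job { JOB_NONE, JOB_MIGRATION_OUT, JOB_SAVE, JOB_DUMP };

      struct job_state {
          enum async_job active;    /* which async job is running, if any */
          int migration_status;     /* last status reported by qemu */
      };

      /* Old behavior: only update for JOB_MIGRATION_OUT, so a save job
       * (which also migrates, to a file) never saw the status change and
       * later checks such as "save job: is not active" failed.
       * New behavior: update for any active async job. */
      static void handle_migration_status(struct job_state *job, int status)
      {
          if (job->active == JOB_NONE)
              return;                   /* spurious event, nothing to track */
          job->migration_status = status;
      }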
  14. 10 Jul 2015 (14 commits)