1. 26 Feb 2015, 1 commit
• network: only clear bandwidth if it has been set · 118b2408
Laine Stump authored
      libvirt was unconditionally calling virNetDevBandwidthClear() for
      every interface (and network bridge) of a type that supported
      bandwidth, whether it actually had anything set or not. This doesn't
      hurt anything (unless ifname == NULL!), but is wasteful.
      
This patch makes sure that all calls to virNetDevBandwidthClear() are
qualified by checking that the interface really had some bandwidth
setup done, and adds a check for a null ifname inside
virNetDevBandwidthClear(), silently returning success if it is null.
The ATTRIBUTE_NONNULL annotation is also removed from that function's
prototype, since we can't guarantee that ifname is never null;
e.g. a type='ethernet' interface sometimes has no ifname, as it is
provided on the fly by qemu.
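As a rough illustration (a sketch, not the literal patch), the guard
inside virNetDevBandwidthClear() looks something like this:

  int
  virNetDevBandwidthClear(const char *ifname)
  {
      /* A type='ethernet' interface may legitimately have no ifname
       * (qemu provides one on the fly), so treat NULL as a no-op. */
      if (!ifname)
          return 0;

      /* ... proceed to tear down the tc qdisc rules for ifname ... */
      return 0;
  }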
2. 25 Feb 2015, 3 commits
• Fix typos in messages · 8a833d1e
Yuri Chornoivan authored
Signed-off-by: Ján Tomko <jtomko@redhat.com>
• Assign default SCSI controller model before checking attribute validity · 52a166f4
Ján Tomko authored
If the qemu binary on x86 does not support the lsi SCSI controller
but does support virtio-scsi, we reject the virtio-specific
attributes for no reason.
      
      Move the default controller assignment before the check.
      
      https://bugzilla.redhat.com/show_bug.cgi?id=1168849
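A minimal sketch of the reordering (field names are real, the helper
name is hypothetical, and the surrounding logic is abbreviated):

  /* Pick the default model first ... */
  if (def->model == -1 &&
      qemuSetDefaultScsiModel(domainDef, qemuCaps, &def->model) < 0)  /* hypothetical helper */
      return -1;

  /* ... so this validity check sees a real model, not -1. */
  if (def->model != VIR_DOMAIN_CONTROLLER_MODEL_SCSI_VIRTIO_SCSI &&
      (def->queues || def->cmd_per_lun || def->max_sectors)) {
      virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                     _("virtio-scsi specific options are not supported "
                       "by this controller model"));
      return -1;
  }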
• qemu: Use correct flags for ABI stability check in SaveImageUpdateDef · cf2d4c60
Michal Privoznik authored
      https://bugzilla.redhat.com/show_bug.cgi?id=1183869
      
So, you've successfully started yourself a domain. And since you want
to use it on your host exclusively, you are confident enough to pass
through the host CPU model, like this:
      
        <cpu mode='host-passthrough'/>
      
      Then, after a while, you want to save the domain into a file (e.g.
virsh save dom dom.save). And here comes the trouble. The file consists
of two parts: a libvirt header (containing the domain XML among other
things) and the qemu migration data. Now, the domain XML in the header is
      formatted using special flags (VIR_DOMAIN_XML_SECURE |
      VIR_DOMAIN_XML_UPDATE_CPU | VIR_DOMAIN_XML_INACTIVE |
      VIR_DOMAIN_XML_MIGRATABLE).
      
      Then, on your way back from the bar, you think of changing something
in the XML in the saved file (we have a command for that, after all),
say the listen address for the graphics console. So you successfully type in the
      command:
      
        virsh save-image-edit dom.save
      
Change all the bits and exit the editor. But instead of success,
you're left with a sad error message:
      
        error: unsupported configuration: Target CPU model <null> does not
        match source Pentium Pro
      
Sigh. Digging into the code, you see the lines where we check for ABI
stability. The new XML you've produced is compared with the old one
from the saved file to see whether the qemu ABI will break or not.
Wait, what? We were using different flags to parse the XML you
provided, so we were just lucky it worked in some cases? Yep, that's right.
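Conceptually, the fix is to parse the user-edited XML with the same
flag set that was used to format the XML stored in the header, so the
ABI-stability comparison sees two equivalently-parsed definitions. A
sketch, not the literal diff:

  /* One flag set, used both when the header XML is formatted and
   * when the edited XML is parsed back for the ABI check. */
  unsigned int xml_flags = VIR_DOMAIN_XML_SECURE |
                           VIR_DOMAIN_XML_UPDATE_CPU |
                           VIR_DOMAIN_XML_INACTIVE |
                           VIR_DOMAIN_XML_MIGRATABLE;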
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
3. 24 Feb 2015, 3 commits
4. 22 Feb 2015, 1 commit
5. 21 Feb 2015, 11 commits
6. 20 Feb 2015, 2 commits
7. 19 Feb 2015, 4 commits
• qemuMigrationDriveMirror: Listen to events · 80c5f10e
Michal Privoznik authored
      https://bugzilla.redhat.com/show_bug.cgi?id=1179678
      
When migrating with storage, libvirt iterates over the domain's disks
and instructs qemu to migrate the ones we are interested in (shared,
RO and source-less disks are skipped). The disks are migrated in
series; no new disk transfer starts until the previous one has been
quiesced. This is checked on the qemu monitor via the
'query-block-jobs' command. Once a disk has been quiesced, it has
effectively moved from copying its content to the mirroring state,
where all disk writes are mirrored to the other side of the migration
too. That said, there's one inherent flaw in the design: the monitor
command we use reports only active jobs, so if a job fails for
whatever reason, it simply disappears from the command output. And
this can happen fairly easily: just try to migrate a domain with
storage. If the storage migration fails (e.g. due to ENOSPC on the
destination), we resume the domain on the destination and let it run
on a partly copied disk.
      
The proper fix is what even the comment in the code says: listen for
qemu events instead of polling. When the storage migration changes
state, an event is emitted and we can act accordingly: either
consider the disk copied and continue the process, or consider the
disk mangled and abort the migration.
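In outline, the event-driven wait replaces the polling loop with
something like this (the wait helper is hypothetical; the mirror-state
enum values are libvirt's real ones):

  /* Block until the BLOCK_JOB_* event handler updates the disk's
   * mirror state, instead of re-querying the monitor in a loop. */
  while (disk->mirrorState == VIR_DOMAIN_DISK_MIRROR_STATE_NONE) {
      if (qemuMigrationWaitForBlockJobEvent(vm) < 0)  /* hypothetical: sleeps on the domain condition */
          goto error;
  }

  if (disk->mirrorState == VIR_DOMAIN_DISK_MIRROR_STATE_ABORT)
      goto error;   /* the job failed: abort the whole migration */
  /* else VIR_DOMAIN_DISK_MIRROR_STATE_READY: the disk is now mirroring */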
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
• qemuProcessHandleBlockJob: Take status into account · 76c61cdc
Michal Privoznik authored
Upon BLOCK_JOB_COMPLETED event delivery, we check whether the job has
completed (in qemuMonitorJSONHandleBlockJobImpl()). To paint a better
picture, the event looks something like this:
      
      "timestamp": {"seconds": 1423582694, "microseconds": 372666}, "event":
      "BLOCK_JOB_COMPLETED", "data": {"device": "drive-virtio-disk0", "len":
      8412790784, "offset": 409993216, "speed": 8796093022207, "type":
      "mirror", "error": "No space left on device"}}
      
      If "len" does not equal "offset" it's considered an error, and we can
      clearly see "error" field filled in. However, later in the event
      processing this case was handled no differently to case of job being
      aborted via separate API. It's time that we start differentiate these
      two because of the future work.
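In other words, success can be decided directly from the event
payload; roughly (the status values are libvirt's public block-job
status codes):

  /* len == offset means the whole image was copied; anything else,
   * typically accompanied by an "error" string, is a failure. */
  int status = (len == offset) ? VIR_DOMAIN_BLOCK_JOB_COMPLETED
                               : VIR_DOMAIN_BLOCK_JOB_FAILED;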
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
• qemuProcessHandleBlockJob: Set disk->mirrorState more often · c37943a0
Michal Privoznik authored
Currently, upon a BLOCK_JOB_* event, disk->mirrorState is not updated
every time. The callback code handling the events checks whether the
blockjob was started via our public APIs before setting mirrorState.
However, some block jobs may be started internally (e.g. during
storage migration), in which case we don't bother setting
disk->mirror (there's nothing we could set it to anyway) or the other
fields. But it will come in handy if we update mirrorState in these
cases too; the event wasn't delivered just for fun, we started the
job after all.
      
So, in this commit, mirrorState is set to whatever job status we've
obtained. Of course, there are actions we want to perform on some of
the statuses. But instead of an if {} else if {} else {} enumeration,
let's move to a switch().
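A sketch of the resulting switch (abbreviated; the real handler does
more per case):

  switch ((virConnectDomainEventBlockJobStatus) status) {
  case VIR_DOMAIN_BLOCK_JOB_READY:
      disk->mirrorState = VIR_DOMAIN_DISK_MIRROR_STATE_READY;
      break;
  case VIR_DOMAIN_BLOCK_JOB_FAILED:
  case VIR_DOMAIN_BLOCK_JOB_CANCELED:
      disk->mirrorState = VIR_DOMAIN_DISK_MIRROR_STATE_ABORT;
      break;
  case VIR_DOMAIN_BLOCK_JOB_COMPLETED:
      /* nothing left to mirror */
      disk->mirrorState = VIR_DOMAIN_DISK_MIRROR_STATE_NONE;
      break;
  }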
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
• qemu: Exit job on error path of qemuDomainSetVcpusFlags() · 0df2f040
Peter Krempa authored
Commit e105dc98 moved some code but didn't adjust the jump labels
accordingly, so on the error path the job was never terminated.
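The pattern being restored, sketched (the failing step is hypothetical):

  if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0)
      goto cleanup;

  if (do_the_work() < 0)    /* hypothetical step that can fail */
      goto endjob;          /* previously jumped to 'cleanup', leaving the job open */

 endjob:
  qemuDomainObjEndJob(driver, vm);

 cleanup:
  virObjectUnlock(vm);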
8. 17 Feb 2015, 5 commits
9. 13 Feb 2015, 2 commits
10. 12 Feb 2015, 3 commits
• qemu: do upfront check for vcpupids being null when querying pinning · 9358b63a
Daniel P. Berrange authored
The qemuDomainHelperGetVcpus function attempted to report an error
when the vcpupids info was NULL. Unfortunately, earlier code would
clamp the value of 'maxinfo' to 0 when nvcpupids was 0, so the error
reporting would end up being skipped.

This led to 'virsh vcpuinfo <dom>' just returning an empty list
instead of giving the user a clear error.
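The fix is to do the NULL check before the clamping; roughly:

  /* Report the missing vcpu pids before 'maxinfo' can be clamped
   * to 0, which would silently skip the error path. */
  if (priv->vcpupids == NULL) {
      virReportError(VIR_ERR_OPERATION_INVALID, "%s",
                     _("cpu affinity is not supported"));
      return -1;
  }

  if (maxinfo > priv->nvcpupids)
      maxinfo = priv->nvcpupids;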
• qemu: fix setting of VM CPU affinity with TCG · a103bb10
Daniel P. Berrange authored
In a previous commit I fixed the incorrect handling of vcpu pids
for TCG mode QEMU:
      
        commit b07f3d82
        Author: Daniel P. Berrange <berrange@redhat.com>
        Date:   Thu Dec 18 16:34:39 2014 +0000
      
          Don't setup fake CPU pids for old QEMU
      
          The code assumes that def->vcpus == nvcpupids, so when we setup
          fake CPU pids for old QEMU with nvcpupids == 1, we cause the
          later code to read off the end of the array. This has fun results
like sched_setaffinity(0, ...) which changes libvirtd's own CPU
          affinity, or even better sched_setaffinity($RANDOM, ...) which
          changes the affinity of a random OS process.
      
      The intent was that this would merely disable the ability to set
      per-vCPU affinity. It should still have been possible to set VM
      level host CPU affinity.
      
Unfortunately, when you set <vcpu cpuset='0-1'>4</vcpu>, the XML
parser will internally take this and initialize an entry in the
def->cputune.vcpupin array for every VCPU. In other words, this is
implicitly treated as
      
        <cputune>
          <vcpupin cpuset='0-1' vcpu='0'/>
          <vcpupin cpuset='0-1' vcpu='1'/>
          <vcpupin cpuset='0-1' vcpu='2'/>
          <vcpupin cpuset='0-1' vcpu='3'/>
        </cputune>
      
      Even more fun, the faked cputune elements are hidden from view when
      querying the live XML, because their cpuset mask is the same as the
      VM default cpumask.
      
      The upshot was that it was impossible to set VM level CPU affinity.
      
      To fix this we must update qemuProcessSetVcpuAffinities so that it
      only reports a fatal error if the per-VCPU cpu mask is different
      from the VM level cpu mask.
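Sketched with simplified variable names (virBitmapEqual is libvirt's
real bitmap-comparison helper):

  /* No per-vCPU pids: individual vCPUs can't be pinned, but an
   * implicit pin that matches the VM-level mask needs no action. */
  if (priv->vcpupids == NULL) {
      if (virBitmapEqual(def->cpumask, vcpupin->cpumask))
          continue;   /* same mask as the VM default: nothing to do */
      virReportError(VIR_ERR_OPERATION_INVALID, "%s",
                     _("cpu affinity is not supported"));
      return -1;
  }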
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
11. 11 Feb 2015, 2 commits
12. 10 Feb 2015, 3 commits