1. 06 Dec 2016, 16 commits
  2. 05 Dec 2016, 5 commits
  3. 02 Dec 2016, 1 commit
  4. 01 Dec 2016, 6 commits
    • qemuDomainAttachNetDevice: pass mq and vectors for vhost-user with multiqueue · f81b33b5
      Committed by gaohaifeng
      Two reasons:
      1. In the non-hotplug (cold-plug) case we already pass them, as can be
         seen in libvirt's qemuBuildVhostuserCommandLine.
      2. QEMU uses this vector count to initialize the MSI-X table. If we
         don't pass it, QEMU falls back to its default, which limits the
         guest to the default number of interrupt vectors (see the sketch at
         the end of this entry).
      Signed-off-by: gaohaifeng <gaohaifeng.gao@huawei.com>
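      A minimal sketch (not libvirt's actual qemuBuildVhostuserCommandLine
      code) of how the multiqueue properties could be formatted, assuming the
      usual QEMU sizing of vectors = 2 * queues + 2 (one TX and one RX vector
      per queue plus config and control vectors):

        #include <stdio.h>
        #include <string.h>

        /* Hypothetical helper: append ",mq=on,vectors=N" to a -device
         * virtio-net argument string when more than one queue is used. */
        static void append_mq_props(char *buf, size_t buflen, unsigned int queues)
        {
            size_t used = strlen(buf);

            if (queues > 1)
                snprintf(buf + used, buflen - used,
                         ",mq=on,vectors=%u", 2 * queues + 2);
        }

        int main(void)
        {
            char dev[128] = "virtio-net-pci,netdev=hostnet0,id=net0";

            append_mq_props(dev, sizeof(dev), 4);
            printf("-device %s\n", dev);   /* ...,mq=on,vectors=10 */
            return 0;
        }
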
    • qemu: Prevent detaching SCSI controller used by hostdev · 655429a0
      Committed by Eric Farman
      Consider the following XML snippets:
      
        $ cat scsicontroller.xml
            <controller type='scsi' model='virtio-scsi' index='0'/>
        $ cat scsihostdev.xml
            <hostdev mode='subsystem' type='scsi'>
              <source>
                <adapter name='scsi_host0'/>
                <address bus='0' target='8' unit='1074151456'/>
              </source>
            </hostdev>
      
      If we create a guest that includes the contents of scsihostdev.xml,
      but forget the virtio-scsi controller described in scsicontroller.xml,
      one is silently created for us.  The same holds true when attaching
      a hostdev before the matching virtio-scsi controller.
      (See qemuDomainFindOrCreateSCSIDiskController for context.)
      
      Detaching the hostdev, followed by the controller, works well and the
      guest behaves appropriately.
      
      If we detach the virtio-scsi controller device first, any associated
      hostdevs are detached for us by the underlying virtio-scsi code (this
      is fine, since the connection is broken).  But all is not well: from
      this point on the guest can no longer receive new virtio-scsi devices
      (the attach commands succeed, but the devices never appear within the
      guest), nor can it even be shut down.
      
      While this is not libvirt's problem, we can prevent falling into this
      scenario by checking whether a controller is still being used by any
      hostdev devices (a simplified sketch follows at the end of this entry).
      The same check is already done for disk elements today.
      
      Applying this patch and then using the XML snippets from earlier:
      
        $ virsh detach-device guest_01 scsicontroller.xml
        error: Failed to detach device from scsicontroller.xml
        error: operation failed: device cannot be detached: device is busy
      
        $ virsh detach-device guest_01 scsihostdev.xml
        Device detached successfully
      
        $ virsh detach-device guest_01 scsicontroller.xml
        Device detached successfully
      Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
      Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
      Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
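      A simplified, standalone sketch of the kind of check this patch adds;
      the struct layout and function name below are stand-ins, not libvirt's
      real types:

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdio.h>

        /* Stand-in types: libvirt's real definitions are far richer. */
        struct hostdev {
            bool is_scsi;
            unsigned int controller_idx;   /* SCSI controller it hangs off */
        };

        struct domain {
            struct hostdev *hostdevs;
            size_t nhostdevs;
        };

        /* Refuse to detach a SCSI controller while any SCSI hostdev still
         * references it, mirroring the existing check for <disk> elements. */
        static bool
        scsi_controller_in_use(const struct domain *dom, unsigned int controller_idx)
        {
            for (size_t i = 0; i < dom->nhostdevs; i++) {
                if (dom->hostdevs[i].is_scsi &&
                    dom->hostdevs[i].controller_idx == controller_idx)
                    return true;
            }
            return false;
        }

        int main(void)
        {
            struct hostdev devs[] = { { true, 0 } };
            struct domain dom = { devs, 1 };

            if (scsi_controller_in_use(&dom, 0))
                printf("device cannot be detached: device is busy\n");
            return 0;
        }
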
    • qemu: assign VFIO devices to PCIe addresses when appropriate · 70249927
      Committed by Laine Stump
      Although nearly all host devices that are assigned to guests using
      VFIO ("<hostdev>" devices in libvirt) are physically PCI Express
      devices, until now libvirt's PCI address assignment has always
      assigned them addresses on legacy PCI controllers in the guest, even
      if the guest's machine type has a PCIe root bus (e.g. q35 and
      aarch64/virt).
      
      This patch tries to assign them to an address on a PCIe controller
      instead, when appropriate. First we do some preliminary checks that
      might allow setting the flags without doing any extra work, and if
      those conditions aren't met (and if libvirt is running privileged so
      that it has proper permissions), we perform the (relatively)
      time-consuming task of reading the device's PCI config to see if it is
      an Express device (a standalone sketch of such a check follows at the
      end of this entry). If this is successful, the connect flags are set based
      on the result, but if we aren't able to read the PCI config (most
      likely due to the device not being present on the system at the time
      of the check) we assume it is (or will be) an Express device, since
      that is almost always the case anyway.
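      A standalone sketch (independent of libvirt's virPCIDeviceIsPCIExpress)
      of how reading a device's sysfs config file can reveal whether it is an
      Express device, by walking the capability list for the PCI Express
      capability ID (0x10); the sysfs path in main() is only an example:

        #include <stdio.h>
        #include <stdint.h>

        #define PCI_STATUS           0x06
        #define PCI_STATUS_CAP_LIST  0x10
        #define PCI_CAPABILITY_LIST  0x34
        #define PCI_CAP_ID_EXP       0x10

        /* 1 = Express, 0 = legacy PCI, -1 = cannot tell (e.g. unprivileged
         * readers only see the first 64 bytes of config space). */
        static int is_pci_express(const char *config_path)
        {
            uint8_t cfg[4096];
            size_t n;
            FILE *f = fopen(config_path, "rb");

            if (!f)
                return -1;
            n = fread(cfg, 1, sizeof(cfg), f);
            fclose(f);

            if (n < 64)
                return -1;
            if (!(cfg[PCI_STATUS] & PCI_STATUS_CAP_LIST))
                return 0;                    /* no capability list: legacy PCI */

            uint8_t pos = cfg[PCI_CAPABILITY_LIST];
            for (int hops = 0; pos && hops < 48; hops++) {
                if ((size_t)pos + 1 >= n)
                    return -1;               /* capabilities lie past what we read */
                if (cfg[pos] == PCI_CAP_ID_EXP)
                    return 1;
                pos = cfg[pos + 1];
            }
            return 0;
        }

        int main(void)
        {
            int r = is_pci_express("/sys/bus/pci/devices/0000:00:02.0/config");

            printf("express check result: %d\n", r);
            return 0;
        }
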
    • qemu: propagate virQEMUDriver object to qemuDomainDeviceCalculatePCIConnectFlags · 9b0848d5
      Committed by Laine Stump
      If libvirtd is running unprivileged, it can open a device's PCI config
      data in sysfs, but can only read the first 64 bytes. But as part of
      determining whether a device is Express or legacy PCI,
      qemuDomainDeviceCalculatePCIConnectFlags() will be updated in a future
      patch to call virPCIDeviceIsPCIExpress(), which tries to read beyond
      the first 64 bytes of the PCI config data and fails with an error log
      if the read is unsuccessful.
      
      In order to avoid creating a parallel "quiet" version of
      virPCIDeviceIsPCIExpress(), this patch passes a virQEMUDriverPtr down
      through all the call chains that initialize the
      qemuDomainFillDevicePCIConnectFlagsIterData, and saves the driver
      pointer with the rest of the iterdata so that it can be used by
      qemuDomainDeviceCalculatePCIConnectFlags(). This pointer isn't used
      yet, but will be used in an upcoming patch (that detects Express vs
      legacy PCI for VFIO assigned devices) to examine driver->privileged.
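      A generic sketch of the pattern this commit sets up: the driver pointer
      is bundled into the iterator's opaque data so a nested callback can
      reach it later; all names here are illustrative, not libvirt's:

        #include <stdbool.h>
        #include <stdio.h>

        struct driver {                    /* stand-in for the QEMU driver object */
            bool privileged;
        };

        struct pci_flags_iter_data {       /* stand-in for the iterdata struct */
            struct driver *driver;         /* the newly carried pointer */
            unsigned int flags;
        };

        /* Per-device callback; it can now consult driver->privileged. */
        static void calc_connect_flags(const char *devname, void *opaque)
        {
            struct pci_flags_iter_data *data = opaque;

            if (data->driver->privileged)
                printf("%s: may read the full PCI config\n", devname);
            else
                printf("%s: limited to the first 64 bytes of config\n", devname);
        }

        int main(void)
        {
            struct driver drv = { .privileged = false };
            struct pci_flags_iter_data data = { .driver = &drv, .flags = 0 };

            calc_connect_flags("hostdev0", &data);
            return 0;
        }
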
    • util: new function virPCIDeviceGetConfigPath() · bfdc1451
      Committed by Laine Stump
      The path to the config file for a PCI device is conveniently stored
      in a virPCIDevice object, but that object's contents aren't directly
      visible outside of virpci.c, so we need an accessor function for anyone
      who needs to look at it (the accessor pattern is sketched below).
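      A minimal sketch of that accessor pattern; the names and the single
      stored field are illustrative, and in real code the struct body would
      stay private to one source file:

        #include <stdio.h>

        struct pci_device {                /* would be opaque outside its .c file */
            char config_path[256];
        };

        /* Accessor so other code can read the path without seeing the struct. */
        const char *
        pci_device_get_config_path(const struct pci_device *dev)
        {
            return dev->config_path;
        }

        int main(void)
        {
            struct pci_device dev = { "/sys/bus/pci/devices/0000:00:02.0/config" };

            printf("%s\n", pci_device_get_config_path(&dev));
            return 0;
        }
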
    • util: new function virFileLength() · e026563f
      Committed by Laine Stump
      This new function just calls fstat() (if provided with a valid fd) or
      stat() (if fd is -1) and returns st_size (or -1 if there is an
      error). We may later decide to make this function more elaborate and
      handle things like block devices; for now it is a working placeholder
      for any more complicated implementation (see the sketch below).
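      A standalone sketch of such a helper, assuming the fstat()/stat()
      behaviour described above (the name and return type are not libvirt's
      exact signature):

        #include <stdio.h>
        #include <sys/stat.h>

        /* Size of the file in bytes; uses fstat() when fd is valid,
         * stat() on the path when fd is -1, and returns -1 on error. */
        static long long file_length(const char *path, int fd)
        {
            struct stat sb;

            if (fd >= 0) {
                if (fstat(fd, &sb) < 0)
                    return -1;
            } else {
                if (stat(path, &sb) < 0)
                    return -1;
            }
            return (long long)sb.st_size;
        }

        int main(void)
        {
            printf("%lld\n", file_length("/etc/hostname", -1));  /* example path */
            return 0;
        }
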
  5. 30 Nov 2016, 4 commits
  6. 29 Nov 2016, 3 commits
  7. 28 Nov 2016, 3 commits
    • storage_backend_rbd: check the return value of rados_conf_set · 17879605
      Committed by Chen Hanxiao
      We call rados_conf_set and check its return value in many places.
      Introduce a helper, virStorageBackendRBDRADOSConfSet, to do that work
      in one spot (a sketch of such a wrapper follows below).
      Signed-off-by: Chen Hanxiao <chenhanxiao@gmail.com>
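      A hedged sketch of what such a wrapper might look like; librados'
      rados_conf_set() returns 0 on success and a negative value on error, and
      the helper name, option name and error handling below are illustrative
      rather than the patch itself:

        #include <stdio.h>
        #include <rados/librados.h>

        /* Set one RADOS option and report failures in a single place. */
        static int
        rbd_conf_set(rados_t cluster, const char *option, const char *value)
        {
            int r = rados_conf_set(cluster, option, value);

            if (r < 0)
                fprintf(stderr,
                        "failed to set RADOS option '%s' to '%s' (error %d)\n",
                        option, value, r);
            return r;
        }

        int main(void)
        {
            rados_t cluster;

            if (rados_create(&cluster, NULL) < 0)
                return 1;
            int rc = rbd_conf_set(cluster, "client_mount_timeout", "30");
            rados_shutdown(cluster);
            return rc < 0 ? 1 : 0;
        }
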
    • qemu: capabilities: Don't partially reprobe caps on process reconnect · b87a1134
      Committed by Peter Krempa
      Thanks to the complex capability caching code virQEMUCapsProbeQMP was
      never called when we were starting a new qemu VM. On the other hand,
      when we are reconnecting to the qemu process we reload the capability
      list from the status XML file. This means that the flag preventing the
      function being called was not set and thus we partially reprobed some of
      the capabilities.
      
      The recent addition of CPU hotplug clears the
      QEMU_CAPS_QUERY_HOTPLUGGABLE_CPUS if the machine does not support it.
      The partial re-probe on reconnect results in attempting to call the
      unsupported command, which then kills the VM.
      
      Remove the partial reprobe and depend on the stored capabilities. If
      it becomes necessary to reprobe the capabilities in the future, we
      should do a full reprobe rather than this partial one.
    • qemu: Add support for unavailable-features · a1adfb0f
      Committed by Jiri Denemark
      QEMU 2.8.0 adds support for unavailable-features in the
      query-cpu-definitions reply. The unavailable-features array lists the
      CPU features which prevent a corresponding CPU model from being usable
      on the current host. Such a model can only be used when all of its
      unavailable features are disabled. An empty array means the CPU model
      can be used without modifications.
      
      We can use unavailable-features for providing CPU model usability info
      in domain capabilities XML:
      
          <domainCapabilities>
            ...
            <cpu>
              <mode name='host-passthrough' supported='yes'/>
              <mode name='host-model' supported='yes'>
                <model fallback='allow'>Skylake-Client</model>
                ...
              </mode>
              <mode name='custom' supported='yes'>
                <model usable='yes'>qemu64</model>
                <model usable='yes'>qemu32</model>
                <model usable='no'>phenom</model>
                <model usable='yes'>pentium3</model>
                <model usable='yes'>pentium2</model>
                <model usable='yes'>pentium</model>
                <model usable='yes'>n270</model>
                <model usable='yes'>kvm64</model>
                <model usable='yes'>kvm32</model>
                <model usable='yes'>coreduo</model>
                <model usable='yes'>core2duo</model>
                <model usable='no'>athlon</model>
                <model usable='yes'>Westmere</model>
                <model usable='yes'>Skylake-Client</model>
                ...
              </mode>
            </cpu>
            ...
          </domainCapabilities>
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
  8. 26 Nov 2016, 2 commits
    • qemu: Avoid reporting "host" as a supported CPU model · 73411a7f
      Committed by Jiri Denemark
      "host" CPU model is supported by a special host-passthrough CPU mode and
      users are not allowed to specify this model directly with the custom
      mode. Thus we should not advertise the "host" CPU model in domain
      capabilities. This worked well on architectures for which libvirt
      provides a list of supported CPU models in cpu_map.xml (since "host" is
      not in that list), but we need to explicitly filter the "host" model out
      for all other architectures.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    • qemu: Probe CPU models for KVM and TCG · 7bf6f345
      Committed by Jiri Denemark
      CPU models (and especially some additional details which we will start
      probing for later) differ depending on the accelerator. Thus we need to
      call query-cpu-definitions in both KVM and TCG mode to get all data we
      want.
      
      Tests in tests/domaincapstest.c are temporarily switched to TCG to avoid
      having to squash even more stuff into this single patch. They will all
      be switched back later in separate commits.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>