1. 26 November 2016, 4 commits
    • qemu: Probe CPU models for KVM and TCG · 7bf6f345
      Committed by Jiri Denemark
      CPU models (and especially some additional details which we will start
      probing for later) differ depending on the accelerator. Thus we need to
      call query-cpu-definitions in both KVM and TCG mode to get all data we
      want.
      
      Tests in tests/domaincapstest.c are temporarily switched to TCG to avoid
      having to squash even more stuff into this single patch. They will all
      be switched back later in separate commits.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    • qemu: Enable KVM when probing capabilities · 25ba9c31
      Committed by Jiri Denemark
      CPU-related capabilities may differ depending on the accelerator used
      when probing. Let's use KVM if available when probing QEMU and fall
      back to TCG. The created capabilities already contain all we need to
      distinguish whether KVM or TCG was used:
      
          - KVM was used when probing capabilities:
              QEMU_CAPS_KVM is set
              QEMU_CAPS_ENABLE_KVM is not set
      
          - TCG was used and QEMU supports KVM, but it failed (e.g., missing
            kernel module or wrong /dev/kvm permissions)
              QEMU_CAPS_KVM is not set
              QEMU_CAPS_ENABLE_KVM is set
      
          - KVM was not used and QEMU does not support it
              QEMU_CAPS_KVM is not set
              QEMU_CAPS_ENABLE_KVM is not set
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    • qemucapsprobe: Ignore all greetings except the first one · 73078906
      Committed by Jiri Denemark
      When starting QEMU more than once during a single probing process,
      the qemucapsprobe utility would save the QMP greeting several times,
      which doesn't play well with our test monitor.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    • qemu: Probe KVM state earlier · 429a7b23
      Committed by Jiri Denemark
      Let's set QEMU_CAPS_KVM and QEMU_CAPS_ENABLE_KVM early so that the rest
      of the probing code can use these capabilities to handle KVM/TCG replies
      differently.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
  2. 25 November 2016, 5 commits
  3. 24 November 2016, 2 commits
    • tests: eventtest: fix build on macOS · a4234291
      Committed by Roman Bogorodskiy
      macOS doesn't support clock_gettime(2), at least in versions prior to
      10.12 (10.12 itself was not actually checked). So use its native clock
      routines in eventtest instead.
      
       * configure.ac: check for the required symbols and define
         HAVE_MACH_CLOCK_ROUTINES if found
       * tests/eventtest.c: add a clock_get_time() based implementation
    • tests: eventtest: fix LDADD · 0d1c147f
      Committed by Roman Bogorodskiy
      Don't explicitly add -lrt to LDADD; use $(LIB_CLOCK_GETTIME) instead,
      because not all platforms provide clock_gettime(2) and librt.
  4. 22 November 2016, 3 commits
  5. 16 November 2016, 1 commit
    • bhyve: fix memory leaks in bhyvexml2argvtest · a6b81d55
      Committed by Roman Bogorodskiy
       * virNetDevTapCreateInBridgePort() mock: free '*ifname' before
         strdupping a hardcoded value to it
       * testCompareXMLToArgvFiles(): unref the 'conn' object in cleanup
       * testCompareXMLToArgvHelper(): free 'ldargs' and 'dmargs' in
         cleanup
  6. 15 November 2016, 19 commits
    • cpu: Avoid adding <vendor> to custom CPUs · 98b7c37d
      Committed by Jiri Denemark
      Guest CPU definitions with mode='custom' and a missing <vendor> are
      expected to run on a host CPU from any vendor as long as the required
      CPU model can be used as a guest CPU on the host. But even though no
      CPU vendor was explicitly requested, we would sometimes force one due
      to a bug in virCPUUpdate and virCPUTranslate.
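      
      For reference, a minimal sketch of such a guest CPU element with no
      <vendor> (the model name is only illustrative):
      
         <cpu mode='custom' match='exact'>
           <model fallback='allow'>Haswell-noTSX</model>
         </cpu>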
      
      The bug would effectively forbid cross-vendor migrations even if they
      were previously working just fine.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    • cpu: Introduce virCPUConvertLegacy API · d73422c1
      Committed by Jiri Denemark
      The PPC driver needs to convert legacy POWERx_v* CPU model names into
      POWERx to maintain backward compatibility with existing domains. This
      patch adds a new step to the guest CPU configuration workflow which
      CPU drivers can use to convert legacy CPU definitions.
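      
      As a rough illustration (the model name is only an example of the
      POWERx_v* naming scheme), a definition such as
      
         <cpu mode='custom' match='exact'>
           <model>POWER8_v2.0</model>
         </cpu>
      
      would now be treated as if the model were simply POWER8.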
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    • cputest: Don't test cpuGuestData · 509a4a40
      Committed by Jiri Denemark
      The API is no longer used anywhere else since it was replaced by a
      much saner workflow utilizing new APIs that work on virCPUDefPtr
      directly: virCPUCompare, virCPUUpdate, and virCPUTranslate.
      
      Not testing the new workflow caused some bugs to be hidden. This patch
      reveals them but doesn't attempt to fix them. To make sure all tests
      still pass after this patch, all affected test results are modified to
      pretend the tests succeeded. All of the bugs will be fixed in the
      following commits and the artificial modifications will be reverted.
      
      The following is the list of bugs in the new CPU model workflow:
      
      - a guest CPU with mode='custom' and a missing <vendor/> gets the
        vendor copied from the host's CPU (the vendor should only be copied
        to host-model CPUs):
          DO_TEST_UPDATE("x86", "host", "min", VIR_CPU_COMPARE_IDENTICAL)
          DO_TEST_UPDATE("x86", "host", "pentium3", VIR_CPU_COMPARE_IDENTICAL)
          DO_TEST_GUESTCPU("x86", "host-better", "pentium3", NULL, 0)
      
      - when a guest CPU with mode='custom' needs to be translated into
        another model because the original model is not supported by a
        hypervisor, the result will have its vendor set to the vendor of the
        original CPU model as specified in cpu_map.xml even if the original
        guest CPU XML didn't contain <vendor/>:
          DO_TEST_GUESTCPU("x86", "host", "guest", model486, 0)
          DO_TEST_GUESTCPU("x86", "host", "guest", models, 0)
          DO_TEST_GUESTCPU("x86", "host-Haswell-noTSX", "Haswell-noTSX",
                           haswell, 0)
      
      - legacy POWERx_v* model names are not recognized:
          DO_TEST_GUESTCPU("ppc64", "host", "guest-legacy", ppc_models, 0)
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    • cputest: Don't use preferred CPU model at all · adf44c7b
      Committed by Jiri Denemark
      Now that all tests pass NULL as the preferred model, we can just drop
      that test parameter.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    • cputest: Don't use superfluous preferred model · 63842776
      Committed by Jiri Denemark
      In some cases the preferred model doesn't really do anything, since
      the result remains the same even if it is removed.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    • cputest: Don't use preferred model with forbidden fallback · 797bdced
      Committed by Jiri Denemark
      Using a preferred model for guest CPUs with a forbidden fallback masks
      a bug in the code. It would just happily use another CPU model
      supported by the hypervisor even though that is explicitly forbidden
      in the CPU XML.
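      
      For reference, such a CPU definition looks roughly like this (the
      model name is only illustrative); with fallback='forbid' libvirt must
      fail rather than silently substitute a different model:
      
         <cpu mode='custom' match='exact'>
           <model fallback='forbid'>Haswell-noTSX</model>
         </cpu>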
      
      This patch temporarily changes the expected result to -2, which is
      used when the result XML file cannot be found (here the file is
      intentionally missing, since the tested API should have failed). The
      result will be switched back to -1 a few commits later when the
      original bug gets fixed.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    • cputest: Don't use unsupported preferred model · f60c5e4e
      Committed by Jiri Denemark
      Using a preferred CPU model which is not in the list of CPU models
      supported by a hypervisor does not make sense.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    • cputest: Don't use preferred model for minimum match CPUs · 43a94270
      Committed by Jiri Denemark
      Guest CPUs with match='minimum' should always be updated to match the
      host CPU model. Trying to get different results by supplying preferred
      models does not make sense.
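      
      For reference, a minimum-match guest CPU is defined roughly like this
      (the model name is only illustrative); it is always expanded to the
      host CPU model provided the host offers at least the listed model and
      features:
      
         <cpu match='minimum'>
           <model>Penryn</model>
         </cpu>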
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    • cpu: Rename cpuDataFormat · 53a5986a
      Committed by Jiri Denemark
      The new name is virCPUDataFormat.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    • cpu: Rename cpuDataParse · be57e689
      Committed by Jiri Denemark
      The new name is virCPUDataParse.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    • qemu: initially reserve one open pcie-root-port for hotplug · 70d15c9a
      Committed by Laine Stump
      For machinetypes with a pci-root bus (all legacy PCI), libvirt will
      make a "fake" reservation for one extra slot prior to assigning
      addresses to unaddressed PCI endpoint devices in the domain. This will
      trigger auto-adding of a pci-bridge for the final device to be
      assigned an address *if that device would otherwise have been the
      last device on the last available pci-bridge*; thus it assures that
      there will always be at least one slot left open in the domain's bus
      topology for expansion. This is important both for hotplug (since a
      new pci-bridge can't be added while the guest is running) and for
      offline additions to the config (since adding a new device might
      otherwise in some cases require re-addressing existing devices, which
      we want to avoid).
      
      It's important to note that for the above case (legacy PCI), we must
      check for the special case of all slots on all buses being occupied
      *prior to assigning any addresses*, and avoid attempting to reserve
      the extra address in that case, because there is no free address in
      the existing topology, so no place to auto-add a pci-bridge for
      expansion (i.e. it would always fail anyway). Since that condition can
      only be reached by manual intervention, this is acceptable.
      
      For machinetypes with pcie-root (Q35, aarch64 virt), libvirt's
      methodology for automatically expanding the bus topology is different
      - pcie-root-ports are plugged into slots (soon to be functions) of
      pcie-root as needed, and the new endpoint devices are assigned to the
      single slot in each pcie-root-port. This is done so that the devices
      are, by default, hotpluggable (the slots of pcie-root don't support
      hotplug, but the single slot of the pcie-root-port does). Since
      pcie-root-ports can only be plugged into pcie-root, and we don't
      auto-assign endpoint devices to the pcie-root slots, this means
      topology expansion doesn't compete with endpoint devices for slots, so
      we don't need to worry about checking for all "useful" slots being
      free *prior* to assigning addresses to new endpoint devices - as a
      matter of fact, if we attempt to reserve the open slots before the
      used slots, it can lead to errors.
      
      Instead this patch just reserves one slot for a "future potential"
      PCIe device after doing the assignment for actual devices, but only
      if the only PCI controller defined prior to starting address
      assignment was pcie-root, and only if we auto-added at least one PCI
      controller during address assignment. This assures two things:
      
      1) that reserving the open slots will only be done when the domain is
         initially defined, never at any time after, and
      
      2) that if the user understands enough about PCI controllers to be
         adding them manually, we don't mess up their plan by adding
         extras - if they know enough to add one pcie-root-port, or to
         manually assign addresses such that no pcie-root-ports are
         needed, they know enough to add extra pcie-root-ports if they want
         them (this could be called the "libguestfs clause", since
         libguestfs needs to be able to create domains with as few
         devices/controllers as possible).
      
      This is set to reserve a single free port for now, but could be
      increased in the future if public sentiment goes in that direction
      (it's easy to increase later, but essentially impossible to decrease).
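      
      As a hedged sketch (the index is just an example chosen during address
      assignment), the reservation amounts to one auto-added pcie-root-port
      that is left with nothing plugged into its slot:
      
         <controller type='pci' index='1' model='pcie-root-port'/>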
    • qemu: try to put ich9 sound device at 00:1B.0 · 8d873a5a
      Committed by Laine Stump
      Real Q35 hardware has an ICH9 chip that includes several integrated
      devices at particular addresses (see the file docs/q35-chipset.cfg in
      the qemu source). libvirt already attempts to put the first two sets
      of ich9 USB2 controllers it finds at 00:1D.* and 00:1A.* to match the
      real hardware. This patch does the same for the ich9 "HD audio"
      device.
      
      The main inspiration for this patch is that currently the *only*
      device in a reasonable "workstation" type virtual machine config that
      requires a legacy PCI slot is the audio device. Without this patch,
      the standard Q35 machine created by virt-manager will have a
      dmi-to-pci-bridge and a pci-bridge just for the sound device; with the
      patch (and if you change the sound device model from the default
      "ich6" to "ich9"), the machine definition constructed by virt-manager
      has absolutely no legacy PCI controllers - any legacy PCI devices
      (e.g. video and sound) are on pcie-root as integrated devices.
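      
      For reference, the intended placement corresponds to an address
      element like the following on the sound device (the address values
      follow the 00:1B.0 target; the rest is a typical definition):
      
         <sound model='ich9'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1b' function='0x0'/>
         </sound>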
    • qemu: add a USB3 controller to Q35 domains by default · d8bd8376
      Committed by Laine Stump
      Previously we added a set of EHCI+UHCI controllers to Q35 machines to
      mimic real hardware as closely as possible, but recent discussions
      have pointed out that the nec-usb-xhci (USB3) controller is much more
      virtualization-friendly (uses less CPU), so this patch switches the
      default for Q35 machinetypes to add an XHCI instead (if it's
      supported, which it of course *will* be).
      
      Since none of the existing test cases left out USB controllers in the
      input XML, a new Q35 test case was added which has *no* devices, so it
      ends up with only the defaults always put in by qemu, plus those added
      by libvirt.
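      
      In other words, a Q35 guest defined without any USB controller should
      now end up with something along these lines instead of the EHCI+UHCI
      set (the index is illustrative):
      
         <controller type='usb' index='0' model='nec-xhci'/>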
    • qemu: don't force-add a dmi-to-pci-bridge just on principle · 80723220
      Committed by Laine Stump
      Now that a dmi-to-pci-bridge is automatically added just as it's
      needed (when a pci-bridge is being added), we no longer have any need
      to force-add one to every single Q35 domain.
    • qemu: update tests to not assume dmi-to-pci-bridge is always added · 815b51d9
      Committed by Laine Stump
      A few of the qemu test cases assume that a dmi-to-pci-bridge will
      always be added at index 1, and so they omit it from the input data
      even though a pci-bridge is present at index 2, e.g.:
      
         <controller type='pci' index='0' model='pcie-root'/>
         <controller type='pci' index='2' model='pci-bridge'/>
      
      Support for this odd practice was discussed on libvir-list and we
      decided that the complex code required to keep it working was not
      worth the maintenance headache. So instead, this patch modifies the
      test cases to manually add a dmi-to-pci-bridge at index 1 (since an
      upcoming patch is going to eliminate the unconditional adding of a
      dmi-to-pci-bridge).
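      
      So the input above becomes something like the following, with the
      bridge spelled out explicitly:
      
         <controller type='pci' index='0' model='pcie-root'/>
         <controller type='pci' index='1' model='dmi-to-pci-bridge'/>
         <controller type='pci' index='2' model='pci-bridge'/>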
      
      Because the auto-add was placing the dmi-to-pci-bridge later in the
      list (even though it has a lower index), the test output is also
      updated to account for the new order (which puts the PCI controllers
      in index order).
    • qemu: auto-add pcie-root-port/dmi-to-pci-bridge controllers as needed · 0702f48e
      Committed by Laine Stump
      Previously libvirt would only add pci-bridge devices automatically
      when an address was requested for a device that required a legacy PCI
      slot and none was available. This patch expands that support to
      dmi-to-pci-bridge (which is needed in order to add a pci-bridge on a
      machine with a pcie-root), and pcie-root-port (which is needed to add
      a hotpluggable PCIe device). It does *not* automatically add
      pcie-switch-upstream-ports or pcie-switch-downstream-ports (and
      currently there are no plans for that).
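      
      As a rough illustration (indices are examples), a Q35 config that
      contains one legacy PCI endpoint device and one hotpluggable PCIe
      endpoint device should now end up with a controller chain like this
      auto-added:
      
         <controller type='pci' index='0' model='pcie-root'/>
         <controller type='pci' index='1' model='dmi-to-pci-bridge'/>
         <controller type='pci' index='2' model='pci-bridge'/>
         <controller type='pci' index='3' model='pcie-root-port'/>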
      
      Given the existing code to auto-add pci-bridge devices, automatically
      adding pcie-root-ports is fairly straightforward. The
      dmi-to-pci-bridge support is a bit tricky though, for a few reasons:
      
      1) Although the only reason to add a dmi-to-pci-bridge is so that
         there is a reasonable place to plug in a pci-bridge controller,
         most of the time it's not the presence of a pci-bridge *in the
         config* that triggers the requirement to add a dmi-to-pci-bridge.
         Rather, it is the presence of a legacy-PCI device in the config,
         which triggers auto-add of a pci-bridge, which triggers auto-add of
         a dmi-to-pci-bridge (this is handled in
         virDomainPCIAddressSetGrow() - if there's a request to add a
         pci-bridge we'll check if there is a suitable bus to plug it into;
         if not, we first add a dmi-to-pci-bridge).
      
      2) Once there is already a single dmi-to-pci-bridge on the system,
         there won't be a need for any more, even if it's full, as long as
         there is a pci-bridge with an open slot - you can also plug
         pci-bridges into existing pci-bridges. So we have to make sure we
         only add a dmi-to-pci-bridge when there are no dmi-to-pci-bridges
         *and* no pci-bridges in the config at all.
      
      3) Although it is strongly discouraged, it is legal for a pci-bridge
         to be directly plugged into pcie-root, and we don't want to
         auto-add a dmi-to-pci-bridge if there is already a pci-bridge
         that's been forced directly into pcie-root.
      
      Although libvirt will now automatically create a dmi-to-pci-bridge
      when it's needed, the code still remains for now that forces a
      dmi-to-pci-bridge on all domains with pcie-root (in
      qemuDomainDefAddDefaultDevices()). That will be removed in a future
      patch.
      
      For now, the pcie-root-ports are added one per slot, which is a bit
      wasteful and means it will fail after 31 total PCIe devices (30 if
      there are also some PCI devices), but helps keep the changeset down
      for this patch. A future patch will have 8 pcie-root-ports sharing the
      functions of a single slot.
    • qemu: assign nec-xhci (USB3) controller to a PCIe address when appropriate · 5266426b
      Committed by Laine Stump
      The nec-usb-xhci device (which is a USB3 controller) has always
      presented itself as a PCI device when plugged into a legacy PCI slot,
      and a PCIe device when plugged into a PCIe slot, but libvirt has
      always auto-assigned it to a legacy PCI slot.
      
      This patch changes that behavior to auto-assign to a PCIe slot on
      systems that have pcie-root (e.g. Q35 and aarch64/virt).
      
      Since we don't yet auto-create pcie-*-port controllers on demand, this
      means a config with an nec-xhci USB controller that has no PCI address
      assigned will also need to have an otherwise-unused pcie-*-port
      controller specified:
      
         <controller type='pci' model='pcie-root-port'/>
         <controller type='usb' model='nec-xhci'/>
      
      (this assumes there is an otherwise-unused slot on pcie-root to accept
      the pcie-root-port)
    • qemu: assign e1000e network devices to PCIe slots when appropriate · 9dfe733e
      Committed by Laine Stump
      The e1000e is an emulated network device based on the Intel 82574,
      present in qemu 2.7.0 and later. Among other differences from the
      e1000, it presents itself as a PCIe device rather than legacy PCI. In
      order to get it assigned to a PCIe controller, this patch updates the
      flags setting for network devices when the model name is "e1000e".
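      
      For reference, selecting the model in the interface definition is all
      that is needed for this to kick in (the source network here is just a
      placeholder):
      
         <interface type='network'>
           <source network='default'/>
           <model type='e1000e'/>
         </interface>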
      
      (Note that for some reason libvirt has never validated the network
      device model names other than to check that there are no dangerous
      characters in them. That should probably change, but is the subject of
      another patch.)
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1343094
    • qemu: assign virtio devices to PCIe slot when appropriate · c7fc151e
      Committed by Laine Stump
      libvirt previously assigned nearly all devices to a "hotpluggable"
      legacy PCI slot even on machines with a PCIe root bus (and even though
      most such machines don't even support hotplug on legacy PCI slots!).
      Forcing all devices onto legacy PCI slots means that the domain will
      need a dmi-to-pci-bridge (to convert from PCIe to legacy PCI) and a
      pci-bridge (to provide hotpluggable legacy PCI slots which, again,
      usually aren't hotpluggable anyway).
      
      To help reduce the need for these legacy controllers, this patch tries
      to assign virtio-1.0-capable devices to PCIe slots whenever possible,
      by setting appropriate connectFlags in
      virDomainCalculateDevicePCIConnectFlags(). Happily, when that function
      was written (just a few commits ago) it was created with a
      "virtioFlags" argument, set by both of its callers, which is the
      proper connectFlags to set for any virtio-*-pci device: depending on
      the arch/machinetype of the domain, and whether or not the qemu binary
      supports virtio-1.0, that flag will have been set to either PCI or
      PCIe. This patch merely enables the functionality by setting the flags
      for the device to whatever is in virtioFlags if the device is a
      virtio-*-pci device.
      
      NB: the first virtio video device will be placed directly on bus 0
      slot 1 rather than on a pcie-root-port due to the override for primary
      video devices in qemuDomainValidateDevicePCISlotsQ35(). Whether or not
      to change that is a topic of discussion, but this patch doesn't change
      that particular behavior.
      
      NB2: since the slot must be hotpluggable, and pcie-root (the PCIe root
      complex) does *not* support hotplug, this means that suitable
      controllers must also be in the config (i.e. either pcie-root-port or
      pcie-switch-downstream-port). For now, libvirt doesn't add those
      automatically, so if you put virtio devices in a config for a qemu
      that has PCIe-capable virtio devices, you'll need to add extra
      pcie-root-ports yourself. That requirement will be eliminated in a
      future patch, but for now, it's simple to do this:
      
         <controller type='pci' model='pcie-root-port'/>
         <controller type='pci' model='pcie-root-port'/>
         <controller type='pci' model='pcie-root-port'/>
         ...
      
      Partially Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1330024
  7. 11 November 2016, 3 commits
  8. 10 November 2016, 1 commit
  9. 09 November 2016, 2 commits