1. 11 January 2017, 4 commits
    • conf: start search for next unused PCI address at same slot as previous find · 66e0b08d
      Laine Stump authored
      There is a very slight time advantage to beginning the search for the
      next unused PCI address at the slot *after* the previous find (which
      is now used), but if we do that, we will miss allocating the other
      functions of the same slot (when we implement a
      VIR_PCI_CONNECT_AGGREGATE_SLOT flag to support that).
    • conf: eliminate repetitive code in virDomainPCIAddressGetNextSlot() · 99bf66f3
      Laine Stump authored
      virDomainPCIAddressGetNextSlot() starts searching from the last
      allocated address and goes to the end of all the buses, then goes back
      to the first bus and searches from there up to the starting point (in
      case any address has been freed since the last time an address was
      allocated). The two loops are almost, but not exactly, the same, so
      they had remained as separate loops with the same code inside each. To
      lessen maintenance headaches, the identical code has been moved out
      into the function virDomainPCIAddressFindUnusedFunctionOnBus(), which
      is called in place of the loop contents.
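      A minimal C sketch of that two-pass structure (illustrative only; the
      types and the signature of the stand-in for
      virDomainPCIAddressFindUnusedFunctionOnBus() are assumptions, not
      libvirt's actual code):

        #include <stdbool.h>
        #include <stddef.h>

        #define EX_BUSES 8
        #define EX_SLOTS 32
        #define EX_FUNCS 8

        typedef struct { size_t bus; unsigned int slot, function; } ExAddr;
        typedef struct {
            size_t nbuses;                           /* <= EX_BUSES */
            unsigned char inUse[EX_BUSES][EX_SLOTS]; /* per-slot function bitmap */
        } ExAddrSet;

        /* Stand-in for virDomainPCIAddressFindUnusedFunctionOnBus():
         * scan a single bus for a free function. */
        static void
        exFindUnusedFunctionOnBus(const ExAddrSet *s, size_t bus,
                                  ExAddr *addr, bool *found)
        {
            for (unsigned int slot = 0; slot < EX_SLOTS && !*found; slot++) {
                for (unsigned int fn = 0; fn < EX_FUNCS; fn++) {
                    if (!(s->inUse[bus][slot] & (1u << fn))) {
                        addr->bus = bus;
                        addr->slot = slot;
                        addr->function = fn;
                        *found = true;
                        break;
                    }
                }
            }
        }

        /* Both passes now share the helper instead of duplicating its body. */
        static int
        exGetNextAddress(const ExAddrSet *s, const ExAddr *last, ExAddr *next)
        {
            bool found = false;
            size_t i;

            /* pass 1: from the bus of the last allocation to the last bus */
            for (i = last->bus; i < s->nbuses && !found; i++)
                exFindUnusedFunctionOnBus(s, i, next, &found);

            /* pass 2: wrap around to the first bus, in case an address was
             * freed since the last allocation */
            for (i = 0; i < last->bus && !found; i++)
                exFindUnusedFunctionOnBus(s, i, next, &found);

            return found ? 0 : -1;
        }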
    • conf: eliminate concept of "reserveEntireSlot" · 9ff9d9f5
      Laine Stump authored
      Setting reserveEntireSlot really accomplishes nothing - instead of
      going to the trouble of computing the value for reserveEntireSlot and
      then possibly setting *all* functions of the slot as in-use, we can
      just set the in-use bit only for the specific function being used by a
      device.  Later we will know from the context (the PCI connect flags,
      and whether we are reserving a specific address or asking for "the
      next available") whether or not it is okay to allocate other functions
      on the same slot.

      Although it's not used yet, we allow specifying "-1" for the function
      number when looking for the "next available slot" - this is going to
      end up meaning "return the lowest available function in the slot", but
      since we currently only provide a function from an otherwise unused
      slot, "-1" ends up meaning "0".
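      A minimal sketch of the difference (names are illustrative, not
      libvirt's):

        /* Instead of computing reserveEntireSlot and possibly marking all
         * 8 functions of the slot in use, only the function actually
         * occupied by the device is marked. */
        static void
        exampleReserveFunction(unsigned char *slotBitmap, int function)
        {
            /* function == -1 means "lowest available function"; since a
             * slot is currently only handed out when completely unused,
             * that ends up meaning function 0 for now. */
            if (function < 0)
                function = 0;
            *slotBitmap |= (unsigned char)(1u << function);
        }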
    • conf: use struct instead of int for each slot in virDomainPCIAddressBus · 9838cad9
      Laine Stump authored
      When keeping track of which functions of which slots are allocated, we
      will need more information than just the bitmap (one bit per function)
      currently stored for each slot in a virDomainPCIAddressBus. To prepare
      for adding more per-slot info, this patch changes "uint8_t slots" into
      "virDomainPCIAddressSlot slot", which currently has a single member
      named "functions" that serves the same purpose previously served
      directly by "slots".
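      Roughly, the change looks like this (array size and surrounding bus
      members are illustrative assumptions; only the names quoted above come
      from the patch):

        #include <stdint.h>

        /* Before: one bitmap byte per slot, one bit per in-use function. */
        typedef struct {
            /* ... other bus members omitted ... */
            uint8_t slots[32];
        } ExampleBusBefore;

        /* After: each slot becomes a struct, so more per-slot state can be
         * added later; "functions" holds the same bitmap as before. */
        typedef struct {
            uint8_t functions;
        } virDomainPCIAddressSlot;

        typedef struct {
            /* ... other bus members omitted ... */
            virDomainPCIAddressSlot slot[32];
        } ExampleBusAfter;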
  2. 10 January 2017, 1 commit
  3. 09 January 2017, 3 commits
  4. 07 January 2017, 2 commits
    • conf: Add more fchost search fields for storage pool vHBA creation · bb74a7ff
      John Ferlan authored
      Add new fields to the fchost structure to allow creation of a vHBA via
      the storage pool when a parent_wwnn/parent_wwpn or parent_fabric_wwn is
      supplied in the storage pool XML.
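      A hedged sketch of how the structure might carry those fields (the
      layout and surrounding members are assumptions; only the field names
      mentioned above come from the commit):

        typedef struct {
            char *parent;             /* existing: parent scsi_hostN name */
            char *parent_wwnn;        /* new: parent node WWN */
            char *parent_wwpn;        /* new: parent port WWN */
            char *parent_fabric_wwn;  /* new: parent fabric WWN */
            char *wwnn;               /* vHBA node WWN */
            char *wwpn;               /* vHBA port WWN */
        } ExampleAdapterFCHost;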
    • nodedev: Add the ability to create vHBA by parent wwnn/wwpn or fabric_wwn · 2b13361b
      John Ferlan authored
      https://bugzilla.redhat.com/show_bug.cgi?id=1349696
      
      When creating a vHBA, the process is to feed XML to nodeDeviceCreateXML
      that lists the <parent> scsi_hostX to use to create the vHBA. However,
      between reboots, it's possible that the <parent> changes its scsi_hostX
      to scsi_hostY, and the saved XML used to perform the creation will
      either fail or create a vHBA using the wrong parent.
      
      So add the ability to provide "wwnn" and "wwpn" or "fabric_wwn" to
      the <parent> instead of a name of the scsi_hostN that is the parent.
      The allowed XML will thus be:
      
        <parent>scsi_host3</parent>  (current)
      
      or
      
        <parent wwnn='$WWNN' wwpn='$WWPN'/>
      
      or
      
        <parent fabric_wwn='$WWNN'/>
      
      Using the wwnn/wwpn or fabric_wwn ensures the same 'scsi_hostN' is
      selected across hardware reconfigs or host reboots. The wwnn/wwpn pair
      provides the most specific search option, while fabric_wwn will at
      least ensure usage of the same SAN, but maybe not the same scsi_hostN.
      
      This patch will add the new fields to the nodedev.rng for input purposes
      only. Since the input XML is essentially thrown away, there is no need
      to format the values, as they'd already be printed as part of the
      scsi_host data block.
      
      New API virNodeDeviceGetParentHostByWWNs will take the parent "wwnn" and
      "wwpn" in order to search the list of devices for matching capability
      data fields wwnn and wwpn.
      
      New API virNodeDeviceGetParentHostByFabricWWN will take the parent "fabric_wwn"
      in order to search the list of devices for matching capability data field
      fabric_wwn.
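      For illustration (not part of the commit), a small client that feeds
      the new parent-by-WWN form to nodeDeviceCreateXML through the public
      API; the connection URI and WWN values are placeholders:

        #include <stdio.h>
        #include <libvirt/libvirt.h>

        static const char *vhbaXML =
            "<device>\n"
            "  <parent wwnn='20000000c9831b4b' wwpn='10000000c9831b4b'/>\n"
            "  <capability type='scsi_host'>\n"
            "    <capability type='fc_host'/>\n"
            "  </capability>\n"
            "</device>";

        int main(void)
        {
            virConnectPtr conn = virConnectOpen("qemu:///system");
            virNodeDevicePtr dev = NULL;
            int ret = 1;

            if (!conn)
                return 1;

            /* libvirt resolves the parent scsi_hostN from the WWNs */
            dev = virNodeDeviceCreateXML(conn, vhbaXML, 0);
            if (dev) {
                printf("created vHBA: %s\n", virNodeDeviceGetName(dev));
                virNodeDeviceFree(dev);
                ret = 0;
            }
            virConnectClose(conn);
            return ret;
        }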
  5. 05 January 2017, 2 commits
  6. 21 December 2016, 1 commit
    • conf: Display <physical> in output of voldef · 78661cb1
      John Ferlan authored
      Although virStorageBackendUpdateVolTargetInfo will update the
      target.physical value, there is no way to provide that information
      via the virStorageVolGetInfo API since it only returns the capacity
      and allocation of a volume. So as described in commit id '0282ca45',
      it should be possible to generate an output-only <physical> value
      for that purpose.
      
      This patch generates the <physical> value in the volume XML output
      for the sole purpose of allowing someone to view or parse the XML in
      order to obtain the value.

      Update the documentation to describe the output-only nature.
      Signed-off-by: John Ferlan <jferlan@redhat.com>
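      As an illustration (not from the patch): virStorageVolGetInfo() only
      reports capacity and allocation, so the output-only <physical> value
      has to be read from the volume XML; the pool and volume names below
      are placeholders:

        #include <stdio.h>
        #include <stdlib.h>
        #include <libvirt/libvirt.h>

        int main(void)
        {
            virConnectPtr conn = virConnectOpen("qemu:///system");
            virStoragePoolPtr pool = NULL;
            virStorageVolPtr vol = NULL;
            char *xml = NULL;

            if (!conn)
                return 1;

            pool = virStoragePoolLookupByName(conn, "default");
            if (pool)
                vol = virStorageVolLookupByName(pool, "example.qcow2");
            if (vol && (xml = virStorageVolGetXMLDesc(vol, 0))) {
                puts(xml);   /* includes the output-only <physical> element */
                free(xml);
            }
            if (vol)
                virStorageVolFree(vol);
            if (pool)
                virStoragePoolFree(pool);
            virConnectClose(conn);
            return 0;
        }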
  7. 20 December 2016, 2 commits
  8. 19 December 2016, 2 commits
  9. 17 December 2016, 1 commit
  10. 09 December 2016, 2 commits
    • qemu: Fix xml dump of autogenerated websocket · 61a0026a
      Nikolay Shirokovskiy authored
      When saving or migrating a domain for which we autogenerated a
      websocket port, write out -1 for the socket value when printing the
      inactive domain config; otherwise, it's possible that the subsequent
      start will fail if the autogenerated websocket conflicts with an
      existing running config that also used an autogenerated websocket.
      
      Examples:
      
      == A. Cannot restore a domain with an autoconfigured websocket.

      Domains 1 and 2 both have an autoconfigured websocket.

      1. domain 1 is started, then saved
      2. domain 2 is started
      3. restoring domain 1 fails:
      
      error: internal error: qemu unexpectedly closed the monitor: 2016-11-21T10:23:11.356687Z
      qemu-kvm: -vnc 0.0.0.0:2,websocket=5700: Failed to start VNC server on `(null)':
      Failed to bind socket: Address already in use
      
      == B. Cannot migrate a domain with an autoconfigured websocket.

      Domain 1 is on host A, domain 2 on host B; both have an autoconfigured websocket.

      1. domain 1 is started, domain 2 is started
      2. migrating domain 1 to host B fails with the above error.
    • qemu: mark user defined websocket as used · 1215965a
      Nikolay Shirokovskiy authored
      We need an extra state variable to distinguish between the
      autogenerated and user-defined cases after auto-generation is done.
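      A hedged sketch of what that state amounts to (field names are
      illustrative, not necessarily libvirt's):

        #include <stdbool.h>

        typedef struct {
            int port;             /* websocket port number */
            bool portGenerated;   /* true only if libvirt picked the port,
                                   * so a user-specified port is never
                                   * treated as auto-generated later */
        } ExampleWebsocketState;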
  11. 08 December 2016, 2 commits
  12. 06 December 2016, 4 commits
  13. 05 December 2016, 3 commits
  14. 30 November 2016, 2 commits
  15. 26 November 2016, 1 commit
    • qemu: Avoid reporting "host" as a supported CPU model · 73411a7f
      Jiri Denemark authored
      "host" CPU model is supported by a special host-passthrough CPU mode and
      users are not allowed to specify this model directly with the custom
      mode. Thus we should not advertise the "host" CPU model in domain
      capabilities.
      This worked well on architectures for which libvirt provides a list of
      supported CPU models in cpu_map.xml (since "host" is not in the list),
      but we need to explicitly filter the "host" model out for all other
      architectures.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
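      An illustrative filter (not the actual patch): when assembling the
      model list for domain capabilities, the special "host" model is
      skipped because it is only usable via host-passthrough mode:

        #include <stdbool.h>
        #include <string.h>

        static bool
        exampleCPUModelIsReportable(const char *model)
        {
            /* "host" is only valid with mode='host-passthrough', so it is
             * never advertised as a custom-mode model */
            return strcmp(model, "host") != 0;
        }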
  16. 25 November 2016, 3 commits
    • virstring: Unify string list function names · c2a5a4e7
      Michal Privoznik authored
      We have a couple of functions that operate over NULL-terminated
      lists of strings. However, our naming sucks:
      
      virStringJoin
      virStringFreeList
      virStringFreeListCount
      virStringArrayHasString
      virStringGetFirstWithPrefix
      
      We can do better:
      
      virStringListJoin
      virStringListFree
      virStringListFreeCount
      virStringListHasString
      virStringListGetFirstWithPrefix
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    • conf: Wire up the vhost-scsi connection from/to XML · ae5d30a0
      Eric Farman authored
      With the QEMU components in place, provide the XML parsing to
      invoke that code when given the following XML snippet:
      
          <hostdev mode='subsystem' type='scsi_host'>
            <source protocol='vhost' wwpn='naa.501234567890abcd'/>
          </hostdev>
      
      An optional address element can be specified within the hostdev
      (pick CCW or PCI as necessary):
      
          <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0625'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
      
      Add basic vhost-scsi tests, cloned from hostdev-scsi-virtio-scsi, in
      both xml2argv and xml2xml. Tests were added for both vhost-scsi-ccw and
      vhost-scsi-pci since the syntaxes differ slightly between them.
      
      Also adjusted the docs to describe the changes.
      Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
      Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
    • Introduce framework for a hostdev SCSI_host subsystem type · fc0e627b
      Eric Farman authored
      We already have a "scsi" hostdev subsys type, which refers to a single
      LUN that is passed through to a guest.  But what of things where
      multiple LUNs are passed through via a single SCSI HBA, such as with
      the vhost-scsi target?  Create a new hostdev subsys type that will
      carry this.
      Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
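      A hedged sketch of the distinction (enum names here are illustrative,
      not libvirt's actual identifiers):

        typedef enum {
            EXAMPLE_HOSTDEV_SUBSYS_TYPE_USB,
            EXAMPLE_HOSTDEV_SUBSYS_TYPE_PCI,
            EXAMPLE_HOSTDEV_SUBSYS_TYPE_SCSI,       /* single LUN passed through */
            EXAMPLE_HOSTDEV_SUBSYS_TYPE_SCSI_HOST,  /* new: a whole HBA, e.g. a
                                                     * vhost-scsi target */
        } ExampleHostdevSubsysType;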
  17. 22 November 2016, 1 commit
  18. 16 November 2016, 1 commit
  19. 15 November 2016, 3 commits
    • cpu: Avoid adding <vendor> to custom CPUs · 98b7c37d
      Jiri Denemark authored
      Guest CPU definitions with mode='custom' and missing <vendor> are
      expected to run on a host CPU from any vendor as long as the required
      CPU model can be used as a guest CPU on the host. But even though no CPU
      vendor was explicitly requested we would sometimes force it due to a bug
      in virCPUUpdate and virCPUTranslate.
      
      The bug would effectively forbid cross vendor migrations even if they
      were previously working just fine.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    • qemu: auto-add pcie-root-port/dmi-to-pci-bridge controllers as needed · 0702f48e
      Laine Stump authored
      Previously libvirt would only add pci-bridge devices automatically
      when an address was requested for a device that required a legacy PCI
      slot and none was available. This patch expands that support to
      dmi-to-pci-bridge (which is needed in order to add a pci-bridge on a
      machine with a pcie-root), and pcie-root-port (which is needed to add
      a hotpluggable PCIe device). It does *not* automatically add
      pcie-switch-upstream-ports or pcie-switch-downstream-ports (and
      currently there are no plans for that).
      
      Given the existing code to auto-add pci-bridge devices, automatically
      adding pcie-root-ports is fairly straightforward. The
      dmi-to-pci-bridge support is a bit tricky though, for a few reasons:
      
      1) Although the only reason to add a dmi-to-pci-bridge is so that
         there is a reasonable place to plug in a pci-bridge controller,
         most of the time it's not the presence of a pci-bridge *in the
         config* that triggers the requirement to add a dmi-to-pci-bridge.
         Rather, it is the presence of a legacy-PCI device in the config,
         which triggers auto-add of a pci-bridge, which triggers auto-add of
         a dmi-to-pci-bridge (this is handled in
         virDomainPCIAddressSetGrow() - if there's a request to add a
         pci-bridge we'll check if there is a suitable bus to plug it into;
         if not, we first add a dmi-to-pci-bridge).
      
      2) Once there is already a single dmi-to-pci-bridge on the system,
         there won't be a need for any more, even if it's full, as long as
         there is a pci-bridge with an open slot - you can also plug
         pci-bridges into existing pci-bridges. So we have to make sure we
         don't add a dmi-to-pci-bridge unless there aren't any
         dmi-to-pci-bridges *or* any pci-bridges.
      
      3) Although it is strongly discouraged, it is legal for a pci-bridge
         to be directly plugged into pcie-root, and we don't want to
         auto-add a dmi-to-pci-bridge if there is already a pci-bridge
         that's been forced directly into pcie-root.
      
      Although libvirt will now automatically create a dmi-to-pci-bridge
      when it's needed, the code still remains for now that forces a
      dmi-to-pci-bridge on all domains with pcie-root (in
      qemuDomainDefAddDefaultDevices()). That will be removed in a future
      patch.
      
      For now, the pcie-root-ports are added one per slot, which is a bit
      wasteful and means it will fail after 31 total PCIe devices (30 if
      there are also some PCI devices), but it helps keep the changeset down
      for this patch. A future patch will have 8 pcie-root-ports sharing the
      functions of a single slot.
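      A much-simplified sketch of the auto-add decision described above
      (illustrative only; libvirt's actual logic lives in
      virDomainPCIAddressSetGrow() and its callers):

        #include <stdbool.h>
        #include <stddef.h>

        /* Return the controller model to auto-add next, or NULL if none. */
        static const char *
        exampleControllerToAdd(bool needNewLegacyPCISlot,
                               bool havePCIBridge,
                               bool haveDMIToPCIBridge,
                               bool needHotpluggablePCIeSlot)
        {
            if (needNewLegacyPCISlot) {
                /* a pci-bridge needs somewhere to plug in: add a
                 * dmi-to-pci-bridge first if neither kind of bridge exists */
                if (!havePCIBridge && !haveDMIToPCIBridge)
                    return "dmi-to-pci-bridge";
                return "pci-bridge";
            }
            if (needHotpluggablePCIeSlot)
                return "pcie-root-port";
            return NULL;
        }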
    • qemu: set pciConnectFlags to 0 instead of PCI|HOTPLUGGABLE if device isn't PCI · b27375a9
      Laine Stump authored
      This patch cleans up the connect flags for certain types/models of
      devices that aren't PCI so that they return 0. In the future that may
      be used as an indicator to the caller about whether or not a device
      needs a PCI address. For now it's just ignored, except in
      virDomainPCIAddressEnsureAddr() (called during device hotplug), where
      in some cases the flags actually need to be re-set to PCI|HOTPLUGGABLE,
      just in case someone (in some old config) has manually set a PCI
      address for a device that isn't PCI.
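      A hedged sketch of the intent (the flag names are made up for the
      example; libvirt's real connect-flag constants are defined elsewhere):

        #include <stdbool.h>

        /* illustrative connect flags, not libvirt's real constants */
        #define EXAMPLE_CONNECT_TYPE_PCI_DEVICE  (1u << 0)
        #define EXAMPLE_CONNECT_HOTPLUGGABLE     (1u << 1)

        static unsigned int
        exampleDevicePCIConnectFlags(bool deviceIsPCI)
        {
            if (!deviceIsPCI)
                return 0;   /* signals "this device needs no PCI address" */
            return EXAMPLE_CONNECT_TYPE_PCI_DEVICE | EXAMPLE_CONNECT_HOTPLUGGABLE;
        }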