- 20 April 2016, 1 commit

Committed by Andrea Bolognani

- 15 April 2016, 4 commits

Committed by Laine Stump
This controller provides a single PCIe port on a new root. It is similar to pci-expander-bus, intended to provide a bus that can be associated with a guest-identifiable NUMA node, but is for machinetypes with PCIe rather than PCI (e.g. q35-based machinetypes). Aside from PCIe vs. PCI, the other main difference is that a pci-expander-bus has a companion pci-bridge that is automatically attached along with it, but pcie-expander-bus has only a single port, and that port will only connect to a pcie-root-port, or to a pcie-switch-upstream-port. In order for the bus to be of any use in the guest, it must have either a pcie-root-port or a pcie-switch-upstream-port attached (and one or more pcie-switch-downstream-ports attached to the pcie-switch-upstream-port).
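
As a rough sketch of the resulting configuration (the index, busNr, NUMA node, and PCI address values below are illustrative, not taken from the commit), a pcie-expander-bus plus the pcie-root-port that makes it usable might look like:

  <controller type='pci' index='4' model='pcie-expander-bus'>
    <target busNr='180'>
      <node>1</node>
    </target>
  </controller>
  <controller type='pci' index='5' model='pcie-root-port'>
    <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
  </controller>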

Committed by Laine Stump
This is a standard PCI root bus (not a bridge) that can be added to a 440fx-based domain. Although it uses a PCI slot, this is *not* how it is connected into the PCI bus hierarchy, but is only used for control. Each pci-expander-bus provides 32 slots (0-31) that can accept hotplug of standard PCI devices. The usefulness of pci-expander-bus relative to a pci-bridge is that the NUMA node of the bus can be specified with the <node> subelement of <target>. This gives guest-side visibility to the NUMA node of attached devices (presuming that management apps only assign a device to a bus that has a NUMA node number matching the node number of the device on the host). Each pci-expander-bus also has a "busNr" attribute. The expander-bus itself will take the busNr specified, and all buses that are connected to this bus (including the pci-bridge that is automatically added to any expander bus of model "pxb" (see the next commit)) will use busNr+1, busNr+2, etc., and the pci-root (or the expander-bus with the next lower busNr) will use bus numbers lower than busNr.
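
A minimal sketch of such a controller (the index, busNr, and NUMA node values are illustrative):

  <controller type='pci' index='3' model='pci-expander-bus'>
    <target busNr='254'>
      <node>0</node>
    </target>
  </controller>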

Committed by Cole Robinson
Clarify in which version initial support was added, and when libvirt started supporting it for the qemu driver. https://bugzilla.redhat.com/show_bug.cgi?id=657931

Committed by Cole Robinson
Added with commit 3b431929 in v1.2.2 but never documented https://bugzilla.redhat.com/show_bug.cgi?id=1313613

- 08 April 2016, 2 commits

Committed by Pavel Hrdina
This cleans up the documentation, reformats some of the paragraphs to use <p> instead of </br>, and rewrites the listen part to be more extendable. Signed-off-by: Pavel Hrdina <phrdina@redhat.com>

Committed by Vasiliy Tolstov
Signed-off-by: Vasiliy Tolstov <v.tolstov@selfip.ru>

- 05 April 2016, 1 commit

Committed by Boris Fiuczynski
Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>

- 30 March 2016, 1 commit

Committed by Pavel Hrdina
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>

- 29 March 2016, 2 commits

Committed by John Ferlan
In order to follow recent changes, which indicate that specific feature bits are supported as of a specific QEMU version, add the version in which the relaxed, vapic, and spinlocks support was added.

Committed by Maxim Nestratov
This patch adds support for "vpindex", "runtime", "synic", "stimer", and "vendor_id" features available in qemu 2.5+.
- When Hyper-V "vpindex" is on, the guest can use MSR HV_X64_MSR_VP_INDEX to get the virtual processor ID.
- The Hyper-V "runtime" enlightenment feature allows use of MSR HV_X64_MSR_VP_RUNTIME to get the time the virtual processor consumes running guest code, as well as the time the hypervisor spends running code on behalf of that guest.
- Hyper-V "synic" stands for Synthetic Interrupt Controller, which is a lapic extension controlled via MSRs.
- Hyper-V "stimer" switches on support for the Hyper-V SynIC timer MSRs. The guest can set up host-fired events (a SynIC interrupt and the corresponding timer expiration message) and use them as guest clock events.
- Hyper-V "reset" allows the guest to reset the VM.
- Hyper-V "vendor_id" exposes the hypervisor vendor id to the guest.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com> Signed-off-by: John Ferlan <jferlan@redhat.com>
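
For illustration, a domain <features> fragment enabling these enlightenments could look like the following (the vendor_id value is an arbitrary example string):

  <features>
    <hyperv>
      <vpindex state='on'/>
      <runtime state='on'/>
      <synic state='on'/>
      <stimer state='on'/>
      <reset state='on'/>
      <vendor_id state='on' value='KVM Hv'/>
    </hyperv>
  </features>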

- 21 March 2016, 1 commit

Committed by Jim Fehlig
Most hypervisors use Hardware Assisted Paging by default and don't require specifying the feature in domain conf. But some hypervisors support disabling HAP on a per-domain basis. To enable HAP by default yet provide a knob to disable it, extend the <hap> feature with a 'state=on|off' attribute, similar to the <pvspinlock> and <vmport> features. In the absence of <hap>, the hypervisor default (on) is used. <hap> without the state attribute is the same as <hap state='on'/> for backwards compatibility. And of course <hap state='off'/> disables HAP. Signed-off-by: Jim Fehlig <jfehlig@suse.com>
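
A minimal sketch of a domain <features> fragment that disables HAP with the new attribute:

  <features>
    <hap state='off'/>
  </features>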

- 17 March 2016, 1 commit

Committed by Pavel Hrdina
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>

- 10 March 2016, 1 commit

Committed by Daniel P. Berrange
Extend the chardev source XML so that there is a new optional <log/> element, which is applicable to all character device backend types. For example, to log output of a TCP backed serial port:

  <serial type='tcp'>
    <source mode='connect' host='127.0.0.1' service='9999'/>
    <protocol type='raw'/>
    <log file='/var/log/libvirt/qemu/demo-serial0.log' append='on'/>
    <target port='0'/>
  </serial>

Not all hypervisors will support use of logfiles. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>

- 01 March 2016, 4 commits

Committed by Jiri Denemark
https://bugzilla.redhat.com/show_bug.cgi?id=1313314 Signed-off-by: Jiri Denemark <jdenemar@redhat.com>

Committed by Pavel Hrdina
This attribute is used to extend the secondary PCI bar and expose it to the guest as 64bit memory. It works like this: the vram attribute sets the size of the secondary PCI bar, which the guest sees as 32bit memory; the vram64 attribute can extend this secondary PCI bar. If both attributes are used, the guest sees two memory bars that address the same memory, with the difference that the 32bit bar can address only the first part of the whole memory. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1260749 Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
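
For illustration, a QXL video model using both attributes might look like this (the sizes, in KiB, are arbitrary examples):

  <video>
    <model type='qxl' ram='65536' vram='65536' vram64='131072' vgamem='16384'/>
  </video>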

Committed by Pavel Hrdina
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>

Committed by Marc-André Lureau
Add a Spice graphics 'gl' setting. qemu 2.6 should have a -spice gl=on argument to enable an OpenGL rendering context (patches on the ML). This is necessary to actually enable virgl rendering. Add a qemuxml2argv test for virtio-gpu + spice with virgl. Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
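
A rough sketch of the corresponding graphics element; the exact form of the gl setting in this release (attribute vs. <gl> subelement) should be checked against the formatdomain documentation:

  <graphics type='spice'>
    <gl enable='yes'/>
  </graphics>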

- 26 February 2016, 1 commit

Committed by Richard W.M. Jones
Trivial documentation fix. Signed-off-by: Richard W.M. Jones <rjones@redhat.com>

- 20 February 2016, 1 commit

Committed by Andrea Bolognani
Recent changes to the handling of GIC version, specifically commit 2a7b11ea, have clearly defined what values are acceptable for the version attribute of the <gic> element. Update the documentation accordingly.
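
For reference, the GIC feature is expressed in domain XML like this (version='3' is just one example of an accepted value):

  <features>
    <gic version='3'/>
  </features>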

- 15 February 2016, 1 commit

Committed by Ján Tomko
Replace all occurrences of the 'VMWare' misspelling outside the news with the correct spelling, 'VMware'.

- 12 January 2016, 1 commit

Committed by Dmitry Andreev
Excessive memory balloon inflation can cause invocation of the OOM killer when Linux is under severe memory pressure. The QEMU memballoon device has a feature to release some memory at the last moment, before some process gets killed by the OOM killer. Introduce a new optional balloon device attribute 'autodeflate' to enable or disable this feature.
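
A minimal sketch of a memballoon device using the new attribute:

  <memballoon model='virtio' autodeflate='on'/>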

- 06 January 2016, 1 commit

Committed by Wido den Hollander
If no port number was provided for a storage pool, libvirt defaults to port 6789; however, librbd/librados already default to 6789 when no port number is provided. In the future Ceph will switch to a new port for the Ceph monitors, since port 6789 is already assigned to a different application by IANA: port 6789 is assigned to SMC-HTTPS, and Ceph now has port 3300 assigned as the 'Ceph monitor' port. In this case the best solution is to not hardcode any port number into libvirt and let librados handle the connection. Only if a user specifies a different port number do we pass it down to librados; otherwise we leave it blank. Signed-off-by: Wido den Hollander <wido@widodh.nl>
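
For illustration, an RBD pool source where no port is given for the first monitor (so librados picks the default) and an explicit port for the second; the host names and pool names are made up:

  <pool type='rbd'>
    <name>cephpool</name>
    <source>
      <name>libvirt-pool</name>
      <host name='mon1.example.org'/>
      <host name='mon2.example.org' port='3300'/>
    </source>
  </pool>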

- 05 January 2016, 1 commit

Committed by Dmitry Mishin
Signed-off-by: Dmitry Mishin <dim@virtuozzo.com>

- 30 November 2015, 2 commits

Committed by Ján Tomko
Add XML for the new virtio-input-host-pci device:

  <input type='passthrough' bus='virtio'>
    <source evdev='/dev/input/event1234'/>
  </input>

https://bugzilla.redhat.com/show_bug.cgi?id=1231114

Committed by Ján Tomko
To be used by the family of virtio input devices:

  <input type='mouse' bus='virtio'/>
  <input type='tablet' bus='virtio'/>
  <input type='keyboard' bus='virtio'/>

https://bugzilla.redhat.com/show_bug.cgi?id=1231114

- 27 November 2015, 1 commit

Committed by Marc-André Lureau
qemu 2.5 provides a virtio video device. It can be used with -device virtio-vga for primary devices, or -device virtio-gpu for non-vga devices. However, only the primary device (VGA) is supported with this patch. Reference: https://bugzilla.redhat.com/show_bug.cgi?id=1195176 Signed-off-by: Marc-André Lureau <marcandre.lureau@gmail.com> Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
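
A minimal sketch of the primary virtio video device in domain XML:

  <video>
    <model type='virtio'/>
  </video>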

- 25 November 2015, 2 commits

Committed by Dmitry Andreev
A 'model' attribute was added to the panic device, but only one panic device is allowed. This patch changes the panic device presence from 'optional' to 'zeroOrMore'.

Committed by Dmitry Andreev
Libvirt already has two types of panic devices - pvpanic and pSeries firmware. This patch introduces the 'model' attribute and a new type of panic device. The 'isa' model is for the ISA pvpanic device. The 'pseries' model is the default value for pSeries guests. The 'hyperv' model is the new type; it is used for Hyper-V crash. Schema and docs are updated for the new attribute.
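
For illustration, panic devices with different models might be written as follows (the ISA iobase shown is the conventional pvpanic I/O port, included only as an example):

  <panic model='hyperv'/>
  <panic model='isa'>
    <address type='isa' iobase='0x505'/>
  </panic>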

- 18 November 2015, 1 commit

Committed by Peter Krempa
Adjust the config code so that it does not enforce that the target memory node is specified. To avoid breakage, adjust the qemu memory hotplug config checker to disallow such a config for now.
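
As a sketch (the size is an arbitrary example), a memory device definition that omits the target <node> would now be accepted by the config code, while the qemu hotplug checker still rejects it:

  <memory model='dimm'>
    <target>
      <size unit='MiB'>512</size>
    </target>
  </memory>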

- 06 October 2015, 1 commit

Committed by Cole Robinson
The example pvspinlock XML is:

  <pvspinlock/>

While this is accepted by libvirt and works correctly, it is currently always output as a tristate, like:

  <pvspinlock state='on'/>

So document that format instead.

- 02 September 2015, 1 commit

Committed by Jonathan Toppins
Adds a new interface type using UDP sockets. This seems only applicable to QEMU, but the code has been edited tree-wide to support the new interface type. The interface type required the addition of a "localaddr" (local address), which maps into the following XML and qemu call:

  <interface type='udp'>
    <mac address='52:54:00:5c:67:56'/>
    <source address='127.0.0.1' port='11112'>
      <local address='127.0.0.1' port='22222'/>
    </source>
    <model type='virtio'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
  </interface>

QEMU call: -net socket,udp=127.0.0.1:11112,localaddr=127.0.0.1:22222

Notice that the XML "local" entry becomes the "localaddr" for the qemu call. Reference: http://lists.gnu.org/archive/html/qemu-devel/2011-11/msg00629.html Signed-off-by: Jonathan Toppins <jtoppins@cumulusnetworks.com> Signed-off-by: Ján Tomko <jtomko@redhat.com>

- 25 August 2015, 1 commit

Committed by Sergey Bronnikov
The Parallels driver was renamed to Virtuozzo. Replace the old name with the new one in the libvirt docs and schemas.

- 10 August 2015, 6 commits

Committed by Martin Kletzander
This will be used with a virtio-scsi controller later on. Signed-off-by: Martin Kletzander <mkletzan@redhat.com>

Committed by Laine Stump
This controller can be connected only to a port on a pcie-switch-upstream-port. It provides a single hotpluggable port that will accept any PCI or PCIe device, as well as any device requiring a pcie-*-port (the only current example of such a device is the pcie-switch-upstream-port).
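
A rough sketch of the hierarchy described by this series of commits (the indexes are illustrative and libvirt fills in the PCI addresses): a pcie-root-port plugged into pcie-root, a pcie-switch-upstream-port plugged into that, and downstream ports plugged into the upstream port:

  <controller type='pci' index='1' model='pcie-root-port'/>
  <controller type='pci' index='2' model='pcie-switch-upstream-port'/>
  <controller type='pci' index='3' model='pcie-switch-downstream-port'/>
  <controller type='pci' index='4' model='pcie-switch-downstream-port'/>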

Committed by Laine Stump
This controller can be connected only to a pcie-root-port or a pcie-switch-downstream-port (which will be added in a later patch), which is the reason for the new connect type VIR_PCI_CONNECT_TYPE_PCIE_PORT. A pcie-switch-upstream-port provides 32 ports (slot=0 to slot=31) on the downstream side, which can only have pci controllers of model "pcie-switch-downstream-port" plugged into them, which is the reason for the other new connect type VIR_PCI_CONNECT_TYPE_PCIE_SWITCH.

Committed by Laine Stump
This controller can be connected (at domain startup time only - not hotpluggable) only to a port on the pcie root complex ("pcie-root" in libvirt config), hence the new connect type VIR_PCI_CONNECT_TYPE_PCIE_ROOT. It provides a hotpluggable port that will accept any PCI or PCIe device. New attributes must be added to the controller <target> subelement for this - chassis and port are guest-visible option values that will be set by libvirt with values derived from the controller's index and pci address information.
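
As an illustrative sketch (the chassis and port values are normally derived by libvirt; the numbers shown are made up), a pcie-root-port with the new <target> attributes:

  <controller type='pci' index='1' model='pcie-root-port'>
    <target chassis='1' port='0x10'/>
  </controller>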

Committed by Laine Stump
There are some configuration options to some types of pci controllers that are currently automatically derived from other parts of the controller's configuration. For example, in qemu a pci-bridge controller has an option that is called "chassis_nr"; up until now libvirt has always set chassis_nr to the index of the pci-bridge. So this:

  <controller type='pci' model='pci-bridge' index='2'/>

will always result in:

  -device pci-bridge,chassis_nr=2,...

on the qemu commandline. In the future we may decide there is a better way to derive that option, but even in that case we will need for existing domains to retain the same chassis_nr they were using in the past - that is something that is visible to the guest, so it is part of the guest ABI and changing it would lead to problems for migrating guests (or just guests with very picky OSes). The <target> subelement has been added as a place to put the new "chassisNr" attribute that will be filled in by libvirt when it auto-generates the chassisNr; it will be saved in the config, then reused any time the domain is started:

  <controller type='pci' model='pci-bridge' index='2'>
    <model type='pci-bridge'/>
    <target chassisNr='2'/>
  </controller>

The one oddity of all this is that if the controller configuration is changed (for example to change the index or the pci address where the controller is plugged in), the items in <target> will *not* be re-generated, which might lead to conflict. I can't really see any way around this, but fortunately if there is a material conflict qemu will let us know and we will pass that on to the user.

Committed by Laine Stump
This new subelement is used in PCI controllers: the toplevel *attribute* "model" of a controller denotes what kind of PCI controller is being described, e.g. a "dmi-to-pci-bridge", "pci-bridge", or "pci-root". But in the future there will be different implementations of some of those types of PCI controllers, which behave similarly from libvirt's point of view (and so should have the same model), but use a different device in qemu (and present themselves as a different piece of hardware in the guest).

In an ideal world we (i.e. "I") would have thought of that back when the pci controllers were added, and used some sort of type/class/model notation (where class was used in the way we are now using model, and model was used for the actual manufacturer's model number of a particular family of PCI controller), but that opportunity is long past, so as an alternative, this patch allows selecting a particular implementation of a pci controller with the "name" attribute of the <model> subelement, e.g.:

  <controller type='pci' model='dmi-to-pci-bridge' index='1'>
    <model name='i82801b11-bridge'/>
  </controller>

In this case, "dmi-to-pci-bridge" is the kind of controller (one that has a single PCIe port upstream, and 32 standard PCI ports downstream, which are not hotpluggable), and the qemu device to be used to implement this kind of controller is named "i82801b11-bridge". Implementing the above now will allow us in the future to add a new kind of dmi-to-pci-bridge that doesn't use qemu's i82801b11-bridge device, but instead uses something else (which doesn't yet exist, but qemu people have been discussing it), all without breaking existing configs.

(Note that for the existing "pci-bridge" type of PCI controller, both the model attribute and the <model> name are 'pci-bridge'. This is just a coincidence, since it turns out that in this case the device name in qemu really is a generic 'pci-bridge' rather than being the name of some real-world chip.)

- 04 August 2015, 1 commit

Committed by John Ferlan
"Further" clarification (and testing) shows that using a SCSI Fibre Channel NPIV device/lun from a storage pool as a <disk type='volume' device='lun'> will work, so just add that to the allowable options. Related to: https://bugzilla.redhat.com/show_bug.cgi?id=1230179
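
A hedged sketch of such a disk definition (the pool and volume names are hypothetical placeholders):

  <disk type='volume' device='lun'>
    <driver name='qemu' type='raw'/>
    <source pool='fc_npiv_pool' volume='unit:0:4:0'/>
    <target dev='sda' bus='scsi'/>
  </disk>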