From 701e2b656e44364eb3d081053a591795c098dfef Mon Sep 17 00:00:00 2001
From: Katerina Koukiou

 vcpus
   The id attribute specifies the vCPU id as used by libvirt
-  in other places such as vcpu pinning, scheduler information and NUMA
-  assignment. Note that the vcpu ID as seen in the guest may differ from
-  libvirt ID in certain cases. Valid IDs are from 0 to the maximum vcpu
+  in other places such as vCPU pinning, scheduler information and NUMA
+  assignment. Note that the vCPU ID as seen in the guest may differ from
+  libvirt ID in certain cases. Valid IDs are from 0 to the maximum vCPU
   count as set by the vcpu element minus 1.
   The enabled attribute allows control of the state of the
-  vcpu. Valid values are yes and no.
+  vCPU. Valid values are yes and no.
-  hotpluggable controls whether given vcpu can be hotplugged
-  and hotunplugged in cases when the cpu is enabled at boot. Note that
-  all disabled vcpus must be hotpluggable. Valid values are
+  hotpluggable controls whether the given vCPU can be hotplugged
+  and hotunplugged when the CPU is enabled at boot. Note that
+  all disabled vCPUs must be hotpluggable. Valid values are
   yes and no.
-  order allows to specify the order to add the online vcpus.
-  For hypervisors/platforms that require to insert multiple vcpus at once
-  the order may be duplicated across all vcpus that need to be
-  enabled at once. Specifying order is not necessary, vcpus are then
+  order allows specifying the order in which to add the online vCPUs.
+  For hypervisors/platforms that require inserting multiple vCPUs at once,
+  the order may be duplicated across all vCPUs that need to be
+  enabled at once. Specifying order is not necessary; vCPUs are then
   added in an arbitrary order. If order info is used, it must be used for
-  all online vcpus. Hypervisors may clear or update ordering information
+  all online vCPUs. Hypervisors may clear or update ordering information
   during certain operations to assure valid configuration.
-  Note that hypervisors may create hotpluggable vcpus differently from
-  boot vcpus thus special initialization may be necessary.
+  Note that hypervisors may create hotpluggable vCPUs differently from
+  boot vCPUs, thus special initialization may be necessary.
-  Hypervisors may require that vcpus enabled on boot which are not
+  Hypervisors may require that vCPUs enabled on boot which are not
   hotpluggable are clustered at the beginning starting with ID 0. It may
-  be also required that vcpu 0 is always present and non-hotpluggable.
+  also be required that vCPU 0 is always present and non-hotpluggable.
-  Note that providing state for individual cpus may be necessary to enable
+  Note that providing state for individual CPUs may be necessary to enable
   support of addressable vCPU hotplug and this feature may not be
   supported by all hypervisors.
-  For QEMU the following conditions are required. Vcpu 0 needs to be
-  enabled and non-hotpluggable. On PPC64 along with it vcpus that are in
-  the same core need to be enabled as well. All non-hotpluggable cpus
-  present at boot need to be grouped after vcpu 0.
+  For QEMU the following conditions are required: vCPU 0 needs to be
+  enabled and non-hotpluggable; on PPC64, vCPUs that are in
+  the same core need to be enabled along with it; all non-hotpluggable CPUs
+  present at boot need to be grouped after vCPU 0.
   Since 2.2.0 (QEMU only)
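The attributes described above combine as in the following sketch of a domain fragment with addressable vCPU hotplug (the values and layout are illustrative assumptions, not part of the patch):

```xml
<!-- Sketch: 4 maximum vCPUs, 2 enabled at boot. vCPU 0 is enabled and
     non-hotpluggable, satisfying the QEMU conditions described above. -->
<vcpu current='2'>4</vcpu>
<vcpus>
  <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
  <vcpu id='1' enabled='yes' hotpluggable='yes' order='2'/>
  <vcpu id='2' enabled='no' hotpluggable='yes'/>
  <vcpu id='3' enabled='no' hotpluggable='yes'/>
</vcpus>
```

Note how all disabled vCPUs (ids 2 and 3) are hotpluggable, and the non-hotpluggable vCPU 0 comes first.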
 cputune
   The cputune element provides details
-  regarding the cpu tunable parameters for the domain.
+  regarding the CPU tunable parameters for the domain.
   Since 0.9.0
 vcpupin
   The vcpupin element specifies which of the host's
-  physical CPUs the domain VCPU will be pinned to. If this is omitted,
+  physical CPUs the domain vCPU will be pinned to. If this is omitted,
   and attribute cpuset of element vcpu is
   not specified, the vCPU is pinned to all the physical CPUs by default.
   It contains two required attributes: the attribute vcpu
-  specifies the vcpu id, and the attribute cpuset is the same as
+  specifies the vCPU id, and the attribute cpuset is the same as
   attribute cpuset of element vcpu.
   (NB: Only qemu driver support)
   Since 0.9.0
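A minimal pinning sketch (host CPU numbers are illustrative):

```xml
<cputune>
  <!-- vCPU 0 runs on host CPUs 1, 3 and 4 (range 1-4 excluding 2) -->
  <vcpupin vcpu='0' cpuset='1-4,^2'/>
  <!-- vCPU 1 runs on host CPUs 0 and 1 -->
  <vcpupin vcpu='1' cpuset='0,1'/>
</cputune>
```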
@@ -786,7 +786,7 @@
 emulatorpin
   The emulatorpin element specifies which of the host
-  physical CPUs the "emulator", a subset of a domain not including vcpu
+  physical CPUs the "emulator", a subset of a domain not including vCPU
   or iothreads, will be pinned to. If this is omitted, and attribute
   cpuset of element vcpu is not specified,
   "emulator" is pinned to all the physical CPUs by default. It contains
@@ -820,7 +820,7 @@
 period
   The period element specifies the enforcement
-  interval (unit: microseconds). Within period, each vcpu of
+  interval (unit: microseconds). Within period, each vCPU of
   the domain will not be allowed to consume more than quota
   worth of runtime. The value should be in range [1000, 1000000]. A period
   with value 0 means no value.
@@ -835,7 +835,7 @@
   vCPU threads, which means that it is not bandwidth controlled. The value
   should be in range [1000, 18446744073709551] or less than 0. A quota
   with value 0 means no value. You can use this feature to ensure that all
-  vcpus run at the same speed.
+  vCPUs run at the same speed.
   Only QEMU driver support since 0.9.4, LXC since 0.9.10
 emulator_period
   The emulator_period element specifies the enforcement
   interval (unit: microseconds). Within emulator_period, emulator
-  threads (those excluding vcpus) of the domain will not be allowed to consume
+  threads (those excluding vCPUs) of the domain will not be allowed to consume
   more than emulator_quota worth of runtime. The value should be
   in range [1000, 1000000]. A period with value 0 means no value.
   Only QEMU driver support since 0.10.0
@@ -873,9 +873,9 @@
 emulator_quota
   The emulator_quota element specifies the maximum
   allowed bandwidth (unit: microseconds) for domain's emulator threads (those
-  excluding vcpus). A domain with emulator_quota as any negative
+  excluding vCPUs). A domain with emulator_quota as any negative
   value indicates that the domain has infinite bandwidth for emulator threads
-  (those excluding vcpus), which means that it is not bandwidth controlled.
+  (those excluding vCPUs), which means that it is not bandwidth controlled.
   The value should be in range [1000, 18446744073709551] or less than 0. A
   quota with value 0 means no value.
   Only QEMU driver support since 0.10.0
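Put together, the bandwidth tunables described above might look like the following sketch (the values are illustrative assumptions):

```xml
<cputune>
  <emulatorpin cpuset='1-3'/>
  <period>100000</period>
  <quota>50000</quota>
  <!-- each vCPU may consume at most 50000 of every 100000 microseconds,
       i.e. half of one host CPU -->
  <emulator_period>100000</emulator_period>
  <emulator_quota>20000</emulator_quota>
  <!-- emulator threads (excluding vCPUs) are capped at 20% of one CPU -->
</cputune>
```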
@@ -2131,13 +2131,13 @@
   QEMU, the user-configurable extended TSEG feature was unavailable up
   to and including pc-q35-2.9. Starting with
   pc-q35-2.10 the feature is available, with default size
-  16 MiB. That should suffice for up to roughly 272 VCPUs, 5 GiB guest
+  16 MiB. That should suffice for up to roughly 272 vCPUs, 5 GiB guest
   RAM in total, no hotplug memory range, and 32 GiB of 64-bit PCI MMIO
-  aperture. Or for 48 VCPUs, with 1TB of guest RAM, no hotplug DIMM
+  aperture. Or for 48 vCPUs, with 1TB of guest RAM, no hotplug DIMM
   range, and 32GB of 64-bit PCI MMIO aperture. The values may also vary
   based on the loader the VM is using.
-  Additional size might be needed for significantly higher VCPU counts
+  Additional size might be needed for significantly higher vCPU counts
   or increased address space (that can be memory, maxMemory, 64-bit PCI
   MMIO aperture size; roughly 8 MiB of TSEG per 1 TiB of address space)
   which can also be rounded up.
@@ -2147,7 +2147,7 @@
   documentation of the guest OS or loader (if there is any), or test
   this by trial-and-error changing the value until the VM boots
   successfully. Yet another guiding value for users might be the fact
-  that 48 MiB should be enough for pretty large guests (240 VCPUs and
+  that 48 MiB should be enough for pretty large guests (240 vCPUs and
   4TB guest RAM), but it is on purpose not set as default as 48 MiB
   of unavailable RAM might be too much for small guests (e.g. with
   512 MiB of RAM).
@@ -2425,7 +2425,7 @@
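The TSEG size discussed above is configured under the SMM feature of the domain XML; a sketch using the 48 MiB figure from the text:

```xml
<features>
  <smm state='on'>
    <!-- extended TSEG size; only honored on pc-q35-2.10 and newer -->
    <tseg unit='MiB'>48</tseg>
  </smm>
</features>
```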
 cpu_cycles                   perf.cpu_cycles
 stalled_cycles_frontend      perf.stalled_cycles_frontend
 stalled_cycles_backend       perf.stalled_cycles_backend
 ref_cpu_cycles               perf.ref_cpu_cycles
 cpu_clock                    perf.cpu_clock
 cpu_migrations               perf.cpu_migrations
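The perf event names listed above are enabled per event in the domain's perf element; an illustrative sketch (the chosen events and states are assumptions):

```xml
<perf>
  <event name='cpu_cycles' enabled='yes'/>
  <event name='cpu_clock' enabled='yes'/>
  <event name='cpu_migrations' enabled='no'/>
</perf>
```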