From 701e2b656e44364eb3d081053a591795c098dfef Mon Sep 17 00:00:00 2001
From: Katerina Koukiou
Date: Wed, 18 Jul 2018 11:52:42 +0200
Subject: [PATCH] docs: formatdomain: unify naming for CPUs/vCPUs

CPU is an acronym and should be written in uppercase
when part of plain text and not referring to an element.

Signed-off-by: Katerina Koukiou
Reviewed-by: Erik Skultety
---
 docs/formatdomain.html.in | 86 +++++++++++++++++++-------------------
 1 file changed, 43 insertions(+), 43 deletions(-)

diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index b00971a945..679690d060 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -631,45 +631,45 @@
vcpus
-        The vcpus element allows to control state of individual vcpus.
+        The vcpus element allows to control state of individual vCPUs.
         The id attribute specifies the vCPU id as used by libvirt
-        in other places such as vcpu pinning, scheduler information and NUMA
-        assignment. Note that the vcpu ID as seen in the guest may differ from
-        libvirt ID in certain cases. Valid IDs are from 0 to the maximum vcpu
+        in other places such as vCPU pinning, scheduler information and NUMA
+        assignment. Note that the vCPU ID as seen in the guest may differ from
+        libvirt ID in certain cases. Valid IDs are from 0 to the maximum vCPU
         count as set by the vcpu element minus 1.

         The enabled attribute allows to control the state of the
-        vcpu. Valid values are yes and no.
+        vCPU. Valid values are yes and no.

-        hotpluggable controls whether given vcpu can be hotplugged
-        and hotunplugged in cases when the cpu is enabled at boot. Note that
-        all disabled vcpus must be hotpluggable. Valid values are
+        hotpluggable controls whether given vCPU can be hotplugged
+        and hotunplugged in cases when the CPU is enabled at boot. Note that
+        all disabled vCPUs must be hotpluggable. Valid values are
         yes and no.

-        order allows to specify the order to add the online vcpus.
-        For hypervisors/platforms that require to insert multiple vcpus at once
-        the order may be duplicated across all vcpus that need to be
-        enabled at once. Specifying order is not necessary, vcpus are then
+        order allows to specify the order to add the online vCPUs.
+        For hypervisors/platforms that require to insert multiple vCPUs at once
+        the order may be duplicated across all vCPUs that need to be
+        enabled at once. Specifying order is not necessary, vCPUs are then
         added in an arbitrary order. If order info is used, it must be used for
-        all online vcpus. Hypervisors may clear or update ordering information
+        all online vCPUs. Hypervisors may clear or update ordering information
         during certain operations to assure valid configuration.

-        Note that hypervisors may create hotpluggable vcpus differently from
-        boot vcpus thus special initialization may be necessary.
+        Note that hypervisors may create hotpluggable vCPUs differently from
+        boot vCPUs thus special initialization may be necessary.

-        Hypervisors may require that vcpus enabled on boot which are not
+        Hypervisors may require that vCPUs enabled on boot which are not
         hotpluggable are clustered at the beginning starting with ID 0. It may
-        be also required that vcpu 0 is always present and non-hotpluggable.
+        be also required that vCPU 0 is always present and non-hotpluggable.

-        Note that providing state for individual cpus may be necessary to enable
+        Note that providing state for individual CPUs may be necessary to enable
         support of addressable vCPU hotplug and this feature may not be supported
         by all hypervisors.

-        For QEMU the following conditions are required. Vcpu 0 needs to be
-        enabled and non-hotpluggable. On PPC64 along with it vcpus that are in
-        the same core need to be enabled as well. All non-hotpluggable cpus
-        present at boot need to be grouped after vcpu 0.
+        For QEMU the following conditions are required. vCPU 0 needs to be
+        enabled and non-hotpluggable. On PPC64 along with it vCPUs that are in
+        the same core need to be enabled as well. All non-hotpluggable CPUs
+        present at boot need to be grouped after vCPU 0.

         Since 2.2.0 (QEMU only)
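To make the attribute set touched by this hunk concrete, here is a minimal
illustrative domain XML sketch of per-vCPU state control; the four-vCPU
topology, the id values and the order numbering are assumptions for the
example, not taken from this patch:

    <vcpu current='2'>4</vcpu>
    <vcpus>
      <!-- vCPU 0 enabled and non-hotpluggable, as QEMU requires -->
      <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
      <vcpu id='1' enabled='yes' hotpluggable='yes' order='2'/>
      <!-- disabled vCPUs must be hotpluggable -->
      <vcpu id='2' enabled='no' hotpluggable='yes'/>
      <vcpu id='3' enabled='no' hotpluggable='yes'/>
    </vcpus>

In this sketch only vCPUs 0 and 1 are online at boot (current='2'), while
vCPUs 2 and 3 can be hotplugged later up to the maximum of 4.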
@@ -768,17 +768,17 @@
cputune
         The optional cputune element provides details
-        regarding the cpu tunable parameters for the domain.
+        regarding the CPU tunable parameters for the domain.
         Since 0.9.0
vcpupin
         The optional vcpupin element specifies which of host's
-        physical CPUs the domain VCPU will be pinned to. If this is omitted,
+        physical CPUs the domain vCPU will be pinned to. If this is omitted,
         and attribute cpuset of element vcpu is not specified,
         the vCPU is pinned to all the physical CPUs by default.
         It contains two required attributes, the attribute vcpu
-        specifies vcpu id, and the attribute cpuset is same as
+        specifies vCPU id, and the attribute cpuset is same as
         attribute cpuset of element vcpu.
         (NB: Only qemu driver support)
         Since 0.9.0
@@ -786,7 +786,7 @@
emulatorpin
         The optional emulatorpin element specifies which of host
-        physical CPUs the "emulator", a subset of a domain not including vcpu
+        physical CPUs the "emulator", a subset of a domain not including vCPU
         or iothreads will be pinned to. If this is omitted, and attribute
         cpuset of element vcpu is not specified, "emulator"
         is pinned to all the physical CPUs by default. It contains
@@ -820,7 +820,7 @@
period
         The optional period element specifies the enforcement
-        interval(unit: microseconds). Within period, each vcpu of
+        interval(unit: microseconds). Within period, each vCPU of
         the domain will not be allowed to consume more than
         quota worth of runtime. The value should be in range
         [1000, 1000000]. A period with value 0 means no value.
@@ -835,7 +835,7 @@
         vCPU threads, which means that it is not bandwidth controlled. The value
         should be in range [1000, 18446744073709551] or less than 0. A quota
         with value 0 means no value. You can use this feature to ensure that all
-        vcpus run at the same speed.
+        vCPUs run at the same speed.
         Only QEMU driver support since 0.9.4, LXC since 0.9.10
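An illustrative cputune sketch combining the pinning and bandwidth elements
described above, plus the emulator_period/emulator_quota pair described just
below; all CPU numbers and period/quota values here are assumptions for the
example:

    <cputune>
      <vcpupin vcpu='0' cpuset='1-4,^2'/>
      <vcpupin vcpu='1' cpuset='0,1'/>
      <emulatorpin cpuset='1-3'/>
      <period>1000000</period>
      <quota>500000</quota>
      <emulator_period>1000000</emulator_period>
      <emulator_quota>500000</emulator_quota>
    </cputune>

With these assumed values, each vCPU thread may consume at most half of one
physical CPU per scheduling period, and the emulator threads likewise.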
@@ -864,7 +864,7 @@
         The optional emulator_period element specifies the enforcement
         interval(unit: microseconds). Within emulator_period, emulator
-        threads(those excluding vcpus) of the domain will not be allowed to consume
+        threads(those excluding vCPUs) of the domain will not be allowed to consume
         more than emulator_quota worth of runtime. The value should be
         in range [1000, 1000000]. A period with value 0 means no value.
         Only QEMU driver support since 0.10.0
@@ -873,9 +873,9 @@
         The optional emulator_quota element specifies the maximum
         allowed bandwidth(unit: microseconds) for domain's emulator threads(those
-        excluding vcpus). A domain with emulator_quota as any negative
+        excluding vCPUs). A domain with emulator_quota as any negative
         value indicates that the domain has infinite bandwidth for emulator threads
-        (those excluding vcpus), which means that it is not bandwidth controlled.
+        (those excluding vCPUs), which means that it is not bandwidth controlled.
         The value should be in range [1000, 18446744073709551] or less than 0. A
         quota with value 0 means no value.
         Only QEMU driver support since 0.10.0
@@ -2131,13 +2131,13 @@
         QEMU, the user-configurable extended TSEG feature was unavailable up
         to and including pc-q35-2.9. Starting with
         pc-q35-2.10 the feature is available, with default size
-        16 MiB. That should suffice for up to roughly 272 VCPUs, 5 GiB guest
+        16 MiB. That should suffice for up to roughly 272 vCPUs, 5 GiB guest
         RAM in total, no hotplug memory range, and 32 GiB of 64-bit PCI MMIO
-        aperture. Or for 48 VCPUs, with 1TB of guest RAM, no hotplug DIMM
+        aperture. Or for 48 vCPUs, with 1TB of guest RAM, no hotplug DIMM
         range, and 32GB of 64-bit PCI MMIO aperture. The values may also vary
         based on the loader the VM is using.
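The element whose sizing this hunk discusses is the tseg child of the smm
feature; a brief sketch, using the 48 MiB value suggested further below for
large guests (the size is an assumption to be adapted per guest):

    <features>
      <smm state='on'>
        <tseg unit='MiB'>48</tseg>
      </smm>
    </features>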

-        Additional size might be needed for significantly higher VCPU counts
+        Additional size might be needed for significantly higher vCPU counts
         or increased address space (that can be memory, maxMemory,
         64-bit PCI MMIO aperture size; roughly 8 MiB of TSEG per 1 TiB of
         address space) which can also be rounded up.
@@ -2147,7 +2147,7 @@
         documentation of the guest OS or loader (if there is any), or test
         this by trial-and-error changing the value until the VM boots
         successfully. Yet another guiding value for users might be the fact
-        that 48 MiB should be enough for pretty large guests (240 VCPUs and
+        that 48 MiB should be enough for pretty large guests (240 vCPUs and
         4TB guest RAM), but it is on purpose not set as default as 48 MiB
         of unavailable RAM might be too much for small guests (e.g. with
         512 MiB of RAM).
@@ -2425,7 +2425,7 @@
         cpu_cycles
-        the count of cpu cycles (total/elapsed)
+        the count of CPU cycles (total/elapsed)
         perf.cpu_cycles
@@ -2460,25 +2460,25 @@
         stalled_cycles_frontend
-        the count of stalled cpu cycles in the frontend of the instruction
+        the count of stalled CPU cycles in the frontend of the instruction
         processor pipeline by applications running on the platform
         perf.stalled_cycles_frontend

         stalled_cycles_backend
-        the count of stalled cpu cycles in the backend of the instruction
+        the count of stalled CPU cycles in the backend of the instruction
         processor pipeline by applications running on the platform
         perf.stalled_cycles_backend

         ref_cpu_cycles
-        the count of total cpu cycles not affected by CPU frequency scaling
+        the count of total CPU cycles not affected by CPU frequency scaling
         by applications running on the platform
         perf.ref_cpu_cycles

         cpu_clock
-        the count of cpu clock time, as measured by a monotonic
+        the count of CPU clock time, as measured by a monotonic
         high-resolution per-CPU timer, by applications running on the
         platform
         perf.cpu_clock
@@ -2505,7 +2505,7 @@
         cpu_migrations
-        the count of cpu migrations, that is, where the process
+        the count of CPU migrations, that is, where the process
         moved from one logical processor to another, by applications
         running on the platform
         perf.cpu_migrations
@@ -5621,8 +5621,8 @@ qemu-kvm -net nic,model=? /dev/null
         The resulting difference, according to the qemu developer who
         added the option is: "bh makes tx more asynchronous and reduces
         latency, but potentially causes more processor bandwidth
-        contention since the cpu doing the tx isn't necessarily the
-        cpu where the guest generated the packets."
+        contention since the CPU doing the tx isn't necessarily the
+        CPU where the guest generated the packets."

         In general you should leave this option alone, unless you
         are very certain you know what you are doing.
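The option discussed in this last hunk is the txmode attribute of the
interface driver element; an illustrative sketch (per the libvirt docs,
'iothread' corresponds to tx=bh and 'timer' to tx=timer; the network name
is an assumption):

    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
      <driver txmode='iothread'/>
    </interface>

-- 
GitLab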