<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
  <body>
    <h1>KVM/QEMU hypervisor driver</h1>

    <ul id="toc"></ul>

    <p>
      The libvirt KVM/QEMU driver can manage any QEMU emulator from
      version 1.5.0 or later.
    </p>

    <h2><a id="project">Project Links</a></h2>

    <ul>
      <li>
        The <a href="https://www.linux-kvm.org/">KVM</a> Linux
        hypervisor
      </li>
      <li>
        The <a href="https://wiki.qemu.org/Index.html">QEMU</a> emulator
      </li>
    </ul>

    <h2><a id="prereq">Deployment pre-requisites</a></h2>

    <ul>
      <li>
        <strong>QEMU emulators</strong>: The driver will probe <code>/usr/bin</code>
        for the presence of <code>qemu</code>, <code>qemu-system-x86_64</code>,
        <code>qemu-system-microblaze</code>,
        <code>qemu-system-microblazeel</code>,
        <code>qemu-system-mips</code>, <code>qemu-system-mipsel</code>,
        <code>qemu-system-sparc</code>, <code>qemu-system-ppc</code>. The results
        of this probing can be seen in the capabilities XML output, as
        illustrated after this list.
      </li>
      <li>
        <strong>KVM hypervisor</strong>: The driver will probe <code>/usr/bin</code>
        for the presence of <code>qemu-kvm</code> and for the <code>/dev/kvm</code>
        device node. If both are found, then KVM fully virtualized, hardware
        accelerated guests will be available.
      </li>
    </ul>
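
    <p>
      For example, the results of this probing can be inspected via the
      capabilities XML (a quick sketch using virsh):
    </p>

<pre>
virsh -c qemu:///system capabilities
</pre>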

    <h2><a id="uris">Connections to QEMU driver</a></h2>

    <p>
    The libvirt QEMU driver is a multi-instance driver, providing a single
    system wide privileged driver (the "system" instance), and per-user
    unprivileged drivers (the "session" instance). The URI driver protocol
    is "qemu". Some example connection URIs for the libvirt driver are:
    </p>

<pre>
qemu:///session                      (local access to per-user instance)
qemu+unix:///session                 (local access to per-user instance)

qemu:///system                       (local access to system instance)
qemu+unix:///system                  (local access to system instance)
qemu://example.com/system            (remote access, TLS/x509)
qemu+tcp://example.com/system        (remote access, SASL/Kerberos)
qemu+ssh://root@example.com/system   (remote access, SSH tunnelled)
</pre>

    <h3><a id="uriembedded">Embedded driver</a></h3>

    <p>
      Since 6.1.0 the QEMU driver has experimental support for operating
      in an embedded mode. In this scenario, rather than connecting to
      the libvirtd daemon, the QEMU driver runs in the client application
      process directly. To use this, the client application must have
      registered &amp; be running an instance of the event loop. To open
      the driver in embedded mode the application uses the new URI path
      and specifies a virtual root directory under which the driver will
      create content.

    <pre>
      qemu:///embed?root=/some/dir
    </pre>

    <p>
      Broadly speaking the range of functionality is intended to be
      on a par with that seen when using the traditional system or
      session libvirt connections to QEMU. The features will of course
      differ depending on whether the application using the embedded
      driver is running privileged or unprivileged. For example, PCI
      device assignment or TAP based networking are only available
      when running privileged. While the embedded mode is still classed
      as experimental, some features may change their default settings
      between releases.
    </p>

    <p>
      By default if the application uses any APIs associated with
      secondary drivers, these will result in a connection being
      opened to the corresponding driver in libvirtd. For example,
      this allows a virtual machine from the embedded QEMU to connect
      its NIC to a virtual network or connect its disk to a storage
      volume. Some of the secondary drivers will also be able to support
      running in embedded mode. Currently this is supported by the
      secrets driver, to allow for use of VMs with encrypted disks.
    </p>

    <h4><a id="embedTree">Directory tree</a></h4>

    <p>
      Under the specified root directory the following locations will
      be used:
    </p>

    <pre>
/some/dir
  |
  +- log
  |   |
  |   +- qemu
  |   +- swtpm
  |
  +- etc
  |   |
  |   +- qemu
  |   +- pki
  |       |
  |       +- qemu
  |
  +- run
  |   |
  |   +- qemu
  |   +- swtpm
  |
  +- cache
  |   |
  |   +- qemu
  |
  +- lib
      |
      +- qemu
      +- swtpm
    </pre>

    <p>
      Note that UNIX domain sockets used for QEMU virtual machines have
      a maximum path length of 108 characters. Bear this in mind
      when picking a root directory to avoid risk of exhausting the
      filename space. The application is responsible for recursively
      purging the contents of this directory tree once it no longer
      requires a connection, though it can also be left intact for reuse
      when opening a future connection.
    </p>

    <h4><a id="embedAPI">API usage with event loop</a></h4>

    <p>
      To use the QEMU driver in embedded mode the application must
      register an event loop with libvirt. Many of the QEMU driver
      API calls will rely on the event loop processing data. With this
      in mind, applications must <strong>NEVER</strong> invoke API
      calls from the event loop thread itself, only other threads.
      Not following this rule will lead to deadlocks in the API.
      This restriction is intended to be lifted in a future release
      of libvirt, once QMP processing moves to a dedicated thread.
    </p>
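
    <p>
      A minimal sketch in C of this pattern, registering libvirt's
      default event loop implementation and opening the embedded driver
      from a separate worker thread (the root directory, and the lack
      of error handling and cleanup, are purely illustrative):
    </p>

<pre>
#include &lt;stdio.h&gt;
#include &lt;pthread.h&gt;
#include &lt;libvirt/libvirt.h&gt;

static void *worker(void *opaque)
{
    /* API calls must run outside the event loop thread */
    virConnectPtr conn = virConnectOpen("qemu:///embed?root=/tmp/embed-demo");
    if (conn == NULL) {
        fprintf(stderr, "failed to open embedded QEMU driver\n");
        return NULL;
    }
    /* ... define, start and manage domains here ... */
    virConnectClose(conn);
    return NULL;
}

int main(void)
{
    pthread_t thread;

    virEventRegisterDefaultImpl();  /* must precede opening the driver */
    pthread_create(&amp;thread, NULL, worker, NULL);

    for (;;)
        virEventRunDefaultImpl();   /* this thread only services the event loop */
}
</pre>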

    <h2><a id="security">Driver security architecture</a></h2>

    <p>
      There are multiple layers to security in the QEMU driver, allowing for
      flexibility in the use of QEMU based virtual machines.
    </p>

    <h3><a id="securitydriver">Driver instances</a></h3>

    <p>
      As explained above there are two ways to access the QEMU driver
      in libvirt. The "qemu:///session" family of URIs connect to a
      libvirtd instance running as the same user/group ID as the client
      application. Thus the QEMU instances spawned from this driver will
      share the same privileges as the client application. The intended
      use case for this driver is desktop virtualization, with virtual
      machines storing their disk images in the user's home directory and
      being managed from the local desktop login session.
    </p>

    <p>
      The "qemu:///system" family of URIs connect to a
      libvirtd instance running as the privileged system account 'root'.
      Thus the QEMU instances spawned from this driver may have much
      higher privileges than the client application managing them.
      The intended use case for this driver is server virtualization,
      where the virtual machines may need to be connected to host
      resources (block, PCI, USB, network devices) whose access requires
      elevated privileges.
    </p>

    <h3><a id="securitydac">POSIX users/groups</a></h3>

    <p>
      In the "session" instance, the POSIX users/groups model restricts QEMU
      virtual machines (and libvirtd in general) to only have access to resources
      with the same user/group ID as the client application. There is no
      finer level of configuration possible for the "session" instances.
    </p>

    <p>
      In the "system" instance, libvirt releases from 0.7.0 onwards allow
      control over the user/group that the QEMU virtual machines are run
      as. A build of libvirt with no configuration parameters set will
      still run QEMU processes as root:root. It is possible to change
      this default by using the --with-qemu-user=$USERNAME and
      --with-qemu-group=$GROUPNAME arguments to 'configure' during
      build. It is strongly recommended that vendors build with both
      of these arguments set to 'qemu'. Regardless of this build time
      default, administrators can set a per-host default setting in
      the <code>/etc/libvirt/qemu.conf</code> configuration file via
      the <code>user=$USERNAME</code> and <code>group=$GROUPNAME</code>
      parameters. When a non-root user or group is configured, the
      libvirt QEMU driver will change uid/gid to match immediately
      before executing the QEMU binary for a virtual machine.
    </p>
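
    <p>
      For example, a minimal sketch of the relevant settings in
      <code>/etc/libvirt/qemu.conf</code>:
    </p>

<pre>
user = "qemu"
group = "qemu"
</pre>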

    <p>
      If QEMU virtual machines from the "system" instance are being
      run as non-root, there will be greater restrictions on what
      host resources the QEMU process will be able to access. The
      libvirtd daemon will attempt to manage permissions on resources
      to minimise the likelihood of unintentional security denials,
      but the administrator / application developer must be aware of
      some of the consequences / restrictions.
    </p>

    <ul>
      <li>
        <p>
          The directories <code>/var/run/libvirt/qemu/</code>,
          <code>/var/lib/libvirt/qemu/</code> and
          <code>/var/cache/libvirt/qemu/</code> must all have their
          ownership set to match the user / group ID that QEMU
          guests will be run as. If the vendor has set a non-root
          user/group for the QEMU driver at build time, the
          permissions should be set automatically at install time.
          If a host administrator customizes user/group in
          <code>/etc/libvirt/qemu.conf</code>, they will need to
          manually set the ownership on these directories.
        </p>
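        <p>
          A sketch of the manual fix, assuming the configured user and
          group are both <code>qemu</code> (run as root):
        </p>
<pre>
chown qemu:qemu /var/run/libvirt/qemu/ /var/lib/libvirt/qemu/ /var/cache/libvirt/qemu/
</pre>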
      </li>
      <li>
        <p>
          When attaching USB and PCI devices to a QEMU guest,
          QEMU will need to access files in <code>/dev/bus/usb</code>
          and <code>/sys/bus/pci/devices</code> respectively. The libvirtd daemon
          will automatically set the ownership on specific devices
          that are assigned to a guest at start time. There should
          not be any need for administrator changes in this respect.
        </p>
      </li>
      <li>
        <p>
          Any files/devices used as guest disk images must be
          accessible to the user/group ID that QEMU guests are
          configured to run as. The libvirtd daemon will automatically
          set the ownership of the file/device path to the correct
          user/group ID. Applications / administrators must be aware
          though that the parent directory permissions may still
          deny access. The directories containing disk images
          must either have their ownership set to match the user/group
          configured for QEMU, or their UNIX file permissions must
          have the 'execute/search' bit enabled for 'others'.
        </p>
        <p>
          The simplest option is the latter one, of just enabling
          the 'execute/search' bit. For any directory to be used
          for storing disk images, this can be achieved by running
          the following command on the directory itself, and any
          parent directories:
        </p>
<pre>
chmod o+x /path/to/directory
</pre>
        <p>
          In particular note that if using the "system" instance
          and attempting to store disk images in a user home
          directory, the default permissions on $HOME are typically
          too restrictive to allow access.
        </p>
      </li>
    </ul>

    <p>
      The libvirt maintainers <strong>strongly recommend against</strong>
      running QEMU as the root user/group. This should not be required
      in most supported usage scenarios, as libvirt will generally do the
      right thing to grant QEMU access to files it is permitted to
      use when it is running non-root.
    </p>

    <h3><a id="securitycap">Linux process capabilities</a></h3>

    <p>
      In versions of libvirt prior to 6.0.0, even if QEMU was configured
      to run as the root user / group, libvirt would strip all process
      capabilities. This meant that QEMU could only read/write files
      owned by root, or with open permissions. In reality, stripping
      capabilities did not have any security benefit, as it was trivial
      to get commands to run in another context with full capabilities,
      for example, by creating a cronjob.
    </p>
    <p>
      Thus since 6.0.0, if QEMU is running as root, it will keep all
      process capabilities. Behaviour when QEMU is running non-root
      is unchanged: it still has no capabilities.
    </p>

    <h3><a id="securityselinux">SELinux basic confinement</a></h3>

    <p>
      The basic SELinux protection for QEMU virtual machines is intended to
      protect the host OS from a compromised virtual machine process. There
      is no protection between guests.
    </p>

    <p>
      In the basic model, all QEMU virtual machines run under the confined
      domain <code>root:system_r:qemu_t</code>. It is required that any
      disk image assigned to a QEMU virtual machine is labelled with
      <code>system_u:object_r:virt_image_t</code>. In a default deployment,
      package vendors/distributors will typically ensure that the directory
      <code>/var/lib/libvirt/images</code> has this label, such that any
      disk images created in this directory will automatically inherit the
      correct labelling. If attempting to use disk images in another
      location, the user/administrator must ensure the directory has been
      given the requisite label. Likewise physical block devices must
      be labelled <code>system_u:object_r:virt_image_t</code>.
    </p>
    <p>
      Not all filesystems allow for labelling of individual files. In
      particular NFS, VFat and NTFS have no support for labelling. In
      these cases administrators must use the 'context' option when
      mounting the filesystem to set the default label to
      <code>system_u:object_r:virt_image_t</code>. In the case of
      NFS, there is an alternative option, of enabling the <code>virt_use_nfs</code>
      SELinux boolean.
    </p>
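
    <p>
      A sketch of both approaches for an NFS mount (the server name and
      export path are illustrative):
    </p>

<pre>
# set a default label on everything below the mount point
mount -t nfs -o context="system_u:object_r:virt_image_t:s0" \
    nfs.example.com:/export/images /var/lib/libvirt/images

# NFS-only alternative: enable the SELinux boolean instead
setsebool -P virt_use_nfs 1
</pre>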

    <h3><a id="securitysvirt">SELinux sVirt confinement</a></h3>

    <p>
      The SELinux sVirt protection for QEMU virtual machines builds on the
      basic level of protection, to also allow individual guests to be
      protected from each other.
    </p>

    <p>
      In the sVirt model, each QEMU virtual machine runs under its own
      confined domain, which is based on <code>system_u:system_r:svirt_t:s0</code>
      with a unique category appended, eg, <code>system_u:system_r:svirt_t:s0:c34,c44</code>.
      The rules are set up such that a domain can only access files which are
      labelled with the matching category level, eg
      <code>system_u:object_r:svirt_image_t:s0:c34,c44</code>. This prevents one
      QEMU process from accessing any file resources that are private to another QEMU
      process.
    </p>

    <p>
      There are two ways of assigning labels to virtual machines under sVirt.
      In the default setup, if sVirt is enabled, guests will get an automatically
      assigned unique label each time they are booted. The libvirtd daemon will
      also automatically relabel exclusive access disk images to match this
      label.  Disks that are marked as &lt;shared&gt; will get a generic
      label <code>system_u:object_r:svirt_image_t:s0</code> allowing all guests
      read/write access to them, while disks marked as &lt;readonly&gt; will
      get a generic label <code>system_u:object_r:svirt_content_t:s0</code>
      which allows all guests read-only access.
    </p>

    <p>
      With statically assigned labels, the application should include the
      desired guest and file labels in the XML at time of creating the
      guest with libvirt. In this scenario the application is responsible
      for ensuring the disk images &amp; similar resources are suitably
      labelled to match; libvirtd will not attempt any relabelling.
    </p>
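
    <p>
      A sketch of the XML for such a statically labelled guest, placed
      at the top level of the domain definition (the category pair is
      illustrative):
    </p>

<pre>
&lt;seclabel type='static' model='selinux' relabel='no'&gt;
  &lt;label&gt;system_u:system_r:svirt_t:s0:c392,c662&lt;/label&gt;
&lt;/seclabel&gt;
</pre>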

    <p>
      If the sVirt security model is active, then the node capabilities
      XML will include its details. If a virtual machine is currently
      protected by the security model, then the guest XML will include
      its assigned labels. If enabled at compile time, the sVirt security
      model will always be activated if SELinux is available on the host
      OS. To disable sVirt, and revert to the basic level of SELinux
      protection (host protection only), the <code>/etc/libvirt/qemu.conf</code>
      file can be used to change the setting to <code>security_driver="none"</code>.
    </p>

    <h3><a id="securitysvirtaa">AppArmor sVirt confinement</a></h3>

    <p>
      When using basic AppArmor protection for the libvirtd daemon and
      QEMU virtual machines, the intention is to protect the host OS
      from a compromised virtual machine process. There is no protection
      between guests.
    </p>

    <p>
      The AppArmor sVirt protection for QEMU virtual machines builds on
      this basic level of protection, to also allow individual guests to
      be protected from each other.
    </p>

    <p>
      In the sVirt model, if a profile is loaded for the libvirtd daemon,
      then each <code>qemu:///system</code> QEMU virtual machine will have
      a profile created for it when the virtual machine is started if one
      does not already exist. This generated profile uses a profile name
      based on the UUID of the QEMU virtual machine and contains rules
      allowing access to only the files it needs to run, such as its disks,
      pid file and log files. Just before the QEMU virtual machine is
      started, the libvirtd daemon will change into this unique profile,
      preventing the QEMU process from accessing any file resources that
      are present in another QEMU process or the host machine.
    </p>

    <p>
      The AppArmor sVirt implementation is flexible in that it allows an
      administrator to customize the template file in
      <code>/etc/apparmor.d/libvirt/TEMPLATE</code> for site-specific
      access for all newly created QEMU virtual machines. Also, when a new
      profile is generated, two files are created:
      <code>/etc/apparmor.d/libvirt/libvirt-&lt;uuid&gt;</code> and
      <code>/etc/apparmor.d/libvirt/libvirt-&lt;uuid&gt;.files</code>. The
      former can be fine-tuned by the administrator to allow custom access
      for this particular QEMU virtual machine, and the latter will be
      updated appropriately when required file access changes, such as when
      a disk is added. This flexibility allows for situations such as
      having one virtual machine in complain mode with all others in
      enforce mode.
    </p>
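
    <p>
      For instance, a sketch of putting a single guest's generated
      profile into complain mode with the standard AppArmor tools (the
      UUID placeholder stands for the real machine UUID):
    </p>

<pre>
aa-complain /etc/apparmor.d/libvirt/libvirt-&lt;uuid&gt;
</pre>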

    <p>
      While users can define their own AppArmor profile scheme, a typical
      configuration will include a profile for <code>/usr/sbin/libvirtd</code>,
      <code>/usr/lib/libvirt/virt-aa-helper</code> (a helper program which the
      libvirtd daemon uses instead of manipulating AppArmor directly), and
      an abstraction to be included by <code>/etc/apparmor.d/libvirt/TEMPLATE</code>
      (typically <code>/etc/apparmor.d/abstractions/libvirt-qemu</code>).
      An example profile scheme can be found in the examples/apparmor
      directory of the source distribution.
    </p>

    <p>
      If the sVirt security model is active, then the node capabilities
      XML will include its details. If a virtual machine is currently
      protected by the security model, then the guest XML will include
      its assigned profile name. If enabled at compile time, the sVirt
      security model will be activated if AppArmor is available on the host
      OS and a profile for the libvirtd daemon is loaded when libvirtd is
      started. To disable sVirt, and revert to the basic level of AppArmor
      protection (host protection only), the <code>/etc/libvirt/qemu.conf</code>
      file can be used to change the setting to <code>security_driver="none"</code>.
    </p>

    <h3><a id="securityacl">Cgroups device ACLs</a></h3>

    <p>
      Linux kernels have a capability known as "cgroups" which is used
      for resource management. It is implemented via a number of "controllers",
      each controller covering a specific task/functional area. One of the
      available controllers is the "devices" controller, which is able to
      set up whitelists of block/character devices that a cgroup should be
      allowed to access. If the "devices" controller is mounted on a host,
      then libvirt will automatically create a dedicated cgroup for each
      QEMU virtual machine and set up the device whitelist so that the QEMU
      process can only access shared devices, and explicitly assigned disk
      images backed by block devices.
    </p>

    <p>
      The list of shared devices a guest is allowed access to is:
    </p>

<pre>
/dev/null, /dev/full, /dev/zero,
/dev/random, /dev/urandom,
/dev/ptmx, /dev/kvm,
/dev/rtc, /dev/hpet
</pre>

    <p>
      In the event of unanticipated needs arising, this can be customized
      via the <code>/etc/libvirt/qemu.conf</code> file (see the sketch below).
      To mount the cgroups device controller, the following command
      should be run as root, prior to starting libvirtd:
    </p>

<pre>
mkdir /dev/cgroup
mount -t cgroup none /dev/cgroup -o devices
</pre>

    <p>
      libvirt will then place each virtual machine in a cgroup at
      <code>/dev/cgroup/libvirt/qemu/$VMNAME/</code>
    </p>
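
    <p>
      The customization mentioned above uses the
      <code>cgroup_device_acl</code> parameter in
      <code>/etc/libvirt/qemu.conf</code>. A minimal sketch, restating
      the default list plus one extra, purely illustrative, device:
    </p>

<pre>
cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm",
    "/dev/rtc", "/dev/hpet",
    "/dev/ttyS0"
]
</pre>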

    <h2><a id="imex">Import and export of libvirt domain XML configs</a></h2>

    <p>The QEMU driver currently supports a single native
      config format known as <code>qemu-argv</code>. The data for this format
      is expected to be a single line containing first a list of environment
      variables, then the QEMU binary name, and finally the QEMU command
      line arguments.</p>

    <h3><a id="xmlimport">Converting from QEMU args to domain XML</a></h3>

    <p>
      <b>Note:</b> this operation is <span class="removed">removed as of
        5.5.0</span> and will return an error.
    </p>
    <p>
      The <code>virsh domxml-from-native</code> command provides a way to
      convert an existing set of QEMU args into a guest description
      using libvirt Domain XML that can then be used by libvirt.
      Please note that this command is intended to be used to convert
      existing qemu guests previously started from the command line to
      be managed through libvirt.  It should not be used as a method of
      creating new guests from scratch.  New guests should be created
      using an application calling the libvirt APIs (see
      the <a href="apps.html">libvirt applications page</a> for some
      examples) or by manually crafting XML to pass to virsh.
    </p>

    <h3><a id="xmlexport">Converting from domain XML to QEMU args</a></h3>

    <p>
      The <code>virsh domxml-to-native</code> command provides a way to convert a
      guest description using libvirt Domain XML, into a set of QEMU args
      that can be run manually. Note that currently the command line formatted
      by libvirt is no longer suited to manually running qemu, as the
      configuration expects various resources and open file descriptors to be
      passed to the process, which are usually prepared by libvirtd.
    </p>
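
    <p>
      A quick sketch of its usage (the file name <code>guest.xml</code>
      is illustrative):
    </p>

<pre>
virsh -c qemu:///system domxml-to-native qemu-argv guest.xml
</pre>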

    <h2><a id="qemucommand">Pass-through of arbitrary qemu
    commands</a></h2>

    <p>Libvirt provides an XML namespace and an optional
      library <code>libvirt-qemu.so</code> for dealing specifically
      with qemu.  When used correctly, these extensions allow testing
      specific qemu features that have not yet been ported to the
      generic libvirt XML and API interfaces.  However, they
      are <b>unsupported</b>, in that the library is not guaranteed to
      have a stable API, abusing the library or XML may result in
      inconsistent state that crashes libvirtd, and upgrading either
      qemu-kvm or libvirtd may break behavior of a domain that was
      relying on a qemu-specific pass-through.  If you find yourself
      needing to use them to access a particular qemu feature, then
      please post an RFE to the libvirt mailing list to get that
      feature incorporated into the stable libvirt XML and API
      interfaces.
    </p>
    <p>The library provides two
      APIs: <code>virDomainQemuMonitorCommand</code>, for sending an
      arbitrary monitor command (in either HMP or QMP format) to a
      qemu guest (<span class="since">Since 0.8.3</span>),
      and <code>virDomainQemuAttach</code>, for registering a qemu
      domain that was manually started so that it can then be managed
      by libvirtd (<span class="since">Since 0.9.4</span>,
      <span class="removed">removed as of 5.5.0</span>).
    </p>
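
    <p>
      For example, a sketch of sending an HMP monitor command to a
      running guest via virsh (the domain name <code>demo</code> is
      illustrative):
    </p>

<pre>
virsh -c qemu:///system qemu-monitor-command --hmp demo 'info kvm'
</pre>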
    <p>Additionally, the following XML additions allow fine-tuning of
      the command line given to qemu when starting a domain
      (<span class="since">Since 0.8.3</span>).  In order to use the
      XML additions, it is necessary to issue an XML namespace request
      (the special <code>xmlns:<i>name</i></code> attribute) that
      pulls in <code>http://libvirt.org/schemas/domain/qemu/1.0</code>;
      typically, the namespace is given the name
      of <code>qemu</code>.  With the namespace in place, it is then
      possible to add an element <code>&lt;qemu:commandline&gt;</code>
      under <code>domain</code>, with the following sub-elements
      repeated as often as needed:
    </p>
      <dl>
        <dt><code>qemu:arg</code></dt>
        <dd>Add an additional command-line argument to the qemu
          process when starting the domain, given by the value of the
          attribute <code>value</code>.
        </dd>
        <dt><code>qemu:env</code></dt>
        <dd>Add an additional environment variable to the qemu
          process when starting the domain, given with the name-value
          pair recorded in the attributes <code>name</code>
          and optional <code>value</code>.</dd>
      </dl>
      <p>Example:</p><pre>
&lt;domain type='qemu' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'&gt;
  &lt;name&gt;QEMU-fedora-i686&lt;/name&gt;
  &lt;memory&gt;219200&lt;/memory&gt;
  &lt;os&gt;
    &lt;type arch='i686' machine='pc'&gt;hvm&lt;/type&gt;
  &lt;/os&gt;
  &lt;devices&gt;
    &lt;emulator&gt;/usr/bin/qemu-system-x86_64&lt;/emulator&gt;
  &lt;/devices&gt;
  &lt;qemu:commandline&gt;
    &lt;qemu:arg value='-newarg'/&gt;
    &lt;qemu:env name='QEMU_ENV' value='VAL'/&gt;
  &lt;/qemu:commandline&gt;
&lt;/domain&gt;
</pre>

    <h2><a id="xmlnsfeatures">QEMU feature configuration for testing</a></h2>

      <p>
        In some cases, e.g. when developing a new feature or for testing, it
        may be required to control a given qemu feature (or qemu capability),
        to test it before it's complete or to disable it for debugging
        purposes.
        <span class="since">Since 5.5.0</span> it's possible to use the same
        special qemu namespace as above
        (<code>http://libvirt.org/schemas/domain/qemu/1.0</code>) and use the
        <code>&lt;qemu:capabilities&gt;</code> element to add
        (<code>&lt;qemu:add capability="capname"/&gt;</code>) or remove
        (<code>&lt;qemu:del capability="capname"/&gt;</code>) capability bits.
        The naming of the feature bits is the same as libvirt uses in the
        status XML. Note that this feature is meant for experiments only and
        should <strong>not</strong> be used in production.
      </p>

      <p>Example:</p><pre>
&lt;domain type='qemu' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'&gt;
  &lt;name&gt;testvm&lt;/name&gt;

   [...]

  &lt;qemu:capabilities&gt;
    &lt;qemu:add capability='blockdev'/&gt;
    &lt;qemu:del capability='drive'/&gt;
  &lt;/qemu:capabilities&gt;
&lt;/domain&gt;
</pre>

    <h2><a id="xmlconfig">Example domain XML config</a></h2>

    <h3>QEMU emulated guest on x86_64</h3>

        <pre>&lt;domain type='qemu'&gt;
  &lt;name&gt;QEMU-fedora-i686&lt;/name&gt;
  &lt;uuid&gt;c7a5fdbd-cdaf-9455-926a-d65c16db1809&lt;/uuid&gt;
  &lt;memory&gt;219200&lt;/memory&gt;
  &lt;currentMemory&gt;219200&lt;/currentMemory&gt;
  &lt;vcpu&gt;2&lt;/vcpu&gt;
  &lt;os&gt;
    &lt;type arch='i686' machine='pc'&gt;hvm&lt;/type&gt;
    &lt;boot dev='cdrom'/&gt;
  &lt;/os&gt;
  &lt;devices&gt;
    &lt;emulator&gt;/usr/bin/qemu-system-x86_64&lt;/emulator&gt;
    &lt;disk type='file' device='cdrom'&gt;
      &lt;source file='/home/user/boot.iso'/&gt;
      &lt;target dev='hdc'/&gt;
      &lt;readonly/&gt;
    &lt;/disk&gt;
    &lt;disk type='file' device='disk'&gt;
      &lt;source file='/home/user/fedora.img'/&gt;
      &lt;target dev='hda'/&gt;
    &lt;/disk&gt;
    &lt;interface type='network'&gt;
      &lt;source network='default'/&gt;
    &lt;/interface&gt;
    &lt;graphics type='vnc' port='-1'/&gt;
  &lt;/devices&gt;
&lt;/domain&gt;</pre>

    <h3>KVM hardware accelerated guest on i686</h3>

        <pre>&lt;domain type='kvm'&gt;
  &lt;name&gt;demo2&lt;/name&gt;
  &lt;uuid&gt;4dea24b3-1d52-d8f3-2516-782e98a23fa0&lt;/uuid&gt;
  &lt;memory&gt;131072&lt;/memory&gt;
  &lt;vcpu&gt;1&lt;/vcpu&gt;
  &lt;os&gt;
    &lt;type arch="i686"&gt;hvm&lt;/type&gt;
  &lt;/os&gt;
  &lt;clock sync="localtime"/&gt;
  &lt;devices&gt;
    &lt;emulator&gt;/usr/bin/qemu-kvm&lt;/emulator&gt;
    &lt;disk type='file' device='disk'&gt;
      &lt;source file='/var/lib/libvirt/images/demo2.img'/&gt;
      &lt;target dev='hda'/&gt;
    &lt;/disk&gt;
    &lt;interface type='network'&gt;
      &lt;source network='default'/&gt;
      &lt;mac address='24:42:53:21:52:45'/&gt;
    &lt;/interface&gt;
    &lt;graphics type='vnc' port='-1' keymap='de'/&gt;
  &lt;/devices&gt;
&lt;/domain&gt;</pre>

  </body>
</html>