From 6f7f84a77297970af539b0e9cde00e6ac30eba90 Mon Sep 17 00:00:00 2001
From: Daniel Veillard
Date: Thu, 15 Mar 2007 14:27:09 +0000
Subject: [PATCH] * src/virsh.c src/xen_internal.c: applied patch from Atsushi
 SAKAI to better handle the case where there is no limit in the domain upper
 memory size
 * docs/architecture.html docs/format.html docs/intro.html docs/libvir.html:
 started to update the documentation to reflect the current state
 Daniel

---
 ChangeLog              |  9 ++++
 docs/architecture.html | 56 +++++++++++++++++--------
 docs/format.html       | 10 +++--
 docs/intro.html        |  8 ++--
 docs/libvir.html       | 93 ++++++++++++++++++++++++++++++------------
 src/virsh.c            |  7 +++-
 src/xen_internal.c     |  4 +-
 7 files changed, 136 insertions(+), 51 deletions(-)

diff --git a/ChangeLog b/ChangeLog
index 09c84578dc..d5cc28619f 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,12 @@
+Thu Mar 15 15:26:20 CET 2007 Daniel Veillard
+
+	* src/virsh.c src/xen_internal.c: applied patch from Atsushi SAKAI
+	  to better handle the case where there is no limit in the domain
+	  upper memory size
+	* docs/architecture.html docs/format.html docs/intro.html
+	  docs/libvir.html: started to update the documentation to reflect
+	  the current state
+
 Thu Mar 15 08:40:33 CET 2007 Daniel Veillard

	* configure.in proxy/Makefile.am proxy/libvirt_proxy.c

diff --git a/docs/architecture.html b/docs/architecture.html
index fbbf3c0ed3..ac1948411f 100644
--- a/docs/architecture.html
+++ b/docs/architecture.html
@@ -1,7 +1,10 @@
-libvirt architecture

libvirt architecture

This is in a large part Xen specific since this is the only hypervisor -supported at the moment

When running in a Xen environment, programs using libvirt have to execute +libvirt architecture

libvirt architecture

Currently libvirt supports 2 kinds of virtualization, and its internal +structure is based on a driver model which simplifies adding new engines:

Libvirt Xen support

When running in a Xen environment, programs using libvirt have to execute in "Domain 0", which is the primary Linux OS loaded on the machine. That OS kernel provides most if not all of the actual drivers used by the set of domains. It also runs the Xen Store, a database of informations shared by the @@ -11,35 +14,54 @@ drivers, kernels and daemons communicate though a shared system bus implemented in the hypervisor. The figure below tries to provide a view of this environment:

The Xen architecture

The library can be initialized in 2 ways depending on the level of privilege of the embedding program. If it runs with root access, -virConnectOpen() can be used, it will use three different ways to connect to +virConnectOpen() can be used; it will use different ways to connect to the Xen infrastructure:

  • a connection to the Xen Daemon through an HTTP RPC layer
  • a read/write connection to the Xen Store
  • use Xen Hypervisor calls
  • +
  • when used as non-root libvirt connects to a proxy daemon running + as root and providing read-only support

The library will usually interact with the Xen daemon for any operation changing the state of the system, but for performance and accuracy reasons may talk directly to the hypervisor when gathering state information, at least when possible (i.e. when the running program using libvirt has root privilege access).

If it runs without root access virConnectOpenReadOnly() should be used to -connect to initialize the library. It will try to open the read-only socket -/var/run/xenstored/socket_ro to connect to the Xen Store and -also try to use the RPC to the Xen daemon. In this case use of hypervisor -calls and write to the Xen Store will not be possible, restraining the amount -of APIs available and slowing down information gathering about domains.

Internal architecture

As the previous section explains, libvirt can communicate using different -channels with the current hypervisor, and should also be able to use -different kind of hypervisor. To simplify the internal design, code, ease +connect to initialize the library. It will then fork a libvirt_proxy program +running as root and providing read-only access to the API; this is then +only useful for reporting and monitoring.
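The selection of connection channels described in the text above can be illustrated with a small standalone sketch. This is hedged, illustrative code: the function and flag names (`open_channels`, `CH_XEND`, etc.) are made up for this example and are not libvirt internals; a real program would simply call virConnectOpen() or virConnectOpenReadOnly() and let the library decide.

```c
/* Hypothetical channel flags -- illustrative only, not libvirt's API. */
enum {
    CH_XEND      = 1,  /* HTTP RPC connection to the Xen daemon   */
    CH_XENSTORE  = 2,  /* read/write connection to the Xen Store  */
    CH_HYPERCALL = 4,  /* direct Xen hypervisor calls             */
    CH_PROXY     = 8   /* read-only proxy daemon running as root  */
};

/* Returns the set of channels a connection would use, per the text:
 * a root caller gets the xend RPC, the Xen Store and hypercalls,
 * while a non-root caller falls back to the read-only proxy. */
int open_channels(int is_root)
{
    if (is_root)
        return CH_XEND | CH_XENSTORE | CH_HYPERCALL;
    return CH_PROXY;
}
```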

Libvirt QEmu and KVM support

The model for QEmu and KVM is very similar: basically KVM is +based on QEmu for the process controlling a new domain, and only small details +differ between the two. In both cases the libvirt API is provided +by a controlling process forked by libvirt in the background, +which launches and controls the QEmu or KVM process. That program, called +libvirt_qemud, talks through a specific protocol to the library, and +connects to the console of the QEmu process in order to control and +report on its status. Libvirt tries to expose all the emulation +models of QEmu; the selection is done when creating the new domain, +by specifying the architecture and machine type targeted.

The code controlling the QEmu process is available in the +qemud/ subdirectory.

the driver based architecture

As the previous section explains, libvirt can communicate using different +channels with the Xen hypervisor, and is also able to use different kinds +of hypervisors. To simplify the internal design and code, ease maintenance and simplify the support of other virtualization engines, the internals have been structured as one core component, the libvirt.c module acting as a front-end for the library API, and a set of hypervisor drivers defining a common set of routines. That way the Xen Daemon access, the Xen Store one, and the Hypervisor hypercalls are all isolated in separate C modules implementing at least a subset of the common operations defined by the -drivers present in driver.h:

  • xend_internal: implements the driver functions though the Xen - Daemon
  • +drivers present in driver.h. The driver architecture is used to add support +for other virtualization engines:

    • xend_internal: implements the driver functions though the Xen + Daemon.
    • xs_internal: implements the subset of the driver availble though the - Xen Store
    • + Xen Store.
    • xen_internal: provide the implementation of the functions possible via - direct hypervisor access
    • + direct Xen hypervisor access. +
    • proxy_internal: provides read-only Xen access via a proxy; the proxy + code is in the proxy/ subdirectory.
    • +
    • xm_internal: provides support for Xen domains which are defined but not running.
    • +
    • qemu_internal: implements the driver functions for the QEmu and KVM + virtualization engines. It also uses a specific daemon in qemud/ which + interacts with the QEmu process to implement the libvirt API.
    • +
    • test: this is a test driver useful for regression tests of the + front-end part of libvirt.

    Note that a given driver may only implement a subset of those functions, -for example saving a domain state to disk and restoring it is only possible -though the Xen Daemon, on the other hand all interfaces allow to query the -runtime state of a given domain.

+for example saving a Xen domain state to disk and restoring it is only possible +through the Xen Daemon; in that case the driver entry points are initialized to +NULL.
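The NULL-entry-point convention just described can be sketched as a miniature function table. This is a hedged illustration, not the real driver.h: the type and function names here (`miniDriver`, `mini_save`) are invented for the example, and only the pattern — a struct of function pointers with NULL marking unsupported operations — reflects the text.

```c
#include <stddef.h>

/* A miniature driver table: one slot per common operation.
 * A driver leaves a slot NULL when it cannot implement it. */
typedef struct {
    const char *name;
    int (*get_info)(int id);                /* implemented by all    */
    int (*save)(int id, const char *path);  /* NULL when unsupported */
} miniDriver;

static int xs_get_info(int id) { return id >= 0 ? 0 : -1; }

/* Like xs_internal in the text: lookups work, but save/restore is
 * xend-only, so that slot stays NULL. */
static miniDriver xenstore_driver = { "xs_internal", xs_get_info, NULL };

/* The front-end must check for NULL before dispatching. */
int mini_save(const miniDriver *drv, int id, const char *path)
{
    if (drv == NULL || drv->save == NULL)
        return -1;  /* operation not supported by this driver */
    return drv->save(id, path);
}
```

The design choice this models is that the front-end, not each driver, handles the "unsupported" case, so adding a new engine only requires filling in whatever subset of the table it can support.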

diff --git a/docs/format.html b/docs/format.html index ed71ceeacd..fd18950780 100644 --- a/docs/format.html +++ b/docs/format.html @@ -2,9 +2,13 @@ XML Format

XML Format

This section describes the XML format used to represent domains, there are variations on the format based on the kind of domains run and the options -used to launch them:

Normal paravirtualized Xen domains

Fully virtualized Xen domains

The formats try as much as possible to follow the same structure and reuse +used to launch them:

The formats try as much as possible to follow the same structure and reuse elements and attributes where it makes sense.

Normal paravirtualized Xen -guests:

The library use an XML format to describe domains, as input to virDomainCreateLinux() +domains:

The library uses an XML format to describe domains, as input to virDomainCreateLinux() and as the output of virDomainGetXMLDesc(); the following is an example of the format as returned by the shell command virsh xmldump fc4, where fc4 was one of the running domains:

<domain type='xen' id='18'>
@@ -175,4 +179,4 @@ systems:

<domain type='xen' id='3'>
 

It is likely that the HVM description gets additional optional elements and attributes as the support for fully virtualized domain expands, especially for the variety of devices emulated and the graphic support -options offered.

+options offered.

KVM domain

QEmu domain

diff --git a/docs/intro.html b/docs/intro.html index 72b10394f5..d432e2ad36 100644 --- a/docs/intro.html +++ b/docs/intro.html @@ -10,8 +10,8 @@ some of the specific concepts used in libvirt documentation:

  • a domain is an instance of an operating system running on a virtualized machine provided by the hypervisor

Hypervisor and domains running on a node

Now we can define the goal of libvirt: to provide the lowest possible -generic and stable layer to manage domains on a node.

This implies the following:

  • the API should not be targetted to a single virtualization environment - though Xen is the current default, which also means that some very +generic and stable layer to manage domains on a node.

    This implies the following:

    • the API is not targeted to a single virtualization environment; it + currently supports Xen and QEmu/KVM. This also implies that some very
    • the API should allow to do efficiently and cleanly all the operations @@ -27,4 +27,6 @@ and for applications focusing on virtualization of a single node (the only exception being domain migration between node capabilities which may need to be added at the libvirt level). Where possible libvirt should be extendable to be able to provide the same API for remote nodes, however this is not the -case at the moment, the code currently handle only local node accesses.

      +case at the moment, the code currently handles only local node accesses +(extension for remote access support is being worked on; see +the mailing list discussions about it).

      diff --git a/docs/libvir.html b/docs/libvir.html index c1526ad383..01615ba2c2 100644 --- a/docs/libvir.html +++ b/docs/libvir.html @@ -277,8 +277,8 @@ generic and stable layer to manage domains on a node.

      This implies the following:

        -
      • the API should not be targetted to a single virtualization environment - though Xen is the current default, which also means that some very +
      • the API is not targeted to a single virtualization environment; it + currently supports Xen and QEmu/KVM. This also implies that some very
      • the API should allow to do efficiently and cleanly all the operations @@ -296,12 +296,21 @@ and for applications focusing on virtualization of a single node (the only exception being domain migration between node capabilities which may need to be added at the libvirt level). Where possible libvirt should be extendable to be able to provide the same API for remote nodes, however this is not the -case at the moment, the code currently handle only local node accesses.

        +case at the moment, the code currently handles only local node accesses +(extension for remote access support is being worked on; see +the mailing list discussions about it).

        libvirt architecture

        -

        This is in a large part Xen specific since this is the only hypervisor -supported at the moment

        +

        Currently libvirt supports 2 kinds of virtualization, and its internal +structure is based on a driver model which simplifies adding new engines:

        + + +

        Libvirt Xen support

        When running in a Xen environment, programs using libvirt have to execute in "Domain 0", which is the primary Linux OS loaded on the machine. That OS @@ -316,12 +325,14 @@ this environment:

        The library can be initialized in 2 ways depending on the level of privilege of the embedding program. If it runs with root access, -virConnectOpen() can be used, it will use three different ways to connect to +virConnectOpen() can be used; it will use different ways to connect to the Xen infrastructure:

        • a connection to the Xen Daemon through an HTTP RPC layer
        • a read/write connection to the Xen Store
        • use Xen Hypervisor calls
        • +
        • when used as non-root libvirt connects to a proxy daemon running + as root and providing read-only support

        The library will usually interact with the Xen daemon for any operation @@ -331,37 +342,58 @@ least when possible (i.e. when the running program using libvirt has root priviledge access).

        If it runs without root access virConnectOpenReadOnly() should be used to -connect to initialize the library. It will try to open the read-only socket -/var/run/xenstored/socket_ro to connect to the Xen Store and -also try to use the RPC to the Xen daemon. In this case use of hypervisor -calls and write to the Xen Store will not be possible, restraining the amount -of APIs available and slowing down information gathering about domains.

        - -

        Internal architecture

        +connect to initialize the library. It will then fork a libvirt_proxy program +running as root and providing read-only access to the API; this is then +only useful for reporting and monitoring.

        + +

        Libvirt QEmu and KVM support

        +

        The model for QEmu and KVM is very similar: basically KVM is +based on QEmu for the process controlling a new domain, and only small details +differ between the two. In both cases the libvirt API is provided +by a controlling process forked by libvirt in the background, +which launches and controls the QEmu or KVM process. That program, called +libvirt_qemud, talks through a specific protocol to the library, and +connects to the console of the QEmu process in order to control and +report on its status. Libvirt tries to expose all the emulation +models of QEmu; the selection is done when creating the new domain, +by specifying the architecture and machine type targeted.

        +

        The code controlling the QEmu process is available in the +qemud/ subdirectory.

        + +

        the driver based architecture

        As the previous section explains, libvirt can communicate using different -channels with the current hypervisor, and should also be able to use -different kind of hypervisor. To simplify the internal design, code, ease +channels with the Xen hypervisor, and is also able to use different kinds +of hypervisors. To simplify the internal design and code, ease maintenance and simplify the support of other virtualization engines, the internals have been structured as one core component, the libvirt.c module acting as a front-end for the library API, and a set of hypervisor drivers defining a common set of routines. That way the Xen Daemon access, the Xen Store one, and the Hypervisor hypercalls are all isolated in separate C modules implementing at least a subset of the common operations defined by the -drivers present in driver.h:

        +drivers present in driver.h. The driver architecture is used to add support +for other virtualization engines:

        • xend_internal: implements the driver functions though the Xen - Daemon
        • + Daemon.
        • xs_internal: implements the subset of the driver availble though the - Xen Store
        • + Xen Store.
        • xen_internal: provide the implementation of the functions possible via - direct hypervisor access
        • + direct Xen hypervisor access. +
        • proxy_internal: provides read-only Xen access via a proxy; the proxy + code is in the proxy/ subdirectory.
        • +
        • xm_internal: provides support for Xen domains which are defined but not running.
        • +
        • qemu_internal: implements the driver functions for the QEmu and KVM + virtualization engines. It also uses a specific daemon in qemud/ which + interacts with the QEmu process to implement the libvirt API.
        • +
        • test: this is a test driver useful for regression tests of the + front-end part of libvirt.

        Note that a given driver may only implement a subset of those functions, -for example saving a domain state to disk and restoring it is only possible -though the Xen Daemon, on the other hand all interfaces allow to query the -runtime state of a given domain.

        +for example saving a Xen domain state to disk and restoring it is only possible +through the Xen Daemon; in that case the driver entry points are initialized to +NULL.

        @@ -396,15 +428,18 @@ available except commiting to the base.

        variations on the format based on the kind of domains run and the options used to launch them:

        -

        Normal paravirtualized Xen domains

        - -

        Fully virtualized Xen domains

        +

        The formats try as much as possible to follow the same structure and reuse elements and attributes where it makes sense.

        Normal paravirtualized Xen -guests:

        +domains:

        The library use an XML format to describe domains, as input to virDomainCreateLinux() @@ -625,6 +660,12 @@ and attributes as the support for fully virtualized domain expands, especially for the variety of devices emulated and the graphic support options offered.

        +

        KVM domain

        +

        + +

        QEmu domain

        +

        +

        Binding for Python

        Libvirt comes with direct support for the Python language (just make sure

diff --git a/src/virsh.c b/src/virsh.c
index b146b97dcc..2e39fe8718 100644
--- a/src/virsh.c
+++ b/src/virsh.c
@@ -1186,8 +1186,13 @@ cmdDominfo(vshControl * ctl, vshCmd * cmd)
         vshPrint(ctl, "%-15s %.1lfs\n", _("CPU time:"), cpuUsed);
     }
 
-    vshPrint(ctl, "%-15s %lu kB\n", _("Max memory:"),
+    if (info.maxMem != UINT_MAX)
+        vshPrint(ctl, "%-15s %lu kB\n", _("Max memory:"),
              info.maxMem);
+    else
+        vshPrint(ctl, "%-15s %-15s\n", _("Max memory:"),
+                 _("no limit"));
+
     vshPrint(ctl, "%-15s %lu kB\n", _("Used memory:"),
              info.memory);

diff --git a/src/xen_internal.c b/src/xen_internal.c
index 6ebb4ff4c7..f32cd5fd8f 100644
--- a/src/xen_internal.c
+++ b/src/xen_internal.c
@@ -1598,7 +1598,9 @@ xenHypervisorGetDomInfo(virConnectPtr conn, int id, virDomainInfoPtr info)
      */
     info->cpuTime = XEN_GETDOMAININFO_CPUTIME(dominfo);
     info->memory = XEN_GETDOMAININFO_TOT_PAGES(dominfo) * kb_per_pages;
-    info->maxMem = XEN_GETDOMAININFO_MAX_PAGES(dominfo) * kb_per_pages;
+    info->maxMem = XEN_GETDOMAININFO_MAX_PAGES(dominfo);
+    if(info->maxMem != UINT_MAX)
+        info->maxMem *= kb_per_pages;
     info->nrVirtCpu = XEN_GETDOMAININFO_CPUCOUNT(dominfo);
     return (0);
 }
--
GitLab
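Both C hunks in the patch above hinge on the same detail: the hypervisor reports an unlimited domain as UINT_MAX, so the pages-to-kilobytes conversion must be skipped to keep that sentinel intact. The guarded conversion can be checked in isolation with a minimal sketch; the helper name `max_mem_kb` is made up for this example and does not appear in the patch.

```c
#include <limits.h>

/* Convert a max-pages value to kB while preserving the "no limit"
 * sentinel: multiplying UINT_MAX by kb_per_pages would turn the
 * sentinel into a bogus kB figure, so it is passed through
 * unchanged and a caller (like virsh dominfo) can print "no limit"
 * instead of a number. */
unsigned long max_mem_kb(unsigned long max_pages, unsigned long kb_per_pages)
{
    if (max_pages != UINT_MAX)
        return max_pages * kb_per_pages;
    return UINT_MAX;
}
```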