From b782d66763f347fe25994f8e4e28930b12de25a7 Mon Sep 17 00:00:00 2001 From: Daniel Veillard Date: Thu, 26 Apr 2007 10:20:57 +0000 Subject: [PATCH] * src/virsh.c: fix virshStrdup to not crash if NULL is passed. Daniel --- ChangeLog | 4 + NEWS | 70 ------------ docs/FAQ.html | 2 +- docs/architecture.html | 68 ++++++------ docs/bugs.html | 2 +- docs/downloads.html | 2 +- docs/errors.html | 2 +- docs/format.html | 243 +---------------------------------------- docs/index.html | 7 +- docs/intro.html | 7 +- docs/news.html | 68 +----------- docs/python.html | 2 +- src/virsh.c | 2 + 13 files changed, 58 insertions(+), 421 deletions(-) diff --git a/ChangeLog b/ChangeLog index dfafbab3fd..a3b8526b2f 100644 --- a/ChangeLog +++ b/ChangeLog @@ -1,3 +1,7 @@ +Thu Apr 26 12:20:35 CEST 2007 Daniel Veillard + + * src/virsh.c: fix virshStrdup to not crash if NULL is passed. + Tue Apr 24 15:43:04 CEST 2007 Daniel Veillard * src/internal.h src/xend_internal.c: a better fix from Shigeki Sakamoto diff --git a/NEWS b/NEWS index e515a9d0e0..6a6a9cd928 100644 --- a/NEWS +++ b/NEWS @@ -5,76 +5,6 @@ http://libvirt.org/news.html Releases -0.2.2: Apr 17 2007: - - Documentation: fix errors due to Amaya (with Simon Hernandez), - virsh uses kB not bytes (Atsushi SAKAI), add command line help to - qemud (Richard Jones), xenUnifiedRegister docs (Atsushi SAKAI), - strings typos (Nikolay Sivov), ilocalization probalem raised by - Thomas Canniot - - Bug fixes: virsh memory values test (Masayuki Sunou), operations without - libvirt_qemud (Atsushi SAKAI), fix spec file (Florian La Roche, Jeremy - Katz, Michael Schwendt), - direct hypervisor call (Atsushi SAKAI), buffer overflow on qemu - networking command (Daniel Berrange), buffer overflow in quemud (Daniel - Berrange), virsh vcpupin bug (Masayuki Sunou), host PAE detections - and strcuctures size (Richard Jones), Xen PAE flag handling (Daniel - Berrange), bridged config configuration (Daniel Berrange), erroneous - XEN_V2_OP_SETMAXMEM value (Masayuki Sunou), memory free error (Mark - McLoughlin), set VIR_CONNECT_RO on read-only connections (S.Sakamoto), - avoid memory explosion bug (Daniel Berrange), integer overflow - for qemu CPU time (Daniel Berrange), QEMU binary path check (Daniel - Berrange) - - Cleanups: remove some global variables (Jim Meyering), printf-style - functions checks (Jim Meyering), better virsh error messages, increase - compiler checkings and security (Daniel Berrange), virBufferGrow usage - and docs, use calloc instead of malloc/memset, replace all sprintf by - snprintf, avoid configure clobbering user's CTAGS (Jim Meyering), - signal handler error cleanup (Richard Jones), iptables internal code - claenup (Mark McLoughlin), unified Xen driver (Richard Jones), - cleanup XPath libxml2 calls, IPTables rules tightening (Daniel - Berrange), - - Improvements: more regression tests on XML (Daniel Berrange), Python - bindings now generate exception in error cases (Richard Jones), - Python bindings for vir*GetAutoStart (Daniel Berrange), - handling of CD-Rom device without device name (Nobuhiro Itou), - fix hypervisor call to work with Xen 3.0.5 (Daniel Berrange), - DomainGetOSType for inactive domains (Daniel Berrange), multiple boot - devices for HVM (Daniel Berrange), - - - -0.2.1: Mar 16 2007: - - Various internal cleanups (Richard Jones,Daniel Berrange,Mark McLoughlin) - - Bug fixes: libvirt_qemud daemon path (Daniel Berrange), libvirt - config directory (Daniel Berrange and Mark McLoughlin), memory leak - in qemud (Mark), various fixes on network support 
(Mark), avoid Xen - domain zombies on device hotplug errors (Daniel Berrange), various - fixes on qemud (Mark), args parsing (Richard Jones), virsh -t argument - (Saori Fukuta), avoid virsh crash on TAB key (Daniel Berrange), detect - xend operation failures (Kazuki Mizushima), don't listen on null socket - (Rich Jones), read-only socket cleanup (Rich Jones), use of vnc port 5900 - (Nobuhiro Itou), assorted networking fixes (Daniel Berrange), shutoff and - shutdown mismatches (Kazuki Mizushima), unlimited memory handling - (Atsushi SAKAI), python binding fixes (Tatsuro Enokura) - - Build and portability fixes: IA64 fixes (Atsushi SAKAI), dependancies - and build (Daniel Berrange), fix xend port detection (Daniel - Berrange), icompile time warnings (Mark), avoid const related - compiler warnings (Daniel Berrange), automated builds (Daniel - Berrange), pointer/int mismatch (Richard Jones), configure time - selection of drivers, libvirt spec hacking (Daniel Berrange) - - Add support for network autostart and init scripts (Mark McLoughlin) - - New API virConnectGetCapabilities() to detect the virtualization - capabilities of a host (Richard Jones) - - Minor improvements: qemud signal handling (Mark), don't shutdown or reboot - domain0 (Kazuki Mizushima), QEmu version autodetection (Daniel Berrange), - network UUIDs (Mark), speed up UUID domain lookups (Tatsuro Enokura and - Daniel Berrange), support for paused QEmu CPU (Daniel Berrange), keymap - VNC attribute support (Takahashi Tomohiro and Daniel Berrange), maximum - number of virtual CPU (Masayuki Sunou), virtsh --readonly option (Rich - Jones), python bindings for new functions (Daniel Berrange) - - Documentation updates especially on the XML formats - - 0.2.0: Feb 14 2007: - Various internal cleanups (Mark McLoughlin, Richard Jones, Daniel Berrange, Karel Zak) diff --git a/docs/FAQ.html b/docs/FAQ.html index 778063c1f8..f541e975de 100644 --- a/docs/FAQ.html +++ b/docs/FAQ.html @@ -77,4 +77,4 @@ via the pkg-config command line tool, like:

pkg-config libvirt --libs

-

+
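As a quick illustration of the pkg-config line above, a minimal throw-away program could be built as sketched below; the file name and compile command are purely illustrative:

/* demo.c -- check that a libvirt connection can be opened.
 * Illustrative build command:
 *   gcc demo.c -o demo $(pkg-config --cflags --libs libvirt)
 */
#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    /* read-only connection to the default (local) hypervisor */
    virConnectPtr conn = virConnectOpenReadOnly(NULL);
    if (conn == NULL) {
        fprintf(stderr, "failed to connect to the hypervisor\n");
        return 1;
    }
    printf("connected, driver type: %s\n", virConnectGetType(conn));
    virConnectClose(conn);
    return 0;
}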

diff --git a/docs/architecture.html b/docs/architecture.html index 79ab7b9457..ac1948411f 100644 --- a/docs/architecture.html +++ b/docs/architecture.html @@ -1,10 +1,9 @@ -libvirt architecture

libvirt architecture

Currently libvirt supports 2 kind of virtualization, and its -internal structure is based on a driver model which simplifies adding new -engines:

  • Xen hypervisor
  • -
  • QEmu and KVM based virtualization
  • -
  • the driver architecture
  • +libvirt architecture

    libvirt architecture

    Currently libvirt supports 2 kind of virtualization, and its internal +structure is based on a driver model which simplifies adding new engines:

    Libvirt Xen support

    When running in a Xen environment, programs using libvirt have to execute in "Domain 0", which is the primary Linux OS loaded on the machine. That OS kernel provides most if not all of the actual drivers used by the set of @@ -15,7 +14,7 @@ drivers, kernels and daemons communicate though a shared system bus implemented in the hypervisor. The figure below tries to provide a view of this environment:

    The Xen architecture

    The library can be initialized in 2 ways depending on the level of priviledge of the embedding program. If it runs with root access, -virConnectOpen() can be used, it will use three different ways to connect to +virConnectOpen() can be used, it will use different ways to connect to the Xen infrastructure:

    • a connection to the Xen Daemon though an HTTP RPC layer
    • a read/write connection to the Xen Store
    • use Xen Hypervisor calls
    • @@ -26,42 +25,43 @@ changing the state of the system, but for performance and accuracy reasons may talk directly to the hypervisor when gathering state informations at least when possible (i.e. when the running program using libvirt has root priviledge access).

      If it runs without root access virConnectOpenReadOnly() should be used to -connect to initialize the library. It will then fork a libvirt_proxy -program running as root and providing read_only access to the API, this is -then only useful for reporting and monitoring.
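A rough sketch of how an application chooses between the two entry points described here, depending on its privileges (error handling trimmed, the fallback logic is only illustrative):

#include <sys/types.h>
#include <unistd.h>
#include <libvirt/libvirt.h>

static virConnectPtr open_connection(void)
{
    /* with root privileges a full read/write connection is possible */
    if (geteuid() == 0)
        return virConnectOpen(NULL);

    /* otherwise fall back to the read-only connection, which goes
     * through the libvirt_proxy and is limited to monitoring/reporting */
    return virConnectOpenReadOnly(NULL);
}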

      Libvirt QEmu and KVM support

      The model for QEmu and KVM is completely similar, basically KVM is based -on QEmu for the process controlling a new domain, only small details differs -between the two. In both case the libvirt API is provided by a controlling -process forked by libvirt in the background and which launch and control the -QEmu or KVM process. That program called libvirt_qemud talks though a specific -protocol to the library, and connects to the console of the QEmu process in -order to control and report on its status. Libvirt tries to expose all the -emulations models of QEmu, the selection is done when creating the new -domain, by specifying the architecture and machine type targetted.

      The code controlling the QEmu process is available in the -qemud/ directory.

      the driver based architecture

      As the previous section explains, libvirt can communicate using different -channels with the current hypervisor, and should also be able to use -different kind of hypervisor. To simplify the internal design, code, ease +connect to initialize the library. It will then fork a libvirt_proxy program +running as root and providing read_only access to the API, this is then +only useful for reporting and monitoring.

      Libvirt QEmu and KVM support

      The model for QEmu and KVM is completely similar, basically KVM is +based on QEmu for the process controlling a new domain, only small details +differs between the two. In both case the libvirt API is provided +by a controlling process forked by libvirt in the background and +which launch and control the QEmu or KVM process. That program called +libvirt_qemud talks though a specific protocol to the library, and +connects to the console of the QEmu process in order to control and +report on its status. Libvirt tries to expose all the emulations +models of QEmu, the selection is done when creating the new domain, +by specifying the architecture and machine type targetted.

      The code controlling the QEmu process is available in the +qemud/ subdirectory.

      the driver based architecture

      As the previous section explains, libvirt can communicate using different +channels with the Xen hypervisor, and is also able to use different kind +of hypervisor. To simplify the internal design, code, ease maintainance and simplify the support of other virtualization engine the internals have been structured as one core component, the libvirt.c module acting as a front-end for the library API and a set of hypvisor drivers defining a common set of routines. That way the Xen Daemon accces, the Xen Store one, the Hypervisor hypercall are all isolated in separate C modules implementing at least a subset of the common operations defined by the -drivers present in driver.h:

      • xend_internal: implements the driver functions though the Xen - Daemon
      • +drivers present in driver.h. The driver architecture is used to add support +for other virtualization engines and

        • xend_internal: implements the driver functions though the Xen + Daemon.
        • xs_internal: implements the subset of the driver availble though the - Xen Store
        • + Xen Store.
        • xen_internal: provide the implementation of the functions possible via - direct hypervisor access
        • -
        • proxy_internal: provide read-only Xen access via a proxy, the proxy code - is in the proxy/directory.
        • -
        • xm_internal: provide support for Xen defined but not running - domains.
        • -
        • qemu_internal: implement the driver functions for QEmu and - KVM virtualization engines. It also uses a qemud/ specific daemon - which interracts with the QEmu process to implement libvirt API.
        • + direct Xen hypervisor access. +
        • proxy_internal: provide read-only Xen access via a proxy, the proxy + code is in the proxy/ sub directory.
        • +
        • xm_internal: provide support for Xen defined but not running domains.
        • +
        • qemu_internal: implement the driver functions for QEmu and KVM + virtualization engines. It also uses a qemud/ specific daemon which + interracts with the QEmu process to implement libvirt API.
        • test: this is a test driver useful for regression tests of the front-end part of libvirt.

        Note that a given driver may only implement a subset of those functions, -(for example saving a Xen domain state to disk and restoring it is only -possible though the Xen Daemon), in that case the driver entry points for -unsupported functions are initialized to NULL.

    +for example saving a Xen domain state to disk and restoring it is only possible +though the Xen Daemon, in that case the driver entry points are initialized to +NULL.
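To make the NULL-entry convention concrete, here is a deliberately simplified, hypothetical sketch of such a driver table; the real definitions live in driver.h and contain many more entry points:

#include <libvirt/libvirt.h>

/* hypothetical, trimmed-down driver table in the spirit of driver.h */
typedef struct _virDriverSketch {
    const char *name;
    int (*domainSuspend)(virDomainPtr domain);
    int (*domainResume)(virDomainPtr domain);
    int (*domainSave)(virDomainPtr domain, const char *to);
} virDriverSketch;

/* an engine that cannot perform an operation leaves the slot NULL;
 * the libvirt.c front-end checks for NULL before dispatching */
static virDriverSketch xenStoreSketch = {
    .name          = "XenStoreExample",
    .domainSuspend = NULL,   /* not available through this channel */
    .domainResume  = NULL,
    .domainSave    = NULL,   /* save/restore only works via the Xen Daemon */
};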

diff --git a/docs/bugs.html b/docs/bugs.html index c48a0084be..1a7a94e5f1 100644 --- a/docs/bugs.html +++ b/docs/bugs.html @@ -9,4 +9,4 @@ If possible generate the patches by using cvs diff -u in a CVS checkout.

If you find a bug, please check the existing open bugs, then if yours isn't a duplicate of an existing bug, log a new bug. It may be good to post to the mailing-list -too if the issue looks serious, thanks !

+too if the issue looks serious, thanks !

diff --git a/docs/downloads.html b/docs/downloads.html index 65cc87201d..c92bf51155 100644 --- a/docs/downloads.html +++ b/docs/downloads.html @@ -7,4 +7,4 @@ available, first register onto the server:

cvs -d :pserver:anoncvs@libvirt.org:2401/data/cvs login

then checkout the development tree with:

cvs -d :pserver:anoncvs@libvirt.org:2401/data/cvs co libvirt

Use ./autogen.sh to configure the local checkout, then make and make install, as usual. All normal cvs commands are now -available except commiting to the base.

+available except commiting to the base.

diff --git a/docs/errors.html b/docs/errors.html index af1a7f4586..a518a1b3ad 100644 --- a/docs/errors.html +++ b/docs/errors.html @@ -66,4 +66,4 @@ this point, see the error.py example about it:

def handler(ctxt, err):
    ...

libvirt.registerErrorHandler(handler, 'context')

the second argument to the registerErrorHandler function is passed as the fist argument of the callback like in the C version. The error is a tuple -containing the same field as a virError in C, but cast to Python.

+containing the same field as a virError in C, but cast to Python.
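For comparison with the Python snippet above, the C side registers a global handler with virSetErrorFunc(), using the same callback-plus-user-data pattern; a minimal sketch:

#include <stdio.h>
#include <libvirt/libvirt.h>
#include <libvirt/virterror.h>

/* the user data given at registration time comes back as the first
 * argument, mirroring the 'context' string of the Python example */
static void handler(void *userData, virErrorPtr error)
{
    fprintf(stderr, "[%s] libvirt error: %s\n",
            (const char *) userData, error->message);
}

int main(void)
{
    virSetErrorFunc((void *) "context", handler);
    /* libvirt calls made after this point report failures via handler() */
    return 0;
}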

diff --git a/docs/format.html b/docs/format.html index 587349dfae..fd18950780 100644 --- a/docs/format.html +++ b/docs/format.html @@ -3,14 +3,12 @@ XML Format

XML Format

This section describes the XML format used to represent domains, there are variations on the format based on the kind of domains run and the options used to launch them:

The formats try as much as possible to follow the same structure and reuse elements and attributes where it makes sense.

Normal paravirtualized Xen -guests:

The library use an XML format to describe domains, as input to virDomainCreateLinux() +domains:

The library use an XML format to describe domains, as input to virDomainCreateLinux() and as the output of virDomainGetXMLDesc(), the following is an example of the format as returned by the shell command virsh xmldump fc4 , where fc4 was one of the running domains:

<domain type='xen' id='18'>
@@ -181,235 +179,4 @@ systems:

<domain type='xen' id='3'>
 

It is likely that the HVM description gets additional optional elements and attributes as the support for fully virtualized domain expands, especially for the variety of devices emulated and the graphic support -options offered.
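The XML descriptions shown in this section can also be retrieved programmatically; a minimal C sketch of the virDomainGetXMLDesc() call mentioned above, with the domain name purely illustrative:

#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

/* print the XML description of a domain, as 'virsh xmldump <name>' does */
static void dump_domain_xml(virConnectPtr conn, const char *name)
{
    virDomainPtr dom = virDomainLookupByName(conn, name);
    char *xml;

    if (dom == NULL)
        return;
    if ((xml = virDomainGetXMLDesc(dom, 0)) != NULL) {
        printf("%s\n", xml);
        free(xml);           /* allocated by libvirt, freed by the caller */
    }
    virDomainFree(dom);
}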

KVM domain (added in 0.2.0)

Support for the KVM virtualization -is provided in recent Linux kernels (2.6.20 and onward). This requires -specific hardware with acceleration support and the availability of the -special version of the QEmu binary. Since this -relies on QEmu for the machine emulation like fully virtualized guests the -XML description is quite similar, here is a simple example:

<domain type='kvm'>
-  <name>demo2</name>
-  <uuid>4dea24b3-1d52-d8f3-2516-782e98a23fa0</uuid>
-  <memory>131072</memory>
-  <vcpu>1</vcpu>
-  <os>
-    <type>hvm</type>
-  </os>
-  <devices>
-    <emulator>/home/user/usr/kvm-devel/bin/qemu-system-x86_64</emulator>
-    <disk type='file' device='disk'>
-      <source file='/home/user/fedora/diskboot.img'/>
-      <target dev='hda'/>
-    </disk>
-    <interface type='user'>
-      <mac address='24:42:53:21:52:45'/>
-    </interface>
-    <graphics type='vnc' port='-1'/>
-  </devices>
-</domain>

The specific points to note if using KVM are:

  • the top level domain element carries a type of 'kvm'
  • -
  • the <devices> emulator points to the special qemu binary required - for KVM
  • -
  • networking interface definitions definitions are somewhat different due - to a different model from Xen see below
  • -

except those points the options should be quite similar to Xen HVM -ones.

Networking options for QEmu and KVM (added in 0.2.0)

The networking support in the QEmu and KVM case is more flexible, and -support a variety of options:

  1. Userspace SLIRP stack -

    Provides a virtual LAN with NAT to the outside world. The virtual - network has DHCP & DNS services and will give the guest VM addresses - starting from 10.0.2.15. The default router will be - 10.0.2.2 and the DNS server will be 10.0.2.3. - This networking is the only option for unprivileged users who need their - VMs to have outgoing access. Example configs are:

    -
    <interface type='user'/>
    -
    -<interface type='user'>                                                  
    -  <mac address="11:22:33:44:55:66:/>                                     
    -</interface>
    -    
    -
  2. -
  3. Virtual network -

    Provides a virtual network using a bridge device in the host. - Depending on the virtual network configuration, the network may be - totally isolated,NAT'ing to aan explicit network device, or NAT'ing to - the default route. DHCP and DNS are provided on the virtual network in - all cases and the IP range can be determined by examining the virtual - network config with 'virsh net-dumpxml <network - name>'. There is one virtual network called'default' setup out - of the box which does NAT'ing to the default route and has an IP range of - 192.168.22.0/255.255.255.0. Each guest will have an - associated tun device created with a name of vnetN, which can also be - overriden with the <target> element. Example configs are:

    -
    <interface type='network'>
    -  <source network='default'/>
    -</interface>
    -
    -<interface type='network'>
    -  <source network='default'/>
    -  <target dev='vnet7'/>
    -  <mac address="11:22:33:44:55:66:/>
    -</interface>
    -    
    -
  4. -
  5. Bridge to to LAN -

    Provides a bridge from the VM directly onto the LAN. This assumes - there is a bridge device on the host which has one or more of the hosts - physical NICs enslaved. The guest VM will have an associated tun device - created with a name of vnetN, which can also be overriden with the - <target> element. The tun device will be enslaved to the bridge. - The IP range / network configuration is whatever is used on the LAN. This - provides the guest VM full incoming & outgoing net access just like a - physical machine. Examples include:

    -
    <interface type='bridge'>
    - <source dev='br0'/>
    -</interface>
    -
    -<interface type='bridge'>
    -  <source dev='br0'/>
    -  <target dev='vnet7'/>
    -  <mac address="11:22:33:44:55:66:/>
    -</interface>       <interface type='bridge'>
    -         <source dev='br0'/>
    -         <target dev='vnet7'/>
    -         <mac address="11:22:33:44:55:66:/>
    -       </interface>
    -
  6. -
  7. Generic connection to LAN -

    Provides a means for the administrator to execute an arbitrary script - to connect the guest's network to the LAN. The guest will have a tun - device created with a name of vnetN, which can also be overriden with the - <target> element. After creating the tun device a shell script will - be run which is expected to do whatever host network integration is - required. By default this script is called /etc/qemu-ifup but can be - overriden.

    -
    <interface type='ethernet'/>
    -
    -<interface type='ethernet'>
    -  <target dev='vnet7'/>
    -  <script path='/etc/qemu-ifup-mynet'/>
    -</interface>
    -
  8. -
  9. Multicast tunnel -

    A multicast group is setup to represent a virtual network. Any VMs - whose network devices are in the same multicast group can talk to each - other even across hosts. This mode is also available to unprivileged - users. There is no default DNS or DHCP support and no outgoing network - access. To provide outgoing network access, one of the VMs should have a - 2nd NIC which is connected to one of the first 4 network types and do the - appropriate routing. The multicast protocol is compatible with that used - by user mode linux guests too. The source address used must be from the - multicast address block.

    -
    <interface type='mcast'>
    -  <source address='230.0.0.1' port='5558'/>
    -</interface>
    -
  10. -
  11. TCP tunnel -

    A TCP client/server architecture provides a virtual network. One VM - provides the server end of the netowrk, all other VMS are configured as - clients. All network traffic is routed between the VMs via the server. - This mode is also available to unprivileged users. There is no default - DNS or DHCP support and no outgoing network access. To provide outgoing - network access, one of the VMs should have a 2nd NIC which is connected - to one of the first 4 network types and do the appropriate routing.

    -

    Example server config:

    -
    <interface type='server'>
    -  <source address='192.168.0.1' port='5558'/>
    -</interface>
    -

    Example client config:

    -
    <interface type='client'>
    -  <source address='192.168.0.1' port='5558'/>
    -</interface>
    -
  12. -

To be noted, options 2, 3, 4 are also supported by Xen VMs, so it is -possible to use these configs to have networking with both Xen & -QEMU/KVMs connected to each other.

QEmu domain (added in 0.2.0)

Libvirt support for KVM and QEmu is the same code base with only minor -changes. The configuration is as a result nearly identical, the only changes -are related to QEmu ability to emulate various CPU type and hardware -platforms, and kqemu support (QEmu own kernel accelerator when the -emulated CPU is i686 as well as the target machine):

<domain type='qemu'>
-  <name>QEmu-fedora-i686</name>
-  <uuid>c7a5fdbd-cdaf-9455-926a-d65c16db1809</uuid>
-  <memory>219200</memory>
-  <currentMemory>219200</currentMemory>
-  <vcpu>2</vcpu>
-  <os>
-    <type arch='i686' machine='pc'>hvm</type>
-    <boot dev='cdrom'/>
-  </os>
-  <devices>
-    <emulator>/usr/bin/qemu</emulator>
-    <disk type='file' device='cdrom'>
-      <source file='/home/user/boot.iso'/>
-      <target dev='hdc'/>
-      <readonly/>
-    </disk>
-    <disk type='file' device='disk'>
-      <source file='/home/user/fedora.img'/>
-      <target dev='hda'/>
-    </disk>
-    <interface type='network'>
-      <source name='default'/>
-    </interface>
-    <graphics type='vnc' port='-1'/>
-  </devices>
-</domain>

The difference here are:

  • the value of type on top-level domain, it's 'qemu' or kqemu if asking - for kernel assisted - acceleration
  • -
  • the os type block defines the architecture to be emulated, and - optionally the machine type, see the discovery API below
  • -
  • the emulator string must point to the right emulator for that - architecture
  • -

Discovering virtualization capabilities (Added in 0.2.1)

As new virtualization engine support gets added to libvirt, and to handle -cases like QEmu supporting a variety of emulations, a query interface has -been added in 0.2.1 allowing to list the set of supported virtualization -capabilities on the host:

    char * virConnectGetCapabilities (virConnectPtr conn);

The value returned is an XML document listing the virtualization -capabilities of the host and virtualization engine to which -@conn is connected. One can test it using virsh -command line tool command 'capabilities', it dumps the XML -associated to the current connection. For example in the case of a 64 bits -machine with hardware virtualization capabilities enabled in the chip and -BIOS you will see

<capabilities>
-  <host>
-    <cpu>
-      <arch>x86_64</arch>
-      <features>
-        <vmx/>
-      </features>
-    </cpu>
-  </host>
-
-  <!-- xen-3.0-x86_64 -->
-  <guest>
-    <os_type>xen</os_type>
-    <arch name="x86_64">
-      <wordsize>64</wordsize>
-      <domain type="xen"></domain>
-      <emulator>/usr/lib64/xen/bin/qemu-dm</emulator>
-    </arch>
-    <features>
-    </features>
-  </guest>
-
-  <!-- hvm-3.0-x86_32 -->
-  <guest>
-    <os_type>hvm</os_type>
-    <arch name="i686">
-      <wordsize>32</wordsize>
-      <domain type="xen"></domain>
-      <emulator>/usr/lib/xen/bin/qemu-dm</emulator>
-      <machine>pc</machine>
-      <machine>isapc</machine>
-      <loader>/usr/lib/xen/boot/hvmloader</loader>
-    </arch>
-    <features>
-    </features>
-  </guest>
-  ...
-</capabilities>

The fist block (in red) indicates the host hardware capbilities, currently -it is limited to the CPU properties but other information may be available, -it shows the CPU architecture, and the features of the chip (the feature -block is similar to what you will find in a Xen fully virtualized domain -description).

The second block (in blue) indicates the paravirtualization support of the -Xen support, you will see the os_type of xen to indicate a paravirtual -kernel, then architecture informations and potential features.

The third block (in green) gives similar informations but when running a -32 bit OS fully virtualized with Xen using the hvm support.

This section is likely to be updated and augmented in the future, see the -discussion which led to the capabilities format in the mailing-list -archives.
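Querying this information from C is a single call on an open connection; a minimal sketch (the caller owns and frees the returned XML string):

#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

static void dump_capabilities(virConnectPtr conn)
{
    char *caps = virConnectGetCapabilities(conn);

    if (caps == NULL) {
        fprintf(stderr, "failed to query the capabilities\n");
        return;
    }
    printf("%s\n", caps);   /* same XML document as 'virsh capabilities' */
    free(caps);
}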

+options offered.

KVM domain

QEmu domain

diff --git a/docs/index.html b/docs/index.html index c4ed756524..cb40b2b7be 100644 --- a/docs/index.html +++ b/docs/index.html @@ -23,8 +23,8 @@ System means the ability to run multiple instances of Operating Systems concurently on a single hardware system where the basic resources are driven by a Linux instance. The library aim at providing long term stable C API initially for the Xen -paravirtualization but should be able to integrate other -virtualization mechanisms, it currently also support QEmu and KVM.

+paravirtualization but should be able to integrate other virtualization +mechanisms if needed.

+(extension for remote access support is being worked on, see +the mailing list discussions about it).

diff --git a/docs/news.html b/docs/news.html index d1cbacf13e..e1cb428684 100644 --- a/docs/news.html +++ b/docs/news.html @@ -2,71 +2,7 @@ Releases

Releases

Here is the list of official releases, however since it is early on in the development of libvirt, it is preferable when possible to just use the CVS version or snapshot, contact the mailing list -and check the ChangeLog to gauge progresses.

0.2.2: Apr 17 2007

  • Documentation: fix errors due to Amaya (with Simon Hernandez), - virsh uses kB not bytes (Atsushi SAKAI), add command line help to - qemud (Richard Jones), xenUnifiedRegister docs (Atsushi SAKAI), - strings typos (Nikolay Sivov), ilocalization probalem raised by - Thomas Canniot
  • -
  • Bug fixes: virsh memory values test (Masayuki Sunou), operations without - libvirt_qemud (Atsushi SAKAI), fix spec file (Florian La Roche, Jeremy - Katz, Michael Schwendt), - direct hypervisor call (Atsushi SAKAI), buffer overflow on qemu - networking command (Daniel Berrange), buffer overflow in quemud (Daniel - Berrange), virsh vcpupin bug (Masayuki Sunou), host PAE detections - and strcuctures size (Richard Jones), Xen PAE flag handling (Daniel - Berrange), bridged config configuration (Daniel Berrange), erroneous - XEN_V2_OP_SETMAXMEM value (Masayuki Sunou), memory free error (Mark - McLoughlin), set VIR_CONNECT_RO on read-only connections (S.Sakamoto), - avoid memory explosion bug (Daniel Berrange), integer overflow - for qemu CPU time (Daniel Berrange), QEMU binary path check (Daniel - Berrange)
  • -
  • Cleanups: remove some global variables (Jim Meyering), printf-style - functions checks (Jim Meyering), better virsh error messages, increase - compiler checkings and security (Daniel Berrange), virBufferGrow usage - and docs, use calloc instead of malloc/memset, replace all sprintf by - snprintf, avoid configure clobbering user's CTAGS (Jim Meyering), - signal handler error cleanup (Richard Jones), iptables internal code - claenup (Mark McLoughlin), unified Xen driver (Richard Jones), - cleanup XPath libxml2 calls, IPTables rules tightening (Daniel - Berrange),
  • -
  • Improvements: more regression tests on XML (Daniel Berrange), Python - bindings now generate exception in error cases (Richard Jones), - Python bindings for vir*GetAutoStart (Daniel Berrange), - handling of CD-Rom device without device name (Nobuhiro Itou), - fix hypervisor call to work with Xen 3.0.5 (Daniel Berrange), - DomainGetOSType for inactive domains (Daniel Berrange), multiple boot - devices for HVM (Daniel Berrange), -
  • -

0.2.1: Mar 16 2007

  • Various internal cleanups (Richard Jones,Daniel Berrange,Mark McLoughlin)
  • -
  • Bug fixes: libvirt_qemud daemon path (Daniel Berrange), libvirt - config directory (Daniel Berrange and Mark McLoughlin), memory leak - in qemud (Mark), various fixes on network support (Mark), avoid Xen - domain zombies on device hotplug errors (Daniel Berrange), various - fixes on qemud (Mark), args parsing (Richard Jones), virsh -t argument - (Saori Fukuta), avoid virsh crash on TAB key (Daniel Berrange), detect - xend operation failures (Kazuki Mizushima), don't listen on null socket - (Rich Jones), read-only socket cleanup (Rich Jones), use of vnc port 5900 - (Nobuhiro Itou), assorted networking fixes (Daniel Berrange), shutoff and - shutdown mismatches (Kazuki Mizushima), unlimited memory handling - (Atsushi SAKAI), python binding fixes (Tatsuro Enokura)
  • -
  • Build and portability fixes: IA64 fixes (Atsushi SAKAI), dependancies - and build (Daniel Berrange), fix xend port detection (Daniel - Berrange), icompile time warnings (Mark), avoid const related - compiler warnings (Daniel Berrange), automated builds (Daniel - Berrange), pointer/int mismatch (Richard Jones), configure time - selection of drivers, libvirt spec hacking (Daniel Berrange)
  • -
  • Add support for network autostart and init scripts (Mark McLoughlin)
  • -
  • New API virConnectGetCapabilities() to detect the virtualization - capabilities of a host (Richard Jones)
  • -
  • Minor improvements: qemud signal handling (Mark), don't shutdown or reboot - domain0 (Kazuki Mizushima), QEmu version autodetection (Daniel Berrange), - network UUIDs (Mark), speed up UUID domain lookups (Tatsuro Enokura and - Daniel Berrange), support for paused QEmu CPU (Daniel Berrange), keymap - VNC attribute support (Takahashi Tomohiro and Daniel Berrange), maximum - number of virtual CPU (Masayuki Sunou), virtsh --readonly option (Rich - Jones), python bindings for new functions (Daniel Berrange)
  • -
  • Documentation updates especially on the XML formats
  • -

0.2.0: Feb 14 2007

  • Various internal cleanups (Mark McLoughlin, Richard Jones, +and check the ChangeLog to gauge progresses.

    0.2.0: Feb 14 2007

    • Various internal cleanups (Mark McLoughlin, Richard Jones, Daniel Berrange, Karel Zak)
    • Bug fixes: avoid a crash in connect (Daniel Berrange), virsh args parsing (Richard Jones)
    • @@ -213,4 +149,4 @@ and check the ChangeLog to gauge progresses.

      0.0.1: Dec 19 2005

      • First release
      • Basic management of existing Xen domains
      • Minimal autogenerated Python bindings
      • -

+

diff --git a/docs/python.html b/docs/python.html index a20d570b32..26679f9106 100644 --- a/docs/python.html +++ b/docs/python.html @@ -50,4 +50,4 @@ from the C API, the only points to notice are:

  • the import of the module
  • extracting and printing some informations about the domain using various methods associated to the virDomain class.
  • -

+

diff --git a/src/virsh.c b/src/virsh.c index 4ad428c6b0..b33981a8e0 100644 --- a/src/virsh.c +++ b/src/virsh.c @@ -3174,6 +3174,8 @@ _vshStrdup(vshControl * ctl, const char *s, const char *filename, int line) { char *x; + if (s == NULL) + return(NULL); if ((x = strdup(s))) return x; vshError(ctl, TRUE, _("%s: %d: failed to allocate %lu bytes"), -- GitLab
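For context on the one-line change above, the behaviour of the fixed helper boils down to the following stand-alone sketch (names simplified; the real _vshStrdup() reports the failed allocation size through vshError() instead of aborting):

#include <stdlib.h>
#include <string.h>

/* NULL-tolerant strdup wrapper in the spirit of the _vshStrdup() fix:
 * a NULL input is passed straight through instead of crashing in strdup() */
static char *xstrdup(const char *s)
{
    char *copy;

    if (s == NULL)
        return NULL;
    if ((copy = strdup(s)) == NULL)
        abort();             /* stand-in for virsh's error-and-exit path */
    return copy;
}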