- 13 December 2012, 1 commit

Submitted by Daniel P. Berrange
The virtlockd daemon will maintain locks on behalf of libvirtd. There are two reasons for it to be separate:

- Avoid risk of other libvirtd threads accidentally releasing fcntl() locks by opening + closing a file that is locked
- Ensure locks can be preserved across libvirtd restarts. virtlockd will need to be able to re-exec itself while maintaining locks. This is simpler to achieve if its sole job is maintaining locks.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
- 12 December 2012, 2 commits

Submitted by Cole Robinson
Most of this deals with moving the libvirt-guests.sh script, which does all the work, to /usr/libexec, so it can be shared by both systemd and traditional init. Previously systemd depended on the script being in /etc/init.d. Required to fix https://bugzilla.redhat.com/show_bug.cgi?id=789747
Submitted by Michal Privoznik
These set the bridge part of QoS when bringing a domain's interface up. Long story short, if there is a 'floor' set, a new QoS class is created. The class ID MUST be unique within the bridge and should be kept for the unplug phase.
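To make the 'floor' idea concrete, here is a hedged sketch of the kind of interface configuration this enables, using libvirt's <bandwidth> element; the specific values, and treating floor as a guaranteed minimum in KiB/s, are illustrative assumptions rather than details taken from this commit:

    <interface type='network'>
      <source network='default'/>
      <bandwidth>
        <!-- floor: minimum throughput the bridge-side QoS class must
             guarantee; average/peak bound the usual shaping -->
        <inbound average='1000' peak='5000' floor='200' burst='1024'/>
      </bandwidth>
    </interface>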
- 11 December 2012, 2 commits

Submitted by Dmitry Guryanov
Parallels Cloud Server uses a virtual networks model for network configuration and uses its own tools for virtual network management. So add a network driver, which will be responsible for listing virtual networks and performing different operations on them (in subsequent patches). This patch only allows listing virtual network names, without any parameters like DHCP server settings. Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
Submitted by Dmitry Guryanov
This macro will be used in another file in the next patch, so move it to a common header file. Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
- 04 December 2012, 1 commit

Submitted by Ata E Husain Bohra
The patch adds the backend driver to support iSCSI format storage pools and volumes for an ESX host. The mapping of ESX iSCSI specifics to libvirt is as follows:

1. ESX static iSCSI target <------> libvirt storage pool
2. ESX iSCSI LUNs <------> libvirt storage volumes

The above understanding is based on http://libvirt.org/storage.html. The operations supported on iSCSI pools include:

1. Listing storage pools & volumes.
2. Getting the XML descriptor for pools & volumes.
3. Looking up pools & volumes by name, UUID and path (if applicable).

iSCSI pools do not support operations such as creating or removing pools and volumes.
- 01 December 2012, 1 commit

Submitted by Daniel P. Berrange
To be able to do controlled shutdown/reboot of containers, an API to talk to init via /dev/initctl is required. Fortunately this is quite straightforward to implement, and is supported by both sysvinit and systemd. Upstart support for /dev/initctl is unclear. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
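As background, a minimal sketch of what a request to init over /dev/initctl looks like. It assumes the sysvinit initreq wire format (the struct layout, INIT_MAGIC and INIT_CMD_RUNLVL values come from sysvinit's initreq.h); this is an illustration, not libvirt's actual implementation:

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    #define INIT_MAGIC      0x03091969
    #define INIT_CMD_RUNLVL 1

    /* Simplified version of sysvinit's struct init_request; the real
     * struct ends in a union, but the total size is 384 bytes. */
    struct init_request {
        int magic;      /* must be INIT_MAGIC */
        int cmd;        /* INIT_CMD_RUNLVL for runlevel changes */
        int runlevel;   /* runlevel as an ASCII char: '0' halt, '6' reboot */
        int sleeptime;
        char padding[384 - 4 * sizeof(int)];
    };

    static int request_runlevel(int level)
    {
        struct init_request req;
        int fd = open("/dev/initctl", O_WRONLY);

        if (fd < 0)
            return -1;
        memset(&req, 0, sizeof(req));
        req.magic = INIT_MAGIC;
        req.cmd = INIT_CMD_RUNLVL;
        req.runlevel = level;
        if (write(fd, &req, sizeof(req)) != sizeof(req)) {
            close(fd);
            return -1;
        }
        return close(fd);
    }

Shutting a container down then reduces to request_runlevel('0') and rebooting to request_runlevel('6'), with the write directed at the /dev/initctl inside the container.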
- 28 November 2012, 1 commit

Submitted by Gao feng
This patch adds FUSE support for libvirt LXC. We can use a FUSE filesystem to generate sysinfo dynamically, so we can isolate /proc/meminfo, /proc/cpuinfo and so on through the FUSE filesystem. We mount a FUSE filesystem for every container; the mount name is libvirt, and the mount point is localstatedir/run/libvirt/lxc/containername. Signed-off-by: Gao feng <gaofeng@cn.fujitsu.com>
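To illustrate the mechanism, here is a rough, self-contained FUSE 2.x sketch that serves a synthetic meminfo file; the file name, content and mount handling are made up for the example and this is not the code added by the patch:

    #define FUSE_USE_VERSION 26
    #include <fuse.h>
    #include <errno.h>
    #include <string.h>
    #include <sys/stat.h>

    /* Synthetic content; a real implementation would compute this from
     * the container's cgroup limits. */
    static const char *meminfo = "MemTotal:  1048576 kB\n";

    static int mi_getattr(const char *path, struct stat *st)
    {
        memset(st, 0, sizeof(*st));
        if (strcmp(path, "/") == 0) {
            st->st_mode = S_IFDIR | 0555;
            st->st_nlink = 2;
        } else if (strcmp(path, "/meminfo") == 0) {
            st->st_mode = S_IFREG | 0444;
            st->st_nlink = 1;
            st->st_size = strlen(meminfo);
        } else {
            return -ENOENT;
        }
        return 0;
    }

    static int mi_read(const char *path, char *buf, size_t size, off_t off,
                       struct fuse_file_info *fi)
    {
        size_t len = strlen(meminfo);

        (void)fi;
        if (strcmp(path, "/meminfo") != 0)
            return -ENOENT;
        if ((size_t)off >= len)
            return 0;
        if (off + size > len)
            size = len - off;
        memcpy(buf, meminfo + off, size);
        return size;
    }

    static struct fuse_operations mi_ops = {
        .getattr = mi_getattr,
        .read    = mi_read,
    };

    /* e.g. ./lxcfuse /var/run/libvirt/lxc/containername */
    int main(int argc, char *argv[])
    {
        return fuse_main(argc, argv, &mi_ops, NULL);
    }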
- 27 November 2012, 1 commit

Submitted by Ata E Husain Bohra
The patch refactors the current ESX storage driver for the following reasons:

1. Given that most of the public APIs exposed by the storage driver in libvirt remain the same, the ESX storage driver should not implement logic specific to only one supported format (the current implementation only supports VMFS).
2. Decoupling the interface from a specific storage implementation gives us an extensible design to hook in implementations for other supported storage formats.

This patch refactors the current driver to implement it as a facade pattern, i.e. the driver exposes all the public libvirt APIs, but uses backend drivers to get the required task done. The backend drivers provide the implementation specific to the type of storage device.

File changes:

    esx_storage_driver.c ----> esx_storage_driver.c (base storage driver)
                          |
                          |---> esx_storage_backend_vmfs.c (VMFS backend)
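A hedged sketch of the facade idea described above; the type, function and symbol names here are invented for illustration, not taken from the actual driver:

    #include <stddef.h>
    #include <string.h>

    /* One vtable per storage format; the facade only knows this interface. */
    typedef struct {
        const char *name;
        int (*poolLookupByName)(const char *name);
    } ESXStorageBackend;

    /* Stand-in for the VMFS backend that would live in
     * esx_storage_backend_vmfs.c; real code would query the ESX host. */
    static int vmfsPoolLookupByName(const char *name)
    {
        return strcmp(name, "datastore1") == 0 ? 0 : -1;
    }

    static ESXStorageBackend esxStorageBackendVMFS = {
        "vmfs", vmfsPoolLookupByName,
    };

    static ESXStorageBackend *backends[] = {
        &esxStorageBackendVMFS,   /* an iSCSI backend could be added here */
    };

    /* Facade entry point: expose one public API, delegate to backends. */
    int esxStoragePoolLookupByName(const char *name)
    {
        size_t i;

        for (i = 0; i < sizeof(backends) / sizeof(backends[0]); i++) {
            if (backends[i]->poolLookupByName(name) == 0)
                return 0;
        }
        return -1; /* no backend knows this pool */
    }

The design choice is that adding a new storage format only means adding an entry to the backend table, while the public driver surface stays unchanged.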
- 17 October 2012, 1 commit

Submitted by Li Zhang
Currently, the CPU model driver is not implemented for PowerPC. The host's CPU information sometimes needs to be exposed to the guest's XML. This patch implements the callback functions of the CPU model driver. Signed-off-by: Li Zhang <zhlcindy@linux.vnet.ibm.com> Acked-by: Michal Privoznik <mprivozn@redhat.com>
- 16 October 2012, 2 commits

Submitted by Daniel P. Berrange
Add two new APIs, virNetServerServiceNewPostExecRestart and virNetServerServicePreExecRestart, which allow a virNetServerServicePtr object to be created from a JSON object and saved to a JSON object, for the purpose of re-exec'ing a process. This includes serialization of the listening sockets associated with the service. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Submitted by Daniel P. Berrange
The previously introduced virFile{Lock,Unlock} APIs provide a way to acquire/release fcntl() locks on individual files. For unknown reasons though, the POSIX spec says that fcntl() locks are released when *any* file handle referring to the same path is closed. In the following sequence

    threadA: fd1 = open("foo")
    threadB: fd2 = open("foo")
    threadA: virFileLock(fd1)
    threadB: virFileLock(fd2)
    threadB: close(fd2)

you'd expect threadA to come out holding a lock on 'foo', and indeed it does hold a lock for a very short time. Unfortunately, when threadB does close(fd2), this releases the lock associated with fd1. For the current libvirt use case for virFileLock - pidfiles - this doesn't matter, since the lock is acquired at startup while single-threaded and never released until exit.

To provide a more generally useful API though, it is necessary to introduce a slightly higher-level abstraction, which is to be referred to as a "lockspace". This is to be provided by a virLockSpacePtr object in src/util/virlockspace.{c,h}. The core idea is that the lockspace keeps track of what files are already open+locked. This means that when a second thread comes along and tries to acquire a lock, it doesn't end up opening and closing a new FD. The lockspace just checks the current list of held locks and immediately returns VIR_ERR_RESOURCE_BUSY.

NB, the API as it stands is designed on the basis that the files being locked are not being otherwise opened and used by the application code. One approach to using this API is to acquire locks based on a hash of the filepath, e.g. to lock /var/lib/libvirt/images/foo.img the application might do

    virLockSpacePtr lockspace = virLockSpaceNew("/var/lib/libvirt/imagelocks");
    lockname = md5sum("/var/lib/libvirt/images/foo.img");
    virLockSpaceAcquireLock(lockspace, lockname);

NB, in this example, the caller should ensure that the path is canonicalized before calculating the checksum. It is also possible to acquire locks directly on resources by using a NULL lockspace directory and then using the file path as the lock name, e.g.

    virLockSpacePtr lockspace = virLockSpaceNew(NULL);
    virLockSpaceAcquireLock(lockspace, "/var/lib/libvirt/images/foo.img");

This is only safe to do though if no other part of the process will be opening the files. This will be the case when this code is used inside the soon-to-be-reposted virtlockd daemon.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
- 11 October 2012, 1 commit

Submitted by Jiri Denemark
While the changes to the sanlock driver should be stable, the actual implementation of sanlock_helper is supposed to be replaced in the future. However, before we can implement a better sanlock_helper, we need an administrative interface to libvirtd so that the helper can just pass a "leases lost" event to the particular libvirt driver and everything else will be taken care of internally. This approach will also allow libvirt to pass such events to applications and use appropriate reasons when changing domain states. The temporary implementation handles all actions directly by calling appropriate libvirt APIs (which among other things means that it needs to know the credentials required to connect to libvirtd).
- 09 October 2012, 1 commit

Submitted by Doug Goldstein
Add a read-only udev based backend for virInterface. Useful for distros that do not have netcf support yet. Multiple libvirt based utilities use a HAL based fallback when virInterface is not available, which is less than ideal. This implements:

- virConnectNumOfInterfaces()
- virConnectListInterfaces()
- virConnectNumOfDefinedInterfaces()
- virConnectListDefinedInterfaces()
- virConnectListAllInterfaces()
- virConnectInterfaceLookupByName()
- virConnectInterfaceLookupByMACString()
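For reference, a minimal client-side sketch that exercises this functionality through libvirt's public API; the NULL connection URI and zero flags are illustrative choices:

    #include <stdio.h>
    #include <stdlib.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpenReadOnly(NULL);
        virInterfacePtr *ifaces = NULL;
        int n, i;

        if (!conn)
            return 1;
        /* 0 = no filter flags: list both active and inactive interfaces */
        n = virConnectListAllInterfaces(conn, &ifaces, 0);
        for (i = 0; i < n; i++) {
            printf("%s\n", virInterfaceGetName(ifaces[i]));
            virInterfaceFree(ifaces[i]);
        }
        free(ifaces);
        virConnectClose(conn);
        return 0;
    }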
- 26 September 2012, 1 commit

Submitted by Daniel P. Berrange
Continue consolidation of process functions by moving some helpers out of command.{c,h} into virprocess.{c,h} Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
- 19 September 2012, 1 commit

Submitted by Doug Goldstein
Based exclusively on work by Eric Blake in a patch posted with the same subject, with some modifications related to comments and my plans to add another backend. WITH_INTERFACE is added as the only automake variable deciding whether to build the driver, while WITH_NETCF identifies that we want to use the netcf library as the backend.

* configure.ac: Added with_interface
* src/interface/netcf_driver.c: Renamed..
* src/interface/interface_backend_netcf.c: ..to this to match storage.
* src/interface/netcf_driver.h: Renamed..
* src/interface/interface_driver.h: ..to this.
* daemon/Makefile.am: Respect WITH_INTERFACE and WITH_NETCF.
* libvirt.spec.in: Add RPM support for --with-interface
- 24 August 2012, 1 commit

Submitted by Eric Blake
This has several benefits:

1. Future snapshot-related code has a definite place to go (and I _will_ be adding some)
2. Snapshot errors now use the VIR_FROM_DOMAIN_SNAPSHOT error classification, which has been underutilized (previously only in libvirt.c)

* src/conf/domain_conf.h, domain_conf.c: Split...
* src/conf/snapshot_conf.h, snapshot_conf.c: ...into new files.
* src/Makefile.am (DOMAIN_CONF_SOURCES): Build new files.
* po/POTFILES.in: Mark new file for translation.
* src/vbox/vbox_tmpl.c: Update caller.
* src/esx/esx_driver.c: Likewise.
* src/qemu/qemu_command.c: Likewise.
* src/qemu/qemu_domain.h: Likewise.
- 21 August 2012, 1 commit

Submitted by Peter Krempa
This patch adds helper functions that enable us to use libssh2 in conjunction with libvirt's virNetSockets for the ssh transport, instead of spawning an "ssh" client process. This implementation supports tunneled plaintext, keyboard-interactive, private key, ssh agent based and null authentication. Libvirt's auth callback is used for interaction with the user (keyboard-interactive authentication, adding of host keys, private key passphrases). This enables seamless integration into the application using libvirt; no helpers such as "ssh-askpass" are needed. Reading and writing of OpenSSH style "known_hosts" files is supported. Communication is done using an SSH exec channel, where the user may specify an arbitrary command to be executed on the remote side, and reads and writes to/from stdin/out are sent through the ssh channel. Usage of stderr is not (yet) supported.
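A hedged usage sketch: opening a connection over the new transport. The URI scheme and the known_hosts query parameter are assumptions based on this description, not confirmed by the commit:

    #include <libvirt/libvirt.h>

    int main(void)
    {
        /* keyboard-interactive prompts, host key confirmation and key
         * passphrases are routed through the default auth callback */
        virConnectPtr conn = virConnectOpenAuth(
            "qemu+libssh2://root@example.com/system"
            "?known_hosts=/root/.ssh/known_hosts",
            virConnectAuthPtrDefault, 0);

        if (!conn)
            return 1;
        virConnectClose(conn);
        return 0;
    }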
- 18 August 2012, 1 commit

Submitted by Shradha Shah
Move the functions that parse/format and validate PCI addresses to their own file so they can be conveniently used in other places besides device_conf.c. This refactors existing code without causing any functional changes, to prepare for new code. This patch makes the code reusable. Signed-off-by: Shradha Shah <sshah@solarflare.com>
- 16 August 2012, 1 commit

Submitted by Laine Stump
The following config elements now support a <vlan> subelement: within a domain, <interface> and the <actual> subelement of <interface>; within a network, the toplevel <network> as well as any <portgroup>.

Each vlan element must have one or more <tag id='n'/> subelements. If there is more than one tag, it is assumed that vlan trunking is being requested. If trunking is required with only a single tag, the attribute "trunk='yes'" should be added to the toplevel <vlan> element.

Some examples:

    <interface type='hostdev'>
      <vlan>
        <tag id='42'/>
      </vlan>
      <mac address='52:54:00:12:34:56'/>
      ...
    </interface>

    <network>
      <name>vlan-net</name>
      <vlan trunk='yes'>
        <tag id='30'/>
      </vlan>
      <virtualport type='openvswitch'/>
    </network>

    <interface type='network'>
      <source network='vlan-net'/>
      ...
    </interface>

    <network>
      <name>trunk-vlan</name>
      <vlan>
        <tag id='42'/>
        <tag id='43'/>
      </vlan>
      ...
    </network>

    <network>
      <name>multi</name>
      ...
      <portgroup name='production'>
        <vlan>
          <tag id='42'/>
        </vlan>
      </portgroup>
      <portgroup name='test'>
        <vlan>
          <tag id='666'/>
        </vlan>
      </portgroup>
    </network>

    <interface type='network'>
      <source network='multi' portgroup='test'/>
      ...
    </interface>

IMPORTANT NOTE: As of this patch there is no backend support for the vlan element for *any* network device type. When support is added in later patches, it will only be for those select network types that support setting up a vlan on the host side, without the guest's involvement. (For example, it will be possible to configure a vlan for a guest connected to an openvswitch bridge, but it won't be possible to do that for one that is connected to a standard Linux host bridge.)
- 10 August 2012, 1 commit

Submitted by Matthias Bolte
An ESX server has one or more PhysicalNics that represent the actual hardware NICs. Those can be listed via the interface driver.

A libvirt virtual network is mapped to a HostVirtualSwitch. On the physical side a HostVirtualSwitch can be connected to PhysicalNics. On the virtual side a HostVirtualSwitch has HostPortGroups that are mapped to a libvirt virtual network's portgroups. Typically there is a HostPortGroup named 'VM Network' that is used to connect virtual machines to a HostVirtualSwitch. A second HostPortGroup, typically named 'Management Network', is used to connect the hypervisor itself to the HostVirtualSwitch. This one is not mapped to a libvirt virtual network's portgroup. There can be more HostPortGroups than those typical two on a HostVirtualSwitch.

            +---------------+-------------------+
      ...---|               |                   |   +-------------+
            | HostPortGroup |                   |---| PhysicalNic |
            | VM Network    |                   |   | vmnic0      |
      ...---|               |                   |   +-------------+
            +---------------+ HostVirtualSwitch |
                            |     vSwitch0      |
            +---------------+                   |
            | HostPortGroup |                   |
      ...---| Management    |                   |
            | Network       |                   |
            +---------------+-------------------+

The virtual counterparts of the PhysicalNic are the HostVirtualNic for the hypervisor and the VirtualEthernetCard for the virtual machines, which are grouped into HostPortGroups.

      +---------------------+   +---------------+---...
      | VirtualEthernetCard |---|               |
      +---------------------+   | HostPortGroup |
      +---------------------+   | VM Network    |
      | VirtualEthernetCard |---|               |
      +---------------------+   +---------------+
                                                |
                                +---------------+
      +---------------------+   | HostPortGroup |
      | HostVirtualNic      |---| Management    |
      +---------------------+   | Network       |
                                +---------------+---...

The currently implemented network driver can list, define and undefine HostVirtualSwitches, including HostPortGroups for virtual machines. Existing HostVirtualSwitches cannot be edited yet. This will be added in a followup patch.
- 01 August 2012, 3 commits

Submitted by Dmitry Guryanov
Parallels Cloud Server has one serious discrepancy with libvirt: libvirt stores domain configuration files in one place and storage files in other places (with the API of storage pools and storage volumes). Parallels Cloud Server stores all domain data in a single directory; for example, you may have a domain with the name fedora-15, which will be located in '/var/parallels/fedora-15.pvm', and its hard disk image will be in '/var/parallels/fedora-15.pvm/harddisk1.hdd'. I've decided to create a storage driver which produces pseudo-volumes (xml files with a volume description), and they will be 'converted' to real disk images after attaching to a VM. So if someone creates a VM with one hard disk using virt-manager, at first virt-manager creates a new volume, and then defines a domain. We can look up a volume by path in the XML domain definition and find out the location of the new domain and the size of its hard disk. Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
Submitted by Dmitry Guryanov
The Parallels driver is 'stateless', like the vmware or openvz drivers. It collects information about domains during startup using the command-line utility prlctl. VMs in Parallels are identified by UUIDs or unique names, which can be used as the respective fields in the virDomainDef structure. Currently only basic info, like description, virtual cpu count and memory amount, is implemented. Querying device information will be added in the next patches. Parallels doesn't support non-persistent domains - you can't run a domain having only a disk image; it must always be registered in the system. Functions for querying domain info have just been copied from the test driver with some changes - they extract the needed data from the previously created list of virDomainObj objects. Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
Submitted by Dmitry Guryanov
Parallels Cloud Server is a cloud-ready virtualization solution that allows users to simultaneously run multiple virtual machines and containers on the same physical server. More information can be found here: http://www.parallels.com/products/pcs/ A beta version of Parallels Cloud Server can also be downloaded there. Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
- 30 July 2012, 1 commit

Submitted by Daniel P. Berrange
Move the code that handles the LXC monitor out of the lxc_process.c file and into lxc_monitor.{c,h} Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
- 26 July 2012, 11 commits

Submitted by Osier Yang
Commands in the node device group are moved from virsh.c to virsh-nodedev.c.

* virsh.c: Remove commands in the node device group.
* virsh-nodedev.c: New file, filled with commands in the node device group.
* po/POTFILES.in: Add virsh-nodedev.c
* cfg.mk: Skip the config.h inclusion check for virsh-nodedev.c
Submitted by Osier Yang
Commands in the host group are moved from virsh.c to virsh-host.c.

* virsh.c: Remove commands in the host group.
* virsh-host.c: New file, filled with commands in the host group.
* po/POTFILES.in: Add virsh-host.c
* cfg.mk: Skip the config.h inclusion check for virsh-host.c
Submitted by Osier Yang
Commands to manage domain snapshots are moved from virsh.c to virsh-snapshot.c.

* virsh.c: Remove domain snapshot commands.
* virsh-snapshot.c: New file, filled with domain snapshot commands.
* po/POTFILES.in: Add virsh-snapshot.c
* cfg.mk: Skip the strcase and config.h inclusion checks for virsh-snapshot.c
Submitted by Osier Yang
Commands to manage secrets are moved from virsh.c to virsh-secret.c, along with a few helpers used by the secret commands.

* virsh.c: Remove secret commands and a few helpers (vshCommandOptSecret, and vshCommandOptSecretBy).
* virsh-secret.c: New file, filled with secret commands and their helpers.
* po/POTFILES.in: Add virsh-secret.c
* cfg.mk: Skip the config.h inclusion check for virsh-secret.c
Submitted by Osier Yang
Commands to manage network filters are moved from virsh.c to virsh-nwfilter.c, along with a few helpers used by the network filter commands.

* virsh.c: Remove network filter commands and a few helpers (vshCommandOptNWFilter, and vshCommandOptNWFilterBy).
* virsh-nwfilter.c: New file, filled with network filter commands and their helpers.
* po/POTFILES.in: Add virsh-nwfilter.c
* cfg.mk: Skip the config.h inclusion check for virsh-nwfilter.c
Submitted by Osier Yang
Commands to manage host interfaces are moved from virsh.c to virsh-interface.c, along with a few helpers used by the interface commands.

* virsh.c: Remove interface commands and a few helpers (vshCommandOptInterface, vshCommandOptInterfaceBy).
* virsh-interface.c: New file, filled with interface commands and their helpers.
* cfg.mk: Skip the config.h inclusion check for virsh-interface.c
* po/POTFILES.in: Add virsh-interface.c
Submitted by Osier Yang
Commands to manage networks are moved from virsh.c to virsh-network.c, along with a few helpers used by the network commands.

* virsh.c: Remove network commands and a few helpers.
* virsh-network.c: New file, filled with network commands and their helpers.
* po/POTFILES.in: Add virsh-network.c
* cfg.mk: Skip the config.h inclusion check for virsh-network.c
Submitted by Osier Yang
This splits the commands of the storage pool group into virsh-pool.c. The helpers not for common use are moved too, and the standard copyright is added to the new file.

* tools/virsh.c: Remove commands for storage pools and a few helpers (vshCommandOptVol, vshCommandOptVolBy).
* tools/virsh-pool.c: New file, filled with commands of the storage pool group and its helpers.
* po/POTFILES.in: Add virsh-pool.c
* cfg.mk: Skip the config.h inclusion check for virsh-pool.c
Submitted by Osier Yang
This splits the commands of the storage volume group into virsh-volume.c. The helpers not for common use are moved too, and the standard copyright is added to the new file.

* tools/virsh.c: Remove commands for storage volumes and a few helpers (vshCommandOptVol, vshCommandOptVolBy).
* tools/virsh-volume.c: New file, filled with commands of the storage volume group and its helpers.
* po/POTFILES.in: Add virsh-volume.c
* cfg.mk: Skip the config.h inclusion check for virsh-volume.c
Submitted by Osier Yang
This splits the commands to manage domains into virsh-domain.c. The helpers not for common use are moved too, and the standard copyright is added to the new file.

* tools/virsh.c:
  - Remove commands for the domain group, and one helper (vshDomainVcpuStateToString).
  - vshStreamSink is moved before the commands' definitions, as it is also used by commands not of the domain group, such as volUpload.
* tools/virsh-domain.c:
  - New file; commands for the domain group and the one helper are moved into it.
* po/POTFILES.in:
  - Add virsh-domain.c
* cfg.mk:
  - Skip the config.h inclusion check for virsh-domain.c
Submitted by Osier Yang
This splits the commands to monitor domain status into virsh-domain-monitor.c. The helpers not for common use are moved too, and the standard copyright is added.

* tools/virsh.c:
  - Remove commands for the domain monitoring group and a few helpers (vshDomainIOErrorToString, vshGetDomainDescription, vshDomainControlStateToString, vshDomainStateToString) not for common use.
  - Remove the inclusion of "intprops.h".
* tools/virsh-domain-monitor.c:
  - New file, filled with commands of the domain monitor group.
  - Add "intprops.h".
* cfg.mk:
  - Skip the strcase check for virsh-domain-monitor.c
  - Skip the config.h inclusion check for virsh-domain-monitor.c
* po/POTFILES.in:
  - Add virsh-domain-monitor.c
- 19 July 2012, 3 commits

Submitted by Daniel P. Berrange
Move all the code that manages stop/start of LXC processes into a separate lxc_process.{c,h} file to make the lxc_driver.c file smaller. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Submitted by Daniel P. Berrange
Move the cgroup setup code out of the lxc_controller.c file and into lxc_cgroup.{c,h}. This reduces the size of the lxc_controller.c file and paves the way to invoke cgroup setup from lxc_driver.c instead of lxc_controller.c in the future. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Submitted by Sebastian Wiedenroth
This patch brings support to manage sheepdog pools and volumes to libvirt. It uses the "collie" command-line utility that comes with sheepdog for that.

A sheepdog pool in libvirt maps to a sheepdog cluster. It needs a host and port to connect to, which in most cases is just going to be the default of localhost on port 7000.

A sheepdog volume in libvirt maps to a sheepdog vdi. To create one, specify the pool, a name and the capacity. Volumes can also be resized later. In the volume XML the vdi name has to be put into the <target><path>. To use the volume as a disk source for virtual machines, specify the vdi name as the "name" attribute of the <source>. The host and port information from the pool are specified inside the host tag.

    <disk type='network'>
      ...
      <source protocol="sheepdog" name="vdi_name">
        <host name="localhost" port="7000"/>
      </source>
    </disk>

To work right, this patch parses the output of collie, so it relies on the raw output option. There recently was a bug which caused size information to be reported wrongly. This is fixed upstream already and will be in the next release.

Signed-off-by: Sebastian Wiedenroth <wiedi@frubar.net>
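As a hedged sketch of the volume side, a vdi might be defined like this; the element names follow libvirt's generic volume XML, but the exact attributes accepted by the sheepdog backend are an assumption here:

    <volume>
      <name>vdi_name</name>
      <capacity unit='G'>10</capacity>
      <target>
        <!-- the vdi name goes into <target><path>, per the text above -->
        <path>vdi_name</path>
      </target>
    </volume>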
- 18 July 2012, 1 commit

Submitted by Daniel P. Berrange
The virnetdevtap.c and viruri.c files had two error report messages which were not annotated with _(...) Signed-off-by: Daniel P. Berrange <berrange@redhat.com>