1. 28 November 2012 (29 commits)
  2. 27 November 2012 (6 commits)
    • storage: fix device detach regression with cgroup ACLs · 1b2ebf95
      Committed by Eric Blake
      https://bugzilla.redhat.com/show_bug.cgi?id=876828
      
      Commit 38c4a9cc introduced a regression in hot unplugging of disks
      from qemu, where cgroup device ACLs were no longer being revoked
      (thankfully not a security hole: cgroup ACLs only prevent open() of
      the disk, so revoking the ACL prevents future opens but doesn't stop
      abuse via an fd that was already opened before the ACL change).
      
      The actual regression is due to a latent bug.  The hot unplug code
      was computing the set of files needing cgroup ACL revocation based
      on the XML passed in by the user, rather than based on the domain's
      details on which disk was being deleted.  As long as the revoke
      path was always recomputing the backing chain, this didn't really
      matter; but now that we want to compute the chain exactly once and
      remember that computation, we need to hang on to the backing chain
      until after the revoke has happened.
      
      * src/qemu/qemu_hotplug.c (qemuDomainDetachPciDiskDevice):
      Transfer backing chain before deletion.
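
      A minimal sketch of the idea, with simplified and partly hypothetical
      names rather than libvirt's exact internals: steal the already-computed
      chain from the domain's record of the disk, so the revoke step still
      sees it after the disk definition is torn down.

          /* dev->data.disk came from user XML and has no cached chain;
           * detach is the domain's own record of the disk being removed */
          dev->data.disk->backingChain = detach->backingChain;
          detach->backingChain = NULL;   /* transfer, don't copy, ownership */

          /* ... the actual unplug happens here ... */

          /* revoke cgroup ACLs for every file in the transferred chain */
          qemuTeardownDiskCgroup(vm, dev->data.disk);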
    • qemu: Add support for gluster protocol based network storage backend. · c33c36d2
      Committed by Harsh Prateek Bora
      Qemu accepts the gluster protocol as a supported network storage backend, among others.
      Signed-off-by: Harsh Prateek Bora <harsh@linux.vnet.ibm.com>
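
      For context, qemu addresses gluster-backed images with URIs of roughly
      this shape (qemu's own syntax, shown for illustration; the hosts, ports
      and volume names here are made up):

          -drive file=gluster+tcp://example.org:6000/Volume1/image,if=virtio,format=raw
          -drive file=gluster+unix:///Volume2/image?socket=/path/to/sock,if=virtio,format=raw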
    • Add Gluster protocol as supported network disk backend · a2d2b80f
      Committed by Harsh Prateek Bora
      This patch introduces the RNG schema and updates the necessary data
      structures to allow various hypervisors to make use of the Gluster
      protocol as one of the supported network disk backends. The next patch
      will add support for this feature in Qemu, which now supports the
      Gluster protocol as one of its network based storage backends.
      
      Two new optional attributes are introduced for the <host> element:
      'transport' and 'socket'. Valid transport values are tcp, unix or rdma.
      If none is specified, tcp is assumed. If the transport is unix, the
      socket attribute specifies the path to the unix socket.
      
      This patch allows users to specify disks on gluster backends like this:
      
          <disk type='network' device='disk'>
            <driver name='qemu' type='raw'/>
            <source protocol='gluster' name='Volume1/image'>
              <host name='example.org' port='6000' transport='tcp'/>
            </source>
            <target dev='vda' bus='virtio'/>
          </disk>
      
          <disk type='network' device='disk'>
            <driver name='qemu' type='raw'/>
            <source protocol='gluster' name='Volume2/image'>
              <host transport='unix' socket='/path/to/sock'/>
            </source>
            <target dev='vdb' bus='virtio'/>
          </disk>
      Signed-off-by: Harsh Prateek Bora <harsh@linux.vnet.ibm.com>
    • build: avoid C99 for loop · 7e5aa78d
      Committed by Eric Blake
      Although we require various C99 features, we don't yet require a
      complete C99 compiler.  On RHEL 5, compilation complained:
      
      qemu/qemu_command.c: In function 'qemuBuildGraphicsCommandLine':
      qemu/qemu_command.c:4688: error: 'for' loop initial declaration used outside C99 mode
      
      * src/qemu/qemu_command.c (qemuBuildGraphicsCommandLine): Declare
      variable sooner.
      * src/qemu/qemu_process.c (qemuProcessInitPasswords): Likewise.
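
      The fix follows the usual portability pattern: hoist the declaration to
      the top of the enclosing block. A generic illustration (variable names
      invented for the example):

          size_t i;                         /* C89: declare at block scope */

          for (i = 0; i < ndisplays; i++)   /* was: for (size_t i = 0; ...) */
              configureDisplay(displays[i]);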
    • Refactor ESX storage driver to implement facade pattern · 067e83eb
      Committed by Ata E Husain Bohra
      The patch refactors the current ESX storage driver for the following reasons:
      
      1. Given that most of the public APIs exposed by the storage driver in
      libvirt stay the same, the ESX storage driver should not implement logic
      specific to only one supported format (the current implementation only
      supports VMFS).
      2. Decoupling the interface from a specific storage implementation gives
      us an extensible design into which implementations for other supported
      storage formats can be hooked.
      
      This patch refactors the current driver into a facade: the driver
      exposes all the public libvirt APIs but uses backend drivers to get the
      required task done. The backend drivers provide the implementation
      specific to the type of storage device (a sketch of this dispatch
      follows the file list below).
      
      File changes:
      ------------------
      esx_storage_driver.c ----> esx_storage_driver.c (base storage driver)
                           |
                           |---> esx_storage_backend_vmfs.c (VMFS backend)
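
      A minimal sketch of the dispatch idea with hypothetical names (the real
      driver's tables and signatures differ): the facade looks up the backend
      that owns a pool and forwards the call.

          /* one table of operations per storage format */
          typedef struct {
              int (*poolRefresh)(virStoragePoolPtr pool, unsigned int flags);
              int (*volDelete)(virStorageVolPtr vol, unsigned int flags);
          } esxStorageBackend;

          extern esxStorageBackend esxVMFSBackend;  /* esx_storage_backend_vmfs.c */

          /* facade: every public entry point forwards to the owning backend */
          static int
          esxStoragePoolRefresh(virStoragePoolPtr pool, unsigned int flags)
          {
              esxStorageBackend *backend = pool->privateData; /* set at lookup */
              return backend->poolRefresh(pool, flags);
          }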
  3. 26 November 2012 (5 commits)
    • lxc: Don't crash if no security driver is specified in libvirt_lxc · 99a388e6
      Committed by Peter Krempa
      When no security driver is specified, libvirt_lxc segfaults because a
      debug message tries to access security labels for the container that
      are not present.
      
      This problem was introduced in commit 6c3cf57d.
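
      The defensive pattern, sketched with simplified field names (the real
      structures differ): check that a label exists before formatting it.

          /* guard the debug output instead of dereferencing unconditionally */
          if (def->nseclabels > 0)
              VIR_DEBUG("Security model: %s", def->seclabels[0]->model);
          else
              VIR_DEBUG("No security model specified");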
    • lxc: Avoid segfault of libvirt_lxc helper on early cleanup paths · 81efb13b
      Committed by Peter Krempa
      Early jumps to the cleanup label caused a crash of the libvirt_lxc
      container helper, because the cleanup section called
      virLXCControllerDeleteInterfaces(ctrl) without checking the ctrl
      argument for NULL. The argument was dereferenced soon after.
      
      $ /usr/libexec/libvirt_lxc
      /usr/libexec/libvirt_lxc: missing --name argument for configuration
      Segmentation fault
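
      The usual fix, sketched here with hypothetical internals: make the
      cleanup helper a no-op on NULL, the same convention free() follows.

          static void
          virLXCControllerDeleteInterfaces(virLXCControllerPtr ctrl)
          {
              size_t i;

              if (!ctrl)                 /* tolerate early 'goto cleanup' */
                  return;

              for (i = 0; i < ctrl->nveths; i++)
                  VIR_FREE(ctrl->veths[i]);   /* illustrative teardown */
          }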
    • Add private data pointer to virStoragePool and virStorageVol · 2b121dbc
      Committed by Ata E Husain Bohra
      This will simplify the refactoring of the ESX storage driver to support
      a VMFS and an iSCSI backend.
      
      One of the tasks the storage driver needs to do is to decide which backend
      driver needs to be invoked for a given request. This approach extends
      virStoragePool and virStorageVol to store two extra parameters:
      
      1. privateData: stores a pointer to the respective backend storage driver.
      2. privateDataFreeFunc: stores a cleanup function pointer.
      
      virGetStoragePool and virGetStorageVol are modified to accept these extra
      parameters as user params. virStoragePoolDispose and virStorageVolDispose
      invoke the cleanup function if one is set.
      
      The private data pointer allows the ESX storage driver to store a pointer
      to the used backend with each storage pool and volume. This avoids the need
      to detect the correct backend in each storage driver function call.
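
      Sketched with signatures close to (but not copied verbatim from) the
      patch, the shape of the change looks like this:

          /* allocation: callers may attach backend data plus a destructor */
          virStoragePoolPtr
          virGetStoragePool(virConnectPtr conn, const char *name,
                            const unsigned char *uuid,
                            void *privateData, virFreeCallback freeFunc);

          /* disposal: run the destructor on the attached data, if any */
          static void
          virStoragePoolDispose(virStoragePoolPtr pool)
          {
              if (pool->privateDataFreeFunc)
                  pool->privateDataFreeFunc(pool->privateData);
              /* ... existing cleanup ... */
          }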
    • cpu: Add Intel Haswell cpu model · bb2704e7
      Committed by Peter Krempa
      The new model supports the following features in addition to those
      supported by SandyBridge:
      
      fma, pcid, movbe, fsgsbase, bmi1, hle, avx2, smep, bmi2, erms, invpcid,
      rtm
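
      CPU models of this era live in src/cpu/cpu_map.xml; an entry extending
      an existing model looks roughly like this (an illustrative sketch, not
      the exact hunk from the patch):

          <model name='Haswell'>
            <model name='SandyBridge'/>
            <feature name='fma'/>
            <feature name='pcid'/>
            <feature name='movbe'/>
            <!-- ... the remaining features listed above ... -->
            <feature name='rtm'/>
          </model>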
    • storage: fix logical volume cloning · 70f0bbe8
      Committed by Ján Tomko
      Commit 258e06c8 removed setting of the volume type to
      VIR_STORAGE_VOL_BLOCK, which leads to failures in
      storageVolumeCreateXMLFrom.
      
      The type (and target.format) of the volume was left at zero. In
      virStorageBackendGetBuildVolFromFunction this gets interpreted as
      VIR_STORAGE_FILE_NONE, and the qemu-img tool is called with the unknown
      "none" format.
      
      Bug: https://bugzilla.redhat.com/show_bug.cgi?id=879780
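
      The shape of the fix, sketched with assumptions about the logical
      backend (simplified from the real patch): restore explicit typing when
      the volume is instantiated.

          /* logical volumes are block devices in raw format; say so
           * explicitly instead of leaving both fields at zero */
          vol->type = VIR_STORAGE_VOL_BLOCK;
          vol->target.format = VIR_STORAGE_FILE_RAW;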