1. 03 Sep 2014, 1 commit
  2. 02 Sep 2014, 5 commits
  3. 01 Sep 2014, 3 commits
    • blockcopy: allow larger buf-size · 0e4b49a0
      Committed by Eric Blake
      While qemu definitely caps granularity to 64 MiB, it places no
      limits on buf-size.  On a machine beefy enough for lots of
      memory, a buf-size larger than 2 GiB is feasible, so we should
      pass a 64-bit parameter.
      
      * include/libvirt/libvirt.h.in (VIR_DOMAIN_BLOCK_COPY_BUF_SIZE):
      Allow 64 bits.
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
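
      Since the parameter is now 64 bits wide, a caller can request a
      buffer above 2 GiB. A minimal sketch (not part of the commit),
      assuming an already-open virDomainPtr and placeholder disk name
      and destination XML:

        #include <libvirt/libvirt.h>

        /* Request a 3 GiB copy buffer; this value only fits in the
         * 64-bit VIR_TYPED_PARAM_ULLONG type. */
        static int
        copy_with_big_buffer(virDomainPtr dom, const char *destxml)
        {
            virTypedParameterPtr params = NULL;
            int nparams = 0, maxparams = 0;
            int ret;

            if (virTypedParamsAddULLong(&params, &nparams, &maxparams,
                                        VIR_DOMAIN_BLOCK_COPY_BUF_SIZE,
                                        3ULL * 1024 * 1024 * 1024) < 0)
                return -1;

            ret = virDomainBlockCopy(dom, "vda", destxml,
                                     params, nparams, 0);
            virTypedParamsFree(params, nparams);
            return ret;
        }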
    • selinux: properly label tap FDs with imagelabel · a4431931
      Committed by Martin Kletzander
      The cleanup in commit cf976d9d used secdef->label to label the tap
      FDs, but that cannot work, since it is a process-only label
      (svirt_t) and not an object label (e.g. svirt_image_t).  Starting
      a domain failed with EPERM; simply using secdef->imagelabel
      instead of secdef->label fixes it.
      Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
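
      The distinction in a minimal sketch (not the commit's code;
      "secdef" is a simplified stand-in for libvirt's
      virSecurityLabelDef, and the labelling goes through libselinux's
      fsetfilecon()):

        #include <selinux/selinux.h>

        struct secdef { char *label; char *imagelabel; };

        static int
        label_tap_fd(int tapfd, struct secdef *secdef)
        {
            /* Wrong: secdef->label is the process type (svirt_t);
             * applying it to a file object fails and the domain
             * start dies with EPERM. */
            /* return fsetfilecon(tapfd, secdef->label); */

            /* Right: use the object label (svirt_image_t). */
            return fsetfilecon(tapfd, secdef->imagelabel);
        }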
    • Fix connection to already running session libvirtd · 0f03ca6d
      Committed by Christophe Fergeau
      Since 1b807f92, connecting with virsh to an already running session
      libvirtd fails with:
      $ virsh list --all
      error: failed to connect to the hypervisor
      error: no valid connection
      error: Failed to connect socket to
      '/run/user/1000/libvirt/libvirt-sock': Transport endpoint is already
      connected
      
      This is caused by a logic error in virNetSocketNewConnectUnix: even if
      the connection to the daemon socket succeeded, we still try to spawn the
      daemon and then connect to it.
      This commit changes the logic to not try to spawn libvirtd if we
      successfully connected to its socket.
      
      Most of this commit is whitespace changes; viewing it with -w
      (ignore whitespace) is recommended.
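
      A minimal sketch of the corrected control flow (not the actual
      virNetSocketNewConnectUnix code; spawn_daemon() is a hypothetical
      stand-in for the autostart logic):

        #include <string.h>
        #include <sys/socket.h>
        #include <sys/un.h>
        #include <unistd.h>

        extern int spawn_daemon(const char *path);  /* hypothetical */

        static int
        connect_unix(const char *path, int spawn)
        {
            struct sockaddr_un addr = { .sun_family = AF_UNIX };
            int fd;

            strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
            if ((fd = socket(AF_UNIX, SOCK_STREAM, 0)) < 0)
                return -1;

            /* If the daemon is already running, we are done: do NOT
             * fall through to spawn and connect again, which is what
             * produced "Transport endpoint is already connected". */
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
                return fd;

            if (!spawn || spawn_daemon(path) < 0 ||
                connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                close(fd);
                return -1;
            }
            return fd;
        }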
  4. 30 Aug 2014, 1 commit
    • storage: zfs: fix double listing of new volumes · c4d2a102
      Committed by Roman Bogorodskiy
      Currently, after running the commands to create a new volume,
      virStorageBackendZFSCreateVol calls virStorageBackendZFSFindVols,
      which calls virStorageBackendZFSParseVol.
      
      virStorageBackendZFSParseVol checks if a volume already exists by
      trying to get it using virStorageVolDefFindByName.
      
      For a just-created volume it returns NULL, so the volume is
      reported as new and appended to pool->volumes. This causes the
      volume to be listed twice, as storageVolCreateXML appends this
      new volume to the list as well.
      
      Fix that by passing the new volume definition to
      virStorageBackendZFSParseVol so that it can determine whether it
      needs to add this volume to the list.
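
      The shape of the resulting guard, as a minimal sketch with
      simplified stand-in types (libvirt's actual pool and volume
      structures differ):

        #include <string.h>

        struct vol  { const char *name; };
        struct pool { struct vol *vols[64]; size_t nvols; };

        /* "newvol" is the definition CreateVol just built: the
         * caller appends that one itself, so the parser must skip
         * it to avoid the double listing. */
        static int
        should_append(struct pool *p, struct vol *newvol,
                      const char *name)
        {
            size_t i;

            for (i = 0; i < p->nvols; i++)
                if (strcmp(p->vols[i]->name, name) == 0)
                    return 0;          /* already in the list */
            if (newvol && strcmp(newvol->name, name) == 0)
                return 0;              /* caller will append it */
            return 1;
        }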
  5. 29 Aug 2014, 7 commits
    • qemu_driver: Resolve Coverity FORWARD_NULL · 5c0dad7b
      Committed by John Ferlan
      In qemuDomainSnapshotCreateDiskActive(), if we jumped to cleanup
      from a failed actions = virJSONValueNewArray(), then 'cfg' would
      be NULL.

      So just return -1 instead, which in turn removes the need for
      the cleanup label.
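
      The pattern, as a self-contained sketch (simplified stand-in
      types, not the qemu driver code):

        #include <stdlib.h>

        struct json { int len; };
        struct cfg  { int refs; };

        static struct json *new_array(void)
        { return calloc(1, sizeof(struct json)); }
        static struct cfg *get_config(void)
        { static struct cfg c; return &c; }

        /* Before: a failed new_array() jumped to a 'cleanup' label
         * that dereferenced the still-NULL cfg (Coverity's
         * FORWARD_NULL). After: return -1 directly, so cleanup code
         * only ever runs once cfg exists. */
        static int
        snapshot_disks(void)
        {
            struct json *actions = new_array();
            struct cfg *cfg;

            if (!actions)
                return -1;             /* nothing to clean up yet */

            cfg = get_config();
            /* ... work that can fail and then needs real cleanup ... */
            free(actions);
            (void)cfg;
            return 0;
        }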
    • virnetserverservice: Resolve Coverity ARRAY_VS_SINGLETON · e387f4c1
      Committed by John Ferlan
      Coverity complained about the following:
      
      (3) Event ptr_arith:
         Performing pointer arithmetic on "cur_fd" in expression "cur_fd++".
      130             return virNetServerServiceNewFD(*cur_fd++,
      
      The complaint is that pointer arithmetic is taking place instead
      of the expected auto-increment of the variable.  Adding some
      well-placed parentheses makes the intended order of operations
      explicit.
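
      A self-contained demonstration of the precedence in question
      (not libvirt code):

        #include <stdio.h>

        int main(void)
        {
            int fds[] = { 3, 4, 5 };
            int *cur_fd = fds;

            /* "*cur_fd++" parses as "*(cur_fd++)": dereference the
             * current element, then advance the POINTER; that is the
             * intended behaviour, but Coverity flags the bare form. */
            int a = *cur_fd++;      /* a == 3, cur_fd -> fds[1] */

            /* Parenthesized, the same intent is explicit. */
            int b = *(cur_fd++);    /* b == 4, cur_fd -> fds[2] */

            /* For contrast, "(*cur_fd)++" increments the VALUE. */
            int c = (*cur_fd)++;    /* c == 5, fds[2] becomes 6 */

            printf("%d %d %d %d\n", a, b, c, fds[2]);  /* 3 4 5 6 */
            return 0;
        }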
    • qemu: Allow use of iothreads for disk definitions · ef8da2ad
      Committed by John Ferlan
      For virtio-blk-pci disks that have the disk iothread attribute
      and run a capable emulator, add "iothread=iothread#" to the
      -device command line in order to enable iothreads for the disk,
      as long as the capability is available, the iothread value
      provided is valid, and iothreads are supported for the disk
      device being added.
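
      A rough sketch of how the argument is assembled (placeholder
      drive id; not the actual qemu_command.c code):

        #include <stdio.h>

        /* iothread is the thread number from the XML; zero means
         * "no iothread", so the property is simply omitted. */
        static void
        format_disk_device(char *buf, size_t len, unsigned int iothread)
        {
            int n = snprintf(buf, len,
                             "virtio-blk-pci,drive=drive-virtio-disk0");
            if (n > 0 && (size_t)n < len && iothread > 0)
                snprintf(buf + n, len - n,
                         ",iothread=iothread%u", iothread);
        }

      With iothread == 1 this yields a -device argument of
      "virtio-blk-pci,drive=drive-virtio-disk0,iothread=iothread1".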
    • domain_conf: Add support for iothreads in disk definition · e2523de5
      Committed by John Ferlan
      Add a new disk "driver" attribute "iothread", parsed as the
      number of the thread the disk should use. To keep usage and
      configuration simple, a value of zero indicates iothreads are
      not supported for the device, and a positive value indicates the
      specific thread to try to use.
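
      The attribute in context, embedded as a C string for
      illustration (target device and image path are placeholders;
      such a snippet could be handed to e.g. virDomainAttachDevice()):

        /* Selects iothread 1 for this disk. */
        static const char *disk_xml =
            "<disk type='file' device='disk'>"
            "  <driver name='qemu' type='raw' iothread='1'/>"
            "  <source file='/var/lib/libvirt/images/data.img'/>"
            "  <target dev='vdb' bus='virtio'/>"
            "</disk>";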
    • qemu: Add support for iothreads · 72edaae7
      Committed by John Ferlan
      Add a new capability to ensure the iothreads feature exists for
      the qemu emulator being run; it requires the "query-iothreads"
      QMP command. From the domain XML, add the corresponding command
      line arguments in order to generate the threads. The iothreads
      use the name space "iothread#", so the future patch that adds
      iothread support to the disk definition merely needs to name
      which of the available threads to use.

      Add tests to ensure the xml/argv processing is correct.  Note
      that no change was made to qemuargv2xmltest.c, as processing the
      -object element would require knowing more than just iothreads.
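
      The generated arguments, sketched as a C string list (the
      "-object iothread,id=..." syntax is qemu's; exact ordering in
      the real command line may differ):

        /* For <iothreads>2</iothreads> the driver emits one -object
         * per thread, named within the "iothread#" name space. */
        static const char *iothread_argv[] = {
            "-object", "iothread,id=iothread1",
            "-object", "iothread,id=iothread2",
        };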
    • domain_conf: Introduce iothreads XML · ee3a9620
      Committed by John Ferlan
      Introduce XML to allow adding iothreads to the domain. These can
      be used by virtio-blk-pci devices in order to assign a specific
      thread to handle the workload for the device.  The iothreads are
      the official implementation of the virtio-blk Data Plane that
      has been in tech preview for QEMU.
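
      The new element in context, again as an illustrative C string
      (the rest of the domain XML is elided):

        /* Two iothreads for the domain; disks then select one via
         * the driver iothread attribute shown above. */
        static const char *iothreads_xml =
            "<domain type='kvm'>"
            /* ... */
            "  <iothreads>2</iothreads>"
            /* ... */
            "</domain>";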
    • libxl_migration: Resolve Coverity NULL_RETURNS · 0322643e
      Committed by John Ferlan
      Coverity noted that every caller of libxlDomainEventQueue()
      could ensure the second parameter (event) was non-NULL before
      calling, except in this case. Looking at the code and how events
      are used, it seems that rather than generating an event for the
      dom == NULL condition, the resume/suspend event should be queued
      after the virDomainSaveStatus() call, which will goto cleanup
      and queue the saved event anyway.
      Signed-off-by: John Ferlan <jferlan@redhat.com>
  6. 28 Aug 2014, 23 commits