1. 15 December 2016, 1 commit
  2. 14 December 2016, 3 commits
    • Avoid variable named 'stat' · a81cfb64
      Daniel P. Berrange committed
      Using a variable named 'stat' clashes with the system function
      'stat()', causing compiler warnings on some platforms:
      
      cc1: warnings being treated as errors
      ../../src/qemu/qemu_monitor_text.c: In function 'parseMemoryStat':
      ../../src/qemu/qemu_monitor_text.c:604: error: declaration of 'stat' shadows a global declaration [-Wshadow]
      /usr/include/sys/stat.h:455: error: shadowed declaration is here [-Wshadow]
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
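      A minimal C illustration of the clash (the function and variable
      names below are hypothetical; the real fix in qemu_monitor_text.c
      simply renames the offending local):

        #include <sys/stat.h>   /* declares the global stat() function */

        static unsigned long long
        parseExample(unsigned long long value)
        {
            /* A local named 'stat' shadows stat() from <sys/stat.h>;
             * with -Wshadow and warnings-as-errors the build fails. */
            unsigned long long stat = value;

            /* Renaming the local (e.g. to 'statValue') avoids the clash. */
            unsigned long long statValue = stat;

            return statValue;
        }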
    • qemu: Allow use of hot plugged host CPUs if no affinity set · 283e2904
      Viktor Mihajlovski committed
      If the cpuset cgroup controller is disabled in /etc/libvirt/qemu.conf,
      QEMU virtual machines can in principle use all host CPUs, even hot
      plugged ones, as long as they have no explicit CPU affinity defined.
      
      However, there's libvirt code supposed to handle the situation where
      the libvirt daemon itself is not using all host CPUs. The code in
      qemuProcessInitCpuAffinity attempts to set an affinity mask including
      all defined host CPUs. Unfortunately, the resulting affinity mask for
      the process will not contain the offline CPUs. See also the
      sched_setaffinity(2) man page.
      
      That means that even if the host CPUs come online again, they won't be
      used by the QEMU process anymore. The same is true for newly hot
      plugged CPUs. So we are effectively preventing QEMU from using all
      processors instead of enabling it to use them.
      
      It only makes sense to set the QEMU process affinity if we're able
      to actually grow the set of usable CPUs, i.e. if the process affinity
      is a subset of the online host CPUs.
      
      There's still the chance that for some reason the deliberately chosen
      libvirtd affinity matches the online host CPU mask by accident. In this
      case the behavior remains as it was before (CPUs offline while setting
      the affinity will not be used if they show up later on).
      Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
      Tested-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
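      A sketch of the subset check described above, using the plain
      sched_getaffinity(2)/CPU_* interface rather than libvirt's
      virBitmap/virProcess helpers (names are illustrative):

        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdbool.h>

        /* Only widen the process affinity to "all defined host CPUs" if
         * the current mask is a proper subset of the online CPUs; a
         * deliberately narrowed libvirtd affinity is left untouched. */
        static bool
        affinityIsSubsetOfOnline(cpu_set_t *current, cpu_set_t *online)
        {
            cpu_set_t both;

            CPU_AND(&both, current, online);       /* both = current & online */
            return CPU_EQUAL(&both, current) &&    /* current is contained... */
                   !CPU_EQUAL(current, online);    /* ...and not identical    */
        }

      qemuProcessInitCpuAffinity would then only attempt to reset the
      affinity when such a check succeeds.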
    • qemu: Fix virQEMUCapsFindTarget on ppc64le · f00c0047
      Jiri Denemark committed
      virQEMUCapsFindTarget is supposed to find an alternative QEMU binary if
      qemu-system-$GUEST_ARCH doesn't exist. The alternative is using the host
      architecture when it is compatible with $GUEST_ARCH. But special
      treatment has to be applied for ppc64le, since the QEMU binary is always
      called qemu-system-ppc64.
      
      Broken by me in v2.2.0-171-gf2e71550.
      
      https://bugzilla.redhat.com/show_bug.cgi?id=1403745
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
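      The special case amounts to normalizing the guest architecture before
      composing the binary name; a hedged sketch using plain strings instead
      of libvirt's virArch values:

        #include <stdio.h>
        #include <string.h>

        /* There is no qemu-system-ppc64le: both endiannesses are served by
         * qemu-system-ppc64, so fold ppc64le back to ppc64 before building
         * the candidate binary name. */
        static void
        qemuBinaryNameForGuestArch(const char *guestarch,
                                   char *buf, size_t buflen)
        {
            if (strcmp(guestarch, "ppc64le") == 0)
                guestarch = "ppc64";

            snprintf(buf, buflen, "qemu-system-%s", guestarch);
        }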
  3. 13 December 2016, 11 commits
    • perf: add branch_misses perf event support · 8981d792
      Nitesh Konkar committed
      This patch adds support and documentation
      for the branch_misses perf event.
      Signed-off-by: Nitesh Konkar <nitkon12@linux.vnet.ibm.com>
    • qemu: agent: take monitor lock in qemuAgentNotifyEvent · cdd68193
      Nikolay Shirokovskiy committed
      qemuAgentNotifyEvent accesses the monitor structure and is called on qemu
      reset/shutdown/suspend events under the domain lock. Other monitor
      functions, on the other hand, take the monitor lock and don't hold the
      domain lock. Thus it is possible to have risky simultaneous access to the
      structure from two threads. Let's take the monitor lock here to make the
      access exclusive.
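      Conceptually the fix just brackets the notification with the monitor
      lock, as every other accessor already does; a minimal sketch with a
      pthread mutex standing in for libvirt's virObjectLock (names are
      illustrative):

        #include <pthread.h>

        typedef struct {
            pthread_mutex_t lock;
            int await_event;        /* state touched by both code paths */
        } AgentMonitor;

        /* Called from the reset/shutdown/suspend event handlers, which
         * hold only the domain lock; taking the monitor lock serializes
         * this access with the regular monitor functions. */
        static void
        agentNotifyEvent(AgentMonitor *mon, int event)
        {
            pthread_mutex_lock(&mon->lock);
            mon->await_event = event;
            pthread_mutex_unlock(&mon->lock);
        }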
    • qemu: don't use vm when lock is dropped in qemuDomainGetFSInfo · c9a191fc
      Nikolay Shirokovskiy committed
      The current call to qemuAgentGetFSInfo in qemuDomainGetFSInfo is
      unsafe: the domain lock is dropped while we still use vm->def. Let's
      make a copy of the def to fix that.
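      The safe pattern is to snapshot what is needed from vm->def while the
      domain lock is still held, then operate only on the copy once the lock
      is dropped for the slow agent call; a self-contained sketch (libvirt
      copies the full definition, the types here are stand-ins):

        #include <pthread.h>
        #include <stdlib.h>
        #include <string.h>

        typedef struct { char name[64]; } DomDef;                  /* stand-in */
        typedef struct { pthread_mutex_t lock; DomDef def; } Dom;

        static int
        domainGetFSInfoSafe(Dom *vm, int (*agentGetFSInfo)(const DomDef *))
        {
            DomDef *copy = malloc(sizeof(*copy));
            int ret;

            if (!copy)
                return -1;

            pthread_mutex_lock(&vm->lock);
            memcpy(copy, &vm->def, sizeof(*copy));  /* copy under the lock */
            pthread_mutex_unlock(&vm->lock);        /* lock dropped here   */

            ret = agentGetFSInfo(copy);             /* never touches vm->def */
            free(copy);
            return ret;
        }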
    • qemu: agent: fix uninitialized var case in qemuAgentGetFSInfo · 3ab9652a
      Nikolay Shirokovskiy committed
      In the case of 0 filesystems, *info is not set, while according to the
      virDomainGetFSInfo contract the user should call free on it even in
      the 0-filesystems case. Thus we need to set it properly. NULL is
      enough, as free() handles NULL just fine.
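      The contract is simply that the out parameter is always assigned, so
      callers can free it unconditionally; a minimal sketch of the pattern:

        #include <stdlib.h>

        typedef struct { char *mountpoint; } FSInfo;

        /* Returns the number of filesystems or -1 on error; *info is set
         * even when the count is 0, so free(*info) is always safe. */
        static int
        getFSInfo(FSInfo ***info, size_t nfs)
        {
            *info = NULL;               /* the fix: initialize the out param */

            if (nfs == 0)
                return 0;               /* caller's free(NULL) is a no-op */

            if (!(*info = calloc(nfs, sizeof(**info))))
                return -1;

            return (int)nfs;
        }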
    • qemu: Fix GetBlockInfo setting allocation from wr_highest_offset · cf436a56
      John Ferlan committed
      The libvirt-domain.h documentation indicates that a qcow2 file in a
      filesystem being used as a backing store should report the disk
      space occupied by the file; however, commit id '15fa84ac' altered the
      code to trust that wr_highest_offset should be used whenever
      wr_highest_offset_valid was set.
      
      As it turns out this will lead to indeterminate results. For an active
      domain when qemu hasn't yet had the need to find the wr_highest_offset
      value, qemu will report 0 even though qemu-img will report the proper
      disk size. This causes reporting of the following XML:
      
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2'/>
          <source file='/path/to/test-1g.qcow2'/>
      
      to be as follows:
      
      Capacity:       1073741824
      Allocation:     0
      Physical:       1074139136
      
      with qemu-img indicating:
      
      image: /path/to/test-1g.qcow2
      file format: qcow2
      virtual size: 1.0G (1073741824 bytes)
      disk size: 1.0G
      
      Once the backing source file is opened in the guest, wr_highest_offset
      is updated, but only to the high-water mark and not to the size of the file.
      
      This patch will adjust the logic to check for the file backed qcow2 image
      and enforce setting the allocation to the returned 'physical' value, which
      is the 'actual-size' value from a 'query-block' operation.
      
      NB: The other consumer of the wr_highest_offset output (GetAllDomainStats)
      has a contract that indicates 'allocation' is the offset of the highest
      written sector, so it doesn't need adjustment.
      Signed-off-by: John Ferlan <jferlan@redhat.com>
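      A hedged sketch of the adjusted decision: for a file-backed qcow2
      source, GetBlockInfo's allocation is taken from the 'physical'
      (actual-size) value instead of the possibly-zero write high-water
      mark (field and enum names are illustrative, not libvirt's exact ones):

        typedef enum { SRC_FILE, SRC_BLOCK } SrcType;
        typedef enum { FMT_RAW, FMT_QCOW2 } SrcFormat;

        typedef struct {
            SrcType type;
            SrcFormat format;
            int wr_highest_offset_valid;
            unsigned long long wr_highest_offset;  /* from query-blockstats */
            unsigned long long physical;           /* 'actual-size' from query-block */
        } BlockStats;

        static unsigned long long
        blockInfoAllocation(const BlockStats *st)
        {
            /* A qcow2 file on a filesystem should report the space the
             * file occupies; the high-water mark may legitimately be 0. */
            if (st->type == SRC_FILE && st->format == FMT_QCOW2)
                return st->physical;

            if (st->wr_highest_offset_valid)
                return st->wr_highest_offset;

            return st->physical;
        }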
    • util: Introduce virStorageSourceUpdateCapacity · 9d734b60
      John Ferlan committed
      Instead of having duplicated code in qemuStorageLimitsRefresh and
      virStorageBackendUpdateVolTargetInfo to get capacity specific data
      about the storage backing source or volume -- create a common API
      to handle the details for both.
      
      As a side effect, virStorageFileProbeFormatFromBuf returns to being
      a local/static helper to virstoragefile.c
      
      For the QEMU code - if the probe is done, then the format is saved so
      as to avoid future such probes.
      
      For the storage backend code, there is no need to deal with the probe
      since we cannot call the new API if target->format == NONE.
      Signed-off-by: John Ferlan <jferlan@redhat.com>
    • util: Introduce virStorageSourceUpdateBackingSizes · 3039ec96
      John Ferlan committed
      Instead of having duplicated code in qemuStorageLimitsRefresh and
      virStorageBackendUpdateVolTargetInfoFD to fill in the storage backing
      source or volume allocation, capacity, and physical values - create a
      common API that will handle the details for both.
      
      The common API will fill in "default" capacity values as well - although
      those more than likely will be overridden by subsequent code. Having just
      one place to determine what the values should be will make things
      more consistent.
      
      For the QEMU code - the data filled in will be for inactive domains
      for the GetBlockInfo and DomainGetStatsOneBlock APIs. For the storage
      backend code - the data will be filled in during the volume updates.
      Signed-off-by: John Ferlan <jferlan@redhat.com>
    • util: Introduce virStorageSourceUpdatePhysicalSize · c5f61513
      John Ferlan committed
      Commit id '8dc27259' introduced virStorageSourceUpdateBlockPhysicalSize
      in order to retrieve the physical size for a block backed source device
      for an active domain since commit id '15fa84ac' changed to use the
      qemuMonitorGetAllBlockStatsInfo and qemuMonitorBlockStatsUpdateCapacity
      APIs to (essentially) retrieve the "actual-size" from a 'query-block'
      operation for the source device.
      
      However, the code was only made functional for a BLOCK backing type,
      and it neglected to use qemuOpenFile, using a plain open instead. After
      the open, the block lseek would find the end of the block and set the
      physical value, close the fd and return.
      
      Since the code would return 0 immediately if the source device wasn't
      a BLOCK backed device, the physical would be displayed incorrectly,
      such as follows in domblkinfo for a file backed source device:
      
      Capacity:       1073741824
      Allocation:     0
      Physical:       0
      
      This patch will modify the algorithm to get the physical size for other
      backing types and it will make use of the qemuDomainStorageOpenStat
      helper in order to open/stat the source file depending on its type.
      The qemuDomainGetStatsOneBlock will no longer inhibit printing errors,
      but it will still ignore them leaving the physical value set to 0.
      Signed-off-by: John Ferlan <jferlan@redhat.com>
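      The physical size is derived differently depending on the backing
      type; a hedged sketch of that part of the logic, using plain
      open/fstat/lseek where the patch goes through the
      qemuDomainStorageOpenStat helper:

        #include <fcntl.h>
        #include <sys/stat.h>
        #include <unistd.h>

        /* Returns the physical size in bytes, or 0 on error.  For a block
         * device only seeking to the end reveals the size; for a regular
         * file the stat data already carries it. */
        static unsigned long long
        storagePhysicalSize(const char *path)
        {
            unsigned long long size = 0;
            struct stat sb;
            int fd = open(path, O_RDONLY);

            if (fd < 0)
                return 0;

            if (fstat(fd, &sb) == 0) {
                if (S_ISBLK(sb.st_mode)) {
                    off_t end = lseek(fd, 0, SEEK_END);
                    if (end > 0)
                        size = (unsigned long long)end;
                } else {
                    size = (unsigned long long)sb.st_size;
                }
            }

            close(fd);
            return size;
        }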
    • qemu: Introduce helper qemuDomainStorageUpdatePhysical · a7fea19f
      John Ferlan committed
      Currently just a shim to call virStorageSourceUpdateBlockPhysicalSize
      Signed-off-by: John Ferlan <jferlan@redhat.com>
    • qemu: Add helpers to handle stat data for qemuStorageLimitsRefresh · 732af77c
      John Ferlan committed
      Split out the opening of the file and fetch of the stat buffer into a
      helper qemuDomainStorageOpenStat. This will handle either opening the
      local or remote storage.
      
      Additionally split out the cleanup of that into a separate helper
      qemuDomainStorageCloseStat which will either close the file or
      call the virStorageFileDeinit function.
      Signed-off-by: John Ferlan <jferlan@redhat.com>
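      A rough sketch of the split (the remote branch is only indicated here;
      the real helpers dispatch to the storage-file backend and call
      virStorageFileDeinit for network-backed sources):

        #include <fcntl.h>
        #include <stdbool.h>
        #include <sys/stat.h>
        #include <unistd.h>

        /* Open the source and fetch its stat buffer; local paths use a
         * plain open/fstat, remote sources would go through the storage
         * file backend instead. */
        static int
        storageOpenStat(const char *path, bool local, int *fd, struct stat *sb)
        {
            if (!local)
                return -1;                  /* remote case elided in this sketch */

            if ((*fd = open(path, O_RDONLY)) < 0)
                return -1;

            if (fstat(*fd, sb) < 0) {
                close(*fd);
                *fd = -1;
                return -1;
            }
            return 0;
        }

        /* Matching cleanup for the local case. */
        static void
        storageCloseStat(int *fd)
        {
            if (*fd >= 0) {
                close(*fd);
                *fd = -1;
            }
        }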
    • qemu: Clean up description for qemuStorageLimitsRefresh · 7149d169
      John Ferlan committed
      Originally added by commit id '89646e69' prior to commit id '15fa84ac'
      and '71d2c172' which ensured that qemuStorageLimitsRefresh was only called
      for inactive domains.
      
      Adjust the comment describing the need for FIXME and move all the text
      to the function description.
      Signed-off-by: John Ferlan <jferlan@redhat.com>
  4. 09 December 2016, 7 commits
  5. 08 December 2016, 3 commits
  6. 07 December 2016, 2 commits
  7. 06 December 2016, 6 commits
  8. 02 December 2016, 1 commit
  9. 01 December 2016, 4 commits
    • qemuDomainAttachNetDevice: pass mq and vectors for vhost-user with multiqueue · f81b33b5
      gaohaifeng committed
      Two reasons:
      1. In the non-hotplug case we already pass them, as can be seen in the
      libvirt function qemuBuildVhostuserCommandLine.
      2. qemu uses this vectors number to initialize the MSI-X table. If we
      don't pass it, qemu uses the default value, which limits the VM to at
      most the default number of interrupts.
      Signed-off-by: gaohaifeng <gaohaifeng.gao@huawei.com>
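      For reference, the device properties involved can be sketched as
      below; the 2*N+2 sizing (two vectors per queue pair plus config and
      control) is the usual convention for multiqueue virtio-net, though the
      helper itself is purely illustrative:

        #include <stdio.h>

        /* Build the extra virtio-net-pci properties for a multiqueue
         * vhost-user interface so the guest's MSI-X table is sized for
         * all queue pairs rather than for the default vector count. */
        static void
        buildMultiqueueProps(unsigned int queues, char *buf, size_t buflen)
        {
            if (queues > 1)
                snprintf(buf, buflen, ",mq=on,vectors=%u", 2 * queues + 2);
            else
                buf[0] = '\0';
        }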
    • qemu: Prevent detaching SCSI controller used by hostdev · 655429a0
      Eric Farman committed
      Consider the following XML snippets:
      
        $ cat scsicontroller.xml
            <controller type='scsi' model='virtio-scsi' index='0'/>
        $ cat scsihostdev.xml
            <hostdev mode='subsystem' type='scsi'>
              <source>
                <adapter name='scsi_host0'/>
                <address bus='0' target='8' unit='1074151456'/>
              </source>
            </hostdev>
      
      If we create a guest that includes the contents of scsihostdev.xml,
      but forget the virtio-scsi controller described in scsicontroller.xml,
      one is silently created for us.  The same holds true when attaching
      a hostdev before the matching virtio-scsi controller.
      (See qemuDomainFindOrCreateSCSIDiskController for context.)
      
      Detaching the hostdev, followed by the controller, works well and the
      guest behaves appropriately.
      
      If we detach the virtio-scsi controller device first, any associated
      hostdevs are detached for us by the underlying virtio-scsi code (this
      is fine, since the connection is broken).  But all is not well, as the
      guest is unable to receive new virtio-scsi devices (the attach commands
      succeed, but devices never appear within the guest), nor even be
      shutdown, after this point.
      
      While this is not libvirt's problem, we can prevent falling into this
      scenario by checking if a controller is being used by any hostdev
      devices.  The same is already done for disk elements today.
      
      Applying this patch and then using the XML snippets from earlier:
      
        $ virsh detach-device guest_01 scsicontroller.xml
        error: Failed to detach device from scsicontroller.xml
        error: operation failed: device cannot be detached: device is busy
      
        $ virsh detach-device guest_01 scsihostdev.xml
        Device detached successfully
      
        $ virsh detach-device guest_01 scsicontroller.xml
        Device detached successfully
      Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
      Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
      Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
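      A sketch of the new check: before detaching a SCSI controller, scan
      the domain's hostdevs for one whose drive address references that
      controller index (structures reduced to the relevant fields):

        #include <stdbool.h>
        #include <stddef.h>

        typedef struct {
            bool is_scsi_subsys;        /* <hostdev type='scsi'> */
            unsigned int controller;    /* controller index in the address */
        } HostdevDef;

        typedef struct {
            size_t nhostdevs;
            HostdevDef **hostdevs;
        } VMDef;

        /* Mirrors the existing disk check: refuse to detach a controller
         * that still has SCSI hostdevs attached ("device is busy"). */
        static bool
        scsiControllerInUseByHostdev(const VMDef *def, unsigned int ctrl)
        {
            for (size_t i = 0; i < def->nhostdevs; i++) {
                const HostdevDef *hd = def->hostdevs[i];
                if (hd->is_scsi_subsys && hd->controller == ctrl)
                    return true;
            }
            return false;
        }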
    • qemu: assign VFIO devices to PCIe addresses when appropriate · 70249927
      Laine Stump committed
      Although nearly all host devices that are assigned to guests using
      VFIO ("<hostdev>" devices in libvirt) are physically PCI Express
      devices, until now libvirt's PCI address assignment has always
      assigned them addresses on legacy PCI controllers in the guest, even
      if the guest's machinetype has a PCIe root bus (e.g. q35 and
      aarch64/virt).
      
      This patch tries to assign them to an address on a PCIe controller
      instead, when appropriate. First we do some preliminary checks that
      might allow setting the flags without doing any extra work, and if
      those conditions aren't met (and if libvirt is running privileged so
      that it has proper permissions), we perform the (relatively) time
      consuming task of reading the device's PCI config to see if it is an
      Express device. If this is successful, the connect flags are set based
      on the result, but if we aren't able to read the PCI config (most
      likely due to the device not being present on the system at the time
      of the check) we assume it is (or will be) an Express device, since
      that is almost always the case anyway.
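      Detecting an Express device from its config space means walking the
      capability list looking for the PCI Express capability (ID 0x10); a
      simplified sketch over a raw config dump (libvirt wraps this in
      virPCIDeviceIsPCIExpress):

        #include <stddef.h>
        #include <stdint.h>

        #define PCI_STATUS           0x06  /* status register              */
        #define PCI_STATUS_CAP_LIST  0x10  /* capability list present      */
        #define PCI_CAPABILITY_LIST  0x34  /* offset of first capability   */
        #define PCI_CAP_ID_EXP       0x10  /* PCI Express capability ID    */

        /* The capability list lives beyond the 64-byte standard header,
         * which is why an unprivileged libvirtd (limited to the first 64
         * bytes of sysfs config) cannot perform this check. */
        static int
        configIsPCIExpress(const uint8_t *cfg, size_t len)
        {
            uint8_t pos;
            int guard = 48;                /* bound a malformed list */

            if (len < 256 || !(cfg[PCI_STATUS] & PCI_STATUS_CAP_LIST))
                return 0;

            pos = cfg[PCI_CAPABILITY_LIST] & 0xFC;
            while (pos >= 0x40 && pos + 1u < len && guard-- > 0) {
                if (cfg[pos] == PCI_CAP_ID_EXP)
                    return 1;
                pos = cfg[pos + 1] & 0xFC;
            }
            return 0;
        }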
    • qemu: propagate virQEMUDriver object to qemuDomainDeviceCalculatePCIConnectFlags · 9b0848d5
      Laine Stump committed
      If libvirtd is running unprivileged, it can open a device's PCI config
      data in sysfs, but can only read the first 64 bytes. But as part of
      determining whether a device is Express or legacy PCI,
      qemuDomainDeviceCalculatePCIConnectFlags() will be updated in a future
      patch to call virPCIDeviceIsPCIExpress(), which tries to read beyond
      the first 64 bytes of the PCI config data and fails with an error log
      if the read is unsuccessful.
      
      In order to avoid creating a parallel "quiet" version of
      virPCIDeviceIsPCIExpress(), this patch passes a virQEMUDriverPtr down
      through all the call chains that initialize the
      qemuDomainFillDevicePCIConnectFlagsIterData, and saves the driver
      pointer with the rest of the iterdata so that it can be used by
      qemuDomainDeviceCalculatePCIConnectFlags(). This pointer isn't used
      yet, but will be used in an upcoming patch (that detects Express vs
      legacy PCI for VFIO assigned devices) to examine driver->privileged.
  10. 29 November 2016, 2 commits