1. 31 May 2015 (3 commits)
  2. 30 April 2015 (1 commit)
  3. 28 April 2015 (4 commits)
    • virtio-scsi: Move DEFINE_VIRTIO_SCSI_FEATURES to virtio-scsi · da2f84d1
      Authored by Shannon Zhao
      So far virtio-scsi-device can't expose host features to the guest
      when using virtio-mmio, because DEFINE_VIRTIO_SCSI_FEATURES is set
      on neither the backend nor the transport.
      
      The host features belong to the backend, while virtio-scsi-pci,
      virtio-scsi-s390 and virtio-scsi-ccw set DEFINE_VIRTIO_SCSI_FEATURES
      on the transports. But the transports already have the ability to
      forward property accesses to their backend child, so moving the host
      features to the backend doesn't break backwards compatibility for
      them and makes host features work when using virtio-mmio.
      
      Move DEFINE_VIRTIO_SCSI_FEATURES to the backend, virtio-scsi. The
      transports just sync the host features from the backend.
      Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
      Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    • virtio-net: Move DEFINE_VIRTIO_NET_FEATURES to virtio-net · da3e8a23
      Authored by Shannon Zhao
      So far virtio-net-device can't expose host features to the guest
      when using virtio-mmio, because DEFINE_VIRTIO_NET_FEATURES is set on
      neither the backend nor the transport, so performance is low.
      
      The host features belong to the backend, while virtio-net-pci,
      virtio-net-s390 and virtio-net-ccw set DEFINE_VIRTIO_NET_FEATURES
      on the transports. But the transports already have the ability to
      forward property accesses to their backend child, so moving the host
      features to the backend doesn't break backwards compatibility for
      them and makes host features work when using virtio-mmio.
      
      Here we move DEFINE_VIRTIO_NET_FEATURES to the backend, virtio-net.
      The transports just sync the host features from the backend.
      Meanwhile, move virtio_net_set_config_size to virtio-net to make
      sure the config size is correct, and don't expose it.
      Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
      Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    • virtio-pci: speedup MSI-X masking and unmasking · 851c2a75
      Authored by Jason Wang
      This patch speeds up MSI-X masking and unmasking through a mapping
      between vectors and queues. With this patch there is no need to go
      through all possible virtqueues, which helps reduce the time spent
      masking or unmasking a single MSI-X vector when hundreds or even
      thousands of virtqueues are supported.
      
      Tested with an 80-queue-pair virtio-net-pci device, changing the smp
      affinity in the background while running netperf at the same time:
      
      Before the patch:
      5711.70 Gbits/sec
      After the patch:
      6830.98 Gbits/sec
      
      About a 19.6% improvement in throughput.
      
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
    • virtio: introduce vector to virtqueues mapping · e0d686bf
      Authored by Jason Wang
      Currently we traverse all virtqueues to find the subset using a
      specific vector. This is suboptimal when hundreds or even thousands
      of virtqueues are supported. So this patch introduces a method that
      transports can use to get all virtqueues using the same vector. This
      is done through QLISTs, and the number of QLISTs is queried through
      a transport-specific method. When the guest sets a vector, the
      virtqueue is linked into the corresponding list, and helpers for
      traversing the list are also introduced.
      
      The first user will be virtio-pci, which will use this to speed up
      MSI-X masking and unmasking handling.
      
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
  4. 25 April 2015 (1 commit)
    • balloon: improve error msg when adding second device · 46abb812
      Authored by Luiz Capitulino
      A VM supports only one balloon device, but due to several
      infrastructure changes the error message got messed up when trying
      to add a second device. Fix it.
      
      Before this fix
      
      Command-line:
      
      qemu-qmp: -device virtio-balloon-pci,id=balloon0: Another balloon device already registered
      qemu-qmp: -device virtio-balloon-pci,id=balloon0: Adding balloon handler failed
      qemu-qmp: -device virtio-balloon-pci,id=balloon0: Device 'virtio-balloon-pci' could not be initialized
      
      HMP:
      
      Another balloon device already registered
      Adding balloon handler failed
      Device 'virtio-balloon-pci' could not be initialized
      
      QMP:
      
      { "execute": "device_add", "arguments": { "driver": "virtio-balloon-pci", "id": "balloon0" } }
      {
      	"error": {
      		"class": "GenericError",
      		"desc": "Adding balloon handler failed"
      	}
      }
      
      After this fix
      
      Command-line:
      
      qemu-qmp: -device virtio-balloon-pci,id=balloon0: Only one balloon device is supported
      qemu-qmp: -device virtio-balloon-pci,id=balloon0: Device 'virtio-balloon-pci' could not be initialized
      
      HMP:
      
      (qemu) device_add virtio-balloon-pci,id=balloon0
      Only one balloon device is supported
      Device 'virtio-balloon-pci' could not be initialized
      (qemu)
      
      QMP:
      
      { "execute": "device_add",
                "arguments": { "driver": "virtio-balloon-pci", "id": "balloon0" } }
      {
          "error": {
              "class": "GenericError",
              "desc": "Only one balloon device is supported"
          }
      }
      Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
  5. 20 April 2015 (1 commit)
  6. 16 March 2015 (1 commit)
  7. 12 March 2015 (1 commit)
  8. 10 March 2015 (1 commit)
  9. 05 March 2015 (2 commits)
  10. 01 March 2015 (1 commit)
  11. 26 February 2015 (5 commits)
  12. 16 February 2015 (1 commit)
  13. 12 February 2015 (1 commit)
  14. 06 February 2015 (1 commit)
    • migration: Append JSON description of migration stream · 8118f095
      Authored by Alexander Graf
      One of the annoyances of the current migration format is the fact
      that it's not self-describing. In fact, it's not properly described
      at all. Code randomly scattered throughout QEMU elaborates roughly
      how to read and write a stream of bytes.
      
      We discussed an idea during KVM Forum 2013 to add a JSON description
      of the migration protocol itself to the migration stream. This patch
      adds a section after the VM_END migration end marker that describes
      what the device sections of the stream are composed of.
      
      This approach is backwards compatible with any QEMU version reading
      the stream, because QEMU simply stops reading after the VM_END
      marker and ignores any data following it.
      
      With an additional external program this allows us to decipher the
      contents of any migration stream and hopefully makes migration bugs
      easier to track down.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Amit Shah <amit.shah@redhat.com>
      Signed-off-by: Juan Quintela <quintela@redhat.com>
  15. 27 January 2015 (1 commit)
  16. 05 January 2015 (1 commit)
  17. 10 December 2014 (1 commit)
  18. 01 December 2014 (1 commit)
  19. 28 November 2014 (1 commit)
    • Fix for crash after migration in virtio-rng on bi-endian targets · db12451d
      Authored by David Gibson
      VirtIO devices now remember which endianness they're operating in,
      in order to support targets which may have guests of either
      endianness, such as powerpc. This endianness state is transferred
      in a subsection of the virtio device's information.
      
      With virtio-rng this can lead to an abort after a loadvm, hitting
      the assert() in virtio_is_big_endian(). This can be reproduced by
      doing a migrate and load from file on a bi-endian target with a
      virtio-rng device. The actual guest state isn't particularly
      important to triggering this.
      
      The cause is that virtio_rng_load_device() calls
      virtio_rng_process(), which accesses the ring and thus needs the
      endianness. However, virtio_rng_process() is called via
      virtio_load() before the subsections are loaded. Essentially, the
      ->load callback in VirtioDeviceClass should only be used for
      actually reading the device state from the stream, not for
      post-load re-initialization.
      
      This patch fixes the bug by moving the virtio_rng_process() call
      after the call to virtio_load(). Better yet would be to convert
      virtio to use vmsd and make virtio_rng_process() a post_load
      callback, but that's a bigger project for another day.
      
      This is a bugfix and should be considered for the 2.2 branch.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Reviewed-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
      Message-id: 1417067290-20715-1-git-send-email-david@gibson.dropbear.id.au
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  20. 04 November 2014 (1 commit)
  21. 02 November 2014 (2 commits)
    • hw/virtio/vring/event_idx: fix the vring_avail_event error · a3614c65
      Authored by Bin Wu
      The event idx in virtio is an effective way to reduce the number of
      interrupts and exits of the guest. When the guest puts a request
      into the virtio ring, it doesn't exit immediately to notify the
      backend. Instead, the guest checks the "avail" event idx to decide
      whether a notification is needed.
      
      In virtqueue_pop, when a request is popped, the current avail event
      idx should be set to the value of vq->last_avail_idx.
      Signed-off-by: Bin Wu <wu.wubin@huawei.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
    • virtio-pci: fix migration for pci bus master · 68a27b20
      Authored by Michael S. Tsirkin
      Current support for bus master (clearing the OK bit), together with
      the need to support guests which do not enable PCI bus mastering,
      leads to extra state in the VIRTIO_PCI_FLAG_BUS_MASTER_BUG bit,
      which isn't robust in the case of cross-version migration when
      guests use the device before setting DRIVER_OK.
      
      Rip out this code and replace it:
      -   Modern QEMU doesn't need VIRTIO_PCI_FLAG_BUS_MASTER_BUG,
          so just drop it for the latest machine type.
      -   For compat machine types, set PCI_COMMAND if DRIVER_OK
          is set.
      
      As this is needed in 2.1 for both pc and ppc, move the PC_COMPAT
      macros from pc.h to a new common header.
      
      Cc: Greg Kurz <gkurz@linux.vnet.ibm.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Reviewed-by: Alexander Graf <agraf@suse.de>
  22. 30 October 2014 (1 commit)
  23. 23 October 2014 (1 commit)
  24. 20 October 2014 (1 commit)
    • hw: Convert from BlockDriverState to BlockBackend, mostly · 4be74634
      Authored by Markus Armbruster
      Device models should access their block backends only through the
      block-backend.h API.  Convert them, and drop direct includes of
      inappropriate headers.
      
      Just four uses of BlockDriverState are left:
      
      * The Xen paravirtual block device backend (xen_disk.c) opens images
        itself when set up via xenbus, bypassing blockdev.c.  I figure it
        should go through qmp_blockdev_add() instead.
      
      * Device model "usb-storage" prompts for keys.  No other device model
        does, and this one probably shouldn't do it, either.
      
      * ide_issue_trim_cb() uses bdrv_aio_discard() instead of
        blk_aio_discard() because it fishes its backend out of a BlockAIOCB,
        which has only the BlockDriverState.
      
      * PC87312State has an unused BlockDriverState[] member.
      
      The next two commits take care of the latter two.
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  25. 15 October 2014 (4 commits)
  26. 30 September 2014 (1 commit)