1. 04 Jun, 2015 (2 commits)
  2. 01 Jun, 2015 (3 commits)
  3. 31 May, 2015 (7 commits)
  4. 30 Apr, 2015 (1 commit)
  5. 28 Apr, 2015 (4 commits)
    • virtio-scsi: Move DEFINE_VIRTIO_SCSI_FEATURES to virtio-scsi · da2f84d1
      Shannon Zhao committed
      So far virtio-scsi-device can't expose host features to the guest
      when used over virtio-mmio, because DEFINE_VIRTIO_SCSI_FEATURES is
      set on neither the backend nor the transport.
      
      The host features belong to the backend, yet virtio-scsi-pci,
      virtio-scsi-s390 and virtio-scsi-ccw set DEFINE_VIRTIO_SCSI_FEATURES
      on the transports. Those transports already forward property accesses
      to the backend child, so moving the host features to the backend does
      not break backwards compatibility for them and makes host features
      work over virtio-mmio as well.
      
      Move DEFINE_VIRTIO_SCSI_FEATURES to the backend virtio-scsi; the
      transports simply sync the host features from the backend (a
      simplified sketch follows this entry).
      Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
      Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      da2f84d1
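      
      A minimal sketch of the idea, assuming QEMU's usual property macros
      (DEFINE_PROP_BIT and friends); the exact state type and backing
      field shown here are illustrative, not the literal patch:
      
      /* Illustrative only: the feature bits become bit properties backed by
       * a host_features field on the backend device, so transports that
       * forward property accesses to the backend child pick them up. */
      #define DEFINE_VIRTIO_SCSI_FEATURES(_state, _feature_field)          \
          DEFINE_PROP_BIT("hotplug", _state, _feature_field,               \
                          VIRTIO_SCSI_F_HOTPLUG, true),                    \
          DEFINE_PROP_BIT("param_change", _state, _feature_field,          \
                          VIRTIO_SCSI_F_CHANGE, true)
      
      static Property virtio_scsi_properties[] = {
          /* ...existing virtio-scsi backend properties... */
          DEFINE_VIRTIO_SCSI_FEATURES(VirtIOSCSI, host_features),
          DEFINE_PROP_END_OF_LIST(),
      };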
    • virtio-net: Move DEFINE_VIRTIO_NET_FEATURES to virtio-net · da3e8a23
      Shannon Zhao committed
      So far virtio-net-device can't expose host features to the guest
      when used over virtio-mmio, because DEFINE_VIRTIO_NET_FEATURES is
      set on neither the backend nor the transport, so performance is low.
      
      The host features belong to the backend, yet virtio-net-pci,
      virtio-net-s390 and virtio-net-ccw set DEFINE_VIRTIO_NET_FEATURES
      on the transports. Those transports already forward property accesses
      to the backend child, so moving the host features to the backend does
      not break backwards compatibility for them and makes host features
      work over virtio-mmio as well.
      
      Move DEFINE_VIRTIO_NET_FEATURES to the backend virtio-net; the
      transports simply sync the host features from the backend. Also move
      virtio_net_set_config_size into virtio-net so the config size is
      always correct, and stop exposing it (see the sketch after this
      entry).
      Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
      Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      da3e8a23
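      
      The config-size move matters because the virtio-net config space
      grows with the features that are offered. A hedged sketch of that
      dependency (virtio_net_set_config_size is the real helper; the
      function below and its exact feature checks are illustrative):
      
      /* Illustrative sketch: size the config space after the host features
       * are known, so fields guarded by unoffered features are not exposed. */
      static void example_set_config_size(VirtIONet *n, uint32_t host_features)
      {
          size_t size = endof(struct virtio_net_config, mac); /* always present */
      
          if (host_features & (1 << VIRTIO_NET_F_STATUS)) {
              size = endof(struct virtio_net_config, status);
          }
          if (host_features & (1 << VIRTIO_NET_F_MQ)) {
              size = endof(struct virtio_net_config, max_virtqueue_pairs);
          }
          n->config_size = size;
      }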
    • virtio-pci: speedup MSI-X masking and unmasking · 851c2a75
      Jason Wang committed
      This patch speeds up MSI-X masking and unmasking through the mapping
      between vectors and queues. With it there is no need to go through
      all possible virtqueues, which reduces the time spent masking or
      unmasking a single vector when hundreds or even thousands of
      virtqueues are supported (a before/after sketch follows this entry).
      
      Tested with an 80-queue-pair virtio-net-pci device, changing the SMP
      affinity in the background while running netperf at the same time:
      
      Before the patch:
      5711.70 Gbits/sec
      After the patch:
      6830.98 Gbits/sec
      
      About a 19.6% improvement in throughput.
      
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      851c2a75
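      
      A hedged before/after sketch of the masking path (helper and constant
      names follow QEMU conventions and the companion mapping commit, but
      the surrounding code is simplified, not the literal patch):
      
      /* Illustrative fragment only. */
      static void example_mask_unmask(VirtIODevice *vdev, unsigned vector)
      {
          VirtQueue *vq;
          int queue_no;
      
          /* Before: walk every possible virtqueue and filter by vector. */
          for (queue_no = 0; queue_no < VIRTIO_QUEUE_MAX; queue_no++) {
              if (!virtio_queue_get_num(vdev, queue_no) ||
                  virtio_queue_vector(vdev, queue_no) != vector) {
                  continue;
              }
              /* ...mask or unmask the guest notifier for this queue... */
          }
      
          /* After: visit only the queues already linked to this vector,
           * using the per-vector lists from the mapping commit (next
           * entry in this log). */
          for (vq = virtio_vector_first_queue(vdev, vector); vq;
               vq = virtio_vector_next_queue(vq)) {
              /* ...mask or unmask the guest notifier for this queue... */
          }
      }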
    • virtio: introduce vector to virtqueues mapping · e0d686bf
      Jason Wang committed
      Currently we traverse all virtqueues to find the subset that uses a
      specific vector. This is suboptimal once hundreds or even thousands
      of virtqueues are supported. This patch introduces a method the
      transport can use to get all virtqueues sharing a given vector. It
      is implemented with QLISTs, and the number of QLISTs is queried
      through a transport-specific method. When the guest sets a vector,
      the virtqueue is linked into the corresponding list; helpers for
      traversing the list are also introduced (a minimal standalone sketch
      follows this entry).
      
      The first user will be virtio-pci, which uses this to speed up
      MSI-X masking and unmasking handling.
      
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      e0d686bf
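      
      A minimal standalone sketch of the data structure, written against
      <sys/queue.h> rather than QEMU's QLIST macros so it compiles on its
      own; all names below are illustrative, not QEMU's:
      
      /* Each vector owns a list head, each virtqueue carries a list entry,
       * and setting a vector links the queue into the matching list. */
      #include <stdio.h>
      #include <sys/queue.h>
      
      #define NUM_VECTORS 4
      #define NO_VECTOR   0xffffu
      
      struct virtqueue {
          int index;
          unsigned vector;
          LIST_ENTRY(virtqueue) node;              /* link in the per-vector list */
      };
      
      LIST_HEAD(vq_list, virtqueue);
      static struct vq_list vector_queues[NUM_VECTORS];  /* one list per vector */
      
      /* Called when the guest assigns an interrupt vector to a queue. */
      static void set_vector(struct virtqueue *vq, unsigned vector)
      {
          if (vq->vector != NO_VECTOR) {
              LIST_REMOVE(vq, node);               /* unlink from the old list */
          }
          vq->vector = vector;
          if (vector != NO_VECTOR && vector < NUM_VECTORS) {
              LIST_INSERT_HEAD(&vector_queues[vector], vq, node);
          }
      }
      
      int main(void)
      {
          struct virtqueue q[3] = {
              { .index = 0, .vector = NO_VECTOR },
              { .index = 1, .vector = NO_VECTOR },
              { .index = 2, .vector = NO_VECTOR },
          };
          struct virtqueue *vq;
      
          set_vector(&q[0], 1);
          set_vector(&q[1], 1);
          set_vector(&q[2], 3);
      
          /* Masking vector 1 now only touches the queues on its list. */
          LIST_FOREACH(vq, &vector_queues[1], node) {
              printf("vector 1 -> queue %d\n", vq->index);
          }
          return 0;
      }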
  6. 25 Apr, 2015 (1 commit)
    • balloon: improve error msg when adding second device · 46abb812
      Luiz Capitulino committed
      A VM supports only one balloon device, but due to several
      infrastructure changes the error message printed when trying to add
      a second device got messed up. Fix it (a sketch of the check follows
      this entry).
      
      Before this fix
      
      Command-line:
      
      qemu-qmp: -device virtio-balloon-pci,id=balloon0: Another balloon device already registered
      qemu-qmp: -device virtio-balloon-pci,id=balloon0: Adding balloon handler failed
      qemu-qmp: -device virtio-balloon-pci,id=balloon0: Device 'virtio-balloon-pci' could not be initialized
      
      HMP:
      
      Another balloon device already registered
      Adding balloon handler failed
      Device 'virtio-balloon-pci' could not be initialized
      
      QMP:
      
      { "execute": "device_add", "arguments": { "driver": "virtio-balloon-pci", "id": "balloon0" } }
      {
      	"error": {
      		"class": "GenericError",
      		"desc": "Adding balloon handler failed"
      	}
      }
      
      After this fix
      
      Command-line:
      
      qemu-qmp: -device virtio-balloon-pci,id=balloon0: Only one balloon device is supported
      qemu-qmp: -device virtio-balloon-pci,id=balloon0: Device 'virtio-balloon-pci' could not be initialized
      
      HMP:
      
      (qemu) device_add virtio-balloon-pci,id=balloon0
      Only one balloon device is supported
      Device 'virtio-balloon-pci' could not be initialized
      (qemu)
      
      QMP:
      
      { "execute": "device_add",
                "arguments": { "driver": "virtio-balloon-pci", "id": "balloon0" } }
      {
          "error": {
              "class": "GenericError",
              "desc": "Only one balloon device is supported"
          }
      }
      Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      46abb812
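      
      A hedged sketch of the shape of the fix (error_setg() is QEMU's
      standard error-reporting API; the function and variable names below
      are simplified and the actual patch may report the error elsewhere):
      
      /* Illustrative only: reject a second balloon device with one clear,
       * user-facing message instead of a chain of internal errors. */
      static QEMUBalloonEvent *balloon_event_fn;
      static void *balloon_opaque;
      
      static int example_add_balloon_handler(QEMUBalloonEvent *event_fn,
                                             void *opaque, Error **errp)
      {
          if (balloon_event_fn) {
              error_setg(errp, "Only one balloon device is supported");
              return -1;
          }
          balloon_event_fn = event_fn;
          balloon_opaque = opaque;
          return 0;
      }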
  7. 20 Apr, 2015 (1 commit)
  8. 16 Mar, 2015 (1 commit)
  9. 12 Mar, 2015 (1 commit)
  10. 10 Mar, 2015 (1 commit)
  11. 05 Mar, 2015 (2 commits)
  12. 01 Mar, 2015 (1 commit)
  13. 26 Feb, 2015 (5 commits)
  14. 16 Feb, 2015 (1 commit)
  15. 12 Feb, 2015 (1 commit)
  16. 06 Feb, 2015 (1 commit)
    • migration: Append JSON description of migration stream · 8118f095
      Alexander Graf committed
      One of the annoyances of the current migration format is that it is
      not self-describing. In fact, it is hardly described at all: code
      scattered throughout QEMU spells out only roughly how to read and
      write the stream of bytes.
      
      During KVM Forum 2013 we discussed the idea of adding a JSON
      description of the migration protocol itself to the migration
      stream. This patch adds a section after the VM_END migration end
      marker that describes what the device sections of the stream are
      composed of (a hedged sketch of the writer side follows this entry).
      
      This approach is backwards compatible with any QEMU version reading
      the stream, because QEMU simply stops reading after the VM_END
      marker and ignores any data following it.
      
      With an additional external program this lets us decipher the
      contents of any migration stream and should make migration bugs
      easier to track down.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Amit Shah <amit.shah@redhat.com>
      Signed-off-by: Juan Quintela <quintela@redhat.com>
      8118f095
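      
      A hedged sketch of the writer side, assuming QEMU's qemu_put_*
      helpers; the section-type constant and framing shown are an
      approximation of the stream layout, not a specification:
      
      /* Illustrative only: after the last device section and the VM_END
       * marker, append a length-prefixed JSON blob describing the stream.
       * Older readers stop at VM_END and never see it. */
      static void example_append_vmdesc(QEMUFile *f, const char *json,
                                        uint32_t len)
      {
          qemu_put_byte(f, QEMU_VM_VMDESCRIPTION);  /* assumed section type */
          qemu_put_be32(f, len);                    /* length of the JSON blob */
          qemu_put_buffer(f, (const uint8_t *)json, len);
      }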
  17. 27 Jan, 2015 (1 commit)
  18. 05 Jan, 2015 (1 commit)
  19. 10 Dec, 2014 (1 commit)
  20. 01 Dec, 2014 (1 commit)
  21. 28 Nov, 2014 (1 commit)
    • Fix for crash after migration in virtio-rng on bi-endian targets · db12451d
      David Gibson committed
      VirtIO devices now remember the endianness they are operating in, so
      that they can support targets, such as powerpc, whose guests may be
      of either endianness. This endianness state is transferred in a
      subsection of the virtio device's information.
      
      With virtio-rng this can lead to an abort after a loadvm, hitting the
      assert() in virtio_is_big_endian(). It can be reproduced by migrating
      to a file and loading it back on a bi-endian target with a virtio-rng
      device; the actual guest state isn't particularly important to
      triggering it.
      
      The cause is that virtio_rng_load_device() calls virtio_rng_process(),
      which accesses the ring and thus needs the endianness. However,
      virtio_rng_process() is called via virtio_load() before it loads the
      subsections. Essentially, the ->load callback in VirtioDeviceClass
      should only be used for actually reading the device state from the
      stream, not for post-load re-initialization.
      
      This patch fixes the bug by moving the virtio_rng_process() call
      after the call to virtio_load() (see the sketch after this entry).
      Better yet would be to convert virtio to use vmsd and make
      virtio_rng_process() a post_load callback, but that's a bigger
      project for another day.
      
      This is a bugfix and should be considered for the 2.2 branch.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Reviewed-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
      Message-id: 1417067290-20715-1-git-send-email-david@gibson.dropbear.id.au
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      db12451d
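      
      A hedged sketch of the reordering (the function names come from the
      commit message itself; signatures and error handling are simplified):
      
      /* Illustrative only: run virtio_rng_process() after virtio_load()
       * has read the subsections, including the endianness state, instead
       * of calling it from the ->load callback. */
      static int virtio_rng_load(QEMUFile *f, void *opaque, int version_id)
      {
          VirtIORNG *vrng = opaque;
          int ret = virtio_load(VIRTIO_DEVICE(vrng), f, version_id);
      
          if (ret) {
              return ret;
          }
          virtio_rng_process(vrng);  /* ring access now sees the right endianness */
          return 0;
      }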
  22. 04 Nov, 2014 (1 commit)
  23. 02 Nov, 2014 (1 commit)