1. 11 Jun, 2015: 8 commits
  2. 10 Jun, 2015: 1 commit
  3. 04 Jun, 2015: 3 commits
  4. 01 Jun, 2015: 3 commits
  5. 31 May, 2015: 7 commits
  6. 30 Apr, 2015: 1 commit
  7. 28 Apr, 2015: 4 commits
    • virtio-scsi: Move DEFINE_VIRTIO_SCSI_FEATURES to virtio-scsi · da2f84d1
      Committed by Shannon Zhao
      So far virtio-scsi-device can't expose host features to the guest when
      using virtio-mmio, because DEFINE_VIRTIO_SCSI_FEATURES is set on neither
      the backend nor the transport.
      
      The host features belong to the backend, yet virtio-scsi-pci,
      virtio-scsi-s390 and virtio-scsi-ccw set DEFINE_VIRTIO_SCSI_FEATURES on
      the transports. Those transports already forward property accesses to
      the backend child, so moving the host features to the backend does not
      break backwards compatibility for them, and it makes host features work
      when using virtio-mmio.
      
      Move DEFINE_VIRTIO_SCSI_FEATURES to the backend virtio-scsi. The
      transports just sync the host features from the backend (a simplified
      standalone sketch of this arrangement follows this entry).
      Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
      Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      da2f84d1
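      The following is a minimal, standalone C sketch (not QEMU's actual code)
      of the idea in this commit: the feature bits live in the backend device,
      and a transport copies them from its backend child instead of defining
      its own DEFINE_VIRTIO_SCSI_FEATURES-style property list. All type and
      field names here are invented for the example.

      #include <stdint.h>
      #include <stdio.h>

      enum {                        /* hypothetical feature bit numbers */
          F_HOTPLUG = 0,
          F_CHANGE  = 1,
      };

      typedef struct ScsiBackend {
          uint32_t host_features;   /* owned by the backend (virtio-scsi) */
      } ScsiBackend;

      typedef struct Transport {
          ScsiBackend *backend;     /* backend child (virtio-scsi-device) */
          uint32_t host_features;   /* what the guest-visible bus exposes */
      } Transport;

      /* The transport no longer defines the feature properties itself; it
       * just syncs them from the backend when the device is plugged. */
      static void transport_plug(Transport *t, ScsiBackend *b)
      {
          t->backend = b;
          t->host_features = b->host_features;   /* sync from backend */
      }

      int main(void)
      {
          ScsiBackend scsi = { .host_features = (1u << F_HOTPLUG) | (1u << F_CHANGE) };
          Transport mmio = { 0 };   /* works for virtio-mmio as well as PCI */

          transport_plug(&mmio, &scsi);
          printf("guest sees host features 0x%x\n", mmio.host_features);
          return 0;
      }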
    • virtio-net: Move DEFINE_VIRTIO_NET_FEATURES to virtio-net · da3e8a23
      Committed by Shannon Zhao
      So far virtio-net-device can't expose host features to the guest when
      using virtio-mmio, because DEFINE_VIRTIO_NET_FEATURES is set on neither
      the backend nor the transport. As a result, performance is low.
      
      The host features belong to the backend, yet virtio-net-pci,
      virtio-net-s390 and virtio-net-ccw set DEFINE_VIRTIO_NET_FEATURES on
      the transports. Those transports already forward property accesses to
      the backend child, so moving the host features to the backend does not
      break backwards compatibility for them, and it makes host features work
      when using virtio-mmio.
      
      Here we move DEFINE_VIRTIO_NET_FEATURES to the backend virtio-net. The
      transports just sync the host features from the backend. Meanwhile, move
      virtio_net_set_config_size to virtio-net so that the config size is
      always correct, and do not expose it (see the standalone config-size
      sketch after this entry).
      Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
      Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      da3e8a23
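      The sketch below is a standalone illustration (not QEMU's actual
      virtio_net_set_config_size) of why the config size belongs in the
      backend: fields at the end of the config space only exist when the
      matching host feature bit is offered, and the backend is the component
      that knows its own features. Feature names and layout are invented.

      #include <stdint.h>
      #include <stddef.h>
      #include <stdio.h>

      #define F_MAC    (1u << 0)   /* hypothetical: config contains a MAC field     */
      #define F_STATUS (1u << 1)   /* hypothetical: config contains a status word    */
      #define F_MQ     (1u << 2)   /* hypothetical: config contains max queue pairs  */

      struct net_config {
          uint8_t  mac[6];
          uint16_t status;
          uint16_t max_virtqueue_pairs;
      };

      /* The backend sizes the config space from its own host features; the
       * transport no longer needs to know or expose this detail. */
      static size_t net_config_size(uint32_t host_features)
      {
          if (host_features & F_MQ) {
              return sizeof(struct net_config);
          }
          if (host_features & F_STATUS) {
              return offsetof(struct net_config, max_virtqueue_pairs);
          }
          return offsetof(struct net_config, status);
      }

      int main(void)
      {
          printf("config size without MQ: %zu\n", net_config_size(F_MAC | F_STATUS));
          printf("config size with MQ:    %zu\n", net_config_size(F_MAC | F_STATUS | F_MQ));
          return 0;
      }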
    • virtio-pci: speedup MSI-X masking and unmasking · 851c2a75
      Committed by Jason Wang
      This patch tries to speed up MSI-X masking and unmasking through a
      mapping between vectors and queues. With this patch there is no need to
      go through all possible virtqueues, which helps reduce the time spent
      masking or unmasking a single MSI-X vector when hundreds or even
      thousands of virtqueues are supported.
      
      Tested with an 80-queue-pair virtio-net-pci device by changing the SMP
      affinity in the background while running netperf at the same time:
      
      Before the patch:
      5711.70 Gbits/sec
      After the patch:
      6830.98 Gbits/sec
      
      About a 19.6% improvement in throughput (a standalone sketch of the
      per-vector lookup follows this entry).
      
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      851c2a75
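      A standalone C sketch (not QEMU's code) contrasting the old and new
      lookup when one MSI-X vector is masked or unmasked; the structures and
      names are invented, only the complexity argument mirrors the commit.

      #include <stdint.h>
      #include <stdio.h>

      #define NUM_QUEUES  1000
      #define NUM_VECTORS 64

      typedef struct VirtQueue {
          int index;
          uint16_t vector;
          struct VirtQueue *next_same_vector;  /* per-vector chain (new approach) */
      } VirtQueue;

      static VirtQueue queues[NUM_QUEUES];
      static VirtQueue *vector_queues[NUM_VECTORS]; /* head of each per-vector list */

      /* Old approach: scan every virtqueue to find the ones using this vector. */
      static int mask_vector_slow(uint16_t vector)
      {
          int touched = 0;
          for (int i = 0; i < NUM_QUEUES; i++) {
              if (queues[i].vector == vector) {
                  touched++;           /* mask/unmask work would happen here */
              }
          }
          return touched;
      }

      /* New approach: walk only the queues already linked under this vector. */
      static int mask_vector_fast(uint16_t vector)
      {
          int touched = 0;
          for (VirtQueue *vq = vector_queues[vector]; vq; vq = vq->next_same_vector) {
              touched++;               /* mask/unmask work would happen here */
          }
          return touched;
      }

      int main(void)
      {
          for (int i = 0; i < NUM_QUEUES; i++) {
              queues[i].index = i;
              queues[i].vector = i % NUM_VECTORS;
              queues[i].next_same_vector = vector_queues[queues[i].vector];
              vector_queues[queues[i].vector] = &queues[i];
          }
          printf("slow: %d queues touched\n", mask_vector_slow(3));
          printf("fast: %d queues touched\n", mask_vector_fast(3));
          return 0;
      }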
    • virtio: introduce vector to virtqueues mapping · e0d686bf
      Committed by Jason Wang
      Currently we traverse all virtqueues to find the subset that uses a
      specific vector. This is suboptimal once hundreds or even thousands of
      virtqueues are supported, so this patch introduces a method a transport
      can use to get all virtqueues that use the same vector. This is done
      through QLISTs, with the number of QLISTs queried through a
      transport-specific method. When the guest sets vectors, the virtqueue is
      linked into the corresponding list, and helpers for traversing the list
      are also introduced (a standalone sketch of such a mapping follows this
      entry).
      
      The first user will be virtio-pci, which will use this to speed up
      MSI-X masking and unmasking.
      
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      e0d686bf
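      A standalone sketch of the vector-to-virtqueue mapping using the BSD
      <sys/queue.h> list macros, which QEMU's QLIST macros mirror. The hook
      that links a virtqueue when the guest assigns it a vector, and the
      traversal loop, use invented names for illustration.

      #include <sys/queue.h>
      #include <stdint.h>
      #include <stdio.h>

      #define NUM_VECTORS 4

      typedef struct VirtQueue {
          int index;
          uint16_t vector;
          LIST_ENTRY(VirtQueue) node;          /* link within one vector's list */
      } VirtQueue;

      static LIST_HEAD(, VirtQueue) vector_queues[NUM_VECTORS];

      /* Called when the guest assigns an MSI-X vector to a virtqueue: re-link
       * the queue under its (possibly new) vector so later lookups touch only
       * the queues on that vector instead of every queue. */
      static void virtqueue_set_vector(VirtQueue *vq, uint16_t vector)
      {
          if (vq->vector < NUM_VECTORS) {
              LIST_REMOVE(vq, node);
          }
          vq->vector = vector;
          if (vector < NUM_VECTORS) {
              LIST_INSERT_HEAD(&vector_queues[vector], vq, node);
          }
      }

      int main(void)
      {
          VirtQueue vqs[3] = { { .index = 0, .vector = NUM_VECTORS },
                               { .index = 1, .vector = NUM_VECTORS },
                               { .index = 2, .vector = NUM_VECTORS } };
          for (int i = 0; i < NUM_VECTORS; i++) {
              LIST_INIT(&vector_queues[i]);
          }
          virtqueue_set_vector(&vqs[0], 1);
          virtqueue_set_vector(&vqs[1], 1);
          virtqueue_set_vector(&vqs[2], 2);

          VirtQueue *vq;
          LIST_FOREACH(vq, &vector_queues[1], node) {   /* traverse one vector */
              printf("vector 1 -> virtqueue %d\n", vq->index);
          }
          return 0;
      }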
  8. 25 Apr, 2015: 1 commit
    • balloon: improve error msg when adding second device · 46abb812
      Committed by Luiz Capitulino
      A VM supports only one balloon device, but due to several infrastructure
      changes the error message got messed up when trying to add a second
      device. Fix it (a standalone sketch of the improved reporting follows
      this entry).
      
      Before this fix
      
      Command-line:
      
      qemu-qmp: -device virtio-balloon-pci,id=balloon0: Another balloon device already registered
      qemu-qmp: -device virtio-balloon-pci,id=balloon0: Adding balloon handler failed
      qemu-qmp: -device virtio-balloon-pci,id=balloon0: Device 'virtio-balloon-pci' could not be initialized
      
      HMP:
      
      Another balloon device already registered
      Adding balloon handler failed
      Device 'virtio-balloon-pci' could not be initialized
      
      QMP:
      
      { "execute": "device_add", "arguments": { "driver": "virtio-balloon-pci", "id": "balloon0" } }
      {
      	"error": {
      		"class": "GenericError",
      		"desc": "Adding balloon handler failed"
      	}
      }
      
      After this fix
      
      Command-line:
      
      qemu-qmp: -device virtio-balloon-pci,id=balloon0: Only one balloon device is supported
      qemu-qmp: -device virtio-balloon-pci,id=balloon0: Device 'virtio-balloon-pci' could not be initialized
      
      HMP:
      
      (qemu) device_add virtio-balloon-pci,id=balloon0
      Only one balloon device is supported
      Device 'virtio-balloon-pci' could not be initialized
      (qemu)
      
      QMP:
      
      { "execute": "device_add",
                "arguments": { "driver": "virtio-balloon-pci", "id": "balloon0" } }
      {
          "error": {
              "class": "GenericError",
              "desc": "Only one balloon device is supported"
          }
      }
      Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      46abb812
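      A minimal standalone sketch (invented names, not QEMU's actual code) of
      the reporting change: the realize path emits one clear message, "Only
      one balloon device is supported", instead of surfacing the internal
      "Adding balloon handler failed" detail on top of it.

      #include <stdbool.h>
      #include <stdio.h>

      static bool balloon_registered;

      /* Before the fix, a low-level helper failed and the caller stacked a
       * second, less useful message on top. After the fix, the only realistic
       * failure cause is checked up front and reported once. */
      static bool balloon_device_realize(char errmsg[], size_t len)
      {
          if (balloon_registered) {
              snprintf(errmsg, len, "Only one balloon device is supported");
              return false;
          }
          balloon_registered = true;
          return true;
      }

      int main(void)
      {
          char err[128];

          balloon_device_realize(err, sizeof(err));          /* first device: ok      */
          if (!balloon_device_realize(err, sizeof(err))) {   /* second device: fails  */
              fprintf(stderr, "%s\n", err);
              fprintf(stderr, "Device 'virtio-balloon-pci' could not be initialized\n");
          }
          return 0;
      }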
  9. 20 Apr, 2015: 1 commit
  10. 16 Mar, 2015: 1 commit
  11. 12 Mar, 2015: 1 commit
  12. 10 Mar, 2015: 1 commit
  13. 05 Mar, 2015: 2 commits
  14. 01 Mar, 2015: 1 commit
  15. 26 Feb, 2015: 5 commits