  1. 28 Sep 2012, 1 commit
    • virtio: Introduce virtqueue_get_avail_bytes() · 0d8d7690
      Committed by Amit Shah
      The current virtqueue_avail_bytes() is oddly named, and checks if a
      particular number of bytes are available in a vq.  A better API is to
      fetch the number of bytes available in the vq, and let the caller do
      what's interesting with the numbers.
      
      Introduce virtqueue_get_avail_bytes(), which returns the number of
      bytes available for both in and out buffers.  virtqueue_avail_bytes()
      is made a wrapper over this new function (a sketch follows below).
      Signed-off-by: Amit Shah <amit.shah@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      0d8d7690
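      A minimal sketch of the change described above, assuming the
      signatures of hw/virtio.h from that era (the exact comparison
      semantics in the tree may differ):

        /* New API: report how many bytes are available for in and out
         * buffers; callers decide what to do with the numbers. */
        void virtqueue_get_avail_bytes(VirtQueue *vq,
                                       unsigned int *in_bytes,
                                       unsigned int *out_bytes);

        /* The old predicate becomes a thin wrapper over the new getter. */
        int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
                                  unsigned int out_bytes)
        {
            unsigned int in_total, out_total;

            virtqueue_get_avail_bytes(vq, &in_total, &out_total);
            return in_bytes <= in_total && out_bytes <= out_total;
        }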
  2. 07 Aug 2012, 1 commit
  3. 17 Jul 2012, 1 commit
  4. 12 Jul 2012, 2 commits
  5. 22 May 2012, 1 commit
  6. 19 Apr 2012, 1 commit
  7. 22 Feb 2012, 1 commit
  8. 29 Nov 2011, 1 commit
  9. 17 Sep 2011, 1 commit
  10. 12 Sep 2011, 1 commit
  11. 05 Aug 2011, 1 commit
  12. 05 Jul 2011, 1 commit
  13. 12 Jun 2011, 1 commit
  14. 29 Mar 2011, 1 commit
    • virtio-pci: fix bus master work around on load · 89c473fd
      Committed by Michael S. Tsirkin
      Commit c81131db detects old guests by comparing virtio and PCI
      status.  It attempts to do this on load as well, but the load_config
      callback in a binding is invoked too early, so the virtio status
      isn't set yet.
      
      We could add yet another callback to the binding, to invoke after
      load, but it seems easier to reuse the existing vmstate callback (a
      sketch follows below).
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Cc: Alexander Graf <agraf@suse.de>
      89c473fd
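      A hedged sketch of the fix described above; the handler and field
      names follow the QEMU tree of that era but are assumptions, not code
      copied from the commit:

        /* Runs from the virtio core's vmstate callback after load, when
         * both the restored PCI config and the restored virtio status
         * are finally visible. */
        static void virtio_pci_vmstate_change(void *opaque, bool running)
        {
            VirtIOPCIProxy *proxy = opaque;

            if (running &&
                (proxy->vdev->status & VIRTIO_CONFIG_S_DRIVER_OK) &&
                !(proxy->pci_dev.config[PCI_COMMAND] & PCI_COMMAND_MASTER)) {
                /* Old guest: driver is up but never enabled bus
                 * mastering; re-apply the work around from c81131db. */
            }
        }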
  15. 21 Mar 2011, 1 commit
  16. 02 Feb 2011, 1 commit
  17. 10 Jan 2011, 2 commits
    • virtio-pci: Use ioeventfd for virtqueue notify · 25db9ebe
      Committed by Stefan Hajnoczi
      Virtqueue notify is currently handled synchronously in userspace virtio.  This
      prevents the vcpu from executing guest code while hardware emulation code
      handles the notify.
      
      On systems that support KVM, the ioeventfd mechanism can be used to make
      virtqueue notify a lightweight exit by deferring hardware emulation to the
      iothread and allowing the VM to continue execution.  This model is similar to
      how vhost receives virtqueue notifies.
      
      The result of this change is improved performance for userspace virtio devices.
      Virtio-blk throughput increases especially for multithreaded scenarios and
      virtio-net transmit throughput increases substantially.
      
      Some virtio devices are known to have guest drivers which expect a notify to be
      processed synchronously and spin waiting for completion.
      For virtio-net, this also seems to interact with the guest stack in
      strange ways, so TCP throughput for small message sizes (~200 bytes)
      is harmed.  Only enable ioeventfd for virtio-blk for now.
      
      Care must be taken not to interfere with vhost-net, which uses host
      notifiers.  If the set_host_notifier() API is used by a device
      virtio-pci will disable virtio-ioeventfd and let the device deal with
      host notifiers as it wishes.
      
      Finally, there used to be a limit of 6 KVM io bus devices inside the
      kernel.  On such a kernel, don't use ioeventfd for virtqueue host
      notification since the limit is reached too easily.  This ensures that
      existing vhost-net setups (which always use ioeventfd) have ioeventfds
      available so they can continue to work.
      
      After migration and on VM change state (running/paused),
      virtio-ioeventfd will enable/disable itself (see the sketch after
      this entry):
      
       * VIRTIO_CONFIG_S_DRIVER_OK -> enable virtio-ioeventfd
       * !VIRTIO_CONFIG_S_DRIVER_OK -> disable virtio-ioeventfd
       * virtio_pci_set_host_notifier() -> disable virtio-ioeventfd
       * vm_change_state(running=0) -> disable virtio-ioeventfd
       * vm_change_state(running=1) -> enable virtio-ioeventfd
      Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      25db9ebe
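      The rules above collapse into a single predicate.  A sketch, using
      the flag and field names this commit is commonly associated with
      (treat them as assumptions here):

        /* ioeventfd is wanted only when the device is configured for it,
         * no device (e.g. vhost-net via set_host_notifier()) has claimed
         * the host notifier, the VM is running, and the guest driver has
         * signalled DRIVER_OK. */
        static bool virtio_pci_ioeventfd_wanted(VirtIOPCIProxy *proxy)
        {
            return (proxy->flags & VIRTIO_PCI_FLAG_USE_IOEVENTFD) &&
                   !proxy->ioeventfd_disabled &&
                   vm_running &&
                   (proxy->vdev->status & VIRTIO_CONFIG_S_DRIVER_OK);
        }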
    • virtio: move vmstate change tracking to core · 85cf2a8d
      Committed by Michael S. Tsirkin
      Move VM state change tracking from virtio-net to virtio.c, as it is
      going to be used by virtio-blk and virtio-pci for the ioeventfd
      support (a sketch follows below).
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      85cf2a8d
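      A sketch of what moving the tracking into the core amounts to,
      following the VMChangeStateHandler signature of that era (the
      binding callback name is an assumption):

        static void virtio_vmstate_change(void *opaque, int running,
                                          int reason)
        {
            VirtIODevice *vdev = opaque;

            /* Forward the event to the transport binding (pci, s390). */
            if (vdev->binding->vmstate_change) {
                vdev->binding->vmstate_change(vdev->binding_opaque,
                                              running);
            }
        }

        /* Registered once when the device is initialized: */
        vdev->vmstate = qemu_add_vm_change_state_handler(
                virtio_vmstate_change, vdev);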
  18. 07 Oct 2010, 1 commit
  19. 08 Sep 2010, 1 commit
  20. 31 Aug 2010, 1 commit
  21. 23 Aug 2010, 1 commit
  22. 26 Jul 2010, 1 commit
  23. 04 May 2010, 1 commit
  24. 02 Apr 2010, 3 commits
  25. 11 Feb 2010, 1 commit
    • block: add topology qdev properties · 428c149b
      Committed by Christoph Hellwig
      Add three new qdev properties to export block topology information to
      the guest.  This is needed to get optimal I/O alignment for RAID arrays
      or SSDs.
      
      The options are:
      
       - physical_block_size to specify the physical block size of the
         device; this is going to increase from 512 bytes to 4096 bytes
         for many modern storage devices.
       - min_io_size to specify the minimal I/O size without performance
         impact; this is typically set to the RAID chunk size for arrays.
       - opt_io_size to specify the optimal sustained I/O size; this is
         typically the RAID stripe width for arrays.
      
      I decided not to auto-probe these values from blkid, which might
      easily be possible, as I don't know how to deal with these issues
      on migration.
      
      Note that we specifically only set the physical_block_size, and not
      the logical one, which is the unit all I/O is described in.  The
      reason for that is that IDE does not support increasing the logical
      block size, and at least for now I want to stick to one mechanism in
      qemu and allow for easy switching of transports for a given backing
      image, which would not be possible if scsi and virtio used real 4k
      sectors while ide only used the physical block exponent.
      
      To make this common across the different block drivers, introduce a
      new BlockConf structure holding all common block properties and a
      DEFINE_BLOCK_PROPERTIES macro to add them all together, mirroring
      what is done for network drivers (a sketch follows below).  Also
      switch over all block drivers to use it, except for the floppy
      driver, which has weird driveA/driveB properties and probably won't
      require any advanced block options ever.
      
      Example usage for a virtio device with 4k physical block size and
      8k optimal I/O size:
      
        -drive file=scratch.img,media=disk,cache=none,id=scratch \
        -device virtio-blk-pci,drive=scratch,physical_block_size=4096,opt_io_size=8192
      
      aliguori: updated patch to take into account BLOCK events
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
      428c149b
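      Roughly, the structure and macro described above look like this (a
      sketch reconstructed from the message; the exact fields, types, and
      defaults in the commit may differ):

        typedef struct BlockConf {
            BlockDriverState *bs;
            uint16_t physical_block_size;
            uint16_t min_io_size;
            uint32_t opt_io_size;
        } BlockConf;

        /* One macro adds the whole set of common block properties to a
         * device, mirroring what the network drivers do. */
        #define DEFINE_BLOCK_PROPERTIES(_state, _conf)                  \
            DEFINE_PROP_DRIVE("drive", _state, _conf.bs),               \
            DEFINE_PROP_UINT16("physical_block_size", _state,           \
                               _conf.physical_block_size, 512),         \
            DEFINE_PROP_UINT16("min_io_size", _state,                   \
                               _conf.min_io_size, 0),                   \
            DEFINE_PROP_UINT32("opt_io_size", _state,                   \
                               _conf.opt_io_size, 0)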
  26. 20 Jan 2010, 2 commits
    • virtio-console: qdev conversion, new virtio-serial-bus · 98b19252
      Committed by Amit Shah
      This commit converts the virtio-console device to create a new
      virtio-serial bus that can host console and generic serial ports. The
      file hosting this code is now called virtio-serial-bus.c.
      
      The virtio console is now a very simple qdev device that sits on the
      virtio-serial-bus and communicates between the bus and qemu's chardevs.
      
      This commit also includes a few changes to the virtio backing code for
      pci and s390 to spawn the virtio-serial bus.
      
      As a result of the qdev conversion, we get rid of a lot of legacy code.
      The old-style way of instantiating a virtio console using
      
          -virtioconsole ...
      
      is maintained, but the new, preferred way is to use
      
          -device virtio-serial -device virtconsole,chardev=...
      
      With this commit, multiple devices as well as multiple ports with a
      single device can be supported.
      
      For multiple-port support, each port gets an I/O vq pair.  Since the
      guest needs to know in advance how many vqs a particular device will
      need, we have to set this number as a property of the virtio-serial
      device and also as a config option (see the sketch after this entry).
      
      In addition, we also spawn a pair of control IO vqs. This is an internal
      channel meant for guest-host communication for things like port
      open/close, sending port properties over to the guest, etc.
      
      This commit is part of a series of commits to get the full
      implementation of multiport support in place.  Future commits will
      add further support and build on the savevm version that we bump up
      here.
      Signed-off-by: Amit Shah <amit.shah@redhat.com>
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
      98b19252
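      A small sketch of the vq budget implied above: one in/out pair per
      port plus one pair for the control channel, fixed at device
      creation time (the helper name is hypothetical):

        /* Total virtqueues for a virtio-serial device: an rx/tx pair per
         * port plus an rx/tx pair for the control channel. */
        static unsigned int virtio_serial_nr_vqs(unsigned int max_ports)
        {
            return 2 * (max_ports + 1);
        }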
    • virtio: Remove duplicate macro definition for max. virtqueues, bump up the max · bb61564c
      Committed by Amit Shah
      VIRTIO_PCI_QUEUE_MAX is redefined in hw/virtio.c. Let's just keep it in
      hw/virtio.h.
      
      Also, bump up the value of the maximum allowed virtqueues to 64. This is
      in preparation to allow multiple ports per virtio-console device.
      Signed-off-by: Amit Shah <amit.shah@redhat.com>
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
      bb61564c
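      Per the message above, the end state is a single definition in
      hw/virtio.h with the raised limit:

        /* hw/virtio.h: sole definition after the duplicate is dropped. */
        #define VIRTIO_PCI_QUEUE_MAX 64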
  27. 12 Jan 2010, 2 commits
  28. 12 Dec 2009, 1 commit
  29. 28 Oct 2009, 1 commit
  30. 02 Oct 2009, 2 commits
  31. 25 Sep 2009, 1 commit
  32. 11 Aug 2009, 1 commit
    • qdev-ify virtio-blk. · d176c495
      Committed by Gerd Hoffmann
      First user of the new drive property.  With this patch applied host
      and guest config can be specified separately, like this:
      
        -drive if=none,id=disk1,file=/path/to/disk.img
        -device virtio-blk-pci,drive=disk1
      
      You can set any property for virtio-blk-pci now.  You can set the pci
      address via addr=.  You can switch the device into 0.10 compat mode
      using class=0x0180.  As this is per device you can have one 0.10 and one
      0.11 virtio block device in a single virtual machine.
      
      Old syntax continues to work; internally it does the same as the two
      lines above, though.  One side effect of this is a different
      initialization order, which might result in a different pci address
      being assigned by default.
      
      Long term plan here is to have this working for all block devices, i.e.
      once all scsi is properly qdev-ified you will be able to do something
      like this:
      
        -drive if=none,id=sda,file=/path/to/disk.img
        -device lsi,id=lsi,addr=<pciaddr>
        -device scsi-disk,drive=sda,bus=lsi.0,lun=<n>
      Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
      Message-Id: 
      d176c495
  33. 24 Jun 2009, 1 commit