1. 27 Jun 2018, 1 commit
  2. 26 Mar 2018, 1 commit
    • virtio_net: flush uncompleted TX on reset · 94b52958
      Authored by Greg Kurz
      If the backend could not transmit a packet right away for some reason,
      the packet is queued for asynchronous sending. The corresponding vq
      element is tracked in the async_tx.elem field of the VirtIONetQueue,
      for later freeing when the transmission is complete.
      
      If a reset happens before completion, virtio_net_tx_complete() will push
      async_tx.elem back to the guest anyway, and we end up with the inuse
      counter of the vq being equal to -1. The next call to virtqueue_pop() is
      then likely to fail with "Virtqueue size exceeded".
      
      This can be reproduced easily by starting a guest with a hubport backend
      that is not connected to a functional network, e.g.:
      
       -device virtio-net-pci,netdev=hub0 -netdev hubport,id=hub0,hubid=0
      
      and no other -netdev hubport,hubid=0 on the command line.
      
      The appropriate fix is to ensure that such an asynchronous transmission
      cannot survive a device reset. So for all queues, we first try to send
      the packet again, and purge it if the backend still cannot deliver it
      (see the sketch after this entry).
      
      CC: qemu-stable@nongnu.org
      Reported-by: R. Nageswara Sastry <nasastry@in.ibm.com>
      Buglink: https://github.com/open-power-host-os/qemu/issues/37
      Signed-off-by: Greg Kurz <groug@kaod.org>
      Tested-by: R. Nageswara Sastry <nasastry@in.ibm.com>
      Signed-off-by: Jason Wang <jasowang@redhat.com>
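      A minimal sketch of the approach, in the style of the surrounding
      virtio-net code. It assumes QEMU's qemu_flush_or_purge_queued_packets()
      net-layer helper; the exact hunk in virtio_net_reset() may differ:

          /* Sketch: on device reset, retry and then purge any pending async
           * TX so that async_tx.elem cannot be completed against a freshly
           * reset virtqueue. */
          static void virtio_net_reset(VirtIODevice *vdev)
          {
              VirtIONet *n = VIRTIO_NET(vdev);
              int i;

              /* ... existing reset logic ... */

              for (i = 0; i < n->max_queues; i++) {
                  NetClientState *nc = qemu_get_subqueue(n->nic, i);

                  if (nc->peer) {
                      /* Send again; purge if the backend still can't deliver. */
                      qemu_flush_or_purge_queued_packets(nc->peer, true);
                      assert(!virtio_net_get_subqueue(nc)->async_tx.elem);
                  }
              }
          }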
  3. 14 Mar 2018, 2 commits
  4. 03 Mar 2018, 1 commit
  5. 09 Feb 2018, 2 commits
  6. 28 Nov 2017, 1 commit
    • virtio-net: don't touch virtqueue if vm is stopped · 70e53e6e
      Authored by Jason Wang
      Guest state should not be touched if the VM is stopped; unfortunately we
      didn't check the running state and tried to drain the tx queue
      unconditionally in virtio_net_set_status(). A crash was then noticed on
      the migration destination when the user typed "quit" after the virtqueue
      state was loaded but before the region cache was initialized. In this
      case, virtio_net_drop_tx_queue_data() tries to access the uninitialized
      region cache.
      
      Fix this by only dropping tx queue data when the VM is running (see the
      sketch after this entry).
      
      Fixes: 283e2c2a ("net: virtio-net discards TX data after link down")
      Cc: Yuri Benditovich <yuri.benditovich@daynix.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Stefan Hajnoczi <stefanha@redhat.com>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: qemu-stable@nongnu.org
      Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
      Signed-off-by: Jason Wang <jasowang@redhat.com>
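      A minimal sketch of the guard, assuming the drain happens in
      virtio_net_set_status() and that the device model exposes the run state
      as vdev->vm_running (condition shown for illustration):

          /* Sketch: only drop queued TX data while the VM is running; on a
           * stopped VM (e.g. a migration destination that was quit early)
           * the vring region cache may not be initialized yet. */
          if ((n->status & VIRTIO_NET_S_LINK_UP) == 0 &&
              (queue_status & VIRTIO_CONFIG_S_DRIVER_OK) &&
              vdev->vm_running) {                /* <-- the added check */
              q->tx_waiting = 0;
              virtio_queue_set_notification(q->tx_vq, 1);
              virtio_net_drop_tx_queue_data(vdev, q->tx_vq);
          }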
  7. 27 Sep 2017, 1 commit
  8. 17 Jul 2017, 2 commits
  9. 04 Jul 2017, 2 commits
  10. 13 Jun 2017, 1 commit
  11. 26 May 2017, 1 commit
  12. 23 May 2017, 1 commit
  13. 31 Mar 2017, 1 commit
  14. 18 Feb 2017, 1 commit
  15. 14 Feb 2017, 1 commit
  16. 10 Jan 2017, 2 commits
  17. 15 Nov 2016, 3 commits
  18. 10 Oct 2016, 5 commits
  19. 27 Sep 2016, 1 commit
    • virtio-net: allow increasing rx queue size · 1c0fbfa3
      Authored by Michael S. Tsirkin
      This allows increasing the rx queue size up to 1024: unlike with tx,
      guests don't put huge S/G lists into RX, so the risk of running into
      the max 1024 limitation due to some off-by-one seems small (a
      command-line example follows this entry).
      
      It's helpful for users like OVS-DPDK which don't do any buffering on the
      host: 1K roughly matches 500 entries in tun + 256 in the current rx
      queue, which seems to work reasonably well. We could probably make do
      with ~750 entries but the virtio spec limits us to powers of two.
      It might be a good idea to specify an s/g size limit in a future
      version.
      
      It also might be possible to make the queue size smaller down the road;
      64 seems like the minimal value which will still work (as guests seem to
      assume a queue full of 1.5K buffers is enough to process the largest
      incoming packet, which is ~64K).  No one actually asked for this, and
      with virtio 1 guests can reduce the ring size without need for host
      configuration, so don't bother with this for now.
      
      Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
      Cc: Jason Wang <jasowang@redhat.com>
      Suggested-by: Patrik Hermansson <phermansson@gmail.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
      Signed-off-by: Jason Wang <jasowang@redhat.com>
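      With this change the receive queue size becomes configurable on the
      command line; a hypothetical invocation (netdev names illustrative,
      property assumed to be rx_queue_size):

       -device virtio-net-pci,netdev=net0,rx_queue_size=1024 -netdev tap,id=net0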
  20. 22 Jul 2016, 2 commits
  21. 20 Jul 2016, 1 commit
    • qapi: Change Netdev into a flat union · f394b2e2
      Authored by Eric Blake
      This is a mostly-mechanical conversion that creates a new flat
      union 'Netdev' QAPI type that covers all the branches of the
      former 'NetClientOptions' simple union, where the branches are
      now listed in a new 'NetClientDriver' enum rather than generated
      from the simple union.  The flat union does not change the command
      line syntax accepted by new code, and will make it possible for a
      future patch to switch the QMP command to parse a boxed union with
      no change to valid QMP; but it does have some ripple effect on the
      C code when dealing with the new types.  (An abridged schema sketch
      follows this entry.)
      
      While making the conversion, note that the 'NetLegacy' type
      remains unchanged: it applies only to legacy command line options,
      and will not be ported to QMP, so it should remain a wrapper
      around a simple union; to avoid confusion, the type named
      'NetClientOptions' is now gone, and we introduce 'NetLegacyOptions'
      in its place.  Then, in the C code, we convert from NetLegacy to
      Netdev as soon as possible, so that the bulk of the net stack
      only has to deal with one QAPI type, not two.  Note that since
      the old legacy code always rejected 'hubport', we can just omit
      that branch from the new 'NetLegacyOptions' simple union.
      
      Based on an idea originally by Zoltán Kővágó <DirtY.iCE.hu@gmail.com>:
      Message-Id: <01a527fbf1a5de880091f98cf011616a78adeeee.1441627176.git.DirtY.iCE.hu@gmail.com>
      although the sed script in that patch no longer applies due to
      other changes in the tree since then, and I also did some manual
      cleanups (such as fixing whitespace to keep checkpatch happy).
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <1468468228-27827-13-git-send-email-eblake@redhat.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      [Fixup from Eric squashed in]
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
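      An abridged sketch of the shape of the change, in QAPI schema notation
      (branch list shortened here; the real schema names many more drivers,
      and the exact syntax of that era may differ slightly):

          { 'enum': 'NetClientDriver',
            'data': [ 'none', 'nic', 'user', 'tap', 'socket', 'hubport' ] }

          { 'union': 'Netdev',
            'base': { 'id': 'str', 'type': 'NetClientDriver' },
            'discriminator': 'type',
            'data': { 'user':    'NetdevUserOptions',
                      'tap':     'NetdevTapOptions',
                      'socket':  'NetdevSocketOptions',
                      'hubport': 'NetdevHubPortOptions' } }

      The discriminator makes 'type' an ordinary member of Netdev, so C code
      can switch on netdev->type and access the branch members through the
      generated netdev->u union instead of unwrapping a nested options object.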
  22. 04 Jul 2016, 1 commit
  23. 27 Jun 2016, 1 commit
  24. 16 Feb 2016, 1 commit
    • virtio-net: use the backend cross-endian capabilities · 1bfa316c
      Authored by Greg Kurz
      When running a fully emulated device in cross-endian conditions, including
      a virtio 1.0 device offered to a big endian guest, we need to fix the vnet
      headers. This is currently handled by the virtio_net_hdr_swap() function
      in the core virtio-net code but it should actually be handled by the net
      backend.
      
      With this patch, virtio-net now tries to configure the backend to do the
      endian fixing when the device starts (i.e. the driver sets the CONFIG_OK
      bit). If the backend cannot support the requested endianness, we fall
      back to virtio_net_hdr_swap(): this is recorded in the
      needs_vnet_hdr_swap flag, to be used in the TX and RX paths (see the
      sketch after this entry).
      
      Note that we reset the backend to the default behaviour (guest native
      endianness) when the device stops (i.e. the device status had the
      CONFIG_OK bit set and the driver clears it). This is needed, with the
      linux tap backend at least, otherwise the guest may lose network
      connectivity if rebooted into a different endianness.
      
      The current vhost-net code also tries to configure net backends. This
      will no longer be needed and will be reverted in a subsequent patch.
      Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
      Reviewed-by: Laurent Vivier <lvivier@redhat.com>
      Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
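      A minimal sketch of the backend configuration step, using QEMU's
      qemu_set_vnet_le()/qemu_set_vnet_be() net-backend helpers; the function
      name and the fallback wiring here are illustrative:

          /* Sketch: ask the net backend (e.g. tap) to fix vnet header
           * endianness for the device; if it can't, record that the core
           * must swap headers itself via virtio_net_hdr_swap(). */
          static void virtio_net_set_backend_endian(VirtIONet *n,
                                                    NetClientState *peer,
                                                    bool enable)
          {
              VirtIODevice *vdev = VIRTIO_DEVICE(n);
              int r;

              if (virtio_is_big_endian(vdev)) {
                  r = qemu_set_vnet_be(peer, enable);
              } else {
                  r = qemu_set_vnet_le(peer, enable);
              }

              /* Backend can't do it: swap in the TX/RX paths instead. */
              n->needs_vnet_hdr_swap = enable && r < 0;
          }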
  25. 07 Feb 2016, 1 commit
    • virtio: move allocation to virtqueue_pop/vring_pop · 51b19ebe
      Authored by Paolo Bonzini
      The return code of virtqueue_pop/vring_pop is unused except to check for
      errors or 0.  We can thus easily move allocation inside the functions
      and just return a pointer to the VirtQueueElement.
      
      The advantage is that we will be able to allocate only the space that
      is needed for the actual size of the s/g list instead of the full
      VIRTQUEUE_MAX_SIZE items.  Currently VirtQueueElement takes about 48K
      of memory, and this kind of allocation puts a lot of stress on malloc.
      By cutting the size by two or three orders of magnitude, malloc can
      use much more efficient algorithms.
      
      The patch is pretty large, but changes to each device are testable more
      or less independently.  Splitting it would mostly add churn.  (A sketch
      of the new calling convention follows this entry.)
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
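      A minimal sketch of the calling convention after this change:
      virtqueue_pop() heap-allocates an element sized for the actual s/g list
      and returns NULL when the ring is empty; the surrounding loop and the
      'len' variable are illustrative:

          for (;;) {
              /* Element is now allocated by virtqueue_pop() instead of
               * occupying a full VIRTQUEUE_MAX_SIZE footprint on the
               * caller's side. */
              VirtQueueElement *elem = virtqueue_pop(vq, sizeof(VirtQueueElement));

              if (!elem) {
                  break;                     /* ring empty (or error) */
              }

              /* ... process elem->out_sg / elem->in_sg ... */

              virtqueue_push(vq, elem, len); /* complete the request */
              g_free(elem);                  /* caller owns the allocation */
          }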
  26. 29 Jan 2016, 1 commit
  27. 01 Oct 2015, 1 commit
  28. 24 Sep 2015, 1 commit