1. 15 Nov, 2016 · 3 commits
  2. 10 Oct, 2016 · 5 commits
  3. 27 Sep, 2016 · 1 commit
    • virtio-net: allow increasing rx queue size · 1c0fbfa3
      Michael S. Tsirkin committed
      This allows increasing the rx queue size up to 1024: unlike with tx,
      guests don't put huge S/G lists into RX, so the risk of running into
      the 1024 maximum due to some off-by-one seems small.
      
      It's helpful for users like OVS-DPDK which don't do any buffering on the
      host: 1K roughly matches the 500 entries in tun plus the 256 in the
      current rx queue, which seems to work reasonably well. We could probably
      make do with ~750 entries, but the virtio spec limits us to powers of
      two. It might be a good idea to specify an s/g size limit in a future
      version.
      
      It might also be possible to make the queue size smaller down the road;
      64 seems like the minimum value that still works (guests seem to assume
      a queue full of 1.5K buffers is enough to process the largest incoming
      packet, which is ~64K). No one has actually asked for this, and with
      virtio 1 guests can reduce the ring size without any host configuration,
      so don't bother with it for now.
      
      Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
      Cc: Jason Wang <jasowang@redhat.com>
      Suggested-by: Patrik Hermansson <phermansson@gmail.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
      Signed-off-by: Jason Wang <jasowang@redhat.com>
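      To make the limits described above concrete, here is a minimal,
      self-contained C sketch of the kind of validation a realize function
      might perform on a user-supplied rx queue size, assuming the minimum
      stays at the current default of 256 and the maximum becomes 1024; the
      constant and helper names are placeholders, not the exact QEMU code.

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdio.h>

        #define RX_QUEUE_MIN_SIZE 256   /* current default rx queue size */
        #define RX_QUEUE_MAX_SIZE 1024  /* new upper limit */

        static bool is_power_of_2(unsigned int n)
        {
            return n != 0 && (n & (n - 1)) == 0;
        }

        /* The virtio spec requires queue sizes to be powers of two, so a
         * value such as 750 must be rejected even though it is in range. */
        static bool rx_queue_size_valid(unsigned int size)
        {
            return size >= RX_QUEUE_MIN_SIZE &&
                   size <= RX_QUEUE_MAX_SIZE &&
                   is_power_of_2(size);
        }

        int main(void)
        {
            unsigned int sizes[] = { 256, 512, 750, 1024, 2048 };
            for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
                printf("rx_queue_size=%u -> %s\n", sizes[i],
                       rx_queue_size_valid(sizes[i]) ? "ok" : "rejected");
            }
            return 0;
        }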
  4. 22 Jul, 2016 · 2 commits
  5. 20 Jul, 2016 · 1 commit
    • qapi: Change Netdev into a flat union · f394b2e2
      Eric Blake committed
      This is a mostly-mechanical conversion that creates a new flat
      union 'Netdev' QAPI type covering all the branches of the former
      'NetClientOptions' simple union, where the branches are now listed
      in a new 'NetClientDriver' enum rather than generated from the
      simple union.  The existence of a flat union does not change the
      command-line syntax accepted for new code, and it will make it
      possible for a future patch to switch the QMP command to parse a
      boxed union with no change to valid QMP; but it does have some
      ripple effect on the C code when dealing with the new types.
      
      While making the conversion, note that the 'NetLegacy' type
      remains unchanged: it applies only to legacy command line options,
      and will not be ported to QMP, so it should remain a wrapper
      around a simple union; to avoid confusion, the type named
      'NetClientOptions' is now gone, and we introduce 'NetLegacyOptions'
      in its place.  Then, in the C code, we convert from NetLegacy to
      Netdev as soon as possible, so that the bulk of the net stack
      only has to deal with one QAPI type, not two.  Note that since
      the old legacy code always rejected 'hubport', we can just omit
      that branch from the new 'NetLegacyOptions' simple union.
      
      Based on an idea originally by Zoltán Kővágó <DirtY.iCE.hu@gmail.com>:
      Message-Id: <01a527fbf1a5de880091f98cf011616a78adeeee.1441627176.git.DirtY.iCE.hu@gmail.com>
      although the sed script in that patch no longer applies due to
      other changes in the tree since then, and I also did some manual
      cleanups (such as fixing whitespace to keep checkpatch happy).
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <1468468228-27827-13-git-send-email-eblake@redhat.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      [Fixup from Eric squashed in]
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
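      To make the 'flat union' terminology concrete, here is a small,
      self-contained C sketch of the general shape such a type takes: a
      discriminator enum plus a union of branch structs sitting next to the
      common members. The type and field names below are simplified
      placeholders, not the actual QAPI-generated code.

        #include <stdio.h>

        /* Tag enum listing every branch, analogous to the new
         * NetClientDriver enum. */
        typedef enum {
            NET_DRIVER_USER,
            NET_DRIVER_TAP,
        } NetDriverKind;

        /* Branch payloads (simplified placeholders). */
        typedef struct { const char *hostfwd; } UserOpts;
        typedef struct { const char *ifname;  } TapOpts;

        /* Flat union: common members live next to the tag, and the
         * branch-specific members live in the union selected by that tag. */
        typedef struct {
            const char   *id;     /* common member */
            NetDriverKind type;   /* discriminator */
            union {
                UserOpts user;
                TapOpts  tap;
            } u;
        } NetdevExample;

        static void describe(const NetdevExample *nd)
        {
            switch (nd->type) {
            case NET_DRIVER_USER:
                printf("%s: user netdev, hostfwd=%s\n",
                       nd->id, nd->u.user.hostfwd);
                break;
            case NET_DRIVER_TAP:
                printf("%s: tap netdev, ifname=%s\n",
                       nd->id, nd->u.tap.ifname);
                break;
            }
        }

        int main(void)
        {
            NetdevExample nd = { .id = "net0", .type = NET_DRIVER_TAP,
                                 .u.tap = { .ifname = "tap0" } };
            describe(&nd);
            return 0;
        }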
  6. 04 Jul, 2016 · 1 commit
  7. 27 Jun, 2016 · 1 commit
  8. 16 Feb, 2016 · 1 commit
    • virtio-net: use the backend cross-endian capabilities · 1bfa316c
      Greg Kurz committed
      When running a fully emulated device in cross-endian conditions,
      including a virtio 1.0 device offered to a big-endian guest, we need to
      fix the vnet headers. This is currently handled by the
      virtio_net_hdr_swap() function in the core virtio-net code, but it
      should actually be handled by the net backend.
      
      With this patch, virtio-net now tries to configure the backend to do the
      endian fixing when the device starts (i.e. the driver sets the CONFIG_OK
      bit). If the backend cannot support the requested endianness, we have to
      fall back to virtio_net_hdr_swap(): this is recorded in the
      needs_vnet_hdr_swap flag, to be used in the TX and RX paths.
      
      Note that we reset the backend to the default behaviour (guest-native
      endianness) when the device stops (i.e. the device status had the
      CONFIG_OK bit set and the driver clears it). This is needed with the
      Linux tap backend at least; otherwise the guest may lose network
      connectivity if it is rebooted into a different endianness.
      
      The current vhost-net code also tries to configure net backends. This
      will no longer be needed and will be reverted in a subsequent patch.
      Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
      Reviewed-by: Laurent Vivier <lvivier@redhat.com>
      Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Reviewed-by: Laurent Vivier <lvivier@redhat.com>
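      The start/stop behaviour described above can be sketched in standalone
      C: when the device starts, try to push the required vnet-header
      endianness down to the backend, and only if that fails set a flag
      telling QEMU to byteswap the headers itself on the TX and RX paths;
      when the device stops, restore the backend default and clear the flag.
      The backend_set_vnet_le()/backend_reset_vnet() helpers and the globals
      are hypothetical stand-ins, not the actual QEMU API.

        #include <stdbool.h>
        #include <stdio.h>

        /* Hypothetical backend hook: ask the backend (e.g. tap) to handle
         * vnet headers in the given endianness; returns false if the
         * backend lacks the capability. */
        static bool backend_set_vnet_le(bool little_endian)
        {
            (void)little_endian;
            return false;          /* pretend the capability is missing */
        }

        /* Hypothetical hook: back to the default, i.e. headers are assumed
         * to be in the guest's native endianness. */
        static void backend_reset_vnet(void)
        {
        }

        static bool device_needs_le_headers = true;  /* e.g. virtio 1.0 */
        static bool host_is_le              = false; /* e.g. big-endian host */
        static bool needs_vnet_hdr_swap;

        /* Device start (driver sets CONFIG_OK): try to delegate the fixing. */
        static void device_start(void)
        {
            if (device_needs_le_headers != host_is_le &&
                !backend_set_vnet_le(device_needs_le_headers)) {
                needs_vnet_hdr_swap = true;  /* fall back to swapping in QEMU */
            }
        }

        /* Device stop (driver clears CONFIG_OK): restore the default so a
         * guest rebooted into a different endianness keeps working. */
        static void device_stop(void)
        {
            backend_reset_vnet();
            needs_vnet_hdr_swap = false;
        }

        int main(void)
        {
            device_start();
            printf("needs_vnet_hdr_swap = %s\n",
                   needs_vnet_hdr_swap ? "true" : "false");
            device_stop();
            return 0;
        }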
  9. 07 Feb, 2016 · 1 commit
    • virtio: move allocation to virtqueue_pop/vring_pop · 51b19ebe
      Paolo Bonzini committed
      The return code of virtqueue_pop/vring_pop is unused except to check for
      errors or 0.  We can thus easily move allocation inside the functions
      and just return a pointer to the VirtQueueElement.
      
      The advantage is that we will be able to allocate only the space that
      is needed for the actual size of the s/g list instead of the full
      VIRTQUEUE_MAX_SIZE items.  Currently VirtQueueElement takes about 48K
      of memory, and this kind of allocation puts a lot of stress on malloc.
      By cutting the size by two or three orders of magnitude, malloc can
      use much more efficient algorithms.
      
      The patch is pretty large, but changes to each device are testable
      more or less independently.  Splitting it would mostly add churn.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
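      The memory saving can be illustrated with a small, standalone C sketch:
      instead of a fixed-size element that always reserves room for
      VIRTQUEUE_MAX_SIZE descriptors, the pop function allocates a block
      sized for the descriptors the chain actually uses. The struct layout
      here is a simplified placeholder, not the real VirtQueueElement.

        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/uio.h>

        #define VIRTQUEUE_MAX_SIZE 1024

        /* Simplified element: the real code also tracks guest-physical
         * addresses and separate in/out arrays. */
        typedef struct {
            unsigned int index;
            unsigned int out_num;
            unsigned int in_num;
            struct iovec sg[];   /* flexible array sized at allocation time */
        } ElemExample;

        /* Allocate only what the actual descriptor chain needs. */
        static ElemExample *elem_pop(unsigned int out_num, unsigned int in_num)
        {
            size_t sz = sizeof(ElemExample) +
                        (size_t)(out_num + in_num) * sizeof(struct iovec);
            ElemExample *elem = calloc(1, sz);
            if (elem) {
                elem->out_num = out_num;
                elem->in_num = in_num;
            }
            return elem;
        }

        int main(void)
        {
            /* A typical rx chain is a handful of descriptors, not 1024. */
            ElemExample *elem = elem_pop(0, 4);
            size_t small = sizeof(ElemExample) + 4 * sizeof(struct iovec);
            size_t big   = sizeof(ElemExample) +
                           VIRTQUEUE_MAX_SIZE * sizeof(struct iovec);
            printf("sized-to-fit: %zu bytes vs worst-case: %zu bytes\n",
                   small, big);
            free(elem);
            return 0;
        }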
  10. 29 Jan, 2016 · 1 commit
  11. 01 Oct, 2015 · 1 commit
  12. 24 Sep, 2015 · 2 commits
  13. 10 Sep, 2015 · 1 commit
  14. 13 Aug, 2015 · 1 commit
  15. 27 Jul, 2015 · 2 commits
  16. 20 Jul, 2015 · 3 commits
  17. 13 Jul, 2015 · 1 commit
  18. 19 Jun, 2015 · 1 commit
  19. 11 Jun, 2015 · 4 commits
  20. 04 Jun, 2015 · 1 commit
  21. 01 Jun, 2015 · 1 commit
  22. 31 May, 2015 · 2 commits
  23. 11 May, 2015 · 1 commit
  24. 30 Apr, 2015 · 1 commit
  25. 28 Apr, 2015 · 1 commit