1. 07 Feb 2017 (4 commits)
  2. 05 Feb 2017 (1 commit)
  3. 04 Feb 2017 (1 commit)
  4. 26 Jan 2017 (3 commits)
  5. 25 Jan 2017 (1 commit)
  6. 21 Jan 2017 (1 commit)
  7. 09 Jan 2017 (1 commit)
  8. 30 Dec 2016 (1 commit)
  9. 25 Dec 2016 (1 commit)
  10. 24 Dec 2016 (9 commits)
  11. 18 Dec 2016 (4 commits)
    • virtio_net: xdp, add slowpath case for non contiguous buffers · 72979a6c
      Committed by John Fastabend
      virtio_net XDP support expects receive buffers to be contiguous.
      If this is not the case we enable a slowpath to allow connectivity
      to continue, but at a significant performance overhead associated
      with linearizing the data. To make it painfully clear to users that
      XDP is running in a degraded mode we throw an xdp buffer error.

      To linearize packets we allocate a page and copy the segments of
      the data, including the header, into it. After this the page can be
      handled by the XDP code flow as normal (a simplified sketch of the
      linearization step follows this entry).

      Then, depending on the return code, the page is either freed or sent
      to the XDP xmit path. There is no attempt to optimize this path.

      This case is handled simply as a precaution in case some unknown
      backend were to generate packets in this form. To test this I had to
      hack qemu and force it to generate these packets. I do not expect
      this case to be generated by "real" backends.
      Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
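
      A minimal sketch of the linearization step described above, assuming
      hypothetical names (xdp_linearize_sketch, frags/frag_lens); it is not
      the actual virtio_net helper:

      /*
       * Copy all fragments of a multi-buffer receive into one freshly
       * allocated page so the XDP program sees a single contiguous buffer.
       * Error handling is reduced to the bare minimum.
       */
      #include <linux/gfp.h>
      #include <linux/mm.h>
      #include <linux/string.h>

      static void *xdp_linearize_sketch(void **frags, unsigned int *frag_lens,
                                        unsigned int nr_frags,
                                        unsigned int *tot_len)
      {
              struct page *page = alloc_page(GFP_ATOMIC);
              unsigned int off = 0, i;

              if (!page)
                      return NULL;

              for (i = 0; i < nr_frags; i++) {
                      /* Refuse to overflow the single destination page. */
                      if (off + frag_lens[i] > PAGE_SIZE) {
                              __free_pages(page, 0);
                              return NULL;
                      }
                      memcpy(page_address(page) + off, frags[i], frag_lens[i]);
                      off += frag_lens[i];
              }

              *tot_len = off;
              return page_address(page); /* caller frees or transmits the page */
      }
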
    • virtio_net: add XDP_TX support · 56434a01
      Committed by John Fastabend
      This adds support for the XDP_TX action to virtio_net. When an XDP
      program is run and returns the XDP_TX action, the virtio_net XDP
      implementation transmits the packet on a TX queue aligned with the
      CPU the packet was processed on (see the sketch after this entry).

      Before sending the packet, the header is zeroed. XDP is also expected
      to handle checksums correctly itself, so no checksum offload support
      is provided.
      Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
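
      An illustrative sketch of the per-CPU queue selection and header
      handling described above (names and structure are assumptions, not
      the driver's code):

      #include <linux/smp.h>
      #include <linux/string.h>
      #include <linux/virtio_net.h>

      /* Pick the XDP TX queue for the CPU currently processing the packet. */
      static unsigned int pick_xdp_txq(unsigned int nr_xdp_queues)
      {
              /* One XDP TX queue per CPU; wrap if there are fewer queues. */
              return smp_processor_id() % nr_xdp_queues;
      }

      /* XDP handles checksums itself, so transmit a zeroed virtio header. */
      static void prep_xdp_tx_hdr(struct virtio_net_hdr_mrg_rxbuf *hdr)
      {
              memset(hdr, 0, sizeof(*hdr));
      }
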
    • virtio_net: add dedicated XDP transmit queues · 672aafd5
      Committed by John Fastabend
      XDP requires using isolated transmit queues to avoid interference
      with the normal networking stack (BQL, NETDEV_TX_BUSY, etc). This
      patch adds an XDP queue per CPU when an XDP program is loaded and
      does not expose these queues to the OS via the normal API call to
      netif_set_real_num_tx_queues(). This way the stack will never push
      an skb to these queues.

      However, the virtio/vhost/qemu implementation only allows creating
      TX/RX queue pairs at this time, so creating only TX queues was not
      possible. Because the associated RX queues are created anyway, I
      went ahead and exposed them to the stack and let the backend use
      them. This makes more RX queues than TX queues visible to the
      network stack, which is worth mentioning but does not cause any
      issues as far as I can tell (a sketch of the resulting queue
      accounting follows this entry).
      Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
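
      A sketch of the queue accounting this implies, under assumed names
      (curr_queue_pairs, xdp_queue_pairs); the point is only that the XDP
      TX queues are never reported to the stack:

      #include <linux/cpumask.h>
      #include <linux/netdevice.h>

      static int setup_queues_sketch(struct net_device *dev,
                                     unsigned int curr_queue_pairs,
                                     unsigned int *xdp_queue_pairs)
      {
              unsigned int total_pairs;
              int err;

              /* Reserve one extra TX/RX pair per CPU for XDP transmission. */
              *xdp_queue_pairs = num_possible_cpus();
              total_pairs = curr_queue_pairs + *xdp_queue_pairs;

              /* Expose only the normal TX queues; the stack never sees the
               * XDP-reserved ones, so it cannot push skbs onto them. */
              err = netif_set_real_num_tx_queues(dev, curr_queue_pairs);
              if (err)
                      return err;

              /* The RX side of every pair (including the XDP pairs) stays
               * visible to the stack and usable by the backend. */
              return netif_set_real_num_rx_queues(dev, total_pairs);
      }
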
    • virtio_net: Add XDP support · f600b690
      Committed by John Fastabend
      This adds XDP support to virtio_net. Some requirements must be met
      for XDP to be enabled, depending on the mode. First, it is only
      supported with LRO disabled, so that data is not pushed across
      multiple buffers. Second, the MTU must be less than a page size to
      avoid having to handle XDP across multiple pages.

      If mergeable receive is enabled, this patch only supports the case
      where header and data are in the same buffer, which we can check
      when a packet is received by looking at num_buf. If num_buf is
      greater than 1 and an XDP program is loaded, the packet is dropped
      and a warning is thrown. When any_header_sg is set this does not
      happen and both header and data are put in a single buffer as
      expected, so we check for this when XDP programs are loaded.
      Subsequent patches will process such packets in a degraded mode to
      ensure connectivity and correctness are not lost even if the backend
      pushes packets into multiple buffers.

      If big packets mode is enabled and the MTU/LRO conditions above are
      met, then XDP is allowed (a sketch of these load-time checks follows
      this entry).

      This patch was tested with qemu with vhost=on and vhost=off, where
      mergeable and big_packet modes were forced via hard-coded feature
      negotiation. Multiple buffers per packet were forced via a small
      test patch to vhost.c in the vhost=on qemu mode.
      Suggested-by: Shrijeet Mukherjee <shrijeet@gmail.com>
      Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
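
      An illustrative sketch of the load-time checks described above, with
      assumed flag and field names (not the actual virtnet_xdp_set()):

      #include <linux/errno.h>
      #include <linux/mm.h>
      #include <linux/types.h>

      struct xdp_caps_sketch {
              bool lro_enabled;       /* offloads that spread a packet over buffers */
              bool any_header_sg;     /* header and data can share one buffer */
              unsigned int mtu;
              unsigned int hdr_len;
      };

      static int can_enable_xdp(const struct xdp_caps_sketch *caps)
      {
              /* LRO would push a single packet across multiple buffers. */
              if (caps->lro_enabled)
                      return -EOPNOTSUPP;

              /* Keep every frame, plus its header, within a single page. */
              if (caps->mtu + caps->hdr_len > PAGE_SIZE)
                      return -EINVAL;

              /* With mergeable buffers, rely on any_header_sg so header and
               * data land in the same buffer. */
              if (!caps->any_header_sg)
                      return -EOPNOTSUPP;

              return 0;
      }
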
  12. 13 Dec 2016 (1 commit)
  13. 07 Dec 2016 (1 commit)
  14. 29 Nov 2016 (1 commit)
    • virtio-net: enable multiqueue by default · 44900010
      Committed by Jason Wang
      We currently use a single queue even if multiqueue is enabled, and
      let the admin enable it through ethtool later. This is meant to
      avoid a possible regression (small-packet TCP stream transmission),
      but it looks like overkill since:

      - single-queue users can disable multiqueue when launching qemu
      - it brings extra trouble for management, since an extra admin tool
        is needed in the guest to enable multiqueue
      - multiqueue performs much better than single queue in most cases

      So this patch enables multiqueue by default: if #queues is less than
      or equal to #vcpu, enable that many queue pairs; if #queues is
      greater than #vcpu, enable #vcpu queue pairs. In other words, enable
      min(#queues, #vcpu) queue pairs (see the sketch after this entry).
      
      Cc: Hannes Frederic Sowa <hannes@redhat.com>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Neil Horman <nhorman@redhat.com>
      Cc: Jeremy Eder <jeder@redhat.com>
      Cc: Marko Myllynen <myllynen@redhat.com>
      Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Acked-by: Neil Horman <nhorman@tuxdriver.com>
      Acked-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
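
      A minimal sketch of that default selection, with an illustrative
      function name:

      #include <linux/cpumask.h>
      #include <linux/kernel.h>

      /* Enable min(#queue pairs offered by the device, #online vcpus). */
      static unsigned int default_queue_pairs(unsigned int max_queue_pairs)
      {
              return min_t(unsigned int, max_queue_pairs, num_online_cpus());
      }
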
  15. 17 Nov 2016 (1 commit)
  16. 08 Nov 2016 (1 commit)
  17. 29 Oct 2016 (1 commit)
  18. 21 Oct 2016 (1 commit)
    • net: use core MTU range checking in virt drivers · d0c2c997
      Committed by Jarod Wilson
      hyperv_net:
      - set min/max_mtu, per Haiyang, after rndis_filter_device_add
      
      virtio_net:
      - set min/max_mtu (see the sketch after this entry)
      - remove virtnet_change_mtu
      
      vmxnet3:
      - set min/max_mtu
      
      xen-netback:
      - min_mtu = 0, max_mtu = 65517
      
      xen-netfront:
      - min_mtu = 0, max_mtu = 65535
      
      unisys/visor:
      - clean up defines a little so they do not clash with the network
        core or add redundant definitions
      
      CC: netdev@vger.kernel.org
      CC: virtualization@lists.linux-foundation.org
      CC: "K. Y. Srinivasan" <kys@microsoft.com>
      CC: Haiyang Zhang <haiyangz@microsoft.com>
      CC: "Michael S. Tsirkin" <mst@redhat.com>
      CC: Shrikrishna Khare <skhare@vmware.com>
      CC: "VMware, Inc." <pv-drivers@vmware.com>
      CC: Wei Liu <wei.liu2@citrix.com>
      CC: Paul Durrant <paul.durrant@citrix.com>
      CC: David Kershner <david.kershner@unisys.com>
      Signed-off-by: Jarod Wilson <jarod@redhat.com>
      Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
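
      A minimal sketch of this pattern as applied to virtio_net: fill in
      the net_device MTU bounds and drop the custom change_mtu handler.
      The exact bounds shown are illustrative:

      #include <linux/if_ether.h>
      #include <linux/netdevice.h>

      static void set_mtu_range_sketch(struct net_device *dev)
      {
              dev->min_mtu = ETH_MIN_MTU;     /* 68, smallest legal Ethernet MTU */
              dev->max_mtu = ETH_MAX_MTU;     /* 65535 */
              /* No ndo_change_mtu needed: dev_set_mtu() in the core now
               * rejects values outside [min_mtu, max_mtu]. */
      }
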
  19. 03 Sep 2016 (1 commit)
  20. 20 Jul 2016 (1 commit)
  21. 14 Jun 2016 (1 commit)
  22. 11 Jun 2016 (1 commit)
  23. 07 Jun 2016 (1 commit)
  24. 01 Jun 2016 (1 commit)