  1. 13 Dec 2016, 1 commit
  2. 07 Dec 2016, 1 commit
  3. 29 Nov 2016, 1 commit
    • virtio-net: enable multiqueue by default · 44900010
      Jason Wang committed
      We use a single queue even if multiqueue is enabled, and let the admin
      enable it through ethtool later. This was done to avoid a possible
      regression (small-packet TCP stream transmission), but it looks like
      overkill since:

      - single-queue users can disable multiqueue when launching qemu
      - it brings extra trouble for management, since an extra admin tool is
        needed in the guest to enable multiqueue
      - multiqueue performs much better than single queue in most cases

      So this patch enables multiqueue by default: if #queues is less than or
      equal to #vcpu, enable that many queue pairs; if #queues is greater
      than #vcpu, enable #vcpu queue pairs.
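
      A minimal standalone sketch (plain C, with illustrative names that are
      not the driver's actual code) of the default selection described above:
      bring up min(#queues offered by the device, #online vcpus) queue pairs.

      #include <stdio.h>

      /* Use every queue pair the device offers, but never more queue
       * pairs than there are online vcpus. */
      static unsigned int default_curr_queue_pairs(unsigned int max_queue_pairs,
                                                   unsigned int online_vcpus)
      {
              return max_queue_pairs <= online_vcpus ? max_queue_pairs
                                                     : online_vcpus;
      }

      int main(void)
      {
              /* Device offers 8 queue pairs, guest has 4 vcpus -> 4 pairs. */
              printf("%u\n", default_curr_queue_pairs(8, 4));
              /* Device offers 2 queue pairs, guest has 4 vcpus -> 2 pairs. */
              printf("%u\n", default_curr_queue_pairs(2, 4));
              return 0;
      }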
      
      Cc: Hannes Frederic Sowa <hannes@redhat.com>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Neil Horman <nhorman@redhat.com>
      Cc: Jeremy Eder <jeder@redhat.com>
      Cc: Marko Myllynen <myllynen@redhat.com>
      Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Acked-by: Neil Horman <nhorman@tuxdriver.com>
      Acked-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  4. 17 Nov 2016, 1 commit
  5. 08 Nov 2016, 1 commit
  6. 29 Oct 2016, 1 commit
  7. 21 Oct 2016, 1 commit
    • net: use core MTU range checking in virt drivers · d0c2c997
      Jarod Wilson committed
      hyperv_net:
      - set min/max_mtu, per Haiyang, after rndis_filter_device_add
      
      virtio_net:
      - set min/max_mtu (see the sketch after this list)
      - remove virtnet_change_mtu
      
      vmxnet3:
      - set min/max_mtu
      
      xen-netback:
      - min_mtu = 0, max_mtu = 65517
      
      xen-netfront:
      - min_mtu = 0, max_mtu = 65535
      
      unisys/visor:
      - clean up defines a little so they don't clash with the network core
        or add redundant definitions
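
      As a toy model of the min/max_mtu pattern above (the struct and
      function names are illustrative and only loosely mirror struct
      net_device): the driver fills in the bounds once, and the core rejects
      out-of-range requests, so per-driver ndo_change_mtu validation such as
      virtnet_change_mtu can be dropped.

      #include <errno.h>
      #include <stdio.h>

      struct toy_netdev {
              unsigned int mtu;
              unsigned int min_mtu;   /* filled in by the driver at probe time */
              unsigned int max_mtu;
      };

      /* Core-side check: the driver no longer validates the range itself. */
      static int toy_set_mtu(struct toy_netdev *dev, unsigned int new_mtu)
      {
              if (new_mtu < dev->min_mtu || new_mtu > dev->max_mtu)
                      return -EINVAL;
              dev->mtu = new_mtu;
              return 0;
      }

      int main(void)
      {
              /* Example bounds only; each driver picks its own limits. */
              struct toy_netdev dev = { .mtu = 1500, .min_mtu = 68, .max_mtu = 65535 };

              printf("set 9000  -> %d\n", toy_set_mtu(&dev, 9000));   /* 0, accepted */
              printf("set 70000 -> %d\n", toy_set_mtu(&dev, 70000));  /* -EINVAL */
              return 0;
      }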
      
      CC: netdev@vger.kernel.org
      CC: virtualization@lists.linux-foundation.org
      CC: "K. Y. Srinivasan" <kys@microsoft.com>
      CC: Haiyang Zhang <haiyangz@microsoft.com>
      CC: "Michael S. Tsirkin" <mst@redhat.com>
      CC: Shrikrishna Khare <skhare@vmware.com>
      CC: "VMware, Inc." <pv-drivers@vmware.com>
      CC: Wei Liu <wei.liu2@citrix.com>
      CC: Paul Durrant <paul.durrant@citrix.com>
      CC: David Kershner <david.kershner@unisys.com>
      Signed-off-by: Jarod Wilson <jarod@redhat.com>
      Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  8. 03 Sep 2016, 1 commit
  9. 20 Jul 2016, 1 commit
  10. 14 Jun 2016, 1 commit
  11. 11 Jun 2016, 1 commit
  12. 07 Jun 2016, 1 commit
  13. 01 Jun 2016, 1 commit
  14. 17 Mar 2016, 1 commit
  15. 12 Feb 2016, 1 commit
  16. 08 Feb 2016, 1 commit
    • virtio_net: add ethtool support for set and get of settings · 16032be5
      Nikolay Aleksandrov committed
      This patch allows the user to set and retrieve the speed and duplex of
      the virtio_net device via ethtool. Having this functionality is very
      helpful for simulating different environments, and it also enables the
      virtio_net device to participate in operations where a proper speed and
      duplex are required (e.g. bonding LACP mode currently requires full
      duplex). Arbitrary speed and duplex values are not allowed; the
      user-supplied settings are validated before being applied.
      
      Example:
      $ ethtool eth1
      Settings for eth1:
      ...
      	Speed: Unknown!
      	Duplex: Unknown! (255)
      $ ethtool -s eth1 speed 1000 duplex full
      $ ethtool eth1
      Settings for eth1:
      ...
      	Speed: 1000Mb/s
      	Duplex: Full
      
      Based on a patch by Roopa Prabhu.
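
      As a standalone sketch of the validation idea (the constants and
      helpers below are illustrative, not the driver's actual code): only
      well-known speeds and half/full duplex are accepted before the
      settings are stored.

      #include <stdbool.h>
      #include <stdio.h>

      #define TOY_DUPLEX_HALF 0
      #define TOY_DUPLEX_FULL 1

      /* Accept only speeds a real link could plausibly report. */
      static bool toy_speed_is_valid(unsigned int speed)
      {
              switch (speed) {
              case 10: case 100: case 1000:
              case 2500: case 10000: case 40000:
                      return true;
              default:
                      return false;
              }
      }

      static bool toy_duplex_is_valid(int duplex)
      {
              return duplex == TOY_DUPLEX_HALF || duplex == TOY_DUPLEX_FULL;
      }

      int main(void)
      {
              printf("1000/full accepted: %d\n",
                     toy_speed_is_valid(1000) && toy_duplex_is_valid(TOY_DUPLEX_FULL));
              printf("1234/full accepted: %d\n",
                     toy_speed_is_valid(1234) && toy_duplex_is_valid(TOY_DUPLEX_FULL));
              return 0;
      }
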
      Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  17. 07 Dec 2015, 1 commit
  18. 19 Nov 2015, 2 commits
    • net: provide generic busy polling to all NAPI drivers · 93d05d4a
      Eric Dumazet committed
      NAPI drivers no longer need to observe a particular protocol to
      benefit from busy polling (CONFIG_NET_RX_BUSY_POLL=y).

      napi_hash_add() and napi_hash_del() are now called automatically by
      the core networking stack, from netif_napi_add() and netif_napi_del()
      respectively.

      This patch depends on free_netdev() and netif_napi_del() being
      called from process context, which seems to be the norm.

      Drivers might still prefer to call napi_hash_del() on their own,
      since they can combine all the RCU grace periods into a single one,
      knowing their NAPI structures' lifetime, while the core networking
      stack has no idea whether such combining is possible.

      Once this patch proves not to bring serious regressions, we will
      clean up drivers to either remove napi_hash_del() or provide
      appropriate RCU grace-period combining.
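
      A toy model of that division of labour (all names illustrative): the
      driver only registers its NAPI context, and the core's add/del helpers
      take care of the busy-poll hash as a side effect.

      #include <stdio.h>

      struct toy_napi { int id; int hashed; };

      /* "Core" side: hashing for busy polling happens here, not in drivers. */
      static void toy_napi_hash_add(struct toy_napi *n) { n->hashed = 1; }
      static void toy_napi_hash_del(struct toy_napi *n) { n->hashed = 0; }

      static void toy_netif_napi_add(struct toy_napi *n, int id)
      {
              n->id = id;
              toy_napi_hash_add(n);   /* driver gets busy polling for free */
      }

      static void toy_netif_napi_del(struct toy_napi *n)
      {
              toy_napi_hash_del(n);
      }

      int main(void)
      {
              struct toy_napi rxq = { 0, 0 };

              toy_netif_napi_add(&rxq, 1);    /* all the driver has to do */
              printf("hashed after add: %d\n", rxq.hashed);
              toy_netif_napi_del(&rxq);
              printf("hashed after del: %d\n", rxq.hashed);
              return 0;
      }
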
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: move skb_mark_napi_id() into core networking stack · 93f93a44
      Eric Dumazet committed
      We would like to automatically provide busy polling support
      to all NAPI drivers, without them having to implement anything.
      
      skb_mark_napi_id() can be called from napi_gro_receive() and
      napi_get_frags().
      
      A few drivers still call skb_mark_napi_id() because they use
      netif_receive_skb(); they should eventually call napi_gro_receive()
      instead. I will leave this to the driver maintainers.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  19. 28 Aug 2015, 1 commit
  20. 21 Aug 2015, 1 commit
  21. 07 Aug 2015, 1 commit
  22. 04 Aug 2015, 1 commit
    • virtio_net: add gro capability · 0fbd050a
      Eric Dumazet committed
      Straightforward patch to add GRO processing to virtio_net.
      
      Using napi_complete_done() allows more aggressive aggregation, which
      is opted into by setting /sys/class/net/xxx/gro_flush_timeout.
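
      A minimal user-space helper showing that opt-in (the device name and
      the 1000 ns value below are just examples): it writes the timeout to
      the sysfs attribute named above.

      #include <stdio.h>

      int main(void)
      {
              const char *path = "/sys/class/net/eth0/gro_flush_timeout";
              FILE *f = fopen(path, "w");

              if (!f) {
                      perror(path);
                      return 1;
              }
              /* Timeout in nanoseconds; 0 (the default) keeps the old behaviour. */
              fprintf(f, "1000\n");
              fclose(f);
              return 0;
      }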
      
      Tested:
      
      With /sys/class/net/xxx/gro_flush_timeout set to 1000 nsec,
      Rick Jones reported the following results.
      
      One VM of each on a pair of OpenStack compute nodes with E5-2650Lv3 CPUs
      and Intel 82599ES-based NICs. So, two "before" and two "after" VMs.
      The OpenStack compute nodes were running OpenStack Kilo, with VxLAN
      encapsulation used through OVS, so no GRO coming up the host
      stack. The compute nodes themselves were running a 3.14-based kernel.
      
      Single-stream netperf, CPU utilizations and thus service demands are
      based on intra-guest reported CPU.
      
      Throughput Mbit/s, bigger is better
              Min     Median  Average Max
      4.2.0-rc3+      1364    1686    1678    1938
      4.2.0-rc3+flush1k       1824    2269    2275    2647
      
      Send Service Demand, smaller is better
              Min     Median  Average Max
      4.2.0-rc3+      0.236   0.558   0.524   0.802
      4.2.0-rc3+flush1k       0.176   0.503   0.471   0.738
      
      Receive Service Demand, smaller is better.
              Min     Median  Average Max
      4.2.0-rc3+      1.906   2.188   2.191   2.531
      4.2.0-rc3+flush1k       0.448   0.529   0.533   0.692
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Tested-by: Rick Jones <rick.jones2@hp.com>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Acked-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  23. 21 Jul 2015, 1 commit
  24. 07 Apr 2015, 1 commit
  25. 30 Mar 2015, 1 commit
  26. 25 Mar 2015, 1 commit
  27. 13 Mar 2015, 1 commit
    • virtio-net: correctly delete napi hash · ab3971b1
      Jason Wang committed
      We don't delete the napi structures from the hash list during module
      exit. This causes the following panic when loading and unloading the
      module:
      
      BUG: unable to handle kernel paging request at 0000004e00000075
      IP: [<ffffffff816bd01b>] napi_hash_add+0x6b/0xf0
      PGD 3c5d5067 PUD 0
      Oops: 0000 [#1] SMP
      ...
      Call Trace:
      [<ffffffffa0a5bfb7>] init_vqs+0x107/0x490 [virtio_net]
      [<ffffffffa0a5c9f2>] virtnet_probe+0x562/0x791815639d880be [virtio_net]
      [<ffffffff8139e667>] virtio_dev_probe+0x137/0x200
      [<ffffffff814c7f2a>] driver_probe_device+0x7a/0x250
      [<ffffffff814c81d3>] __driver_attach+0x93/0xa0
      [<ffffffff814c8140>] ? __device_attach+0x40/0x40
      [<ffffffff814c6053>] bus_for_each_dev+0x63/0xa0
      [<ffffffff814c7a79>] driver_attach+0x19/0x20
      [<ffffffff814c76f0>] bus_add_driver+0x170/0x220
      [<ffffffffa0a60000>] ? 0xffffffffa0a60000
      [<ffffffff814c894f>] driver_register+0x5f/0xf0
      [<ffffffff8139e41b>] register_virtio_driver+0x1b/0x30
      [<ffffffffa0a60010>] virtio_net_driver_init+0x10/0x12 [virtio_net]
      
      This patch fixes this by doing the deletion in virtnet_free_queues().
      It also stops deleting napi in virtnet_freeze(), since that path calls
      virtnet_free_queues(), which already does it.
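
      A toy sketch of where the deletion ends up (function names mirror the
      driver but the bodies are purely illustrative): the hash entries are
      removed in free_queues(), and the freeze path simply reuses it instead
      of deleting them a second time.

      #include <stdio.h>

      struct toy_queue { int napi_hashed; };

      static void toy_napi_hash_del(struct toy_queue *q)
      {
              q->napi_hashed = 0;
      }

      /* Every teardown path funnels through here, so each hash entry is
       * removed exactly once. */
      static void toy_virtnet_free_queues(struct toy_queue *qs, int n)
      {
              for (int i = 0; i < n; i++)
                      toy_napi_hash_del(&qs[i]);
      }

      static void toy_virtnet_freeze(struct toy_queue *qs, int n)
      {
              /* No separate napi_hash_del() here: free_queues() already did it. */
              toy_virtnet_free_queues(qs, n);
      }

      int main(void)
      {
              struct toy_queue qs[2] = { { 1 }, { 1 } };

              toy_virtnet_freeze(qs, 2);
              printf("still hashed: %d %d\n", qs[0].napi_hashed, qs[1].napi_hashed);
              return 0;
      }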
      
      Fixes 91815639 ("virtio-net: rx busy polling support")
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Acked-by: Michael S. Tsirkin <mst@redhat.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  28. 04 Feb 2015, 1 commit
  29. 23 Jan 2015, 1 commit
  30. 21 Jan 2015, 1 commit
  31. 31 Dec 2014, 1 commit
  32. 23 Dec 2014, 1 commit
    • virtio_net: Fix napi poll list corruption · 8acdf999
      Herbert Xu committed
      Commit d75b1ade ("net: less interrupt masking in NAPI") breaks
      virtio_net in an insidious way.

      It is now required that, if the entire budget is consumed when poll
      returns, the napi poll_list must remain empty. However, like some
      other drivers, virtio_net tries to do a last-ditch check, and if
      there is more work it will call napi_schedule and then immediately
      process some of this new work. Should the entire budget be consumed
      while processing such new work, we will violate the new caller
      contract.
      
      This patch fixes this by not touching any work when we reschedule
      in virtio_net.
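
      A standalone sketch of the poll contract being honoured (names are
      illustrative, not virtio_net's actual code): when the whole budget is
      used, the poll routine returns immediately and leaves rescheduling to
      the core instead of doing any last-ditch work of its own.

      #include <stdio.h>

      struct toy_napi { int pending; };

      /* Pretend RX processing: handle up to 'budget' packets. */
      static int toy_process_rx(struct toy_napi *napi, int budget)
      {
              int done = napi->pending < budget ? napi->pending : budget;

              napi->pending -= done;
              return done;
      }

      static void toy_napi_complete(struct toy_napi *napi)
      {
              (void)napi;     /* a real driver would re-enable interrupts here */
      }

      static int toy_poll(struct toy_napi *napi, int budget)
      {
              int done = toy_process_rx(napi, budget);

              if (done < budget)
                      toy_napi_complete(napi);
              /* done == budget: return without completing, rescheduling, or
               * touching any freshly found work; the core will poll again. */
              return done;
      }

      int main(void)
      {
              struct toy_napi n = { .pending = 100 };

              printf("first poll:  %d of 64\n", toy_poll(&n, 64));
              printf("second poll: %d of 64\n", toy_poll(&n, 64));
              return 0;
      }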
      
      The worst part of this bug is that the list corruption causes other
      napi users to be moved off-list. In my case I was chasing a stall in
      IPsec (IPsec uses netif_rx), and I only belatedly realised that it
      was virtio_net that caused the stall, even though the virtio_net
      poll was still functioning perfectly after IPsec stalled.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Acked-by: Jason Wang <jasowang@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  33. 09 Dec 2014, 7 commits