1. May 21, 2013 (1 commit)
    • rps: selective flow shedding during softnet overflow · 99bbc707
      Authored by Willem de Bruijn
      A cpu executing the network receive path sheds packets when its
      input queue grows to netdev_max_backlog. A single high-rate flow
      (such as a spoofed-source DoS) can exceed a single cpu's
      processing rate and will degrade the throughput of other flows
      hashed onto the same cpu.
      
      This patch adds a more fine-grained hashtable. If the netdev
      backlog is above a threshold, IRQ cpus track the ratio of total
      traffic of each flow (using 4096 buckets, configurable). The
      ratio is measured by counting the number of packets per flow over
      the last 256 packets from the source cpu. Any flow that occupies
      a large fraction of this window (set at 50%) will see packet
      drops while the backlog is above the threshold.
      
      Tested:
      Setup is a multi-threaded UDP echo server with network rx IRQ on cpu0,
      kernel receive (RPS) on cpu0 and application threads on cpus 2--7
      each handling 20k req/s. Throughput halves when hit with a 400 kpps
      antagonist storm. With this patch applied, antagonist overload is
      dropped and the server processes its complete load.
      
      The patch is effective when kernel receive processing is the
      bottleneck. The above RPS scenario is an extreme one, but the
      same situation is reached with RFS and sufficient kernel
      processing (iptables, packet socket tap, ..).
      Signed-off-by: Willem de Bruijn <willemb@google.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
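
      A toy, userspace C analogue of the accounting described above: the
      bucket count, window length, and 50% threshold mirror the commit
      message, but all names and the data layout are illustrative, not
      the kernel's implementation.

          /* flow_limit_toy.c: sliding-window per-flow accounting sketch */
          #include <stdbool.h>
          #include <stdint.h>

          #define FLOW_BUCKETS 4096   /* configurable in the real patch */
          #define WINDOW        256   /* packets per measurement window */

          static uint16_t history[WINDOW];      /* bucket of each recent packet */
          static uint16_t count[FLOW_BUCKETS];  /* per-bucket packets in window */
          static unsigned pos, filled;

          /* true if the packet should be dropped: the backlog is over the
           * threshold and this flow fills more than half the recent window */
          static bool should_drop(uint32_t flow_hash, bool backlogged)
          {
                  unsigned bucket = flow_hash % FLOW_BUCKETS;

                  if (filled == WINDOW)
                          count[history[pos]]--;  /* age out the oldest packet */
                  else
                          filled++;
                  history[pos] = (uint16_t)bucket;
                  count[bucket]++;
                  pos = (pos + 1) % WINDOW;

                  return backlogged && count[bucket] > WINDOW / 2;
          }
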
  2. May 17, 2013 (1 commit)
    • bonding: allow TSO being set on bonding master · b0ce3508
      Authored by Eric Dumazet
      In some situations, we need to disable TSO on bonding slaves.
      
      The bonding device automatically unsets TSO in
      bond_fix_features(), and performance is not good because:
      
      1) We consume more cpu cycles.
      
      2) GSO segmentation has some bugs leading to out-of-order TCP
      packets if this segmentation is done before the virtual device.
      This particular problem will be addressed in a separate patch.
      
      This patch allows TSO to be set/unset on the bonding master, so
      that GSO segmentation is done after the bonding layer.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Michał Mirosław <mirqus@gmail.com>
      Cc: Jay Vosburgh <fubar@us.ibm.com>
      Cc: Andy Gospodarek <andy@greyhouse.net>
      Cc: Maciej Żenczykowski <maze@google.com>
      Cc: Tom Herbert <therbert@google.com>
      Cc: Neal Cardwell <ncardwell@google.com>
      Cc: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
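
      Conceptually, the change exposes TSO among the master's togglable
      features. A hedged, kernel-context sketch of that shape (not the
      actual bonding code; it will not compile outside the kernel tree):

          /* advertise TSO as a feature ethtool may toggle on the master,
           * e.g. "ethtool -K bond0 tso off"; with TSO off on the master,
           * GSO segments after the bonding layer rather than per slave */
          static void toy_bond_setup_features(struct net_device *bond_dev)
          {
                  bond_dev->hw_features |= NETIF_F_TSO | NETIF_F_TSO6;
                  bond_dev->features    |= bond_dev->hw_features;
          }
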
  3. May 6, 2013 (1 commit)
  4. April 20, 2013 (2 commits)
  5. April 16, 2013 (1 commit)
    • net: add dev_uc_sync_multiple() and dev_mc_sync_multiple() api · 4cd729b0
      Authored by Vlad Yasevich
      The current implementation of dev_uc_sync/unsync() assumes that there is
      a strict 1-to-1 relationship between the source and destination of the sync.
      In other words, once an address has been synced to a destination device, it
      will not be synced to any other device through the sync API.
      However, there are some virtual devices that aggregate a number
      of lower devices and need to sync addresses to all of them.  The
      current API falls short there.
      
      This patch introduces a new dev_uc_sync_multiple() api that can be called
      in the above circumstances and allows sync to work for every invocation.
      
      CC: Jiri Pirko <jiri@resnulli.us>
      Signed-off-by: Vlad Yasevich <vyasevic@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
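
      A sketch of how an aggregating driver might call the new API from
      its rx-mode handler. dev_uc_sync_multiple()/dev_mc_sync_multiple()
      are the calls this commit adds; the lower-device iteration helper
      is illustrative, not a real kernel macro.

          /* sync the master's address lists down to every lower device;
           * the *_multiple() variants let the same address be synced to
           * more than one destination, unlike dev_uc_sync() */
          static void toy_set_rx_mode(struct net_device *master)
          {
                  struct net_device *lower;

                  for_each_lower_dev(master, lower) {   /* illustrative */
                          dev_uc_sync_multiple(lower, master);
                          dev_mc_sync_multiple(lower, master);
                  }
          }
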
  6. April 5, 2013 (1 commit)
    • net: count hw_addr syncs so that unsync works properly. · 4543fbef
      Authored by Vlad Yasevich
      A few drivers use dev_uc_sync/unsync to synchronize the
      address lists from master down to slave/lower devices.  In
      some cases (bond/team) a single address list is synced down
      to multiple devices.  At the time of unsync, we have a leak
      in these lower devices, because "synced" is treated as a
      boolean and the address will not be unsynced for anything after
      the first device/call.
      
      Treat "synced" as a count (same as refcount) and allow all
      unsync calls to work.
      Signed-off-by: Vlad Yasevich <vyasevic@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
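
      A compilable toy of the fix's idea: treat "synced" as a reference
      count, so each unsync call releases exactly one lower device. The
      names are illustrative, not the kernel's.

          #include <stdbool.h>

          struct toy_hw_addr {
                  int sync_cnt;  /* lower devices this address is synced to */
          };

          static void toy_sync(struct toy_hw_addr *ha)
          {
                  ha->sync_cnt++;              /* one more lower device */
          }

          /* true when the last reference went away, i.e. the address may
           * now really be removed from the lower device's list */
          static bool toy_unsync(struct toy_hw_addr *ha)
          {
                  return --ha->sync_cnt == 0;
          }
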
  7. March 31, 2013 (1 commit)
    • net: reorder some fields of net_device · 4c3d5e7b
      Authored by Eric Dumazet
      As time passed, some fields were added to net_device, and not at
      sensible offsets.

      Let's reorder some fields to reduce the number of cache lines
      touched in the RX path. Fields not used in the data path should
      be moved out of this critical cache line.

      In particular, move broadcast[] to the end of the rx section, as
      it is less used, and ethernet uses only the beginning of the
      32-byte field.
      
      Before the patch:
      
      offsetof(struct net_device,dev_addr)=0x258
      offsetof(struct net_device,rx_handler)=0x2b8
      offsetof(struct net_device,ingress_queue)=0x2c8
      offsetof(struct net_device,broadcast)=0x278
      
      After the patch:
      
      offsetof(struct net_device,dev_addr)=0x280
      offsetof(struct net_device,rx_handler)=0x298
      offsetof(struct net_device,ingress_queue)=0x2a8
      offsetof(struct net_device,broadcast)=0x2b0
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
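
      The audit behind those numbers is plain offsetof(). A
      self-contained toy showing the technique, with an illustrative
      struct standing in for net_device:

          #include <stddef.h>
          #include <stdio.h>

          struct toy_dev {
                  /* hot RX fields kept adjacent to share cache lines */
                  void *rx_handler;
                  void *rx_handler_data;
                  void *ingress_queue;
                  /* less-used field pushed to the end of the rx section */
                  unsigned char broadcast[32];
          };

          int main(void)
          {
                  printf("rx_handler    at 0x%zx\n",
                         offsetof(struct toy_dev, rx_handler));
                  printf("ingress_queue at 0x%zx\n",
                         offsetof(struct toy_dev, ingress_queue));
                  printf("broadcast     at 0x%zx\n",
                         offsetof(struct toy_dev, broadcast));
                  return 0;
          }
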
  8. March 28, 2013 (2 commits)
  9. March 15, 2013 (2 commits)
  10. March 10, 2013 (1 commit)
  11. March 6, 2013 (1 commit)
  12. February 20, 2013 (1 commit)
  13. February 19, 2013 (1 commit)
  14. February 16, 2013 (1 commit)
  15. February 14, 2013 (3 commits)
  16. February 11, 2013 (1 commit)
  17. February 7, 2013 (1 commit)
    • net: adjust skb_gso_segment() for calling in rx path · 12b0004d
      Authored by Cong Wang
      skb_gso_segment() is almost always called in the tx path, except
      for openvswitch, which calls this function when it receives a
      packet and tries to queue it to user space. In this special
      case, the ->ip_summed check inside skb_gso_segment() is no
      longer valid, as the ->ip_summed value has different meanings on
      the rx path.

      This patch adjusts skb_gso_segment() so that we can at least
      avoid such checksum warnings.
      
      Cc: Jesse Gross <jesse@nicira.com>
      Cc: David S. Miller <davem@davemloft.net>
      Signed-off-by: Cong Wang <amwang@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
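
      A minimal, compilable toy of the idea: let callers say whether
      they are on the tx path and apply the tx-only checksum sanity
      check conditionally. All types and names here are stand-ins, not
      the kernel's.

          #include <stdbool.h>

          enum ip_summed { CHECKSUM_NONE, CHECKSUM_PARTIAL };

          struct pkt { enum ip_summed ip_summed; };

          static void warn_bad_offload(struct pkt *p) { (void)p; }

          /* on rx (e.g. openvswitch queuing to user space), ip_summed has
           * different semantics, so the tx-only warning must be skipped */
          static void toy_gso_segment(struct pkt *p, bool tx_path)
          {
                  if (tx_path && p->ip_summed != CHECKSUM_PARTIAL)
                          warn_bad_offload(p);
                  /* ... perform software segmentation ... */
          }
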
  18. January 29, 2013 (1 commit)
    • netpoll: add RCU annotation to npinfo field · 5fbee843
      Authored by Cong Wang
      dev->npinfo is protected by RCU.
      
      This fixes the following sparse warnings:
      
      net/core/netpoll.c:177:48: error: incompatible types in comparison expression (different address spaces)
      net/core/netpoll.c:200:35: error: incompatible types in comparison expression (different address spaces)
      net/core/netpoll.c:221:35: error: incompatible types in comparison expression (different address spaces)
      net/core/netpoll.c:327:18: error: incompatible types in comparison expression (different address spaces)
      
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: David S. Miller <davem@davemloft.net>
      Signed-off-by: Cong Wang <amwang@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
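
      The annotation pattern in miniature. The __rcu fallback below
      mirrors what the kernel's compiler headers of that era did under
      sparse (__CHECKER__); the struct is a stand-in for net_device.

          #ifdef __CHECKER__   /* defined when sparse is running */
          # define __rcu __attribute__((noderef, address_space(4)))
          #else
          # define __rcu
          #endif

          struct netpoll_info;

          struct toy_dev {
                  struct netpoll_info __rcu *npinfo;  /* RCU-protected */
          };

          /* readers use rcu_dereference(dev->npinfo) inside an RCU read
           * section and writers use rcu_assign_pointer(); sparse now flags
           * any plain dereference of the annotated pointer */
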
  19. January 12, 2013 (1 commit)
  20. January 11, 2013 (3 commits)
  21. January 5, 2013 (3 commits)
  22. January 4, 2013 (1 commit)
  23. December 30, 2012 (1 commit)
  24. December 29, 2012 (1 commit)
  25. December 22, 2012 (1 commit)
  26. December 9, 2012 (1 commit)
  27. December 8, 2012 (1 commit)
    • net: gro: fix possible panic in skb_gro_receive() · c3c7c254
      Authored by Eric Dumazet
      commit 2e71a6f8 (net: gro: selective flush of packets) added a
      bug for skbs using frag_list. This part of the GRO stack is
      rarely used, as it requires skbs that do not use a page fragment
      for their skb->head.

      Most drivers do use a page fragment, but some of them use
      GFP_KERNEL allocations for the initial fill of their RX ring
      buffer.

      napi_gro_flush() overwrites skb->prev, which these skbs used to
      point to the last skb in the frag_list.
      
      Fix this using a separate field in struct napi_gro_cb to point to the
      last fragment.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
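
      The shape of the fix, sketched: keep a dedicated tail pointer in
      the per-packet GRO state instead of borrowing skb->prev. The
      struct below is a simplified stand-in for napi_gro_cb.

          struct sk_buff;   /* opaque here */

          struct toy_gro_cb {
                  /* ... other per-packet GRO state ... */
                  struct sk_buff *last;  /* tail of the frag_list chain;
                                            kept here because
                                            napi_gro_flush() may overwrite
                                            skb->prev */
          };

          /* appending to the chain then becomes, roughly:
           *     cb(p)->last->next = skb;
           *     cb(p)->last = skb;
           */
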
  28. November 30, 2012 (1 commit)
  29. November 27, 2012 (1 commit)
    • sockopt: Change getsockopt() of SO_BINDTODEVICE to return an interface name · c91f6df2
      Authored by Brian Haley
      Instead of having the getsockopt() of SO_BINDTODEVICE return an index, which
      will then require another call like if_indextoname() to get the actual interface
      name, have it return the name directly.
      
      This also matches the existing man page description in
      socket(7), which describes the argument as an interface name.
      
      If the value has not been set, zero is returned and optlen will be set to zero
      to indicate there is no interface name present.
      
      Added a seqlock to protect this code path, and dev_ifname(), from someone
      changing the device name via dev_change_name().
      
      v2: Added seqlock protection while copying device name.
      
      v3: Fixed word wrap in patch.
      Signed-off-by: Brian Haley <brian.haley@hp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
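
      A runnable userspace illustration of the changed semantics. It
      assumes an interface named "eth0" exists and that the process has
      CAP_NET_RAW for the setsockopt() call.

          #include <net/if.h>
          #include <stdio.h>
          #include <string.h>
          #include <sys/socket.h>
          #include <unistd.h>

          int main(void)
          {
                  int fd = socket(AF_INET, SOCK_DGRAM, 0);
                  char name[IFNAMSIZ] = "eth0";     /* assumed to exist */
                  char buf[IFNAMSIZ];
                  socklen_t len = sizeof(buf);

                  setsockopt(fd, SOL_SOCKET, SO_BINDTODEVICE,
                             name, strlen(name) + 1);

                  /* after this change the name itself comes back;
                   * optlen == 0 would mean no device is bound */
                  if (getsockopt(fd, SOL_SOCKET, SO_BINDTODEVICE,
                                 buf, &len) == 0)
                          printf("bound to \"%.*s\" (optlen=%u)\n",
                                 (int)len, buf, (unsigned)len);

                  close(fd);
                  return 0;
          }
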
  30. November 19, 2012 (1 commit)
  31. November 16, 2012 (1 commit)