1. 10 May 2010 (8 commits)
  2. 08 May 2010 (1 commit)
  3. 07 May 2010 (2 commits)
    •
      rps: Various optimizations · eecfd7c4
      Eric Dumazet committed
      Introduce a ____napi_schedule() helper for callers in IRQ-disabled
      contexts. rps_trigger_softirq() becomes a leaf function.
      
      Use container_of() in process_backlog() instead of computing the
      per_cpu address.
      
      Use a custom inlined version of __napi_complete() in
      process_backlog() to avoid one locked instruction:
      
       Only the current cpu owns and manipulates this napi, and
       NAPI_STATE_SCHED is the only possible flag set on the backlog.
       We can therefore use a plain write instead of clear_bit(), and we
       don't need an smp_mb() memory barrier: with RPS on, the backlog
       is protected by a spinlock.
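      For illustration, a minimal sketch of these changes, reconstructed
      from the description above (the surrounding code in net/core/dev.c
      is simplified here, not quoted verbatim):
      
          /* Helper for callers that already run with IRQs disabled;
           * rps_trigger_softirq() can now simply tail-call it. */
          static inline void ____napi_schedule(struct softnet_data *sd,
                                               struct napi_struct *napi)
          {
                  list_add_tail(&napi->poll_list, &sd->poll_list);
                  __raise_softirq_irqoff(NET_RX_SOFTIRQ);
          }
      
          /* In process_backlog(): recover softnet_data from the napi ... */
          struct softnet_data *sd = container_of(napi, struct softnet_data,
                                                 backlog);
      
          /* ... and complete the napi with a plain write instead of the
           * locked clear_bit() in __napi_complete(); no smp_mb() is
           * needed because the backlog is spinlock-protected with RPS on. */
          list_del(&napi->poll_list);
          napi->state = 0;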
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      eecfd7c4
    •
      ipv6: udp: make short packet logging consistent with ipv4 · d6bc0149
      Bjørn Mork committed
      Adding addresses and ports to the short packet log message, as
      ipv4/udp.c does, makes these messages a lot more useful:
      
      [  822.182450] UDPv6: short packet: From [2001:db8:ffb4:3::1]:47839 23715/178 to [2001:db8:ffb4:3:5054:ff:feff:200]:1234
      
      This requires us to drop logging in case pskb_may_pull() fails,
      which is also consistent with ipv4/udp.c.
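      A sketch of the resulting log statement (variable names are
      illustrative, not a quote of net/ipv6/udp.c):
      
          LIMIT_NETDEBUG(KERN_DEBUG
                  "UDPv6: short packet: From [%pI6c]:%u %d/%d to [%pI6c]:%u\n",
                  saddr, ntohs(uh->source), ulen, skb->len,
                  daddr, ntohs(uh->dest));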
      Signed-off-by: Bjørn Mork <bjorn@mork.no>
      Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      d6bc0149
  4. 06 May 2010 (3 commits)
    •
      bridge: make bridge support netpoll · c06ee961
      WANG Cong committed
      Based on the previous patch, make the bridge support netpoll by:
      
      1) implementing the two netdev_ops methods needed for netpoll
         support (see the sketch after this list);
      
      2) adjusting netpoll handling when forwarding packets via the
         bridge;
      
      3) disabling netpoll support for the bridge when a device that
         does not support netpoll is added to the bridge;
      
      4) enabling netpoll support when all underlying devices support
         netpoll.
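      A hedged sketch of how the two hooks from 1) plug into the bridge's
      netdev_ops (the handler names br_poll_controller and
      br_netpoll_cleanup follow bridge naming conventions and are
      assumptions, not quotes of the patch):
      
          static const struct net_device_ops br_netdev_ops = {
                  /* ... existing bridge ops ... */
          #ifdef CONFIG_NET_POLL_CONTROLLER
                  .ndo_poll_controller = br_poll_controller,  /* assumed name */
                  .ndo_netpoll_cleanup = br_netpoll_cleanup,  /* assumed name */
          #endif
          };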
      
      Cc: David Miller <davem@davemloft.net>
      Cc: Neil Horman <nhorman@tuxdriver.com>
      Cc: Stephen Hemminger <shemminger@linux-foundation.org>
      Cc: Matt Mackall <mpm@selenic.com>
      Signed-off-by: WANG Cong <amwang@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c06ee961
    •
      netpoll: add generic support for bridge and bonding devices · 0e34e931
      WANG Cong committed
      This whole patchset adds netpoll support to bridge and bonding
      devices. I have tested it with bridge, bonding, bridge over
      bonding, and bonding over bridge; it all looks fine now.
      
      To make bridge and bonding support netpoll, we need to adjust
      some generic netpoll code. This patch does the following things:
      
      1) introduce two new priv_flags for struct net_device:
         IFF_IN_NETPOLL, which indicates that we are in the middle of
         processing netpoll; IFF_DISABLE_NETPOLL, which disables netpoll
         support for a device at run-time (see the sketch after this
         list);
      
      2) introduce one new method for netdev_ops:
         ->ndo_netpoll_cleanup(), used to clean up netpoll state when a
           device is removed;
      
      3) introduce netpoll_poll_dev(), which takes a struct net_device *
         parameter; export netpoll_send_skb() and netpoll_poll_dev(),
         which will be used later;
      
      4) stash a pointer to struct netpoll in struct netpoll_info, also
         for later use;
      
      5) introduce ->real_dev in struct netpoll;
      
      6) introduce a new notifier event, NETDEV_BONDING_DESLAVE, used to
         disable netconsole before releasing a slave, to avoid deadlocks.
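      A sketch of the additions from 1) and 2) (flag values are
      illustrative; the cleanup hook sits alongside the existing
      ->ndo_poll_controller):
      
          /* 1) new priv_flags for struct net_device */
          #define IFF_IN_NETPOLL       0x1000  /* processing netpoll right now */
          #define IFF_DISABLE_NETPOLL  0x2000  /* netpoll disabled at run-time */
      
          /* 2) new netdev_ops method, called when the device is removed */
          struct net_device_ops {
                  /* ... */
          #ifdef CONFIG_NET_POLL_CONTROLLER
                  void (*ndo_poll_controller)(struct net_device *dev);
                  void (*ndo_netpoll_cleanup)(struct net_device *dev);
          #endif
          };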
      
      Cc: David Miller <davem@davemloft.net>
      Cc: Neil Horman <nhorman@tuxdriver.com>
      Signed-off-by: WANG Cong <amwang@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0e34e931
    •
      mac80211: use fixed channel in ibss join when appropriate · adfba3c7
      Johannes Berg committed
      "mac80211: improve IBSS scanning" was missing a hunk.
      This adds that hunk as originally intended.
      Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
      Signed-off-by: John W. Linville <linville@tuxdriver.com>
      adfba3c7
  5. 05 May 2010 (1 commit)
    •
      net: __alloc_skb() speedup · ec7d2f2c
      Eric Dumazet committed
      With the following patch I can reach the maximum rate of my
      pktgen+udpsink simulator:
      - 'old' machine: dual quad-core E5450 @ 3.00GHz
      - 64 UDP rx flows (differing only by destination port)
      - RPS enabled, NIC interrupts serviced on cpu0
      - rps dispatched on 7 other cores (~130,000 IPIs per second)
      - SLAB allocator (faster than SLUB in this workload)
      - tg3 NIC
      - 1,080,000 pps without a single drop at NIC level.
      
      The idea is to add two prefetchw() calls in __alloc_skb(): one to
      prefetch the first sk_buff cache line, the second to prefetch the
      shinfo part.
      
      Also use a single memset() to initialize all skb_shared_info
      fields instead of clearing them one by one, reducing the
      instruction count by using long-word moves.
      
      All skb_shared_info fields before 'dataref' are cleared in
      __alloc_skb().
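      A simplified sketch of the resulting hot path in __alloc_skb()
      (error handling and unrelated initialization elided):
      
          skb = kmem_cache_alloc_node(cache, gfp_mask & ~__GFP_DMA, node);
          if (!skb)
                  goto out;
          prefetchw(skb);                 /* first sk_buff cache line */
      
          size = SKB_DATA_ALIGN(size);
          data = kmalloc_node_track_caller(size + sizeof(struct skb_shared_info),
                                           gfp_mask, node);
          if (!data)
                  goto nodata;
          prefetchw(data + size);         /* the shinfo part */
      
          /* ... fill in sk_buff fields ... */
      
          /* one long-word memset clears every skb_shared_info field
           * that precedes 'dataref', instead of one store per field */
          shinfo = skb_shinfo(skb);
          memset(shinfo, 0, offsetof(struct skb_shared_info, dataref));
          atomic_set(&shinfo->dataref, 1);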
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ec7d2f2c
  6. 04 May 2010 (6 commits)
  7. 03 May 2010 (1 commit)
    •
      net: fix softnet_stat · dee42870
      Changli Gao committed
      The per-cpu variable softnet_data.total was shared between IRQ and
      softirq context without any protection, and enqueue_to_backlog()
      should update the netdev_rx_stat of the target CPU.
      
      This patch renames softnet_data.total to softnet_data.processed:
      the number of packets processed in the upper layers (IP stacks).
      
      The softnet_stat data is moved into softnet_data.
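      A sketch of the resulting layout (simplified; the member set is
      partly assumed from the old netdev_rx_stat counters):
      
          struct softnet_data {
                  /* ... output queue, poll list, completion queue ... */
                  unsigned int            processed;    /* was 'total' */
                  unsigned int            time_squeeze;
                  unsigned int            cpu_collision;
                  unsigned int            received_rps;
                  unsigned int            dropped;
                  struct sk_buff_head     input_pkt_queue;
                  struct napi_struct      backlog;
          };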
      Signed-off-by: Changli Gao <xiaosuo@gmail.com>
      ----
       include/linux/netdevice.h |   17 +++++++----------
       net/core/dev.c            |   26 ++++++++++++--------------
       net/sched/sch_generic.c   |    2 +-
       3 files changed, 20 insertions(+), 25 deletions(-)
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      dee42870
  8. 02 May 2010 (2 commits)
    •
      net: Inline skb_pull() in eth_type_trans(). · 47d29646
      David S. Miller committed
      In commit 6be8ac2f ("[NET]: uninline skb_pull, de-bloats a lot")
      we uninlined skb_pull.
      
      But in some critical paths it makes sense to inline it, and doing
      so helps performance significantly.
      
      Create an skb_pull_inline() so that we can do this in a way that
      also serves as an annotation.
      
      Based upon a patch by Eric Dumazet.
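      A sketch of the helper and its use (skb_pull() itself becomes a
      thin out-of-line wrapper, so existing callers are unaffected):
      
          static inline unsigned char *skb_pull_inline(struct sk_buff *skb,
                                                       unsigned int len)
          {
                  return unlikely(len > skb->len) ? NULL : __skb_pull(skb, len);
          }
      
          /* out-of-line version, kept for non-critical call sites */
          unsigned char *skb_pull(struct sk_buff *skb, unsigned int len)
          {
                  return skb_pull_inline(skb, len);
          }
      
          /* in eth_type_trans(), the critical path uses the inline form */
          skb_pull_inline(skb, ETH_HLEN);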
      Signed-off-by: David S. Miller <davem@davemloft.net>
      47d29646
    •
      net: sock_def_readable() and friends RCU conversion · 43815482
      Eric Dumazet committed
      The sk_callback_lock rwlock actually protects the sk->sk_sleep
      pointer, so we need two atomic operations (and the associated
      cache-line dirtying) per incoming packet.
      
      An RCU conversion is pretty much needed:
      
      1) Add a new structure, "struct socket_wq", to hold all fields
      that need rcu_read_lock() protection (currently: a
      wait_queue_head_t and a struct fasync_struct pointer).
      
      [Future patch will add a list anchor for wakeup coalescing]
      
      2) Attach one such structure to each "struct socket" created in
      sock_alloc_inode().
      
      3) Respect the RCU grace period when freeing a "struct socket_wq".
      
      4) Replace the sk_sleep pointer in "struct sock" with sk_wq, a
      pointer to "struct socket_wq".
      
      5) Change the sk_sleep() function to use the new sk->sk_wq instead
      of sk->sk_sleep.
      
      6) Change sk_has_sleeper() to wq_has_sleeper(), which must be used
      inside an rcu_read_lock() section.
      
      7) Change all sk_has_sleeper() callers (sketched below) to:
        - use rcu_read_lock() instead of read_lock(&sk->sk_callback_lock);
        - use wq_has_sleeper() to decide whether any tasks need waking;
        - use rcu_read_unlock() instead of read_unlock(&sk->sk_callback_lock).
      
      8) Modify sock_wake_async() to use RCU protection as well.
      
      9) Exceptions:
        macvtap, drivers/net/tun.c, and af_unix use an embedded "struct
      socket_wq" instead of a dynamically allocated one; they don't need
      RCU freeing.
      
      Some cleanups or follow-ups are probably needed (for example, a
      possible conversion of sk_callback_lock to a spinlock...).
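      A sketch of the new structure and of the caller pattern from 7),
      using sock_def_readable() as the example (simplified from the
      description above):
      
          struct socket_wq {
                  wait_queue_head_t       wait;
                  struct fasync_struct    *fasync_list;
                  struct rcu_head         rcu;    /* deferred freeing, see 3) */
          };
      
          static void sock_def_readable(struct sock *sk, int len)
          {
                  struct socket_wq *wq;
      
                  rcu_read_lock();        /* was read_lock(&sk->sk_callback_lock) */
                  wq = rcu_dereference(sk->sk_wq);
                  if (wq_has_sleeper(wq)) /* barrier-safe sleeper check */
                          wake_up_interruptible_sync_poll(&wq->wait,
                                          POLLIN | POLLRDNORM | POLLRDBAND);
                  sk_wake_async(sk, SOCK_WAKE_WAITD, POLL_IN);
                  rcu_read_unlock();      /* was read_unlock(...) */
          }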
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      43815482
  9. 01 May 2010 (16 commits)