1. 21 Apr, 2010 (1 commit)
  2. 17 Apr, 2010 (1 commit)
    • rfs: Receive Flow Steering · fec5e652
      Authored by Tom Herbert
      This patch implements receive flow steering (RFS).  RFS steers
      received packets for layer 3 and 4 processing to the CPU where
      the application for the corresponding flow is running.  RFS is an
      extension of Receive Packet Steering (RPS).
      
      The basic idea of RFS is that when an application calls recvmsg
      (or sendmsg) the application's running CPU is stored in a hash
      table that is indexed by the connection's rxhash which is stored in
      the socket structure.  The rxhash is passed in skbs received on
      the connection from netif_receive_skb.  For each received packet,
      the associated rxhash is used to look up the CPU in the hash table;
      if a valid CPU is set, the packet is steered to that CPU using
      the RPS mechanisms.
      
      The complication with this simple approach is that it could allow
      out-of-order (OOO) packets.  If threads are thrashing around CPUs or
      multiple threads are trying to read from the same sockets, a quickly
      changing CPU value in the hash table could cause rampant OOO packets;
      we consider this a non-starter.
      
      To avoid OOO packets, this solution implements two types of hash
      tables: rps_sock_flow_table and rps_dev_flow_table.
      
      rps_sock_flow_table is a global hash table.  Each entry is just a CPU
      number, and it is populated in recvmsg and sendmsg as described above.
      This table contains the "desired" CPUs for flows.
      
      rps_dev_flow_table is specific to each device queue.  Each entry
      contains a CPU and a tail queue counter.  The CPU is the "current"
      CPU for a matching flow.  The tail queue counter holds the value of
      the associated CPU's backlog tail counter at the time of the last
      enqueue for a flow matching the entry.
      
      Each backlog queue has a queue head counter which is incremented
      on dequeue, and so a queue tail counter is computed as queue head
      count + queue length.  When a packet is enqueued on a backlog queue,
      the current value of the queue tail counter is saved in the hash
      entry of the rps_dev_flow_table.
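
      A simplified sketch of the two table entries described above
      (approximate field names, not the exact kernel layout):

        #include <stdint.h>

        #define RPS_NO_CPU 0xffff

        /* Global table: one "desired" CPU per flow, indexed by rxhash & mask.
         * Populated from recvmsg()/sendmsg(). */
        struct rps_sock_flow_table {
                unsigned int mask;      /* table size minus one (size is a power of two) */
                uint16_t     ents[];    /* ents[rxhash & mask] = CPU the application last ran on */
        };

        /* Per-RX-queue table: the "current" CPU for a flow plus the backlog
         * tail counter recorded at the last enqueue for this flow. */
        struct rps_dev_flow {
                uint16_t     cpu;         /* current CPU, or RPS_NO_CPU if unset */
                unsigned int last_qtail;  /* backlog tail counter at last enqueue */
        };

        struct rps_dev_flow_table {
                unsigned int        mask;
                struct rps_dev_flow flows[];
        };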
      
      And now the trick: when selecting the CPU for RPS (get_rps_cpu)
      the rps_sock_flow table and the rps_dev_flow table for the RX queue
      are consulted.  When the desired CPU for the flow (found in the
      rps_sock_flow table) does not match the current CPU (found in the
      rps_dev_flow table), the current CPU is changed to the desired CPU
      if one of the following is true:
      
      - The current CPU is unset (equal to RPS_NO_CPU)
      - Current CPU is offline
      - The current CPU's queue head counter >= queue tail counter in the
      rps_dev_flow table.  This checks if the queue tail has advanced
      beyond the last packet that was enqueued using this table entry.
      This guarantees that all packets queued using this entry have been
      dequeued, thus preserving in-order delivery.
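
      A minimal sketch of that selection step, assuming the simplified
      structures sketched above plus hypothetical helpers cpu_is_online()
      and backlog_head() for the per-CPU head counter (this is not the
      actual get_rps_cpu() code):

        #include <stdbool.h>
        #include <stdint.h>

        /* Hypothetical helpers: whether a CPU is online, and the head counter
         * of a CPU's backlog queue (incremented on every dequeue). */
        bool cpu_is_online(uint16_t cpu);
        unsigned int backlog_head(uint16_t cpu);

        /* Pick the CPU that should process a packet with the given rxhash. */
        static uint16_t select_rfs_cpu(const struct rps_sock_flow_table *sock_tbl,
                                       struct rps_dev_flow_table *dev_tbl,
                                       uint32_t rxhash)
        {
                uint16_t desired = sock_tbl->ents[rxhash & sock_tbl->mask];
                struct rps_dev_flow *flow = &dev_tbl->flows[rxhash & dev_tbl->mask];
                uint16_t cur = flow->cpu;

                if (desired == RPS_NO_CPU)
                        return cur;             /* no application hint recorded yet */

                /* Move the flow to the desired CPU only when doing so cannot
                 * reorder packets: the current CPU is unset, offline, or has
                 * already dequeued everything enqueued through this entry. */
                if (cur == RPS_NO_CPU ||
                    !cpu_is_online(cur) ||
                    (int)(backlog_head(cur) - flow->last_qtail) >= 0)
                        flow->cpu = cur = desired;

                return cur;
        }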
      
      Making each queue have its own rps_dev_flow table has two advantages:
      1) the tail queue counters will be written on each receive, so
      keeping the table local to the interrupting CPU is good for locality.  2)
      it allows lockless access to the table: the CPU number and queue
      tail counter need to be accessed together, and netif_receive_skb
      provides the mutual exclusion since we assume it is only called from
      the device's napi_poll, which is non-reentrant.
      
      This patch implements RFS for TCP and connected UDP sockets.
      It should be usable for other flow oriented protocols.
      
      There are two configuration parameters for RFS.  The
      "rps_flow_entries" kernel init parameter sets the number of
      entries in the rps_sock_flow_table, the per rxqueue sysfs entry
      "rps_flow_cnt" contains the number of entries in the rps_dev_flow
      table for the rxqueue.  Both are rounded up to a power of two.
      
      The obvious benefit of RFS (over just RPS) is that it achieves
      CPU locality between the receive processing for a flow and the
      application's processing; this can result in increased performance
      (higher pps, lower latency).
      
      The benefits of RFS are dependent on cache hierarchy, application
      load, and other factors.  On simple benchmarks, we don't necessarily
      see improvement and sometimes see degradation.  However, for more
      complex benchmarks and for applications where cache pressure is
      much higher this technique seems to perform very well.
      
      Below are some benchmark results which show the potential benefit of
      this patch.  The netperf test has 500 instances of the netperf TCP_RR
      test with 1 byte req. and resp.  The RPC test is a request/response
      test similar in structure to the netperf RR test, with 100 threads on
      each host, but it does more work in userspace than netperf.
      
      e1000e on 8 core Intel
         No RFS or RPS:              104K tps at 30% CPU
         No RFS (best RPS config):   290K tps at 63% CPU
         RFS:                        303K tps at 61% CPU
      
      RPC test      tps     CPU%    50/90/99% usec latency    Latency StdDev
        No RFS/RPS  103K    48%     757/900/3185              4472.35
        RPS only    174K    73%     415/993/2468              491.66
        RFS         223K    73%     379/651/1382              315.61
      Signed-off-by: Tom Herbert <therbert@google.com>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  3. 16 Apr, 2010 (1 commit)
  4. 15 Apr, 2010 (3 commits)
  5. 14 Apr, 2010 (8 commits)
  6. 13 Apr, 2010 (1 commit)
    • net: sk_dst_cache RCUification · b6c6712a
      Authored by Eric Dumazet
      With the latest CONFIG_PROVE_RCU stuff, I felt comfortable enough to
      make this work.
      
      sk->sk_dst_cache is currently protected by an rwlock (sk_dst_lock).
      
      This rwlock is readlocked for a very small amount of time, and dst
      entries are already freed after RCU grace period. This calls for RCU
      again :)
      
      This patch converts sk_dst_lock to a spinlock, and uses RCU for readers.
      
      __sk_dst_get() is supposed to be called with rcu_read_lock() held or
      with the socket locked by the user, so use the appropriate
      rcu_dereference_check() condition (rcu_read_lock_held() ||
      sock_owned_by_user(sk)).
      
      This patch avoids two atomic ops per tx packet on UDP connected sockets,
      for example, and permits sk_dst_lock to be much less dirtied.
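
      The reader side described above boils down to roughly the following
      kernel-style helper (a sketch of the accessor, not the full diff):

        /* Readers must hold rcu_read_lock() or own the socket lock; the
         * rcu_dereference_check() condition documents and verifies that. */
        static inline struct dst_entry *__sk_dst_get(struct sock *sk)
        {
                return rcu_dereference_check(sk->sk_dst_cache,
                                             rcu_read_lock_held() ||
                                             sock_owned_by_user(sk));
        }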
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  7. 12 Apr, 2010 (3 commits)
  8. 11 Apr, 2010 (1 commit)
  9. 09 Apr, 2010 (2 commits)
    • tcp: Set CHECKSUM_UNNECESSARY in tcp_init_nondata_skb · 2626419a
      Authored by David S. Miller
      Back in commit 04a0551c
      ("loopback: Drop obsolete ip_summed setting") we stopped
      setting CHECKSUM_UNNECESSARY in the loopback xmit.
      
      This is because such a setting was a lie since it implies that the
      checksum field of the packet is properly filled in.
      
      Instead what happens normally is that CHECKSUM_PARTIAL is set and
      skb->csum is calculated as needed.
      
      But this was only happening for TCP data packets (via the
      skb->ip_summed assignment done in tcp_sendmsg()).  It doesn't
      happen for non-data packets like ACKs etc.
      
      Fix this by setting skb->ip_summed in the common non-data packet
      constructor, which already sets skb->csum to zero.
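
      In other words, the shape of the fix is roughly the following
      (kernel-style sketch, abbreviated; the rest of the constructor is
      omitted):

        static void tcp_init_nondata_skb(struct sk_buff *skb, u32 seq, u8 flags)
        {
                skb->ip_summed = CHECKSUM_PARTIAL;  /* mark like tcp_sendmsg() marks data skbs */
                skb->csum      = 0;                 /* was already set to zero here */
                /* ... existing setup of TCP_SKB_CB(skb), seq and flags ... */
        }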
      
      But this reminds us that we still have things like ip_output.c's
      ip_dev_loopback_xmit() which sets skb->ip_summed to the value
      CHECKSUM_UNNECESSARY, which Herbert's patch teaches us is not
      valid.  So we'll have to address that at some point too.
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • udp: fix for unicast RX path optimization · 1223c67c
      Authored by Jorge Boncompte [DTI2]
      Commits 5051ebd2 and
      5051ebd2 ("ipv[46]: udp: optimize unicast RX
      path") broke some programs.
      
      	After upgrading an L2TP server to 2.6.33 it started to fail, tunnels going up and
      down after the 10th tunnel came up. My modified rp-l2tp uses a global
      unconnected socket bound to (INADDR_ANY, 1701) and one connected socket per
      tunnel after parameter negotiation.
      
      	After ten sockets were open, and due to mixed parameters passed to
      udp[46]_lib_lookup2(), the kernel started to drop packets.
      Signed-off-by: Jorge Boncompte [DTI2] <jorge@dti2.net>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  10. 07 Apr, 2010 (1 commit)
    • xfrm: cache bundles instead of policies for outgoing flows · 80c802f3
      Authored by Timo Teräs
      __xfrm_lookup() is called for each packet transmitted out of the
      system.  xfrm_find_bundle() does a linear search, which can
      kill system performance depending on how many bundles are
      required per policy.
      
      This modifies __xfrm_lookup() to store bundles directly in
      the flow cache.  If we do not get a hit, we just create a new
      bundle instead of doing a slow search.  This means that we can now
      get multiple xfrm_dst's for the same flow (on a per-CPU basis).
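
      Conceptually, the lookup path changes from "walk the policy's bundle
      list" to "consult the flow cache, build on miss"; a rough sketch with
      hypothetical names (not the xfrm API):

        struct flow_key;   /* flow selector: addresses, ports, ... */
        struct bundle;     /* chain of xfrm_dst's for one flow */

        /* Hypothetical per-CPU flow cache operations and bundle constructor. */
        struct bundle *flow_cache_get(const struct flow_key *key);
        void flow_cache_put(const struct flow_key *key, struct bundle *b);
        struct bundle *create_bundle(const struct flow_key *key);

        struct bundle *lookup_bundle(const struct flow_key *key)
        {
                struct bundle *b = flow_cache_get(key);

                if (b)
                        return b;                 /* hit: no linear search needed */

                b = create_bundle(key);           /* miss: resolve policy, build dsts */
                if (b)
                        flow_cache_put(key, b);   /* cache it on this CPU */
                return b;
        }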
      Signed-off-by: Timo Teras <timo.teras@iki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  11. 04 Apr, 2010 (2 commits)
    • icmp: Account for ICMP out errors · 1f8438a8
      Authored by Eric Dumazet
      When ip_append() fails because of the socket limit or a memory
      shortage, increment the ICMP_MIB_OUTERRORS counter, so that
      "netstat -s" can report these errors.
      
      LANG=C netstat -s | grep "ICMP messages failed"
          0 ICMP messages failed
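
      The IPv4 side of the change is essentially error-path accounting of
      this shape (kernel-style sketch; the argument names are placeholders,
      not the literal diff):

        err = ip_append_data(sk, icmp_glue_bits, icmp_param, len, hlen,
                             ipc, rt, MSG_DONTWAIT);
        if (err < 0) {
                ICMP_INC_STATS_BH(sock_net(sk), ICMP_MIB_OUTERRORS);
                ip_flush_pending_frames(sk);     /* drop the partially built reply */
        }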
      
      For IPv6, implement the ICMP6_MIB_OUTERRORS counter as well.
      
      # grep Icmp6OutErrors /proc/net/dev_snmp6/*
      /proc/net/dev_snmp6/eth0:Icmp6OutErrors                   	0
      /proc/net/dev_snmp6/lo:Icmp6OutErrors                   	0
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: convert multicast list to list_head · 22bedad3
      Authored by Jiri Pirko
      Converts the list, and the core code manipulating it, to be the same as uc_list.
      
      +uses two functions for adding/removing mc addresses (normal and "global"
       variant) instead of a function parameter.
      +removes dev_mcast.c completely.
      +exposes netdev_hw_addr_list_* macros along with __hw_addr_* functions for
       manipulating lists in a sandbox (used in bonding and 80211 drivers).
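
      A simplified sketch of what the list-based layout looks like (field
      set abbreviated and approximate; the real struct netdev_hw_addr lives
      in netdevice.h):

        #include <stdbool.h>

        struct list_head { struct list_head *next, *prev; };  /* as in linux/list.h */

        struct hw_addr_entry {
                struct list_head list;        /* links the entries of one address list */
                unsigned char    addr[6];     /* the hardware (MAC) address */
                int              refcount;    /* how many times it has been added */
                bool             global_use;  /* added through the "global" variant */
        };

        struct hw_addr_list {
                struct list_head list;   /* list head, walked with the usual list macros */
                int              count;  /* number of addresses currently on the list */
        };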
      Signed-off-by: Jiri Pirko <jpirko@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  12. 02 Apr, 2010 (2 commits)
  13. 31 Mar, 2010 (1 commit)
  14. 30 Mar, 2010 (1 commit)
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Authored by Tejun Heo
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include
      those headers directly instead of assuming availability.  As this
      conversion needs to touch a large number of source files, the following
      script was used as the basis of the conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following:
      
       * Scan files for gfp and slab usages and update includes such that
         only the necessary includes are there.  i.e. if only gfp is used,
         gfp.h; if slab is used, slab.h.
      
       * When the script inserts a new include, it looks at the include
         blocks and tries to put the new include so that its order conforms
         to its surroundings.  It's put in the include block which contains
         core kernel includes, in the same order that the rest are ordered -
         alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
         doesn't seem to be any matching order.
      
       * If the script can't find a place to put a new include (mostly
         because the file doesn't have a fitting include block), it prints
         out an error message indicating which .h file needs to be added to
         the file.
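
      As an illustration (hypothetical file, not from the patch), a caller of
      kmalloc() that previously compiled only because percpu.h pulled in
      slab.h now names the header it needs:

        #include <linux/percpu.h>   /* no longer guarantees slab.h */
        #include <linux/slab.h>     /* added by the sweep: this file uses kmalloc() */

        static void *make_buffer(size_t len)
        {
                return kmalloc(len, GFP_KERNEL);   /* GFP_KERNEL comes in via slab.h -> gfp.h */
        }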
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
       2. Each error was manually checked.  Some didn't need the inclusion,
          some needed manual addition, and for others adding it to an
          implementation .h or the embedding .c file was more appropriate.
          This step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
          widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
       7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from tests on step
      6, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers, which should be discoverable easily on most builds of
      the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
  15. 27 Mar, 2010 (4 commits)
  16. 25 Mar, 2010 (1 commit)
  17. 22 Mar, 2010 (5 commits)
  18. 21 Mar, 2010 (1 commit)
    • NET_DMA: free skbs periodically · 73852e81
      Authored by Steven J. Magnani
      Under NET_DMA, data transfer can grind to a halt when userland issues a
      large read on a socket with a high RCVLOWAT (i.e., 512 KB for both).
      This appears to be because the NET_DMA design queues up lots of memcpy
      operations, but doesn't issue or wait for them (and thus free the
      associated skbs) until it is time for tcp_recvmsg() to return.
      The socket hangs when its TCP window goes to zero before enough data is
      available to satisfy the read.
      
      Periodically issue asynchronous memcpy operations, and free skbs for ones
      that have completed, to prevent sockets from going into zero-window mode.
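
      A conceptual sketch of that periodic servicing, with hypothetical
      names (not the actual NET_DMA interfaces):

        struct dma_chan;
        struct sock;

        /* Hypothetical helpers for the sketch. */
        void issue_pending_copies(struct dma_chan *chan);         /* start queued memcpys */
        int  last_completed_copy(struct dma_chan *chan);          /* highest finished cookie */
        void free_copied_skbs(struct sock *sk, int done_cookie);  /* release their skbs */

        /* Called from inside the receive loop rather than only at return
         * time, so completed copies free their skbs early and the TCP
         * window can keep advancing. */
        static void service_net_dma(struct sock *sk, struct dma_chan *chan)
        {
                issue_pending_copies(chan);
                free_copied_skbs(sk, last_completed_copy(chan));
        }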
      Signed-off-by: Steven J. Magnani <steve@digidescorp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  19. 20 Mar, 2010 (1 commit)