1. 21 Mar, 2006 (2 commits)
  2. 01 Feb, 2006 (1 commit)
    • [IPV6] tcp_v6_send_synack: release the destination · 78b91042
      Committed by Eric W. Biederman
      This patch fixes dst reference counting in tcp_v6_send_synack.
      
      Analysis:
      Currently, tcp_v6_send_synack is never called with a dst entry,
      so dst always comes in as NULL.
      
      ip6_dst_lookup calls ip6_route_output, which calls dst_hold
      before it returns the dst entry. Neither xfrm_lookup nor
      tcp_make_synack consumes the dst entry, so we still have a
      dst_entry with a bumped reference count at the end of this
      function.
      
      Therefore we need to call dst_release just before we return,
      just as tcp_v4_send_synack does (see the sketch after this
      entry).
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
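
      A minimal sketch of the shape of the fix, assuming the usual
      error-path layout of tcp_v6_send_synack; this is illustrative
      rather than the verbatim diff, and the flowi setup is elided:

      static int tcp_v6_send_synack(struct sock *sk, struct request_sock *req,
                                    struct dst_entry *dst)
      {
              struct flowi fl;        /* filled in as in the real function */
              int err = -1;

              if (dst == NULL) {
                      /* ip6_dst_lookup -> ip6_route_output returns the
                       * entry with its refcount bumped by dst_hold() */
                      err = ip6_dst_lookup(sk, &dst, &fl);
                      if (err)
                              goto done;
              }

              /* ... xfrm_lookup() and tcp_make_synack() use the dst
               * but do not consume the reference ... */
      done:
              dst_release(dst);       /* balance dst_hold(), as IPv4 does */
              return err;
      }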
  3. 12 Jan, 2006 (1 commit)
  4. 08 Jan, 2006 (1 commit)
  5. 04 Jan, 2006 (11 commits)
  6. 13 Dec, 2005 (1 commit)
  7. 11 Nov, 2005 (1 commit)
    • [NET]: Detect hardware rx checksum faults correctly · fb286bb2
      Committed by Herbert Xu
      This patch introduces the generic skb_checksum_complete, which
      also checks for hardware RX checksum faults. If a fault is
      detected, it calls netdev_rx_csum_fault, which currently prints
      a stack trace with the device name. In the future it could
      disable RX checksumming on that device (a call-site sketch
      follows this entry).
      
      I've converted every spot under net/ that does RX checksum
      checks to use skb_checksum_complete or __skb_checksum_complete,
      with the exceptions of:
      
      * Those places where checksums are done bit by bit.  These will call
      netdev_rx_csum_fault directly.
      
      * The following have not been completely checked/converted:
      
      ipmr
      ip_vs
      netfilter
      dccp
      
      This patch is based on patches and suggestions from Stephen Hemminger
      and David S. Miller.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
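
      A hedged sketch of the typical call-site conversion described
      above; the receive function and label names are hypothetical,
      while skb_checksum_complete and kfree_skb are real kernel
      interfaces:

      static int example_proto_rcv(struct sk_buff *skb)
      {
              /* Replaces the old open-coded CHECKSUM_UNNECESSARY test
               * plus a manual skb_checksum()/csum_fold() pass over the
               * data. Returns nonzero when the checksum is bad; if the
               * hardware had claimed the checksum was good, the helper
               * reports the fault via netdev_rx_csum_fault(). */
              if (skb_checksum_complete(skb))
                      goto csum_error;

              /* ... deliver the packet ... */
              return 0;

      csum_error:
              kfree_skb(skb);         /* rx paths consume the skb */
              return 0;
      }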
  8. 06 Nov, 2005 (1 commit)
  9. 04 Oct, 2005 (1 commit)
    • [INET]: speedup inet (tcp/dccp) lookups · 81c3d547
      Committed by Eric Dumazet
      Arnaldo and I agreed it could be applied now, because I have
      other pending patches depending on this one (thank you,
      Arnaldo).

      (The other important patch moves skc_refcnt into a separate
      cache line, so that SMP/NUMA performance doesn't suffer from
      cache-line ping-pong.)
      
      1) First, some performance data:
      --------------------------------
      
      tcp_v4_rcv() wastes a *lot* of time in __inet_lookup_established()
      
      The most time-critical code is:
      
      sk_for_each(sk, node, &head->chain) {
           if (INET_MATCH(sk, acookie, saddr, daddr, ports, dif))
               goto hit; /* You sunk my battleship! */
      }
      
      The sk_for_each() does use prefetch() hints, but only the
      beginning of "struct sock" is prefetched.
      
      As INET_MATCH's first comparison uses inet_sk(__sk)->daddr,
      which is far from the beginning of "struct sock", it has to
      bring a cold cache line into the CPU cache. Each iteration
      therefore touches at least two cache lines.
      
      This can be problematic if some chains are very long.
      
      2) The goal
      -----------
      
      The idea is to change things so that INET_MATCH() can return
      FALSE in 99% of cases using only data already in the CPU cache,
      touching one cache line per iteration.
      
      3) Description of the patch
      ---------------------------
      
      Add a new 'unsigned int skc_hash' field to 'struct sock_common',
      filling a 32-bit hole on 64-bit platforms.
      
      struct sock_common {
      	unsigned short		skc_family;
      	volatile unsigned char	skc_state;
      	unsigned char		skc_reuse;
      	int			skc_bound_dev_if;
      	struct hlist_node	skc_node;
      	struct hlist_node	skc_bind_node;
      	atomic_t		skc_refcnt;
      +	unsigned int		skc_hash;
      	struct proto		*skc_prot;
      };
      
      This 32-bit field stores the full hash, not masked by
      (ehash_size - 1). Using the full hash as the first comparison in
      INET_MATCH lets us immediately skip an element, without touching
      a second cache line, in case of a miss.
      
      The sk_hashent/tw_hashent fields are removed, since skc_hash
      (aliased to sk_hash and tw_hash) already yields the slot number
      when masked with (ehash_size - 1), as shown below.
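
      For instance (an illustrative one-liner, assuming ehash_size is
      a power of two):

      unsigned int slot = sk->sk_hash & (ehash_size - 1); /* chain index */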
      
      File include/net/inet_hashtables.h
      
      64-bit platforms:
      #define INET_MATCH(__sk, __hash, __cookie, __saddr, __daddr, __ports, __dif)\
           (((__sk)->sk_hash == (__hash))                 &&  \
           ((*((__u64 *)&(inet_sk(__sk)->daddr)))== (__cookie))   &&  \
           ((*((__u32 *)&(inet_sk(__sk)->dport))) == (__ports))   &&  \
           (!((__sk)->sk_bound_dev_if) || ((__sk)->sk_bound_dev_if == (__dif))))
      
      32-bit platforms:
      #define TCP_IPV4_MATCH(__sk, __hash, __cookie, __saddr, __daddr, __ports, __dif)\
           (((__sk)->sk_hash == (__hash))                 &&  \
           (inet_sk(__sk)->daddr          == (__saddr))   &&  \
           (inet_sk(__sk)->rcv_saddr      == (__daddr))   &&  \
           (!((__sk)->sk_bound_dev_if) || ((__sk)->sk_bound_dev_if == (__dif))))
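
      For context, a hedged sketch of how the macro is consumed in the
      established-hash walk (the loop shape follows the commit text;
      the local variable names are illustrative):

      sk_for_each(sk, node, &head->chain) {
              /* sk_hash sits in sock_common, inside the prefetched
               * first cache line; on a hash mismatch the macro
               * rejects the element without ever dereferencing
               * inet_sk(sk)->daddr in a second cache line. */
              if (INET_MATCH(sk, hash, acookie, saddr, daddr, ports, dif))
                      goto hit;
      }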
      
      
      - Adds a prefetch(head->chain.first) in
      __inet_lookup_established()/__tcp_v4_check_established(),
      __inet6_lookup_established()/__tcp_v6_check_established() and
      __dccp_v4_check_established(), to bring the first element of the
      list into cache before the {read|write}_lock(&head->lock); see
      the sketch after this entry.
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Acked-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
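
      A sketch of the prefetch placement described above, assuming the
      __inet_lookup_established() locking shape; illustrative, not the
      verbatim diff:

      prefetch(head->chain.first);    /* start pulling in the first
                                       * element while we wait for
                                       * the chain lock */
      read_lock(&head->lock);
      sk_for_each(sk, node, &head->chain) {
              /* ... INET_MATCH() walk as above ... */
      }
      read_unlock(&head->lock);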
  10. 09 Sep, 2005 (1 commit)
  11. 08 Sep, 2005 (1 commit)
  12. 30 Aug, 2005 (16 commits)
  13. 24 Aug, 2005 (1 commit)
  14. 06 Jul, 2005 (1 commit)
    • [TCP]: Move to new TSO segmenting scheme. · c1b4a7e6
      Committed by David S. Miller
      Make TSO segment-size decisions at send time, not earlier.
      
      The basic scheme is that we try to build as large a TSO frame as
      possible when pulling in the user data, but the size of the TSO frame
      output to the card is determined at transmit time.
      
      This is guided by tp->xmit_size_goal.  It is always set to a multiple
      of MSS and tells sendmsg/sendpage how large an SKB to try and build.
      
      Later, tcp_write_xmit() and tcp_push_one() chop up the packet if
      necessary and conditions warrant. These routines can also decide
      to "defer" in order to wait for more ACKs to arrive and thus
      allow larger TSO frames to be emitted (a sketch follows this
      entry).
      
      A general observation is that TSO elongates the pipe, thus
      requiring a larger congestion window and more buffering,
      especially at the sender side. Therefore it is important that
      applications 1) get a large enough socket send buffer (this is
      accomplished by our dynamic send-buffer expansion code) and
      2) do large enough writes.
      Signed-off-by: David S. Miller <davem@davemloft.net>
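
      A hedged sketch of the transmit-time decision described above;
      tso_fragment and tcp_transmit_skb follow the commit's own
      vocabulary, while should_defer and tso_segment_limit are
      hypothetical stand-ins for the deferral and sizing logic:

      while ((skb = sk->sk_send_head) != NULL) {
              if (should_defer(tp, skb))
                      break;          /* wait for more ACKs so a larger
                                       * TSO frame can be emitted */

              /* a multiple of mss_now, capped by cwnd and rwnd */
              limit = tso_segment_limit(tp, skb, mss_now);

              if (skb->len > limit)
                      tso_fragment(sk, skb, limit, mss_now); /* chop at
                                                              * send time */

              tcp_transmit_skb(sk, skb);
      }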