1. 23 Sep 2006, 5 commits
2. 27 Aug 2006, 1 commit
3. 03 Aug 2006, 2 commits
    • [TCP]: SNMPv2 tcpAttemptFails counter error · 3687b1dc
      Committed by Wei Yongjun
      Per RFC 2012, tcpAttemptFails is defined as follows:
        tcpAttemptFails OBJECT-TYPE
            SYNTAX      Counter32
            MAX-ACCESS  read-only
            STATUS      current
            DESCRIPTION
                    "The number of times TCP connections have made a direct
                    transition to the CLOSED state from either the SYN-SENT
                    state or the SYN-RCVD state, plus the number of times TCP
                    connections have made a direct transition to the LISTEN
                    state from the SYN-RCVD state."
            ::= { tcp 7 }
      
      Looking into RFC 793, I found that these state changes should occur
      under the following conditions:
        1. SYN-SENT -> CLOSED
           a) Received an ACK,RST segment in the SYN-SENT state.

        2. SYN-RCVD -> CLOSED
           b) Received a SYN segment in the SYN-RCVD state (came from LISTEN).
           c) Received an RST segment in the SYN-RCVD state (came from SYN-SENT).
           d) Received a SYN segment in the SYN-RCVD state (came from SYN-SENT).

        3. SYN-RCVD -> LISTEN
           e) Received an RST segment in the SYN-RCVD state (came from LISTEN).
      
      In my tests, these direct state transitions were not counted in
      tcpAttemptFails.
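
      The fix, as a minimal hedged sketch (not the literal diff of
      3687b1dc), is to bump the existing TCP_MIB_ATTEMPTFAILS counter at
      the point where such a socket is aborted; the placement shown here
      is illustrative:

      	/* Count the failed attempt when a SYN-SENT/SYN-RCVD socket
      	 * transitions straight back to CLOSED (or LISTEN). */
      	if (sk->sk_state == TCP_SYN_SENT || sk->sk_state == TCP_SYN_RECV)
      		TCP_INC_STATS_BH(TCP_MIB_ATTEMPTFAILS);
      	tcp_set_state(sk, TCP_CLOSE);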
      Signed-off-by: Wei Yongjun <yjwei@nanjing-fnst.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [IPV6]: Audit all ip6_dst_lookup/ip6_dst_store calls · 497c615a
      Committed by Herbert Xu
      The current users of ip6_dst_lookup can be divided into two classes:
      
      1) The caller holds no locks and is in user-context (UDP).
      2) The caller does not want to lookup the dst cache at all.
      
      The second class covers everyone except UDP because most people do
      the cache lookup directly before calling ip6_dst_lookup.  This patch
      adds ip6_sk_dst_lookup for the first class.
      
      Similarly, ip6_dst_store users can be divided into those that need to
      take the socket dst lock and those that don't.  This patch adds
      __ip6_dst_store for those (everyone except UDP/datagram) that don't
      need the extra lock; a sketch of the split follows.
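
      A hedged sketch of the split (close to, but not necessarily
      identical to, the actual helpers): the lock-free __ip6_dst_store
      does the bookkeeping, while the locked wrapper preserves the old
      behaviour for UDP/datagram callers:

      	static inline void ip6_dst_store(struct sock *sk,
      					 struct dst_entry *dst,
      					 struct in6_addr *daddr)
      	{
      		/* Callers that can race on sk->sk_dst_cache take the
      		 * socket dst lock around the unlocked helper. */
      		write_lock(&sk->sk_dst_lock);
      		__ip6_dst_store(sk, dst, daddr);
      		write_unlock(&sk->sk_dst_lock);
      	}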
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
4. 09 Jul 2006, 1 commit
5. 01 Jul 2006, 3 commits
    • [IPV6]: Added GSO support for TCPv6 · f83ef8c0
      Committed by Herbert Xu
      This patch adds GSO support for IPv6 and TCPv6.  This is based on a patch
      by Ananda Raju <Ananda.Raju@neterion.com>.  His original description is:
      
      	This patch enables TSO over IPv6. Currently the Linux network
      	stack restricts TSO over IPv6 by clearing the NETIF_F_TSO bit in
      	"dev->features". This patch removes that restriction.
      
      	This patch introduces a new flag, NETIF_F_TSO6, which is used to
      	check whether a device supports TSO over IPv6. If the device
      	supports TSO over IPv6 we don't clear NETIF_F_TSO, which lets the
      	TCP layer create TSO packets. Any device supporting TSO over IPv6
      	will set the NETIF_F_TSO6 flag in "dev->features" along with
      	NETIF_F_TSO.
      
      	When the user disables TSO using ethtool, NETIF_F_TSO is cleared
      	from "dev->features", so even if NETIF_F_TSO6 is still set, the
      	TCP layer will not create TSO packets.
      
      	SKB_GSO_TCPV4 is renamed to SKB_GSO_TCP to make it a generic GSO
      	packet type, and SKB_GSO_UDPV4 is renamed to SKB_GSO_UDP, since
      	UFO is not an IPv4-only feature; UFO is supported over IPv6 as
      	well.
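
      	For a driver this reduces to one line; a hedged sketch (the flag
      	names are real, the surrounding feature set is illustrative):

      		/* Advertise TSO for both IPv4 and IPv6; "ethtool -K <dev>
      		 * tso off" clears only NETIF_F_TSO, which alone gates TSO
      		 * frame creation by the TCP layer. */
      		dev->features |= NETIF_F_SG | NETIF_F_HW_CSUM |
      				 NETIF_F_TSO | NETIF_F_TSO6;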
      
      	The following table shows a significant improvement in throughput
      	with normal frames, and in CPU usage (idle %) for both normal and
      	jumbo frames.
      
      	--------------------------------------------------
      	|          |     1500        |      9600         |
      	|          ------------------|-------------------|
      	|          | thru     CPU    |  thru     CPU     |
      	--------------------------------------------------
      	| TSO OFF  | 2.00   5.5% id  |  5.66   20.0% id  |
      	--------------------------------------------------
      	| TSO ON   | 2.63   78.0% id |  5.67   39.0% id  |
      	--------------------------------------------------
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [IPV6]: Added GSO support for TCPv6 · adcfc7d0
      Committed by Herbert Xu
      This patch adds GSO support for IPv6 and TCPv6.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Remove obsolete #include <linux/config.h> · 6ab3d562
      Committed by Jörn Engel
      Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
      Signed-off-by: Adrian Bunk <bunk@stusta.de>
6. 30 Jun 2006, 1 commit
7. 18 Jun 2006, 1 commit
8. 21 Mar 2006, 4 commits
9. 01 Feb 2006, 1 commit
    • [IPV6] tcp_v6_send_synack: release the destination · 78b91042
      Committed by Eric W. Biederman
      This patch fixes dst reference counting in tcp_v6_send_synack.
      
      Analysis:
      Currently tcp_v6_send_synack is never called with a dst entry
      so dst always comes in as NULL.
      
      ip6_dst_lookup calls ip6_route_output, which calls dst_hold
      before it returns the dst entry.  Neither xfrm_lookup
      nor tcp_make_synack consumes the dst entry, so we still have
      a dst entry with a bumped reference count at the end of
      this function.
      
      Therefore we need to call dst_release just before returning,
      just as tcp_v4_send_synack does.
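
      The tail of the function then looks roughly like this (a hedged
      sketch; the label and error-path details are illustrative,
      dst_release is the actual call):

      	done:
      		/* Drop the reference ip6_route_output took via dst_hold();
      		 * nothing between the lookup and here consumes it. */
      		dst_release(dst);
      		return err;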
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
10. 12 Jan 2006, 1 commit
11. 08 Jan 2006, 1 commit
12. 04 Jan 2006, 11 commits
13. 13 Dec 2005, 1 commit
14. 11 Nov 2005, 1 commit
    • [NET]: Detect hardware rx checksum faults correctly · fb286bb2
      Committed by Herbert Xu
      Here is the patch that introduces the generic skb_checksum_complete,
      which also checks for hardware RX checksum faults.  If such a fault
      occurs, it calls netdev_rx_csum_fault, which currently prints a stack
      trace with the device name.  In the future it could turn off RX
      checksumming for that device.
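
      A typical conversion in an RX path then collapses to a single call;
      a hedged sketch (the csum_error label is illustrative):

      	/* Returns 0 for CHECKSUM_UNNECESSARY; otherwise verifies the
      	 * sum and reports hardware faults via netdev_rx_csum_fault(). */
      	if (skb_checksum_complete(skb))
      		goto csum_error;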
      
      I've converted every spot under net/ that does RX checksum checks to
      use skb_checksum_complete or __skb_checksum_complete with the
      exceptions of:
      
      * Those places where checksums are done bit by bit.  These will call
      netdev_rx_csum_fault directly.
      
      * The following have not been completely checked/converted:
      
      ipmr
      ip_vs
      netfilter
      dccp
      
      This patch is based on patches and suggestions from Stephen Hemminger
      and David S. Miller.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
15. 06 Nov 2005, 1 commit
16. 04 Oct 2005, 1 commit
    • [INET]: speedup inet (tcp/dccp) lookups · 81c3d547
      Committed by Eric Dumazet
      Arnaldo and I agreed it could be applied now, because I have other
      pending patches depending on this one (Thank you Arnaldo)
      
      (The other important patch moves skc_refcnt into a separate cache line,
      so that SMP/NUMA performance doesn't suffer from cache line ping-pong.)
      
      1) First some performance data :
      --------------------------------
      
      tcp_v4_rcv() wastes a *lot* of time in __inet_lookup_established()
      
      The most time critical code is :
      
      sk_for_each(sk, node, &head->chain) {
           if (INET_MATCH(sk, acookie, saddr, daddr, ports, dif))
               goto hit; /* You sunk my battleship! */
      }
      
      The sk_for_each() does use prefetch() hints, but only the beginning
      of "struct sock" is prefetched.

      Since the first comparison in INET_MATCH uses inet_sk(__sk)->daddr,
      which is far from the beginning of "struct sock", it has to bring a
      cold cache line into the CPU cache.  Each iteration therefore
      touches at least two cache lines.
      
      This can be problematic if some chains are very long.
      
      2) The goal
      -----------
      
      The idea I had is to change things so that INET_MATCH() can return
      FALSE in 99% of cases using only data already in the CPU cache,
      touching one cache line per iteration.
      
      3) Description of the patch
      ---------------------------
      
      Adds a new 'unsigned int skc_hash' field in 'struct sock_common',
      filling a 32-bit hole on 64-bit platforms.
      
      struct sock_common {
      	unsigned short		skc_family;
      	volatile unsigned char	skc_state;
      	unsigned char		skc_reuse;
      	int			skc_bound_dev_if;
      	struct hlist_node	skc_node;
      	struct hlist_node	skc_bind_node;
      	atomic_t		skc_refcnt;
      +	unsigned int		skc_hash;
      	struct proto		*skc_prot;
      };
      
      Store the full hash in this 32-bit field, not masked by
      (ehash_size - 1).  Using this full hash as the first comparison done
      in INET_MATCH permits us to immediately skip an element, without
      touching a second cache line, in case of a miss.
      
      Suppress the sk_hashent/tw_hashent fields, since skc_hash (aliased
      to sk_hash and tw_hash) already contains the slot number when masked
      with (ehash_size - 1).
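
      That is (a hedged one-line sketch; ehash_size is the established
      hash table size referred to above):

      	/* The slot is recoverable from the full hash, so a separate
      	 * per-socket hashent field is redundant. */
      	unsigned int slot = sk->sk_hash & (ehash_size - 1);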
      
      In file include/net/inet_hashtables.h:
      
      64-bit platforms:
      #define INET_MATCH(__sk, __hash, __cookie, __saddr, __daddr, __ports, __dif)\
           (((__sk)->sk_hash == (__hash))                 &&  \
           ((*((__u64 *)&(inet_sk(__sk)->daddr))) == (__cookie))  &&  \
           ((*((__u32 *)&(inet_sk(__sk)->dport))) == (__ports))   &&  \
           (!((__sk)->sk_bound_dev_if) || ((__sk)->sk_bound_dev_if == (__dif))))
      
      32-bit platforms:
      #define INET_MATCH(__sk, __hash, __cookie, __saddr, __daddr, __ports, __dif)\
           (((__sk)->sk_hash == (__hash))                 &&  \
           (inet_sk(__sk)->daddr          == (__saddr))   &&  \
           (inet_sk(__sk)->rcv_saddr      == (__daddr))   &&  \
           (!((__sk)->sk_bound_dev_if) || ((__sk)->sk_bound_dev_if == (__dif))))
      
      
      - Adds a prefetch(head->chain.first) in 
      __inet_lookup_established()/__tcp_v4_check_established() and 
      __inet6_lookup_established()/__tcp_v6_check_established() and 
      __dccp_v4_check_established() to bring into cache the first element of the 
      list, before the {read|write}_lock(&head->lock);
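
      In code this amounts to (a hedged sketch of the pattern, not the
      full diff):

      	prefetch(head->chain.first);	/* warm the first chain element */
      	read_lock(&head->lock);
      	sk_for_each(sk, node, &head->chain) {
      		if (INET_MATCH(sk, hash, acookie, saddr, daddr, ports, dif))
      			goto hit;
      	}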
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Acked-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
17. 09 Sep 2005, 1 commit
18. 08 Sep 2005, 1 commit
19. 30 Aug 2005, 2 commits