1. 14 Oct 2005, 1 commit
  2. 11 Oct 2005, 2 commits
    • [IPSEC] Fix block size/MTU bugs in ESP · d4875b04
      Herbert Xu authored
      This patch fixes the following bugs in ESP:
      
      * Fix transport mode MTU overestimate.  This means that the inner MTU
        is smaller than it needs to be.  Worse yet, given an input MTU that
        is a multiple of 4, it will always produce an estimate that is not
        a multiple of 4 (a sketch of the corrected arithmetic follows this
        list).
      
        For example, given a standard ESP/3DES/MD5 transform and an MTU of
        1500, the resulting MTU for transport mode is 1462 when it should
        be 1464.
      
        The reason is that IP header lengths are always a multiple of 4
        for IPv4 and 8 for IPv6.
      
      * Ensure that the block size is at least 4.  This is required by
        RFC 2406 and corresponds to what the esp_output() function does.
        At the moment this only affects crypto_null, as its block size is 1.
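      As an illustration of both fixes, here is a minimal userspace sketch
      of the padding arithmetic, assuming the usual power-of-two ALIGN;
      esp_cipher_len() is a hypothetical helper for this example, not a
      function from the patch:
      
      #include <stdio.h>
      
      /* Round x up to the next multiple of a (a is a power of two). */
      #define ALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))
      
      /* Bytes occupied by the encrypted part of an ESP packet: the
       * payload plus the 2 trailer bytes (pad length, next header),
       * padded up to the cipher block size.  RFC 2406 requires 4-byte
       * alignment, so a block size below 4 is raised to 4 first. */
      static unsigned int esp_cipher_len(unsigned int payload,
                                         unsigned int blksize)
      {
          blksize = ALIGN(blksize, 4);        /* the fix: never below 4 */
          return ALIGN(payload + 2, blksize);
      }
      
      int main(void)
      {
          printf("%u\n", esp_cipher_len(1000, 8)); /* 3DES: 1008 */
          printf("%u\n", esp_cipher_len(100, 1));  /* crypto_null: 104 */
          return 0;
      }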
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [IPSEC]: Use ALIGN macro in ESP · a02a6422
      Herbert Xu authored
      This patch uses the macro ALIGN in all the applicable spots for ESP.
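      For reference, ALIGN rounds its first argument up to a power-of-two
      boundary; a minimal sketch of the open-coded arithmetic it replaces
      (the values are illustrative, not taken from the patch):
      
      #define ALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))
      
      unsigned int len = 61, blksize = 8;
      
      /* Before: open-coded rounding scattered through the ESP paths. */
      unsigned int padded_open  = (len + 2 + blksize - 1) & ~(blksize - 1);
      
      /* After: the same computation, stated once and readably. */
      unsigned int padded_align = ALIGN(len + 2, blksize);  /* both 64 */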
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  3. 06 Oct 2005, 2 commits
  4. 05 Oct 2005, 1 commit
  5. 04 Oct 2005, 5 commits
    • [IPV4]: Replace __in_dev_get with __in_dev_get_rcu/rtnl · e5ed6399
      Herbert Xu authored
      The following patch renames __in_dev_get() to __in_dev_get_rtnl() and
      introduces __in_dev_get_rcu() to cover the second case below (a usage
      sketch follows the list):
      
      1) RCU with refcnt should use in_dev_get().
      2) RCU without refcnt should use __in_dev_get_rcu().
      3) All others must hold the RTNL and use __in_dev_get_rtnl().
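      A minimal sketch of the three calling conventions; the surrounding
      function is hypothetical, while the accessors are the ones named
      above:
      
      #include <linux/inetdevice.h>
      #include <linux/rtnetlink.h>
      
      static void in_dev_conventions(struct net_device *dev)
      {
      	struct in_device *in_dev;
      
      	/* 1) RCU with refcnt: the reference may outlive the
      	 * read-side critical section. */
      	in_dev = in_dev_get(dev);
      	if (in_dev) {
      		/* ... may sleep, may stash the pointer ... */
      		in_dev_put(in_dev);
      	}
      
      	/* 2) RCU without refcnt: pointer is valid only inside the
      	 * read-side critical section. */
      	rcu_read_lock();
      	in_dev = __in_dev_get_rcu(dev);
      	if (in_dev) {
      		/* ... must not sleep, must not keep the pointer ... */
      	}
      	rcu_read_unlock();
      
      	/* 3) Everything else: the caller must hold the RTNL. */
      	ASSERT_RTNL();
      	in_dev = __in_dev_get_rtnl(dev);
      	(void)in_dev;
      }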
      
      There is one exception in net/ipv4/route.c which is in fact a pre-existing
      race condition.  I've marked it as such so that we remember to fix it.
      
      This patch is based on suggestions and prior work by Suzanne Wood and
      Paul McKenney.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [IPV6]: Fix leak added by udp connect dst caching fix. · a5e7c210
      David S. Miller authored
      Based upon a patch from Mitsuru KANDA <mk@linux-ipv6.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [IPV6]: Fix ipv6 fragment ID selection at slow path · f36d6ab1
      Yan Zheng authored
      Signed-off-by: Yan Zheng <yanzheng@21cn.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [INET]: speedup inet (tcp/dccp) lookups · 81c3d547
      Eric Dumazet authored
      Arnaldo and I agreed it could be applied now, because I have other
      pending patches depending on this one (thank you, Arnaldo).
      
      (The other important patch moves skc_refcnt into a separate cache line,
      so that SMP/NUMA performance doesn't suffer from cache-line ping-pong.)
      
      1) First some performance data:
      --------------------------------
      
      tcp_v4_rcv() wastes a *lot* of time in __inet_lookup_established()
      
      The most time-critical code is:
      
      sk_for_each(sk, node, &head->chain) {
           if (INET_MATCH(sk, acookie, saddr, daddr, ports, dif))
               goto hit; /* You sunk my battleship! */
      }
      
      The sk_for_each() does use prefetch() hints, but only the beginning of
      "struct sock" is prefetched.
      
      As INET_MATCH's first comparison uses inet_sk(__sk)->daddr, which is
      far away from the beginning of "struct sock", it has to bring a cold
      cache line into the CPU cache.  Each iteration thus touches at least
      two cache lines.
      
      This can be problematic if some chains are very long.
      
      2) The goal
      -----------
      
      The idea I had is to change things so that INET_MATCH() can return
      FALSE in 99% of cases using only the data already in the CPU cache,
      i.e. one cache line per iteration.
      
      3) Description of the patch
      ---------------------------
      
      Adds a new 'unsigned int skc_hash' field to 'struct sock_common',
      filling a 32-bit hole on 64-bit platforms.
      
      struct sock_common {
      	unsigned short		skc_family;
      	volatile unsigned char	skc_state;
      	unsigned char		skc_reuse;
      	int			skc_bound_dev_if;
      	struct hlist_node	skc_node;
      	struct hlist_node	skc_bind_node;
      	atomic_t		skc_refcnt;
      +	unsigned int		skc_hash;
      	struct proto		*skc_prot;
      };
      
      Store the full hash in this 32-bit field, not masked by
      (ehash_size - 1).  Using this full hash as the first comparison done
      in INET_MATCH lets us immediately skip an element without touching a
      second cache line in case of a miss.
      
      Remove the sk_hashent/tw_hashent fields, since skc_hash (aliased to
      sk_hash and tw_hash) already contains the slot number when masked
      with (ehash_size - 1).
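      In other words (illustrative snippet; ehash_size is a power of two
      as in the hash table setup):
      
      unsigned int hash = sk->sk_hash;              /* full 32-bit hash */
      unsigned int slot = hash & (ehash_size - 1);  /* old sk_hashent   */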
      
      File include/net/inet_hashtables.h
      
      64-bit platforms:
      #define INET_MATCH(__sk, __hash, __cookie, __saddr, __daddr, __ports, __dif)\
           (((__sk)->sk_hash == (__hash))                         &&  \
           ((*((__u64 *)&(inet_sk(__sk)->daddr))) == (__cookie))  &&  \
           ((*((__u32 *)&(inet_sk(__sk)->dport))) == (__ports))   &&  \
           (!((__sk)->sk_bound_dev_if) || ((__sk)->sk_bound_dev_if == (__dif))))
      
      32-bit platforms:
      #define TCP_IPV4_MATCH(__sk, __hash, __cookie, __saddr, __daddr, __ports, __dif)\
           (((__sk)->sk_hash == (__hash))                 &&  \
           (inet_sk(__sk)->daddr          == (__saddr))   &&  \
           (inet_sk(__sk)->rcv_saddr      == (__daddr))   &&  \
           (!((__sk)->sk_bound_dev_if) || ((__sk)->sk_bound_dev_if == (__dif))))
      
      
      - Adds a prefetch(head->chain.first) in
      __inet_lookup_established()/__tcp_v4_check_established(),
      __inet6_lookup_established()/__tcp_v6_check_established() and
      __dccp_v4_check_established() to bring the first element of the list
      into cache before taking the {read|write}_lock(&head->lock).
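      Putting the pieces together, a simplified sketch of the resulting
      lookup pattern (declarations, error paths and the TIME_WAIT pass
      omitted):
      
      prefetch(head->chain.first);    /* warm the first element early */
      read_lock(&head->lock);
      sk_for_each(sk, node, &head->chain) {
      	/* sk_hash sits in the already-cached first part of struct
      	 * sock; a miss is rejected without touching a second line. */
      	if (INET_MATCH(sk, hash, acookie, saddr, daddr, ports, dif))
      		goto hit;
      }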
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Acked-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [NET]: Fix packet timestamping. · 325ed823
      Herbert Xu authored
      I've found the problem in general.  It affects any 64-bit
      architecture.  The problem occurs when you change the system time.
      
      Suppose that when you boot, your system clock is a day ahead.  This
      gets recorded in skb_tv_base.  You then wind the clock back by a
      day.  From that point onwards the offset is negative, which
      essentially overflows the 32-bit variables it is stored in.
      
      In fact, why don't we just store the real time stamp in those 32-bit
      variables? After all, we're not going to overflow for quite a while
      yet.
      
      When we do overflow, we'll need a better solution of course.
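      A userspace illustration of the failure mode; skb_tv_base and the
      per-packet fields are modeled here with plain variables:
      
      #include <stdio.h>
      #include <stdint.h>
      
      int main(void)
      {
      	/* Boot with the clock a day fast; the base is recorded once. */
      	int64_t skb_tv_base_sec = 86400;
      
      	/* The admin then winds the clock back a day: now < base. */
      	int64_t now_sec = 10;
      
      	/* The offset was kept in a 32-bit field; a negative offset
      	 * wraps to a huge bogus value. */
      	uint32_t off_sec = (uint32_t)(now_sec - skb_tv_base_sec);
      	printf("stored offset: %u\n", (unsigned)off_sec); /* 4294880906 */
      	return 0;
      }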
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  6. 27 Sep 2005, 1 commit
  7. 25 Sep 2005, 1 commit
  8. 20 Sep 2005, 2 commits
  9. 18 Sep 2005, 1 commit
  10. 15 Sep 2005, 2 commits
  11. 10 Sep 2005, 3 commits
  12. 09 Sep 2005, 3 commits
  13. 08 Sep 2005, 2 commits
  14. 02 Sep 2005, 2 commits
  15. 30 Aug 2005, 12 commits