  1. 04 May, 2007 1 commit
    • [TCP]: zero out rx_opt in tcp_disconnect() · b40b4f79
      Srinivas Aji authored
      When the server drops its connection, the NFS client reconnects using the
      same socket after disconnecting. If the new connection's SYN,ACK
      doesn't contain the TCP timestamp option but the old connection's did,
      tp->tcp_header_len is recomputed assuming no timestamp header, yet
      tp->rx_opt.tstamp_ok remains set. Then tcp_build_and_update_options()
      adds a timestamp option past the end of the allocated TCP header,
      overwriting TCP data or, when the data is in skb_shinfo(skb)->frags[],
      overwriting skb_shinfo(skb), causing a crash soon after. (The issue was
      debugged from such a crash.)
      
      Similarly, wscale_ok and sack_ok also get set based on the SYN,ACK
      packet but are not reset on disconnect, since they are only zeroed out
      at initialization. The patch zeroes out the entire tp->rx_opt struct in
      tcp_disconnect() to avoid this kind of problem.
      Signed-off-by: Srinivas Aji <Aji_Srinivas@emc.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b40b4f79
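
      A minimal user-space sketch of the idea behind this fix, using simplified
      stand-in structs rather than the kernel's real tcp_sock and
      tcp_options_received definitions: the whole block of received-options
      state is cleared in one memset on disconnect, so a later reconnect on the
      same socket cannot inherit stale tstamp_ok/wscale_ok/sack_ok values.

      #include <string.h>

      /* Simplified stand-ins, not the kernel's definitions. */
      struct tcp_options_received_sketch {
              int tstamp_ok;
              int wscale_ok;
              int sack_ok;
      };

      struct tcp_sock_sketch {
              struct tcp_options_received_sketch rx_opt;
              int tcp_header_len;
      };

      static void tcp_disconnect_sketch(struct tcp_sock_sketch *tp)
      {
              /* Zero the entire rx_opt struct instead of resetting
               * individual fields, mirroring the patch above. */
              memset(&tp->rx_opt, 0, sizeof(tp->rx_opt));
      }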
  2. 29 Apr, 2007 1 commit
  3. 26 Apr, 2007 8 commits
  4. 17 Mar, 2007 1 commit
  5. 27 Feb, 2007 1 commit
  6. 11 Feb, 2007 1 commit
  7. 09 Feb, 2007 1 commit
    • [NET]: change layout of ehash table · dbca9b27
      Eric Dumazet authored
      The ehash table layout is currently as follows:
      
      The first half of the table is used by sockets not in the TIME_WAIT state.
      The second half is used by sockets in the TIME_WAIT state.
      
      This is suboptimal because, for a given hash or socket, the two chain heads
      are located in separate cache lines.
      Moreover, the locks of the second half are never used.
      
      If, instead of this halving, we use two list heads in inet_ehash_bucket rather
      than only one, we can probably avoid one cache miss and reduce RAM usage,
      particularly if sizeof(rwlock_t) is big (various CONFIG_DEBUG_SPINLOCK,
      CONFIG_DEBUG_LOCK_ALLOC settings). So we still halve the table, but we keep
      related chains together to speed up lookups and socket state changes.
      
      In this patch I did not try to align struct inet_ehash_bucket, but a future
      patch could try to make this structure have a convenient size (a power of two
      or a multiple of L1_CACHE_SIZE).
      I guess the rwlock will just vanish as soon as RCU is plugged into ehash :),
      so maybe we don't need to scratch our heads to align the bucket...
      
      Note: in case struct inet_ehash_bucket is not a power of two, we could
      probably change alloc_large_system_hash() (in case it uses __get_free_pages())
      to free the unused space. It currently allocates a big zone, but the last
      quarter of it could be freed. Again, this should be a temporary 'problem'.
      
      The patch was tested on IPv4 TCP only, but should be OK for IPv6 and DCCP.
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      dbca9b27
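
      A sketch of the bucket layout described above, using user-space stand-in
      types (a pthread rwlock and a bare pointer) in place of the kernel's
      rwlock_t and hlist_head: each bucket keeps the established chain and the
      TIME_WAIT chain side by side under one lock, instead of splitting them
      across two halves of the table.

      #include <pthread.h>

      struct hlist_head_sketch {
              void *first;                      /* stand-in for struct hlist_head */
      };

      struct inet_ehash_bucket_sketch {
              pthread_rwlock_t lock;            /* one lock covers both chains */
              struct hlist_head_sketch chain;   /* sockets not in TIME_WAIT    */
              struct hlist_head_sketch twchain; /* sockets in TIME_WAIT        */
      };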
  8. 14 Dec, 2006 1 commit
  9. 03 Dec, 2006 4 commits
  10. 16 Nov, 2006 1 commit
  11. 08 Nov, 2006 1 commit
  12. 23 Sep, 2006 4 commits
  13. 03 Aug, 2006 2 commits
  14. 04 Jul, 2006 1 commit
  15. 01 Jul, 2006 4 commits
  16. 30 Jun, 2006 1 commit
    • [NET]: Added GSO header verification · 576a30eb
      Herbert Xu authored
      When GSO packets come from an untrusted source (e.g., a Xen guest domain),
      we need to verify their header integrity before passing them to the hardware.
      
      Since the first step in GSO is to verify the header, we can reuse that
      code by adding a new bit to gso_type: SKB_GSO_DODGY.  Packets with this
      bit set can only be fed directly to devices with the corresponding bit
      NETIF_F_GSO_ROBUST.  If the device doesn't have that bit, then the skb
      is fed to the GSO engine which will allow the packet to be sent to the
      hardware if it passes the header check.
      
      This patch changes the sg flag to a full features flag.  The same method
      can be used to implement TSO ECN support.  We simply have to mark packets
      with CWR set with SKB_GSO_ECN so that only hardware with a corresponding
      NETIF_F_TSO_ECN can accept them.  The GSO engine can either fully segment
      the packet, or segment the first MTU and pass the rest to the hardware for
      further segmentation.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      576a30eb
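
      A small sketch of the gating rule described above, with illustrative bit
      values rather than the real SKB_GSO_* / NETIF_F_* definitions: a GSO
      packet marked as coming from an untrusted source may bypass the software
      GSO engine only if the device advertises that it can re-verify headers
      itself.

      #include <stdbool.h>

      #define SKETCH_SKB_GSO_DODGY       (1u << 2)   /* illustrative value */
      #define SKETCH_NETIF_F_GSO_ROBUST  (1u << 11)  /* illustrative value */

      static bool needs_software_gso(unsigned int gso_type,
                                     unsigned int dev_features)
      {
              /* Untrusted GSO packet on a device that cannot verify headers
               * itself: run it through the software GSO engine first. */
              return (gso_type & SKETCH_SKB_GSO_DODGY) &&
                     !(dev_features & SKETCH_NETIF_F_GSO_ROBUST);
      }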
  17. 26 Jun, 2006 1 commit
    • [NET]: Fix CHECKSUM_HW GSO problems. · 0718bcc0
      Herbert Xu authored
      Fix checksum problems in the GSO code path for CHECKSUM_HW packets.
      
      The IPv4 TCP pseudo-header checksum has to be adjusted for GSO-segmented
      packets.
      
      The adjustment is needed because the length field in the pseudo-header
      changes.  However, because we have the inequality oldlen > newlen, we
      know that delta = (u16)~oldlen + newlen is still a 16-bit quantity.
      This also means that htonl(delta) + th->check still fits in 32 bits.
      Therefore we don't have to use csum_add for this operation.
      
      This is based on a patch by Michael Chan <mchan@broadcom.com>.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Acked-by: Michael Chan <mchan@broadcom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0718bcc0
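
      A user-space sketch of the arithmetic above (not the kernel's csum
      helpers): when a 16-bit length field covered by a ones-complement sum
      changes from oldlen to newlen, the stored sum can be patched incrementally
      by adding delta = (u16)~oldlen + newlen and folding the carry, instead of
      recomputing the pseudo-header checksum from scratch. Because
      oldlen > newlen, delta stays a 16-bit quantity, so check + delta cannot
      overflow 32 bits.

      #include <stdint.h>

      static uint16_t csum_fold16(uint32_t sum)
      {
              /* Fold a 32-bit ones-complement accumulator down to 16 bits. */
              while (sum >> 16)
                      sum = (sum & 0xffff) + (sum >> 16);
              return (uint16_t)sum;
      }

      static uint16_t adjust_pseudo_check(uint16_t check,
                                          uint16_t oldlen, uint16_t newlen)
      {
              /* delta < 0x10000 whenever oldlen > newlen. */
              uint32_t delta = (uint32_t)(uint16_t)~oldlen + newlen;

              return csum_fold16((uint32_t)check + delta);
      }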
  18. 23 Jun, 2006 2 commits
    • [NET]: Add software TSOv4 · f4c50d99
      Herbert Xu authored
      This patch adds the GSO implementation for IPv4 TCP.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f4c50d99
    • [NET]: Merge TSO/UFO fields in sk_buff · 7967168c
      Herbert Xu authored
      Having separate fields in sk_buff for TSO/UFO (tso_size/ufo_size) is not
      going to scale if we add any more segmentation methods (e.g., DCCP).  So
      let's merge them.
      
      They were used to tell the protocol of a packet.  This function has been
      subsumed by the new gso_type field.  This is essentially a set of netdev
      feature bits (shifted by 16 bits) that are required to process a specific
      skb.  As such it's easy to tell whether a given device can process a GSO
      skb: you just have to AND the gso_type field with the netdev's features
      field.
      
      I've made gso_type a conjunction.  The idea is that you have a base type
      (e.g., SKB_GSO_TCPV4) that can be modified further to support new features.
      For example, if we add a hardware TSO type that supports ECN, the device would
      declare NETIF_F_TSO | NETIF_F_TSO_ECN.  All TSO packets with CWR set would
      have a gso_type of SKB_GSO_TCPV4 | SKB_GSO_TCPV4_ECN while all other TSO
      packets would be SKB_GSO_TCPV4.  This means that only the CWR packets need
      to be emulated in software.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      7967168c
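
      A sketch of the capability test described above, treating gso_type as a
      set of required device feature bits shifted into place; the shift value
      and bit layout here are illustrative, not copied from the kernel headers.
      A device can take the GSO skb directly only if every bit the packet
      requires is present in its features word.

      #include <stdbool.h>

      #define SKETCH_GSO_FEATURE_SHIFT 16u      /* gso_type bits -> feature bits */

      static bool dev_can_take_gso(unsigned int gso_type,
                                   unsigned long long dev_features)
      {
              unsigned long long required =
                      (unsigned long long)gso_type << SKETCH_GSO_FEATURE_SHIFT;

              /* Every bit demanded by the packet must be offered by the device. */
              return (dev_features & required) == required;
      }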
  19. 18 Jun, 2006 4 commits