1. 26 Apr 2007: 7 commits
  2. 17 Mar 2007: 1 commit
  3. 27 Feb 2007: 1 commit
  4. 11 Feb 2007: 1 commit
  5. 09 Feb 2007: 1 commit
    • [NET]: change layout of ehash table · dbca9b27
      Committed by Eric Dumazet
      The current ehash table layout is as follows:
      
      The first half of the table is used by sockets not in TIME_WAIT state.
      The second half is used by sockets in TIME_WAIT state.
      
      This is suboptimal because, for a given hash or socket, the two chain heads
      are located in separate cache lines.
      Moreover, the locks of the second half are never used.
      
      If, instead of this halving, we use two list heads in inet_ehash_bucket
      rather than only one, we can probably avoid one cache miss and reduce RAM
      usage, particularly if sizeof(rwlock_t) is big (various CONFIG_DEBUG_SPINLOCK,
      CONFIG_DEBUG_LOCK_ALLOC settings). So we still halve the table, but we keep
      related chains together to speed up lookups and socket state changes.
      
      In this patch I did not try to align struct inet_ehash_bucket, but a future
      patch could try to give this structure a convenient size (a power of two
      or a multiple of L1_CACHE_SIZE).
      I guess the rwlock will just vanish as soon as RCU is plugged into ehash :),
      so maybe we don't need to scratch our heads to align the bucket...
      
      Note: in case struct inet_ehash_bucket is not a power of two, we could
      probably change alloc_large_system_hash() (in case it uses __get_free_pages())
      to free the unused space. It currently allocates a big zone, but the last
      quarter of it could be freed. Again, this should be a temporary 'problem'.
      
      The patch was tested on IPv4 TCP only, but should be OK for IPv6 and DCCP.
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
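      
      The bucket change described above can be sketched as below; the chain/twchain
      field names follow the direction of the patch, but treat the exact layout as
      an illustration rather than the definitive diff:
      
          /* Before: one chain per bucket; TIME_WAIT sockets live in the
           * second half of the table, in a different cache line. */
          struct inet_ehash_bucket_old {
                  rwlock_t          lock;
                  struct hlist_head chain;
          };
      
          /* After: established and TIME_WAIT chains share one bucket, so a
           * lookup or a state change touches a single cache line, and the
           * locks of the (now unused) second half disappear. */
          struct inet_ehash_bucket {
                  rwlock_t          lock;
                  struct hlist_head chain;    /* established sockets */
                  struct hlist_head twchain;  /* TIME_WAIT sockets   */
          };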
  6. 14 Dec 2006: 1 commit
  7. 03 Dec 2006: 4 commits
  8. 16 Nov 2006: 1 commit
  9. 08 Nov 2006: 1 commit
  10. 23 Sep 2006: 4 commits
  11. 03 Aug 2006: 2 commits
  12. 04 Jul 2006: 1 commit
  13. 01 Jul 2006: 4 commits
  14. 30 Jun 2006: 1 commit
    • [NET]: Added GSO header verification · 576a30eb
      Committed by Herbert Xu
      When GSO packets come from an untrusted source (e.g., a Xen guest domain),
      we need to verify the header integrity before passing it to the hardware.
      
      Since the first step in GSO is to verify the header, we can reuse that
      code by adding a new bit to gso_type: SKB_GSO_DODGY.  Packets with this
      bit set can only be fed directly to devices with the corresponding bit
      NETIF_F_GSO_ROBUST.  If the device doesn't have that bit, then the skb
      is fed to the GSO engine which will allow the packet to be sent to the
      hardware if it passes the header check.
      
      This patch changes the sg flag to a full features flag.  The same method
      can be used to implement TSO ECN support.  We simply have to mark packets
      with CWR set with SKB_GSO_ECN so that only hardware with a corresponding
      NETIF_F_TSO_ECN can accept them.  The GSO engine can either fully segment
      the packet, or segment the first MTU and pass the rest to the hardware for
      further segmentation.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
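      
      A minimal sketch of the decision this entry describes; the helper name is
      hypothetical, while SKB_GSO_DODGY and NETIF_F_GSO_ROBUST are the bits the
      commit relies on:
      
          /* Return non-zero when the skb must take the software GSO path,
           * i.e. it came from an untrusted source and the device has not
           * declared that it re-validates GSO headers itself. */
          static int gso_needs_sw_verification(const struct sk_buff *skb,
                                               const struct net_device *dev)
          {
                  if (!(skb_shinfo(skb)->gso_type & SKB_GSO_DODGY))
                          return 0;               /* trusted source */
      
                  return !(dev->features & NETIF_F_GSO_ROBUST);
          }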
  15. 26 Jun 2006: 1 commit
    • [NET]: Fix CHECKSUM_HW GSO problems. · 0718bcc0
      Committed by Herbert Xu
      Fix checksum problems in the GSO code path for CHECKSUM_HW packets.
      
      The ipv4 TCP pseudo header checksum has to be adjusted for GSO
      segmented packets.
      
      The adjustment is needed because the length field in the pseudo-header
      changes.  However, because we have the inequality oldlen > newlen, we
      know that delta = (u16)~oldlen + newlen is still a 16-bit quantity.
      This also means that htonl(delta) + th->check still fits in 32 bits.
      Therefore we don't have to use csum_add for this operation.
      
      This is based on a patch by Michael Chan <mchan@broadcom.com>.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Acked-by: Michael Chan <mchan@broadcom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
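      
      For illustration, the incremental adjustment described here can be written as
      a standard RFC 1624 one's-complement update; this is a simplified host-order
      sketch, not the kernel's exact csum_fold()-based code:
      
          #include <stdint.h>
      
          /* Update a 16-bit one's-complement checksum when the pseudo-header
           * length field changes from oldlen to newlen.  Because oldlen > newlen,
           * delta = (u16)~oldlen + newlen stays a 16-bit quantity, so the sum
           * below cannot overflow 32 bits before folding. */
          static uint16_t pseudo_check_update(uint16_t check,
                                              uint16_t oldlen, uint16_t newlen)
          {
                  uint32_t sum = (uint16_t)~check + (uint16_t)~oldlen + newlen;
      
                  /* Fold the carries back in (end-around carry). */
                  sum = (sum & 0xffff) + (sum >> 16);
                  sum = (sum & 0xffff) + (sum >> 16);
                  return (uint16_t)~sum;
          }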
  16. 23 Jun 2006: 2 commits
    • [NET]: Add software TSOv4 · f4c50d99
      Committed by Herbert Xu
      This patch adds the GSO implementation for IPv4 TCP.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [NET]: Merge TSO/UFO fields in sk_buff · 7967168c
      Committed by Herbert Xu
      Having separate fields in sk_buff for TSO/UFO (tso_size/ufo_size) is not
      going to scale if we add any more segmentation methods (e.g., DCCP).  So
      let's merge them.
      
      They were used to tell the protocol of a packet.  This function has been
      subsumed by the new gso_type field.  This is essentially a set of netdev
      feature bits (shifted by 16 bits) that are required to process a specific
      skb.  As such, it's easy to tell whether a given device can process a GSO
      skb: you just have to AND the gso_type field with the netdev's features
      field.
      
      I've made gso_type a conjunction.  The idea is that you have a base type
      (e.g., SKB_GSO_TCPV4) that can be modified further to support new features.
      For example, if we add a hardware TSO type that supports ECN, such devices
      would declare NETIF_F_TSO | NETIF_F_TSO_ECN.  All TSO packets with CWR set
      would have a gso_type of SKB_GSO_TCPV4 | SKB_GSO_TCPV4_ECN, while all other
      TSO packets would be SKB_GSO_TCPV4.  This means that only the CWR packets
      need to be emulated in software.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
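      
      A minimal sketch of the capability test implied above: gso_type values are
      netdev feature bits shifted down by 16, so checking whether a device can
      handle a GSO skb is a single mask-and-compare.  The helper and macro names
      here are illustrative approximations, not the kernel's exact definitions:
      
          #define GSO_FEATURE_SHIFT 16   /* gso_type bits sit 16 bits below the
                                          * corresponding NETIF_F_* feature bits */
      
          /* Non-zero when every segmentation feature required by the skb
           * (its gso_type, shifted up) is present in the device's feature mask. */
          static inline int dev_can_gso(unsigned int dev_features,
                                        unsigned int gso_type)
          {
                  unsigned int required = gso_type << GSO_FEATURE_SHIFT;
      
                  return (dev_features & required) == required;
          }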
  17. 18 Jun 2006: 4 commits
  18. 04 May 2006: 1 commit
    • [TCP]: Fix sock_orphan deadlock · 75c2d907
      Committed by Herbert Xu
      Calling sock_orphan inside bh_lock_sock in tcp_close can lead to deadlocks.
      For example, the inet_diag code holds sk_callback_lock without disabling BH.
      If an inbound packet arrives during that admittedly tiny window, it will
      cause a deadlock on bh_lock_sock.  Another possible path would be through
      sock_wfree, if the network device driver frees the tx skb in process context
      with BH enabled.
      
      We can fix this by moving sock_orphan out of bh_lock_sock.
      
      The tricky bit is to work out when we need to destroy the socket
      ourselves and when it has already been destroyed by someone else.
      
      By moving sock_orphan before the release_sock we can solve this
      problem.  This is because as long as we own the socket lock its
      state cannot change.
      
      So we simply record the socket state before the release_sock
      and then check the state again after we regain the socket lock.
      If the socket state has transitioned to TCP_CLOSE in the meantime, we know
      that the socket has been destroyed.  Otherwise the socket is still ours to
      keep.
      
      Note that I've also moved the increment of the orphan count forward.
      This may look like a problem, as we increase it even if the socket is just
      about to be destroyed, in which case it will be decreased again.  However,
      this simply enlarges a window that already exists.  This also changes
      the orphan count test by one.
      
      Considering what the orphan count is meant to do this is no big deal.
      
      This problem was discovered by Ingo Molnar using his lock validator.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
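      
      A compressed sketch of the ordering this entry describes; it only mirrors the
      shape of the close path after the change (timers, FIN handling and error
      paths omitted), so treat the names and details as approximate:
      
          void tcp_close_sketch(struct sock *sk)
          {
                  int state;
      
                  lock_sock(sk);
                  /* ... receive-queue draining and FIN handling elided ... */
      
                  sock_orphan(sk);         /* now done outside bh_lock_sock;
                                            * the orphan count is bumped here too */
      
                  state = sk->sk_state;    /* remember state while we own the lock */
                  release_sock(sk);
      
                  local_bh_disable();
                  bh_lock_sock(sk);
      
                  /* Did a softirq or the backlog destroy the socket while we did
                   * not hold the lock?  Then its state is now TCP_CLOSE and it is
                   * no longer ours to tear down. */
                  if (state != TCP_CLOSE && sk->sk_state == TCP_CLOSE)
                          goto out;
      
                  /* ... otherwise finish the close ourselves ... */
          out:
                  bh_unlock_sock(sk);
                  local_bh_enable();
                  sock_put(sk);
          }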
  19. 26 Mar 2006: 1 commit
    • [PATCH] POLLRDHUP/EPOLLRDHUP handling for half-closed devices notifications · f348d70a
      Committed by Davide Libenzi
      Implement the half-closed device notification by adding a new POLLRDHUP
      (and its alias EPOLLRDHUP) bit to the existing poll/select sets.  Since
      changing the existing POLLHUP handling, which does not correctly report
      half-closed devices, was considered too risky, this implementation leaves
      the current POLLHUP reporting unchanged and simply adds a new bit that is
      set in the few places where it makes sense.  The same thing was discussed
      and conceptually agreed quite some time ago:
      
      http://lkml.org/lkml/2003/7/12/116
      
      Since this new event bit is added to the existing Linux poll infrastructure,
      even the existing poll/select system calls will be able to use it.  As for
      the existing POLLHUP handling, the patch leaves it as is.  The
      pollrdhup-2.6.16.rc5-0.10.diff defines POLLRDHUP for all the existing
      archs and sets the bit in the six relevant files.  The other attached diff
      is the simple change required to sys/epoll.h to add the EPOLLRDHUP
      definition.
      
      There is "a stupid program" to test POLLRDHUP delivery here:
      
       http://www.xmailserver.org/pollrdhup-test.c
      
      It tests poll(2), but since the delivery is the same, epoll(2) will work
      equally well.
      Signed-off-by: Davide Libenzi <davidel@xmailserver.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Michael Kerrisk <mtk-manpages@gmx.net>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
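      
      A small usage illustration of the new bit (a hypothetical snippet, not taken
      from the patch): on Linux, POLLRDHUP requires _GNU_SOURCE and reports that
      the peer has shut down its write side while buffered data may still be
      readable.
      
          #define _GNU_SOURCE
          #include <poll.h>
          #include <stdio.h>
      
          /* Wait on a connected socket and distinguish "peer half-closed"
           * (POLLRDHUP) from an ordinary readable event (POLLIN). */
          static void wait_for_peer(int connfd)
          {
                  struct pollfd pfd = { .fd = connfd, .events = POLLIN | POLLRDHUP };
      
                  if (poll(&pfd, 1, -1) > 0) {
                          if (pfd.revents & POLLRDHUP)
                                  printf("peer shut down its write side\n");
                          if (pfd.revents & POLLIN)
                                  printf("data available to read\n");
                  }
          }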
  20. 25 Mar 2006: 1 commit