1. 23 April 2009, 8 commits
  2. 22 April 2009, 2 commits
  3. 21 April 2009, 27 commits
  4. 20 April 2009, 3 commits
    • tun: Fix sk_sleep races when attaching/detaching · c40af84a
      Committed by Herbert Xu
      As the sk_sleep wait queue actually lives in tfile, which may be
      detached from the tun device, bad things will happen when we use
      sk_sleep after detaching.
      
      Since the tun device is the persistent data structure here (when
      requested by the user), it makes much more sense to have the wait
      queue live there.  There is no reason to have it in tfile at all
      since the only time we can wait is if we have a tun attached.
      In fact we already have a wait queue in tun_struct, so we might
      as well use it.
      Reported-by: Eric W. Biederman <ebiederm@xmission.com>
      Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Tested-by: Patrick McHardy <kaber@trash.net>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
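      A minimal sketch of the resulting layout, not the actual drivers/net/tun.c
      code: the structure and helper names below (tun_dev, tun_file, read_wait,
      the *_sketch functions) are simplified stand-ins.  The point is that the
      wait queue is owned by the persistent device structure, so neither poll()
      nor the receive-path wakeup depends on a tfile that may have been detached.

      #include <linux/wait.h>
      #include <linux/poll.h>

      struct tun_dev {                       /* persists while the device exists */
              wait_queue_head_t read_wait;   /* lives with the device, not the fd */
      };

      struct tun_file {                      /* may be detached/reattached at any time */
              struct tun_dev *tun;
      };

      static void tun_dev_init_sketch(struct tun_dev *tun)
      {
              init_waitqueue_head(&tun->read_wait);
      }

      /* poll() sleeps on the device-owned queue; the fd side keeps no queue state */
      static unsigned int tun_poll_sketch(struct tun_file *tfile, struct file *file,
                                          poll_table *wait)
      {
              struct tun_dev *tun = tfile->tun;

              if (!tun)
                      return POLLERR;
              poll_wait(file, &tun->read_wait, wait);
              return 0;
      }

      /* the receive path wakes readers through the same device-owned queue */
      static void tun_rx_wakeup_sketch(struct tun_dev *tun)
      {
              wake_up_interruptible(&tun->read_wait);
      }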
    • tun: Only free a netdev when all tun descriptors are closed · 9c3fea6a
      Committed by Herbert Xu
      The commit c70f1829 ("tun: Fix
      races between tun_net_close and free_netdev") fixed a race where
      an asynchronous deletion of a tun device can hose a poll(2) on
      a tun fd attached to that device.
      
      However, this came at the cost of moving the tun wait queue into
      the tun file data structure.  The problem with this is that it
      imposes restrictions on when and where the tun device can access
      the wait queue since the tun file may change at any time due to
      detaching and reattaching.
      
      In particular, now that we need to use the wait queue on the
      receive path it becomes difficult to properly synchronise this
      with the detachment of the tun device.
      
      This patch solves the original race in a different way.  Since
      the race exists only because the underlying memory gets freed, we
      can prevent it simply by ensuring that we don't do that until
      all tun descriptors ever attached to the device (even if they
      have since been detached, because they may still be sitting in
      poll) have been closed.
      
      This is done by reference counting the attached tun file
      descriptors.  The refcount in tun->sk has been reappropriated
      for this purpose since it was already being used for that, albeit
      from the opposite angle.
      
      Note that we no longer zero tfile->tun since tun_get will return
      NULL anyway after the refcount on tfile hits zero.  Instead it
      now indicates whether this file has ever been attached to a device.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
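      A rough sketch of the refcounting idea under invented names (tun_sketch,
      users, the *_sketch helpers); the commit itself reuses the existing
      refcount in tun->sk rather than adding a new counter.  Each attach takes
      a reference that survives a later detach and is dropped only on close(2),
      so the netdev is freed only after the last descriptor goes away.

      #include <linux/atomic.h>
      #include <linux/netdevice.h>

      struct tun_sketch {
              struct net_device *dev;
              atomic_t users;   /* descriptors ever attached and still open */
      };

      /* taken when a descriptor attaches; kept even if it later detaches */
      static void tun_attach_sketch(struct tun_sketch *tun)
      {
              atomic_inc(&tun->users);
      }

      /* dropped only on close(2); the last close frees the netdev, so a
       * descriptor still sitting in poll() can never touch freed memory */
      static void tun_fd_close_sketch(struct tun_sketch *tun)
      {
              if (atomic_dec_and_test(&tun->users))
                      free_netdev(tun->dev);
      }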
    • syncookies: remove last_synq_overflow from struct tcp_sock · a0f82f64
      Committed by Florian Westphal
      last_synq_overflow eats 4 or 8 bytes in struct tcp_sock, even
      though it is only used when a listening socket's SYN queue
      is full.
      
      We can (ab)use rx_opt.ts_recent_stamp to store the same information;
      it is not used otherwise as long as a socket is in listen state.
      
      Move linger2 around to avoid splitting struct mtu_probe
      across a cacheline boundary on 32-bit arches.
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
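      Roughly what the syncookie helpers could look like after this change,
      written as a hedged sketch (functions suffixed _sketch; the real helpers
      live in include/net/tcp.h): the rx_opt.ts_recent_stamp field, idle while
      the socket is in listen state, records when the SYN queue last overflowed.

      #include <linux/jiffies.h>
      #include <net/tcp.h>

      /* on SYN queue overflow, stamp the otherwise idle listen-state field */
      static inline void synq_overflow_sketch(struct sock *sk)
      {
              tcp_sk(sk)->rx_opt.ts_recent_stamp = jiffies;
      }

      /* true if the queue has not overflowed within the initial retransmit window */
      static inline int synq_no_recent_overflow_sketch(const struct sock *sk)
      {
              unsigned long last = tcp_sk(sk)->rx_opt.ts_recent_stamp;

              return time_after(jiffies, last + TCP_TIMEOUT_INIT);
      }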