1. 23 Apr 2009 (5 commits)
  2. 22 Apr 2009 (2 commits)
  3. 21 Apr 2009 (27 commits)
  4. 20 Apr 2009 (6 commits)
    • tun: Fix sk_sleep races when attaching/detaching · c40af84a
      Authored by Herbert Xu
      As the sk_sleep wait queue actually lives in tfile, which may be
      detached from the tun device, bad things will happen when we use
      sk_sleep after detaching.
      
      Since the tun device is the persistent data structure here (when
      requested by the user), it makes much more sense to have the wait
      queue live there.  There is no reason to have it in tfile at all,
      since the only time we can wait is when a tun is attached.
      In fact we already have a wait queue in tun_struct, so we might
      as well use it; a sketch of the resulting arrangement follows this entry.
      Reported-by: Eric W. Biederman <ebiederm@xmission.com>
      Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Tested-by: Patrick McHardy <kaber@trash.net>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
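
      A minimal sketch of the arrangement described above, assuming
      simplified versions of tun_struct and tun_chr_poll (names follow the
      driver, but the bodies are illustrative, and tun_get()/tun_put()
      stand in for whatever takes and drops a device reference):

          /* Simplified sketch -- not the actual driver source. */
          struct tun_struct {
                  wait_queue_head_t    read_wait;  /* lives as long as the device */
                  struct sk_buff_head  readq;
                  /* ... */
          };

          static unsigned int tun_chr_poll(struct file *file, poll_table *wait)
          {
                  struct tun_file *tfile = file->private_data;
                  struct tun_struct *tun = tun_get(tfile);  /* NULL if detached */
                  unsigned int mask = POLLOUT | POLLWRNORM;

                  if (!tun)
                          return POLLERR;

                  /* Safe: read_wait is owned by tun, not by the detachable
                   * tfile, so it cannot vanish while the device persists. */
                  poll_wait(file, &tun->read_wait, wait);

                  if (!skb_queue_empty(&tun->readq))
                          mask |= POLLIN | POLLRDNORM;

                  tun_put(tun);
                  return mask;
          }
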
    • tun: Only free a netdev when all tun descriptors are closed · 9c3fea6a
      Authored by Herbert Xu
      The commit c70f1829 ("tun: Fix
      races between tun_net_close and free_netdev") fixed a race where
      an asynchronous deletion of a tun device can hose a poll(2) on
      a tun fd attached to that device.
      
      However, this came at the cost of moving the tun wait queue into
      the tun file data structure.  The problem with this is that it
      imposes restrictions on when and where the tun device can access
      the wait queue since the tun file may change at any time due to
      detaching and reattaching.
      
      In particular, now that we need to use the wait queue on the
      receive path it becomes difficult to properly synchronise this
      with the detachment of the tun device.
      
      This patch solves the original race in a different way.  Since
      the race exists only because the underlying memory gets freed, we
      can prevent it simply by ensuring that we don't free it until
      all tun descriptors ever attached to the device (even if they
      have since been detached, because they may still be sitting in poll)
      have been closed.
      
      This is done by reference counting the attached tun file
      descriptors.  The refcount in tun->sk has been reappropriated
      for this purpose since it was already being used for that, albeit
      from the opposite angle; a sketch of the scheme follows this entry.
      
      Note that we no longer zero tfile->tun since tun_get will return
      NULL anyway after the refcount on tfile hits zero.  Instead it
      now indicates whether this file has ever been attached to a device.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
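
      A hedged sketch of the refcounting scheme (the sk_user_data
      back-pointer is an assumption made for illustration; the patch
      itself reuses the existing tun->sk refcount):

          /* Illustrative sketch -- not the exact patch. */
          static int tun_attach(struct tun_struct *tun, struct tun_file *tfile)
          {
                  tfile->tun = tun;
                  sock_hold(tun->sk);  /* one reference per attached descriptor */
                  return 0;
          }

          static int tun_chr_close(struct inode *inode, struct file *file)
          {
                  struct tun_file *tfile = file->private_data;

                  if (tfile->tun)
                          sock_put(tfile->tun->sk);  /* possibly the last ref */
                  kfree(tfile);
                  return 0;
          }

          static void tun_sock_destruct(struct sock *sk)
          {
                  struct tun_struct *tun = sk->sk_user_data;  /* assumed linkage */

                  /* Runs only once every descriptor ever attached has closed,
                   * so nobody can still be sleeping in poll() on this device. */
                  free_netdev(tun->dev);
          }
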
    • syncookies: remove last_synq_overflow from struct tcp_sock · a0f82f64
      Authored by Florian Westphal
      last_synq_overflow eats 4 or 8 bytes in struct tcp_sock, even
      though it is only used when a listening socket's syn queue
      is full.
      
      We can (ab)use rx_opt.ts_recent_stamp to store the same information;
      it is not otherwise used while a socket is in the listen state
      (helpers along these lines are sketched after this entry).
      
      Move linger2 around to avoid splitting struct mtu_probe across a
      cacheline boundary on 32-bit arches.
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
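
      The helpers for this look roughly like the following, where
      TCP_SYNCOOKIE_VALID is the window during which cookies remain
      valid; treat this as a sketch of the idea, not the verbatim patch:

          /* Record a listen-queue overflow in the otherwise-unused field. */
          static inline void tcp_synq_overflow(struct sock *sk)
          {
                  tcp_sk(sk)->rx_opt.ts_recent_stamp = jiffies;
          }

          /* True if the last overflow is old enough that cookies are
           * no longer being sent for it. */
          static inline int tcp_synq_no_recent_overflow(const struct sock *sk)
          {
                  unsigned long last_overflow = tcp_sk(sk)->rx_opt.ts_recent_stamp;

                  return time_after(jiffies, last_overflow + TCP_SYNCOOKIE_VALID);
          }
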
    • loopback: packet drops accounting · 7eebb0b2
      Authored by Eric Dumazet
      We can, in some situations, drop packets in netif_rx().
      
      The loopback driver does not report these (unlikely) drops in its stats,
      and it incorrectly updates the packet/byte counts for them.
      
      With this patch applied, "ifconfig lo" reports these drops, as in:
      
      # ifconfig lo
      lo        Link encap:Local Loopback
                inet addr:127.0.0.1  Mask:255.0.0.0
                UP LOOPBACK RUNNING  MTU:16436  Metric:1
                RX packets:692562900 errors:3228 dropped:3228 overruns:0 frame:0
                TX packets:692562900 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:2865674174 (2.6 GiB)  TX bytes:2865674174 (2.6 GiB)
      
      I initially chose to reflect those errors only in tx_dropped/tx_errors, but David
      convinced me that they are really RX errors, as loopback_xmit() really starts
      an RX process (calling eth_type_trans(), for example, which itself pulls the
      Ethernet header).
      
      These errors are accounted in rx_dropped/rx_errors; a sketch of the
      accounting follows this entry.
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
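
      A sketch of the accounting change, simplified from the description
      above (the pcpu_lstats structure and the ml_priv access are
      assumptions based on the driver of that era):

          /* Simplified sketch of loopback_xmit() with RX drop accounting. */
          static int loopback_xmit(struct sk_buff *skb, struct net_device *dev)
          {
                  struct pcpu_lstats *lb_stats;
                  int len;

                  skb_orphan(skb);
                  skb->protocol = eth_type_trans(skb, dev);  /* RX starts here */

                  lb_stats = per_cpu_ptr(dev->ml_priv, smp_processor_id());
                  len = skb->len;  /* read before netif_rx() may free the skb */
                  if (likely(netif_rx(skb) == NET_RX_SUCCESS)) {
                          lb_stats->bytes += len;
                          lb_stats->packets++;
                  } else {
                          lb_stats->drops++;  /* surfaced as RX dropped/errors */
                  }
                  return 0;
          }
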
    • net: Fix GRO for multiple page fragments · 5db8765a
      Authored by Ben Hutchings
      This loop over fragments in napi_fraginfo_skb() was "interesting".
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>