1. 02 Dec 2012 (3 commits)
  2. 30 Nov 2012 (2 commits)
  3. 28 Nov 2012 (4 commits)
  4. 27 Nov 2012 (1 commit)
    • atm: br2684: Fix excessive queue bloat · ae088d66
      Committed by David Woodhouse
      There's really no excuse for an additional wmem_default of buffering
      between the netdev queue and the ATM device. Two packets (one in-flight,
      and one ready to send) ought to be fine. It's not as if it should take
      long to get another from the netdev queue when we need it.
      
      If necessary we can make the queue space configurable later, but I don't
      think it's likely to be necessary.
      
      cf. commit 9d02daf7 (pppoatm: Fix
      excessive queue bloat) which did something very similar for PPPoATM.
      
      Note that there is a tremendously unlikely race condition which may
      result in qspace temporarily going negative. If a CPU running the
      br2684_pop() function goes off into the weeds for a long period of time
      after incrementing qspace to 1, but before calling netdev_wake_queue()...
      and another CPU ends up calling br2684_start_xmit() and *stopping* the
      queue again before the first CPU comes back, the netdev queue could
      end up being woken when qspace has already reached zero.
      
      An alternative approach to coping with this race would be to check in
      br2684_start_xmit() for qspace==0 and return NETDEV_TX_BUSY, but just
      using '> 0' and '< 1' for comparison instead of '== 0' and '!= 0' is
      simpler. It just warranted a mention of *why* we do it that way...
      
      Move the call to atmvcc->send() to happen *after* the accounting and
      potentially stopping the netdev queue, in br2684_xmit_vcc(). This matters
      if the ->send() call suffers an immediate failure, because it'll call
      br2684_pop() with the offending skb before returning. We want that to
      happen *after* we've done the initial accounting for the packet in
      question. Also make it return an appropriate success/failure indication
      while we're at it.
      
      Tested by running 'ping -l 1000 bottomless.aaisp.net.uk' from within my
      network, with only a single PPPoE-over-BR2684 link running. And after
      setting txqueuelen on the nas0 interface to something low (5, in fact).
      Before the patch, we'd see about 15 packets being queued and a resulting
      latency of ~56ms being reached. After the patch, we see only about 8,
      which is fairly much what we expect. And a max latency of ~36ms. On this
      OpenWRT box, wmem_default is 163840.
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
      Reviewed-by: Krzysztof Mazur <krzysiek@podlesie.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
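      A minimal sketch of the accounting pattern described above, assuming a
      simplified structure (example_vcc, qspace and the function names are
      illustrative, not the real br2684 code); the budget would start at two,
      e.g. atomic_set(&vcc->qspace, 2) when the VCC is attached:

      #include <linux/atomic.h>
      #include <linux/netdevice.h>
      #include <linux/skbuff.h>

      struct example_vcc {
              struct net_device *dev;         /* the bridged netdev */
              atomic_t qspace;                /* free slots, starts at 2 */
              int (*send)(struct example_vcc *vcc, struct sk_buff *skb);
      };

      static int example_xmit(struct example_vcc *vcc, struct sk_buff *skb)
      {
              /* Account (and possibly stop the queue) *before* ->send():
               * a synchronous send failure calls the pop handler, which
               * must then see consistent accounting. */
              if (atomic_dec_return(&vcc->qspace) < 1)
                      netif_stop_queue(vcc->dev);

              return vcc->send(vcc, skb);     /* success/failure back to caller */
      }

      static void example_pop(struct example_vcc *vcc, struct sk_buff *skb)
      {
              /* Comparing with '> 0' / '< 1' instead of exact equality
               * tolerates the rare race in which qspace briefly goes
               * negative; waking an already-awake queue is harmless. */
              if (atomic_inc_return(&vcc->qspace) > 0)
                      netif_wake_queue(vcc->dev);
      }

      Two slots rather than one keep the device busy: one packet in flight
      and one ready to hand over as soon as the TX-done path frees a slot.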
  5. 01 Sep 2012 (1 commit)
  6. 16 Aug 2012 (2 commits)
  7. 04 Jun 2012 (2 commits)
  8. 16 May 2012 (2 commits)
  9. 10 May 2012 (1 commit)
    • atm: Convert compare_ether_addr to ether_addr_equal · 150238eb
      Committed by Joe Perches
      Use the new bool function ether_addr_equal to add
      some clarity and reduce the likelihood for misuse
      of compare_ether_addr for sorting.
      
      Done via cocci script:
      
      $ cat compare_ether_addr.cocci
      @@
      expression a,b;
      @@
      -	!compare_ether_addr(a, b)
      +	ether_addr_equal(a, b)
      
      @@
      expression a,b;
      @@
      -	compare_ether_addr(a, b)
      +	!ether_addr_equal(a, b)
      
      @@
      expression a,b;
      @@
      -	!ether_addr_equal(a, b) == 0
      +	ether_addr_equal(a, b)
      
      @@
      expression a,b;
      @@
      -	!ether_addr_equal(a, b) != 0
      +	!ether_addr_equal(a, b)
      
      @@
      expression a,b;
      @@
      -	ether_addr_equal(a, b) == 0
      +	!ether_addr_equal(a, b)
      
      @@
      expression a,b;
      @@
      -	ether_addr_equal(a, b) != 0
      +	ether_addr_equal(a, b)
      
      @@
      expression a,b;
      @@
      -	!!ether_addr_equal(a, b)
      +	ether_addr_equal(a, b)
      Signed-off-by: Joe Perches <joe@perches.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
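      For illustration, the shape of a call site the script rewrites. The
      function below is hypothetical; ether_addr_equal() and eth_hdr() are
      real helpers from <linux/etherdevice.h> and <linux/if_ether.h>:

      #include <linux/etherdevice.h>
      #include <linux/netdevice.h>
      #include <linux/skbuff.h>

      /* Before the script: return !compare_ether_addr(eth->h_dest, dev->dev_addr);
       * After: the bool helper reads naturally and cannot be mistaken for a
       * memcmp-style comparison intended for sorting. */
      static bool example_is_for_us(const struct net_device *dev,
                                    const struct sk_buff *skb)
      {
              const struct ethhdr *eth = eth_hdr(skb);

              return ether_addr_equal(eth->h_dest, dev->dev_addr);
      }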
  10. 16 Apr 2012 (1 commit)
  11. 14 Apr 2012 (1 commit)
    • pppoatm: Fix excessive queue bloat · 9d02daf7
      Committed by David Woodhouse
      We discovered that PPPoATM has an excessively deep transmit queue. A
      queue the size of the default socket send buffer (wmem_default) is
      maintained between the PPP generic core and the ATM device.
      
      Fix it to queue a maximum of *two* packets. The one the ATM device is
      currently working on, and one more for the ATM driver to process
      immediately in its TX done interrupt handler. The PPP core is designed
      to feed packets to the channel with minimal latency, so that really
      ought to be enough to keep the ATM device busy.
      
      While we're at it, fix the fact that we were triggering the wakeup
      tasklet on *every* pppoatm_pop() call. The comment saying "this is
      inefficient, but doing it right is too hard" turns out to be overly
      pessimistic... I think :)
      
      On machines like the Traverse Geos, with a slow Geode CPU and two
      high-speed ADSL2+ interfaces, there were reports of extremely high CPU
      usage which could partly be attributed to the extra wakeups.
      
      (The wakeup handling could actually be made a whole lot easier if we
       stop checking sk->sk_sndbuf altogether. Given that we now only queue
       *two* packets ever, one wonders what the point is. As it is, you could
       already deadlock the thing by setting the sk_sndbuf to a value lower
       than the MTU of the device, and it'd just block for ever.)
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
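      A minimal sketch of the scheme described above, assuming illustrative
      names (example_pvcc, EXAMPLE_INFLIGHT_MAX, the BLOCKED bit); it is not
      the actual pppoatm code and it glosses over locking:

      #include <linux/atmdev.h>
      #include <linux/atomic.h>
      #include <linux/bitops.h>
      #include <linux/ppp_channel.h>
      #include <linux/skbuff.h>

      #define EXAMPLE_INFLIGHT_MAX    2       /* one being sent + one queued for TX-done */
      #define EXAMPLE_BLOCKED         0       /* bit: PPP core was refused a packet */

      struct example_pvcc {
              struct atm_vcc          *atmvcc;
              struct ppp_channel      chan;
              atomic_t                inflight;
              unsigned long           flags;
              void (*old_pop)(struct atm_vcc *vcc, struct sk_buff *skb);
      };

      static int example_send(struct example_pvcc *pvcc, struct sk_buff *skb)
      {
              if (atomic_read(&pvcc->inflight) >= EXAMPLE_INFLIGHT_MAX) {
                      /* Budget exhausted: refuse the packet and remember
                       * that the PPP core needs a wakeup later. */
                      set_bit(EXAMPLE_BLOCKED, &pvcc->flags);
                      return 0;               /* not consumed; PPP core retries on wakeup */
              }
              atomic_inc(&pvcc->inflight);
              pvcc->atmvcc->send(pvcc->atmvcc, skb);
              return 1;                       /* consumed */
      }

      static void example_pop(struct example_pvcc *pvcc, struct atm_vcc *vcc,
                              struct sk_buff *skb)
      {
              pvcc->old_pop(vcc, skb);
              atomic_dec(&pvcc->inflight);
              /* Wake the PPP core only if it was actually blocked, instead
               * of scheduling a wakeup tasklet on every single pop. */
              if (test_and_clear_bit(EXAMPLE_BLOCKED, &pvcc->flags))
                      ppp_output_wakeup(&pvcc->chan);
      }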
  12. 06 Apr 2012 (1 commit)
  13. 29 Mar 2012 (1 commit)
  14. 05 Mar 2012 (1 commit)
  15. 22 Feb 2012 (1 commit)
  16. 02 Feb 2012 (1 commit)
  17. 06 Dec 2011 (1 commit)
  18. 03 Dec 2011 (1 commit)
  19. 01 Dec 2011 (4 commits)
  20. 29 Nov 2011 (2 commits)
  21. 27 Nov 2011 (1 commit)
  22. 23 Nov 2011 (5 commits)
  23. 14 Nov 2011 (1 commit)
    • neigh: new unresolved queue limits · 8b5c171b
      Committed by Eric Dumazet
      On Wednesday, 09 November 2011 at 16:21 -0500, David Miller wrote:
      > From: David Miller <davem@davemloft.net>
      > Date: Wed, 09 Nov 2011 16:16:44 -0500 (EST)
      >
      > > From: Eric Dumazet <eric.dumazet@gmail.com>
      > > Date: Wed, 09 Nov 2011 12:14:09 +0100
      > >
      > >> unres_qlen is the number of frames we are able to queue per unresolved
      > >> neighbour. Its default value (3) was never changed and is responsible
      > >> for strange drops, especially if IP fragments are used, or multiple
      > >> sessions start in parallel. Even a single tcp flow can hit this limit.
      > >  ...
      > >
      > > Ok, I've applied this, let's see what happens :-)
      >
      > Early answer, build fails.
      >
      > Please test build this patch with DECNET enabled and resubmit.  The
      > decnet neigh layer still refers to the removed ->queue_len member.
      >
      > Thanks.
      
      Ouch, this was fixed on one machine yesterday, but not the other one I
      used this morning, sorry.
      
      [PATCH V5 net-next] neigh: new unresolved queue limits
      
      unres_qlen is the number of frames we are able to queue per unresolved
      neighbour. Its default value (3) was never changed and is responsible
      for strange drops, especially if IP fragments are used, or multiple
      sessions start in parallel. Even a single tcp flow can hit this limit.
      
      $ arp -d 192.168.20.108 ; ping -c 2 -s 8000 192.168.20.108
      PING 192.168.20.108 (192.168.20.108) 8000(8028) bytes of data.
      8008 bytes from 192.168.20.108: icmp_seq=2 ttl=64 time=0.322 ms
      Signed-off-by: David S. Miller <davem@davemloft.net>
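      The ping output above is the demonstration: the first 8000-byte echo
      request is split into multiple IP fragments, which overflow the
      three-frame unresolved queue while ARP resolution is still pending, so
      icmp_seq=1 never gets a reply. Below is a minimal sketch of one way to
      relax the limit, budgeting the unresolved queue by bytes and evicting
      the oldest frames; names are illustrative and this is not the actual
      neighbour-cache code:

      #include <linux/skbuff.h>

      struct example_neigh {
              struct sk_buff_head     arp_queue;              /* frames awaiting resolution */
              unsigned int            arp_queue_bytes;        /* current usage (truesize) */
              unsigned int            queue_len_bytes;        /* per-neighbour byte budget */
      };

      static void example_queue_unresolved(struct example_neigh *n,
                                           struct sk_buff *skb)
      {
              /* Evict the oldest frames until the new one fits the budget,
               * rather than capping the queue at a fixed count of 3. */
              while (n->arp_queue_bytes + skb->truesize > n->queue_len_bytes) {
                      struct sk_buff *old = __skb_dequeue(&n->arp_queue);

                      if (!old)
                              break;
                      n->arp_queue_bytes -= old->truesize;
                      kfree_skb(old);
              }
              __skb_queue_tail(&n->arp_queue, skb);
              n->arp_queue_bytes += skb->truesize;
      }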