1. 27 Nov 2012 (2 commits)
    • atm: br2684: Fix excessive queue bloat · ae088d66
      David Woodhouse committed
      There's really no excuse for an additional wmem_default of buffering
      between the netdev queue and the ATM device. Two packets (one in-flight,
      and one ready to send) ought to be fine. It's not as if it should take
      long to get another from the netdev queue when we need it.
      
      If necessary we can make the queue space configurable later, but I don't
      think it's likely to be necessary.
      
      cf. commit 9d02daf7 ("pppoatm: Fix excessive queue bloat"), which did
      something very similar for PPPoATM.
      
      Note that there is a tremendously unlikely race condition which may
      result in qspace temporarily going negative. If a CPU running the
      br2684_pop() function goes off into the weeds for a long period of time
      after incrementing qspace to 1, but before calling netif_wake_queue()...
      and another CPU ends up calling br2684_start_xmit() and *stopping* the
      queue again before the first CPU comes back, the netdev queue could
      end up being woken when qspace has already reached zero.
      
      An alternative approach to coping with this race would be to check in
      br2684_start_xmit() for qspace == 0 and return NETDEV_TX_BUSY, but simply
      using '> 0' and '< 1' for the comparisons instead of '!= 0' and '== 0' is
      simpler. It just warranted a mention of *why* we do it that way; a minimal
      sketch of the accounting follows.
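
      For illustration, here is a minimal sketch of that accounting in kernel
      style (the structure and helper names br2684_vcc_sketch, tx_account and
      tx_complete are illustrative, not the verbatim patch):

          #include <linux/atomic.h>
          #include <linux/netdevice.h>

          struct br2684_vcc_sketch {
                  struct net_device *device;
                  atomic_t qspace;        /* free TX slots, initialised to 2 */
          };

          /* Transmit path: claim a slot; stop the netdev queue when none
           * remain. '< 1' rather than '== 0' because qspace may transiently
           * go negative. */
          static void tx_account(struct br2684_vcc_sketch *brvcc)
          {
                  if (atomic_dec_return(&brvcc->qspace) < 1) {
                          netif_stop_queue(brvcc->device);
                          /* We might have raced with tx_complete(); re-check. */
                          if (atomic_read(&brvcc->qspace) > 0)
                                  netif_wake_queue(brvcc->device);
                  }
          }

          /* TX completion (the ->pop callback): release the slot and wake the
           * queue. '> 0' rather than '!= 0' so a transiently negative qspace
           * never wakes the queue prematurely. */
          static void tx_complete(struct br2684_vcc_sketch *brvcc)
          {
                  if (atomic_inc_return(&brvcc->qspace) > 0 &&
                      netif_queue_stopped(brvcc->device))
                          netif_wake_queue(brvcc->device);
          }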
      
      Move the call to atmvcc->send() to happen *after* the accounting and
      potentially stopping the netdev queue, in br2684_xmit_vcc(). This matters
      if the ->send() call suffers an immediate failure, because it'll call
      br2684_pop() with the offending skb before returning. We want that to
      happen *after* we've done the initial accounting for the packet in
      question. Also make it return an appropriate success/failure indication
      while we're at it.
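
      A sketch of the reordered transmit path, reusing the illustrative
      structure above (again simplified, and the success/failure return
      convention shown here is only indicative):

          #include <linux/atmdev.h>        /* struct atm_vcc */

          static int xmit_vcc_sketch(struct sk_buff *skb,
                                     struct atm_vcc *atmvcc,
                                     struct br2684_vcc_sketch *brvcc)
          {
                  /* Account for the packet and possibly stop the netdev
                   * queue BEFORE handing the skb to the ATM device. */
                  tx_account(brvcc);

                  /* If send() fails immediately, it calls the ->pop callback
                   * for this skb before returning; the accounting it undoes
                   * has now really been done. */
                  if (atmvcc->send(atmvcc, skb))
                          return 0;       /* failure */

                  return 1;               /* success */
          }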
      
      Tested by running 'ping -l 1000 bottomless.aaisp.net.uk' from within my
      network, with only a single PPPoE-over-BR2684 link running. And after
      setting txqueuelen on the nas0 interface to something low (5, in fact).
      Before the patch, we'd see about 15 packets queued, with latency reaching
      ~56ms. After the patch, we see only about 8 packets, which is pretty much
      what we expect, and a max latency of ~36ms. On this OpenWRT box,
      wmem_default is 163840.
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
      Reviewed-by: Krzysztof Mazur <krzysiek@podlesie.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • dsa: Hide core config options; make drivers select what they need · b3422a31
      Ben Hutchings committed
      Commit 82167cb8 ('net: dsa/slave: Fix
      compilation warnings') fixed one possible invalid configuration
      (NET_DSA enabled with no trailer formats) but added others: drivers
      can select NET_DSA without its dependencies being met.
      
      It's not very useful to make either the DSA core or the tagging
      formats manually selectable without a driver to use them, so (the
      resulting Kconfig shape is sketched after this list):
      
      1. Define a hidden HAVE_NET_DSA option and move the dependencies of
         NET_DSA to that.  While we're at it, drop the deprecated
         EXPERIMENTAL dependency.
      2. Make NET_DSA and the drivers dependent on HAVE_NET_DSA.
      3. Hide the tagging format options again.
      4. Make drivers select both NET_DSA and the appropriate tagging format
         option.
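
      A rough sketch of the resulting Kconfig pattern (option names follow the
      DSA subsystem, with the Marvell 88E6060 driver as an example; treat the
      exact entries as illustrative):

          # Hidden: true when the platform could support DSA at all.
          config HAVE_NET_DSA
                  def_bool y
                  depends on NETDEVICES && !S390

          # Hidden core and tagging-format options; nothing enables these
          # by hand.
          config NET_DSA
                  tristate
                  depends on HAVE_NET_DSA
                  select PHYLIB

          config NET_DSA_TAG_TRAILER
                  bool

          # Drivers are the visible entry points; each selects exactly what
          # it needs.
          config NET_DSA_MV88E6060
                  tristate "Marvell 88E6060 ethernet switch chip support"
                  depends on HAVE_NET_DSA
                  select NET_DSA
                  select NET_DSA_TAG_TRAILER
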
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
      Acked-by: Randy Dunlap <rdunlap@infradead.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 26 Nov 2012 (12 commits)
  3. 25 Nov 2012 (1 commit)
  4. 24 Nov 2012 (25 commits)