1. 03 Aug 2009, 1 commit
    • ppp: fix lost fragments in ppp_mp_explode() (resubmit) · a53a8b56
      Committed by Ben McKeegan
      This patch fixes the corner cases where the sum of the MTUs of the
      free channels (adjusted for fragmentation overhead) is less than the
      MTU of the PPP link.  There are at least three situations in which
      this case can arise:
      
      - some of the channels are busy
      
      - the multilink session is running in a degraded state (i.e. with fewer
      than its full complement of active channels)
      
      - by design, where the multilink protocol is used to artificially
      increase the effective MTU of a single link.
      
      Without this patch, at most 1 fragment is ever sent per free channel
      for a given PPP frame and any remaining part of the PPP frame that
      does not fit into those fragments is silently discarded.
      
      This patch restores the original behaviour, which was broken by
      commit 9c705260 ('ppp: ppp_mp_explode() redesign').  Once all free
      channels have been given a fragment, an additional fragment is
      queued to each available channel in turn, as many times as
      necessary, until the entire PPP frame has been consumed.
      Signed-off-by: Ben McKeegan <ben@netservers.co.uk>
      Signed-off-by: David S. Miller <davem@davemloft.net>
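      To make the restored behaviour concrete, here is a small userspace
      model of the fixed queueing loop (an illustrative sketch, not the
      kernel code; the channel MTUs and the frame length are made-up
      inputs):

          /* Round-robin fragment queueing, as restored by this patch. */
          #include <stdio.h>

          int main(void)
          {
                  int mtu[] = { 500, 500 };  /* per-channel fragment payload */
                  int nch = 2;
                  int frame = 1400;          /* frame larger than the sum of MTUs */
                  int i = 0, frag = 0;

                  /*
                   * Keep cycling over the free channels until the whole
                   * frame is consumed; before this fix the loop stopped
                   * after one pass, silently dropping the last 400 bytes.
                   */
                  while (frame > 0) {
                          int len = frame < mtu[i] ? frame : mtu[i];
                          printf("fragment %d: %d bytes -> channel %d\n",
                                 frag++, len, i);
                          frame -= len;
                          i = (i + 1) % nch;  /* next free channel, in turn */
                  }
                  return 0;
          }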
  2. 20 May 2009, 1 commit
  3. 14 Mar 2009, 1 commit
    • ppp: ppp_mp_explode() redesign · 9c705260
      Committed by Gabriele Paoloni
      I found that the PPP subsystem does not work properly when channels
      with different speeds are connected to the same bundle.
      
      Problem Description:
      
      As the "ppp_mp_explode" function fragments the sk_buff buffer evenly
      among the PPP channels that are connected to a certain PPP unit to
      make up a bundle, if we are transmitting using an upper layer protocol
      that requires an Ack before sending the next packet (like TCP/IP for
      example), we will have a bandwidth bottleneck on the slowest channel
      of the bundle.
      
      Let's clarify with an example. Consider a bundle made up of two PPP
      links: a slow link (10KB/sec) and a fast link (1000KB/sec), both
      running at full bandwidth. On top, a TCP/IP stack sends a 1000-byte
      sk_buff down to the PPP subsystem. The ppp_mp_explode() function
      divides the buffer into two fragments of 500 bytes each (neglecting
      all headers, CRC, flags, etc.). Before the TCP/IP stack sends out
      the next buffer, it must wait for the ACK from the remote peer,
      i.e. for both fragments to have been sent over the two PPP links,
      received by the remote peer and reassembled. The resulting
      behaviour is that, rather than a bundle working at 1010KB/sec (the
      sum of the channel bandwidths), we get a bundle working at 20KB/sec
      (twice the bandwidth of the slowest channel).
      
      
      Problem Solution:
      
      The problem has been solved by redesigning the ppp_mp_explode()
      function so that it splits the sk_buff according to the speeds of
      the underlying PPP channels (the speeds of the serial interfaces
      attached to those channels). Referring to the example above, the
      redesigned ppp_mp_explode() now divides the 1000-byte buffer into
      two fragments whose sizes are set according to the speeds of the
      channels they will be sent on (e.g. 10 bytes on the 10KB/sec
      channel and 990 bytes on the 1000KB/sec channel). The reworked
      function delivers the same performance as the original under
      optimal conditions (a bundle of PPP links all running at the same
      speed), while greatly improving performance on bundles made up of
      channels running at different speeds.
      Signed-off-by: Gabriele Paoloni <gabriele.paoloni@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
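      The proportional split can be modelled in a few lines of userspace C
      (a sketch using the commit's example numbers; the real function also
      accounts for headers, minimum fragment sizes and busy channels):

          /* Speed-proportional split across the bundle's channels. */
          #include <stdio.h>

          int main(void)
          {
                  int speed[] = { 10, 1000 };  /* channel speeds in KB/sec */
                  int nch = 2;
                  int frame = 1000;            /* bytes handed down by TCP/IP */
                  int total = 0, sent = 0;

                  for (int i = 0; i < nch; i++)
                          total += speed[i];

                  for (int i = 0; i < nch; i++) {
                          /* the last channel takes the remainder, so
                           * integer rounding never drops bytes */
                          int len = (i == nch - 1) ? frame - sent
                                                   : frame * speed[i] / total;
                          printf("channel %d (%d KB/s): %d bytes\n",
                                 i, speed[i], len);
                          sent += len;
                  }
                  return 0;
          }

      With these inputs the integer division yields 9 and 991 bytes,
      matching the commit's rounded 10/990 example.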
  4. 27 Feb 2009, 2 commits
  5. 18 Feb 2009, 1 commit
  6. 10 Feb 2009, 1 commit
  7. 22 Jan 2009, 1 commit
  8. 13 Jan 2009, 1 commit
    • net: ppp_generic - fix regressions caused by IDR conversion · 85997576
      Committed by Cyrill Gorcunov
      The commits:
      
      	7a95d267
      	("net: ppp_generic - use idr technique instead of cardmaps")
      
      	ab5024ab
      	("net: ppp_generic - use DEFINE_IDR for static initialization")
      
      introduced use of the IDR functionality but broke the userspace side.
      
      Before these commits it was possible to allocate a new ppp interface
      with a specified number; now that fails with EINVAL.  Fix it by
      trying to allocate the interface with the specified unit number and
      returning EEXIST if that fails, which allows pppd to ask us to
      allocate a new unit number instead.
      
      Also fix the messages printed when memory allocation fails: add the
      detail that it is the PPP module that is complaining.
      Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
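      The retry contract can be sketched against today's idr_alloc() API
      (the 2009 code used the older idr_get_new_above() interface, and the
      helper below is illustrative rather than the exact ppp_generic code):

          #include <linux/gfp.h>
          #include <linux/idr.h>

          /* Claim the unit number userspace asked for; unit == -1 means
           * "any free unit". */
          static int unit_get(struct idr *p, void *ptr, int unit)
          {
                  int err;

                  if (unit >= 0) {
                          /* exactly the requested unit; -ENOSPC means taken */
                          err = idr_alloc(p, ptr, unit, unit + 1, GFP_KERNEL);
                          if (err == -ENOSPC)
                                  return -EEXIST;  /* pppd retries with -1 */
                          return err;
                  }
                  /* no preference: hand out the lowest free unit number */
                  return idr_alloc(p, ptr, 0, 0, GFP_KERNEL);
          }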
  9. 19 Dec 2008, 2 commits
  10. 17 Dec 2008, 1 commit
  11. 21 Nov 2008, 1 commit
  12. 20 Nov 2008, 2 commits
  13. 04 Nov 2008, 1 commit
  14. 17 Oct 2008, 1 commit
  15. 16 Oct 2008, 1 commit
  16. 10 Oct 2008, 1 commit
  17. 23 Sep 2008, 1 commit
  18. 22 Sep 2008, 1 commit
  19. 27 Jul 2008, 1 commit
  20. 22 Jul 2008, 1 commit
  21. 21 Jun 2008, 2 commits
  22. 26 May 2008, 1 commit
  23. 14 May 2008, 1 commit
  24. 24 Apr 2008, 1 commit
  25. 29 Jan 2008, 1 commit
  26. 13 Nov 2007, 1 commit
  27. 17 Sep 2007, 2 commits
  28. 22 Aug 2007, 1 commit
  29. 20 Jul 2007, 1 commit
  30. 24 Jun 2007, 1 commit
  31. 09 May 2007, 1 commit
  32. 26 Apr 2007, 2 commits
  33. 26 Mar 2007, 1 commit
  34. 13 Feb 2007, 1 commit