1. 12 Jun 2006, 1 commit
    • [TCP]: continued: reno sacked_out count fix · 79320d7e
      Authored by Aki M Nyrhinen
      From: Aki M Nyrhinen <anyrhine@cs.helsinki.fi>
      
      IMHO the current fix to the problem (in_flight underflow in Reno)
      is incorrect: it treats the symptoms but ignores the underlying
      problem, which is timing out packets other than the head packet
      when we don't have SACK. I'll try to explain (sorry if I'm
      explaining the obvious).
      
      With SACK, scanning the retransmit queue for timed-out packets is
      fine, because we know exactly which packets in the queue the
      receiver has acknowledged.
      
      Without SACK, we know only how many packets in our retransmit
      queue the receiver has acknowledged, but not which ones.
      
      Think of a "typical" slow-start overshoot case where, for example,
      every third packet in a window gets lost because a router buffer
      fills up.
      
      With SACK, we check for timeouts only on every third packet (the
      rest have been SACKed). The packet counting works out, and if
      there is no reordering we retransmit exactly the packets that
      were lost.
      
      Without SACK, however, we check for a timeout on every packet and
      end up retransmitting consecutive packets in the retransmit queue.
      In our slow-start example, 2/3 of those retransmissions are
      unnecessary. These unnecessary retransmissions eat the congestion
      window and eventually prevent fast recovery from continuing, if
      enough packets were lost.
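      
      A minimal sketch of the distinction, assuming a hypothetical
      retransmit-queue structure (none of these names are the kernel's):
      
        struct seg {
            long sent_at;          /* when the segment was transmitted */
            int  sacked;           /* receiver has SACKed this segment */
            int  lost;             /* we have declared this segment lost */
            struct seg *next;
        };
      
        /* With SACK we may scan the whole queue: SACKed segments are
         * known to have arrived and are skipped.  Without SACK we may
         * only time out the head, because we cannot tell which later
         * segments the receiver already holds. */
        static void mark_timed_out(struct seg *q, long now, long rto,
                                   int have_sack)
        {
            struct seg *s;
      
            for (s = q; s; s = s->next) {
                if (now - s->sent_at > rto && !s->sacked)
                    s->lost = 1;
                if (!have_sack)
                    break;         /* Reno/NewReno: head only */
            }
        }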
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 17 May 2006, 1 commit
    • [TCP]: reno sacked_out count fix · 8872d8e1
      Authored by Angelo P. Castellani
      From: "Angelo P. Castellani" <angelo.castellani+lkml@gmail.com>
      
      Using NewReno, if an sk_buff is timed out and accounted as
      lost_out, it should also be removed from sacked_out.
      
      This is necessary because recovery via NewReno fast retransmit
      can take many RTTs, so an sk_buff's RTO can expire without the
      packet actually having been lost.
      
      left_out = sacked_out + lost_out
      in_flight = packets_out - left_out + retrans_out
      
      Using NewReno without this patch, under very heavy network losses,
      left_out becomes bigger than packets_out + retrans_out (!!).
      
      For this reason the unsigned integer in_flight wraps around to
      2^32 - something.
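      
      A small standalone illustration of the arithmetic; the counter
      values are invented, only the two formulas above come from the
      changelog:
      
        #include <stdio.h>
      
        int main(void)
        {
            /* Invented counters: a timed-out skb counted in both
             * sacked_out and lost_out inflates left_out. */
            unsigned int packets_out = 10, retrans_out = 0;
            unsigned int sacked_out = 8, lost_out = 6;
      
            unsigned int left_out  = sacked_out + lost_out;       /* 14 */
            unsigned int in_flight = packets_out - left_out + retrans_out;
      
            /* left_out > packets_out + retrans_out, so the unsigned
             * subtraction wraps: prints in_flight = 4294967292. */
            printf("in_flight = %u\n", in_flight);
            return 0;
        }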
      Signed-off-by: David S. Miller <davem@davemloft.net>
  3. 15 Apr 2006, 1 commit
    • [IPV4]: Possible cleanups. · 6c97e72a
      Authored by Adrian Bunk
      This patch contains the following possible cleanups:
      - make the following needlessly global function static:
        - arp.c: arp_rcv()
      - remove the following unused EXPORT_SYMBOL's:
        - devinet.c: devinet_ioctl
        - fib_frontend.c: ip_rt_ioctl
        - inet_hashtables.c: inet_bind_bucket_create
        - inet_hashtables.c: inet_bind_hash
        - tcp_input.c: sysctl_tcp_abc
        - tcp_ipv4.c: sysctl_tcp_tw_reuse
        - tcp_output.c: sysctl_tcp_mtu_probing
        - tcp_output.c: sysctl_tcp_base_mss
      Signed-off-by: Adrian Bunk <bunk@stusta.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  4. 21 Mar 2006, 2 commits
  5. 10 Feb 2006, 1 commit
  6. 10 Jan 2006, 1 commit
  7. 04 Jan 2006, 3 commits
  8. 16 Nov 2005, 1 commit
  9. 11 Nov 2005, 5 commits
  10. 28 Oct 2005, 1 commit
    • [TCP]: Clear stale pred_flags when snd_wnd changes · 2ad41065
      Authored by Herbert Xu
      This bug is responsible for the infamous "Treason uncloaked"
      messages that have been popping up everywhere since the printk
      was added. They have usually been blamed on foreign operating
      systems; however, some of the reports implicate Linux, since
      both systems were running Linux or the TCP connection was going
      across the loopback interface.
      
      In fact, there really is a bug in the Linux TCP header prediction
      code that has been there since at least 2.1.8. It was tracked
      down with help from Dale Blount.
      
      The effect of this bug ranges from harmless "Treason uncloaked"
      messages to hung or aborted TCP connections. The details of the
      bug and fix are as follows.
      
      When snd_wnd is updated, we only update pred_flags if
      tcp_fast_path_check succeeds.  When it fails (for example,
      when our rcvbuf is used up), we will leave pred_flags with
      an out-of-date snd_wnd value.
      
      When the out-of-date pred_flags happens to match the next incoming
      packet, we will again hit the fast path and use the current
      snd_wnd, which will be wrong.
      
      In the case of the treason messages, it just happens that the snd_wnd
      cached in pred_flags is zero while tp->snd_wnd is non-zero.  Therefore
      when a zero-window packet comes in we incorrectly conclude that the
      window is non-zero.
      
      In fact if the peer continues to send us zero-window pure ACKs we
      will continue making the same mistake.  It's only when the peer
      transmits a zero-window packet with data attached that we get a
      chance to snap out of it.  This is what triggers the treason
      message at the next retransmit timeout.
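      
      A toy model of the failure mode, assuming a simplified pred_flags
      that caches some header bits plus the advertised window; the bit
      layout and names here are invented, not the kernel's:
      
        #include <stdint.h>
        #include <stdio.h>
      
        static uint32_t pred_flags;   /* cached "expected header" word */
        static uint32_t snd_wnd;      /* current send window */
      
        static void fast_path_on(void)
        {
            pred_flags = 0xA0000000u | (snd_wnd & 0xFFFFu);
        }
      
        int main(void)
        {
            snd_wnd = 0;              /* peer advertised a zero window */
            fast_path_on();           /* ...and we cached it */
      
            snd_wnd = 5840;           /* later window update; suppose the
                                         fast-path check failed, so
                                         pred_flags was not refreshed */
      
            uint32_t hdr = 0xA0000000u | 0;   /* zero-window pure ACK */
            if (hdr == pred_flags)
                printf("fast path hit: using snd_wnd=%u, not 0\n", snd_wnd);
      
            /* The fix: zap pred_flags whenever snd_wnd changes, forcing
             * the slow path until the cache is re-validated. */
            pred_flags = 0;
            return 0;
        }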
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
  11. 30 Sep 2005, 1 commit
    • [TCP]: Don't over-clamp window in tcp_clamp_window() · 09e9ec87
      Authored by Alexey Kuznetsov
      From: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
      
      Handle better the case where the sender sends full sized
      frames initially, then moves to a mode where it trickles
      out small amounts of data at a time.
      
      This known problem is even mentioned in the comments
      above tcp_grow_window() in tcp_input.c, specifically:
      
      ...
       * The scheme does not work when sender sends good segments opening
       * window and then starts to feed us spagetti. But it should work
       * in common situations. Otherwise, we have to rely on queue collapsing.
      ...
      
      When the sender sends full-sized frames, the "struct sk_buff"
      overhead from each packet is small, so we advertise a larger
      window. If the sender moves to a mode where small segments are
      sent, this ratio tilts to the other extreme and we start
      overrunning the socket buffer space.
      
      tcp_clamp_window() tries to address this, but its clamping of
      tp->window_clamp is a wee bit too aggressive for this particular
      case.
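      
      A rough back-of-the-envelope sketch of the ratio involved; the
      overhead and payload numbers are illustrative only:
      
        #include <stdio.h>
      
        int main(void)
        {
            const int overhead = 256;   /* per-skb metadata, illustrative */
            int full = 1460;            /* full-sized segment payload */
            int tiny = 36;              /* trickled small-segment payload */
      
            /* Share of buffer memory that is actual data: ~85% for full
             * frames, ~12% for tiny ones.  A window advertised under the
             * first ratio overruns the socket buffer under the second. */
            printf("full: %d%% data\n", 100 * full / (full + overhead));
            printf("tiny: %d%% data\n", 100 * tiny / (tiny + overhead));
            return 0;
        }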
      
      Fix confirmed by Ion Badulescu.
      Signed-off-by: David S. Miller <davem@davemloft.net>
  12. 15 Sep 2005, 1 commit
  13. 02 Sep 2005, 1 commit
  14. 30 Aug 2005, 7 commits
  15. 09 Jul 2005, 1 commit
  16. 06 Jul 2005, 6 commits
    • [TCP]: Move to new TSO segmenting scheme. · c1b4a7e6
      Authored by David S. Miller
      Make TSO segment transmit size decisions at send time not earlier.
      
      The basic scheme is that we try to build as large a TSO frame as
      possible when pulling in the user data, but the size of the TSO frame
      output to the card is determined at transmit time.
      
      This is guided by tp->xmit_size_goal.  It is always set to a multiple
      of MSS and tells sendmsg/sendpage how large an SKB to try and build.
      
      Later, tcp_write_xmit() and tcp_push_one() chop up the packet
      when necessary and conditions warrant. These routines can also
      decide to "defer" in order to wait for more ACKs to arrive,
      allowing larger TSO frames to be emitted.
      
      A general observation is that TSO elongates the pipe, thus requiring a
      larger congestion window and larger buffering especially at the sender
      side.  Therefore, it is important that applications 1) get a large
      enough socket send buffer (this is accomplished by our dynamic send
      buffer expansion code) 2) do large enough writes.
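      
      A sketch of the size-goal rounding described above; the helper is
      hypothetical, only the multiple-of-MSS rule comes from the
      changelog:
      
        #include <stdio.h>
      
        /* Hypothetical: round a desired TSO frame size down to a
         * multiple of MSS, never below a single MSS. */
        static unsigned int size_goal(unsigned int mss, unsigned int want)
        {
            if (want < mss)
                return mss;
            return (want / mss) * mss;
        }
      
        int main(void)
        {
            /* 65535 rounds down to 45 * 1448 = 65160 */
            printf("goal = %u\n", size_goal(1448, 65535));
            return 0;
        }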
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [TCP]: Break out send buffer expansion test. · 0d9901df
      Authored by David S. Miller
      This makes it easier to understand, and allows easier
      tweaking of the heuristic later on.
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [TCP]: Do not call tcp_tso_acked() if no work to do. · cb83199a
      Authored by David S. Miller
      In tcp_clean_rtx_queue(), if the TSO packet is not even partially
      acked, do not waste time calling tcp_tso_acked().
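      
      A sketch of the guard, using wrap-safe sequence comparison; the
      helper name partially_acked() is hypothetical:
      
        #include <stdint.h>
      
        /* Standard wrap-safe TCP sequence comparison. */
        static int after(uint32_t a, uint32_t b)
        {
            return (int32_t)(a - b) > 0;
        }
      
        /* An skb straddling snd_una is only *partially* acked when
         * snd_una has advanced past its first byte; otherwise there is
         * nothing for tcp_tso_acked() to trim. */
        static int partially_acked(uint32_t seq, uint32_t end_seq,
                                   uint32_t snd_una)
        {
            return after(end_seq, snd_una) && after(snd_una, seq);
        }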
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [TCP]: Kill bogus comment above tcp_tso_acked(). · a5647696
      Authored by David S. Miller
      Everything stated there is out of date. tcp_trim_skb() now
      adjusts the available socket send buffer space and skb->truesize.
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [TCP]: Fix __tcp_push_pending_frames() 'nonagle' handling. · 55c97f3e
      Authored by David S. Miller
      'nonagle' should be passed to the tcp_snd_test() function
      as 'TCP_NAGLE_PUSH' if we are checking an SKB not at the
      tail of the write_queue.  This is because Nagle does not
      apply to such frames since we cannot possibly tack more
      data onto them.
      
      However, while doing this, __tcp_push_pending_frames() makes all
      of the packets in the write_queue use this modified 'nonagle'
      value.
      
      Fix the bug and simplify this function by just calling
      tcp_write_xmit() directly if sk_send_head is non-NULL.
      
      As a result, we can now make tcp_data_snd_check() just call
      tcp_push_pending_frames() instead of the specialized
      __tcp_data_snd_check().
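      
      A sketch of the rule, reusing the kernel's nonagle flag names
      (the helper effective_nonagle() is hypothetical):
      
        enum {
            TCP_NAGLE_OFF  = 1,    /* Nagle disabled on the socket */
            TCP_NAGLE_CORK = 2,    /* socket is corked             */
            TCP_NAGLE_PUSH = 4,    /* force-push, bypass Nagle     */
        };
      
        /* Nagle can only ever delay the tail of the write queue: an
         * earlier skb can never gain more data, so test it as pushed. */
        static int effective_nonagle(int is_tail, int nonagle)
        {
            return is_tail ? nonagle : TCP_NAGLE_PUSH;
        }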
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [TCP]: Move __tcp_data_snd_check into tcp_output.c · 84d3e7b9
      Authored by David S. Miller
      It reimplements portions of tcp_snd_check(), so if we move it to
      tcp_output.c we can consolidate its logic much more easily in a
      later change.
      Signed-off-by: David S. Miller <davem@davemloft.net>
  17. 24 Jun 2005, 1 commit
  18. 24 May 2005, 1 commit
    • [TCP]: Fix stretch ACK performance killer when doing ucopy. · 31432412
      Authored by David S. Miller
      When we are doing ucopy, we try to defer ACK generation until
      cleanup_rbuf(). This works very well most of the time, but if the
      ucopy prequeue is large, this ACKing behavior kills performance.
      
      With TSO, it is possible to fill the prequeue so much that by the
      time the ACK is sent and gets back to the sender, most of the
      window has emptied of data and performance suffers significantly.
      
      This behavior does help in some cases, so we should think about
      re-enabling this trick in the future, using some kind of limit in
      order to avoid the bug case.
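      
      The changelog only hints at "some kind of limit"; a sketch of one
      possible shape, with an entirely invented threshold:
      
        /* Invented heuristic: defer the ACK only while the prequeue is
         * small relative to the receive window, so deferral cannot
         * drain most of the sender's window before an ACK goes out. */
        static int should_defer_ack(unsigned int prequeue_bytes,
                                    unsigned int rcv_wnd)
        {
            return prequeue_bytes < rcv_wnd / 4;
        }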
      Signed-off-by: David S. Miller <davem@davemloft.net>
  19. 06 May 2005, 1 commit
  20. 26 Apr 2005, 1 commit
  21. 17 Apr 2005, 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Authored by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!