1. 11 Nov, 2005 1 commit
  2. 28 Oct, 2005 1 commit
    • [TCP]: Clear stale pred_flags when snd_wnd changes · 2ad41065
      Herbert Xu authored
      This bug is responsible for causing the infamous "Treason uncloaked"
      messages that have been popping up everywhere since the printk was added.
      It has usually been blamed on foreign operating systems.  However,
      some of those reports implicate Linux as both systems are running
      Linux or the TCP connection is going across the loopback interface.
      
      In fact, there really is a bug in the Linux TCP header prediction code
      that's been there since at least 2.1.8.  This bug was tracked down with
      help from Dale Blount.
      
      The effect of this bug ranges from harmless "Treason uncloaked"
      messages to hung/aborted TCP connections.  The details of the bug
      and fix are as follows.
      
      When snd_wnd is updated, we only update pred_flags if
      tcp_fast_path_check succeeds.  When it fails (for example,
      when our rcvbuf is used up), we will leave pred_flags with
      an out-of-date snd_wnd value.
      
      When the out-of-date pred_flags happens to match the next incoming
      packet, we will again hit the fast path and use the current snd_wnd,
      which will be wrong.
      
      In the case of the treason messages, it just happens that the snd_wnd
      cached in pred_flags is zero while tp->snd_wnd is non-zero.  Therefore
      when a zero-window packet comes in we incorrectly conclude that the
      window is non-zero.
      
      In fact if the peer continues to send us zero-window pure ACKs we
      will continue making the same mistake.  It's only when the peer
      transmits a zero-window packet with data attached that we get a
      chance to snap out of it.  This is what triggers the treason
      message at the next retransmit timeout.
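      
      A minimal user-space sketch of the fix (an editor's illustration with
      invented names, not the kernel's code): once tcp_fast_path_check()
      fails after a window change, pred_flags must be zeroed so the next
      segment takes the slow path, instead of being left to match a header
      that still encodes the old window.
      
        #include <stdint.h>
        #include <stdio.h>
        
        /* Simplified model of the header-prediction state; field names
         * follow the kernel's tcp_sock, everything else is a stand-in. */
        struct tp_model {
            uint32_t snd_wnd;     /* current send window                    */
            uint32_t pred_flags;  /* cached "expected header", incl. window */
        };
        
        /* Stand-in for the real flag packing: the window is baked in. */
        static uint32_t build_pred_flags(uint32_t snd_wnd)
        {
            return 0x80000000u | snd_wnd;
        }
        
        /* Called on a window update; fast_path_ok models
         * tcp_fast_path_check() succeeding (e.g. rcvbuf not used up). */
        static void update_window(struct tp_model *tp, uint32_t wnd,
                                  int fast_path_ok)
        {
            tp->snd_wnd = wnd;
            if (fast_path_ok)
                tp->pred_flags = build_pred_flags(wnd);
            else
                tp->pred_flags = 0;   /* the fix: invalidate, never stale */
        }
        
        int main(void)
        {
            struct tp_model tp = { 0, 0 };
            update_window(&tp, 16384, 0);  /* check fails: slow path next */
            printf("pred_flags=%#x\n", tp.pred_flags);
            return 0;
        }
      
      Before the fix, the failure branch simply left pred_flags untouched,
      which is exactly the stale state described above.
      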
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
  3. 30 Sep, 2005 1 commit
    • [TCP]: Don't over-clamp window in tcp_clamp_window() · 09e9ec87
      Alexey Kuznetsov authored
      From: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
      
      Better handle the case where the sender sends full-sized
      frames initially, then moves to a mode where it trickles
      out small amounts of data at a time.
      
      This known problem is even mentioned in the comments
      above tcp_grow_window() in tcp_input.c, specifically:
      
      ...
       * The scheme does not work when sender sends good segments opening
       * window and then starts to feed us spagetti. But it should work
       * in common situations. Otherwise, we have to rely on queue collapsing.
      ...
      
      When the sender gives full-sized frames, the "struct sk_buff" overhead
      from each packet is small.  So we'll advertise a larger window.
      If the sender moves to a mode where small segments are sent, this
      ratio becomes tilted to the other extreme and we start overrunning
      the socket buffer space.
      
      tcp_clamp_window() tries to address this, but its clamping of
      tp->window_clamp is a wee bit too aggressive for this particular case.
      
      Fix confirmed by Ion Badulescu.
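      
      A self-contained sketch of the overhead ratio at play (the constant
      is invented for illustration; the kernel tracks real per-packet
      overhead via skb->truesize): bookkeeping is negligible for full-sized
      frames but dominates for a trickle of tiny segments, which is how the
      receive queue can overrun the space the advertised window assumed.
      
        #include <stdio.h>
        
        /* Invented stand-in for per-skb bookkeeping overhead. */
        #define SKB_OVERHEAD 256
        
        /* truesize-style accounting: payload plus fixed per-packet cost. */
        static double buffer_cost_per_byte(int payload_len)
        {
            return (double)(payload_len + SKB_OVERHEAD) / payload_len;
        }
        
        int main(void)
        {
            printf("1448-byte segments: %.2fx buffer per data byte\n",
                   buffer_cost_per_byte(1448));
            printf("   8-byte segments: %.2fx buffer per data byte\n",
                   buffer_cost_per_byte(8));
            return 0;
        }
      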
      Signed-off-by: David S. Miller <davem@davemloft.net>
  4. 15 Sep, 2005 1 commit
  5. 02 Sep, 2005 1 commit
  6. 30 Aug, 2005 7 commits
  7. 09 Jul, 2005 1 commit
  8. 06 Jul, 2005 6 commits
    • [TCP]: Move to new TSO segmenting scheme. · c1b4a7e6
      David S. Miller authored
      Make TSO segment transmit size decisions at send time, not earlier.
      
      The basic scheme is that we try to build as large a TSO frame as
      possible when pulling in the user data, but the size of the TSO frame
      output to the card is determined at transmit time.
      
      This is guided by tp->xmit_size_goal.  It is always set to a multiple
      of MSS and tells sendmsg/sendpage how large an SKB to try and build.
      
      Later, tcp_write_xmit() and tcp_push_one() chop up the packet if
      necessary and conditions warrant.  These routines can also decide to
      "defer" in order to wait for more ACKs to arrive and thus allow larger
      TSO frames to be emitted.
      
      A general observation is that TSO elongates the pipe, thus requiring a
      larger congestion window and larger buffering, especially at the sender
      side.  Therefore, it is important that applications 1) get a large
      enough socket send buffer (this is accomplished by our dynamic send
      buffer expansion code) and 2) do large enough writes.
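      
      A rough model of the scheme (editor's sketch; the byte-based windows
      and helpers are invented simplifications of tcp_write_xmit() and
      friends): sendmsg builds SKBs up to a multiple-of-MSS goal, and the
      wire size is decided only at transmit time from whatever the windows
      then allow.
      
        #include <stdint.h>
        #include <stdio.h>
        
        /* Editor's simplification; windows are in bytes, not packets. */
        struct tso_model {
            uint32_t mss;
            uint32_t snd_cwnd_bytes;
            uint32_t snd_wnd_bytes;
        };
        
        /* sendmsg/sendpage time: build as large an SKB as allowed,
         * always a multiple of MSS. */
        static uint32_t xmit_size_goal(const struct tso_model *s,
                                       uint32_t max_tso)
        {
            uint32_t goal = max_tso - max_tso % s->mss;
            return goal ? goal : s->mss;
        }
        
        /* transmit time: the decision the commit moves here.  Returning 0
         * models "defer" - wait for more ACKs so a larger frame can go. */
        static uint32_t tso_wire_size(const struct tso_model *s,
                                      uint32_t skb_len)
        {
            uint32_t limit = s->snd_cwnd_bytes < s->snd_wnd_bytes ?
                             s->snd_cwnd_bytes : s->snd_wnd_bytes;
            if (skb_len > limit)
                skb_len = limit - limit % s->mss;
            return skb_len;
        }
        
        int main(void)
        {
            struct tso_model s = { 1448, 10 * 1448, 64 * 1024 };
            printf("goal=%u wire=%u\n",
                   xmit_size_goal(&s, 65536), tso_wire_size(&s, 65536));
            return 0;
        }
      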
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [TCP]: Break out send buffer expansion test. · 0d9901df
      David S. Miller authored
      This makes it easier to understand, and allows easier
      tweaking of the heuristic later on.
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [TCP]: Do not call tcp_tso_acked() if no work to do. · cb83199a
      David S. Miller authored
      In tcp_clean_rtx_queue(), if the TSO packet is not even partially
      acked, do not waste time calling tcp_tso_acked().
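      
      The guard is easy to picture (editor's sketch; real kernel code
      compares sequence numbers with the wraparound-safe before()/after()
      helpers, not plain comparisons as here):
      
        #include <stdint.h>
        #include <stdio.h>
        
        /* Illustrative only: snd_una is the first unacknowledged byte,
         * [seq, end_seq) is the span the TSO skb covers. */
        static int needs_tso_ack_work(uint32_t snd_una, uint32_t seq,
                                      uint32_t end_seq)
        {
            if (snd_una <= seq)         /* not even partially acked:     */
                return 0;               /* skip tcp_tso_acked() entirely */
            return snd_una < end_seq;   /* ACK lands inside the frame    */
        }
        
        int main(void)
        {
            printf("%d %d\n", needs_tso_ack_work(100, 100, 200),   /* 0 */
                              needs_tso_ack_work(150, 100, 200));  /* 1 */
            return 0;
        }
      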
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [TCP]: Kill bogus comment above tcp_tso_acked(). · a5647696
      David S. Miller authored
      Everything stated there is out of date.  tcp_trim_skb()
      does adjust the available socket send buffer space and
      skb->truesize now.
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [TCP]: Fix __tcp_push_pending_frames() 'nonagle' handling. · 55c97f3e
      David S. Miller authored
      'nonagle' should be passed to the tcp_snd_test() function
      as 'TCP_NAGLE_PUSH' if we are checking an SKB not at the
      tail of the write_queue.  This is because Nagle does not
      apply to such frames since we cannot possibly tack more
      data onto them.
      
      However, while doing this, __tcp_push_pending_frames() makes
      all of the packets in the write_queue use this modified
      'nonagle' value.
      
      Fix the bug and simplify this function by just calling
      tcp_write_xmit() directly if sk_send_head is non-NULL.
      
      As a result, we can now make tcp_data_snd_check() just call
      tcp_push_pending_frames() instead of the specialized
      __tcp_data_snd_check().
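      
      A sketch of the per-SKB override the fix implies (editor's
      illustration; the constant mirrors the kernel's TCP_NAGLE_PUSH but is
      defined locally): only the tail of the write_queue can still grow, so
      only the tail should see the caller's real 'nonagle' value.
      
        #include <stdbool.h>
        #include <stdio.h>
        
        #define TCP_NAGLE_PUSH 4   /* local stand-in for the kernel flag */
        
        /* Nagle can only delay the last frame in the queue; earlier SKBs
         * can never have more data tacked on, so test them as if pushed. */
        static int effective_nonagle(int nonagle, bool skb_is_tail)
        {
            return skb_is_tail ? nonagle : TCP_NAGLE_PUSH;
        }
        
        int main(void)
        {
            printf("tail=%d non-tail=%d\n",
                   effective_nonagle(0, true), effective_nonagle(0, false));
            return 0;
        }
      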
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [TCP]: Move __tcp_data_snd_check into tcp_output.c · 84d3e7b9
      David S. Miller authored
      It reimplements portions of tcp_snd_check(), so if we
      move it to tcp_output.c we can consolidate its logic
      much more easily in a later change.
      Signed-off-by: David S. Miller <davem@davemloft.net>
  9. 24 Jun, 2005 1 commit
  10. 24 May, 2005 1 commit
    • [TCP]: Fix stretch ACK performance killer when doing ucopy. · 31432412
      David S. Miller authored
      When we are doing ucopy, we try to defer the ACK generation to
      cleanup_rbuf().  This works very well most of the time, but if the
      ucopy prequeue is large, this ACKing behavior kills performance.
      
      With TSO, it is possible to fill the prequeue with so much data that
      by the time the ACK is sent and gets back to the sender, most of the
      window has emptied of data and performance suffers significantly.
      
      This behavior does help in some cases, so we should think about
      re-enabling this trick in the future, using some kind of limit in
      order to avoid the bug case.
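      
      Purely hypothetical, since the commit only speculates about "some
      kind of limit": a sketch of what a bounded deferral might look like,
      with an invented threshold and invented names.
      
        #include <stdbool.h>
        #include <stddef.h>
        #include <stdio.h>
        
        /* Invented policy: defer the ACK to cleanup_rbuf() only while the
         * prequeue stays small; a large prequeue means ACK immediately. */
        static bool defer_ack_to_cleanup(size_t prequeue_bytes,
                                         size_t rcv_mss)
        {
            return prequeue_bytes <= 2 * rcv_mss;  /* threshold made up */
        }
        
        int main(void)
        {
            printf("%d %d\n",
                   defer_ack_to_cleanup(1000, 1448),     /* 1: defer   */
                   defer_ack_to_cleanup(100000, 1448));  /* 0: ACK now */
            return 0;
        }
      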
      Signed-off-by: David S. Miller <davem@davemloft.net>
  11. 06 May, 2005 1 commit
  12. 26 Apr, 2005 1 commit
  13. 17 Apr, 2005 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Linus Torvalds authored
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!