1. 30 Aug, 2005: 7 commits
  2. 31 Jul, 2005: 1 commit
  3. 09 Jul, 2005: 2 commits
  4. 06 Jul, 2005: 7 commits
    • [TCP]: Move to new TSO segmenting scheme. · c1b4a7e6
      David S. Miller authored
      Make TSO segment transmit size decisions at send time, not earlier.
      
      The basic scheme is that we try to build as large a TSO frame as
      possible when pulling in the user data, but the size of the TSO frame
      output to the card is determined at transmit time.
      
      This is guided by tp->xmit_size_goal.  It is always set to a multiple
      of MSS and tells sendmsg/sendpage how large an SKB to try and build.
      
      Later, tcp_write_xmit() and tcp_push_one() chop up the packet when
      conditions warrant.  These routines can also decide to
      "defer" in order to wait for more ACKs to arrive and thus allow larger
      TSO frames to be emitted.
      
      A general observation is that TSO elongates the pipe, thus requiring a
      larger congestion window and larger buffering especially at the sender
      side.  Therefore, it is important that applications 1) get a large
      enough socket send buffer (this is accomplished by our dynamic send
      buffer expansion code) and 2) do large enough writes.
      Signed-off-by: David S. Miller <davem@davemloft.net>
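
      A small user-space model of the sizing rule described above may help; the
      field name xmit_size_goal comes from the commit text, but the rounding
      shown here is only an assumption about the general idea, not the kernel's
      exact computation:

          #include <stdio.h>

          /* tp->xmit_size_goal is kept a multiple of MSS, so an SKB built to
           * that size can later be chopped into exact MSS-sized TSO segments
           * by tcp_write_xmit()/tcp_push_one() at transmit time. */
          static unsigned int xmit_size_goal(unsigned int mss_now,
                                             unsigned int tso_max)
          {
              unsigned int goal = tso_max;        /* e.g. a 64KB device TSO limit */

              if (goal < mss_now)
                  goal = mss_now;                 /* never below a single segment */

              return (goal / mss_now) * mss_now;  /* round down to a multiple of MSS */
          }

          int main(void)
          {
              /* 65536 bytes holds 45 full 1448-byte segments -> goal = 65160 */
              printf("goal = %u\n", xmit_size_goal(1448, 65536));
              return 0;
          }
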
    • [TCP]: Fix __tcp_push_pending_frames() 'nonagle' handling. · 55c97f3e
      David S. Miller authored
      'nonagle' should be passed to the tcp_snd_test() function
      as 'TCP_NAGLE_PUSH' if we are checking an SKB not at the
      tail of the write_queue.  This is because Nagle does not
      apply to such frames since we cannot possibly tack more
      data onto them.
      
      However, while doing this __tcp_push_pending_frames() makes
      all of the packets in the write_queue use this modified
      'nonagle' value.
      
      Fix the bug and simplify this function by just calling
      tcp_write_xmit() directly if sk_send_head is non-NULL.
      
      As a result, we can now make tcp_data_snd_check() just call
      tcp_push_pending_frames() instead of the specialized
      __tcp_data_snd_check().
      Signed-off-by: David S. Miller <davem@davemloft.net>
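
      The simplified flow can be sketched with toy stand-ins for the kernel
      structures; the prototypes below are illustrative, not the exact kernel
      signatures:

          /* Minimal stand-ins, just enough to show the control flow. */
          struct sk_buff  { unsigned int len; };
          struct sock     { struct sk_buff *sk_send_head; };
          struct tcp_sock { unsigned int snd_cwnd; };

          /* Stub: the real tcp_write_xmit() walks the write queue and applies
           * the Nagle test itself, using TCP_NAGLE_PUSH for non-tail SKBs. */
          static int tcp_write_xmit(struct sock *sk, unsigned int cur_mss,
                                    int nonagle)
          {
              (void)sk; (void)cur_mss; (void)nonagle;
              return 0;
          }

          /* After the fix there is no per-queue 'nonagle' rewriting here: if a
           * send head exists, defer entirely to tcp_write_xmit(). */
          static void push_pending_frames(struct sock *sk, struct tcp_sock *tp,
                                          unsigned int cur_mss, int nonagle)
          {
              (void)tp;
              if (sk->sk_send_head)
                  tcp_write_xmit(sk, cur_mss, nonagle);
          }

      The point is simply that the queue walk, and the Nagle decision for
      non-tail SKBs, now live in one place.
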
    • [TCP]: Fix redundant calculations of tcp_current_mss() · a2e2a59c
      David S. Miller authored
      tcp_write_xmit() uses tcp_current_mss(), but some of its callers,
      namely __tcp_push_pending_frames(), already have this value
      available.
      
      While we're here, fix the "cur_mss" argument to be "unsigned int"
      instead of plain "unsigned".
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [TCP]: Kill extra cwnd validate in __tcp_push_pending_frames(). · a762a980
      David S. Miller authored
      The tcp_cwnd_validate() function should only be invoked
      if we actually send some frames, yet __tcp_push_pending_frames()
      will always invoke it.  tcp_write_xmit() does the call for us,
      so the call here can simply be removed.
      
      Also, tcp_write_xmit() can be marked static.
      Signed-off-by: David S. Miller <davem@davemloft.net>
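
      The intent can be modeled in a few lines; everything below is a
      hypothetical stand-in except the "validate only if we actually sent
      frames" shape described in the message:

          /* Toy model of tcp_write_xmit() after the change. */
          struct sock { int packets_to_send; };

          static int  can_send_one(struct sock *sk)      { return sk->packets_to_send > 0; }
          static void transmit_one(struct sock *sk)      { sk->packets_to_send--; }
          static void tcp_cwnd_validate(struct sock *sk) { (void)sk; /* revalidate cwnd */ }

          static int tcp_write_xmit_model(struct sock *sk)
          {
              int sent_pkts = 0;

              while (can_send_one(sk)) {
                  transmit_one(sk);
                  sent_pkts++;
              }

              /* Validate the congestion window only when frames were actually
               * emitted; __tcp_push_pending_frames() no longer calls it at all. */
              if (sent_pkts)
                  tcp_cwnd_validate(sk);

              return sent_pkts;
          }
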
    • [TCP]: Move __tcp_data_snd_check into tcp_output.c · 84d3e7b9
      David S. Miller authored
      It reimplements portions of tcp_snd_check(), so if we move it to
      tcp_output.c we can consolidate its logic much more easily in a
      later change.
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [TCP]: Move send test logic out of net/tcp.h · f6302d1d
      David S. Miller authored
      This just moves the code into tcp_output.c, no code logic changes are
      made by this patch.
      
      Using this as a baseline, we can begin to untangle the mess of
      comparisons for the Nagle test et al.  We will also be able to reduce
      all of the redundant computation that occurs when outputting data
      packets.
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [TCP]: Fix quick-ack decrementing with TSO. · fc6415bc
      David S. Miller authored
      On each packet output, we call tcp_dec_quickack_mode()
      if the ACK flag is set.  It drops tp->ack.quick until
      it hits zero, at which time we deflate the ATO value.
      
      When doing TSO, we are emitting multiple packets with the
      ACK flag set, so we should decrement tp->ack.quick by that many
      segments.
      
      Note that, unlike this case, tcp_enter_cwr() should not
      take tcp_skb_pcount(skb) into consideration.  That
      function readjusts tp->snd_cwnd once and moves into the
      TCP_CA_CWR state.
      Signed-off-by: David S. Miller <davem@davemloft.net>
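
      A compilable toy model of the counting described above; tp->ack.quick and
      tcp_skb_pcount() are named in the commit text, everything else is a
      stand-in:

          #include <stdio.h>

          struct toy_tp { unsigned int quick; };      /* models tp->ack.quick */

          /* Pre-TSO the counter dropped by one per output; a TSO frame carries
           * several ACK-bearing segments, so decrement by the segment count
           * (tcp_skb_pcount() in the kernel), clamping at zero. */
          static void dec_quickack_mode(struct toy_tp *tp, unsigned int pkts)
          {
              if (tp->quick > pkts)
                  tp->quick -= pkts;
              else
                  tp->quick = 0;                      /* hitting zero deflates the ATO */
          }

          int main(void)
          {
              struct toy_tp tp = { .quick = 8 };

              dec_quickack_mode(&tp, 3);              /* one TSO frame, 3 segments */
              printf("quick = %u\n", tp.quick);       /* -> 5 */
              return 0;
          }
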
  5. 24 Jun, 2005: 3 commits
  6. 19 Jun, 2005: 4 commits
  7. 06 May, 2005: 1 commit
  8. 25 Apr, 2005: 1 commit
    • [TCP]: skb pcount with MTU discovery · d5ac99a6
      David S. Miller authored
      The problem is that when doing MTU discovery, the too-large segments in
      the write queue will be calculated as having a pcount of >1.  When
      tcp_write_xmit() is trying to send, tcp_snd_test() fails the cwnd test
      when pcount > cwnd.
      
      The segments are eventually transmitted one at a time by keepalive, but
      this can take a long time.
      
      This patch checks if TSO is enabled when setting pcount.
      Signed-off-by: John Heffner <jheffner@psc.edu>
      Signed-off-by: David S. Miller <davem@davemloft.net>
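
      A toy illustration of the pcount rule; the helper below is hypothetical
      rather than the kernel's own code, and only the "TSO off means one
      packet" behaviour follows the description above:

          #include <stdio.h>

          /* An over-sized segment waiting for MTU-discovery retransmit should
           * only count as several packets (pcount > 1) when TSO is actually
           * enabled; otherwise it is one packet, so tcp_snd_test()'s cwnd
           * check cannot be starved by a huge pcount. */
          static unsigned int skb_pcount(unsigned int skb_len, unsigned int mss,
                                         int tso_enabled)
          {
              if (!tso_enabled || skb_len <= mss)
                  return 1;

              return (skb_len + mss - 1) / mss;       /* ceil(len / mss) */
          }

          int main(void)
          {
              printf("%u\n", skb_pcount(4000, 1460, 0));  /* TSO off -> 1 */
              printf("%u\n", skb_pcount(4000, 1460, 1));  /* TSO on  -> 3 */
              return 0;
          }
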
  9. 17 Apr, 2005: 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Linus Torvalds authored
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!