1. 09 Aug, 2005 · 1 commit
  2. 06 Aug, 2005 · 1 commit
  3. 05 Aug, 2005 · 3 commits
  4. 31 Jul, 2005 · 3 commits
  5. 28 Jul, 2005 · 3 commits
  6. 23 Jul, 2005 · 3 commits
  7. 22 Jul, 2005 · 1 commit
    • [NETFILTER]: ip_conntrack_expect_related must not free expectation · 4acdbdbe
      Committed by Rusty Russell
      If a connection tracking helper tells us to expect a connection, and
      we're already expecting that connection, we simply free the one they
      gave us and return success.
      
      The problem is that NAT helpers (e.g. FTP) have to allocate the
      expectation first (to see what port is available) then rewrite the
      packet.  If that rewrite fails, they try to remove the expectation,
      but it was freed in ip_conntrack_expect_related.
      
      This is one example of a larger problem: having registered the
      expectation, the pointer is no longer ours to use.  Reference counting
      is needed for ctnetlink anyway, so introduce it now.
      
      To have a single "put" path, we need to grab the reference to the
      connection on creation, rather than open-coding it in the caller.
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
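      A minimal user-space sketch of the refcounting pattern described above; the struct, field, and function names are illustrative, not the kernel's actual ip_conntrack API.  The point is that registration gives the tracking core its own reference, so a NAT helper that later fails to rewrite the packet can still release its pointer with a "put".

      #include <stdlib.h>

      /* Illustrative stand-in for the kernel's expectation object. */
      struct expect {
              int refcnt;
              /* ... tuple, timeout, helper data ... */
      };

      struct expect *expect_get(struct expect *exp)
      {
              exp->refcnt++;          /* the kernel would use atomic ops */
              return exp;
      }

      void expect_put(struct expect *exp)
      {
              if (--exp->refcnt == 0) /* last reference frees the object */
                      free(exp);
      }

      /* Registering: the core takes its own reference, so the caller's
       * pointer remains valid even when an identical expectation already
       * exists and nothing new is inserted. */
      int expect_related(struct expect *exp)
      {
              expect_get(exp);
              /* ... insert into expectation list, arm timeout ... */
              return 0;
      }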
  8. 20 Jul, 2005 · 3 commits
  9. 19 Jul, 2005 · 1 commit
  10. 13 Jul, 2005 · 1 commit
  11. 12 Jul, 2005 · 3 commits
  12. 09 Jul, 2005 · 8 commits
  13. 06 Jul, 2005 · 9 commits
    • [TCP]: Never TSO defer under periods of congestion. · 908a75c1
      Committed by David S. Miller
      Congestion window recovery after loss depends upon the fact
      that if we have a full MSS sized frame at the head of the
      send queue, we will send it.  TSO deferral can defeat the
      ACK clocking necessary to exit cleanly from recovery.
      Signed-off-by: David S. Miller <davem@davemloft.net>
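      A standalone sketch of the guard this describes; the state names and the helper are simplified, not the kernel's exact deferral code.  The idea is simply that TSO deferral is refused whenever the connection is not in its normal open state, so full frames at the head of the queue go out at once and keep the ACK clock running during recovery.

      #include <stdbool.h>

      enum ca_state { CA_OPEN, CA_RECOVERY, CA_LOSS };  /* simplified */

      /* Illustrative deferral decision. */
      bool tso_should_defer(enum ca_state state, bool head_is_full_sized)
      {
              if (state != CA_OPEN)
                      return false;   /* under congestion: send now */
              if (head_is_full_sized)
                      return false;   /* a full frame can go out as-is */
              return true;            /* otherwise wait for more ACKs/data */
      }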
    • [TCP]: Move to new TSO segmenting scheme. · c1b4a7e6
      Committed by David S. Miller
      Make TSO segment transmit size decisions at send time not earlier.
      
      The basic scheme is that we try to build as large a TSO frame as
      possible when pulling in the user data, but the size of the TSO frame
      output to the card is determined at transmit time.
      
      This is guided by tp->xmit_size_goal.  It is always set to a multiple
      of MSS and tells sendmsg/sendpage how large an SKB to try and build.
      
      Later, tcp_write_xmit() and tcp_push_one() chop up the packet if
      necessary and conditions warrant.  These routines can also decide to
      "defer" in order to wait for more ACKs to arrive and thus allow larger
      TSO frames to be emitted.
      
      A general observation is that TSO elongates the pipe, thus requiring a
      larger congestion window and larger buffering especially at the sender
      side.  Therefore, it is important that applications 1) get a large
      enough socket send buffer (this is accomplished by our dynamic send
      buffer expansion code) and 2) do large enough writes.
      Signed-off-by: David S. Miller <davem@davemloft.net>
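      A simplified sketch of the two-stage sizing the message describes; the function names are invented for illustration (the kernel keeps the build goal in tp->xmit_size_goal).  Data is accumulated up to a goal that is a multiple of MSS, and the amount actually handed to the card is decided at transmit time from the windows in force at that moment.

      /* Illustrative: how large an SKB to build at copy-in time. */
      unsigned int build_size_goal(unsigned int mss, unsigned int max_tso_bytes)
      {
              unsigned int goal = max_tso_bytes - (max_tso_bytes % mss);

              return goal ? goal : mss;       /* always a multiple of MSS */
      }

      /* Illustrative: how much of the queued data to emit at transmit time. */
      unsigned int transmit_size(unsigned int queued_bytes,
                                 unsigned int cwnd_bytes,
                                 unsigned int send_window_bytes)
      {
              unsigned int limit = cwnd_bytes < send_window_bytes ?
                                   cwnd_bytes : send_window_bytes;

              return queued_bytes < limit ? queued_bytes : limit;
      }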
    • [TCP]: Break out send buffer expansion test. · 0d9901df
      Committed by David S. Miller
      This makes it easier to understand, and allows easier
      tweaking of the heuristic later on.
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [TCP]: Do not call tcp_tso_acked() if no work to do. · cb83199a
      Committed by David S. Miller
      In tcp_clean_rtx_queue(), if the TSO packet is not even partially
      acked, do not waste time calling tcp_tso_acked().
      Signed-off-by: David S. Miller <davem@davemloft.net>
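      The shape of that early-out, as a standalone sketch with invented names (real TCP code compares sequence numbers with wraparound-safe helpers): the cleanup loop only enters the TSO accounting when the cumulative ACK actually covers part of the packet.

      #include <stdbool.h>
      #include <stdint.h>

      struct pkt {
              uint32_t seq;           /* first byte of this packet */
              uint32_t end_seq;       /* one past the last byte */
      };

      /* Illustrative: true only if at least some bytes are acknowledged,
       * so the more expensive TSO bookkeeping is worth entering at all. */
      bool worth_tso_accounting(const struct pkt *p, uint32_t snd_una)
      {
              return snd_una > p->seq;
      }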
    • [TCP]: Kill bogus comment above tcp_tso_acked(). · a5647696
      Committed by David S. Miller
      Everything stated there is out of date.  tcp_trim_skb()
      does adjust the available socket send buffer space and
      skb->truesize now.
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [TCP]: Fix send-side cpu utilization regression. · b4e26f5e
      Committed by David S. Miller
      Only put user data purely to pages when doing TSO.
      
      The extra page allocations cause two problems:
      
      1) Add the overhead of the page allocations themselves.
      2) Make us do small user copies when we get to the end
         of the TCP socket cache page.
      
      It is still beneficial to purely use pages for TSO,
      so we will do it for that case.
      Signed-off-by: David S. Miller <davem@davemloft.net>
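      A hedged sketch of the decision being restored; the helper name and its caller are illustrative, not the kernel's exact sendmsg path.  Returning zero means "place user data straight onto pages", which is only worth the extra page allocations when the device can do TSO.

      #include <stdbool.h>

      /* Illustrative: how many bytes to copy into the linear part of a new
       * buffer.  Pure page-based buffering is reserved for the TSO case. */
      int linear_copy_size(bool route_supports_tso, int mss)
      {
              return route_supports_tso ? 0 : mss;
      }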
    • [TCP]: Eliminate redundant computations in tcp_write_xmit(). · aa93466b
      Committed by David S. Miller
      tcp_snd_test() is run for every packet output by a single
      call to tcp_write_xmit(), but this is not necessary.
      
      For one, the congestion window space needs to be calculated
      only once, then used throughout the duration of the loop.
      
      This cleanup also makes experimenting with different TSO
      packetization schemes much easier.
      Signed-off-by: David S. Miller <davem@davemloft.net>
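      A standalone sketch of hoisting the congestion-window check out of the per-packet loop; names and units are illustrative.  The quota is computed once before the loop and then merely consumed, instead of being recomputed by a full per-packet test on every pass.

      /* Illustrative transmit loop with the window quota computed once. */
      unsigned int write_xmit(unsigned int cwnd_pkts,
                              unsigned int in_flight_pkts,
                              unsigned int queued_pkts)
      {
              unsigned int quota = cwnd_pkts > in_flight_pkts ?
                                   cwnd_pkts - in_flight_pkts : 0;
              unsigned int sent = 0;

              while (queued_pkts > 0 && quota > 0) {
                      /* ... Nagle and send-window checks would go here ... */
                      queued_pkts--;
                      quota--;
                      sent++;
              }
              return sent;
      }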
    • [TCP]: Break out tcp_snd_test() into its constituent parts. · 7f4dd0a9
      Committed by David S. Miller
      tcp_snd_test() does several different things, use inline
      functions to express this more clearly.
      
      1) It initializes the TSO count of the SKB, if necessary.
      2) It performs the Nagle test.
      3) It makes sure the congestion window is adhered to.
      4) It makes sure SKB fits into the send window.
      
      This cleanup also sets things up so that values like the
      available packets in the congestion window do not need
      to be calculated multiple times by packet-sending loops
      such as tcp_write_xmit().
      Signed-off-by: David S. Miller <davem@davemloft.net>
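      A sketch of the decomposition using invented names rather than the kernel's actual inline helpers; sequence-number wraparound is ignored for brevity.  Splitting the tests lets a sending loop evaluate each one independently and reuse the congestion-window result across packets.

      #include <stdbool.h>
      #include <stdint.h>

      /* 2) The Nagle test: a small frame is held back only while there is
       *    still unacknowledged data it might be coalesced behind. */
      bool nagle_allows_send(bool full_sized, bool nonagle, bool data_in_flight)
      {
              return full_sized || nonagle || !data_in_flight;
      }

      /* 3) The congestion-window check, expressed as a reusable quota. */
      unsigned int cwnd_quota(unsigned int cwnd_pkts, unsigned int in_flight_pkts)
      {
              return cwnd_pkts > in_flight_pkts ?
                     cwnd_pkts - in_flight_pkts : 0;
      }

      /* 4) The send-window check. */
      bool fits_send_window(uint32_t end_seq, uint32_t snd_una, uint32_t window)
      {
              return end_seq <= snd_una + window;
      }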
    • [TCP]: Fix __tcp_push_pending_frames() 'nonagle' handling. · 55c97f3e
      Committed by David S. Miller
      'nonagle' should be passed to the tcp_snd_test() function
      as 'TCP_NAGLE_PUSH' if we are checking an SKB not at the
      tail of the write_queue.  This is because Nagle does not
      apply to such frames since we cannot possibly tack more
      data onto them.
      
      However, while doing this, __tcp_push_pending_frames() makes
      all of the packets in the write_queue use this modified
      'nonagle' value.
      
      Fix the bug and simplify this function by just calling
      tcp_write_xmit() directly if sk_send_head is non-NULL.
      
      As a result, we can now make tcp_data_snd_check() just call
      tcp_push_pending_frames() instead of the specialized
      __tcp_data_snd_check().
      Signed-off-by: David S. Miller <davem@davemloft.net>
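      A minimal sketch of the per-SKB 'nonagle' choice the fix requires; the flag names are simplified stand-ins for the TCP_NAGLE_* values.  Only the frame at the tail of the write queue can still grow, so only it keeps the caller's setting; any earlier frame is treated as push-able.

      #include <stdbool.h>

      enum { NAGLE_OFF, NAGLE_CORK, NAGLE_PUSH };     /* simplified values */

      /* Illustrative: pick the 'nonagle' value to test a queued frame with. */
      int effective_nonagle(bool is_tail_of_queue, int caller_nonagle)
      {
              return is_tail_of_queue ? caller_nonagle : NAGLE_PUSH;
      }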