1. 29 Jan, 2008 (5 commits)
    • [TCP]: MTUprobe: prepare skb fields earlier · 50c4817e
      Committed by Ilpo Järvinen
      The skb fields had better be valid by the time the write_queue
      functions are called, once the changes that follow go in (see the
      sketch below).
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
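      A hedged sketch of the ordering this commit establishes in
      tcp_mtu_probe() (field names as in the 2.6.24-era tcp_output.c;
      treat the exact set of fields as illustrative rather than the
      verbatim diff):

          /* Fill in the probe skb's control-block fields first ... */
          TCP_SKB_CB(nskb)->seq = TCP_SKB_CB(skb)->seq;
          TCP_SKB_CB(nskb)->end_seq = TCP_SKB_CB(nskb)->seq + probe_size;
          TCP_SKB_CB(nskb)->flags = TCPCB_FLAG_ACK;
          TCP_SKB_CB(nskb)->sacked = 0;
          nskb->csum = 0;
          nskb->ip_summed = skb->ip_summed;

          /* ... so they are already valid when the skb is linked into
           * the write queue and any helper that inspects them runs. */
          tcp_insert_write_queue_before(nskb, skb, sk);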
    • [NET]: Eliminate unused argument from sk_stream_alloc_pskb · df97c708
      Committed by Pavel Emelyanov
      The 3rd argument is always zero (according to grep :). Eliminate
      it and merge the function with sk_stream_alloc_skb; a sketch of
      the call-site change follows this entry.
      
      This saves 44 more bytes, and together with the previous patch
      we have:
      
      add/remove: 1/0 grow/shrink: 0/8 up/down: 183/-751 (-568)
      function                                     old     new   delta
      sk_stream_alloc_skb                            -     183    +183
      ip_rt_init                                   529     525      -4
      arp_ignore                                   112     107      -5
      __inet_lookup_listener                       284     274     -10
      tcp_sendmsg                                 2583    2481    -102
      tcp_sendpage                                1449    1300    -149
      tso_fragment                                 417     258    -159
      tcp_fragment                                1149     988    -161
      __tcp_push_pending_frames                   1998    1837    -161
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
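      A hedged sketch of the change as seen at a call site (the
      surrounding variables are illustrative; the dropped argument is
      the point):

          /* Before: the 3rd 'mem' argument was always passed as zero. */
          skb = sk_stream_alloc_pskb(sk, size, 0, sk->sk_allocation);

          /* After: the dead parameter is gone and everything goes
           * through the single merged helper. */
          skb = sk_stream_alloc_skb(sk, size, sk->sk_allocation);

      Dropping the constant argument shrinks the helper's body and each
      caller's argument setup, which is plausibly where the net -568
      bytes in the size report above come from.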
    • [TCP]: Move FRTO checks out from write queue abstraction funcs · 8512430e
      Committed by Ilpo Järvinen
      A better place exists in update_send_head (other non-queue-related
      adjustments are done there as well), which is the only caller of
      tcp_advance_send_head now that the bogus call from mtu_probe is
      gone (see the sketch below).
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
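      A hedged sketch of the resulting shape of update_send_head()
      (close to the 2.6.25-era tcp_output.c, but treat it as
      illustrative): tcp_advance_send_head() becomes a pure queue
      operation again, and the F-RTO bookkeeping sits next to the other
      send-head adjustments.

          static void update_send_head(struct sock *sk, struct sk_buff *skb)
          {
                  struct tcp_sock *tp = tcp_sk(sk);

                  /* Pure write-queue operation, no F-RTO knowledge. */
                  tcp_advance_send_head(sk, skb);
                  tp->snd_nxt = TCP_SKB_CB(skb)->end_seq;

                  /* Don't override Nagle indefinitely with F-RTO. */
                  if (tp->frto_counter == 2)
                          tp->frto_counter = 3;
          }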
    • [TCP]: Rewrite SACK block processing & sack_recv_cache use · 68f8353b
      Committed by Ilpo Järvinen
      Key points of this patch:

        - When the new SACK information is advance-only, no skb
          processing is done below the previously discovered highest
          point.
        - Cases below the highest point are optimized too, since there
          is no need to always walk up to the highest point (which is
          very likely still present in that SACK). This is not entirely
          true, though, because the fastpath_skb_hint that could
          previously optimize those cases even better is being dropped.
          Whether that's significant, I'm not too sure.
      
      For now, skipping is implemented by walking. Combined with an
      RB-tree, all skipping would become fast regardless of window size
      (this can be done incrementally later).
      
      Previously, a number of cases in TCP SACK processing failed to
      take advantage of the costly information stored in
      sack_recv_cache, most importantly for expected events such as
      cumulative ACKs and new hole ACKs. Processing such ACKs resulted
      in rather long walks, building up latencies (which easily get
      nasty when the window is huge). Those latencies are often
      completely unnecessary relative to the amount of _new_
      information received; usually for a cumulative ACK there is no
      new information at all, yet TCP walks the whole queue
      unnecessarily, potentially taking a number of costly cache misses
      on the way, etc.!
      
      Since the inclusion of highest_sack, there is a lot of
      information that is very likely redundant (the SACK fastpath hint
      stuff, fackets_out, highest_sack), though there is no ultimate
      guarantee that they will remain the same the whole time (in all
      unearthly scenarios). Take advantage of this knowledge here: drop
      the fastpath hint and use direct access to the highest SACKed skb
      as a replacement.
      
      Effectively, the "special cased" fastpath is dropped. This change
      adds some complexity in order to introduce a "fastpath" with
      better coverage, though the added complexity should make TCP
      behave in a more cache-friendly way.
      
      The current ACK's SACK blocks are compared against each cached
      block individually, and only the ranges that are new are then
      scanned by the high-constant walk (sketched after this entry).
      For other parts of the write queue, even within previously known
      parts of the SACK blocks, a faster skip function is used (if
      necessary at all). In addition, whenever possible, TCP
      fast-forwards to the highest_sack skb that was made available by
      an earlier patch. In the typical case, nothing but this
      fast-forward and the mandatory markings after it occur, making
      the access pattern quite similar to the former fastpath "special
      case".
      
      DSACKs are a special case that must always be walked.
      
      The copying of the local blocks into recv_sack_cache could be
      more intelligent w.r.t. DSACKs, which are likely to be there only
      once, but that is left to a separate patch.
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
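      A hedged, heavily simplified sketch of the caching idea.
      sacktag_one_block() and walk_and_mark() are hypothetical
      stand-ins, and the flat sorted cache array abstracts
      recv_sack_cache; the real tcp_sacktag_write_queue() operates on
      the write queue with skb hints:

          #include <stdint.h>
          #include <stdio.h>

          struct sack_block { uint32_t start_seq, end_seq; };

          /* Sequence-space "less than" with wraparound, like the
           * kernel's before() helper. */
          static int before(uint32_t seq1, uint32_t seq2)
          {
                  return (int32_t)(seq1 - seq2) < 0;
          }

          /* Stand-in for the expensive per-skb tagging walk. */
          static void walk_and_mark(uint32_t start_seq, uint32_t end_seq)
          {
                  printf("full walk over [%u, %u)\n", start_seq, end_seq);
          }

          /* Walk only the parts of 'cur' that the cached blocks from
           * the previous ACK did not already cover; skip cheaply over
           * ranges that are already known.  Assumes 'cache' is sorted
           * by start_seq and non-overlapping. */
          static void sacktag_one_block(const struct sack_block *cur,
                                        const struct sack_block *cache,
                                        int ncache)
          {
                  uint32_t start = cur->start_seq;
                  int i;

                  for (i = 0; i < ncache && before(start, cur->end_seq); i++) {
                          const struct sack_block *c = &cache[i];

                          if (before(start, c->start_seq)) {
                                  /* Range below the cached block is new:
                                   * pay the full walking cost for it. */
                                  uint32_t stop = before(cur->end_seq, c->start_seq)
                                                  ? cur->end_seq : c->start_seq;
                                  walk_and_mark(start, stop);
                          }
                          if (before(start, c->end_seq))
                                  start = c->end_seq; /* already covered */
                  }
                  /* Anything extending past every cached block is new. */
                  if (before(start, cur->end_seq))
                          walk_and_mark(start, cur->end_seq);
          }

      With this shape, a cumulative ACK that repeats the previous ACK's
      SACK blocks triggers no full walk at all, which is exactly the
      expensive common case described above.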
    • [TCP]: Convert highest_sack to sk_buff to allow direct access · a47e5a98
      Committed by Ilpo Järvinen
      It is going to replace the sack fastpath hint quite soon... :-)
      (see the sketch below).
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
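      A hedged sketch of the representational change (struct layout
      abridged; the accessor is a hypothetical illustration of how the
      old sequence-number view stays recoverable):

          /* Before: only the sequence number was stored. */
          u32 highest_sack;        /* seq of the highest SACKed skb */

          /* After: a pointer to the skb itself, so callers can jump
           * straight to it in the write queue. */
          struct sk_buff *highest_sack;

          /* Hypothetical accessor; only meaningful while something
           * is actually SACKed. */
          static inline u32 tcp_highest_sack_seq(const struct tcp_sock *tp)
          {
                  return TCP_SKB_CB(tp->highest_sack)->seq;
          }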
  2. 05 Dec, 2007 (1 commit)
  3. 23 Nov, 2007 (2 commits)
  4. 20 Nov, 2007 (1 commit)
  5. 12 Oct, 2007 (1 commit)
  6. 11 Oct, 2007 (16 commits)
  7. 19 Jul, 2007 (1 commit)
  8. 11 Jul, 2007 (1 commit)
  9. 09 May, 2007 (1 commit)
  10. 30 Apr, 2007 (1 commit)
  11. 29 Apr, 2007 (1 commit)
  12. 26 Apr, 2007 (8 commits)
  13. 10 Apr, 2007 (1 commit)
    • [TCP]: slow_start_after_idle should influence cwnd validation too · 15d33c07
      Committed by David S. Miller
      For the cases that slow_start_after_idle is meant to deal with,
      it is almost a certainty that the congestion window tests will
      think the connection is application limited and we will thus
      decrease the cwnd there too. This defeats the whole point of
      setting slow_start_after_idle to zero.
      
      So test it there too; the sketch below shows where the check
      lands.
      
      We do not bypass the entire tcp_cwnd_validate() function, so that
      if the sysctl is changed we still have the validation state
      maintained.
      Signed-off-by: David S. Miller <davem@davemloft.net>
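      A hedged sketch of where the extra test lands (shape modeled on
      the 2.6.21-era tcp_cwnd_validate(); treat it as illustrative):
      the snd_cwnd_used bookkeeping keeps running unconditionally, and
      only the final decision to shrink cwnd is gated on the sysctl,
      which is the "validation state maintained" point above.

          static void tcp_cwnd_validate(struct sock *sk)
          {
                  struct tcp_sock *tp = tcp_sk(sk);

                  if (tp->packets_out >= tp->snd_cwnd) {
                          /* Network is fed fully. */
                          tp->snd_cwnd_used = 0;
                          tp->snd_cwnd_stamp = tcp_time_stamp;
                  } else {
                          /* Network starves: still track usage so the
                           * state stays fresh if the sysctl flips. */
                          if (tp->packets_out > tp->snd_cwnd_used)
                                  tp->snd_cwnd_used = tp->packets_out;

                          /* New: honor the sysctl before shrinking. */
                          if (sysctl_tcp_slow_start_after_idle &&
                              (s32)(tcp_time_stamp - tp->snd_cwnd_stamp) >=
                              inet_csk(sk)->icsk_rto)
                                  tcp_cwnd_application_limited(sk);
                  }
          }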