1. 29 Sep, 2006 (2 commits)
  2. 23 Sep, 2006 (3 commits)
  3. 18 Sep, 2006 (1 commit)
  4. 30 Aug, 2006 (1 commit)
  5. 05 Aug, 2006 (1 commit)
  6. 01 Jul, 2006 (1 commit)
  7. 30 Jun, 2006 (1 commit)
    • [NET]: Add ECN support for TSO · b0da8537
      Michael Chan committed
      In the current TSO implementation, NETIF_F_TSO and ECN cannot be
      turned on together in a TCP connection.  The problem is that most
      hardware that supports TSO does not handle CWR correctly if it is set
      in the TSO packet.  Correct handling requires that, when CWR is set
      in the TSO header, only the first resulting packet carries it.
      
      This patch adds the ability to turn on NETIF_F_TSO and ECN together,
      using GSO where necessary to handle TSO packets with CWR set.
      Hardware that handles CWR correctly can turn on NETIF_F_TSO_ECN in
      dev->features.
      
      All TSO packets with CWR set will have the SKB_GSO_TCPV4_ECN flag set.  If
      the output device does not have the NETIF_F_TSO_ECN feature set, GSO
      will split the packet up correctly with CWR only set in the first
      segment.
      
      With help from Herbert Xu <herbert@gondor.apana.org.au>.
      
      Since ECN can always be enabled with TSO, the SOCK_NO_LARGESEND sock
      flag is completely removed.
      Signed-off-by: Michael Chan <mchan@broadcom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b0da8537
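      A minimal sketch of the behaviour described above, using hypothetical
      names (TOY_F_TSO_ECN, fix_cwr_after_split) rather than the actual GSO
      code path: if the device advertises ECN-aware TSO it may keep CWR as-is,
      otherwise a software split must leave CWR set only on the first segment.

          /* Hypothetical sketch, not the kernel's GSO implementation. */
          #define TOY_F_TSO_ECN (1UL << 19)   /* stand-in for NETIF_F_TSO_ECN */

          static void fix_cwr_after_split(unsigned long dev_features,
                                          unsigned char *seg_cwr, int nr_segs)
          {
                  int i;

                  if (dev_features & TOY_F_TSO_ECN)
                          return;            /* hardware replicates CWR correctly */

                  for (i = 1; i < nr_segs; i++)
                          seg_cwr[i] = 0;    /* software split: CWR only on segment 0 */
          }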
  8. 23 Jun, 2006 (1 commit)
    • [NET]: Merge TSO/UFO fields in sk_buff · 7967168c
      Herbert Xu committed
      Having separate fields in sk_buff for TSO/UFO (tso_size/ufo_size) is not
      going to scale if we add any more segmentation methods (e.g., DCCP).  So
      let's merge them.
      
      They were also used to tell the protocol of a packet.  That role has
      been subsumed by the new gso_type field, which is essentially a set of
      netdev feature bits (shifted down by 16 bits) that are required to
      process a specific skb.  As such it's easy to tell whether a given
      device can process a GSO skb: you simply AND the gso_type field with
      the netdev's features field.
      
      I've made gso_type a conjunction.  The idea is that you have a base type
      (e.g., SKB_GSO_TCPV4) that can be modified further to support new features.
      For example, if we add a hardware TSO type that supports ECN, the driver
      would declare NETIF_F_TSO | NETIF_F_TSO_ECN.  All TSO packets with CWR set
      would have a gso_type of SKB_GSO_TCPV4 | SKB_GSO_TCPV4_ECN, while all other
      TSO packets would be SKB_GSO_TCPV4.  This means that only the CWR packets
      need to be emulated in software.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      7967168c
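      A minimal, self-contained sketch of the device check described above; the
      helper name device_can_gso() is invented, but the relation is the one the
      message states: gso_type holds the required NETIF_F_* bits shifted down
      by 16, so the device qualifies only if its feature word contains all of
      them.

          #define GSO_FEATURE_SHIFT 16   /* assumed shift between the two bit sets */

          static int device_can_gso(unsigned long dev_features, unsigned int gso_type)
          {
                  unsigned long required = (unsigned long)gso_type << GSO_FEATURE_SHIFT;

                  /* every required feature bit must be present in dev_features */
                  return (dev_features & required) == required;
          }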
  9. 18 Jun, 2006 (3 commits)
  10. 12 Jun, 2006 (1 commit)
    • [TCP]: continued: reno sacked_out count fix · 79320d7e
      Aki M Nyrhinen committed
      From: Aki M Nyrhinen <anyrhine@cs.helsinki.fi>
      
      IMHO the current fix to the problem (in_flight underflow in Reno)
      is incorrect: it treats the symptoms but ignores the cause.  The
      problem is timing out packets other than the head packet when we
      don't have SACK.  Let me try to explain (sorry if I am explaining
      the obvious).
      
      With SACK, scanning the retransmit queue for timed-out packets is
      fine because we know which packets in our retransmit queue have been
      acked by the receiver.
      
      Without SACK, we know only how many packets in our retransmit queue
      the receiver has acknowledged, but we have no idea which ones.
      
      Think of a "typical" slow-start overshoot case, where for example
      every third packet in a window gets lost because a router buffer
      fills up.
      
      With SACK, we check for timeouts only on every third packet (as the
      rest have been SACKed).  The packet counting works out, and if there
      is no reordering we'll retransmit exactly the packets that were
      lost.
      
      Without SACK, however, we check for a timeout on every packet and end
      up retransmitting consecutive packets in the retransmit queue.  In our
      slow-start example, 2/3 of those retransmissions are unnecessary.  These
      unnecessary retransmissions eat the congestion window and eventually
      prevent fast recovery from continuing, if enough packets were lost.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      79320d7e
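      A toy model of the rule argued for above; the names are invented for
      illustration, not taken from tcp_input.c.  With SACK any un-SACKed packet
      may be timed out, while without SACK only the head of the retransmit
      queue can safely be marked lost, which avoids retransmitting packets the
      receiver already holds.

          static int may_mark_lost(int has_sack, int index_in_queue, int sacked)
          {
                  if (has_sack)
                          return !sacked;        /* skip packets the receiver has SACKed */
                  return index_in_queue == 0;    /* non-SACK: time out the head only */
          }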
  11. 17 May, 2006 (1 commit)
    • [TCP]: reno sacked_out count fix · 8872d8e1
      Angelo P. Castellani committed
      From: "Angelo P. Castellani" <angelo.castellani+lkml@gmail.com>
      
      Using NewReno, if an sk_buff is timed out and accounted as lost_out,
      it should also be removed from sacked_out.
      
      This is necessary because recovery with NewReno fast retransmit can
      take many RTTs, so an sk_buff's RTO can expire without the packet
      actually having been lost.
      
      left_out = sacked_out + lost_out
      in_flight = packets_out - left_out + retrans_out
      
      Using NewReno without this patch, on very large network losses,
      left_out becomes bigger than packets_out + retrans_out (!!).
      
      For this reason the unsigned integer in_flight wraps around to a
      value just below 2^32.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      8872d8e1
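      The wraparound described above can be reproduced with a small userspace
      program; the counter values are invented, but they show how
      double-counting a timed-out skb in both sacked_out and lost_out pushes
      left_out past packets_out + retrans_out and wraps the unsigned in_flight
      computation.

          #include <stdio.h>

          int main(void)
          {
                  unsigned int packets_out = 10, retrans_out = 2;
                  unsigned int sacked_out  = 8,  lost_out    = 6;  /* double-counted skbs */
                  unsigned int left_out  = sacked_out + lost_out;              /* 14 */
                  unsigned int in_flight = packets_out - left_out + retrans_out;

                  printf("in_flight = %u\n", in_flight);   /* prints 4294967294 */
                  return 0;
          }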
  12. 15 Apr, 2006 (1 commit)
    • [IPV4]: Possible cleanups. · 6c97e72a
      Adrian Bunk committed
      This patch contains the following possible cleanups:
      - make the following needlessly global function static:
        - arp.c: arp_rcv()
      - remove the following unused EXPORT_SYMBOLs:
        - devinet.c: devinet_ioctl
        - fib_frontend.c: ip_rt_ioctl
        - inet_hashtables.c: inet_bind_bucket_create
        - inet_hashtables.c: inet_bind_hash
        - tcp_input.c: sysctl_tcp_abc
        - tcp_ipv4.c: sysctl_tcp_tw_reuse
        - tcp_output.c: sysctl_tcp_mtu_probing
        - tcp_output.c: sysctl_tcp_base_mss
      Signed-off-by: Adrian Bunk <bunk@stusta.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      6c97e72a
  13. 21 Mar, 2006 (2 commits)
  14. 10 Feb, 2006 (1 commit)
  15. 10 Jan, 2006 (1 commit)
  16. 04 Jan, 2006 (3 commits)
  17. 16 Nov, 2005 (1 commit)
  18. 11 Nov, 2005 (5 commits)
  19. 28 Oct, 2005 (1 commit)
    • [TCP]: Clear stale pred_flags when snd_wnd changes · 2ad41065
      Herbert Xu committed
      This bug is responsible for the infamous "Treason uncloaked" messages
      that have been popping up everywhere since the printk was added.  It
      has usually been blamed on foreign operating systems.  However, some
      of those reports implicate Linux, since both systems are running
      Linux or the TCP connection goes across the loopback interface.
      
      In fact, there really is a bug in the Linux TCP header prediction code
      that's been there since at least 2.1.8.  This bug was tracked down with
      help from Dale Blount.
      
      The effect of this bug ranges from harmless "Treason uncloaked"
      messages to hung or aborted TCP connections.  The details of the bug
      and the fix are as follows.
      
      When snd_wnd is updated, we only update pred_flags if
      tcp_fast_path_check succeeds.  When it fails (for example,
      when our rcvbuf is used up), we will leave pred_flags with
      an out-of-date snd_wnd value.
      
      When the out-of-date pred_flags happens to match the next incoming
      packet, we will again hit the fast path and use the current snd_wnd,
      which will be wrong.
      
      In the case of the treason messages, it just happens that the snd_wnd
      cached in pred_flags is zero while tp->snd_wnd is non-zero.  Therefore
      when a zero-window packet comes in we incorrectly conclude that the
      window is non-zero.
      
      In fact if the peer continues to send us zero-window pure ACKs we
      will continue making the same mistake.  It's only when the peer
      transmits a zero-window packet with data attached that we get a
      chance to snap out of it.  This is what triggers the treason
      message at the next retransmit timeout.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
      2ad41065
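      A simplified, illustrative sketch of the caching involved; the function
      below is not copied from tcp_input.c, but it shows why the flags go
      stale: the advertised window is folded into the prediction word, so if
      snd_wnd changes without the word being rebuilt, a later header can still
      compare equal to the stale value and take the fast path with the wrong
      window.

          #include <stdint.h>

          /* Illustrative layout assumptions: data offset in the top four bits,
           * ACK flag at bit 20, 16-bit window in the low bits. */
          static uint32_t build_pred_word(uint32_t header_len_words, uint32_t snd_wnd)
          {
                  const uint32_t ACK_FLAG = 0x00100000;

                  return (header_len_words << 28) | ACK_FLAG | (snd_wnd & 0xffff);
          }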
  20. 30 Sep, 2005 (1 commit)
    • [TCP]: Don't over-clamp window in tcp_clamp_window() · 09e9ec87
      Alexey Kuznetsov committed
      From: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
      
      Better handle the case where the sender sends full-sized
      frames initially, then moves to a mode where it trickles
      out small amounts of data at a time.
      
      This known problem is even mentioned in the comments
      above tcp_grow_window() in tcp_input.c, specifically:
      
      ...
       * The scheme does not work when sender sends good segments opening
       * window and then starts to feed us spagetti. But it should work
       * in common situations. Otherwise, we have to rely on queue collapsing.
      ...
      
      When the sender gives full-sized frames, the "struct sk_buff" overhead
      from each packet is small, so we'll advertise a larger window.
      If the sender moves to a mode where small segments are sent, this
      ratio becomes tilted to the other extreme and we start overrunning
      the socket buffer space.
      
      tcp_clamp_window() tries to address this, but its clamping of
      tp->window_clamp is a wee bit too aggressive for this particular case.
      
      Fix confirmed by Ion Badulescu.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      09e9ec87
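      A throwaway calculation of the ratio described above; the per-skb
      overhead value is invented.  With full-sized segments the bookkeeping
      overhead per payload byte is small, so a generous window is safe; with
      tiny segments most of the charged space is overhead, and a window
      advertised under the first assumption overruns the socket buffer.

          #include <stdio.h>

          int main(void)
          {
                  const unsigned int skb_overhead = 256;   /* assumed per-packet cost */
                  const unsigned int payloads[] = { 1460, 32 };
                  unsigned int i;

                  for (i = 0; i < 2; i++) {
                          unsigned int len = payloads[i];
                          printf("payload %4u bytes: charged/actual = %.2f\n",
                                 len, (double)(len + skb_overhead) / len);
                  }
                  return 0;
          }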
  21. 15 Sep, 2005 (1 commit)
  22. 02 Sep, 2005 (1 commit)
  23. 30 Aug, 2005 (6 commits)