1. 20 Jul 2012, 2 commits
  2. 19 Jul 2012, 1 commit
  3. 17 Jul 2012, 3 commits
    • tcp: implement RFC 5961 4.2 · 0c24604b
      Committed by Eric Dumazet
      Implement the RFC 5961 mitigation against a Blind
      Reset attack using the SYN bit.
      
      Section 4.2 of RFC 5961 advises sending a Challenge ACK and
      dropping the incoming packet, instead of resetting the session.
      
      Add a new SNMP counter to count the number of challenge ACKs sent
      in response to SYN packets.
      (netstat -s | grep TCPSYNChallenge)
      
      Remove the obsolete TCPAbortOnSyn counter, since we no longer abort
      a TCP session because of a SYN flag. A condensed sketch of the check
      follows this entry.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Kiran Kumar Kella <kkiran@broadcom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0c24604b
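      A condensed sketch of the new check (simplified from the kernel's
      tcp_validate_incoming(); sequence and RST validation elided):

          /* Step 4: a SYN on an established connection no longer resets
           * it. Per RFC 5961 4.2, answer with a challenge ACK and drop
           * the segment.
           */
          if (th->syn) {
                  NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPSYNCHALLENGE);
                  tcp_send_challenge_ack(sk);
                  goto discard;
          }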
    • tcp: implement RFC 5961 3.2 · 282f23c6
      Committed by Eric Dumazet
      Implement the RFC 5961 mitigation against a Blind
      Reset attack using the RST bit.
      
      The idea is to validate the incoming RST sequence number against an
      exact match on RCV.NXT, instead of the previously accepted
      window: (RCV.NXT <= SEG.SEQ < RCV.NXT+RCV.WND)
      
      If the sequence is in the window but not an exact match, send
      a "challenge ACK", so that the other party can resend an
      RST with the appropriate sequence number (see the sketch below
      this entry).
      
      Add a new sysctl, tcp_challenge_ack_limit, to limit the
      number of challenge ACKs sent per second.
      
      Add a new SNMP counter to count the number of challenge ACKs sent.
      (netstat -s | grep TCPChallengeACK)
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Kiran Kumar Kella <kkiran@broadcom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      282f23c6
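      A condensed sketch of the new RST validation (simplified from
      tcp_validate_incoming()): only an exact RCV.NXT match resets the
      connection.

          if (th->rst) {
                  if (TCP_SKB_CB(skb)->seq == tp->rcv_nxt)
                          tcp_reset(sk);              /* exact match */
                  else
                          tcp_send_challenge_ack(sk); /* in-window only */
                  goto discard;
          }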
    • tcp: add OFO snmp counters · a6df1ae9
      Committed by Eric Dumazet
      Add three SNMP TCP counters to better track TCP behavior
      at a global level (netstat -s) when packets are received
      Out Of Order (OFO):
      
      TCPOFOQueue : Number of packets queued in the OFO queue.

      TCPOFODrop  : Number of packets meant to be queued in the OFO
                    queue but dropped because the socket rcvbuf limit
                    was hit.

      TCPOFOMerge : Number of packets in the OFO queue that were merged
                    with other packets. A sketch of where each counter
                    fires follows this entry.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a6df1ae9
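      A condensed sketch of where the counters fire in
      tcp_data_queue_ofo() (queue-insertion logic elided):

          if (unlikely(tcp_try_rmem_schedule(sk, skb->truesize))) {
                  /* rcvbuf limit hit: drop instead of queueing */
                  NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPOFODROP);
                  __kfree_skb(skb);
                  return;
          }
          NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPOFOQUEUE);

          /* later, if the segment is merged with a queued one: */
          NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPOFOMERGE);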
  4. 11 Jul 2012, 1 commit
  5. 05 Jul 2012, 1 commit
    • net: Do delayed neigh confirmation. · 5110effe
      Committed by David S. Miller
      When a dst_confirm() happens, mark the confirmation as pending in the
      dst. Then on the next packet out, when we have the neigh in hand, do
      the update (see the sketch below this entry).
      
      This removes dst_confirm()'s dependency on the dst having an
      attached neigh.
      
      While we're here, remove the explicit 'dst' NULL check; all except 2
      or 3 call sites ensure it's not NULL, so just fix those cases up.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      5110effe
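      A condensed sketch of the two halves (names follow the patch; the
      hh-cache fast path in the output helper is elided):

          static inline void dst_confirm(struct dst_entry *dst)
          {
                  dst->pending_confirm = 1; /* no neigh needed here */
          }

          /* on output, with the neighbour in hand, apply the update */
          static inline int dst_neigh_output(struct dst_entry *dst,
                                             struct neighbour *n,
                                             struct sk_buff *skb)
          {
                  if (unlikely(dst->pending_confirm)) {
                          n->confirmed = jiffies;
                          dst->pending_confirm = 0;
                  }
                  return n->output(n, skb);
          }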
  6. 20 Jun 2012, 1 commit
    • ipv4: Early TCP socket demux. · 41063e9d
      Committed by David S. Miller
      Input packet processing for local sockets involves two major demuxes.
      One for the route and one for the socket.
      
      But we can optimize this down to one demux for certain kinds of local
      sockets.
      
      Currently we only do this for established TCP sockets, but it could
      at least in theory be expanded to other kinds of connections.
      
      If a TCP socket is established, then its identity is fully specified.
      
      This means that whatever input route was used during the three-way
      handshake must work equally well for the rest of the connection since
      the keys will not change.
      
      Once we move to established state, we cache the receive packet's
      input route to use later (see the sketch below this entry).
      
      Like the existing cached route in sk->sk_dst_cache used for output
      packets, we have to check for route invalidations using dst->obsolete
      and dst->ops->check().
      
      Early demux occurs outside of a socket locked section, so when a route
      invalidation occurs we defer the fixup of sk->sk_rx_dst until we are
      actually inside of established state packet processing and thus have
      the socket locked.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      41063e9d
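      A condensed sketch of the early demux hook (error handling and the
      TIME_WAIT case elided):

          void tcp_v4_early_demux(struct sk_buff *skb)
          {
                  const struct iphdr *iph = ip_hdr(skb);
                  const struct tcphdr *th = tcp_hdr(skb);
                  struct sock *sk;

                  /* one hash lookup replaces route demux + socket demux */
                  sk = __inet_lookup_established(dev_net(skb->dev),
                                                 &tcp_hashinfo,
                                                 iph->saddr, th->source,
                                                 iph->daddr, ntohs(th->dest),
                                                 skb->skb_iif);
                  if (sk) {
                          struct dst_entry *dst = sk->sk_rx_dst;

                          skb->sk = sk;
                          skb->destructor = sock_edemux;
                          if (dst)
                                  dst = dst_check(dst, 0); /* invalidated? */
                          if (dst)
                                  skb_dst_set_noref(skb, dst);
                  }
          }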
  7. 24 May 2012, 1 commit
    • tcp: take care of overlaps in tcp_try_coalesce() · 1ca7ee30
      Committed by Eric Dumazet
      Sergio Correia reported the following warning:
      
      WARNING: at net/ipv4/tcp.c:1301 tcp_cleanup_rbuf+0x4f/0x110()
      
      WARN(skb && !before(tp->copied_seq, TCP_SKB_CB(skb)->end_seq),
           "cleanup rbuf bug: copied %X seq %X rcvnxt %X\n",
           tp->copied_seq, TCP_SKB_CB(skb)->end_seq, tp->rcv_nxt);
      
      It appears TCP coalescing, and more specifically commit b081f85c
      (net: implement tcp coalescing in tcp_queue_rcv()), should take care
      of possible segment overlaps in the receive queue. This was properly
      done in the case of the out_of_order_queue by the caller.
      
      For example, the segment at the tail of the queue has sequence
      1000-2000, and we add a segment with sequence 1500-2500.
      This can happen in the case of retransmits.
      
      In this case, just don't do the coalescing (see the sketch below
      this entry).
      Reported-by: Sergio Correia <lists@uece.net>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Tested-by: Sergio Correia <lists@uece.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1ca7ee30
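      The guard added to tcp_try_coalesce(), essentially as in the patch:
      merge only when the new segment starts exactly where the tail
      segment ends, so overlapping retransmits fall back to normal
      queueing.

          if (TCP_SKB_CB(from)->seq != TCP_SKB_CB(to)->end_seq)
                  return false; /* overlap or gap: do not coalesce */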
  8. 20 May 2012, 1 commit
    • net: introduce skb_try_coalesce() · bad43ca8
      Committed by Eric Dumazet
      Move the protocol-independent part of tcp_try_coalesce() to
      skb_try_coalesce().
      
      skb_try_coalesce() can be used in IPv4 defrag and IPv6 reassembly,
      to build optimized skbs (fewer sk_buffs, and possibly fewer headers).
      
      skb_try_coalesce() is zero-copy, unless the data can fit in the
      destination skb's head (a rare case).
      
      kfree_skb_partial() is also moved to net/core/skbuff.c and exported,
      because IPv6 will need it in a later patch (ipv6: use skb coalescing
      in reassembly). A usage sketch follows this entry.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Alexander Duyck <alexander.h.duyck@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      bad43ca8
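      A sketch of the typical caller pattern for the new helper (socket
      memory accounting simplified):

          bool fragstolen;
          int delta;

          if (skb_try_coalesce(to, from, &fragstolen, &delta)) {
                  /* 'from' was absorbed: charge the extra truesize to the
                   * socket, then release what remains of the source skb.
                   */
                  atomic_add(delta, &sk->sk_rmem_alloc);
                  kfree_skb_partial(from, fragstolen);
          }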
  9. 18 May 2012, 1 commit
  10. 16 May 2012, 2 commits
  11. 11 May 2012, 3 commits
  12. 04 May 2012, 1 commit
  13. 03 May 2012, 10 commits
    • tcp: move stats merge to the end of tcp_try_coalesce · 34a802a5
      Committed by Alexander Duyck
      This change cleans up the last bits of tcp_try_coalesce so that we
      only need one goto, which jumps to the end of the function. The idea
      is to make the code more readable by putting things in a linear
      order, so that we start execution at the top of the function and end
      it at the bottom.
      
      I also made a slight tweak to the code for handling frags when we are
      a clone. Instead of "if (clone) loop, else nr_frags = 0", the logic
      is now "if (!clone) nr_frags = 0", which disables the for loop anyway.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      34a802a5
    • tcp: Move code related to head frag in tcp_try_coalesce · 57b55a7e
      Committed by Alexander Duyck
      This change reorders the code related to the use of an skb->head_frag
      so it is placed before we check the rest of the frags. This allows
      the code to read more linearly, instead of like some sort of loop.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      57b55a7e
    • tcp: Fix truesize accounting in tcp_try_coalesce · c73c3d9c
      Committed by Alexander Duyck
      This patch addresses several issues in the way we were tracking the
      truesize in tcp_try_coalesce.
      
      First, it was using ksize, which prevents us from having a 0-sized
      head frag and getting a usable result. To resolve that, this patch
      uses the end pointer, which is set based off either ksize or the
      frag_size supplied to build_skb. This allows us to compute the
      original truesize of the entire buffer and remove that value, leaving
      us with just what was added as pages.
      
      The second issue was the use of skb->len if there is a mergeable head
      frag. We should only need to remove the size of a data-aligned
      sk_buff from our current skb->truesize to compute the delta for a
      buffer with a reused head. By using skb->len, the value of truesize
      was being artificially reduced, which means that head frags could use
      more memory than buffers using standard allocations. Both corrected
      computations are sketched below this entry.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c73c3d9c
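      A condensed sketch of the two corrected truesize deltas (variable
      names follow the coalescing code):

          int delta;

          if (skb_headlen(from) != 0) {
                  /* head frag reused as a page: only a data-aligned
                   * sk_buff shell actually goes away
                   */
                  delta = from->truesize -
                          SKB_DATA_ALIGN(sizeof(struct sk_buff));
          } else {
                  /* page frags only: derive the original buffer truesize
                   * from the end pointer, which is valid for both
                   * ksize()-sized and build_skb() heads
                   */
                  delta = from->truesize -
                          SKB_TRUESIZE(skb_end_pointer(from) - from->head);
          }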
    • net: Stop decapitating clones that have a head_frag · 2996d31f
      Committed by Alexander Duyck
      This change is meant to prevent stealing the skb->head to use as a
      page in the event that the skb->head was cloned. This allows the
      other clones to track each other via shinfo->dataref.
      
      Without this, we break down into two methods for tracking the
      reference count: one being dataref, the other being the page count.
      As a result, it becomes difficult to track how many references there
      are to skb->head.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2996d31f
    • net: implement tcp coalescing in tcp_queue_rcv() · b081f85c
      Committed by Eric Dumazet
      Extend tcp coalescing by implementing it in tcp_queue_rcv(), the main
      receive function used when the application is not blocked in
      recvmsg().
      
      Function tcp_queue_rcv() is moved a bit to allow it to be called from
      tcp_data_queue() (see the sketch below this entry).
      
      This gives good results, especially if GRO could not kick in, and if
      the skb head is a fragment.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Alexander Duyck <alexander.h.duyck@intel.com>
      Cc: Neal Cardwell <ncardwell@google.com>
      Cc: Tom Herbert <therbert@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b081f85c
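      A condensed sketch of the extended tcp_queue_rcv() (close to the
      patch; memory accounting elided):

          static int tcp_queue_rcv(struct sock *sk, struct sk_buff *skb,
                                   int hdrlen, bool *fragstolen)
          {
                  struct sk_buff *tail = skb_peek_tail(&sk->sk_receive_queue);
                  int eaten;

                  __skb_pull(skb, hdrlen);
                  /* try to merge into the queue tail before allocating a
                   * new receive-queue entry
                   */
                  eaten = (tail &&
                           tcp_try_coalesce(sk, tail, skb, fragstolen)) ? 1 : 0;
                  tcp_sk(sk)->rcv_nxt = TCP_SKB_CB(skb)->end_seq;
                  if (!eaten) {
                          __skb_queue_tail(&sk->sk_receive_queue, skb);
                          skb_set_owner_r(skb, sk);
                  }
                  return eaten;
          }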
    • net: take care of cloned skbs in tcp_try_coalesce() · 923dd347
      Committed by Eric Dumazet
      Before stealing fragments or the skb head, we must make sure the skbs
      are not cloned.
      
      Alexander was worried about the destination skb being cloned: in
      bridge setups, a driver could be fooled if skb->data_len did not
      match skb nr_frags.
      
      If the source skb is cloned, we must take references on its pages
      instead.
      
      The bug happened when using tcpdump (if not using mmap()).
      
      Introduce the kfree_skb_partial() helper to clean up the code
      (shown below this entry).
      Reported-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      923dd347
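      The new helper, essentially as introduced by the patch: free only
      the sk_buff shell when its head was stolen by the coalescing code,
      otherwise free the whole skb.

          void kfree_skb_partial(struct sk_buff *skb, bool head_stolen)
          {
                  if (head_stolen)
                          kmem_cache_free(skbuff_head_cache, skb);
                  else
                          __kfree_skb(skb);
          }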
    • tcp: change tcp_adv_win_scale and tcp_rmem[2] · b49960a0
      Committed by Eric Dumazet
      The tcp_adv_win_scale default value is 2, meaning we expect a good
      citizen skb to have a skb->len / skb->truesize ratio of 75% (3/4).
      
      In 2.6 kernels we (mis)accounted for a typical MSS=1460 frame:
      1536 + 64 + 256 = 1856 'estimated truesize', and 1856 * 3/4 = 1392.
      So these skbs were not considered bloated.
      
      With recent truesize fixes, a typical MSS=1460 frame truesize is now
      the more precise 2048 + 256 = 2304. But 2304 * 3/4 = 1728, so these
      skbs are not good citizens anymore, because 1460 < 1728.
      
      (GRO can escape this problem because it builds skbs with a too-low
      truesize.)
      
      This also means tcp advertises an over-optimistic window for a given
      allocated rcvspace: when receiving frames, sk_rmem_alloc can hit the
      sk_rcvbuf limit and we call tcp_prune_queue()/tcp_collapse() too
      often, especially when the application is slow to drain its receive
      queue or in case of losses (netperf is fast, scp is slow). This is a
      major latency source.
      
      We should adjust the expected len/truesize ratio to 50% instead of
      75%.
      
      This patch:

      1) changes the tcp_adv_win_scale default to 1 instead of 2

      2) increases the tcp_rmem[2] limit from 4MB to 6MB to take into
      account better truesize tracking and to allow the autotuned tcp
      receive window to reach the same value as before. Note that the same
      amount of kernel memory is consumed compared to 2.6 kernels. The
      mapping from buffer space to advertised window is sketched below
      this entry.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Neal Cardwell <ncardwell@google.com>
      Cc: Tom Herbert <therbert@google.com>
      Cc: Yuchung Cheng <ycheng@google.com>
      Acked-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b49960a0
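      For reference, the kernel's tcp_win_from_space() shows how
      tcp_adv_win_scale maps buffer space to advertised window (a
      positive scale reserves space >> scale as overhead):

          static inline int tcp_win_from_space(int space)
          {
                  return sysctl_tcp_adv_win_scale <= 0 ?
                          space >> (-sysctl_tcp_adv_win_scale) :
                          space - (space >> sysctl_tcp_adv_win_scale);
          }

          /* scale = 2: window = space - space/4 = 75% of space
           * scale = 1: window = space - space/2 = 50% of space,
           * matching the measured 1460/2304 (~63%) len/truesize ratio
           */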
    • tcp: early retransmit: delayed fast retransmit · 750ea2ba
      Committed by Yuchung Cheng
      Implement the advanced early retransmit (sysctl_tcp_early_retrans==2),
      which delays the fast retransmit by an interval of RTT/4. We borrow
      the RTO timer to implement the delay. If we receive another ACK or
      send a new packet, the timer is cancelled and restored to the
      original RTO value, offset by the time elapsed. When the delayed-ER
      timer fires, we enter fast recovery and perform fast retransmit.
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Acked-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      750ea2ba
    • tcp: early retransmit · eed530b6
      Committed by Yuchung Cheng
      This patch implements RFC 5827 early retransmit (ER) for TCP.
      It reduces the DUPACK threshold (dupthresh) when fewer than 4 packets
      are outstanding, to recover losses by fast recovery instead of by
      timeout.
      
      While the algorithm is simple, small but frequent network reordering
      makes this feature dangerous: the connection repeatedly enters
      false recovery and degrades performance. Therefore we implement
      a mitigation suggested in the appendix of the RFC that delays
      entering fast recovery by a small interval, i.e., RTT/4. Currently
      ER is conservative and is disabled for the rest of the connection
      after the first reordering event. A large-scale web server
      experiment on the performance impact of ER is summarized in
      section 6 of the paper "Proportional Rate Reduction for TCP",
      IMC 2011. http://conferences.sigcomm.org/imc/2011/docs/p155.pdf
      
      Note that Linux has a similar feature called THIN_DUPACK. The
      differences are that THIN_DUPACK does not mitigate reordering and is
      only used after slow start. Currently ER is disabled if THIN_DUPACK
      is enabled. I would be happy to merge the THIN_DUPACK feature with ER
      if people think it's a good idea.
      
      ER is enabled by sysctl_tcp_early_retrans:
        0: Disables ER

        1: Reduces dupthresh to packets_out - 1 when outstanding packets < 4.

        2: (Default) reduces dupthresh like mode 1. In addition, delays
           entering fast recovery by RTT/4.

      Note: mode 2 is implemented in the third part of this patch series.
      A condensed sketch of the dupthresh reduction follows this entry.
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Acked-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      eed530b6
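      A condensed sketch of the mode-1 trigger (simplified from
      tcp_time_to_recover(); the mode-2 RTT/4 delay is not shown):

          /* RFC 5827 early retransmit: with fewer than 4 packets in
           * flight, 3 DUPACKs may never arrive. Enter fast recovery once
           * every outstanding packet but one has been SACKed.
           */
          if (tp->do_early_retrans && !tp->retrans_out && tp->sacked_out &&
              tp->packets_out == tp->sacked_out + 1 && tp->packets_out < 4)
                  return true; /* time to recover */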
    • tcp: early retransmit: tcp_enter_recovery() · 1fbc3405
      Committed by Yuchung Cheng
      This is a preparation patch that refactors the code for entering
      recovery into a new function, tcp_enter_recovery(). It's needed to
      implement the delayed fast retransmit in ER.
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Acked-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1fbc3405
  14. 01 May 2012, 2 commits
    • tcp: makes tcp_try_coalesce aware of skb->head_frag · 329033f6
      Committed by Eric Dumazet
      TCP coalescing can check whether the skb to be merged has its
      skb->head mapped to a page fragment, instead of a kmalloc() area.

      We previously had to disable coalescing in this case, for performance
      reasons.
      
      We 'upgrade' skb->head into a fragment in its own right.
      
      This reduces the number of cache misses when the user copies the
      data, since fewer sk_buffs are fetched.
      
      This makes the receive and ofo queues shorter and thus reduces cache
      line misses in the TCP stack.
      
      This is a followup to the patch "net: allow skb->head to be a page
      fragment".
      
      Tested with a tg3 nic, with GRO on or off. We can see the
      "TCPRcvCoalesce" counter being incremented.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: Maciej Żenczykowski <maze@google.com>
      Cc: Neal Cardwell <ncardwell@google.com>
      Cc: Tom Herbert <therbert@google.com>
      Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Cc: Ben Hutchings <bhutchings@solarflare.com>
      Cc: Matt Carlson <mcarlson@broadcom.com>
      Cc: Michael Chan <mchan@broadcom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      329033f6
    • tcp: fix infinite cwnd in tcp_complete_cwr() · 1cebce36
      Committed by Yuchung Cheng
      When the cwnd reduction is done, ssthresh may be infinite if TCP
      entered CWR via ECN or F-RTO. If cwnd is not undone, i.e.,
      undo_marker is set, tcp_complete_cwr() falsely sets cwnd to the
      infinite ssthresh value. The correct operation is to keep cwnd
      intact, because it has already been updated by ECN or F-RTO (see the
      sketch below this entry).
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Acked-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1cebce36
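      A condensed sketch of the guard in tcp_complete_cwr() (assuming
      TCP_INFINITE_SSTHRESH marks an uninitialized ssthresh, as when CWR
      was entered via ECN or F-RTO):

          if (tp->undo_marker &&
              tp->snd_ssthresh < TCP_INFINITE_SSTHRESH) {
                  /* moderate cwnd only when ssthresh holds a real value */
                  tp->snd_cwnd = min(tp->snd_cwnd, tp->snd_ssthresh);
                  tp->snd_cwnd_stamp = tcp_time_stamp;
          }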
  15. 28 Apr 2012, 1 commit
  16. 24 Apr 2012, 2 commits
  17. 22 Apr 2012, 1 commit
  18. 18 Apr 2012, 1 commit
  19. 16 Apr 2012, 1 commit
  20. 15 Apr 2012, 2 commits
  21. 11 Apr 2012, 1 commit
  22. 06 Apr 2012, 1 commit