1. 14 Jan 2017 (5 commits)
    • tcp: check undo conditions before detecting losses · 98e36d44
      Authored by Yuchung Cheng
      Currently RACK marks losses before the undo operations in TCP
      loss recovery run. This can incorrectly identify real losses as
      spurious. For example, a sender first experiences a delay spike
      and then eventually loses some packets to a buffer overrun.
      In this case the sender should perform fast recovery, because
      not all the packets were lost.

      But the sender may first trigger a (spurious) RTO and reset
      cwnd to 1. The following ACKs may be used by tcp_rack_mark_lost
      to mark real losses. Then, in tcp_process_loss, such an ACK can
      trigger the F-RTO undo condition, unmarking the real losses and
      reverting the cwnd reduction. If no more ACKs come back, the
      sender eventually times out again instead of performing fast
      recovery.
      
      The patch fixes this by always performing the undo checks
      before detecting losses.
      
      Fixes: 4f41b1c5 ("tcp: use RACK to detect losses")
      Signed-off-by: NYuchung Cheng <ycheng@google.com>
      Signed-off-by: NNeal Cardwell <ncardwell@google.com>
      Acked-by: NEric Dumazet <edumazet@google.com>
      Signed-off-by: NDavid S. Miller <davem@davemloft.net>
      98e36d44
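      A minimal user-space C model of the ordering change (the names
      and types here are illustrative, not the kernel call graph):
      running the undo check first means a spurious-RTO undo can no
      longer erase loss marks made on the same ACK.

        #include <stdbool.h>

        struct flow {
            unsigned int cwnd, prior_cwnd;
            unsigned int lost_marks;     /* packets marked lost */
        };

        /* Old order: RACK marks losses first, then the F-RTO undo
         * reverts the cwnd reduction and erases those (real) marks. */
        static void on_ack_old(struct flow *f, bool undo, bool lossy)
        {
            if (lossy)
                f->lost_marks++;         /* tcp_rack_mark_lost() */
            if (undo) {
                f->lost_marks = 0;       /* real losses unmarked: bug */
                f->cwnd = f->prior_cwnd;
            }
        }

        /* New order: undo checks run first, so marks made afterwards
         * survive and fast recovery can repair the real losses. */
        static void on_ack_new(struct flow *f, bool undo, bool lossy)
        {
            if (undo)
                f->cwnd = f->prior_cwnd; /* undo spurious RTO only */
            if (lossy)
                f->lost_marks++;         /* marks are kept */
        }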
    • tcp: use sequence to break TS ties for RACK loss detection · 1d0833df
      Authored by Yuchung Cheng
      The packets inside a jumbo skb (e.g., TSO) share the same skb
      timestamp even though they are sent sequentially on the wire.
      Since RACK is time-based, it cannot detect that some packets
      inside the same skb are lost. However, we can leverage the
      packet sequence numbers as extended timestamps to detect
      losses. Therefore, when the RACK timestamp is identical to the
      skb's timestamp (i.e., one of the packets of the skb is acked
      or sacked), we use the sequence numbers of the acked and
      unacked packets to break ties.
      
      We can use the same sequence logic to advance the RACK xmit
      time as well, detecting more losses and avoiding timeouts.
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1d0833df
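      A sketch of the tie-break rule in user-space C (written from
      the description above, not copied from the kernel): a packet
      counts as sent after the RACK reference if its timestamp is
      newer or, on an exact timestamp tie within one TSO skb, if its
      sequence number is higher.

        #include <stdbool.h>
        #include <stdint.h>

        /* Wrap-safe "a is after b" for 32-bit sequence numbers. */
        static inline bool seq_after(uint32_t a, uint32_t b)
        {
            return (int32_t)(a - b) > 0;
        }

        static bool rack_sent_after(uint64_t t1, uint64_t t2,
                                    uint32_t seq1, uint32_t seq2)
        {
            return t1 > t2 || (t1 == t2 && seq_after(seq1, seq2));
        }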
    • tcp: add reordering timer in RACK loss detection · 57dde7f7
      Authored by Yuchung Cheng
      This patch makes RACK install a reordering timer when it
      suspects some packets might be lost but wants to delay the
      decision a little to accommodate reordering.
      
      It does not create a new timer but instead repurposes the existing
      RTO timer, because both are meant to retransmit packets.
      Specifically, it arms an ICSK_TIME_REO_TIMEOUT timer when the
      RACK timing check fails. The wait time is set to
      
        RACK.RTT + RACK.reo_wnd - (NOW - Packet.xmit_time) + fudge
      
      This reflects the expectation that a packet should be delivered
      within (RACK.RTT + RACK.reo_wnd + fudge) of being sent.
      
      When multiple packets need a timer, we use one timer with the
      maximum timeout. The timer therefore conservatively waits for
      the maximum window, expiring N packets with a single timeout
      instead of using N timeouts for N packets sent at different
      times.
      
      The fudge factor is 2 jiffies, to ensure that when the timer
      fires, all the suspected packets have exceeded the deadline and
      are marked lost by tcp_rack_detect_loss(). It has to be at
      least 1 jiffy because the clock may tick between calling
      icsk_reset_xmit_timer(timeout) and the timer actually being
      armed. The second jiffy lower-bounds the timeout to 2 jiffies
      when reo_wnd is < 1 ms.
      
      When the reordering timer fires (tcp_rack_reo_timeout) and we
      are not already in Recovery, we enter fast recovery and force a
      fast retransmit. This is very similar to early retransmit
      (RFC 5827), except that RACK is not constrained to entering
      recovery only for small outstanding flights.
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      57dde7f7
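      A small C model of the timeout computation above (illustrative
      names; times in jiffies): a packet is expected to deliver
      within RTT + reo_wnd of its transmit time, plus the 2-jiffy
      fudge.

        struct rack_state {
            unsigned long rtt;      /* RACK.RTT */
            unsigned long reo_wnd;  /* RACK.reo_wnd */
        };

        static unsigned long reo_timeout(const struct rack_state *r,
                                         unsigned long now,
                                         unsigned long xmit_time)
        {
            unsigned long elapsed = now - xmit_time;
            unsigned long wait = r->rtt + r->reo_wnd + 2; /* fudge */

            /* Arm for the remaining wait; fire now if already due. */
            return wait > elapsed ? wait - elapsed : 0;
        }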
    • tcp: record most recent RTT in RACK loss detection · deed7be7
      Authored by Yuchung Cheng
      Record the most recent RTT in RACK. It is often identical to
      the "ca_rtt_us" value in tcp_clean_rtx_queue, but when the
      packet has been retransmitted, RACK chooses to believe the ACK
      is for the (latest) retransmitted packet if the RTT is over the
      minimum RTT.

      This requires passing the arrival time of the most recent ACK
      to the RACK routines. The timestamp is now recorded in the
      "ack_time" field of tcp_sacktag_state during ACK processing.

      This patch does not change the RACK algorithm itself. It only
      adds the RTT variable to prepare for the next main patch.
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      deed7be7
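      A sketch of the rule described above (illustrative helper, not
      the kernel function): when the acked skb was retransmitted, the
      measurement against the last transmit time is trusted only if
      it exceeds the path's minimum RTT; otherwise the ACK was likely
      for the original transmission.

        static long rack_rtt_us(long ack_time_us, long last_xmit_us,
                                long min_rtt_us, int retransmitted)
        {
            long rtt_us = ack_time_us - last_xmit_us;

            if (retransmitted && rtt_us < min_rtt_us)
                return -1;  /* ambiguous: likely the original send */
            return rtt_us;
        }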
    • tcp: new helper for RACK to detect loss · e636f8b0
      Authored by Yuchung Cheng
      Create a new helper, tcp_rack_detect_loss, to prepare for the
      upcoming RACK reordering timer patch.
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e636f8b0
  2. 30 Dec 2016 (2 commits)
  3. 07 Dec 2016 (1 commit)
    • tcp: warn on bogus MSS and try to amend it · dcb17d22
      Authored by Marcelo Ricardo Leitner
      There have been some reports lately about TCP connection stalls
      caused by NIC drivers that aren't setting gso_size on
      aggregated packets on the rx path. This causes TCP to assume
      that the MSS is actually the size of the aggregated packet,
      which is invalid.

      Although the proper fix belongs in each driver, it is often
      hard and cumbersome to debug this, identify the root cause, and
      report or fix it.

      This patch amends the situation in two ways. First, it adds a
      warning when this situation occurs, giving a hint to those
      trying to debug it. It also limits the maximum probed MSS to
      the advertised MSS, as it should never be any higher than that.

      The result is that the connection may not achieve the best
      possible performance, but it shouldn't stall, and the admin
      will have a hint about what to look for.
      
      Tested with virtio by forcing gso_size to 0.
      
      v2: updated msg per David's suggestion
      v3: use skb_iif to find the interface and also log its name, per Eric
          Dumazet's suggestion. As the skb may be backlogged and the interface
          gone by then, we need to check if the number still has a meaning.
      v4: use helper tcp_gro_dev_warn() and avoid pr_warn_once inside __once, per
          David's suggestion
      
      Cc: Jonathan Maxwell <jmaxwell37@gmail.com>
      Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      dcb17d22
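      A user-space model of the amendment (hypothetical helper name;
      the real change lives in the kernel's MSS probing path): warn
      when the probed value exceeds what the peer advertised, then
      clamp it, since the true MSS can never be higher.

        #include <stdio.h>

        static unsigned int probe_mss(unsigned int seg_len,
                                      unsigned int advertised_mss)
        {
            if (seg_len > advertised_mss) {
                /* driver likely left gso_size unset on an
                 * aggregated rx packet */
                fprintf(stderr, "bogus seg size %u > MSS %u\n",
                        seg_len, advertised_mss);
                return advertised_mss;   /* clamp */
            }
            return seg_len;
        }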
  4. 03 Dec 2016 (2 commits)
  5. 30 Nov 2016 (2 commits)
  6. 22 Nov 2016 (1 commit)
  7. 10 Nov 2016 (1 commit)
  8. 30 Oct 2016 (1 commit)
  9. 26 Sep 2016 (1 commit)
    • netfilter: xt_socket: fix transparent match for IPv6 request sockets · 7a682575
      Authored by KOVACS Krisztian
      The introduction of the TCP_NEW_SYN_RECV state and the addition
      of request sockets to the ehash table seem to have broken the
      --transparent option of the socket match for IPv6 (around
      commit a9407000).
      
      Now that the socket lookup finds the TCP_NEW_SYN_RECV socket instead of the
      listener, the --transparent option tries to match on the no_srccheck flag
      of the request socket.
      
      Unfortunately, that flag was only set for IPv4 sockets in
      tcp_v4_init_req(), by copying the transparent flag of the
      listener socket. This effectively causes '-m socket
      --transparent' not to match the ACK packet sent by the client
      in a TCP handshake.
      
      Based on the suggestion from Eric Dumazet, this change moves
      the code initializing no_srccheck to tcp_conn_request(), making
      the above scenario work again.
      
      Fixes: a9407000 ("netfilter: xt_socket: prepare for TCP_NEW_SYN_RECV support")
      Signed-off-by: Alex Badics <alex.badics@balabit.com>
      Signed-off-by: KOVACS Krisztian <hidden@balabit.com>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      7a682575
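      A sketch of the fix (types abbreviated to illustrate the idea;
      field names follow the description above): the flag is copied
      in the family-independent tcp_conn_request() path rather than
      only in the IPv4-specific tcp_v4_init_req(), so IPv6 request
      sockets inherit the listener's transparent flag too.

        struct listener { int transparent; };
        struct req_sock { int no_srccheck; };

        /* Shared by IPv4 and IPv6, unlike the v4-only init path. */
        static void conn_request_init(struct req_sock *req,
                                      const struct listener *lsk)
        {
            req->no_srccheck = lsk->transparent;
        }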
  10. 23 Sep 2016 (1 commit)
  11. 22 Sep 2016 (1 commit)
  12. 21 Sep 2016 (5 commits)
  13. 16 Sep 2016 (1 commit)
    • tcp: fix a stale ooo_last_skb after a replace · 76f0dcbb
      Authored by Eric Dumazet
      When an skb replaces another one in the ooo queue, I forgot to
      also update tp->ooo_last_skb if the replaced skb was the last
      one in the queue.

      To fix this, we can simply re-use the code that runs after an
      insertion, trying to merge skbs to the right of the current
      skb.

      This not only fixes the bug but also removes all small skbs
      that might be a subset of the new one.
      
      Example:
      
      We receive segments 2001:3001 and 4001:5001.

      Then we receive 2001:8001: we should replace 2001:3001 with the
      big skb, but also remove 4001:5001 from the queue to save
      space.
      
      A packetdrill test demonstrating the bug:
      
      0.000 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3
      +0 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
      +0 bind(3, ..., ...) = 0
      +0 listen(3, 1) = 0
      
      +0 < S 0:0(0) win 32792 <mss 1000,sackOK,nop,nop,nop,wscale 7>
      +0 > S. 0:0(0) ack 1 <mss 1460,nop,nop,sackOK,nop,wscale 7>
      +0.100 < . 1:1(0) ack 1 win 1024
      +0 accept(3, ..., ...) = 4
      
      +0.01 < . 1001:2001(1000) ack 1 win 1024
      +0    > . 1:1(0) ack 1 <nop,nop, sack 1001:2001>
      
      +0.01 < . 1001:3001(2000) ack 1 win 1024
      +0    > . 1:1(0) ack 1 <nop,nop, sack 1001:2001 1001:3001>
      
      Fixes: 9f5afeae ("tcp: use an RB tree for ooo receive queue")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Reported-by: Yuchung Cheng <ycheng@google.com>
      Cc: Yaogong Wang <wygivan@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      76f0dcbb
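      A linked-list model of the repair (illustrative; the kernel's
      ooo queue is an RB tree, see the commit below): after a
      replacement, run the same right-merge pass as after an insert,
      which frees subsumed segments and refreshes the cached tail
      pointer, the forgotten ooo_last_skb update.

        #include <stdint.h>
        #include <stdlib.h>

        struct seg { uint32_t start, end; struct seg *next; };

        /* Free every following segment fully covered by 's', then
         * update the tail cache if 's' became the last element. */
        static void merge_right(struct seg *s, struct seg **last)
        {
            while (s->next && s->next->end <= s->end) {
                struct seg *victim = s->next;

                s->next = victim->next;
                free(victim);
            }
            if (!s->next)
                *last = s;
        }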
  14. 11 Sep 2016 (1 commit)
  15. 09 Sep 2016 (1 commit)
    • tcp: use an RB tree for ooo receive queue · 9f5afeae
      Authored by Yaogong Wang
      Over the years, the TCP BDP has increased by several orders of
      magnitude, and some people are considering reaching the 2 GB
      limit.

      Even with the current window scale limit of 14, ~1 GB maps to
      ~740,000 MSS-sized segments.
      
      In the presence of packet losses (or reordering), TCP stores
      incoming packets in an out-of-order queue, and the number of
      skbs sitting there waiting for the missing packets to be
      received can be in the 10^5 range.

      Most packets are appended to the tail of this queue, and when
      packets can finally be transferred to the receive queue, we
      scan the queue from its head.

      However, in the presence of heavy losses, we might have to find
      an arbitrary point in this queue, involving a linear scan for
      every incoming packet and thrashing the CPU caches.

      This patch converts it to an RB tree, to get bounded latencies.
      
      Yaogong wrote a preliminary patch about 2 years ago.
      Eric did the rebase, added the ooo_last_skb cache, and did the
      polishing and tests.

      Tested with the network dropping between 1 and 10% of packets,
      with good results (about a 30% throughput increase in stress
      tests).

      The next step would be to also use an RB tree for the write
      queue on the sender side ;)
      Signed-off-by: Yaogong Wang <wygivan@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Yuchung Cheng <ycheng@google.com>
      Cc: Neal Cardwell <ncardwell@google.com>
      Cc: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Acked-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      9f5afeae
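      A simplified sketch of an RB-tree insert keyed by sequence
      number, in kernel style (the real tcp_data_queue_ofo() also
      handles overlap, coalescing, and the ooo_last_skb cache):

        static void ofo_insert(struct rb_root *root,
                               struct sk_buff *skb)
        {
            struct rb_node **p = &root->rb_node, *parent = NULL;
            u32 seq = TCP_SKB_CB(skb)->seq;

            while (*p) {
                struct sk_buff *cur;

                parent = *p;
                cur = rb_entry(parent, struct sk_buff, rbnode);
                if (before(seq, TCP_SKB_CB(cur)->seq))
                    p = &parent->rb_left;
                else
                    p = &parent->rb_right;
            }
            rb_link_node(&skb->rbnode, parent, p);
            rb_insert_color(&skb->rbnode, root);
        }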
  16. 19 Aug 2016 (1 commit)
    • tcp: refine tcp_prune_ofo_queue() to not drop all packets · 36a6503f
      Authored by Eric Dumazet
      Over the years, the TCP BDP has increased a lot and is now
      typically on the order of ~10 MB with the help of clever
      congestion control modules.

      In the presence of packet losses, TCP stores incoming packets
      in an out-of-order queue, and the number of skbs sitting there
      waiting for the missing packets to be received can match the
      BDP (~10 MB).

      In some cases, TCP needs to make room for incoming skbs, and
      the current strategy can simply remove all skbs in the
      out-of-order queue as a last resort, incurring a huge penalty
      for both receiver and sender.

      Unfortunately these 'last resort events' are quite frequent,
      forcing the sender to send all the packets again, stalling the
      flow and wasting a lot of resources.

      This patch cleans only part of the out-of-order queue in order
      to meet the memory constraints.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Neal Cardwell <ncardwell@google.com>
      Cc: Yuchung Cheng <ycheng@google.com>
      Cc: Soheil Hassas Yeganeh <soheil@google.com>
      Cc: C. Stephen Gun <csg@google.com>
      Cc: Van Jacobson <vanj@google.com>
      Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
      Acked-by: Yuchung Cheng <ycheng@google.com>
      Acked-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      36a6503f
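      A minimal model of the new strategy (illustrative names; the
      real code walks the socket's out-of-order queue): free from the
      tail, highest sequences first, and stop as soon as memory use
      is back under the limit, instead of flushing the whole queue.

        #include <stddef.h>
        #include <stdlib.h>

        struct seg { struct seg *prev; size_t bytes; };

        static void prune_partial(struct seg **tail, size_t *used,
                                  size_t limit)
        {
            while (*tail && *used > limit) {
                struct seg *victim = *tail;

                *tail = victim->prev;  /* drop newest data first */
                *used -= victim->bytes;
                free(victim);
            }
        }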
  17. 16 Jul 2016 (1 commit)
    • tcp: enable per-socket rate limiting of all 'challenge acks' · 083ae308
      Authored by Jason Baron
      The per-socket rate limit for 'challenge acks' was introduced in the
      context of limiting ack loops:
      
      commit f2b2c582 ("tcp: mitigate ACK loops for connections as tcp_sock")
      
      And I think it can be extended to rate limit all 'challenge acks' on a
      per-socket basis.
      
      Since we have the global tcp_challenge_ack_limit, this patch
      allows tcp_challenge_ack_limit to be set to a large value,
      effectively relying on the per-socket limit, or to a lower
      value while still preventing a single connection from consuming
      the entire challenge-ACK quota.

      It further moves in the direction of eliminating the global
      limit at some point, as Eric Dumazet has suggested. This is a
      follow-up to:
      Subject: tcp: make challenge acks less predictable
      
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Neal Cardwell <ncardwell@google.com>
      Cc: Yuchung Cheng <ycheng@google.com>
      Cc: Yue Cao <ycao009@ucr.edu>
      Signed-off-by: Jason Baron <jbaron@akamai.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      083ae308
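      A user-space model of per-socket rate limiting (illustrative
      names, modeled on the ack-loop mitigation this builds on): a
      challenge ACK is allowed only if at least min_interval jiffies
      have elapsed since this socket last sent one.

        #include <stdbool.h>

        struct sock_rl {
            unsigned long last_challenge;  /* jiffies of last send */
        };

        static bool challenge_ack_allowed(struct sock_rl *s,
                                          unsigned long now,
                                          unsigned long min_interval)
        {
            if (now - s->last_challenge < min_interval)
                return false;              /* per-socket quota hit */
            s->last_challenge = now;
            return true;
        }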
  18. 12 Jul 2016 (1 commit)
  19. 28 Jun 2016 (1 commit)
  20. 11 Jun 2016 (1 commit)
  21. 08 Jun 2016 (1 commit)
    • tcp: accept RST if SEQ matches right edge of right-most SACK block · e00431bc
      Authored by Pau Espin Pedrol
      RFC 5961 advises accepting only RST packets whose sequence
      number matches the next expected sequence number, rather than
      any number within the receive window, in order to avoid
      spoofing attacks.
      
      However, this is not optimal when SACK is in use at the time
      the RST is sent. I recently ran into a scenario in which packet
      losses were high while uploading data to a server, and
      userspace would frequently terminate connections by sending an
      RST. In this case, the ACK sent on the receiver side (rcv_nxt)
      is frozen waiting for a lost packet's retransmission, and SACK
      blocks are used to let the client continue uploading data. At
      some point the client sends the RST (at snd_nxt), which matches
      the next expected sequence number of the right-most SACK block
      on the receiver side, which keeps moving forward as data is
      received.
      
      In this scenario, as RFC 5961 defines, the RST SEQ doesn't
      match the frozen main ACK on the receiver side, so the RST is
      dropped and a challenge ACK is sent, which is usually lost due
      to network conditions. The main consequence is that the
      connection stays alive for a while even though it would have
      made sense to accept the RST. This can get really bad if lots
      of such connections are created within a few seconds, easily
      exhausting the server's resources.
      
      For security reasons, not all SACK blocks are checked (a large
      number of SACK blocks would mean a large number of acceptable
      SEQ numbers). Furthermore, it wouldn't make sense to check for
      an RST in blocks other than the right-most one, because the
      sender is not expected to send new data after the RST. For
      simplicity, only the up to 4 most recently updated SACK blocks
      (the selective_acks[4] field) are searched for the right-most
      block, as those usually have the highest probability of
      containing it.
      
      This patch was tested on a 3.18 kernel and shown to improve the
      situation in the scenario described above.
      Signed-off-by: Pau Espin Pedrol <pau.espin@tessares.net>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Acked-by: Neal Cardwell <ncardwell@google.com>
      Tested-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e00431bc
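      A user-space sketch of the added acceptance test (field names
      mirror the description above): find the highest end_seq among
      the up-to-4 cached SACK blocks and accept the RST only if its
      SEQ matches that right edge exactly.

        #include <stdbool.h>
        #include <stdint.h>

        struct sack_block { uint32_t start_seq, end_seq; };

        static bool rst_matches_sack_edge(uint32_t rst_seq,
                                          const struct sack_block *sb,
                                          int nsacks /* <= 4 */)
        {
            uint32_t right_edge = 0;
            bool found = false;
            int i;

            for (i = 0; i < nsacks; i++) {
                /* wrap-safe "is after" comparison */
                if (!found ||
                    (int32_t)(sb[i].end_seq - right_edge) > 0) {
                    right_edge = sb[i].end_seq;
                    found = true;
                }
            }
            return found && rst_seq == right_edge;
        }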
  22. 12 May 2016 (1 commit)
  23. 05 May 2016 (1 commit)
  24. 03 May 2016 (2 commits)
  25. 29 Apr 2016 (2 commits)
    • tcp: Handle eor bit when coalescing skb · a643b5d4
      Authored by Martin KaFai Lau
      This patch:
      1. Prevents next_skb from coalescing into prev_skb if
         TCP_SKB_CB(prev_skb)->eor is set
      2. Updates TCP_SKB_CB(prev_skb)->eor if coalescing is
         allowed
      
      Packetdrill script for testing:
      ~~~~~~
      +0 `sysctl -q -w net.ipv4.tcp_min_tso_segs=10`
      +0 `sysctl -q -w net.ipv4.tcp_no_metrics_save=1`
      +0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3
      +0 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
      +0 bind(3, ..., ...) = 0
      +0 listen(3, 1) = 0
      
      0.100 < S 0:0(0) win 32792 <mss 1460,sackOK,nop,nop,nop,wscale 7>
      0.100 > S. 0:0(0) ack 1 <mss 1460,nop,nop,sackOK,nop,wscale 7>
      0.200 < . 1:1(0) ack 1 win 257
      0.200 accept(3, ..., ...) = 4
      +0 setsockopt(4, SOL_TCP, TCP_NODELAY, [1], 4) = 0
      
      0.200 sendto(4, ..., 730, MSG_EOR, ..., ...) = 730
      0.200 sendto(4, ..., 730, MSG_EOR, ..., ...) = 730
      0.200 write(4, ..., 11680) = 11680
      
      0.200 > P. 1:731(730) ack 1
      0.200 > P. 731:1461(730) ack 1
      0.200 > . 1461:8761(7300) ack 1
      0.200 > P. 8761:13141(4380) ack 1
      
      0.300 < . 1:1(0) ack 1 win 257 <sack 1461:13141,nop,nop>
      0.300 > P. 1:731(730) ack 1
      0.300 > P. 731:1461(730) ack 1
      0.400 < . 1:1(0) ack 13141 win 257
      
      0.400 close(4) = 0
      0.400 > F. 13141:13141(0) ack 1
      0.500 < F. 1:1(0) ack 13142 win 257
      0.500 > . 13142:13142(0) ack 2
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: Neal Cardwell <ncardwell@google.com>
      Cc: Soheil Hassas Yeganeh <soheil@google.com>
      Cc: Willem de Bruijn <willemb@google.com>
      Cc: Yuchung Cheng <ycheng@google.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a643b5d4
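      A tiny C model of the two rules above (illustrative types, not
      the kernel's coalescing code): never merge across an EOR mark,
      and carry the mark forward when a merge does happen.

        #include <stdbool.h>

        struct skb_model { bool eor; int len; };

        static bool try_coalesce(struct skb_model *prev,
                                 struct skb_model *next)
        {
            if (prev->eor)
                return false;       /* rule 1: EOR blocks merging */
            prev->len += next->len;
            prev->eor = next->eor;  /* rule 2: inherit the marker */
            return true;
        }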
    • tcp: remove SKBTX_ACK_TSTAMP since it is redundant · 0a2cf20c
      Authored by Soheil Hassas Yeganeh
      The SKBTX_ACK_TSTAMP flag is set in skb_shinfo->tx_flags when
      the timestamp of the TCP acknowledgement should be reported on
      the error queue. Since accessing skb_shinfo is likely to incur
      a cache-line miss at the time of receiving the ACK, the
      txstamp_ack bit was added in tcp_skb_cb, and is set iff the
      SKBTX_ACK_TSTAMP flag is set for an skb. This makes the
      SKBTX_ACK_TSTAMP flag redundant.

      Remove SKBTX_ACK_TSTAMP and use the txstamp_ack bit everywhere
      instead.
      
      Note that this frees one bit in shinfo->tx_flags.
      Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Suggested-by: Willem de Bruijn <willemb@google.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0a2cf20c
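      A sketch of the hot-path difference (illustrative struct, not
      the real tcp_skb_cb layout): the ACK path reads a bit from the
      already-hot TCP control block instead of dereferencing
      skb_shinfo(), avoiding a likely cache-line miss.

        struct tcp_cb_model {
            unsigned char txstamp_ack : 1; /* mirrors the old
                                            * shinfo tx_flags bit */
        };

        static int wants_ack_tstamp(const struct tcp_cb_model *cb)
        {
            return cb->txstamp_ack;        /* no skb_shinfo() read */
        }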
  26. 28 Apr 2016 (2 commits)